An Elemental Flow is a self-contained, autonomous unit designed to perform specific, unique actions. These flows serve as the foundational building blocks within the Mira Flows ecosystem, enabling precise and reliable AI-powered operations.

Elemental Flow Attributes

Version

| Component | Description | Required | Example |
|---|---|---|---|
| version | Flow specification version using semantic versioning | Yes | "0.0.1" |
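
In flow.yaml this is a single top-level field, for example:

version: "0.0.1"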

Metadata

| Component | Description | Required | Example |
|---|---|---|---|
| name | Unique identifier for the flow | Yes | "your-flow-name" |
| description | Explanation of the flow's purpose | Yes | "A brief description of your flow" |
| author | Creator's username | Yes | "your-username" |
| tags | Keywords for categorization | No | [tag1, tag2, tag3] |
| private | Access control setting | Yes | false |
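
Together, these attributes form the metadata block of flow.yaml:

metadata:
  name: "your-flow-name"
  description: "A brief description of your flow"
  author: "your-username"
  tags: [tag1, tag2, tag3]
  private: false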

Input Configuration

Supported Input Types:

| Type | Description |
|---|---|
| string | Text-based input values |

Input Structure:

| Component | Description | Required | Example |
|---|---|---|---|
| inputs | Map of input parameters | Yes | Collection of input definitions |
| type | Data type of the input (currently only string) | Yes | "string" |
| description | Purpose of the input | Yes | "Description of input1" |
| required | Whether the input is mandatory | Yes | true or false |
| example | Sample input value | No | "Example value for input1" |
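
For example, a single string input named input1 is declared as:

inputs:
  input1:
    type: string
    description: "Description of input1"
    required: true
    example: "Example value for input1"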

Model Configuration

| Component | Description | Required | Example |
|---|---|---|---|
| provider | AI service provider | Yes | "provider-name" |
| name | Specific model identifier | Yes | "model-name" |
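
In flow.yaml this becomes:

model:
  provider: "provider-name"
  name: "model-name"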

Dataset Configuration (Optional)

| Component | Description | Required | Example |
|---|---|---|---|
| source | Reference to dataset | No | "author_name/dataset_name" |

Prompt and Readme Configuration

| Component | Description | Required | Example |
|---|---|---|---|
| prompt | Instructions for model behavior | Yes | "Generate a tweet on the given topic:" |
| readme | Usage documentation | Yes | Markdown-formatted guide |
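
Both fields take multi-line YAML strings, and the prompt can reference any declared input with {input_name} placeholders. For the tweet example above, with an input named topic (an illustrative name, not a required one):

prompt: |
  Generate a tweet on the given topic: {topic}

readme: |
  Markdown-formatted usage guide for the flow.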

Basic Flow Structure

The YAML structure for a basic Mira Flow combines the blocks above; it follows the same layout as the RAG example later on this page, without the dataset section:
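
version: "0.0.1"

metadata:
  name: "your-flow-name"
  description: "A brief description of your flow"
  author: "your-username"
  tags: [tag1, tag2, tag3]
  private: false

inputs:
  input1:
    type: string
    description: "Description of input1"
    required: true
    example: "Example value for input1"

model:
  provider: "provider-name"
  name: "model-name"

prompt: |
  Your flow's primary instruction or role...
  You can use {input1} placeholders to reference inputs.

readme: |
  Your flow's readme...
  You can use raw text or markdown here.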

RAG Integration 🔍

To integrate RAG capabilities, you’ll need to:

  1. Create a dataset

  2. Add data sources to your dataset

  3. Link the dataset to your flow

The following file formats are supported for datasets:

| File Type | Processing Method |
|---|---|
| PDF (.pdf) | Text extraction from document |
| Markdown (.md) | Text extraction from document |
| URL | Web content scraping |
| CSV (.csv) | URL extraction and content scraping |
| Text (.txt) | Direct text extraction |
| Zip (.zip) | Processing of contained supported files |

Creating a Dataset 🗃️

from mira_sdk import MiraClient

client = MiraClient(config={"API_KEY": "YOUR_API_KEY"})

# Create dataset
client.dataset.create("author/dataset_name", "Optional description")

Adding Data Sources

# Add URL to your dataset
client.dataset.add_source("author/dataset_name", url="example.com")

# Add file to your dataset
client.dataset.add_source("author/dataset_name", file_path="path/to/my/file.csv")

Linking Dataset with Flow

Add the following configuration to your flow.yaml file:

dataset:
  source: "author/dataset_name"

Flow Structure with RAG

The official YAML structure for a flow with RAG capabilities:

version: "your.version.here"

metadata:
  name: "your-flow-name"
  description: "A brief description of your flow"
  author: "your-username"
  tags: [tag1, tag2, tag3]
  private: false

inputs:
  input1:
    type: string
    description: "Description of input1"
    required: true
    example: "Example value for input1"

model:
  provider: "provider-name"
  name: "model-name"

dataset:
  source: "author_name/dataset_name"

prompt: |
  Your flow's primary instruction or role...
  You can use {input1} placeholders to reference inputs.

readme: |
  Your flow's readme...
  You can use raw text or markdown here.
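
Once the flow.yaml above is complete and the dataset is linked, the flow can be loaded and exercised through the SDK. The sketch below assumes the mira_sdk Flow class and the client.flow.test helper; check the SDK reference for the exact names and signatures in your version:

from mira_sdk import MiraClient, Flow

client = MiraClient(config={"API_KEY": "YOUR_API_KEY"})

# Load the flow definition described above
flow = Flow(source="flow.yaml")

# Run the flow with a sample value for input1 (test helper assumed, see note above)
response = client.flow.test(flow, {"input1": "Example value for input1"})
print(response)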