Text Generation
Use chat_completions_create() to generate text responses from AI models.
Basic Usage
from mira_network import MiraClient

async def generate_text():
    async with MiraClient(api_key="your-api-key") as client:
        response = await client.chat_completions_create(
            model="your-chosen-model",
            messages=[
                {"role": "user", "content": "What is the capital of France?"}
            ]
        )
        print(response["choices"][0]["message"]["content"])
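To run the coroutine from a regular script, hand it to asyncio. A minimal driver, assuming the snippet above is saved as a standalone module:

import asyncio

if __name__ == "__main__":
    # Drive the async example from a synchronous entry point.
    asyncio.run(generate_text())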
Streaming Responses
For real-time output, pass stream=True and iterate over the returned chunks:
async def stream_response():
    async with MiraClient(api_key="your-api-key") as client:
        stream = await client.chat_completions_create(
            model="your-chosen-model",
            messages=[{"role": "user", "content": "Write a story"}],
            stream=True
        )
        async for chunk in stream:
            print(chunk["choices"][0]["delta"]["content"], end="")
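Some chunks may carry an empty or missing delta, and you often want the full reply after streaming finishes. The sketch below guards the lookup and accumulates the pieces; it assumes the same chunk structure shown above, with the delta being a plain dict.

async def stream_and_collect():
    async with MiraClient(api_key="your-api-key") as client:
        stream = await client.chat_completions_create(
            model="your-chosen-model",
            messages=[{"role": "user", "content": "Write a story"}],
            stream=True
        )
        parts = []
        async for chunk in stream:
            # Guard against chunks that carry no text content.
            delta = chunk["choices"][0]["delta"].get("content") or ""
            print(delta, end="", flush=True)
            parts.append(delta)
        # Return the complete generated text.
        return "".join(parts)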
Multi-turn Conversations
Handle multi-turn dialogues by passing the full conversation history on each call:
messages = [
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": "Hi! Can you help me?"},
    {"role": "assistant", "content": "Of course! What can I help you with?"},
    {"role": "user", "content": "Tell me about Paris"}
]

async with MiraClient(api_key="your-api-key") as client:
    response = await client.chat_completions_create(
        model="your-chosen-model",
        messages=messages
    )
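    # To continue the dialogue, append the assistant's reply so later turns
    # keep the full history (assumes the response message shape shown under
    # Basic Usage), then add the next user turn and call the API again.
    messages.append(response["choices"][0]["message"])
    messages.append({"role": "user", "content": "What is it famous for?"})
    follow_up = await client.chat_completions_create(
        model="your-chosen-model",
        messages=messages
    )
    print(follow_up["choices"][0]["message"]["content"])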
Error Handling
Implement robust error handling:
try:
    response = await client.chat_completions_create(
        model="your-chosen-model",
        messages=[{"role": "user", "content": "Hello"}]
    )
except ValueError as e:
    print(f"Validation error: {e}")
except Exception as e:
    print(f"An error occurred: {e}")
Custom Node Configuration
For specialized deployments, route a request through a specific node with the mira_node parameter:
response = await client.chat_completions_create(
    model="your-model",
    messages=[{"role": "user", "content": "Hello"}],
    mira_node={
        "base_url": "https://custom-node.com",
        "api_key": "node-api-key"
    }
)
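If the node endpoint and credentials differ between environments, they can be read from the environment instead of being hard-coded. A small sketch; the MIRA_NODE_URL and MIRA_NODE_API_KEY variable names are illustrative, not part of the SDK.

import os

# Hypothetical environment variables for node routing; adjust to your deployment.
mira_node = {
    "base_url": os.environ.get("MIRA_NODE_URL", "https://custom-node.com"),
    "api_key": os.environ.get("MIRA_NODE_API_KEY", ""),
}

response = await client.chat_completions_create(
    model="your-model",
    messages=[{"role": "user", "content": "Hello"}],
    mira_node=mira_node
)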
Message Structure
Core components of a message:
Message:
    role: str     # "system", "user", or "assistant"
    content: str  # The message content
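The examples above pass messages as plain dicts. If you want editor support and static type checking, a TypedDict mirroring this structure works well; this is a local convenience for your own code, not a class exported by the SDK.

from typing import Literal, TypedDict

class Message(TypedDict):
    # Mirrors the message structure described above.
    role: Literal["system", "user", "assistant"]
    content: str

greeting: Message = {"role": "user", "content": "Hello"}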