The Mira Network

Welcome to Mira - decentralised infrastructure to universalise access to advanced AI

Introduction to Mira

Mira provides developers with all the infrastructure and platform primitives needed to build AI-native applications in a reliable, convenient, and cost-effective manner. By abstracting away the complexity of developing and productionizing AI-native applications, Mira enables developers to focus on what they do best: building differentiated content and services for their customers.

Why Mira?

Building AI-native applications presents several significant challenges for developers:

1. Convenience:

Integrating AI capabilities into existing systems and workflows often requires a complete rearchitecting of major components. Developers need to redesign their architectures, data pipelines, and deployment processes to accommodate AI models and their unique requirements.

2. Reliability:

AI systems are inherently probabilistic, making it difficult to productionize them due to the lack of determinism. Ensuring consistent and reliable performance across different environments and edge cases can be a daunting task.

3. Cost:

Running AI inference at scale can be cost-prohibitive, especially for resource-intensive models and applications with high traffic volumes. The computational costs associated with AI are a significant barrier to adoption.

Mira addresses these pain points in the following ways:


1. Convenience:

Mira introduces Flows and the Flow Marketplace, which provide pre-built, modular components for common AI tasks. Developers can easily integrate these Flows into their applications, eliminating the need for extensive rearchitecting.
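
To make the integration model concrete, here is a minimal sketch of what invoking a pre-built Flow over HTTP might look like. The base URL, flow identifier, endpoint path, and payload fields below are illustrative assumptions rather than Mira's actual API; consult the official documentation for the real names.

```python
import requests

# Hypothetical values for illustration only -- the real base URL, flow id,
# and request/response schema come from Mira's documentation.
MIRA_API_BASE = "https://api.example-mira.dev/v1"   # assumed base URL
API_KEY = "YOUR_API_KEY"                             # assumed auth scheme
FLOW_ID = "author/summarize-text"                    # assumed flow identifier

def run_flow(flow_id: str, inputs: dict) -> dict:
    """Execute a pre-built Flow with the given inputs and return its output."""
    response = requests.post(
        f"{MIRA_API_BASE}/flows/{flow_id}/run",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"inputs": inputs},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    # The application supplies only task-specific inputs; the Flow
    # encapsulates prompting, model selection, and post-processing.
    result = run_flow(FLOW_ID, {"text": "Long article text to summarize..."})
    print(result)
```

The point of the pattern is that the application never touches prompts, model choice, or orchestration directly, which is what removes the need for extensive rearchitecting.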

2. Reliability:

To enhance reliability, Mira incorporates community-sourced and client-specific Retrieval-Augmented Generation (RAG) systems, along with graphs and semantic systems. These components help mitigate the probabilistic nature of AI by providing additional context and knowledge, leading to more consistent and reliable outputs.
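
The retrieval-augmented pattern itself is simple: relevant documents are fetched first and injected into the prompt, so the model answers from supplied knowledge rather than from its parametric memory alone. The toy example below is a generic illustration of that pattern, not Mira's implementation; keyword-overlap scoring stands in for a real embedding or vector-store lookup, and the model call is left out.

```python
# A toy retrieval-augmented generation step: retrieve supporting documents,
# then build a grounded prompt for the model.

KNOWLEDGE_BASE = [
    "Flows are pre-built, modular components for common AI tasks.",
    "The Flow Marketplace lets developers discover community-built Flows.",
    "Mira abstracts the compute layer used to run AI inference.",
]

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    query_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_grounded_prompt(query: str, documents: list[str]) -> str:
    """Inject retrieved context so the model answers from supplied knowledge."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, documents))
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

print(build_grounded_prompt("What are Flows?", KNOWLEDGE_BASE))
```

Grounding the prompt in retrieved context is what narrows the space of plausible answers, which is how RAG-style components make probabilistic models behave more consistently.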

3. Cost:

Mira abstracts the compute layer, enabling developers to leverage cost-effective and scalable infrastructure for running AI inference. This abstraction allows for seamless scaling and optimized resource utilization, reducing the overall computational costs associated with AI applications.

Mira embraces an open-source philosophy, empowering builders with choice and flexibility. Developers can access compute resources from multiple providers, choose from a variety of open-source foundation models, and leverage community-developed workflows. Every component of Mira's stack is community-driven, with multiple providers to choose from.

Website
Twitter
Discord
Whitepaper (coming soon!)