Mirora AI

[ ARCHITECTURE ]

Service Architecture

Microservices deployed on Render, backed by Supabase and Upstash.

Live Infrastructure

Core microservices currently deployed and serving production traffic.

Core AI Service

LLM Gateway with multi-provider routing. Prompt Registry. Guardrails Engine. Metrics collection.

mirora-core-ai.onrender.com
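
A minimal sketch of calling the gateway over HTTP. The /v1/chat path, payload shape, and model id are illustrative assumptions, not the service's published contract.

```python
# Hypothetical client call to the Core AI gateway. The route, payload
# fields, and model id are assumptions for illustration only.
import httpx

resp = httpx.post(
    "https://mirora-core-ai.onrender.com/v1/chat",
    headers={"X-API-Key": "YOUR_API_KEY"},
    json={
        "model": "llama-3.1-8b-instant",  # gateway routes this to a provider
        "messages": [{"role": "user", "content": "Summarize this resume."}],
    },
    timeout=30.0,
)
resp.raise_for_status()
print(resp.json())
```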

Information Service

Schema-driven extraction. Gap analysis engine. Question generation. LLM-powered processing.

mirora-information.onrender.com
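
A sketch of what schema-driven extraction looks like with Pydantic v2; the field names are invented for illustration, not the service's actual schema.

```python
# Illustrative extraction schema. Pydantic validates the LLM's structured
# output; gap analysis then compares the result against a target schema
# to decide which follow-up questions to generate.
from pydantic import BaseModel

class Experience(BaseModel):
    title: str
    company: str
    years: float

class CandidateProfile(BaseModel):
    name: str
    skills: list[str]
    experience: list[Experience]

raw = {"name": "Ada", "skills": ["python"], "experience": []}
profile = CandidateProfile.model_validate(raw)
```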

Document Processing

File ingestion (PDF, DOCX, images). Text extraction. Semantic chunking. Queue-based async jobs.

mirora-doc-processing.onrender.com
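
A minimal sketch of the queue-based job handoff, assuming Upstash Redis serves as the queue; the queue name, job shape, and connection string are placeholders.

```python
# Hypothetical job enqueue for async document processing. A worker would
# BRPOP from the same list, extract text, and chunk it.
import json
import uuid

import redis

r = redis.from_url("rediss://default:TOKEN@HOST:6379")  # TLS via rediss://

job = {"id": str(uuid.uuid4()), "path": "uploads/resume.pdf", "kind": "pdf"}
r.lpush("doc-processing:jobs", json.dumps(job))
```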

In Progress

Services in active development, available for local testing.

Synthetic Data Service

Object, Event, Timeline, Arc, and Document engines for corpus generation.

localhost:8010-8015
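
As a rough sketch of what an Event/Timeline engine produces, a toy generator; every field name here is invented for illustration.

```python
# Toy synthetic-event generator: random events sorted into a timeline.
import random
from datetime import date, timedelta

def synth_event(day0: date = date(2024, 1, 1)) -> dict:
    return {
        "kind": random.choice(["hire", "promotion", "project"]),
        "date": (day0 + timedelta(days=random.randrange(365))).isoformat(),
    }

timeline = sorted((synth_event() for _ in range(5)), key=lambda e: e["date"])
```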

Evaluation Framework

Ground-truth management. Automated eval runs. Regression detection.

localhost:8004
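
A toy sketch of the regression-detection idea: compare each metric against a stored baseline and flag drops beyond a threshold. Metric names and the threshold are illustrative.

```python
# Flag any metric that drops more than THRESHOLD versus the baseline.
baseline = {"extraction_f1": 0.86, "answer_relevance": 0.91}
current = {"extraction_f1": 0.84, "answer_relevance": 0.92}

THRESHOLD = 0.01
regressions = {
    name: (baseline[name], score)
    for name, score in current.items()
    if baseline[name] - score > THRESHOLD
}
if regressions:
    print("Regression:", regressions)  # extraction_f1 dropped 0.86 -> 0.84
```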

Vector Store Service

Unified interface to FAISS, Chroma, Weaviate. Collection management.

localhost:8009
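
A sketch of what a unified interface might look like; the method names are assumptions. Concrete adapters would wrap the FAISS, Chroma, or Weaviate clients behind the same protocol.

```python
# Structural typing: any backend satisfying the protocol is interchangeable.
from typing import Protocol

class VectorStore(Protocol):
    def upsert(self, collection: str, ids: list[str],
               vectors: list[list[float]]) -> None: ...
    def query(self, collection: str, vector: list[float],
              top_k: int = 5) -> list[str]: ...

def search_everywhere(stores: list[VectorStore], q: list[float]) -> list[str]:
    # The same call works against FAISS, Chroma, or Weaviate adapters.
    return [hit for s in stores for hit in s.query("docs", q)]
```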

Platform Stack

Compute

Render Web Services. Docker containers. Auto-deploy from GitHub.

Database

Supabase PostgreSQL. Transaction pooling. Managed backups.
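
A minimal connection sketch, assuming psycopg 3; host and credentials are placeholders. Port 6543 is Supabase's pooled port (5432 connects directly).

```python
# Connecting through Supabase's transaction pooler. Transaction pooling
# pairs best with short, autocommit-style transactions.
import psycopg

conn = psycopg.connect(
    "postgresql://postgres:PASSWORD@db.PROJECT.supabase.co:6543/postgres",
    autocommit=True,
)
with conn.cursor() as cur:
    cur.execute("select 1")
```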

Cache & Queue

Upstash Redis. TLS encryption. Serverless scaling.

LLM Provider

Groq API. Fast inference. Llama and Mixtral models.
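
For reference, a direct call with the Groq Python SDK; in production, traffic goes through the Core AI gateway instead. The model id shown is one of Groq's hosted Llama models.

```python
# Direct Groq chat completion (bypassing the gateway, for illustration).
from groq import Groq

client = Groq(api_key="YOUR_GROQ_API_KEY")
chat = client.chat.completions.create(
    model="llama-3.1-8b-instant",
    messages=[{"role": "user", "content": "Hello"}],
)
print(chat.choices[0].message.content)
```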

Deployment Status

Service                  Endpoint         Status    Dependencies

PRODUCTION
Core AI Service          api.mirora.app   ✅ Live   PostgreSQL, Redis, Groq
Information Service      api.mirora.app   ✅ Live   Core AI, Redis
Document Processing      api.mirora.app   ✅ Live   Core AI, Redis

DEVELOPMENT
Synthetic Data Service   localhost        🟡 Dev    Core AI
Evaluation Framework     localhost        🟡 Dev    PostgreSQL, Redis
Vector Store Service     localhost        🟡 Dev    ChromaDB

Processing Pipelines

Document Ingestion

Upload → Document Processing → Text Extraction → Chunking → Information Service

End-to-end document processing with structured output
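
A naive sketch of the chunking stage: split on blank lines and pack paragraphs into a character budget. Real semantic chunking would use sentence boundaries or embeddings; the budget here is arbitrary.

```python
# Pack paragraphs into ~500-character chunks (illustrative, not semantic).
def chunk_text(text: str, budget: int = 500) -> list[str]:
    chunks: list[str] = []
    current = ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) > budget:
            chunks.append(current.strip())
            current = ""
        current += para + "\n\n"
    if current.strip():
        chunks.append(current.strip())
    return chunks
```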

Career Analysis

Resume Upload → Document Processing → Information Extraction → Gap Analysis → Role Matching

Powers the Career Analysis tool
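
As a toy illustration of the gap-analysis step, assuming skills compare as plain sets (the real engine is schema-driven); skill names are invented.

```python
# Required skills for a target role minus skills extracted from the resume.
required = {"python", "sql", "docker", "kubernetes"}
extracted = {"python", "docker"}

gaps = sorted(required - extracted)
print(gaps)  # ['kubernetes', 'sql'] -> feeds question generation
```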

LLM Request

Request → Core AI Gateway → Provider Selection → Groq/OpenAI → Response Caching

Unified LLM access with caching and metrics
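
A sketch of the caching step, assuming Redis keys completions by a hash of the normalized request; the key format and TTL are illustrative.

```python
# Cache completions by request hash; fall through to the provider on miss.
import hashlib
import json

import redis

r = redis.from_url("rediss://default:TOKEN@HOST:6379")

def cached_completion(payload: dict, call_provider) -> str:
    key = "llm:" + hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()).hexdigest()
    if (hit := r.get(key)) is not None:
        return hit.decode()
    result = call_provider(payload)  # e.g. Groq or OpenAI
    r.set(key, result, ex=3600)      # 1 hour TTL (illustrative)
    return result
```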

Platform Standards

Framework

FastAPI with async support. Pydantic models. OpenAPI docs. Structured logging.
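
A minimal service skeleton in the stack's own conventions (FastAPI plus Pydantic); the route and model are illustrative, not a real endpoint.

```python
# FastAPI serves OpenAPI docs at /docs automatically; Pydantic models
# validate request bodies.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="example-service")

class ExtractRequest(BaseModel):
    text: str

@app.get("/health")
async def health() -> dict:
    return {"status": "ok"}

@app.post("/extract")
async def extract(req: ExtractRequest) -> dict:
    return {"chars": len(req.text)}
```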

Containerization

Docker multi-stage builds. Health checks. Render Blueprint deployment.

Security

API key auth (X-API-Key). CORS configuration. TLS everywhere.
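
A sketch of X-API-Key enforcement as a FastAPI dependency; the header name matches the text above, while the validation logic is a placeholder.

```python
# Reject requests whose X-API-Key header fails validation.
from fastapi import Depends, FastAPI, HTTPException, Security
from fastapi.security import APIKeyHeader

api_key_header = APIKeyHeader(name="X-API-Key")

def require_key(key: str = Security(api_key_header)) -> str:
    if key != "expected-key":  # real check: lookup against stored keys
        raise HTTPException(status_code=401, detail="Invalid API key")
    return key

app = FastAPI()

@app.get("/protected", dependencies=[Depends(require_key)])
async def protected() -> dict:
    return {"ok": True}
```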

Observability

Structured JSON logging. Health endpoints. Prometheus-compatible metrics.
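
A structured JSON logging sketch using only the standard library; a production service might use structlog or similar instead.

```python
# Emit one JSON object per log record so log aggregators can parse fields.
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logging.basicConfig(level=logging.INFO, handlers=[handler])
logging.getLogger("example-service").info("request served")
```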

Ready to discuss your architecture needs?

START A CONVERSATION