FastAPI microservices orchestrated with Docker Compose across foundation, processing, and orchestration tiers.
Active services in the `mirora-ai-microservices` platform.
- **Core AI Service** (`localhost:8000`): LLM gateway, prompt registry, guardrails, and shared request metrics.
- **Document Processing** (`localhost:8001`): File ingestion, extraction, and semantic chunking for downstream AI workflows.
- **Vector Store Service** (`localhost:8002`): ChromaDB-backed vector storage, collection management, and similarity search.
- **RAG Service** (`localhost:8003`): Retrieve-augment-generate pipeline with citation support and caching.
- **Evaluation Framework** (`localhost:8004`): Evaluation suites, datasets, and regression-focused service validation workflows.
- **Knowledge Generation** (`localhost:8005`): Topic lifecycle management, freshness controls, and semantic search for reusable knowledge.
- **Synthetic Data Service** (`localhost:8010`): Domain-specific synthetic entities, timelines, events, and document generation.
- **Information Service** (`localhost:8020`): Schema-driven extraction, gap analysis, and question generation.
- **Image Generation** (`localhost:8040`): Dedicated text-to-image generation API in the main microservices stack.
- **Embedding Service** (`localhost:8006`, standalone): Standalone embedding API using sentence-transformers and OpenAI-compatible routes.

Platform orchestration is profile-driven from the root Docker Compose file.
- Infrastructure-only startup for local dependencies (Redis + PostgreSQL).
- Infrastructure plus core platform services for API and integration work.
- Infrastructure, services, gateway, and monitoring stack for end-to-end validation.
- Infrastructure plus Prometheus/Grafana observability components.
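Assuming the root compose file names its profiles along the lines of `infra`, `services`, `full`, and `monitoring` (the actual profile names are not shown here; check `docker-compose.yml`), the profile-driven startup described above would look like:

```shell
# Profile names below are illustrative -- see the root docker-compose.yml for the real ones.

# Infrastructure only: Redis + PostgreSQL for local dependency work
docker compose --profile infra up -d

# Infrastructure plus core platform services for API and integration work
docker compose --profile services up -d

# Everything: infrastructure, services, gateway, and monitoring stack
docker compose --profile full up -d

# Infrastructure plus Prometheus/Grafana observability components
docker compose --profile monitoring up -d
```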
- Dockerized FastAPI services orchestrated with Compose profiles.
- PostgreSQL 15 with SQLAlchemy async persistence where required.
- Redis 7 for caching, queueing, and service-specific DB index partitioning.
- Traefik routing with Prometheus and Grafana for platform observability.
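One way to read "service-specific DB index partitioning": each service gets its own logical Redis database on the shared instance. A minimal sketch, with illustrative index assignments that are not taken from the platform's actual configuration:

```python
# Illustrative Redis DB-index assignments -- the real mapping lives in each
# service's configuration, not here.
REDIS_DB_INDEX = {
    "core-ai": 0,
    "document-processing": 1,
    "rag": 2,
    "embedding": 3,
}

def redis_url(service: str, host: str = "localhost", port: int = 6379) -> str:
    """Build a redis:// URL that isolates each service in its own DB index."""
    try:
        db = REDIS_DB_INDEX[service]
    except KeyError:
        raise ValueError(f"no Redis DB index registered for {service!r}")
    return f"redis://{host}:{port}/{db}"
```

With this scheme, `redis_url("rag")` yields `redis://localhost:6379/2`, so keys from different services never collide even on one shared Redis 7 instance.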
| Service | Endpoint | Status | Dependencies |
|---|---|---|---|
| **Main Compose Services** | | | |
| Core AI Service | localhost:8000 | ✅ Active | PostgreSQL, Redis |
| Document Processing | localhost:8001 | ✅ Active | Core AI, Redis |
| Vector Store Service | localhost:8002 | ✅ Active | ChromaDB |
| RAG Service | localhost:8003 | ✅ Active | Core AI, Embedding Service, Vector Store, Redis |
| Evaluation Framework | localhost:8004 | ✅ Active | Core AI, Document Processing, Information Service, Synthetic Data, PostgreSQL, Redis |
| Knowledge Generation | localhost:8005 | ✅ Active | Core AI, Document Processing, Information Service, Vector Store, Redis, PostgreSQL |
| Embedding Service | localhost:8006 (standalone) | ✅ Active | Redis (optional) |
| Synthetic Data Service | localhost:8010 | ✅ Active | Core AI, Redis |
| Information Service | localhost:8020 | ✅ Active | Core AI, Redis |
| Image Generation | localhost:8040 | ✅ Active | Service-specific model providers |
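The port map above can drive a quick liveness sweep across the stack. A sketch, assuming each service answers on a `/health` route (the platform advertises health endpoints, but the exact path here is an assumption):

```python
import urllib.request

# Ports as listed in the service table above.
SERVICES = {
    "core-ai": 8000,
    "document-processing": 8001,
    "vector-store": 8002,
    "rag": 8003,
    "evaluation": 8004,
    "knowledge-generation": 8005,
    "embedding": 8006,
    "synthetic-data": 8010,
    "information": 8020,
    "image-generation": 8040,
}

def health_url(port: int) -> str:
    # "/health" is an assumed path; adjust to each service's actual route.
    return f"http://localhost:{port}/health"

def is_healthy(port: int, timeout: float = 2.0) -> bool:
    """Return True if the service answers 200 on its health endpoint."""
    try:
        with urllib.request.urlopen(health_url(port), timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False
```

Running `{name: is_healthy(port) for name, port in SERVICES.items()}` gives a one-shot status map matching the table.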
`Upload → Document Processing → Text Extraction → Chunking → Embeddings → Vector Store`
*End-to-end document processing with vector storage for retrieval.*
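The chunking stage of this pipeline can be approximated with an overlapping-window splitter. This is a stand-in sketch only; the platform's actual semantic chunker is not shown here:

```python
def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into fixed-size overlapping windows -- a simple stand-in
    for the Document Processing service's semantic chunking step."""
    if size <= overlap:
        raise ValueError("chunk size must exceed overlap")
    chunks = []
    step = size - overlap  # each window starts `step` chars after the last
    for start in range(0, len(text), step):
        chunk = text[start:start + size]
        if chunk:
            chunks.append(chunk)
    return chunks
```

Each chunk would then be embedded and written to the vector store, with the overlap preserving context across chunk boundaries.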
`Resume Upload → Document Processing → Information Extraction → Gap Analysis → Role Matching`
*Powers the Career Analysis tool.*
`Request → Core AI Gateway → Provider Selection → LLM Provider → Metrics + Response`
*Unified LLM access with caching and metrics.*
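Provider selection in a gateway like this often reduces to an ordered fallback list. A minimal sketch; the provider names and the selection policy are assumptions, not the Core AI Service's actual routing logic:

```python
from typing import Callable

# Hypothetical provider registry: name -> callable that completes a prompt.
Provider = Callable[[str], str]

def select_and_call(prompt: str, providers: dict[str, Provider],
                    preference: list[str]) -> tuple[str, str]:
    """Try providers in preference order, falling through on failure.

    Returns (provider_name, response) from the first provider that succeeds.
    """
    errors: dict[str, Exception] = {}
    for name in preference:
        provider = providers.get(name)
        if provider is None:
            continue
        try:
            return name, provider(prompt)
        except Exception as exc:  # a real gateway would narrow this
            errors[name] = exc
    raise RuntimeError(f"all providers failed: {errors}")
```

A production gateway would layer caching and per-provider metrics around this loop, which is where the "Metrics + Response" step in the flow above comes from.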
`Query → Embedding → Vector Store Search → Context Retrieval → Core AI (LLM) → Response`
*Retrieval-augmented generation grounded in your documents.*
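This flow can be expressed as a simple function composition. The sketch below stubs each stage; in the real pipeline, `embed`, `search`, and `generate` would be HTTP calls to the Embedding, Vector Store, and Core AI services respectively:

```python
def rag_answer(query, embed, search, generate, top_k=4):
    """Query -> Embedding -> Vector search -> Context -> LLM -> Response.

    `embed`, `search`, and `generate` are stand-ins for service calls.
    """
    vector = embed(query)                  # embedding service
    hits = search(vector, top_k)           # e.g. [(chunk_text, score), ...]
    context = "\n\n".join(text for text, _score in hits)
    prompt = f"Context:\n{context}\n\nQuestion: {query}"
    return generate(prompt)                # Core AI gateway
```

Citation support would additionally thread the retrieved chunk identifiers through to the response, but that bookkeeping is omitted here.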
`Topic Request → Knowledge Generation → Information + Vector Enrichment → Status Tracking → Ready Knowledge Base`
*Managed topic knowledge with freshness and lifecycle state transitions.*
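"Lifecycle state transitions" suggests a small per-topic state machine. The states and transitions below are illustrative guesses at how a freshness-aware lifecycle could look, not the Knowledge Generation service's actual model:

```python
from enum import Enum

class TopicState(Enum):
    REQUESTED = "requested"
    GENERATING = "generating"
    READY = "ready"
    STALE = "stale"          # freshness window expired
    REFRESHING = "refreshing"

# Assumed legal transitions for a topic knowledge base.
TRANSITIONS = {
    TopicState.REQUESTED: {TopicState.GENERATING},
    TopicState.GENERATING: {TopicState.READY},
    TopicState.READY: {TopicState.STALE},
    TopicState.STALE: {TopicState.REFRESHING},
    TopicState.REFRESHING: {TopicState.READY},
}

def advance(current: TopicState, target: TopicState) -> TopicState:
    """Move a topic to `target`, rejecting transitions not in the table."""
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition {current.value} -> {target.value}")
    return target
```

Guarding transitions this way is what makes "Status Tracking" in the flow above trustworthy: a topic can only be reported `ready` after it has actually passed through generation or refresh.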
- **Framework:** FastAPI with async support, Pydantic models, OpenAPI docs, and structured logging.
- **Deployment:** Docker multi-stage builds, health checks, and Render Blueprint deployment.
- **Security:** API key auth (`X-API-Key`), CORS configuration, and TLS everywhere.
- **Observability:** Structured JSON logging, health endpoints, and Prometheus-compatible metrics.