LLM Council
Project type: Multi-Model AI Experiment
Date: 2025
Location: Remote
A debate-club-style AI assistant where you don't trust a single model: you convene a council of OpenRouter LLMs, have them write independent answers, anonymously rank one another, and then appoint a chairman model to synthesize the final take. Streaming updates show each stage in real time, so you watch the deliberation, not just the verdict.
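The three stages above can be sketched as a minimal async pipeline. Everything here is illustrative, not the project's actual code: `call_model` stands in for a real OpenRouter chat-completion request, and the model names are placeholders.

```python
import asyncio

# Hypothetical roster; the real project reads a configurable model list.
COUNCIL = ["model-a", "model-b", "model-c"]
CHAIRMAN = "model-chair"

async def call_model(model: str, prompt: str) -> str:
    # Stand-in for an OpenRouter API call.
    return f"{model} answer to: {prompt}"

async def run_council(prompt: str) -> str:
    # Stage 1: collect independent answers from all council members in parallel.
    answers = await asyncio.gather(*(call_model(m, prompt) for m in COUNCIL))
    # Stage 2: each model critiques and ranks the peer answers, anonymized
    # behind neutral labels so no model knows who wrote what.
    anonymized = "\n".join(f"Response {i + 1}: {a}" for i, a in enumerate(answers))
    rankings = await asyncio.gather(
        *(call_model(m, f"Rank these answers:\n{anonymized}") for m in COUNCIL)
    )
    # Stage 3: the chairman synthesizes answers plus rankings into one verdict.
    material = anonymized + "\n" + "\n".join(rankings)
    return await call_model(CHAIRMAN, f"Synthesize a final answer from:\n{material}")
```

Each `asyncio.gather` fans the stage out across all models at once, which is what keeps a slow council member from serializing the whole run.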
What I Built
- FastAPI backend that orchestrates a three-stage council run: collect parallel responses from a configurable roster of OpenRouter models, ask each model to critique and rank its anonymized peers, then feed all of that to a designated chairman model for synthesis.
- Authenticated APIs for spinning up conversations, streaming each stage over Server-Sent Events, and persisting the full transcript plus metadata so you can revisit any deliberation.
- React + Vite frontend with login gating, a conversation sidebar, and a chat pane that renders markdown answers, stage-specific loaders, ranking visualizations, and the final council output as it streams in.
- Adaptive timeouts and parallel httpx calls keep long prompts and slow models from stalling the session, while structured JSON storage keeps every exchange queryable without a database.
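The adaptive-timeout idea can be sketched as a small helper that scales the per-request deadline with prompt length and clamps it to a ceiling. The constants and function name are illustrative assumptions, not the project's actual values:

```python
def adaptive_timeout(prompt: str, base: float = 30.0,
                     per_char: float = 0.01, cap: float = 120.0) -> float:
    """Grow the request timeout with prompt size, clamped to a hard cap.

    base: floor in seconds for even the shortest prompt.
    per_char: extra seconds granted per character of prompt.
    cap: upper bound so one huge prompt can't hold a session open forever.
    """
    return min(cap, base + per_char * len(prompt))
```

A value like this would typically be passed to `httpx.AsyncClient(timeout=...)` per request, so short prompts fail fast while long ones get room to stream.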
Why it's interesting
- Multi-agent deliberation pipeline: parallel generation, anonymized peer review, and chairman synthesis that mirrors human committee dynamics.
- Production-grade ergonomics: auth-gated endpoints, CORS allowlists, adaptive timeouts, and resilient streaming for long prompts and flaky models.
- Polished UX: real-time stage progress, markdown transcripts, and conversation management so users can browse prior debates and jump back in quickly.
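The real-time stage progress boils down to emitting named Server-Sent Events frames the frontend can switch on. A minimal formatter, with field names per the SSE wire format and the event name purely illustrative:

```python
import json

def sse_event(event: str, data: dict) -> str:
    """Format one SSE frame: a named event followed by a JSON payload.

    The blank line at the end terminates the frame, per the SSE wire format.
    """
    return f"event: {event}\ndata: {json.dumps(data)}\n\n"
```

On the FastAPI side, frames like this would typically be yielded from a generator wrapped in a `StreamingResponse` with `media_type="text/event-stream"`, one per stage transition.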
Tech Stack
FastAPI · React · Vite · OpenRouter · Server-Sent Events · JWT Auth · Pydantic · httpx