Every serious researcher, from grad students grinding out lit reviews to tenured professors trying to track five subfields at once, knows the pain:
- New research moves too fast
- It's scattered across a dozen platforms
- And if you blink, you've missed a whole trend
Between arXiv, Twitter threads, conference keynotes, and shadow releases on GitHub, there's no single feed that captures the full signal.
We weren't inspired by another one-shot summarizer. We were inspired by how actual researchers work:
- Constant pivots between sources
- Zooming in and out across timescales
- Evaluating claims, not just regurgitating them
- Prioritizing trust, depth, and novelty
So we built Vizier: not another autocomplete wrapper, but a system that actually understands your goals and assembles a live research team around them.
Because in 2025, insight isn't just another token.
Vizier is a modular, agentic research engine: your personal research ops team, not just a chatbot. Whether you're building a newsletter, writing a paper, pitching an idea, or just staying on the bleeding edge, Vizier gives you:
- Precision-curated content from credible, multi-platform sources
- Structured, editable reports tailored to your research priorities
- Control over what gets emphasized, where deeper sourcing is needed, and how frequently updates should arrive

It's built for:
- Researchers and technical professionals who need rigorous updates on specialized domains
- Students and professors tracking fast-moving fields like GenAI, climate science, or synthetic bio
- Content creators and analysts writing newsletters, reports, or breakdowns on bleeding-edge developments
Once you've locked in a great output, you can:
- Schedule that research plan to auto-run daily, weekly, or monthly
- Revisit past reports, tweak scopes, swap source weights, or layer in new domains
This isn't just "use LangChain and call it a day." Vizier's agents think for themselves.
The router analyzes your refined query and decides:
- How many agents to spawn
- Which domains get which budget
- Which model contexts are needed
Agents don't just follow rules; they evaluate:
- How noisy a domain is
- Whether depth is sufficient
- If second-level validation is required
Agents actively rerank or prune content when trust scores fall short, pushing quality higher through intelligent evaluation.
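A minimal sketch of what trust-gated reranking like this can look like (the `trust` field and the 0.5 threshold are illustrative assumptions, not Vizier's actual values):

```python
# Illustrative trust-gated reranking: drop low-trust sources, rank the rest.
# The "trust" field and 0.5 cutoff are assumptions, not Vizier's real schema.
def rerank_sources(sources: list[dict], min_trust: float = 0.5) -> list[dict]:
    """Prune sources below the trust threshold, then rank best-first."""
    kept = [s for s in sources if s["trust"] >= min_trust]
    return sorted(kept, key=lambda s: s["trust"], reverse=True)

sources = [
    {"url": "https://arxiv.org/abs/2401.00001", "trust": 0.9},
    {"url": "https://example.com/blog", "trust": 0.3},
    {"url": "https://github.com/org/repo", "trust": 0.7},
]
ranked = rerank_sources(sources)
# keeps only the two sources above the threshold, best-first
```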
Query Refiner
- Builds multi-component research plans
- Considers user role and goals
- Sets update frequency parameters
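A hypothetical shape for one of these research plans, with illustrative field names (not Vizier's actual schema):

```python
from dataclasses import dataclass, field

# Hypothetical research-plan structure; all field names are illustrative
# assumptions, not Vizier's real data model.
@dataclass
class ResearchPlan:
    topic: str
    components: list[str] = field(default_factory=list)  # sub-questions to source
    user_role: str = "researcher"       # tailors tone and depth to the user
    update_frequency: str = "weekly"    # daily | weekly | monthly

plan = ResearchPlan(
    topic="state-space models for long-context LLMs",
    components=["new arXiv preprints", "benchmark results", "open-source releases"],
    user_role="grad student",
)
```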
Router v0_4
- Maps query scope to modality
- Assigns sourcing budgets
- Manages independent agents
Writer Agent
- Synthesizes modular content
- Auto-queries for clarification
- Enables source re-ranking
Live Agent Graph UI
- SSE-driven real-time updates
- Visual decision tracing
- Interactive feedback system
Query Processor Pipeline
- Query Refinement (Claude 3 Sonnet)
- Web Search Agent
- Twitter Search Agent
- Source Review & Reranking
- Router_04 for Agent Orchestration
- Draft Generation
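The stage order above can be sketched as a chain of stub functions (every body here is a placeholder, not Vizier's real implementation):

```python
# Placeholder pipeline sketch mirroring the stage order above.
# Each function body is a stand-in, not Vizier's actual logic.
def refine(query: str) -> str:
    # Query Refinement stage (Claude 3 Sonnet in the real system)
    return query.strip().lower()

def search(query: str) -> list[str]:
    # Stands in for the web + Twitter search agents
    return [f"source about {query}"]

def review(sources: list[str]) -> list[str]:
    # Source Review & Reranking would filter/reorder here
    return sources

def draft(sources: list[str]) -> str:
    # Draft Generation from the reviewed sources
    return "Report:\n- " + "\n- ".join(sources)

report = draft(review(search(refine("  Diffusion Models  "))))
```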
State Management
- Real-time SSE progress streaming
- Session persistence
- Source caching and reranking
Database Schema
- Queries table with JSONB for source storage
- Drafts with versioning
- User profiles and preferences
Query Flow

```
POST /queries                  # Create new query
GET  /queries/{id}             # Get query status
POST /queries/{id}/refine      # Start refinement
GET  /queries/stream/{id}      # SSE progress updates
```

Source Review

```
GET  /queries/{id}/sources         # Get sources for review
POST /queries/{id}/sources/review  # Submit reviewed sources
```

Draft Management

```
POST /drafts/generate      # Generate from sources
GET  /drafts/{id}          # Get draft content
POST /drafts/{id}/accept   # Accept draft
POST /drafts/{id}/reject   # Reject with feedback
GET  /drafts/stream/{id}   # Stream generation
```
Query Processing

```mermaid
graph TD
    A[Raw Query] --> B[Query Refinement]
    B --> C[Web Search]
    B --> D[Twitter Search]
    C --> E[Source Review]
    D --> E
    E --> F[Router_04]
    F --> G[Draft Generation]
```

Source Processing

```mermaid
graph TD
    A[Raw Sources] --> B[Trust Scoring]
    B --> C[User Review]
    C --> D[Final Reranking]
    D --> E[Router_04]
```
The backend uses Server-Sent Events (SSE) to provide real-time updates on:
- Query refinement progress
- Source collection status
- Source review readiness
- Draft generation progress
Events are emitted in the format:

```
{
  "stage": "ProcessStage",
  "timestamp": "datetime",
  "data": { stage-specific data }
}
```

- FastAPI - Asynchronous API framework
- PostgreSQL - JSONB storage for flexible document handling
- SSE - Real-time event streaming
- OpenRouter - Model provider abstraction and fallback
- Claude 3 Sonnet - Primary LLM for refinement and generation
- JWT - Authentication and session management
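A minimal sketch of serializing the event format above onto an SSE stream, assuming the standard `data:`-prefixed framing (the stage name and payload here are illustrative):

```python
import json
from datetime import datetime, timezone

# Sketch of the SSE event framing described above; the stage name and
# payload fields are illustrative, not Vizier's exact wire format.
def sse_event(stage: str, data: dict) -> str:
    payload = {
        "stage": stage,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "data": data,
    }
    # An SSE frame is "data: <payload>" terminated by a blank line
    return f"data: {json.dumps(payload)}\n\n"

frame = sse_event("query_refinement", {"progress": 0.4})
```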
- Node.js (v18+)
- npm (v9+) or yarn
- Visual Studio Code (recommended)
- Python 3.10+
- PostgreSQL
- Google OAuth credentials
- Clone the repository:

```
git clone https://github.com/your-username/vizier.git
cd vizier
```

- Backend Setup:
```
# Create virtual environment
python -m venv venv

# Activate it
# On Windows:
venv\Scripts\activate
# On macOS/Linux:
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt
```

- Configure Backend Environment:
Create a `.env` file:

```
GOOGLE_CLIENT_ID=
GOOGLE_CLIENT_SECRET=
REDIRECT_URI=http://localhost:8000/auth/callback
FRONTEND_URL=http://localhost:3000
JWT_SECRET=
DATABASE_URL=
sslmode=require
```
- Frontend Setup:

```
cd frontend
npm install
```

Backend:

```
uvicorn main:app --reload
```

Frontend:

```
npm run dev
# or
yarn dev
```

Access the application at http://localhost:5173
- Set up PostgreSQL:

```
createdb vizier
```

- Configure environment:

```
cp example.env .env
# Edit .env with your credentials:
# - DATABASE_URL
# - JWT_SECRET
# - OPENROUTER_API_KEY
```

- Initialize database:

```
python -m alembic upgrade head
```

- Run development server:

```
uvicorn main:app --reload
```
```
vizier/
├── frontend/
│   ├── public/                  # Static files
│   ├── src/
│   │   ├── app/
│   │   │   ├── pages/           # Application pages
│   │   │   │   ├── discover/    # Discover page
│   │   │   │   ├── graph/       # Graph visualization
│   │   │   │   ├── library/     # Library page
│   │   │   │   ├── login/       # Login and authentication
│   │   │   │   ├── onboarding/  # User onboarding
│   │   │   │   ├── query/       # Query interface
│   │   │   │   ├── settings/    # Settings page
│   │   │   │   └── spaces/      # Spaces page
│   │   │   ├── App.tsx          # Main application component
│   │   │   ├── App.css          # Main application styles
│   │   │   ├── index.css        # Global styles
│   │   │   └── main.tsx         # Application entry point
│   │   ├── components/          # Reusable components
│   │   │   ├── navigation/      # Navigation components
│   │   │   └── querybar/        # Query bar components
│   │   └── vite-env.d.ts        # Vite environment typings
│   ├── index.html               # HTML entry point
│   ├── tsconfig.json            # TypeScript configuration
│   ├── tsconfig.app.json        # App-specific TypeScript config
│   ├── tsconfig.node.json       # Node-specific TypeScript config
│   ├── vite.config.ts           # Vite configuration
│   ├── package.json             # Project dependencies and scripts
│   └── README.md                # Project documentation
└── backend/
    ├── .gitignore
    ├── database.py
    ├── dummyapi.py
    ├── main.py
    ├── README.md
    ├── requirements.txt
    ├── test.py
    ├── processes/
    │   ├── main.py
    │   ├── connectors/
    │   │   ├── router_0.py
    │   │   └── router_04.py
    │   ├── query/
    │   │   └── refiner.py
    │   ├── report/
    │   │   ├── refiner.py
    │   │   └── writer.py
    │   ├── search/
    │   │   ├── agents.py
    │   │   ├── twitter.py
    │   │   └── web.py
    │   ├── sourcing/
    │   │   ├── agent.py
    │   │   └── director.py
    │   └── writer/
    │       └── generator.py
    └── routers/
        ├── auth.py
        ├── drafts.py
        ├── openrouter.py
        ├── queries.py
        └── user.py
```
Refer to our API Documentation for detailed endpoint specifications and integration guides.
- Fork the repository
- Create your feature branch:

```
git checkout -b feature/amazing-feature
```

- Commit your changes:

```
git commit -m 'Add amazing feature'
```

- Push to the branch:

```
git push origin feature/amazing-feature
```

- Open a pull request
This project is licensed under the MIT License - see the LICENSE file for details.