A multi-agent AI research paper analysis tool built for the Ignite USD Hackathon.
Upload a PDF or paste a link to any academic paper. ScholarAI runs multiple AI agents in parallel to break it down from every angle: summary, deep explanation, code extraction, future research directions, and interactive chat, all personalized to your expertise level and goals.
The agents are model-agnostic. Each agent slot can call whichever AI model is best suited for that category. The current implementation uses Anthropic Claude (Haiku for speed-critical tasks, Sonnet for depth), but any specialized model can be swapped in per agent.
- Upload a research paper (PDF drag-and-drop) or paste any URL (ArXiv, etc.)
- Fill a short questionnaire to personalize the output to your expertise, goals, and preferences
- Get analysis across 5 tabs, all running simultaneously:
  - Summary: title, authors, problem, methodology, results, datasets with links, keywords
  - Deep Explainer: section-by-section breakdown, KaTeX-rendered equations, SVG diagrams
  - Code: extracted algorithms with syntax highlighting, simplified versions, setup guides
  - Future Research: limitations, open questions, research ideas, big picture
  - Chat with Paper: full streaming chat powered by the complete paper text
- Every tab has a "Dive Deeper" bar at the bottom for context-aware follow-up questions without leaving the panel
Every agent's system prompt is prepended with the user's profile so the AI adjusts vocabulary, depth, analogy choice, and format automatically.
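A minimal sketch of how such a profile block might be built and prepended. The field names below are illustrative assumptions, not the actual `buildUserProfile.js` implementation:

```javascript
// Hypothetical questionnaire answers -> reader profile block.
// Field names are assumptions; the real buildUserProfile.js may differ.
function buildUserProfile({ expertise, goal, painPoints, format, notes }) {
  const lines = [
    "## Reader profile",
    `- Expertise level: ${expertise}`,
    `- Goal: ${goal}`,
    painPoints?.length ? `- Pain points: ${painPoints.join(", ")}` : null,
    `- Preferred format: ${format}`,
    notes ? `- Extra instructions: ${notes}` : null,
  ].filter(Boolean);
  return lines.join("\n");
}

// Prepend the profile to any agent's base system prompt.
function withProfile(profileBlock, basePrompt) {
  return `${profileBlock}\n\n${basePrompt}`;
}
```

Because the same block is reused by every agent, a single questionnaire pass personalizes all five tabs at once.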
| Layer | Technology |
|---|---|
| Frontend | React 19, Vite 8 |
| Styling | Tailwind CSS v4 |
| Animation | Motion (Framer Motion) |
| Backend | Express 5, Node.js |
| AI | Anthropic Claude API (swappable per agent) |
| PDF parsing | pdfjs-dist (web worker) |
| Math rendering | KaTeX |
| Code highlighting | highlight.js |
| Markdown | react-markdown, remark-gfm |
Each agent is an independent module that can call any AI model. The current configuration:
| Agent | Model | Purpose |
|---|---|---|
| Summarizer | Claude Haiku 4.5 | Fast structured summary (runs first) |
| Deep Explainer | Claude Sonnet 4 | Section-by-section analysis + SVG diagrams (2 parallel calls) |
| Code Explainer | Claude Sonnet 4 | Algorithm extraction and explanation |
| Future Research | Claude Sonnet 4 | Limitations, gaps, and research ideas |
| Chat | Claude Haiku 4.5 | Streaming conversation with full paper context |
After the summary returns, three agents fire simultaneously without waiting for each other:
```
Summary done
|-- Deep Explainer (analysis + diagrams in parallel)
|-- Code Explainer
|-- Future Research
```
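The fan-out can be sketched as follows. The agent functions here are stubs passed in as a parameter, not the real modules in `src/agents/`:

```javascript
// Sketch of the orchestration: the summarizer runs first, then the three
// heavy agents start simultaneously. runAgent is a stub stand-in for the
// real agent modules.
async function analyzePaper(paperText, runAgent) {
  const summary = await runAgent("summarizer", paperText);
  // allSettled lets one agent fail in its own tab without killing the others.
  const [deep, code, future] = await Promise.allSettled([
    runAgent("deepExplainer", paperText),
    runAgent("codeExplainer", paperText),
    runAgent("futureResearch", paperText),
  ]);
  return { summary, deep, code, future };
}
```

Using `Promise.allSettled` rather than `Promise.all` means each tab can render its own error state independently.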
Before analysis runs, the user fills a questionnaire covering expertise level, goal, pain points, format preference, and optional free-text instructions. This gets converted into a reader profile block that is prepended to every agent's system prompt.
The frontend never calls Anthropic directly. All requests go through Express endpoints:
- `POST /api/claude` for non-streaming calls (returns full JSON)
- `POST /api/claude/stream` for streaming calls (pipes SSE to the client)
- `GET /api/fetch-url` for fetching paper content from URLs
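On the client side, the streamed response arrives as SSE `data:` lines. A minimal chunk parser might look like this; the `{ text }` event shape is an assumption, not the exact wire format `utils/api.js` uses:

```javascript
// Parse one SSE chunk into concatenated text deltas. Assumes each event is
// a "data: <json>" line carrying a { text } payload (an assumption about
// the wire format, not the project's actual one).
function parseSseChunk(chunk) {
  const deltas = [];
  for (const line of chunk.split("\n")) {
    if (!line.startsWith("data: ")) continue;
    const payload = line.slice(6).trim();
    if (payload === "[DONE]") break;
    try {
      const event = JSON.parse(payload);
      if (event.text) deltas.push(event.text);
    } catch {
      // Ignore JSON split across chunk boundaries in this sketch.
    }
  }
  return deltas.join("");
}
```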
The API key lives only on the server. The backend validates a model allowlist and enforces token limits per request.
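A sketch of that server-side guard as a pure validation function. The model ids and token cap below are illustrative, not the project's actual configuration:

```javascript
// Illustrative allowlist + token cap; the real values in server.js differ.
const ALLOWED_MODELS = new Set(["claude-haiku", "claude-sonnet"]);
const MAX_TOKENS_CAP = 8192;

// Returns { ok: true } or { ok: false, error } for an incoming request body.
function validateRequest({ model, max_tokens }) {
  if (!ALLOWED_MODELS.has(model)) {
    return { ok: false, error: `model not allowed: ${model}` };
  }
  if (!Number.isInteger(max_tokens) || max_tokens < 1 || max_tokens > MAX_TOKENS_CAP) {
    return { ok: false, error: "max_tokens out of range" };
  }
  return { ok: true };
}
```

In Express this would run before the proxied Anthropic call, returning a 400 on failure.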
An AbortController is created when analysis starts. Every agent call receives its signal. Clicking "New Paper" or navigating away aborts all in-flight requests instantly.
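The cancellation wiring reduces to one controller per analysis run, with its signal threaded into every agent call:

```javascript
// One AbortController per analysis run; its signal is passed to every agent
// call so a single abort() rejects all in-flight requests.
function startAnalysis(agentFns) {
  const controller = new AbortController();
  const runs = agentFns.map((fn) => fn(controller.signal));
  // "New Paper" or navigation calls cancel().
  return { runs, cancel: () => controller.abort() };
}
```

Each agent forwards the signal to `fetch`, so aborting rejects the underlying network requests immediately.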
```
ScholarAI/
├── server.js                Express backend, proxies all AI calls
├── .env                     ANTHROPIC_API_KEY + PORT (gitignored)
├── vite.config.ts           Vite config with /api proxy
└── src/
    ├── App.tsx              Root component, state management, agent orchestration
    ├── main.tsx             Entry point
    ├── index.css            Global styles, scrollbar, fonts
    ├── types/
    │   └── index.ts         All TypeScript interfaces
    ├── agents/
    │   ├── summarizerAgent.js      Structured JSON summary
    │   ├── deepExplainerAgent.js   Deep analysis + SVG diagrams
    │   ├── codeExplainerAgent.js   Code extraction + explanations
    │   ├── futureResearchAgent.js  Limitations + research ideas
    │   └── chatAgent.js            Streaming chat wrapper
    ├── components/
    │   ├── HomePage.tsx     Landing page with video background
    │   ├── Questionnaire.tsx  Personalization modal
    │   ├── ResultsPage.tsx  Main results layout with tab routing
    │   ├── TabBar.tsx       Pill-style navigation bar
    │   ├── AppShell.tsx     Root layout wrapper
    │   ├── DropZone.tsx     PDF drag-and-drop upload
    │   ├── panels/
    │   │   ├── SummaryPanel.tsx
    │   │   ├── DeepExplainerPanel.tsx
    │   │   ├── CodePanel.tsx
    │   │   ├── FuturePanel.tsx
    │   │   └── ChatPanel.tsx
    │   └── ui/
    │       ├── AnimatedCard.tsx    Fade-in card wrapper
    │       ├── CodeBlock.tsx       Syntax-highlighted code with copy
    │       ├── EquationBlock.tsx   KaTeX math rendering
    │       ├── Prose.tsx           Markdown renderer
    │       ├── KeywordPill.tsx     Keyword tags
    │       ├── SectionDivider.tsx  Visual separator
    │       ├── GlowCard.tsx        Glowing card variant
    │       ├── LoadingBar.tsx      Progress indicator
    │       └── SuggestedQuestions.tsx  Chat starter prompts
    ├── utils/
    │   ├── api.js           callClaude() + streamClaude()
    │   ├── extractText.js   PDF + URL text extraction
    │   ├── buildUserProfile.js  Questionnaire answers to prompt string
    │   └── formattingRules.js   Shared formatting rules for agents
    └── hooks/
        └── usePdfExtractor.ts   PDF file text extraction
```
- Node.js 18+
- An Anthropic API key
```
git clone https://github.com/HarshithReddy01/ScholarAI.git
cd ScholarAI
npm install
```

Create a `.env` file in the project root:

```
ANTHROPIC_API_KEY=sk-ant-your-key-here
PORT=3001
```
```
npm run dev
```

This starts the Express backend (port 3001) and the Vite dev server (port 5173) concurrently. Open http://localhost:5173.
Other commands:

```
npm run dev:server   # backend only
npm run dev:client   # Vite only
npm run build        # production build
npm start            # serve production build
```

- The API key lives only in `.env` on the server and is never sent to the browser
- SVG diagrams from the AI are sanitized before rendering (strips scripts, event handlers, javascript: hrefs)
- The backend validates a model allowlist and enforces token caps
- `.env` is gitignored
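A rough sketch of the SVG sanitization step described above. This regex-based version is illustrative only; a production implementation should use a real parser or a library such as DOMPurify:

```javascript
// Illustrative sanitizer: strips <script> elements, on* event-handler
// attributes, and javascript: hrefs from AI-generated SVG markup.
// Not the project's actual implementation.
function sanitizeSvg(svg) {
  return svg
    .replace(/<script[\s\S]*?<\/script>/gi, "")
    .replace(/\son\w+\s*=\s*(".*?"|'.*?'|\S+)/gi, "")
    .replace(/\b(xlink:href|href)\s*=\s*(["'])\s*javascript:[^"']*\2/gi, "");
}
```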