Interactive Research Library
From research papers
to interactive products
PaperMap turns landmark AI research papers into visual, interactive learning experiences.
Explore the Transformer, GPT-3, InstructGPT, RAG, and LoRA — rebuilt as production-grade explainers.
Paper Library
Published Explainers
Each paper is a self-contained interactive guide with visual demos, architecture breakdowns, quizzes, and research-accurate detail from the original publications.
Live
Vaswani et al. · 2017
Attention Is All You Need
The paper that started it all. Interactive walkthrough of the Transformer architecture with tokenization, embeddings, self-attention, multi-head attention, positional encoding, and full encoder-decoder visualization.
Transformer
Self-Attention
NLP
Architecture
Open Explainer →
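The self-attention mechanism this explainer visualizes can be sketched in a few lines. This is a minimal NumPy illustration of the paper's scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V; the toy matrices are random stand-ins for learned projections, not values from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V  (Vaswani et al., Eq. 1)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # pairwise query-key similarities
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V                   # weighted mix of value vectors

# Toy example: 3 tokens, d_k = 4 (random vectors for illustration only).
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): one contextualized vector per token
```

Multi-head attention, also covered in the walkthrough, runs several of these in parallel over different learned projections and concatenates the results.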
Live
Brown et al. · 2020
Language Models are Few-Shot Learners
How 175 billion parameters unlocked in-context learning. Interactive few-shot demos, scaling law visualizations, benchmark tables, data contamination analysis, and the full GPT-3 model family.
GPT-3
Few-Shot
Scaling Laws
175B
Open Explainer →
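In-context learning needs no gradient updates: the task is specified entirely in the prompt. A minimal sketch of how a few-shot prompt is assembled (the Q/A layout here is illustrative, not the paper's exact format):

```python
def few_shot_prompt(task, examples, query):
    """Build a GPT-3-style few-shot prompt: task description,
    K demonstrations, then the unanswered query."""
    blocks = [task]
    for x, y in examples:
        blocks.append(f"Q: {x}\nA: {y}")
    blocks.append(f"Q: {query}\nA:")
    return "\n\n".join(blocks)

# Two demonstrations (K=2), then a new query for the model to complete.
demos = [("2 + 2", "4"), ("7 + 5", "12")]
prompt = few_shot_prompt("Answer the arithmetic question.", demos, "3 + 9")
print(prompt)
```

The paper's "few-shot" setting varies K (typically 10–100 demonstrations); the explainer's playground lets you do the same interactively.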
Live
Ouyang et al. · 2022
Training Language Models to Follow Instructions with Human Feedback
The RLHF paper that paved the way for ChatGPT. Interactive 3-step pipeline (SFT → Reward Model → PPO), human evaluation results, bias analysis, and qualitative examples.
InstructGPT
RLHF
PPO
Alignment
Open Explainer →
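The reward model in step 2 of the pipeline is trained on human preference pairs: given two responses, push the reward of the preferred one above the rejected one. A sketch of that pairwise log-loss, with made-up scalar rewards for illustration:

```python
import numpy as np

def preference_loss(r_chosen, r_rejected):
    """Pairwise reward-model loss: -log sigmoid(r_chosen - r_rejected).
    Minimizing it ranks the human-preferred response above the rejected one."""
    return -np.log(1.0 / (1.0 + np.exp(-(r_chosen - r_rejected))))

# Toy scalar rewards (illustrative numbers, not from the paper).
print(preference_loss(2.0, 0.5))  # small loss: ranking already correct
print(preference_loss(0.5, 2.0))  # large loss: ranking is wrong
```

Step 3 then uses this trained reward model as the optimization target for PPO, with a KL penalty keeping the policy close to the SFT model.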
Live
Lewis et al. · 2020
Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
An interactive deep dive into the original RAG paper: dense retrieval with DPR, BART generation, non-parametric memory with Wikipedia, and benchmark gains on knowledge-intensive QA.
RAG
Dense Retrieval
BART
Knowledge QA
Open Explainer →
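The retrieve-then-generate loop at the heart of RAG can be sketched end to end. The passage texts, 2-d "embeddings", and `generate` stand-in below are invented for illustration; the real system uses DPR embeddings with maximum inner product search and a BART generator conditioned on the retrieved passage.

```python
import numpy as np

# Toy dense index: 2-d vectors stand in for DPR passage embeddings.
passages = {
    "p1": "The Eiffel Tower is in Paris.",
    "p2": "The Great Wall is in China.",
}
index = {"p1": np.array([1.0, 0.0]), "p2": np.array([0.0, 1.0])}

def retrieve(query_vec, k=1):
    # Maximum inner product search over the non-parametric memory, as in DPR.
    scored = sorted(index, key=lambda pid: -(query_vec @ index[pid]))
    return [passages[pid] for pid in scored[:k]]

def generate(question, docs):
    # Stand-in for the BART generator: real RAG conditions
    # generation on the question concatenated with each passage.
    return f"Answer({question} | {docs[0]})"

q_vec = np.array([0.9, 0.1])  # query embedding close to p1
docs = retrieve(q_vec)
print(generate("Where is the Eiffel Tower?", docs))
```

In the paper, the retriever scores the full Wikipedia index and the generator marginalizes over the top-k passages rather than using only the best one.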
Live
Hu et al. · 2021
LoRA: Low-Rank Adaptation of Large Language Models
An interactive walkthrough of LoRA covering low-rank updates, parameter-efficient fine-tuning, practical deployment tradeoffs, and why LoRA matches full fine-tuning at a fraction of the cost.
LoRA
PEFT
Low-Rank
LLM Fine-Tuning
Open Explainer →
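The low-rank update is small enough to sketch directly: instead of updating a frozen weight W, LoRA trains two thin matrices B and A and computes W'x = Wx + (α/r)·BAx. The dimensions below are chosen for illustration; the zero-init of B means the adapted model starts identical to the pretrained one.

```python
import numpy as np

d, k, r = 512, 512, 8   # frozen weight is d x k; LoRA rank r << min(d, k)
rng = np.random.default_rng(0)
W = rng.normal(size=(d, k))          # frozen pretrained weight
A = rng.normal(size=(r, k)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                 # trainable up-projection, zero-init
alpha = 16                           # scaling hyperparameter

def lora_forward(x):
    # W'x = Wx + (alpha / r) * B(Ax). After training, B @ A can be
    # merged into W, so LoRA adds no extra inference latency.
    return W @ x + (alpha / r) * (B @ (A @ x))

full = W.size            # parameters updated by full fine-tuning
lora = A.size + B.size   # parameters LoRA trains instead
print(f"trainable params: {lora} vs {full} ({100 * lora / full:.1f}%)")
```

Here LoRA trains 8,192 parameters instead of 262,144 for this single layer — about 3% — and the savings compound across every adapted layer of a large model.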
About
What is PaperMap?
A scalable, static-first platform that transforms dense research papers into interactive learning products — no frameworks, no build tools, no dependencies.
🎯
Research-Accurate
Every number, formula, and architectural detail is sourced directly from the original paper. Section references included throughout.
⚡
Interactive Demos
Tokenizers, attention maps, few-shot playgrounds, scaling charts, and quiz blocks — learn by building intuition, not just reading.
📱
Mobile-First Design
Responsive layouts, touch-friendly targets, and reduced-motion support. Works well on phones, tablets, and desktops.
🚀
Zero Build Step
Pure HTML, CSS, and vanilla JavaScript. Deploy anywhere — GitHub Pages, Netlify, Vercel, or any static host.
♿
Accessible by Default
Semantic HTML, ARIA labels, keyboard navigation, skip links, focus styles, and prefers-reduced-motion support.
🎨
Consistent Design System
Shared typography, color palette, components, and interaction patterns across all paper explainers.
Roadmap
Building the Library
PaperMap grows one paper at a time. Each explainer is handcrafted with the same production-quality standards.
✓ Attention Is All You Need
Transformer architecture, tokenization, embeddings, self-attention, multi-head attention, positional encoding, training.
✓ GPT-3: Few-Shot Learners
In-context learning, scaling laws, 175B architecture, benchmark results, data contamination, societal impact.
✓ InstructGPT & RLHF
3-step RLHF pipeline, reward modeling, PPO, human evaluations, alignment tax, bias analysis.
✓ RAG: Retrieval-Augmented Generation
Dense retrieval with DPR, BART generation, non-parametric memory, and SOTA gains on knowledge-intensive QA.
✓ LoRA: Low-Rank Adaptation
Low-rank updates for efficient LLM fine-tuning with massive parameter savings and zero extra inference latency.
→ Next: More Papers
BERT, diffusion models, multimodal systems, and more landmark papers coming soon.