Inspiration

Our inspiration came from popular year-in-review experiences like Spotify Wrapped. We wanted to bring that same personalized, shareable, and narrative-driven experience to League of Legends players. Instead of just showing them a dashboard of raw stats, we wanted to use AI to tell them the story of their season, identifying their unique playstyle and offering personalized, data-driven coaching.

What it does

Rift Rewind is an AI-powered coaching platform that generates a personalized, "Spotify Wrapped" style dashboard for League of Legends players.

It operates in two distinct phases:

- Instant Profile Load (<5s): The moment a user enters their Riot ID, they see an immediate, lightweight profile with their rank, icon, and main role.

- Deep AI Analysis (3-6 min): In the background, the system analyzes 50-200 of their recent matches to generate deep insights. The final dashboard displays:

  - Accurate Champion Pool: Top champions with real win rates and stats from match history.

  - AI-Generated Personality: A unique playstyle persona (e.g., "Aggressive Playmaker," "Vision Master") based on their in-game metrics.

  - Deep AI Insights: A 4-5 sentence narrative from Claude Sonnet 4 that identifies critical strengths and weaknesses, using specific match evidence to back up its claims.

  - Stat Highlights & Growth Areas: Key performance indicators and 2-3 concrete, actionable tips for improvement.

How we built it

We built Rift Rewind using a modern, decoupled architecture:

- Frontend: A responsive Next.js 16 (React 19) app built with TypeScript, Tailwind CSS, and Framer Motion for a polished, animated UI.

- Backend: A high-performance FastAPI (Python 3.12+) server using Pydantic for type-safe API validation.

- AI & Data Analysis:

  - AI Models: We used a tiered-LLM strategy with Amazon Bedrock.

    - Claude Sonnet 4 serves as our "Expert Analyst," synthesizing all data to generate the final, high-quality user-facing insights.

    - Claude 3 Haiku acts as a "High-Speed Assistant" for internal tasks like selecting key matches for analysis and summarizing match events to manage token counts.
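The routing idea can be sketched in a few lines. The model IDs and task names below are placeholders, not our actual Bedrock identifiers (real Bedrock model IDs are versioned strings):

```python
# Placeholder model IDs -- check the Bedrock console for the real,
# versioned identifiers before using these.
SONNET = "anthropic.claude-sonnet-4"   # "Expert Analyst"
HAIKU = "anthropic.claude-3-haiku"     # "High-Speed Assistant"

# Hypothetical names for the high-volume, low-complexity internal tasks.
FAST_TASKS = {"select_key_matches", "summarize_timeline"}

def pick_model(task: str) -> str:
    """Route cheap, repetitive work to Haiku; reserve Sonnet for the
    one task that matters most: the final user-facing insight."""
    return HAIKU if task in FAST_TASKS else SONNET
```

The chosen ID is then passed to the `bedrock-runtime` client when invoking the model.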

  - Data Pipeline: We built a "Wide + Deep" data pipeline. We first use pandas to aggregate "Wide" stats from all matches, then use our Haiku model to select 3-5 key matches for a "Deep Dive" timeline analysis. Both data sets are fed to Claude Sonnet 4 for the final report.
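The "Wide" half of that pipeline is plain pandas. In this sketch the column names (`champion`, `win`, `kills`, `deaths`, `assists`) are illustrative, and a simple KDA-outlier heuristic stands in for the Haiku match-selection call:

```python
import pandas as pd

def wide_profile(matches: pd.DataFrame) -> pd.DataFrame:
    """'Wide' pass: aggregate every match into per-champion stats."""
    agg = matches.groupby("champion").agg(
        games=("win", "size"),
        win_rate=("win", "mean"),
        avg_kills=("kills", "mean"),
        avg_deaths=("deaths", "mean"),
    )
    return agg.sort_values("games", ascending=False)

def pick_deep_dive(matches: pd.DataFrame, k: int = 3) -> pd.DataFrame:
    """Stand-in for the Haiku selection step: surface the most unusual
    games (best and worst KDA outliers) for timeline analysis."""
    kda = (matches["kills"] + matches["assists"]) / matches["deaths"].clip(lower=1)
    return matches.assign(_score=(kda - kda.mean()).abs()).nlargest(k, "_score")
```

Only the small `wide_profile` table and the `k` selected timelines ever reach the LLM, which keeps prompts token-efficient.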

  - Data Source: The Riot Games API (Match-V5, Summoner-V4, League-V4).

- Caching: A crucial, multi-layer caching system.

  - Persistent JSON Cache: A file-based cache (cache_manager.py) stores all match and profile data with a 24-hour TTL for a massive performance boost on repeat analyses.
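The core of a TTL'd file cache is small. This is a minimal sketch, not the actual cache_manager.py; the cache directory name and on-disk entry shape are assumptions:

```python
import json
import time
from pathlib import Path

CACHE_DIR = Path("cache")       # assumed location, one JSON file per key
TTL_SECONDS = 24 * 60 * 60      # the 24-hour TTL from the write-up

def cache_set(key: str, data) -> None:
    CACHE_DIR.mkdir(exist_ok=True)
    path = CACHE_DIR / f"{key}.json"
    # Store the write time alongside the payload so reads can expire it.
    path.write_text(json.dumps({"saved_at": time.time(), "data": data}))

def cache_get(key: str):
    path = CACHE_DIR / f"{key}.json"
    if not path.exists():
        return None
    entry = json.loads(path.read_text())
    if time.time() - entry["saved_at"] > TTL_SECONDS:
        path.unlink()           # expired: evict and treat as a miss
        return None
    return entry["data"]
```

A repeat analysis within 24 hours then skips every Riot API fetch that hits this cache.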

  - In-Memory Agent Cache: A retry-safe cache within the AI agent (year_rewind_agent.py) prevents re-fetching data if a Bedrock API call is throttled mid-analysis.
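The retry-safe idea amounts to memoizing each pipeline step for the lifetime of one analysis run. A minimal sketch (the decorator and step names are illustrative, not the actual year_rewind_agent.py API):

```python
import functools

_analysis_cache: dict = {}  # lives for the duration of one analysis run

def retry_safe(step_name: str):
    """Cache a pipeline step's result so that a retry after Bedrock
    throttling re-uses it instead of recomputing the whole step."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args):
            key = (step_name, args)   # args must be hashable (e.g. a PUUID)
            if key not in _analysis_cache:
                _analysis_cache[key] = fn(*args)
            return _analysis_cache[key]
        return wrapper
    return decorator
```

If a later Bedrock call is throttled and the agent loops back, every decorated step returns instantly from the cache.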

Challenges we ran into

- The "Double Wall" of Rate Limits: Our biggest challenge was navigating the strict, opposing rate limits of the Riot API (100 requests per 2 minutes) and the AWS Bedrock API (~5-10 requests/s, with throttling). A single 100-match analysis requires 101 Riot calls, which would instantly exhaust the 2-minute window and trigger a cooldown. We couldn't "burst" Riot, and we couldn't "burst" Bedrock.

- Massive Data & Token Limits: A single match timeline is enormous. Analyzing 100 of them outright was impossible: it would vastly exceed any LLM's context window and be incredibly slow and expensive. We had to find a way to distill 100+ matches into a concise, token-efficient prompt.

- API Throttling & Resilience: Our initial AI agent failed the entire 3-6 minute analysis if a single Bedrock API call was throttled. We had to build a system that could survive and retry through these throttles without losing progress.

Accomplishments that we're proud of

- "Solving" the Rate Limit Problem: We built two custom, intelligent rate limiters. The Riot API limiter (rate_limiter.py) respects both the per-second and 2-minute token buckets, while the Bedrock limiter (bedrock_rate_limiter.py) handles throttling with exponential backoff and jitter, making our data pipeline robust.
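The two pieces look roughly like this. This is a simplified sketch, not the actual rate_limiter.py or bedrock_rate_limiter.py; the bucket sizes in the docstring are the commonly documented Riot dev-key limits:

```python
import random
import time
from collections import deque

class DualWindowLimiter:
    """Sliding-window limiter honouring two request buckets at once,
    e.g. the Riot dev-key limits of 20 req / 1 s and 100 req / 120 s."""

    def __init__(self, limits):
        # limits: iterable of (max_requests, window_seconds) pairs
        self.limits = [(n, w, deque()) for n, w in limits]

    def wait_time(self, now=None) -> float:
        """Seconds to wait before the next request is allowed (0 if free)."""
        now = time.monotonic() if now is None else now
        wait = 0.0
        for n, window, stamps in self.limits:
            while stamps and now - stamps[0] >= window:
                stamps.popleft()  # drop timestamps outside the window
            if len(stamps) >= n:
                wait = max(wait, stamps[0] + window - now)
        return wait

    def record(self, now=None) -> None:
        """Record that a request was just sent, in every bucket."""
        now = time.monotonic() if now is None else now
        for _, _, stamps in self.limits:
            stamps.append(now)

def backoff_with_jitter(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Full-jitter exponential backoff delay for throttled Bedrock retries."""
    return random.uniform(0, min(cap, base * 2 ** attempt))
```

Before each Riot call the client sleeps for `wait_time()` and then calls `record()`; each throttled Bedrock call sleeps for `backoff_with_jitter(attempt)` before retrying.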

- 80-90% Performance Boost: Our JSON-based caching system (cache_manager.py) is a huge win. A 3-6 minute first-time analysis becomes a 30-60 second repeat analysis, making the app feel incredibly fast for returning users.

- The "Wide + Deep" AI Pipeline: Our solution to the token-limit problem. By using pandas for "Wide" stats and a lightweight LLM (Haiku) to select "Deep Dive" matches, we can send a perfectly summarized, token-efficient, and data-rich prompt to our main analyst LLM (Claude Sonnet 4). This allows for high-quality analysis without overwhelming the model.

- Resilient AI Agent: The retry-safe in-memory cache in year_rewind_agent.py was a breakthrough. Now, if Bedrock throttles a request, the agent waits, retries, and re-uses its already-analyzed data (like champion pool stats) instead of restarting the entire 6-minute process from scratch.

What we learned

- Preprocessing is Everything: You can't just send raw, massive data to an LLM and expect good results. The real work is in the data engineering: preprocessing 100 matches into a "Wide" aggregate profile and summarizing "Deep" timelines into event logs. This data-shaping is what enables the AI to provide quality insights.

- Caching is a Two-Layer Problem: We learned to think of caching in two ways: persistent caching (our JSON files) for performance, speeding up user requests, and in-memory caching (inside our agent) for resilience, surviving API failures.

- Tiered LLMs are Smart Architecture: Using a powerful model like Claude Sonnet 4 for everything is slow and expensive. We learned to use a cheaper, faster model (Haiku) as an "assistant" for high-volume, low-complexity tasks like summarization and selection. This saves the "expert" model (Sonnet) for the one task that matters most: generating the final insight for the user.

What's next for Rift Rewind

- Multi-Region Support: Expand the app from NA-only to support all Riot regions worldwide.

- Historical Growth Charts: Implement long-term data tracking to create rank timeline visualizations and show players their performance trends over an entire year, not just recent matches.

- Production-Grade Caching: Migrate our JSON-based file cache to a proper in-memory database like Redis. This will support multiple server instances and be significantly faster.

- Real-Time Progress Streaming: Implement WebSockets to stream the analysis progress (e.g., "Fetching Matches...", "Analyzing Playstyle...") to the user in real time, improving the 3-6 minute wait experience.
