Inspiration
We all know what it feels like to open a page and instantly shut down: tiny fonts, dense paragraphs, jargon everywhere. For students with ADHD, dyslexia, visual impairments, or just end-of-day brain fog, that “wall of text” can be a real barrier, not just an annoyance.
We wanted something that follows the user, not the website. Instead of begging every site to fix its accessibility, we asked:
“What if you could take any web page and instantly make it easier to read, understand, and listen to?”
That became PatriotRead – an AI-powered accessibility companion for the browser.
Functionality
PatriotRead is a browser extension that turns any page into something more approachable.
Core features
📝 Select any text on a website
- Right-click or use our UI to send selected content to PatriotRead.
🔊 Text-to-Speech (TTS)
- Have selected text read aloud in the browser.
- 🌍 Supports other languages (currently French and Spanish).
- ⏮️ Adjustable playback speed to better follow and understand the text.
📤 Export to Salesforce
- Export selected text as a Salesforce Note (`.json`) for workflow integration.
↔️ Toggle Accessibility View
- 👀 Change the format of the site to be visually easier to read, especially for users with dyslexia.
🌓 High Contrast Mode
- ✨ Make the site’s text stand out more for people with low vision or light sensitivity.
🤖 AI Summarize
Click “AI Summarize” to:
- Turn dense text into short, clear bullet points.
- Optionally have the summary read aloud.
- Download or copy the summarized text for later use.
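The in-browser TTS above can be sketched with the standard Web Speech API. The language codes, the `SUPPORTED_LANGS` map, and the rate clamp below are illustrative assumptions, not our exact extension code:

```javascript
// Sketch: reading selected text aloud in the browser with the Web Speech API.
// SUPPORTED_LANGS and the rate limits are assumptions for illustration.
const SUPPORTED_LANGS = { en: "en-US", fr: "fr-FR", es: "es-ES" };

function clampRate(rate) {
  // SpeechSynthesisUtterance.rate technically accepts 0.1–10;
  // we keep a comfortable listening range.
  return Math.min(2, Math.max(0.5, Number(rate) || 1));
}

function speakSelection(text, lang = "en", rate = 1) {
  // Browser-only: SpeechSynthesisUtterance / speechSynthesis exist in pages
  // and extension contexts, not in Node.
  const utterance = new SpeechSynthesisUtterance(text);
  utterance.lang = SUPPORTED_LANGS[lang] || SUPPORTED_LANGS.en;
  utterance.rate = clampRate(rate);
  speechSynthesis.speak(utterance);
}
```

Clamping the rate keeps "adjustable playback speed" inside a range that stays intelligible for listeners.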
Who we’re helping
Our goal is to make dense content less scary and more accessible, especially for:
- Students with ADHD/dyslexia who benefit from shorter, clearer text.
- Users with visual impairments who rely on audio.
- Anyone who has to read long articles, docs, or PDFs when they're exhausted.
Methods
We built PatriotRead as a multi-cloud, multi-component system.
1. Browser extension (frontend)
A popup UI + service worker + content script that:
- Detects and grabs selected text from the current page.
- Sends requests to our backend endpoints.
- Displays simplified/summarized text from the AI.
- Triggers TTS playback in the browser.
The extension only ever talks to our own HTTPS endpoints, not directly to Azure. This keeps secrets out of the frontend and makes the architecture swappable.
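The selection-grabbing step can be sketched as below. `MAX_CHARS` and the message type name are our illustrative assumptions, not the exact extension code:

```javascript
// Sketch of the content-script side: grab the current selection, clamp it,
// and hand it to the service worker, which calls our HTTPS endpoints.
const MAX_CHARS = 4000; // assumed backend length limit

function clampSelection(text, max = MAX_CHARS) {
  const trimmed = (text || "").trim();
  return trimmed.length > max ? trimmed.slice(0, max) : trimmed;
}

// Browser-only wiring (runs inside the content script):
// const selected = clampSelection(window.getSelection().toString());
// if (selected) {
//   chrome.runtime.sendMessage({ type: "PATRIOTREAD_SELECTION", text: selected });
// }
```

Clamping on the client mirrors the backend's length limits so oversized selections fail fast instead of bouncing off the API.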
2. AWS layer – our public API
We use AWS as the API layer and orchestrator:
Amazon API Gateway exposes endpoints like:
- `POST /llm` → AI rewrite (simplify/summarize).
- `POST /tts` → text-to-speech (implemented by another teammate).

AWS Lambda runs our Node.js functions. The `/llm` Lambda:
- Parses requests and validates input.
- Enforces length limits.
- Handles CORS so the browser extension can talk to it.
- Defines a clean JSON contract so the frontend doesn't care which model/provider we use behind the scenes.
This gives us a realistic “startup-style” backend: real APIs instead of frontend-only hacks.
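The validation and response-contract part of that Lambda can be sketched as below. The field names, error codes, and the 4000-character limit are illustrative, not our exact code:

```javascript
// Minimal sketch of the /llm Lambda's input validation and JSON contract.
const MAX_INPUT_CHARS = 4000; // assumed length limit

const CORS_HEADERS = {
  "Access-Control-Allow-Origin": "*",
  "Access-Control-Allow-Methods": "POST,OPTIONS",
  "Access-Control-Allow-Headers": "Content-Type",
};

function validateRequest(body) {
  if (!body || typeof body.text !== "string" || !body.text.trim()) {
    return { ok: false, error: { code: "EMPTY_TEXT", message: "No text provided." } };
  }
  if (!["simplify", "summarize"].includes(body.mode)) {
    return { ok: false, error: { code: "BAD_MODE", message: "mode must be simplify or summarize." } };
  }
  if (body.text.length > MAX_INPUT_CHARS) {
    return { ok: false, error: { code: "TOO_LONG", message: `Max ${MAX_INPUT_CHARS} characters.` } };
  }
  return { ok: true };
}

function jsonResponse(statusCode, payload) {
  // Every response carries CORS headers so the extension can read it.
  return { statusCode, headers: CORS_HEADERS, body: JSON.stringify(payload) };
}
```

Keeping validation and response shaping in small pure functions is also what makes the local test harness practical.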
3. Azure layer – AI brain (OpenAI)
Behind the /llm Lambda, we use Azure OpenAI:
Model / deployment
- `gpt-4.1-mini` (deployed as `gpt-4.1-mini-2`).

Azure client
- A small helper (`azureLlmClient.js`) that:
  - Reads env vars for `AZURE_OPENAI_ENDPOINT`, the deployment name, and the API version.
  - Calls the Azure Chat Completions endpoint.
  - Implements two modes:
    - Simplify – rewrite text in clear, accessible language.
    - Summarize – produce concise bullet-point summaries.
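A helper in the spirit of `azureLlmClient.js` might look like the sketch below. The URL shape follows Azure OpenAI's REST API; the env var names other than `AZURE_OPENAI_ENDPOINT` are our assumptions:

```javascript
// Sketch of an azureLlmClient.js-style helper (Node.js 18+, global fetch).
function buildAzureUrl(endpoint, deployment, apiVersion) {
  // endpoint must be the bare resource URL; accidentally including a second
  // /openai/deployments/... segment is exactly the kind of 404 we hit.
  return `${endpoint.replace(/\/+$/, "")}/openai/deployments/${deployment}` +
         `/chat/completions?api-version=${apiVersion}`;
}

async function callAzureOpenAI(systemPrompt, userText) {
  const url = buildAzureUrl(
    process.env.AZURE_OPENAI_ENDPOINT,
    process.env.AZURE_OPENAI_DEPLOYMENT,   // e.g. "gpt-4.1-mini-2" (assumed var name)
    process.env.AZURE_OPENAI_API_VERSION   // assumed var name
  );
  const res = await fetch(url, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "api-key": process.env.AZURE_OPENAI_API_KEY, // assumed var name
    },
    body: JSON.stringify({
      messages: [
        { role: "system", content: systemPrompt },
        { role: "user", content: userText },
      ],
    }),
  });
  if (!res.ok) throw new Error(`Azure OpenAI returned ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Centralizing the URL construction in one function is also what made the doubled-path bug (see Challenges) easy to fix once we found it.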
Normalized Lambda response
On success:
```json
{
  "success": true,
  "requestId": "...",
  "mode": "simplify | summarize",
  "outputText": "...",
  "source": "azure-openai-mini"
}
```
On failure or rate limit:
```json
{
  "success": false,
  "error": { "code": "...", "message": "..." },
  "fallbackText": "<original (clamped) text>"
}
```
The frontend can always show or read something (using `outputText` or `fallbackText`) instead of breaking when AI fails.
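On the extension side, consuming that contract reduces to one small decision, sketched here (the function name is ours):

```javascript
// Sketch: always have something to display or read aloud.
// Prefer the AI's outputText, then the backend's fallbackText,
// then the user's original selection.
function pickDisplayText(apiResponse, originalText) {
  if (apiResponse && apiResponse.success && apiResponse.outputText) {
    return apiResponse.outputText;
  }
  if (apiResponse && apiResponse.fallbackText) {
    return apiResponse.fallbackText;
  }
  return originalText;
}
```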
Challenges
We definitely hit some classic hackathon walls:
Multi-cloud wiring
Using both AWS and Azure was powerful but tricky:
- We initially misconfigured the Azure endpoint (a doubled `/openai/deployments/...` segment in the URL) and got mysterious `404 Resource not found` errors.
- We had to debug Azure's dev-tier quotas and `429` rate limiting, then design fallback behavior so users weren't punished when the AI was unhappy.
Environment variables & local testing
Getting everything to work locally first was harder than it sounded:
- Path issues (`/llmHandler` vs. `./llmHandler`) and case-sensitivity caused confusing `MODULE_NOT_FOUND` errors.
- We had to learn how to manage and inspect environment variables in PowerShell, then mirror them correctly in the Lambda configuration.
- We built a dedicated `testLlmLocal.js` harness to simulate API Gateway events and test `/llm` without deploying every time.
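A harness in that spirit mostly needs to fabricate API Gateway proxy events. The sketch below follows the REST proxy event format; the function name and the `llmHandler.js` require path are illustrative:

```javascript
// Sketch of a testLlmLocal.js-style helper: build a fake API Gateway
// proxy event so the handler can be exercised without deploying.
function makeApiGatewayEvent(path, bodyObj) {
  return {
    httpMethod: "POST",
    path,
    headers: { "content-type": "application/json" },
    body: JSON.stringify(bodyObj),
    isBase64Encoded: false,
  };
}

// Usage (assuming the handler is exported from llmHandler.js):
// const { handler } = require("./llmHandler");
// handler(makeApiGatewayEvent("/llm", { mode: "simplify", text: "Dense text..." }))
//   .then((res) => console.log(res.statusCode, res.body));
```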
CORS, JSON shape, and contracts
Browser extensions are picky:
- We needed correct CORS headers (`Access-Control-Allow-Origin`, methods, headers) so the extension could call our API Gateway endpoints.
- We iterated on the JSON shape so:
  - Frontend devs always got either `outputText` (success) or `fallbackText` (graceful failure).
  - No one had to think about Azure error codes or raw responses.
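The CORS piece boils down to answering the browser's preflight before any real work. A minimal sketch, assuming a wide-open `*` origin (a real deployment would likely pin the extension's origin):

```javascript
// Sketch: answer an OPTIONS preflight with CORS headers; return null for
// real requests so the handler continues (reusing the same headers).
function handleCors(event) {
  const headers = {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Methods": "POST,OPTIONS",
    "Access-Control-Allow-Headers": "Content-Type",
  };
  if (event.httpMethod === "OPTIONS") {
    return { statusCode: 204, headers, body: "" };
  }
  return null; // not a preflight
}
```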
Time and coordination
With only a weekend:
- We divided roles: frontend UX, AWS infra, AI backend (Azure + `/llm`), and glue.
- We had to lock API contracts early so the frontend and backend could move in parallel.
- Debugging multi-cloud issues under time pressure forced us to be disciplined about logging, testing, and not over-engineering.
Accomplishments
✅ Built a working browser extension that can grab text from any page and route it through our backend.
✅ Shipped a real multi-cloud backend with:
- AWS API Gateway + Lambda on the front.
- Azure OpenAI `gpt-4.1-mini-2` as the AI engine.

✅ Designed and implemented the `/llm` endpoint with:
- Two modes: simplify and summarize.
- Input validation, length limits, and clean JSON responses.
- A `fallbackText` mechanism so users still get readable text even when AI fails.

✅ Got an end-to-end demo working: select text → simplify/summarize with AI → send to TTS → listen in the browser.

✅ Collaborated as a 4-person team with clear ownership and API contracts, successfully integrating all the moving parts under hackathon time pressure.
What we learned
Accessibility
- It’s not just about screen readers; comprehension, cognitive load, and smaller rewrites matter a lot.

Multi-cloud design
- Using AWS as a public API layer and Azure as the AI engine gave us flexibility and a strong story:
  - The extension calls our `/llm` endpoint.
  - `/llm` calls Azure OpenAI.
  - We can swap models or providers later without changing the extension.

API design & contracts
- Defining a clear request/response format early made teamwork smoother and debugging easier.

Resilience over perfection
- Adding `fallbackText` and structured errors turned flaky AI calls into recoverable UX paths instead of dead ends.
What's next for PatriotRead?
If we had more time, we’d love to:
- Add more accessibility features (font/spacing presets, a dyslexia-friendly font, keyboard shortcuts).
- Cache frequent rewrites to reduce cost and latency.
- Let users save their preferences and favorite voices.
- Explore integrating with learning management systems so students can launch PatriotRead directly from course content.
Members
Anoushka Chavan: Background / Service Worker
Allison Tran: Extension UI & Content Script
David Zhou: AI & Client
Vu Nguyen: AWS Infrastructure | Lambda | Cloudwatch