Inspiration

Small nonprofits, mutual aid groups, and community organizers face a critical barrier: they need a professional web presence to reach volunteers and donors, but lack the budget to hire designers or developers. We've seen countless grassroots organizations stuck with outdated Facebook pages or hand-coded HTML sites that look unprofessional and are impossible to maintain.

We asked ourselves: What if AI could democratize web design? Not by generating generic templates, but by creating truly custom brands that reflect each organization's unique mission and values. We wanted to build something that could understand "We help refugees learn English" and produce colors, fonts, and copy that genuinely feel right for that mission.

The inspiration came from watching a local food bank struggle for weeks to update their website because they couldn't afford their previous developer. We realized that with modern AI models like Google Gemini, we could simulate an entire design agency—brand strategist, copywriter, accessibility expert, and developer—all working together in real time, for under a dollar per site.

What it does

Good Studio is an AI-powered microsite builder that generates professional nonprofit websites in under 2 minutes. Here's the magic:

Step 1: Tell us about your organization (30 seconds)

  • Enter your name and mission statement
  • Upload a logo (optional)
  • Select your target audience (volunteers, donors, community members)
  • Choose your tone (warm, professional, energetic, etc.)

Step 2: Watch AI agents work in real time (5-15 seconds)

Unlike fake progress bars, our system uses Server-Sent Events (SSE) to stream actual updates as four specialized AI agents collaborate:

  1. Brand Agent: Analyzes your mission to generate a 5-color palette, font pairing, and brand voice (e.g., "compassionate, energetic, trustworthy")
  2. Copywriter Agent: Writes a compelling hero headline, subheadline, about section, and call-to-action buttons
  3. Accessibility Agent: Validates WCAG AA contrast ratios, generates descriptive alt-text, and checks reading level (target: 8th grade)
  4. Developer Agent: Creates custom React components with Tailwind CSS—not templates, but actual code
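
The Accessibility Agent's contrast check follows the WCAG 2.1 relative-luminance formula. A minimal sketch (the function names are ours, not the actual agent code):

```python
def _channel(c: int) -> float:
    """Linearize one sRGB channel (0-255) per WCAG 2.1."""
    s = c / 255.0
    return s / 12.92 if s <= 0.03928 else ((s + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color: str) -> float:
    """Relative luminance of a #RRGGBB color."""
    h = hex_color.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _channel(r) + 0.7152 * _channel(g) + 0.0722 * _channel(b)

def contrast_ratio(fg: str, bg: str) -> float:
    """WCAG contrast ratio, from 1.0 (identical) to 21.0 (black on white)."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

def passes_aa(fg: str, bg: str, large_text: bool = False) -> bool:
    """WCAG AA threshold: 4.5:1 for normal text, 3:1 for large text."""
    return contrast_ratio(fg, bg) >= (3.0 if large_text else 4.5)
```

Running this over every generated foreground/background pair is what lets the pipeline reject palettes before a human ever sees them.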

Step 3: Edit and publish

  • Preview your generated site with live branding
  • Use our visual content editor to add text blocks, images, or flex layouts with drag-and-drop
  • Edit brand colors with a visual picker
  • Regenerate content with a different tone if needed
  • Publish to /orgs/your-name
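
Publishing to /orgs/your-name implies deriving a URL-safe slug from the organization's name. A plausible sketch (the helper name is ours):

```python
import re

def org_slug(name: str) -> str:
    """Turn an organization name into a URL-safe path segment."""
    slug = name.lower().strip()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # collapse runs of non-alphanumerics
    return slug.strip("-")                   # no leading/trailing hyphens
```

For example, org_slug("Good Food Bank!") yields "good-food-bank".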

Bonus: Event Management

  • Create public events with capacity limits
  • Accept RSVPs with email validation
  • View attendee lists in your dashboard
  • IP-based rate limiting prevents spam

How we built it

We chose a modern, type-safe stack optimized for AI integration:

Frontend (Next.js 14 + React 18)

  • App Router with Server Components for optimal performance
  • TanStack React Query for declarative data fetching (zero useEffect for data!)
  • Custom SSE hook to consume real-time agent updates via ReadableStream
  • Visual content block editor with nested layout support
  • TypeScript throughout for type safety

Backend (FastAPI + Python 3.13)

  • Four specialized AI agents orchestrated with LangGraph
  • Google Gemini 2.0 Flash Lite for all AI generation (15 RPM free tier)
  • sse-starlette for Server-Sent Events streaming
  • Pydantic v2 models for request/response validation
  • Smart fallback system: if AI fails (timeout, invalid key, rate limit), returns deterministic high-quality data instead of crashing
  • 100+ pytest tests with 90%+ coverage
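
The agent pipeline emits one SSE event per completed agent. A simplified sketch of the event generator (the agent call is a stand-in; in the real app the generator is wrapped in sse-starlette's EventSourceResponse, which handles the wire format):

```python
import asyncio
import json

AGENTS = ("brand", "copywriter", "accessibility", "developer")

async def run_agent(name: str) -> dict:
    """Stand-in for a real LangGraph agent invocation."""
    await asyncio.sleep(0)  # yield control, as a real network call would
    return {"agent": name, "status": "complete"}

async def generation_events(mission: str):
    """Yield one SSE-formatted chunk per finished agent."""
    for name in AGENTS:
        result = await run_agent(name)
        yield f"event: agent_complete\ndata: {json.dumps(result)}\n\n"
    yield "event: done\ndata: {}\n\n"

async def collect(mission: str) -> list[str]:
    return [chunk async for chunk in generation_events(mission)]
```

Because each agent yields as soon as it finishes, the browser sees four distinct agent_complete events rather than one response at the end.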

Database & Auth (Supabase)

  • PostgreSQL with Row Level Security (RLS) for multi-tenant data isolation
  • Email magic link authentication (no passwords!)
  • Storage bucket for logo uploads
  • Service role key for server-side operations

Key Technical Innovations:

  1. Real-time SSE Streaming: Instead of fake progress bars, we stream actual agent completions as they happen. Frontend uses fetch() with ReadableStream to parse SSE format (event: agent_complete\ndata: {...}).

  2. AI Fallback System: Environment variable TEST_AI_FALLBACK=true forces fallback mode for testing without API keys or during rate limits. Fallback data is production-quality, not lorem ipsum.
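
The fallback behavior can be sketched as a wrapper around the model call (the fallback palette and function names here are illustrative):

```python
import os

FALLBACK_BRAND = {
    "palette": ["#1a535c", "#4ecdc4", "#f7fff7", "#ff6b6b", "#ffe66d"],
    "voice": "warm, trustworthy, community-minded",
}

def generate_brand(mission: str, call_model=None) -> dict:
    """Call the model, degrading to curated fallback data instead of erroring.

    TEST_AI_FALLBACK=true skips the model entirely (useful in CI or
    when no API key is configured)."""
    if os.environ.get("TEST_AI_FALLBACK", "").lower() == "true" or call_model is None:
        return FALLBACK_BRAND
    try:
        return call_model(mission)
    except Exception:  # timeout, rate limit, invalid key, malformed JSON...
        return FALLBACK_BRAND
```

The point is that every failure mode converges on usable data, so the frontend never has to render an error state mid-generation.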

Challenges we ran into

1. SSE streaming from FastAPI to Next.js

Getting Server-Sent Events to work across the stack was harder than expected. We had to:

  • Handle both \r\n\r\n (CRLF) and \n\n (LF) line endings
  • Parse SSE format manually (event: name\ndata: json)
  • Deal with browser buffering that delayed events
  • Implement proper error boundaries so UI doesn't crash mid-stream

Solution: Created a robust useSSEGeneration hook with state refs to avoid stale closures, and added extensive logging to debug the parsing.
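
The parsing itself lives in the TypeScript hook; the same algorithm, sketched here in Python for brevity, shows the line-ending handling that tripped us up:

```python
import json

def parse_sse(buffer: str) -> list[tuple[str, dict]]:
    """Split an SSE buffer into (event, data) pairs.

    Events are separated by a blank line; both CRLF and LF endings
    must be accepted, since either can appear on the wire."""
    events = []
    # Normalize CRLF to LF first, then split on the blank-line delimiter.
    for block in buffer.replace("\r\n", "\n").split("\n\n"):
        event, data = "message", None
        for line in block.split("\n"):
            if line.startswith("event:"):
                event = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data = line[len("data:"):].strip()
        if data is not None:
            events.append((event, json.loads(data)))
    return events
```

A real stream parser also has to buffer partial chunks between reads; normalizing line endings up front is what keeps the rest of the logic simple.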

2. Row Level Security (RLS) complexity

Supabase RLS is powerful but tricky. We needed policies like:

  • "Users can read their own orgs" (simple)
  • "Anyone can read public events, but only owners can see RSVP emails" (complex)
  • "Service role can bypass RLS for server operations" (critical)

We spent hours debugging why authenticated users couldn't see their own data; we had overlooked the difference between auth.uid() and auth.users.id.

Solution: Built comprehensive RLS test suite with 30+ assertions. Created helper script scripts/test_rls_simple.sh for quick validation.

Accomplishments that we're proud of

Real-time AI streaming that actually works

We didn't fake it with setTimeout. Our SSE implementation streams genuine agent progress as it happens. Watching the brand agent complete, then the copywriter start, feels magical.

AI that understands mission statements

The Gemini prompts we crafted genuinely extract meaning. Input "We teach coding to homeless youth" and you get vibrant, optimistic colors and empowering copy. Input "We provide end-of-life care" and you get calming, respectful tones. It's not random.

Never-crash architecture

In 100+ tests, the system has never returned a 500 error. AI fails? Fallback. Database timeout? Cached data. Invalid input? Clear validation errors. This is production-ready.

Accessibility from day one

We didn't add accessibility as an afterthought—it's baked into the AI pipeline. Every site gets WCAG-validated contrast, descriptive alt-text, and 8th-grade reading level checks. Small nonprofits shouldn't have to hire specialists for this.

What we learned

Technical Lessons:

  1. SSE is underused: Server-Sent Events are perfect for one-way streaming (AI progress, log tails, notifications), yet most devs default to WebSockets. We'll use SSE for every future real-time feature.

  2. LLM prompt engineering is an art: We iterated 20+ times on the Gemini system prompt. Small changes like "Generate exactly 5 colors in hex format" vs "Create a color palette" made the difference between 90% success rate and 40%.

  3. Fallbacks aren't optional: Users don't care why something failed—they just want it to work. Our AI fallback system saved countless hours of debugging API keys during the hackathon.

  4. React Query > useEffect: We banned useEffect for data fetching and our code is 50% cleaner. useMutation with onSuccess callbacks feels like magic compared to manual state management.

  5. Type safety catches bugs at compile time: TypeScript + Pydantic caught dozens of bugs before runtime. The upfront cost of defining schemas pays off instantly.
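
Lesson 2 in practice: tightening the prompt helps, but validating the model's output and retrying is what closes the gap. A sketch of that pattern (the validator and retry helper are ours):

```python
import re

HEX = re.compile(r"^#[0-9a-fA-F]{6}$")

def valid_palette(colors) -> bool:
    """Enforce 'exactly 5 colors in hex format', as the prompt demands."""
    return (
        isinstance(colors, list)
        and len(colors) == 5
        and all(isinstance(c, str) and HEX.match(c) for c in colors)
    )

def palette_with_retry(call_model, prompt: str, attempts: int = 3):
    """Re-prompt until the output passes validation, then give up."""
    for _ in range(attempts):
        colors = call_model(prompt)
        if valid_palette(colors):
            return colors
    raise ValueError("model never produced a valid 5-color hex palette")
```

Pairing a precise prompt with a strict validator turns "90% success rate" into effectively 100% from the user's point of view.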

Design Lessons:

  1. Show, don't simulate: Real-time progress streaming feels 10x better than fake spinners. Users trusted our AI more because they could see it working.

  2. Accessibility is a feature, not a checkbox: Auto-generating alt-text and validating contrast takes 5 seconds—no excuse to skip it.

  3. Constraints breed creativity: The "fixed template" constraint (hero + about + CTA) forced us to perfect those sections instead of building a mediocre page builder.

Process Lessons:

  1. Kill features ruthlessly: Deprecating image generation at hour 18 was painful but right. Scope creep is the enemy of shipping.

  2. Write tests early: We wrote pytest tests alongside features, not after. Caught bugs in minutes instead of hours.

  3. Document decisions: Every time we made a tradeoff (e.g., visual agent deprecation), we wrote a markdown doc. Future us (and judges) will appreciate it.

What's next for Good Studio

Long-term Vision: We want Good Studio to be the Canva for nonprofits—the tool every community organizer bookmarks. We're exploring:

  • Nonprofit-specific features: Grant calendars, volunteer scheduling, recurring donation management
  • Federated identity: Let organizations bring their own Supabase instance for data sovereignty
  • Open-source community: Accept Gemini/GPT-4/Claude as LLM backends via plugin system
  • Impact measurement: Track real-world outcomes (dollars raised, volunteers recruited, events held)
