The Inspiration
It all started with a tweet. I was scrolling through my feed when I saw a post from a solo developer claiming they were making over a million dollars a year with a personalized AI-photo service. I stopped scrolling immediately. The industry heavyweights like Midjourney were focused on general-purpose image generation, but the niche of creating custom, fine-tuned models for individuals was wide open. I saw a clear gap in the market and a proven business model. My first thought was, “Why not me?” If he could do it, I could do it too. I had been waiting for the right AI idea, and this felt like it.
How I Built It: The Tech Stack
I knew I needed a modern, robust, and scalable stack to pull this off. I chose technologies that would allow for rapid development without sacrificing quality or performance.
| Layer | Tools & Libraries | Notes |
|---|---|---|
| Frontend | Next.js 14 (App Router), Tailwind CSS, shadcn/ui | Rapid UI development |
| | Zustand | Lightweight state management |
| | React Dropzone | Seamless file uploads |
| Backend | Node.js & TypeScript via Next.js API routes | Unified front- and back-end |
| Database | PostgreSQL + Prisma ORM | Type-safe queries & migrations |
| AI & Cloud Services | Replicate (LoRA training & inference pipeline), Cloudflare R2 (zipped training sets & generated images), HuggingFace (model hosting post-training) | Core of the product |
| Payments | Stripe | Subscriptions & billing |
| Authentication | NextAuth.js | Secure, full-featured auth |
| Testing | Jest + React Testing Library | Unit & integration tests |
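To show how the database layer ties the pieces together, here is a minimal sketch of a Prisma model for tracking a training job. The model and field names are hypothetical, not taken from the actual codebase:

```prisma
// Hypothetical model: one row per LoRA training job.
model TrainingJob {
  id          String   @id @default(cuid())
  userId      String
  status      String   @default("queued") // queued | running | done | error
  replicateId String?  // Replicate training id, set once the job is submitted
  zipUrl      String?  // Cloudflare R2 URL of the zipped training set
  modelUrl    String?  // HuggingFace URL of the trained weights
  createdAt   DateTime @default(now())
  updatedAt   DateTime @updatedAt
}
```

Keeping the external IDs and URLs nullable lets the row be created up front and filled in as each pipeline stage completes.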
Challenges I Faced
My biggest hurdle was a misleading blog post. The original plan:
- Train LoRA models on Replicate.
- Upload fine-tuned weights to a HuggingFace repo.
- Use Together.ai for fast, cheap inference.
After a day or two of work I discovered Together.ai’s support for custom HuggingFace LoRAs simply didn’t exist. I had to scrap the flow and re-architect the AI pipeline, ultimately keeping both training and inference on Replicate. This required a major rewrite of my service classes and logic—a tough reminder to verify marketing claims before betting on them.
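With both training and inference on Replicate, job tracking reduces to mapping Replicate's status vocabulary onto the app's own job states. A minimal TypeScript sketch, assuming Replicate's documented status values; the `JobState` names are hypothetical:

```typescript
// Replicate reports the same status vocabulary for trainings and predictions.
type ReplicateStatus = "starting" | "processing" | "succeeded" | "failed" | "canceled";

// Hypothetical internal states the UI can render directly.
type JobState = "queued" | "running" | "done" | "error";

// Map a Replicate status onto an internal job state.
function toJobState(status: ReplicateStatus): JobState {
  switch (status) {
    case "starting":
      return "queued";
    case "processing":
      return "running";
    case "succeeded":
      return "done";
    case "failed":
    case "canceled":
      return "error";
  }
}
```

Collapsing `failed` and `canceled` into one terminal `error` state keeps the service classes simple: anything non-terminal keeps polling, anything terminal updates the database once.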
What I Learned
- Infrastructure matters. Manually configuring Nginx reverse proxies across services was painful. I’ll invest in a dedicated configuration tool or Infrastructure-as-Code (e.g., Terraform) moving forward. I had always wanted to host models myself instead of relying on an external service, but my server doesn’t have a dedicated GPU, so that wasn’t an option. Now I’m happy to offload whatever I can, as long as I can afford it, and that constraint won’t stop me from building ambitious apps.
- Orchestrating async workflows is tricky. The training pipeline—validating & zipping images, uploading to R2, calling Replicate, polling status, updating the DB—forced me to design robust state management and error handling so users always know where their job stands.
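The polling step in that pipeline benefits from capped exponential backoff so status checks don't hammer the API. A small sketch of the idea; the function and parameter names are hypothetical, not from the actual service classes:

```typescript
// Delay (ms) before the nth poll attempt: exponential backoff
// starting at baseMs, doubling each attempt, capped at maxMs.
function backoffDelay(attempt: number, baseMs = 2000, maxMs = 60000): number {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

// Poll a status-returning function until it reports a terminal state,
// giving up after maxAttempts so a stuck job can't poll forever.
async function pollUntilDone(
  check: () => Promise<"pending" | "done" | "error">,
  maxAttempts = 30,
): Promise<"done" | "error"> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const status = await check();
    if (status !== "pending") return status;
    await new Promise((resolve) => setTimeout(resolve, backoffDelay(attempt)));
  }
  return "error"; // treat exhausted attempts as failure, then surface it to the user
}
```

The hard cap on both delay and attempt count is what makes the "users always know where their job stands" guarantee possible: every job reaches a terminal state eventually.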
- Validate early, refactor fast. Missteps in third-party tooling can cost precious time; quick pivots keep momentum alive.
Built With
- cloudflare
- nginx
- node.js
- postgresql
- prisma
- replicate
- typescript
- zustand