More and more developers are vibe coding, and the pace of development keeps accelerating. But you can’t vibe code DevOps (you can try, and you’ll probably fail). So what happens when lean teams of vibe coders need to update their deployments and scale their features?
Should they learn DevOps? That’s too slow; it reportedly takes at least six months.
Should they hire a DevOps engineer? Good luck hiring someone fast enough!
Our solution: VibeOps
VibeOps lets developers deploy and scale without any DevOps experience. Integration is easy: just connect Cursor to the VibeOps MCP server.
We want to be part of the Y Combinator Block challenge
Inspiration
We were inspired by the increasing complexity of DevOps workflows and the recent emergence of “vibe coding.” DevOps is notoriously painful to set up, and as low-code / AI-assisted development spreads, fewer developers maintain deep infrastructure expertise. We asked ourselves: what if provisioning AWS, GCP, and on-prem resources felt as simple as chatting with ChatGPT? No YAML spelunking, no dozens-of-tabs terminal juggling. VibeOps was born to collapse that cognitive overhead into a single, conversational experience.
What it does
VibeOps is an LLM-driven Model Context Protocol (MCP) orchestrator that turns plain-English requests into deployable infrastructure, then keeps iterating until everything is live. In one command, you can go from vibe code to production.
Capability-pool routing – For each capability (compute, storage, CDN, observability), VibeOps selects the right cloud-specific MCPs from a registry, respecting the user’s cloud preferences.
Plan generation & execution – It auto-writes Terraform (or Pulumi, Helm, etc.), applies it, fetches temporary credentials from a vault, and streams logs back to the chat.
Ask-back loop – If a detail is missing (“Which VPC should I use?”), VibeOps pauses and prompts the user in the IDE, then resumes.
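As a rough sketch of the capability-pool routing described above (the registry shape and server names here are our illustration, not VibeOps’ actual API):

```python
from dataclasses import dataclass

# Hypothetical registry entry: it advertises WHAT an MCP server can do,
# not HOW it does it. All names below are made up for illustration.
@dataclass
class McpEntry:
    name: str
    cloud: str              # e.g. "aws", "gcp"
    capabilities: frozenset  # e.g. {"compute", "cdn"}

REGISTRY = [
    McpEntry("aws-ec2-mcp", "aws", frozenset({"compute"})),
    McpEntry("gcp-gce-mcp", "gcp", frozenset({"compute"})),
    McpEntry("aws-cloudfront-mcp", "aws", frozenset({"cdn"})),
]

def route(capability: str, preferred_clouds: list) -> McpEntry:
    """Pick the first MCP that covers the capability, honouring the
    user's cloud preference order."""
    for cloud in preferred_clouds:
        for entry in REGISTRY:
            if entry.cloud == cloud and capability in entry.capabilities:
                return entry
    raise LookupError(f"no MCP registered for capability {capability!r}")

print(route("compute", ["gcp", "aws"]).name)  # → gcp-gce-mcp
```

Because preference order is checked before registry order, the same request deterministically routes to a different cloud when the user reorders their preferences.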
Challenges we ran into
One command to do everything is hard. Very hard. It took time, iteration, and sometimes recursive calls within the MCP server to make it happen.
Secure credential bootstrapping – You need cloud keys on day 1, but storing them in code violates least-privilege; Vault/SSO adds setup friction.
Capability-pool schema design – Registry must stay flexible for new MCP types yet strict enough for deterministic matching.
IDE clarification loop – Non-blocking “ask-back” prompts, logs for observability, and resume functionality are deceptively tricky to wire into editors.
Cost & quota guardrails – We almost ran out of free AWS credits.
Demo vs. production UX – Users need live logs, diff previews, and cancel/rollback—features often skipped in quick demos.
Opinionated defaults – Choosing “secure vs. cheap vs. fastest” defaults early sets expectations and is painful to change later.
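The capability-pool schema challenge above (flexible for new MCP types, strict enough for deterministic matching) can be sketched as a small validator: a handful of required, typed fields drive matching, and everything else is open metadata. The field names are our assumption, not the real registry schema:

```python
# Required fields used for deterministic matching; illustrative names only.
REQUIRED = {"name": str, "cloud": str, "capabilities": list}

def validate_entry(entry: dict) -> dict:
    """Split a registry entry into strict matching fields and free-form
    metadata, rejecting entries that can't be matched deterministically."""
    for field, ftype in REQUIRED.items():
        if not isinstance(entry.get(field), ftype):
            raise ValueError(f"registry entry missing/invalid field: {field}")
    # Anything beyond the required fields is free-form metadata, so new
    # MCP types can extend the schema without breaking the matcher.
    extras = {k: v for k, v in entry.items() if k not in REQUIRED}
    return {"matching": {f: entry[f] for f in REQUIRED}, "metadata": extras}

ok = validate_entry({"name": "aws-s3-mcp", "cloud": "aws",
                     "capabilities": ["storage"], "regions": ["us-east-1"]})
```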
Accomplishments that we're proud of
Single-agent design, not “agent soup.” We proved that one conversational orchestrator can replace a patchwork of Terraform, Helm, Kubernetes, Azure, AWS, GCP, Vercel and cost-analysis extensions, cutting IDE clutter instead of adding yet another sidebar.
Abstracted DevOps layer. A prototype “ask once, deploy everywhere” flow shows how VibeOps hides multi-cloud complexity behind one prompt + one config file, giving developers the results (running infra) without exposing the tools (CLI, YAML, Terraform, provider plugins).
What we learned
Architecture beats piling on agents. A single orchestration layer that understands capabilities—not individual tools—keeps IDEs lean and cognitive load low. The MCP-pool pattern shows how one well-scoped “brain” can outperform a fleet of overlapping plug-ins.
Capability-first registries are the unlock. By advertising what an MCP can do (compute, CDN, observability) rather than how it does it, we gained true swap-ability across clouds and vendors. This abstraction is the missing piece that lets Model Context Protocol servers scale beyond demos.
Config-driven preference files trump hard-coding. A lightweight cloud_prefs.yaml gives users cloud and region choice without forcing them to edit IaC. It’s the bridge that makes LLM orchestration personal yet deterministic.
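For illustration, one plausible shape for that cloud_prefs.yaml (the fields below are our guess at what such a file could hold, not a fixed spec):

```yaml
# cloud_prefs.yaml -- illustrative fields, not a fixed spec
clouds:              # preference order used for capability routing
  - gcp
  - aws
regions:
  gcp: us-central1
  aws: us-east-1
defaults:
  profile: secure    # secure | cheap | fastest
```

Because the orchestrator reads this file instead of hard-coding provider choices, the same prompt stays deterministic per user while remaining portable across clouds.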
Guard-railed LLMs are production-ready. Combining strict schemas with runtime policy checks kept “vibe coding” creative but safe, proving that MCPs can deliver real infra without hallucinated settings or runaway spend.
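A minimal sketch of such a runtime policy check, assuming made-up policy values and plan fields; a real deployment would load the policy from config rather than inline constants:

```python
# Guard-rail sketch: validate an LLM-proposed plan against a runtime
# policy before applying it. Values are invented for illustration.
POLICY = {
    "allowed_instance_types": {"t3.micro", "t3.small"},
    "max_instance_count": 3,
}

def check_plan(plan: dict) -> list:
    """Return a list of policy violations; an empty list means the plan
    may proceed to apply."""
    violations = []
    if plan.get("instance_type") not in POLICY["allowed_instance_types"]:
        violations.append("instance_type not in allowlist")
    if plan.get("count", 0) > POLICY["max_instance_count"]:
        violations.append("count exceeds quota guardrail")
    return violations

# Flags both the hallucinated instance type and the runaway count.
print(check_plan({"instance_type": "p4d.24xlarge", "count": 10}))
```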
Observable feedback loops close the trust gap. Streaming logs let you tail a deployment as it runs, so you always know how it is going.
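A toy version of that log-tailing loop (the steps and log lines are simulated here; the real ones would stream from the cloud provider):

```python
def tail_deploy_logs(steps):
    # Yield one human-readable line per deployment step so the chat
    # client can render progress incrementally instead of blocking.
    for step in steps:
        yield f"[deploy] {step} ... done"

for line in tail_deploy_logs(["plan", "apply", "verify"]):
    print(line)
```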
The next frontier is networked MCPs. Our prototype hints at a future where specialized MCPs (security, cost, edge AI) plug into the same capability registry, letting any team compose infrastructure like Lego blocks through one conversational interface. That composability is what makes this architecture the next big step for Model Context Protocol systems.
What's next for VibeOps
Our current demo proves the concept—one prompt can spin up a minimal environment—but it touches only a slice of the capability map. The roadmap is to flesh out every block in the MCP Capability Pools diagram and tighten the orchestration loop.
- Adding support for more cloud providers and IaC tools (e.g., Azure, Pulumi)
- Building a feedback loop to improve prompt-to-code accuracy
- Implementing team collaboration features and version control for generated ops
Built With
- amazon-web-services
- azure
- cursor
- gcp
- mcp
- multi-cloud
- python
- vercel