Inspiration
DubOps was inspired by the frustrations programmers face when deploying applications. Writing code for web apps has become much easier with generative AI, but deploying them remains a slow, manual, and error-prone process. Many existing tools still require developers to manually write configuration files or deeply understand deployment infrastructure. DubOps was created to bridge that gap, providing automation for deployment in the same way AI automates coding, making it faster, more reliable, and accessible to developers of all levels.
What it does
DubOps provides a three-step process to seamlessly deploy any application:
- Analyze – DubOps examines your codebase using AI, inspecting directories and files to determine the most likely configuration fields for deployment. The LLM reasons about project structure and infers settings even for unconventional or complex repositories.
- Configure – You review the AI’s analysis and provide any missing inputs, double-checking fields required for proper deployment. This ensures both accuracy and flexibility.
- Generate – DubOps produces all the necessary infrastructure as code, including Terraform and Dockerfiles, and automatically generates a pull request in your GitHub repository. You can review the PR, accept it, and follow the instructions to deploy with minimal manual work.
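The three steps above chain together as a simple pipeline. A minimal sketch of that flow — every function name here is an illustrative stand-in, not DubOps's actual API:

```python
# Illustrative Analyze -> Configure -> Generate pipeline.
# All names and return shapes are hypothetical stand-ins for DubOps internals.

def analyze(repo_path: str) -> dict:
    """Infer likely deployment settings from the repo (LLM-backed in DubOps)."""
    # Stubbed here; the real step reasons over the repository structure.
    return {"language": "python", "framework": "flask", "port": None}

def configure(inferred: dict, user_overrides: dict) -> dict:
    """Let the user fill gaps in or override the AI's inferences."""
    merged = {**inferred, **user_overrides}
    missing = [key for key, value in merged.items() if value is None]
    if missing:
        raise ValueError(f"Missing required fields: {missing}")
    return merged

def generate(config: dict) -> dict:
    """Render infrastructure-as-code files from the final config."""
    dockerfile = f"FROM python:3.12-slim\nEXPOSE {config['port']}\n"
    return {"Dockerfile": dockerfile}

config = configure(analyze("."), {"port": 8000})
files = generate(config)
```

The key property is that the user only touches the middle step: analysis and generation stay fully automated.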
This workflow lets developers move from code to cloud deployment quickly and confidently, eliminating tedious manual configuration steps.
How we built it
DubOps is powered by a Python/Flask backend and a TypeScript/Next.js frontend. For AI integration, we used AWS Bedrock as our gateway to large language models. GitHub integration leverages the GitHub API and GitPython, enabling the system to analyze repositories and generate pull requests automatically.
The backend performs repository analysis and prepares structured JSON outputs for the LLM. The LLM then reasons through directory and file structures to infer technology stacks, required configurations, and deployment settings. These outputs feed into automated generation of Dockerfiles and Terraform scripts.
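The structure-only analysis can be sketched as follows: walk the repository, collect file paths (never file contents), and serialize them as compact JSON for the prompt. This is a simplified illustration, not DubOps's exact implementation:

```python
import json
import os

# Directories that add noise without signaling the tech stack.
SKIP_DIRS = {".git", "node_modules", "__pycache__", ".venv"}

def summarize_repo(root: str, max_files: int = 500) -> str:
    """Return a compact JSON listing of the repo's file tree.

    Only paths are collected, never file contents, which keeps the
    prompt well within the LLM's context window even for big repos.
    """
    paths = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune noisy directories in place so os.walk skips them.
        dirnames[:] = [d for d in dirnames if d not in SKIP_DIRS]
        for name in filenames:
            rel = os.path.relpath(os.path.join(dirpath, name), root)
            paths.append(rel.replace(os.sep, "/"))
            if len(paths) >= max_files:
                return json.dumps({"files": sorted(paths), "truncated": True})
    return json.dumps({"files": sorted(paths), "truncated": False})
```

A summary like this is then embedded in the Bedrock prompt alongside the output schema the model must follow.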
Challenges we ran into
One of the main challenges was determining how to analyze repositories and extract the necessary information for automated deployment. Our initial approach relied on rule-based heuristics, assuming certain files implied specific languages or frameworks, such as requirements.txt indicating Python. This method was unreliable, especially for more complex or non-standard repositories.
Integrating an LLM improved reasoning capabilities, but analyzing full file content proved too demanding for large codebases: it exceeded the context window and produced inconsistent outputs. Another difficulty was getting consistently valid, well-structured JSON out of the LLM, since its outputs are non-deterministic by nature.
We iterated on prompt design and schema structure until we landed on a reliable solution: have the LLM analyze only the file and directory structure, and require its output to conform strictly to a JSON schema. This combination produced accurate and usable outputs for generating infrastructure code.
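Enforcing the schema in practice comes down to validate-and-retry: check the model's reply against the required fields, and re-prompt with the error if it fails. A simplified sketch of that loop (the schema and the corrective prompt wording are illustrative):

```python
import json

# Minimal stand-in schema: field name -> expected type.
REQUIRED_FIELDS = {"language": str, "framework": str, "entrypoint": str}

def parse_analysis(raw: str) -> dict:
    """Validate the LLM's reply against the required-field schema.

    Raises ValueError on malformed JSON or missing/mistyped fields so
    the caller can retry the request with a corrective prompt.
    """
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"not valid JSON: {exc}") from exc
    for field, expected in REQUIRED_FIELDS.items():
        if not isinstance(data.get(field), expected):
            raise ValueError(f"missing or mistyped field: {field}")
    return data

def analyze_with_retries(call_llm, prompt: str, attempts: int = 3) -> dict:
    """Call the model, re-prompting on invalid output up to `attempts` times."""
    correction = ""
    for _ in range(attempts):
        try:
            return parse_analysis(call_llm(prompt + correction))
        except ValueError as exc:
            correction = f"\nYour last reply was invalid ({exc}); return only JSON."
    raise RuntimeError("LLM never produced schema-compliant JSON")
```

Feeding the validation error back into the next prompt is what makes the loop converge quickly in practice.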
Accomplishments that we're proud of
- Intuitive user experience: Our website has a sleek design, with clear steps that guide users seamlessly from analysis to deployment.
- Expertise in generative AI: We successfully engineered prompts using few-shot examples and chain-of-thought reasoning to guide the LLM in making accurate, context-aware decisions.
- API integration achievements: We worked with GitHub and GitPython for the first time and built an agent capable of automatically creating pull requests with correct infrastructure changes. This level of automation exceeded our initial expectations and demonstrates the potential of AI-driven DevOps.
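After the generated files are committed to a branch (via GitPython), opening the pull request is a single call to GitHub's REST endpoint `POST /repos/{owner}/{repo}/pulls`. A sketch that builds, but does not send, that request — the base branch and parameter values are illustrative:

```python
import json
import urllib.request

def build_pr_request(owner: str, repo: str, branch: str, token: str,
                     title: str, body: str) -> urllib.request.Request:
    """Build (but do not send) a GitHub create-pull-request call.

    The caller would pass the returned Request to urllib.request.urlopen;
    it is kept unsent here so the construction logic stays testable.
    """
    payload = {"title": title, "head": branch, "base": "main", "body": body}
    return urllib.request.Request(
        url=f"https://api.github.com/repos/{owner}/{repo}/pulls",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        method="POST",
    )
```

In DubOps the PR body carries the generated deployment instructions, so accepting the PR is all the user has left to do.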
What we learned
We learned that simplicity is powerful. Concise prompts allowed the LLM to focus on the most relevant information, improving output quality and reducing errors. Similarly, a clean, intuitive website design made it easy for users to deploy without unnecessary complexity or cognitive load.
We also learned the critical importance of communication and coordination. With time constraints and multiple contributors working on different components, continuous discussion was necessary to avoid conflicts and ensure that work was integrated smoothly. This hands-on collaboration highlighted the balance between autonomy and teamwork in fast-paced development projects.
What's next for DubOps
Currently, DubOps performs best with smaller repositories, such as portfolio projects. Our future goals include:
- Support for larger and more complex repositories: Implementing a two-pass analysis system where the initial JSON output is verified and refined by a second LLM to ensure accuracy and completeness.
- Live infrastructure updates: DubOps will detect changes such as the addition of new microservices, re-run analysis, compare new configurations with existing infrastructure, and automatically generate pull requests with required updates. This allows infrastructure to self-heal and adapt to evolving codebases, ensuring deployments remain synchronized with development.
- Expanded automation capabilities: Beyond Terraform and Dockerfiles, we aim to integrate additional deployment targets and cloud providers, enabling seamless deployment pipelines for diverse project types.
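The live-update idea above reduces to a diff between the freshly inferred configuration and the one already deployed, computed per service. A simplified sketch of that comparison (the per-service dict structure is an assumption for illustration):

```python
def diff_configs(deployed: dict, inferred: dict) -> dict:
    """Compare deployed vs. newly inferred config, keyed by service name.

    Returns which services were added, removed, or changed; any non-empty
    bucket would trigger generation of an update pull request.
    """
    added = sorted(set(inferred) - set(deployed))
    removed = sorted(set(deployed) - set(inferred))
    changed = sorted(
        name for name in set(deployed) & set(inferred)
        if deployed[name] != inferred[name]
    )
    return {"added": added, "removed": removed, "changed": changed}

def needs_update(deployed: dict, inferred: dict) -> bool:
    """True when any service drifted and infrastructure must be regenerated."""
    return any(diff_configs(deployed, inferred).values())
```

Re-running the same repository analysis on every push and gating PR creation on `needs_update` is what would keep infrastructure synchronized with the codebase.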
These enhancements will make DubOps not just a deployment tool, but a fully automated, intelligent DevOps assistant.
Built With
- amazon-web-services
- bedrock
- flask
- nextjs
- python
- typescript

