Inspiration

As developers, we know that debugging is one of the most important skills in programming, and it matters even more when AI is involved. Code written by AI is not perfect, and with this CLI game we hope to train programmers to debug AI-written code.

What it does

Cope-Pilot is a CLI game that sets programmers coding challenges ChatGPT has already attempted. Given the AI-generated solution, they must debug as many challenges as possible in a time trial, earning points towards their score on the leaderboard.
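The write-up doesn't give the actual scoring formula, but a time-trial scheme like the one described could look something like this hypothetical sketch (the point values and time bonus are assumptions, not the team's real numbers):

```python
# Hypothetical time-trial scoring: a flat number of points per challenge
# debugged, plus a bonus for time left on the clock. The constants here
# are illustrative only; the real formula isn't stated in the write-up.
def score_run(solved: int, seconds_left: float,
              base: int = 100, time_bonus_rate: float = 2.0) -> float:
    """Return the points for one run of the time trial."""
    return solved * base + seconds_left * time_bonus_rate
```

A design like this rewards fixing more challenges first and speed second, which matches the "debug as many challenges as possible in a time trial" framing.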

How we built it

The CLI is built with Typer and the scoring server's REST API with FastAPI, both Python-centred packages. For simplicity and quick prototyping, we used Redis as the database, running inside a Docker container.

Challenges we ran into

  • GPT prompt design - for many problems (particularly easier ones), ChatGPT gives a complete, working solution, so we had to alter the prompt to ensure bugs were present. However, simple syntax errors, while frustrating, are easy to fix, so we also had to steer ChatGPT towards leaving logical errors that make for teachable challenges.
  • Cross-platform automated testing - hard to get right because we had to juggle multiple locations across the file system.
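The prompt-design challenge can be made concrete with a hypothetical prompt; the exact wording the team used is not given in the write-up, so everything below is an illustrative assumption:

```python
# Hypothetical prompt asking for a solution that runs but hides one
# logical bug - the kind of steering the write-up describes.
PROMPT_TEMPLATE = (
    "Solve the following coding challenge in Python.\n"
    "Your solution must be syntactically valid and must run without errors,\n"
    "but it must contain exactly one subtle logical bug that produces wrong\n"
    "output on some inputs. Do not point out or comment on the bug.\n\n"
    "Challenge:\n{challenge}\n"
)


def build_buggy_prompt(challenge: str) -> str:
    """Fill the template with a specific challenge description."""
    return PROMPT_TEMPLATE.format(challenge=challenge)
```

Explicitly forbidding syntax errors and self-commentary is one way to push the model towards the quieter logical bugs the game needs.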

Accomplishments that we're proud of

First and foremost, we are incredibly proud to have a working product after 24 hours, but here are some other memorable moments:

  • A guided tutorial that lets users learn the commands before competing against others.
  • Other hackers have tried out the game and added to the leaderboard.

What's next for Cope-Pilot

Some ideas we discussed to extend the game:

  • Difficulty levels, each with its own leaderboard.
  • An app version to make the game accessible on more devices.
  • AI-generation of unlimited coding challenges.

Built With

python, typer, fastapi, redis, docker
