Inspiration: We wanted to make something completely unnecessary. ChatGPT was too helpful, so we made it worse, on purpose. Thus, CatPhisher was born: the AI that confidently gets everything wrong.

What it does: CatPhisher is a one-to-one ChatGPT look-alike that provides intentionally bad answers. It’s designed to sound helpful while being completely useless, misleading, or just plain wrong.

How we built it: We used the Gemini API, wrapped it in a frontend chat interface, and added logic to subtly (or not-so-subtly) corrupt the output. On top of that, a database stores chat history and user accounts. Think of it as “misinformation-as-a-service.”
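The write-up doesn't show the actual corruption logic, but a minimal sketch of what "subtly (or not-so-subtly) corrupting" a model's answer could look like is below. The function name, the levels, and the word-swapping approach are all our assumptions for illustration, not the team's real code:

```python
import random

def corrupt(answer: str, level: str = "mild") -> str:
    """Degrade a model's answer before showing it to the user.

    Hypothetical sketch only; the real CatPhisher pipeline is not
    described in the write-up. The two levels loosely mirror the
    planned "Mildly Misleading" / "Absolute Nonsense" settings.
    """
    words = answer.split()
    if len(words) < 2:
        return answer
    rng = random.Random()
    if level == "mild":
        # Swap a few adjacent words so the answer reads slightly off.
        for _ in range(max(1, len(words) // 10)):
            i = rng.randrange(len(words) - 1)
            words[i], words[i + 1] = words[i + 1], words[i]
    else:
        # "Absolute Nonsense": shuffle the whole answer.
        rng.shuffle(words)
    return " ".join(words)
```

In a real deployment this would sit between the Gemini API response and the chat frontend, so the UI stays faithful to ChatGPT while the content quietly degrades.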

Challenges we ran into: Trying to recreate the exact ChatGPT user interface one-to-one was harder than expected. Matching the look, feel, and behavior while intentionally breaking the logic took more effort than we thought.

Accomplishments that we're proud of: It works. It answers. It's completely unreliable. Mission accomplished.

What we learned: Bad ideas can still be good projects. Also, ChatGPT is disturbingly hard to sabotage. It really wants to be helpful, which is exactly what we didn’t want.

What's next for CatPhisher: We might add settings like "Mildly Misleading" vs "Absolute Nonsense." Maybe even a leaderboard for users who follow CatPhisher’s advice and survive.
