Inspiration

With the rise of generative AI and easy-to-use editing tools, it's becoming harder for people to trust the photos and videos they see online. Critically, we felt that people's privacy was being invaded when their likenesses were used in video edits (think fake Facebook ads featuring ex-PM Lee, or AI-edited videos trained on people's images and videos). We wanted to create something lightweight and mobile-first that gives everyday users more transparency about how their media has been altered, without needing expensive desktop tools or expert knowledge. Our goal was for posted media to indicate whether it was shot on a camera (or edited/generated), and how likely it is to be misleading based on the edits present. This is a whole new take on media authenticity: one backed by hardware rather than visual inspection.

What it does

verisnap is a camera app that also provides some common editing functions. When a picture is saved (with or without edits), a trained AI model automatically detects potentially misleading manipulations and gives the user a risk score with plain-language reasons. The app also verifies file integrity, confirms that the image was captured by camera hardware (rather than, say, generated by a chat AI), and lets users strip sensitive metadata before sharing, to protect their privacy.
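To make the integrity and metadata pieces concrete, here is a minimal sketch in Python (the backend's language). It assumes JPEG input; the function names and the choice to drop APP1 segments (where Exif/XMP metadata lives) are ours for illustration, not the app's actual implementation.

```python
import hashlib

def strip_sensitive_metadata(jpeg: bytes) -> bytes:
    """Remove APP1 (Exif/XMP) segments from a JPEG byte stream."""
    if jpeg[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG")
    out = bytearray(b"\xff\xd8")  # keep the SOI marker
    i = 2
    while i + 4 <= len(jpeg) and jpeg[i] == 0xFF:
        marker = jpeg[i + 1]
        if marker == 0xDA:          # SOS: compressed image data follows, copy the rest
            out += jpeg[i:]
            return bytes(out)
        length = int.from_bytes(jpeg[i + 2:i + 4], "big")
        if marker != 0xE1:          # drop APP1 segments, keep everything else
            out += jpeg[i:i + 2 + length]
        i += 2 + length
    return bytes(out)

def integrity_digest(data: bytes) -> str:
    """SHA-256 digest recorded at save time and re-checked later."""
    return hashlib.sha256(data).hexdigest()
```

Re-computing `integrity_digest` on a shared file and comparing it with the digest stored at capture time confirms the bytes have not been altered along the way.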

How we built it

Frontend / App side: Built with Kotlin and React Native for the cross-platform UI.
Backend: Implemented in Python for the AI features and file-integrity verification, and in Kotlin for C2PA generation and media verification.
Integration: The frontend communicates with the backend via REST APIs, which return structured JSON.
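For illustration, the structured JSON passed back to the frontend might look like the sketch below. The field names and values here are hypothetical, not the app's actual schema.

```python
import json

# Hypothetical response body from the Python backend's analysis endpoint.
response_body = json.dumps({
    "risk_score": 0.72,          # 0.0 (benign) .. 1.0 (likely misleading)
    "reasons": [
        "Face region was retouched",
        "Background objects removed",
    ],
    "hardware_attested": True,   # capture was signed by a device-held key
    "integrity_verified": True,  # stored digest matches the current file
})

# The React Native frontend would parse and render something like this:
result = json.loads(response_body)
summary = f"Risk {result['risk_score']:.0%}: {'; '.join(result['reasons'])}"
```

Keeping the contract to a small, flat JSON object like this makes it easy for the mobile UI to render a score plus plain-language reasons without extra round trips.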

Challenges we ran into

Camera side: Due to the short duration of the hackathon, we were not able to fully implement every editing feature we wanted.
AI side: Because of the unique nature of the data we needed, no existing dataset fit our functionality. We had to train our own model, and since such data could not feasibly be produced by hand, testing had to be done on AI-generated data. This method likely carries some accuracy concerns.

Accomplishments that we're proud of

Achieving a 70% model accuracy rate based on 50 tests.
Implementing the camera functions with no prior experience in mobile app development.
Implementing a rather novel hardware attestation method for media using Android Keystore.

What we learned

Feature prioritisation is key when facing limited time.

What's next for Verisign

We want to fully implement all of our originally planned editing features and generate more data to train our custom model on, to improve its accuracy. We also hope to update the app's UI to give Verisign a fresher look!
