Inspiration
RAGguard tackles the problem of ever-growing misinformation spread across the Web. Misinformation influences public opinion, decision-making, and trust in online content. With the rise of generative AI making misinformation easier to produce and spread, we wanted to build a tool that helps users efficiently verify the credibility of content, be it text, links, or images.
What it does
RAGguard is an AI-powered application that checks text, images, and links for potential misinformation. It leverages Retrieval-Augmented Generation (RAG) techniques to cross-reference input text with reliable sources, flag inconsistencies, and provide contextual evidence to support or refute claims. Processing follows four stages: first, the app takes a prompt, which can be an image, a file, text, or a link, embeds it in a form the Claude API can consume, and stores it in the database. Second, it fetches the input and generates potential vulnerabilities in the data through a moderation layer. The third layer checks data consistency by asking in-depth questions about the statement and the violations found earlier. The fourth layer generates a general summary of the data, still taking into account all the files presented before. Finally, the RAG model returns a concise but factually supported summary of the data's consistency and validity and sends it to the frontend through an API layer.
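The four stages above could be sketched roughly as follows. This is a minimal illustrative sketch, not the actual implementation: the function names, the in-memory store standing in for the database, and the stubbed `call_model` (a placeholder for the real Claude API call) are all assumptions.

```python
def embed(prompt: str) -> list[float]:
    # Stand-in embedding: a tiny hashed bag-of-words vector.
    # The real app would embed via a model before storing.
    vec = [0.0] * 8
    for token in prompt.lower().split():
        vec[hash(token) % 8] += 1.0
    return vec

def call_model(instruction: str, context: str) -> str:
    # Placeholder for the Claude API call used in the real app.
    return f"[model response to: {instruction}]"

def check_claim(prompt: str, store: dict) -> str:
    # Stage 1: embed the input and persist it.
    store[prompt] = embed(prompt)
    # Stage 2: moderation layer flags potential vulnerabilities.
    issues = call_model("list potential misinformation risks", prompt)
    # Stage 3: consistency check via in-depth questions about the
    # statement and the previously found violations.
    consistency = call_model("probe consistency given these issues", issues)
    # Stage 4: concise, evidence-backed summary over everything seen so far.
    return call_model("summarize validity", prompt + issues + consistency)

store: dict = {}
report = check_claim("The moon is made of cheese.", store)
```

In the real pipeline each `call_model` step would carry the accumulated context forward to the Claude API, and the final summary would be returned to the React frontend through the API layer.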
How we built it
We developed a web application using React for the front end and Flask for the back end. With Claude 3.5 Sonnet, the OpenAI API, and prompt engineering, we analyzed different kinds of data for validity. The application also provides similarity scoring, and runs on an Apache Kafka server.
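The similarity scoring mentioned above could be approximated as cosine similarity between simple term-frequency vectors. This is a hedged stand-in for whatever embedding-based comparison the app actually uses; the naive whitespace tokenization is an assumption for illustration.

```python
import math
from collections import Counter

def similarity(a: str, b: str) -> float:
    """Cosine similarity between term-frequency vectors of two texts.

    A stand-in for embedding-based similarity; tokenization is naive.
    Returns a score in [0, 1], where 1 means identical token counts.
    """
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

score = similarity("the earth is flat", "the earth is round")  # 3 of 4 tokens shared
```

A score near 1 would suggest the input closely matches a retrieved source, while a low score would flag the claim for closer scrutiny by the moderation layer.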
Challenges we ran into
- We had to properly implement prompt engineering to get the desired analysis from the AI, which took more time than initially planned.
- We used a MongoDB database and experienced some connection issues at first.
- Finally, we had to resolve problems and bugs related to integrating everything together (front end, back end, scripts).
Accomplishments that we're proud of
- Successfully integrated a RAG-based approach to verify data, including comparison against previous sources.
- Developed an intuitive UI that allows users to quickly check source credibility.
- Implemented a confidence scoring system to help users interpret verification results.
- Learned techniques for staying safe and informed in a world of uncertainty.
- Finally, we're proud of maintaining a good team atmosphere and collaborative environment over the past 24 hours.
What we learned
- Gained knowledge of data pipelining to properly format and feed data into the AI model.
- Learned prompt engineering and how to work with AI APIs.
- Improved Web Development skills.
- Honed our teamwork and collaboration skills.
What's next for RAGguard
It may be possible to expand our data sources: integrating more fact-checking platforms, government databases, and academic sources would improve accuracy. An advanced version of the application might also proactively detect and flag trending misinformation in real time. We also aim to refine our AI model, enable crowdsourced validation, and collaborate with media and research institutions to scale our impact and help users stay informed and safe.