Inspiration

The spread of misinformation can lead to increased political and personal conflict.

What it does

Fact-checks information submitted by the user. It has multimodal capability, accepting text, images, and video.

How we built it

We built it with the Pixtral multimodal model and the Brave Search API. We also used an open-source transcription model to extract transcripts from uploaded videos.
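A minimal sketch of this pipeline is shown below. The function names, prompt wording, and stubbed callables are illustrative assumptions, not our exact code; in the real app, `search` would call the Brave Search API and `llm` would send a chat completion to Pixtral via the Mistral API.

```python
def build_fact_check_prompt(claim: str, snippets: list[str]) -> str:
    """Combine the user's claim with web-search snippets into one prompt."""
    evidence = "\n".join(f"- {s}" for s in snippets)
    return (
        "You are a fact checker. Using ONLY the evidence below, decide "
        "whether the claim is supported, refuted, or unverifiable.\n\n"
        f"Claim: {claim}\n\nEvidence:\n{evidence}\n\n"
        "Answer with one word (supported/refuted/unverifiable) and a short reason."
    )

def fact_check(claim: str, search, llm) -> str:
    """search: claim -> list of snippet strings; llm: prompt -> model reply."""
    snippets = search(claim)          # hypothetical: Brave Search wrapper
    return llm(build_fact_check_prompt(claim, snippets))  # hypothetical: Pixtral call

# Example with stubbed search/LLM callables so the sketch runs offline:
reply = fact_check(
    "The Eiffel Tower is in Berlin.",
    search=lambda c: ["The Eiffel Tower is located in Paris, France."],
    llm=lambda p: "refuted",  # stand-in for a Pixtral chat completion
)
print(reply)
```

Keeping the search and model behind plain callables like this also makes the pipeline easy to test without network access.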

Challenges we ran into

Transcribing videos, mitigating search-engine bias, and getting consistently formatted output from Pixtral.

Accomplishments that we're proud of

We built a functional prototype that determines whether the internet considers a claim fact or fake.

What we learned

We learned how to programmatically interact with multimodal models and how to enrich an LLM's knowledge with additional context, such as up-to-date search results.

What's next for FactstralAI

  • Integrate with social media sites, possibly as a browser extension, for seamless use.
  • Cut down on input tokens by fine-tuning the model to generate search queries from a claim (improving efficiency).
  • Explore the bias and reliability of individual sources and convey this information to the user.
  • Build a more intuitive UI for smoother interaction.

Built With

  • brave
  • mistral
  • pixtral
  • python
  • streamlit