Roshan (Adj.)

Meaning: bright, luminous; clear and free of confusion
Roots: Persian

Inspiration

The idea for Roshan was inspired by the recent protests in Iran and the overwhelming amount of propaganda that appears in state-controlled media. As an Iranian, I grew up understanding that state news sources often contain manipulation and political messaging. However, for many people outside the country who trust their own state media systems, these signals are much harder to recognize. Seeing international audiences unintentionally repeat or believe narratives from propaganda sources was frustrating and concerning. We wanted to build a tool that helps people notice these patterns. Roshan does not tell users what to believe. Instead, it highlights potential manipulation techniques and encourages readers to apply their own critical thinking before trusting the information they consume.

What it does

Roshan is a Chrome extension that helps readers critically evaluate news articles by highlighting potential propaganda signals directly on the page. Instead of deciding what is true or false, Roshan surfaces indicators that suggest possible manipulation or weak evidence.

The extension highlights sentences that contain patterns such as:

  • Emotional framing
  • Absolutist language
  • Vague or unattributed sources
  • Propaganda-style persuasive language
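As a rough illustration, signals like these can be approximated with simple keyword heuristics (a minimal sketch; the keyword lists and category names below are invented examples, not Roshan's actual rules, which come from a trained classifier):

```python
import re

# Hypothetical keyword heuristics, one per signal category.
# Roshan's real detection uses a trained model; these regexes only
# illustrate the kinds of surface patterns each category covers.
PATTERNS = {
    "emotional_framing": re.compile(r"\b(outrage|horrif\w+|shameful|disgust\w+)\b", re.I),
    "absolutist_language": re.compile(r"\b(always|never|everyone|no one|undeniabl\w+)\b", re.I),
    "unclear_sources": re.compile(r"\b(sources say|it is reported|experts claim|some say)\b", re.I),
    "persuasive_language": re.compile(r"\b(wake up|the truth is|they don't want you to know)\b", re.I),
}

def flag_sentence(sentence: str) -> list[str]:
    """Return the category names whose pattern matches the sentence."""
    return [name for name, pat in PATTERNS.items() if pat.search(sentence)]
```

For example, `flag_sentence("Everyone knows this.")` would flag absolutist language, while a neutral factual sentence would return an empty list.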

When users hover over highlighted text, they can see which category triggered the flag. This gives readers more context and encourages them to question the language being used. Roshan also integrates an OpenAI-powered assistant that allows users to ask questions about highlighted text, discuss potential biases, and explore alternative interpretations of the article.
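The assistant integration can be sketched roughly as follows. Only the overall shape, sending the flagged sentence and its category to a chat model, comes from the description above; `build_messages` is a hypothetical helper and the prompt wording and model choice are our assumptions:

```python
# Sketch of handing flagged text to an OpenAI chat model.
# build_messages is a hypothetical helper; the prompt text is illustrative.

def build_messages(flagged_text: str, category: str, question: str) -> list[dict]:
    """Assemble a chat payload asking the model to discuss a flagged sentence."""
    return [
        {
            "role": "system",
            "content": (
                "You help readers think critically about news language. "
                "Discuss possible bias without declaring claims true or false."
            ),
        },
        {
            "role": "user",
            "content": (
                f'This sentence was flagged for "{category}":\n'
                f'"{flagged_text}"\n\n{question}'
            ),
        },
    ]

# The actual call (requires the `openai` package and an API key):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4o-mini",  # model choice is an assumption
#     messages=build_messages(text, "emotional_framing", "Why might this be manipulative?"),
# )
```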

See more information at the extension website: https://roshansite.vercel.app/

How we built it

Roshan consists of a Chrome extension frontend and a FastAPI backend. The Chrome extension extracts article text, splits it into sentences, and sends them to the backend for analysis. The backend processes each sentence and returns labels that the extension uses to highlight suspicious language patterns directly on the page.
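The round trip between extension and backend can be sketched in plain Python (the field names and the naive regex splitter are illustrative assumptions; in the real system the splitting happens in the extension's JavaScript and the labels come from a transformer model):

```python
import json
import re

def split_sentences(article_text: str) -> list[str]:
    """Naive sentence splitter: break on ., !, or ? followed by whitespace.
    Stands in for the extension's client-side splitting."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", article_text) if s.strip()]

def make_request_body(article_text: str) -> str:
    """JSON body the extension might POST to the backend for analysis."""
    return json.dumps({"sentences": split_sentences(article_text)})

def make_response_body(labels: list[list[str]]) -> str:
    """JSON the backend might return: one list of category labels per sentence."""
    return json.dumps({"labels": labels})
```

The extension would then walk the returned label lists in parallel with its sentence list and wrap each flagged sentence in a highlight element on the page.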

Originally, we planned to use existing bias-detection libraries such as Dbias and Unbias. However, these tools were outdated and difficult to install in our environment, so we decided to build our own model.

We trained a transformer-based classification model on a publicly available dataset. Since the dataset did not perfectly match our needs, we cleaned and restructured it, mapped its labels onto our four categories, and used LLMs to generate additional training examples. This let us quickly assemble a dataset suited to the propaganda signals we wanted to highlight.
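The label-remapping step can be sketched as a simple lookup (the left-hand source labels below are hypothetical stand-ins for whatever taxonomy the original dataset used):

```python
# Hypothetical mapping from a source dataset's labels onto four target
# categories. The left-hand labels are invented for illustration.
LABEL_MAP = {
    "loaded_language": "emotional_framing",
    "appeal_to_fear": "emotional_framing",
    "black_and_white_fallacy": "absolutist_language",
    "vague_authority": "unclear_sources",
    "slogans": "persuasive_language",
}

def remap(examples: list[dict]) -> list[dict]:
    """Keep only examples whose label maps onto a target category,
    rewriting each kept example to use the target label."""
    out = []
    for ex in examples:
        target = LABEL_MAP.get(ex["label"])
        if target is not None:
            out.append({"text": ex["text"], "label": target})
    return out
```

Examples whose labels fall outside the mapping are simply dropped, which also serves as a crude cleaning pass.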

The project was built using:

  • Chrome Extension (JavaScript, HTML, CSS)
  • FastAPI (Python backend)
  • Transformer model for text classification
  • OpenAI API for conversational exploration of flagged text

Challenges we ran into

One of our biggest challenges was the lack of reliable tools and datasets. The existing bias-detection libraries we tried first, Dbias and Unbias, were outdated and would not install cleanly in our environment, which forced us to rethink our approach and train our own model instead.

Another challenge was finding a suitable dataset. Most propaganda or bias detection datasets were either inaccessible, paid, or not structured for the categories we wanted to detect. We had to clean an existing dataset, adapt the labels, and generate additional examples to create something usable within the limited hackathon timeframe.

Accomplishments that we're proud of

We are proud that we were able to train and deploy our own model in such a short time. Despite the limited dataset and tight timeline, we managed to build a working system that can identify multiple propaganda patterns and highlight them directly inside real news articles.

Creating a fully functioning Chrome extension that integrates machine learning and an AI assistant was a major milestone for us.

What we learned

This project taught us how to train and deploy a text classification model from scratch, as well as how to build and integrate a Chrome extension with a backend API. We also learned a lot about collaboration. Working on different parts of the system required clear communication, effective use of Git, and coordination between frontend and backend development. Perhaps most importantly, this project reinforced the idea that AI systems should not be blindly trusted. Even highly accurate models can make mistakes, which is why Roshan focuses on helping users think critically rather than making decisions for them.

What's next for Roshan

There are many directions we want to take Roshan next. First, we want to expand and improve the training dataset so the model can detect propaganda patterns more accurately and across more contexts. We also plan to introduce additional categories of manipulation techniques, allowing the extension to identify a wider range of persuasive or misleading language. Finally, we want to expand Roshan beyond news articles. In the future, the extension could analyze content on platforms such as Twitter, Instagram, or other social media, where propaganda and misinformation often spread even faster.
