Inspiration

Social media algorithms quietly shape what we see, think, and believe. We realized that people rarely see the bias—they just absorb it. Instead of removing biased content, we wanted to build a tool that makes bias visible. Re-Bias was inspired by the idea that awareness, not censorship, is the first step toward digital balance.

What it does

Re-Bias analyzes short-form videos from YouTube Shorts, TikTok, and Instagram Reels to reveal their ideological or emotional bias. It extracts the speech, subtitles, and visual tone of each clip, runs them through AI models, and calculates a Bias Index (0–100%). The result appears as a small bias banner overlay—like a battery icon—showing how neutral or extreme the content is. Tapping the banner opens a detailed reasoning page explaining why that bias score was given.
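To make the banner concrete, here is a minimal sketch of how a 0–100 Bias Index could map to a battery-style banner label. The thresholds and label names are illustrative placeholders, not the cutoffs Re-Bias actually uses:

```python
def banner_level(bias_index: float) -> str:
    """Map a 0-100 Bias Index to a battery-style banner label.

    The thresholds below are illustrative assumptions, not the
    real Re-Bias cutoffs.
    """
    if not 0 <= bias_index <= 100:
        raise ValueError("Bias Index must be in [0, 100]")
    if bias_index < 25:
        return "neutral"
    if bias_index < 50:
        return "mild"
    if bias_index < 75:
        return "strong"
    return "extreme"
```

The detailed reasoning page would then explain which inputs (speech, subtitles, visual tone) pushed the score into its band.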

How we built it

Frontend: HTML, CSS, JavaScript for the interactive bias visualization and transcript display.

Backend: Node.js handling API calls and AI model inference.

AI Models: Whisper for speech-to-text, BERT/RoBERTa for sentiment and ideological tone, and OpenCV for visual mood analysis.

Tools: GitHub + VS Code for collaboration, Node.js for integration, and Vercel/AWS for deployment.

We trained the models to detect emotional polarity and framing patterns typical of political or feminist discourse, then combined them into a single Bias Index.
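The fusion step above can be sketched as a weighted combination of per-modality scores. The weights, argument names, and rounding here are our illustrative assumptions; in the real pipeline Whisper, BERT/RoBERTa, and OpenCV produce the upstream scores, and the weights would be tuned against labeled clips:

```python
def bias_index(text_polarity: float,
               ideological_tone: float,
               visual_mood: float,
               weights: tuple = (0.5, 0.3, 0.2)) -> float:
    """Fuse per-modality scores (each in [0, 1]) into a 0-100 Bias Index.

    weights is a hypothetical (text, tone, visual) split; a production
    system would calibrate these against human-labeled clips.
    """
    w_text, w_tone, w_visual = weights
    fused = (w_text * text_polarity
             + w_tone * ideological_tone
             + w_visual * visual_mood)
    return round(100 * fused, 1)
```

For example, a clip with strong textual polarity but a calm visual mood would land in the middle of the scale rather than at either extreme.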

Challenges we ran into

Extracting accurate transcripts from short-form videos with background noise and fast speech.

Calibrating bias scores to avoid over-detection on neutral or sarcastic content.

Ensuring real-time rendering of the bias banner without delaying video playback.

Integrating multiple AI pipelines smoothly within one lightweight app.
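One way to picture the calibration fix for over-detection is a dead-zone around the neutral end of the scale: weak signals are treated as noise and zeroed out, and the remaining range is rescaled. The margin below is a hypothetical illustration of the idea, not our exact formula:

```python
def calibrate(raw_score: float, dead_zone: float = 0.15) -> float:
    """Suppress weak bias signals so near-neutral clips score near zero.

    raw_score is in [0, 1]; dead_zone is a hypothetical margin below
    which the signal is treated as noise. Scores above the margin are
    rescaled back onto [0, 1].
    """
    if raw_score <= dead_zone:
        return 0.0
    # Rescale the surviving range so the maximum still maps to 1.0.
    return (raw_score - dead_zone) / (1.0 - dead_zone)
```

Sarcastic or neutral clips that trip the models only slightly fall inside the dead-zone and render as unbiased, which is how we reduced false positives.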

Accomplishments that we're proud of

Built an end-to-end system that visualizes bias in real time across multiple platforms.

Created a clear Bias Index framework (0–100%) that both experts and everyday users can interpret.

Developed a clean UI that combines data science with design clarity—bias as a visible layer, not a hidden metric.

Managed full-stack collaboration using GitHub and VS Code within the hackathon's timeframe.

To our knowledge, no existing tool overlays a per-clip bias score on short-form video in this way.

What we learned

Bias is not binary—it’s a spectrum influenced by tone, framing, and omission.

Building ethical AI isn’t only about filtering; it’s about transparency and accountability.

Even advanced models like BERT and Whisper can reflect their own training biases, which taught us to validate and cross-check results.

Multimodal analysis (text + audio + visual) gives a much richer understanding of bias than text alone.

What's next for Re-Bias

Launch a browser extension and mobile app that automatically shows bias indicators while scrolling.

Expand analysis to detect misinformation, emotional manipulation, and framing bias beyond politics and feminism.

Implement user feedback loops so viewers can correct or contest detected bias.

Partner with educators and journalists to make algorithmic transparency accessible to everyone.
