Inspiration 🧠
This project was motivated by the need for a Chrome extension that can identify harmful language, particularly in queer-unfriendly content. The goal was to create a tool that makes users' web browsing safer by spotting and flagging potentially objectionable content.
What it does 🌈
The Chrome extension is backed by a Flask web API that serves the sentiment analysis model. To evaluate the text on web pages, the extension injects a content script into each page; the sentiment analysis model then analyzes the collected text and classifies it as offensive or not. Based on that classification, the extension displays the result or takes the appropriate action, helping users identify and avoid offensive content.
How we built it 🛠
The backend API web app was built with Flask, which exposes routes that accept text input from the extension's JavaScript code and return the analysis results. The Flask app wraps a sentiment analysis model, VADER. On the front end, an HTML file builds the user interface and a JavaScript file handles user interactions and communicates with the background script.
Challenges we ran into 💪🏼
Several difficulties we ran into while working on the project:
- Integrating Flask as the API web app and establishing reliable communication between the extension and the Flask server.
- Implementing the sentiment analysis model and preprocessing the text properly so that classification is reliable.
- Injecting the content script into web pages and capturing the relevant text effectively.
- Designing and implementing an intuitive user interface for the Chrome extension.
Accomplishments that we're proud of 🏆
Project milestones include the successful integration of Flask as the backend API web app and the establishment of proper communication between the extension and the Flask server; a reliable sentiment analysis model that can distinguish hostile from inoffensive language, particularly queer-unfriendly content; and a Chrome extension that efficiently analyzes the text content of web pages and gives users the information they need to browse the internet safely.
What we learned 📚
We learned how to integrate Flask as an API web app and use it to provide backend functionality for a Chrome extension; how to implement and tune a sentiment analysis model for a specific offensive-language classification task; and how to handle the challenges of injecting content scripts into web pages and gathering the text needed for analysis.
What's next for QueerSafe 🔬
Potential future directions for the project include:
- Improving the sentiment analysis model's accuracy and its coverage of derogatory language aimed at queer-unfriendly content.
- Extending the Chrome extension with new features, such as customizable filtering options or the ability to report offensive content.
- User testing and feedback collection to iteratively improve the extension's utility and effectiveness.
Built With
- api
- azure
- css
- flask
- html
- javascript
- jupyternotebook
- machine-learning
- markdown
- natural-language-processing
- nltk
- python
- vader