
SafeCord

AI-assisted Discord moderation bot that helps track potential child predators and unsafe behavior.
Built with FastAPI, discord.py, and zero-shot learning (ZSL).

Python 3.12.7 · discord.py 2.6.3 · FastAPI

Overview

SafeCord is a prototype moderation tool designed to support Discord server moderators in identifying unsafe or predatory interactions.

It uses natural language processing (zero-shot classification) to analyze a variety of messages in context, flag suspicious user behavior, and log moderation events. Moderators can manually flag users, view server watchlists, and clear logs of any specified user. All data is securely stored in a lightweight SQLite database.

⚠️ Note: SafeCord is a demo prototype and not intended to be a production-ready tool. All data in this repository is synthetic test data generated for demonstration purposes.
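
In outline, the analysis step feeds a message to a zero-shot classifier and turns the resulting label scores into a flag decision. A minimal sketch of that decision logic, where the label set and the 0.7 threshold are illustrative assumptions rather than SafeCord's actual values:

```python
# Decide whether to flag a message given zero-shot classifier output.
# The label names and the 0.7 threshold are illustrative assumptions.
UNSAFE_LABELS = {"grooming", "solicitation", "coercion"}

def should_flag(scores: dict, threshold: float = 0.7):
    """Return (flagged, top_label, top_score) for one message's label scores."""
    top_label, top_score = max(scores.items(), key=lambda kv: kv[1])
    return (top_label in UNSAFE_LABELS and top_score >= threshold,
            top_label, top_score)
```

For example, `should_flag({"grooming": 0.91, "benign": 0.09})` flags the message, while a benign top label or a low-confidence score does not.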

Why I Built This

Child grooming and exploitation online are rarely sudden. Predators use long-term psychological manipulation, beginning with trust-building and ending in coercion, secrecy, or solicitation. Unfortunately, these behaviors often don’t trigger automated moderation systems because they don’t use overtly offensive or blatant language. Additionally, platforms like Discord operate in isolated server environments, meaning there’s no way for moderators to see if someone has been flagged or warned elsewhere. A predator removed from one server can simply hop into another without consequence. This creates an ecosystem where patterned harm can continue undetected.

This is one of the most urgent and complex safety problems facing young people online today. While platforms have introduced content moderation and reporting tools, these tend to be reactive, not proactive. What’s missing is a moderator-focused system that enables cross-server communication and trust tracking: a way for communities to quietly share red flags, patterns, and safety concerns without exposing users or violating privacy.

Solving this wouldn’t eliminate grooming entirely, but it could disrupt the repeat behavior patterns that predators rely on, and give moderators the context they need to act sooner. That’s a creative, ethical, and highly impactful design challenge worth exploring.

Demo Data Analysis

The notebook analysis/demo.ipynb demonstrates how moderation logs can be used to generate impactful insights.

Included are examples of:

  • User flagging activity
  • Score distributions (boxplots, histograms)
  • Heatmaps of specific user behavior
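
The aggregates behind those plots come straight from the logged confidence scores. As a rough, dependency-free sketch of the histogram idea (the percent-bucket width is an arbitrary choice here, not what the notebook necessarily uses):

```python
from collections import Counter

def score_histogram(scores, width=10):
    """Bucket confidence scores (0.0-1.0) into percent ranges, e.g. '90-100%'."""
    counts = Counter()
    for s in scores:
        pct = min(int(round(s * 100)), 99)  # clamp 1.0 into the top bucket
        lo = (pct // width) * width
        counts[f"{lo}-{lo + width}%"] += 1
    return dict(counts)
```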

Features & Commands

  • Manual flagging of users with /flag
  • Server-wide watchlists with /watchlist
  • Clear flags for users with /clear
  • AI-powered analysis with /analyze (zero-shot classification)
  • SQLite database storage for moderation logs (flags + confidence scores)
  • Demo Jupyter notebook showcasing analysis of collected logs

Architecture

  • Discord Bot (slash commands): moderation commands
  • FastAPI backend: processes data & stores logs
  • SQLite database: stores flags & NLP scores
  • Jupyter Notebook: demonstrates possible insights through analysis
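
As a rough sketch of what the SQLite layer could look like using only the standard library (the table and column names here are assumptions, not SafeCord's actual schema):

```python
import sqlite3

def init_db(path=":memory:"):
    """Create the flags table; an illustrative schema, not the project's real one."""
    conn = sqlite3.connect(path)
    conn.execute("""CREATE TABLE IF NOT EXISTS flags (
                        guild_id INTEGER, user_id INTEGER,
                        reason TEXT, score REAL)""")
    return conn

def add_flag(conn, guild_id, user_id, reason, score):
    """Record one flag (manual or model-generated) with its confidence score."""
    conn.execute("INSERT INTO flags VALUES (?, ?, ?, ?)",
                 (guild_id, user_id, reason, score))
    conn.commit()

def watchlist(conn, guild_id):
    """Flagged users in one guild, highest confidence first."""
    return conn.execute(
        "SELECT user_id, COUNT(*), MAX(score) FROM flags "
        "WHERE guild_id = ? GROUP BY user_id ORDER BY MAX(score) DESC",
        (guild_id,)).fetchall()
```

Clearing a user's logs (the /clear command) would then be a single `DELETE FROM flags WHERE guild_id = ? AND user_id = ?`.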

Installation & Setup

Steps to run locally:

Clone the repo:

git clone https://github.com/ImrahnF/Safecord
cd Safecord

Install dependencies:

pip install -r requirements.txt

Add your bot token in a .env file:

DISCORD_TOKEN=discord_bot_token

Run the backend API:

uvicorn api.main:app --reload

Start the bot:

python bot/main.py

Screenshots/Demo

Analyzing a user

The /analyze command running zero-shot classification on a message.

List of commands

Slash commands available to moderators.

Stored data

SQLite database showing the confidence scores generated by the classification model.

Disclaimer

Important to note:

  • The model was not trained on any real grooming datasets.
  • Not designed for environments with extremely high traffic.
  • All data and insights are synthetic and intended for educational/demo purposes only.

Future Improvements

  • Fine-tuned models for grooming detection
  • Migrate database from SQLite to a more robust engine (such as PostgreSQL/MySQL)
  • Extensive moderation web dashboard
  • Containerization and cloud deployment with Docker/Kubernetes
