⬇️ Install from Chrome Web Store | ▶ Watch Demo
A Chrome extension that creates nutrition labels for digital content, helping users make informed decisions about their digital consumption.
- Content Classification: Automatically categorizes posts as Education, Entertainment, or Emotional using AI analysis
- Background Processing: Operates passively with no user interaction required - no tagging, buttons, or interruptions
- Attention Budgeting: Set daily consumption limits per content category with automatic post filtering when budgets are exceeded
- Analytics Dashboard: Real-time consumption tracking, weekly trends, and budget monitoring with visual breakdowns
- Local-First Privacy: All data stored locally in browser storage - no credentials or personal data transmitted
- Platform Support: Twitter/X implementation with extensible architecture for additional platforms
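The attention-budgeting behavior described above can be sketched as a small pure function. The types and names below (`BudgetState`, `shouldFilterPost`) are illustrative only, not the extension's actual API:

```typescript
// Hypothetical sketch of budget-based post filtering; the real extension's
// types and function names may differ.
type Category = 'Education' | 'Entertainment' | 'Emotional';

interface BudgetState {
  limits: Record<Category, number>;    // daily attention budget per category
  consumed: Record<Category, number>;  // attention consumed so far today
}

// A post is filtered once showing it would push its category over budget.
function shouldFilterPost(
  state: BudgetState,
  category: Category,
  score: number
): boolean {
  return state.consumed[category] + score > state.limits[category];
}
```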
1. Clone the repository:

   ```
   git clone <repository-url>
   cd resist
   ```

2. Install dependencies:

   ```
   npm install
   ```

3. Build the extension:

   ```
   npm run build
   ```

4. Load in Chrome:

   - Open Chrome and navigate to `chrome://extensions/`
   - Enable "Developer mode"
   - Click "Load unpacked" and select the `dist` folder
For development with extensive logging/debugging:

```
npm run dev:build
```

- Browse Social Media: Visit Twitter/X as normal
- View Nutrition Labels: The extension automatically analyzes posts and attaches nutrition labels to each tweet
- Access Dashboard: Click the extension icon to view detailed analytics and settings
- Customize: Use the settings page to adjust parameters and preferences
```
src/
├── manifest.json                 # Chrome extension manifest
├── content.ts                    # Content script for social media sites
├── background-service-worker.ts  # Background service worker
├── popup.ts                      # Extension popup interface
├── settings/                     # Settings dashboard
├── platforms/                    # Platform-specific implementations
└── utils/                        # Shared utilities
```
- TypeScript - Type-safe JavaScript
- Vite - Build tool and bundler
- Hugging Face Transformers - AI/ML for local content analysis
- Chrome Extensions API - Browser integration
The extension supports remote content analysis (using Grok) through a configurable API endpoint. During development, please either disable remote classification or point the extension at your own server.
To set up your own analysis server:

- Configure the endpoint in Settings > Advanced > Remote Analysis URL
- Default endpoint: `https://api.resist-extension.org/api/analyze`
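For testing a self-hosted server, a minimal client sketch might look like the following; `buildAnalyzeUrl` and `analyze` are illustrative helpers, not part of the extension's source, and rely on the request/response formats specified below:

```typescript
// Serialize the request payload to JSON and URL-encode it into the
// `content` query parameter, as the analysis API expects.
interface AnalyzeRequest {
  text: string;
  media_elements: string[];
}

function buildAnalyzeUrl(endpoint: string, payload: AnalyzeRequest): string {
  return `${endpoint}?content=${encodeURIComponent(JSON.stringify(payload))}`;
}

// Hypothetical usage against a self-hosted server:
async function analyze(endpoint: string, payload: AnalyzeRequest): Promise<unknown> {
  const res = await fetch(buildAnalyzeUrl(endpoint, payload));
  return res.json(); // success, "processing", or "error" shape
}
```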
Endpoint: `GET /api/analyze?content={urlencoded_json}`

Request query parameter (`content`, URL-encoded JSON):

```json
{
  "text": "AuthorName: Post content text (truncated to 1000 chars)",
  "media_elements": ["https://example.com/image1.jpg", "https://example.com/image2.png"]
}
```

Response Format:
Success Response:

```json
{
  "Education": {
    "subcategories": {
      "Learning and education": {"score": 0.25},
      "News, politics, and social concern": {"score": 0.05}
    },
    "totalScore": 0.3
  },
  "Emotion": {
    "subcategories": {
      "Anxiety and fear": {"score": 0.2},
      "Controversy and clickbait": {"score": 0.1}
    },
    "totalScore": 0.3
  },
  "Entertainment": {
    "subcategories": {
      "Celebrities, sports, and culture": {"score": 0.1},
      "Humor and amusement": {"score": 0.3}
    },
    "totalScore": 0.4
  },
  "totalAttentionScore": 1.0
}
```

Processing Response (for async analysis):

```json
{
  "status": "processing",
  "retry_after": 5
}
```

Error Response:

```json
{
  "status": "error",
  "message": "Analysis failed: reason"
}
```

The extension uses Hugging Face models for local content analysis. You can customize which models to use by editing `src/background-service-worker.ts`:
Text classification model:

```typescript
static model = 'Xenova/mobilebert-uncased-mnli';
```

Alternative models: `Xenova/distilbert-base-uncased-mnli`, `Xenova/bart-large-mnli`

Image captioning model:

```typescript
static model = 'Xenova/vit-gpt2-image-captioning';
```

Alternative models: `Salesforce/blip-image-captioning-base`, `Xenova/vit-gpt2-image-captioning-large`
Note: After changing models, rebuild the extension with `npm run build`. First-time model loading may take longer as models are downloaded and cached locally.
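Zero-shot classifiers in Transformers.js return parallel `labels`/`scores` arrays. Grouping such output into the category/subcategory shape shown in the success response above might look like the sketch below; the label-to-category mapping here is taken from the example response and may differ from the extension's actual taxonomy:

```typescript
// Hypothetical post-processing of zero-shot classification output into
// per-category totals. Labels outside the known taxonomy are ignored.
const CATEGORY_OF: Record<string, 'Education' | 'Emotion' | 'Entertainment'> = {
  'Learning and education': 'Education',
  'News, politics, and social concern': 'Education',
  'Anxiety and fear': 'Emotion',
  'Controversy and clickbait': 'Emotion',
  'Celebrities, sports, and culture': 'Entertainment',
  'Humor and amusement': 'Entertainment',
};

interface CategoryScores {
  [category: string]: {
    subcategories: Record<string, { score: number }>;
    totalScore: number;
  };
}

function groupScores(labels: string[], scores: number[]): CategoryScores {
  const out: CategoryScores = {};
  labels.forEach((label, i) => {
    const cat = CATEGORY_OF[label];
    if (!cat) return; // skip unknown labels
    out[cat] ??= { subcategories: {}, totalScore: 0 };
    out[cat].subcategories[label] = { score: scores[i] };
    out[cat].totalScore += scores[i];
  });
  return out;
}
```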
The extension uses a task-based architecture for processing the content of every post. You can easily extend the software by adding new analysis tasks (e.g., video analysis, sentiment analysis) in `src/task-manager.ts`:
Add your task type to the task initialization in initializeTasksForPost():
```typescript
const tasks: Task[] = [
  // ... existing tasks
  {
    id: `${postId}-your-task`,
    type: 'your-task',
    status: 'pending',
    resultType: 'text' // or 'classification'
  }
]
```

Add a case to the task execution switch statement:

```typescript
switch (task.type) {
  // ... existing cases
  case 'your-task':
    result = await this.executeYourTask(platform, post, postId)
    break
}
```

Create the execution method:

```typescript
private async executeYourTask(platform: SocialMediaPlatform, post: PostElement, postId: string): Promise<string> {
  // Your analysis logic here
}
```

Available task types: `mock-task` (for testing/debugging), `post-text`, `image-description`, `remote-analysis`
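As a concrete illustration of the steps above, a hypothetical sentiment task (not part of the current codebase) could score post text with a naive keyword heuristic; a real implementation would more likely use a Hugging Face sentiment model:

```typescript
// Naive keyword-based sentiment scorer, illustrative only.
const POSITIVE = ['great', 'love', 'happy', 'wonderful'];
const NEGATIVE = ['awful', 'hate', 'angry', 'terrible'];

function naiveSentiment(text: string): 'positive' | 'negative' | 'neutral' {
  const words = text.toLowerCase().split(/\W+/);
  const pos = words.filter((w) => POSITIVE.includes(w)).length;
  const neg = words.filter((w) => NEGATIVE.includes(w)).length;
  if (pos > neg) return 'positive';
  if (neg > pos) return 'negative';
  return 'neutral';
}

// The corresponding execution method would then return the label, e.g.:
// private async executeSentimentTask(platform: SocialMediaPlatform, post: PostElement, postId: string): Promise<string> {
//   const text = platform.extractText(post); // assumed helper; may not match the real platform API
//   return naiveSentiment(text);
// }
```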
MIT License - see LICENSE file for details


