ImdataScientistSachin/ImdataScientistSachin



LinkedIn · Email · Live Demo


👋 About Me

```python
class SachinPaunikar:
    role        = "Data Scientist | ML Fairness & Responsible AI Engineer"
    location    = "Nagpur, Maharashtra, India 🇮🇳"
    company     = "SparrowAI Research and Development Center"
    education   = "B.E. Computer Science — Nagpur University  |  CGPA: 8.2 / 10.0"
    open_to     = ["Remote roles", "Full-time positions", "Collaborations"]

    focus_areas = [
        "⚖️  ML Fairness & Bias Detection",
        "📊  Real-time Drift Monitoring",
        "🔍  Explainable AI (SHAP · DiCE · LIME)",
        "🏥  Medical Computer Vision",
        "🎵  Deep Learning · Audio Classification",
        "🗣️  NLP · Generative AI Pipelines",
    ]

    currently   = "Building production-grade ethical AI systems aligned with EU AI Act & EEOC"
    philosophy  = "Make AI responsible, transparent, and fair — for everyone."
```

🚀 Featured Projects

| # | Project | Description | Stack | Demo |
|---|---------|-------------|-------|------|
| 🥇 | Bias Drift Guardian | Real-time AI fairness & drift monitoring; EEOC / EU AI Act compliant; intersectional bias detection across compound subgroups | Python · Streamlit · FastAPI · SHAP · Fairlearn · Docker | 🔴 Live |
| 🥈 | Urban Sound Classifier | 96.63% accuracy on UrbanSound8K (8,732 samples); hybrid U-Net + CNN ensemble with real-time microphone classification | TensorFlow · U-Net · Librosa · TFLite · Flask | |
| 🥉 | Transcript → Ad Generator | NLP pipeline: transcript ingestion → NER → LLM ad copy → async video rendering; CI/CD via GitHub Actions | spaCy · Redis Queue · MoviePy · Docker · GitHub Actions | |
| 4️⃣ | Skin Lesion Segmentation | Medical AI: U-Net pixel segmentation on HAM10000; temporal tracking with automated >15% growth alerts | TensorFlow · U-Net · OpenCV · Albumentations · HAM10000 | |
| 5️⃣ | RetinaFace Detection | Face detection & 5-point landmark localisation using the RetinaFace architecture | Python · InsightFace · OpenCV · Jupyter | 🔴 Live |

🛡️ Flagship — Bias Drift Guardian

Detect bias before it becomes a lawsuit. Monitor drift before it breaks your model.

Live Demo GitHub License: MIT

What makes it unique: Standard fairness tools check one attribute at a time (gender or age). Bias Drift Guardian detects compound discrimination across intersecting subgroups — the kind courts care about.

```
Standard:  "No gender bias detected" ✅  (Male: 70%,  Female: 68%)
Ours:      "Female employees aged 50+ → only 38% approval rate!" ❌  (Disparity: 0.48)
```
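The compound-subgroup check above can be sketched with plain pandas (a minimal illustration, not the Bias Drift Guardian implementation; the data and column names are hypothetical):

```python
import pandas as pd

# Hypothetical approval decisions; in practice these come from model predictions.
df = pd.DataFrame({
    "gender":   ["M", "M", "M", "M", "M", "M", "F", "F", "F", "F", "F", "F"],
    "age_band": ["<50", "<50", "<50", "50+", "50+", "50+",
                 "<50", "<50", "<50", "50+", "50+", "50+"],
    "approved": [1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0],
})

# Single-attribute check: gender-level rates are identical — "no bias".
by_gender = df.groupby("gender")["approved"].mean()

# Intersectional check: compound subgroups (gender × age) expose the gap.
by_subgroup = df.groupby(["gender", "age_band"])["approved"].mean()

# Disparity ratio: worst subgroup rate over best subgroup rate.
disparity = by_subgroup.min() / by_subgroup.max()
print(by_subgroup)
print(f"Disparity ratio: {disparity:.2f}")
```

Here both genders approve at the same overall rate, yet the Female × 50+ subgroup sits far below every other cell — exactly the pattern a single-attribute audit misses.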

Core capabilities:

| Capability | Details |
|------------|---------|
| 🎯 Intersectional Fairness | Compound subgroup analysis (gender × age × race) |
| 📊 Drift Detection | PSI · KS Test · Chi-Square with configurable thresholds |
| 🔍 Root Cause Analysis | SHAP feature-importance drift attribution |
| 🔮 Counterfactual XAI | DiCE What-If explanations (constraint-aware, EEOC-auditable) |
| 📁 Multi-Dataset | German Credit · Adult Census · COMPAS Recidivism |
| 🚀 Deployment | Docker Compose · FastAPI · Streamlit Cloud · MIT licensed |
[Bias Drift Guardian live dashboard — screenshot links to the live demo]
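The PSI drift metric listed above can be computed with NumPy alone. A minimal sketch (bin count and the 0.1 / 0.2 alert levels are common industry conventions, not necessarily the project's defaults):

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline and a live sample."""
    # Bin edges are taken from the baseline (expected) distribution.
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)
    # Convert to proportions; a small epsilon avoids log(0) and division by zero.
    eps = 1e-6
    e_pct = np.clip(e_counts / e_counts.sum(), eps, None)
    a_pct = np.clip(a_counts / a_counts.sum(), eps, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)
no_drift = rng.normal(0.0, 1.0, 10_000)
drifted  = rng.normal(0.8, 1.0, 10_000)  # mean shift simulates feature drift

print(f"PSI (no drift): {psi(baseline, no_drift):.3f}")
print(f"PSI (drifted):  {psi(baseline, drifted):.3f}")
```

By the usual rule of thumb, PSI < 0.1 means stable, 0.1–0.2 means watch, and > 0.2 means the feature has drifted enough to warrant investigation.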


🧰 Tech Stack

Core Languages

Python SQL R Java

Machine Learning & Deep Learning

TensorFlow PyTorch scikit-learn Keras XGBoost

Responsible AI & Explainability

SHAP Fairlearn EU AI Act EEOC

Data & Visualisation

Pandas NumPy Plotly Seaborn Power BI

NLP & Computer Vision

spaCy OpenCV Librosa

Deployment & MLOps

Streamlit FastAPI Flask Docker GitHub Actions Redis Git Linux


🎯 Responsible AI — My Niche

I focus on a gap most ML engineers ignore: what happens after your model is deployed.

The hidden reality of production ML:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
  • 80% of models experience drift within 6 months of deployment
  • Compound discrimination (gender × age × race) goes undetected
    by standard single-attribute fairness checks
  • EEOC / EU AI Act compliance is now a legal requirement,
    not a nice-to-have
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

My solution: Real-time monitoring that catches what others miss.
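The two-sample tests this kind of monitoring leans on are one-liners with SciPy (an illustrative sketch; the counts and thresholds are made up for the example):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
baseline = rng.normal(0.0, 1.0, 5_000)
live     = rng.normal(0.5, 1.0, 5_000)   # shifted mean: numeric drift

# KS test for numeric features: a small p-value means the distribution changed.
ks = stats.ks_2samp(baseline, live)

# Chi-square test for categorical features (observed vs. expected counts;
# both must sum to the same total).
expected = np.array([500, 300, 200])      # baseline category counts
observed = np.array([420, 330, 250])      # live category counts
chi2 = stats.chisquare(observed, f_exp=expected)

print(f"KS p-value:   {ks.pvalue:.2e}")
print(f"Chi² p-value: {chi2.pvalue:.2e}")
```

In production you would run these per feature on a schedule and alert when a p-value crosses a configured significance level.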

Areas of deep expertise:

  • Intersectional Fairness — compound multi-attribute subgroup analysis beyond single-attribute tools
  • Data Drift Detection — PSI, KS Test, Chi-Square with root-cause attribution via SHAP
  • Counterfactual Explanations — DiCE-based What-If analysis that is constraint-aware and audit-ready
  • Regulatory Compliance — EEOC (US), EU AI Act, GDPR-aware system design
  • MLOps — Docker, FastAPI, GitHub Actions CI/CD, Redis async queues, Streamlit Cloud deployment
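A counterfactual "what-if" explanation can be illustrated with a toy greedy search against a scikit-learn model. This is a sketch only — real tools such as DiCE generate diverse counterfactuals under plausibility and immutability constraints, and every feature name here is hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy credit-approval model: approve when income + credit_history score > 10.
rng = np.random.default_rng(42)
X = rng.uniform([0, 0], [10, 10], size=(500, 2))   # [income, credit_history]
y = (X[:, 0] + X[:, 1] > 10).astype(int)
model = LogisticRegression().fit(X, y)

def counterfactual(x, step=0.25, max_steps=40):
    """Greedily nudge features along the model's coefficient signs
    until the decision flips from denied (0) to approved (1)."""
    x = x.copy()
    for _ in range(max_steps):
        if model.predict([x])[0] == 1:
            return x
        x += step * np.sign(model.coef_[0])
    return None

denied = np.array([3.0, 4.0])              # a rejected applicant
cf = counterfactual(denied)
print("Counterfactual:", cf, "->", model.predict([cf])[0])
```

The returned point answers the audit-relevant question "what is the smallest change that would have flipped this decision?" — which is also why constraint-awareness matters: a usable counterfactual must not suggest changing immutable attributes like age or race.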

📈 Experience Highlights

🏢 SparrowAI Research and Development Center  |  Data Scientist & AI Researcher  |  Jan 2025 – Present
   → Built Bias Drift Guardian: production fairness monitoring system (EEOC / EU AI Act)
   → Intersectional bias detection across compound subgroups (e.g. Female + Age 50+ → 38% approval)
   → Drift detection via PSI, KS Test, Chi-Square across 3 real-world datasets
   → SHAP root-cause analysis + DiCE counterfactual explanations for compliance teams
   → Dockerised full stack; 2,800+ lines of technical documentation

🏢 Sparrow AI Pvt. Ltd.  |  Data Science Intern  |  Jan 2025 – Jun 2025
   → Customer churn prediction & sales forecasting (classification + regression)
   → Automated preprocessing pipelines — reduced manual effort ~30%
   → Stakeholder dashboards (Streamlit · Matplotlib · Seaborn)

🤝 Let's Connect

I am actively looking for Data Scientist and ML Engineer roles — remote or Nagpur-based.

If you work in Responsible AI, MLOps, FinTech, HealthTech, or HR Tech — let's talk.


LinkedIn Email Live Demo


"Empowering ethical AI through transparent fairness monitoring, real-time bias detection, and responsible machine learning."

