A lightweight posture tracking app built during the AI Research Hackathon at UNC Charlotte. Uses MediaPipe Pose and a simple classifier to detect good vs bad posture from a webcam stream. Includes tools to collect your own dataset and train a model.
- Real‑time webcam posture prediction
- Data collection script to label frames as good/bad
- Simple training script (Decision Tree) with saved model
- `posture/` – Python package with reusable modules
  - `geometry.py` – simple `Point` and angle utils
  - `collect.py` – `run_data_collector()` to capture labeled frames
  - `predict.py` – `run_live_predictor()` for real-time inference
  - `train.py` – `train_model()` to train and save a model
- `data/` – dataset location (CSV), kept out of Git
- `models/` – trained model artifacts, kept out of Git
- `live_posture_predictor.py` – thin wrapper that runs the live predictor
- `Python_data_collector.py` – thin wrapper that runs the collector
- `train_model_v3.py` – thin wrapper that trains the model
- Create and activate a virtual environment

  ```bash
  python3 -m venv .venv
  source .venv/bin/activate  # macOS/Linux
  # On Windows: .venv\Scripts\activate
  ```

- Install dependencies

  ```bash
  pip install -r requirements.txt
  ```

- Collect labeled data (optional, if you want to retrain)

  ```bash
  python Python_data_collector.py
  ```

  - Press `g` for good posture, `b` for bad posture, `q` to quit
  - Data will append to `data/posture_dataset.csv`
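The collector's core job is appending one labeled row per captured frame. A minimal sketch of that logic, assuming flattened landmark coordinates per row — the `append_sample` helper and the `f0, f1, …` column names are illustrative, not the repo's actual schema:

```python
import csv
import os

def append_sample(csv_path, landmarks, label):
    """Append one labeled frame to the dataset CSV.

    `landmarks` is a flat list of floats (e.g. x, y, z per pose
    landmark); `label` is "good" or "bad". A header row is written
    only when the file is first created, so repeated collection
    runs keep appending to the same dataset.
    """
    new_file = not os.path.exists(csv_path)
    with open(csv_path, "a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            header = [f"f{i}" for i in range(len(landmarks))] + ["label"]
            writer.writerow(header)
        writer.writerow(list(landmarks) + [label])

# Example: append one frame per key press ("g" -> good, "b" -> bad)
append_sample("posture_dataset.csv", [0.1, 0.2, 0.3], "good")
append_sample("posture_dataset.csv", [0.4, 0.5, 0.6], "bad")
```

Appending (mode `"a"`) rather than overwriting is what lets you label data across multiple sessions.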
- Train the model (optional)

  ```bash
  python train_model_v3.py
  ```

  - Outputs `models/posture_model_v3.pkl`
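The training step presumably follows the standard scikit-learn fit/save pattern. A sketch under that assumption — the synthetic two-cluster data below stands in for `data/posture_dataset.csv`, and the hyperparameters are placeholders:

```python
import joblib
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the real dataset: rows of flattened
# landmark coordinates, labeled good ("upright") or bad ("slouched").
rng = np.random.default_rng(0)
X_good = rng.normal(loc=0.0, scale=0.1, size=(50, 6))
X_bad = rng.normal(loc=1.0, scale=0.1, size=(50, 6))
X = np.vstack([X_good, X_bad])
y = np.array(["good"] * 50 + ["bad"] * 50)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = DecisionTreeClassifier(max_depth=5, random_state=42)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Persist the trained model for the live predictor to load.
joblib.dump(model, "posture_model_v3.pkl")
```

Because the trainer is plain scikit-learn, swapping `DecisionTreeClassifier` for, say, `RandomForestClassifier` changes only one line.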
- Run the live predictor

  ```bash
  python live_posture_predictor.py
  ```

  - Press `q` to quit
- The included trainer uses a Decision Tree for simplicity. You can swap in any sklearn model.
- Features are the raw 3D pose landmark coordinates from MediaPipe; you can experiment with engineered angles.
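One common engineered feature is a joint angle, e.g. the angle at the shoulder between the ear and the hip, which tends to shrink when you slouch. A sketch using 2D points — this `angle_between` helper is illustrative and not necessarily what `geometry.py` provides:

```python
import math

def angle_between(a, b, c):
    """Angle ABC in degrees: the angle at vertex `b` formed by
    points `a` and `c`, each given as an (x, y) tuple."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    # Clamp to [-1, 1] to guard against floating-point drift.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# Example: ear, shoulder, and hip landmarks in a vertical line
# give a fully upright 180-degree neck/torso angle.
ear, shoulder, hip = (0.5, 0.2), (0.5, 0.4), (0.5, 0.8)
print(angle_between(ear, shoulder, hip))  # -> 180.0
```

Feeding a few such angles to the classifier, instead of raw coordinates, can make the model less sensitive to where you sit in the frame.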
- Python 3.9+
- Webcam access
- If the webcam doesn't open, try changing the camera index in `cv2.VideoCapture(0)` to `1`.
- If MediaPipe install fails on Apple Silicon, ensure `pip` is up to date and Python is from python.org, or use `conda`.
This project is released under the MIT License. See LICENSE.