# bias-variance-tradeoff

Here are 34 public repositories matching this topic...

L2-regularization-ridge-regression-course

L2 regularization, as used in Ridge regression, is a technique to prevent overfitting in machine learning by adding a penalty proportional to the sum of squared weights to the loss function. It forces weights to be small but rarely exactly zero, resulting in a smoother, more stable model.

  • Updated Mar 17, 2026
  • Python
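A minimal sketch of the idea the entry describes, comparing ordinary least squares with Ridge regression in scikit-learn. The data, the `alpha` value, and the sparsity pattern of `true_w` are illustrative assumptions, not taken from the repository:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

# Toy data: 50 samples, 10 features, only the first 3 truly informative.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
true_w = np.array([3.0, -2.0, 1.5] + [0.0] * 7)
y = X @ true_w + rng.normal(scale=0.5, size=50)

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)  # alpha scales the L2 penalty

# The L2 penalty shrinks the coefficient vector's norm, but the
# weights stay small rather than becoming exactly zero.
print(np.linalg.norm(ols.coef_), np.linalg.norm(ridge.coef_))
```

Unlike L1 (Lasso), the quadratic penalty has no corner at zero, which is why Ridge shrinks coefficients without pruning them.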
underfitting-overfitting-polynomial-regression-course

Underfitting and overfitting are critical concepts in machine learning, particularly when using Polynomial Regression to model data. Polynomial regression lets a model learn non-linear relationships by increasing the polynomial degree, making it susceptible to both underfitting (degree too low, model too simple) and overfitting (degree too high, model too complex).

  • Updated Mar 17, 2026
  • Python
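A small sketch of the underfitting/overfitting pattern the entry describes, using scikit-learn's `PolynomialFeatures`. The target function, noise level, and chosen degrees are illustrative assumptions:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Noisy samples from a sine wave; a straight line underfits it,
# a very high-degree polynomial overfits the noise.
rng = np.random.default_rng(1)
X = np.sort(rng.uniform(0, 1, 40)).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel() + rng.normal(scale=0.2, size=40)
X_test = np.linspace(0, 1, 200).reshape(-1, 1)
y_test = np.sin(2 * np.pi * X_test).ravel()

train_err, test_err = {}, {}
for degree in (1, 4, 15):  # underfit, reasonable fit, overfit
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression()).fit(X, y)
    train_err[degree] = mean_squared_error(y, model.predict(X))
    test_err[degree] = mean_squared_error(y_test, model.predict(X_test))
    print(degree, train_err[degree], test_err[degree])
```

Training error always falls as the degree grows, while test error is high at both extremes: the hallmark of the bias-variance tradeoff.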

Nine diagnostic tools for detecting and understanding overfitting in scikit-learn models — polynomial overfitting, learning curves, validation curves, bias-variance decomposition, regularisation sweeps, data leakage detection, and more. Companion code for the ML Diagnostics Mastery series.

  • Updated Apr 6, 2026
  • Python
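One of the diagnostics the entry lists, a validation curve, can be sketched with scikit-learn's `validation_curve`. The synthetic dataset and the `alpha` sweep are assumptions for illustration, not the repository's actual code:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import validation_curve

# Synthetic regression problem with many uninformative features.
X, y = make_regression(n_samples=200, n_features=30, n_informative=5,
                       noise=10.0, random_state=0)

# Sweep the regularisation strength and score each setting with 5-fold CV.
alphas = np.logspace(-3, 3, 7)
train_scores, val_scores = validation_curve(
    Ridge(), X, y, param_name="alpha", param_range=alphas, cv=5)

# A large train/validation gap at low alpha signals overfitting (variance);
# both scores dropping at high alpha signals underfitting (bias).
gap = train_scores.mean(axis=1) - val_scores.mean(axis=1)
print(gap)
```

Reading the gap across the sweep is the regularisation-sweep diagnostic: the best `alpha` sits where validation score peaks, between the two failure modes.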

This project focuses on developing and training supervised learning models for prediction and classification tasks, covering linear and logistic regression (using NumPy and scikit-learn), neural networks (with TensorFlow) for binary and multi-class classification, and decision trees along with ensemble methods such as random forests and boosted trees.

  • Updated Oct 12, 2024
  • Jupyter Notebook
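One of the topics the entry covers, ensembles of decision trees, connects directly to the bias-variance tradeoff: averaging many trees reduces variance. A minimal scikit-learn sketch on an assumed synthetic dataset (not the project's own data):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary classification task.
X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           random_state=0)

# A single unpruned tree is a low-bias, high-variance learner; a random
# forest averages many decorrelated trees to cut the variance.
tree_acc = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=5).mean()
forest_acc = cross_val_score(
    RandomForestClassifier(n_estimators=100, random_state=0), X, y, cv=5).mean()
print(tree_acc, forest_acc)
```

Boosted trees take the complementary route, starting from high-bias shallow trees and reducing bias stage by stage.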
