
Python Machine Learning Notebooks (Tutorial style)

Authored and maintained by Dr. Tirthajyoti Sarkar, Fremont, CA. Please feel free to add me on LinkedIn here.


https://miro.medium.com/max/1838/1*92h6Lg1Bu1F9QqoVNrkLdQ.jpeg

Requirements

  • Python 3.5
  • NumPy (pip install numpy)
  • Pandas (pip install pandas)
  • Scikit-learn (pip install scikit-learn)
  • SciPy (pip install scipy)
  • Statsmodels (pip install statsmodels)
  • Matplotlib (pip install matplotlib)
  • Seaborn (pip install seaborn)
  • Sympy (pip install sympy)

You can start with this article that I wrote in Heartbeat magazine (on the Medium platform):

“Some Essential Hacks and Tricks for Machine Learning with Python”

Essential tutorial-type notebooks on Pandas, NumPy, and visualizations

Jupyter notebooks covering a wide range of functions and operations on NumPy, Pandas, Seaborn, Matplotlib, etc.
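As a flavor of what these tutorial notebooks cover, here is a minimal sketch of typical NumPy/Pandas operations (the column names and data are illustrative, not from any specific notebook):

```python
import numpy as np
import pandas as pd

# Build a small DataFrame from a random NumPy array
rng = np.random.default_rng(42)
df = pd.DataFrame(rng.normal(size=(5, 3)), columns=["a", "b", "c"])

# Typical operations: summary statistics, boolean filtering,
# and creating a derived column
summary = df.describe()
positives = df[df["a"] > 0]
df["d"] = df["a"] + df["b"]
print(df.shape)  # (5, 4)
```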

Regression related Notebooks


Classification related Notebooks


Clustering related Notebooks

  • K-means clustering (Here is the Notebook).
  • Affinity propagation (showing its time complexity and the effect of damping factor) (Here is the Notebook).
  • Mean-shift technique (showing its time complexity and the effect of noise on cluster discovery) (Here is the Notebook).
  • DBSCAN (showing how it can generically detect areas of high density irrespective of cluster shapes, which k-means fails to do) (Here is the Notebook).
  • Hierarchical clustering with dendrograms, showing how to choose the optimal number of clusters (Here is the Notebook).
  • Clustering metrics better than the elbow-method (Here is the Notebook).
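To illustrate the DBSCAN-vs-k-means point above, here is a minimal sketch (not taken from the notebooks) using Scikit-learn's two-moons dataset, whose non-convex clusters k-means cannot separate with its straight-line boundaries:

```python
from sklearn.cluster import DBSCAN, KMeans
from sklearn.datasets import make_moons

# Two interleaving half-moons: clusters with non-convex shapes
X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)

# k-means splits the moons with a linear boundary
km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# DBSCAN groups points by density and recovers each moon intact
db_labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(X)

print(sorted(set(db_labels)))  # cluster ids; -1 would mark noise points
```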

Dimensionality reduction related Notebooks


Complexity and Learning curve analysis

Complexity and learning curve analyses are an essential part of the visual analytics that a data scientist must perform on the available dataset to compare the merits of various ML algorithms.

Learning curve: A graph that compares the performance of a model on training and testing data over a varying number of training instances. Performance should generally improve as the number of training points increases.

Complexity curve: A graph that shows model performance on the training and validation sets for varying degrees of model complexity (e.g. the degree of a polynomial for linear regression, the number of layers or neurons for a neural network, or the number of estimator trees for a boosting algorithm or random forest).

  • Complexity and learning curve with Lending club dataset (Here is the Notebook).
  • Complexity and learning curve with a synthetic dataset using the Hastie function from Scikit-learn (Here is the Notebook).
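Scikit-learn provides both curves directly via `learning_curve` and `validation_curve`. Here is a minimal sketch on a synthetic dataset (the decision-tree model and parameter ranges are illustrative choices, not those used in the notebooks):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import learning_curve, validation_curve
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)

# Learning curve: cross-validated score vs. number of training instances
sizes, lc_train, lc_val = learning_curve(
    DecisionTreeClassifier(random_state=0), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5)

# Complexity curve: score vs. model complexity (tree depth here)
depths = range(1, 8)
cc_train, cc_val = validation_curve(
    DecisionTreeClassifier(random_state=0), X, y,
    param_name="max_depth", param_range=depths, cv=5)

print(sizes.shape, cc_train.shape)  # (5,) (7, 5)
```

Plotting the mean of each score matrix against `sizes` (or `depths`) yields the familiar train/validation curve pair.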

Random data generation using symbolic expressions

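The core idea can be sketched with SymPy (listed in the requirements): compile a user-supplied symbolic expression into a fast NumPy function, then sample it with added noise. The expression and noise level below are illustrative assumptions:

```python
import numpy as np
import sympy as sp

# Any user-supplied symbolic expression in one variable
x = sp.Symbol("x")
expr = sp.sin(x) * sp.exp(-0.1 * x)

# Compile the expression into a vectorized NumPy function
f = sp.lambdify(x, expr, modules="numpy")

# Sample random inputs and add Gaussian noise to the outputs
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=200)
y = f(X) + rng.normal(scale=0.1, size=200)
print(X.shape, y.shape)  # (200,) (200,)
```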

Simple deployment examples (serving ML models on web API)


Object-oriented programming with machine learning

Implementing some of the core OOP principles in a machine learning context by building your own Scikit-learn-like estimator, and making it better.

Here is the complete Python script with the linear regression class, which can do fitting, prediction, computation of regression metrics, plotting of outliers, diagnostic plots (linearity, constant variance, etc.), and computation of variance inflation factors.
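As a flavor of the approach, here is a stripped-down sketch of such a Scikit-learn-style estimator class (a hypothetical minimal version, not the full script linked above, which adds metrics, diagnostics, and VIF computation):

```python
import numpy as np

class MyLinearRegression:
    """Minimal Scikit-learn-style linear regressor (illustrative sketch)."""

    def _add_bias(self, X):
        X = np.asarray(X, dtype=float)
        return np.hstack([np.ones((X.shape[0], 1)), X])

    def fit(self, X, y):
        # Solve the least-squares problem with an intercept column
        self.coef_, *_ = np.linalg.lstsq(self._add_bias(X), y, rcond=None)
        return self  # returning self enables chaining, as in Scikit-learn

    def predict(self, X):
        return self._add_bias(X) @ self.coef_

    def score(self, X, y):
        # R-squared, matching the Scikit-learn regressor convention
        resid = y - self.predict(X)
        ss_tot = (y - y.mean()) @ (y - y.mean())
        return 1.0 - (resid @ resid) / ss_tot

# Usage: fit on synthetic data with known coefficients
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = 3 * X[:, 0] - 2 * X[:, 1] + 1 + rng.normal(scale=0.1, size=100)
model = MyLinearRegression().fit(X, y)
print(model.score(X, y))  # close to 1.0 on nearly noiseless data
```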

I created a Python package based on this work, which offers a simple, Scikit-learn-style interface along with deep statistical inference and residual analysis capabilities for linear regression problems. Check it out here.

See my articles on Medium on this topic.