This repository is a quick introduction to Support Vector Machines (SVMs) for readers who already have some experience with machine learning techniques.
SVMs are supervised learning models (the class labels are known a priori) used for classification and regression, popularized by Vladimir Vapnik and Alexey Chervonenkis in the 1960s. Soft margin classifiers and the use of kernels in SVMs are relatively recent developments (roughly the last 20 years).
In this presentation I will cover:
- Separating Hyperplanes
- The Maximal Margin Classifier
- Support Vector Classifiers
- Support Vector Machines with K = 2 classes
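To make the topics above concrete, here is a minimal sketch of fitting a 2-class SVM. This uses Python's scikit-learn, which is an assumption on my part (the repository's plotting code referenced below is in R); it is meant only as an illustrative example, not the repository's own code.

```python
# Minimal 2-class SVM sketch using scikit-learn (an assumed dependency,
# not necessarily what this repository uses).
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Two well-separated clusters of points: a K = 2 classification problem.
X, y = make_blobs(n_samples=100, centers=2, random_state=0)

# With kernel="linear" this is a support vector classifier (soft margin,
# controlled by C); swapping in kernel="rbf" gives a nonlinear SVM.
clf = SVC(kernel="linear", C=1.0)
clf.fit(X, y)

# The support vectors are the training points that define the margin.
print(clf.support_vectors_.shape)
print(clf.score(X, y))
```

Lowering `C` widens the margin and tolerates more violations (a softer margin), while a large `C` approaches the maximal margin classifier on separable data.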
Some relevant topics that might be included in future updates to this repository are:
- SVM Regression
- SVM Classification with K > 2 classes
Many thanks to Christopher Olah for a couple of visualizations, taken from his Neural Networks, Manifolds, and Topology blog post, that really help with picturing separation in higher-dimensional feature spaces, and to Michael Hahsler for his decision-boundary plotting function in R. Any other resources I used are included in the citations.
Citations:
- Berwick, Robert. An Idiot's Guide to Support Vector Machines. web.mit.edu/6.034/wwwbob/svm-notes-long-08.pdf.
- Hahsler, Michael. R Code for Comparing Decision Boundaries of Different Classifiers. 17 May 2017, michael.hahsler.net/SMU/EMIS7332/R/viz_classifier.html.
- Hastie, Trevor, et al. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, 2017.
- Olah, Christopher. “Neural Networks, Manifolds, and Topology.” colah's blog, colah.github.io/posts/2014-03-NN-Manifolds-Topology/.