Implementation of a Support Vector Machine trained with the Sequential Minimal Optimization (SMO) algorithm.

It follows Platt's SMO algorithm (Platt, John (1998). "Sequential Minimal Optimization: A Fast Algorithm for Training Support Vector Machines") with the improvements of Keerthi et al. (S.S. Keerthi, S.K. Shevade, C. Bhattacharyya & K.R.K. Murthy (2001). "Improvements to Platt's SMO Algorithm for SVM Classifier Design").
Requires:

- NumPy
The following parameter values are the defaults:

```python
from SVM import SVM

model = SVM(C=1.0, kernel='rbf', degree=3, gamma='auto', coef0=0.0, tol=1e-3, max_iter=-1)
```

The parameters have the same meaning as those of `SVC` in the scikit-learn package (link), but `kernel` must be one of `'linear'`, `'poly'`, `'rbf'`, or a callable.
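As a sketch of what the default `'rbf'` kernel computes, assuming `gamma='auto'` means `1 / n_features` as it does in scikit-learn (an assumption about this implementation, not confirmed by it):

```python
import numpy as np

def rbf_kernel(X, Z, gamma=None):
    # Assumption: gamma='auto' corresponds to 1 / n_features, as in scikit-learn.
    if gamma is None:
        gamma = 1.0 / X.shape[1]
    # Squared Euclidean distance between every row of X and every row of Z.
    sq_dists = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * sq_dists)

X = np.array([[0.0, 0.0], [1.0, 1.0]])
K = rbf_kernel(X, X)
# Diagonal entries are 1, since each point is at distance 0 from itself.
```

A callable with a signature like `rbf_kernel(X, Z)`, returning the kernel matrix between the rows of `X` and `Z`, is the shape of function one would pass as `kernel` if the class mirrors scikit-learn's convention.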
This implementation supports only two classes, labeled -1 and 1.
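If your labels come as 0/1 (as in many datasets), they must be mapped to -1/1 before training. A minimal NumPy sketch:

```python
import numpy as np

y01 = np.array([0, 1, 1, 0])
# Map {0, 1} -> {-1, 1} with a single affine transform.
y = 2 * y01 - 1
```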
Since Keerthi's article proposes two training methods, this implementation provides both as separate fit methods.

First method:

```python
model.fit_modification1(X_train, y_train)
```

Second method:

```python
model.fit_modification2(X_train, y_train)
```

Classify samples:

```python
y_hat = model.predict(X_test)
```

Get the value of w * x - b:

```python
decision = model.decision_function(X)
```

Get the accuracy on the given data and labels:

```python
score = model.score(X, y)
```
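These three methods are related in the usual way for a binary SVM: the prediction is the sign of the decision value w * x - b, and the score is mean accuracy. A minimal NumPy sketch of that relationship, using made-up decision values rather than this SVM class:

```python
import numpy as np

# Hypothetical decision values w * x - b for four samples.
decision = np.array([2.3, -0.7, 0.1, -1.5])
y_true = np.array([1, -1, -1, -1])

y_hat = np.sign(decision)          # predicted labels in {-1, 1}
score = np.mean(y_hat == y_true)   # fraction of correct predictions
```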