Classification Algorithms for Supervised Learning on Popular Datasets

Published:

This is a collection of the classification algorithms I implemented as part of the Machine Learning course at Texas A&M. To touch on them briefly: I started by implementing a Bayes classifier on a noisy Iris dataset and achieved an accuracy of 78%. For another assignment, I compared SVM and neural-network classifiers on a noisy MNIST dataset. Here, I improved the classifiers by employing data augmentation (adding rotated MNIST digits to the training set) and reached the following accuracies: (i) SVMs with sigmoid/RBF kernels, 89%, and (ii) a 3-layer neural network with the Adam optimizer and batch normalization, 85%. I then applied the classifiers to medical datasets such as fMRI and EEG recordings, though the accuracy there remained low. As a side exercise, I also used the Stanford CS-231n course implementation of a CNN to obtain 75% accuracy on the CIFAR-10 dataset. A description of some of the problems, experiments, and the codebase is available here.
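
To give a flavor of the Iris experiment, the minimal sketch below trains a Gaussian Bayes classifier on a noisy copy of the Iris dataset using scikit-learn. The additive Gaussian noise level (sigma = 0.3) and the train/test split are assumptions for illustration; the original assignment's noise model is not described in this post.

```python
# Sketch: Gaussian Bayes classifier on a noisy version of Iris.
# The noise scale (0.3) and the 70/30 split are assumptions for illustration.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)
X_noisy = X + rng.normal(scale=0.3, size=X.shape)  # additive feature noise

X_tr, X_te, y_tr, y_te = train_test_split(X_noisy, y, test_size=0.3, random_state=0)
clf = GaussianNB().fit(X_tr, y_tr)
print("Accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```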
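
The MNIST augmentation idea can be sketched as follows: append rotated copies of the training digits before fitting an RBF-kernel SVM. The rotation angles, the subset size, and the SVM hyperparameters below are assumptions chosen to keep the example small and tractable; the post only states that rotated MNIST data was used for augmentation.

```python
# Sketch: rotation-based augmentation of MNIST followed by an RBF-kernel SVM.
# Angles, subset size, and C/gamma are illustrative assumptions.
import numpy as np
from scipy.ndimage import rotate
from sklearn.datasets import fetch_openml
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X, y = X[:10000] / 255.0, y[:10000]  # small subset to keep the SVM tractable

def augment_with_rotations(images, labels, angles=(-15, 15)):
    """Append copies of each image rotated by the given angles (degrees)."""
    stacks = [images]
    for angle in angles:
        rotated = rotate(images.reshape(-1, 28, 28), angle,
                         axes=(1, 2), reshape=False)
        stacks.append(rotated.reshape(-1, 784))
    return np.vstack(stacks), np.concatenate([labels] * (len(angles) + 1))

X_tr, y_tr = augment_with_rotations(X[:8000], y[:8000])
X_te, y_te = X[8000:], y[8000:]

clf = SVC(kernel="rbf", C=10, gamma="scale").fit(X_tr, y_tr)
print("Accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```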
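
For the neural-network baseline, the post specifies only "3 layers, Adam optimizer, batch normalization". The sketch below shows one way to set that up; PyTorch, the layer widths, and the learning rate are assumptions, not the course's actual code.

```python
# Sketch: 3-layer fully connected MNIST classifier with batch normalization,
# trained with Adam. Framework, widths, and learning rate are assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 256), nn.BatchNorm1d(256), nn.ReLU(),
    nn.Linear(256, 128), nn.BatchNorm1d(128), nn.ReLU(),
    nn.Linear(128, 10),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One optimization step on a batch of flattened 28x28 images."""
    optimizer.zero_grad()
    loss = criterion(model(images.view(images.size(0), -1)), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```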