Implementing Your Own Machine Learning Library in Python (NumPy)
Posted by less is more
A Mini Machine Learning Library with Detailed Math Derivations
🛵Motivation
Even though there are a lot of open-source tools out there, I never really understood the mechanisms and math behind them until I reinvented the wheel. Doing so helped me get a firmer grasp of the material and prove to myself that I truly understand it.
Learning these algorithms from powerful open-source machine learning tools like scikit-learn is not practical for beginners, because those tools have complex library dependencies and class-inheritance relationships. So, to help others and myself, I made one. Well, more precisely, a mini one.
🪐Introduction
This is a mini machine learning library written with NumPy. Before starting it, I had already finished a Master's course on machine learning at USC, which gave me the basic knowledge and intuition. But some deeper details were not covered in class, such as SVM (SVC & SVR): only SVC was discussed, and its math derivation covered just the primal and dual representations, which is not enough to actually code one; SVR was the same story. So I decided to implement all the algorithms mentioned in class during the summer break. And, yes, here it is.
The algorithms in this mini machine learning library include:
- Support Vector Classifier (SVC)
- Support Vector Regressor (SVR)
- Ridge Regression
- Nearest Mean
- K-Means
- K-Nearest Neighbors (KNN)
- Perceptron Learning
- MSE Techniques (Classification & Regression)
- Density Estimation (Parametric & Non-parametric)
- ANN
- PCA
🤔More specifically:
SVM. My version of SVC supports nonlinear classification, soft margins, multi-class classification, high-dimensional data, etc. All computation is written as NumPy matrix operations, which keeps it fast. SVR likewise supports nonlinear regression and regression on high-dimensional data.
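To make the "matrix operations" point concrete, here is a minimal, hedged sketch of the kind of vectorized RBF kernel computation a nonlinear SVC relies on; the function name `rbf_kernel_matrix` and its `gamma` default are illustrative choices of mine, not necessarily the repo's API:

```python
import numpy as np

def rbf_kernel_matrix(X1, X2, gamma=0.1):
    """RBF kernel matrix K[i, j] = exp(-gamma * ||X1[i] - X2[j]||^2),
    computed with broadcasting and one matrix product instead of Python loops."""
    sq1 = np.sum(X1 ** 2, axis=1)[:, None]              # (n1, 1) squared row norms of X1
    sq2 = np.sum(X2 ** 2, axis=1)[None, :]              # (1, n2) squared row norms of X2
    sq_dists = sq1 + sq2 - 2.0 * X1 @ X2.T              # (n1, n2) pairwise squared distances
    return np.exp(-gamma * np.maximum(sq_dists, 0.0))   # clip tiny negatives from round-off

# Example: kernel between 5 training points and 3 query points in 4-D
rng = np.random.default_rng(0)
X_train, X_query = rng.normal(size=(5, 4)), rng.normal(size=(3, 4))
print(rbf_kernel_matrix(X_train, X_query).shape)        # (5, 3)
```

Building the whole Gram matrix this way is what lets the kernel evaluations run as a handful of BLAS calls rather than nested loops over samples.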
ANN. This algorithm is organized as a multi-level class hierarchy. At the bottom is the Node class, which holds the basic components such as inputs, outputs, values, gradients, and the forward & backward operations. On top of that, various classes are built, including Placeholder, Add, Linear, Sigmoid, MSE, etc. With these, basic regression and classification models can be assembled.
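As a rough sketch of what such a node hierarchy can look like (the exact signatures and how Placeholder values get fed are my assumptions, not necessarily the repo's design):

```python
import numpy as np

class Node:
    """Base node of the computation graph: tracks inputs, outputs, value, and gradients."""
    def __init__(self, inputs=None):
        self.inputs = inputs or []     # upstream nodes feeding this node
        self.outputs = []              # downstream nodes consuming this node's value
        self.value = None              # result of the forward pass
        self.gradients = {}            # d(loss)/d(input) for each input node
        for node in self.inputs:
            node.outputs.append(self)

    def forward(self):
        raise NotImplementedError

    def backward(self):
        raise NotImplementedError

class Placeholder(Node):
    """Leaf node whose value is supplied externally (data or trainable parameter)."""
    def forward(self, value=None):
        if value is not None:
            self.value = value

    def backward(self):
        # Sum the gradients flowing back from every consumer of this node.
        self.gradients = {self: 0}
        for node in self.outputs:
            self.gradients[self] = self.gradients[self] + node.gradients[self]

class Linear(Node):
    """Affine transform: value = X @ W + b."""
    def __init__(self, X, W, b):
        super().__init__([X, W, b])

    def forward(self):
        X, W, b = (n.value for n in self.inputs)
        self.value = X @ W + b

    def backward(self):
        X, W, b = self.inputs
        self.gradients = {X: 0, W: 0, b: 0}
        for node in self.outputs:
            grad = node.gradients[self]   # (batch, out) gradient w.r.t. this node's output
            self.gradients[X] = self.gradients[X] + grad @ W.value.T
            self.gradients[W] = self.gradients[W] + X.value.T @ grad
            self.gradients[b] = self.gradients[b] + np.sum(grad, axis=0)
```

A trainer would then call forward() over a topologically sorted list of nodes, backward() in reverse order, and update the Placeholder nodes holding weights with their accumulated gradients.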
MSE techniques. This is an aggregation of two similar models: one for classification and one for regression. Both are implemented with an algebraic (closed-form) method and a gradient-descent method, so the module is really four closely related algorithms.
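For the regression case, the two routes can be sketched as follows; the function names and the bias-column convention are illustrative assumptions, not the repo's exact interface:

```python
import numpy as np

def mse_fit_algebraic(X, y):
    """Closed-form (algebraic) MSE solution: w = (X^T X)^+ X^T y, with a bias column appended."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])     # append bias column of ones
    return np.linalg.pinv(Xb.T @ Xb) @ Xb.T @ y       # pseudo-inverse guards against a singular X^T X

def mse_fit_gd(X, y, lr=0.05, epochs=5000):
    """Gradient-descent MSE solution: repeatedly step along -dL/dw for L = mean squared error."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    w = np.zeros(Xb.shape[1])
    n = Xb.shape[0]
    for _ in range(epochs):
        grad = (2.0 / n) * Xb.T @ (Xb @ w - y)        # gradient of the mean squared error
        w -= lr * grad
    return w

# Example: both methods recover roughly the same weights on noisy linear data
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + 0.3 + 0.01 * rng.normal(size=200)
print(mse_fit_algebraic(X, y))   # approx [ 1.5, -2.0,  0.5,  0.3]
print(mse_fit_gd(X, y))          # should be very close to the closed-form answer
```

The classification variant follows the same pattern, with class labels encoded as target values (e.g. ±1) and the sign of the prediction used as the decision.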
Density Estimation. Both parametric and non-parametric approaches are implemented. The parametric method includes a regression version and a classification version based on Maximum Likelihood Estimation (MLE). The non-parametric method covers both Kernel Density Estimation (KDE) and K-Nearest Neighbors (KNN) implementations.
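As a small, hedged illustration of the non-parametric side, here is a one-dimensional Gaussian KDE in plain NumPy; the function name and bandwidth default are mine, and the library's version likely handles more than the 1-D case:

```python
import numpy as np

def gaussian_kde(x_query, x_train, h=0.4):
    """1-D Gaussian kernel density estimate:
    p(x) = (1 / (n * h * sqrt(2*pi))) * sum_i exp(-(x - x_i)^2 / (2 * h^2))"""
    diffs = x_query[:, None] - x_train[None, :]                        # (n_query, n_train) pairwise differences
    kernels = np.exp(-0.5 * (diffs / h) ** 2) / (h * np.sqrt(2 * np.pi))
    return kernels.mean(axis=1)                                        # average kernel mass per query point

# Example: estimate the density of samples drawn from a standard normal
rng = np.random.default_rng(1)
samples = rng.normal(size=500)
grid = np.linspace(-3, 3, 7)
print(np.round(gaussian_kde(grid, samples), 3))   # should roughly follow the N(0, 1) density
```

The KNN-based estimator works the other way around: instead of fixing the bandwidth, it fixes k and lets the volume containing the k nearest neighbors determine the local density.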
👨🏻💻Comments
Of course, this little library cannot compete with open-source tools like scikit-learn, but it is easier to understand the ins and outs of these ML concepts and algorithms from this relatively simple implementation, because there is far less code and less inheritance between objects.
Admittedly, my versions of these algorithms sometimes cannot match scikit-learn in accuracy or speed. I am not entirely sure why, since all the math is written as NumPy matrix operations, which should be fast; still, it lags behind scikit-learn in cases such as using SVC as a multi-class classifier in the digits-recognition experiment.
If you are good at this and have any suggestions, please contact me! If you are a machine learning beginner, I hope this helps!
Repo link (all code, math, and tutorials):
https://github.com/lujiazho/MachineLearningPlayground