A Fun Approach to Machine Learning
Machine Learning is like sex in high school. Everyone is talking about it, a few know what to do, and only your teacher is doing it. If you have ever tried to read articles about machine learning on the Internet, you most likely stumbled upon two types of them:
- Thick academic trilogies filled with theorems (I couldn’t even get through half of one) or
- Fishy fairytales about artificial intelligence, data-science magic, and jobs of the future
So here it is, a simple introduction for those who always wanted to understand machine learning. Only real-world problems, practical solutions, simple language, and no high-level theorems. One article, for everyone. Whether you are a programmer or a manager.
Classical Machine Learning
The first methods came from pure statistics in the 1950s. They solved formal math tasks: searching for patterns in numbers, evaluating the proximity of data points, and calculating vectors' directions. Nowadays, half of the Internet runs on these algorithms.
Big tech companies are huge fans of neural networks. Obviously: for them, 2% extra accuracy means an additional 2 billion in revenue. But when you are small, it doesn't make sense. I've heard stories of teams spending a year on a new recommendation algorithm for their e-commerce website, only to discover that 99% of traffic came from search engines. Their algorithms were useless. Most users didn't even open the main page.
Classical machine learning is often divided into two categories:
- Supervised and
- Unsupervised Learning
In supervised learning, the machine has a "supervisor" or a "teacher" who gives the machine all the answers, like whether it's a cat in the picture or a dog. The teacher has already divided (labeled) the data into cats and dogs, and the machine uses these examples to learn. One by one, dog by cat.
There are two types of such tasks:
- Classification — an object’s category prediction, and
- Regression — prediction of a specific point on a numeric axis.
Classification refers to a predictive modeling problem where a class label is predicted for a given example of input data.
Today used for:
- Spam filtering,
- Language detection,
- Sentiment analysis,
- Handwritten character and digit recognition, and
- Fraud detection

Popular algorithms:
- Logistic Regression
- Naive Bayes
- K-Nearest Neighbors
- Decision Tree
- Support Vector Machines
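To make the classification idea concrete, here is a minimal sketch of one of the algorithms listed above, K-Nearest Neighbors, written in plain Python. The "cat"/"dog" training points are made-up toy data, not from any real dataset:

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by a majority vote among the k nearest labeled points."""
    # train is a list of ((x, y), label) pairs; sort by distance to the query
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy labeled data: two well-separated groups
train = [((1, 1), "cat"), ((1, 2), "cat"), ((2, 1), "cat"),
         ((8, 8), "dog"), ((8, 9), "dog"), ((9, 8), "dog")]

print(knn_predict(train, (2, 2)))  # a point near the "cat" group
```

The "teacher" here is the labeled training list; the machine just measures distances and lets the neighbors vote.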
Regression is basically classification where we forecast a number instead of a category. Examples are car price by mileage, traffic by time of the day, demand volume by the growth of the company, etc. Regression is perfect when something depends on time.
When the line is straight — it’s a linear regression, when it’s curved — polynomial. These are two major types of regression. The other ones are more exotic. Logistic regression is a black sheep in the flock. Don’t let it trick you, as it’s a classification method, not regression.
Today used for:
- Predictive Analytics,
- Operation Efficiency,
- Supporting Decisions,
- Correcting Errors, and
- Finding New Insights

Popular algorithms:
- Linear Regression
- Decision Tree
- Support Vector Regression
- Lasso Regression
- Random Forest
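The "car price by mileage" example above can be sketched as a tiny linear regression. This is a plain least-squares fit in pure Python; the mileage and price numbers are invented for illustration:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b with a single feature."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

# Hypothetical data: price drops as mileage grows
mileage = [10, 20, 30, 40, 50]            # thousands of km
price = [9000, 8000, 7000, 6000, 5000]    # dollars

a, b = fit_line(mileage, price)
print(a, b)  # slope -100.0 dollars per thousand km, intercept 10000.0
```

Predicting a price for unseen mileage is then just `a * x + b`: a number on an axis, not a category.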
Unsupervised learning means the machine is left on its own with a pile of animal photos and a task to find out who’s who. Data is not labeled, there’s no teacher, the machine is trying to find any patterns on its own. We’ll talk about these methods below. Clearly, the machine will learn faster with a teacher, so it’s more commonly used in real-life tasks.
The main type of such tasks is Clustering.
Clustering, or cluster analysis, is a machine learning technique that groups an unlabeled dataset. It can be defined as "a way of grouping the data points into different clusters, so that each cluster consists of similar data points."

Today used for:
- For market segmentation,
- To merge close points on a map,
- For image compression,
- To analyze and label new data, and
- To detect abnormal behavior.
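Here is a minimal sketch of clustering without a teacher: a bare-bones k-means on made-up 2D points. No labels go in; the algorithm alternates between assigning points to the nearest center and moving each center to the mean of its points:

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: assign each point to its nearest center,
    then recompute each center as the mean of its cluster."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centers[i]))
            clusters[nearest].append(p)
        centers = [tuple(sum(coord) / len(cl) for coord in zip(*cl)) if cl
                   else centers[i]  # keep an empty cluster's old center
                   for i, cl in enumerate(clusters)]
    return centers

# Unlabeled toy data: two visible groups
points = [(1, 1), (1, 2), (2, 1), (8, 8), (9, 8), (8, 9)]
print(sorted(kmeans(points, 2)))
```

The two returned centers land in the middle of the two groups, even though nothing in the input said how many "cats" or "dogs" there were.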
Finally, we get to something that looks like real artificial intelligence. In many articles, reinforcement learning is placed somewhere between supervised and unsupervised learning.
Reinforcement learning is used in cases when your problem is not related to data at all, but you have an environment to live in. Like a video game world or a city for a self-driving car.
Neural Networks

Any neural network is basically a collection of neurons and connections between them. A neuron is a function with a bunch of inputs and one output. Its task is to take all the numbers from its inputs, apply a function to them, and send the result to the output.
Here is an example of a simple but useful in real life neuron: sum up all numbers from the inputs and if that sum is bigger than N — give N-1 as a result. Otherwise — zero.
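That toy neuron translates directly into a few lines of Python. The threshold N is arbitrary; 10 is used here just for illustration:

```python
def neuron(inputs, n=10):
    """The toy neuron from the text: sum all inputs;
    if the sum is bigger than N, output N - 1, otherwise 0."""
    total = sum(inputs)
    return n - 1 if total > n else 0

print(neuron([4, 3, 5]))  # 12 > 10, so the neuron "fires" with 9
print(neuron([1, 2, 3]))  # 6 <= 10, so it stays silent: 0
```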
Connections are like channels between neurons. They connect the outputs of one neuron to the inputs of another so they can send numbers to each other. Each connection has only one parameter: weight. It's like the strength of a connection for a signal. When the number 10 passes through a connection with a weight of 0.5, it turns into 5. These weights tell the neuron to respond more to one input and less to another. Weights are adjusted during training; that's how the network learns. Basically, that's all there is to it.
To prevent the network from falling into anarchy, the neurons are linked by layers, not randomly. Within a layer neurons are not connected, but they are connected to neurons of the next and previous layers. Data in the network goes strictly in one direction — from the inputs of the first layer to the outputs of the last.
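The one-way, layer-by-layer flow described above can be sketched in a few lines. The weights here are made up; a real network would learn them during training:

```python
def layer(inputs, weights):
    """One fully connected layer: output neuron j sums its weighted
    inputs, where weights[j][i] links input i to neuron j."""
    return [sum(w * x for w, x in zip(ws, inputs)) for ws in weights]

def forward(x, layers):
    """Data flows strictly one way: through each layer in order."""
    for w in layers:
        x = layer(x, w)
    return x

# A weight of 0.5 halves the signal, as in the text: 10 turns into 5
print(layer([10], [[0.5]]))  # [5.0]

# Two layers: 2 inputs -> 2 hidden neurons -> 1 output
net = [
    [[0.5, 0.5], [1.0, -1.0]],  # hidden layer weights
    [[1.0, 2.0]],               # output layer weights
]
print(forward([10, 4], net))  # [19.0]
```

Note that neurons within a layer never talk to each other; each one only reads the previous layer's outputs.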
If you throw in a sufficient number of layers and set the weights correctly, you get the following: when you feed in, say, an image of the handwritten digit 4, the black pixels activate the associated neurons, those activate the next layers, and so on, until the output responsible for the four finally lights up. The result is achieved.
I hope you can now start getting a feel for this field. Please note this amazing masterpiece is adapted from https://vas3k.com/.
Feel free to use this for your presentations, sessions, workshops, and other educational purposes.