Supervised and Reinforcement Learning

Supervised Machine Learning Problems and Solutions

The most straightforward tasks fall under the umbrella of supervised learning. In supervised learning, we have access to examples of correct input-output pairs that we can show to the machine during the training phase. The common example of handwriting recognition is typically approached as a supervised learning task: we show the computer a number of images of handwritten digits along with the correct labels for those digits, and the computer learns the patterns that relate images to their labels.
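
As a minimal sketch of this setup (assuming scikit-learn and its bundled digits dataset, neither of which is named in the original text), training a supervised classifier on labeled digit images might look like this:

    # Minimal supervised-learning sketch; scikit-learn is an assumed dependency.
    # The bundled digits dataset stands in for the labeled handwriting examples.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    digits = load_digits()                     # 8x8 grayscale images of digits
    X, y = digits.data, digits.target          # inputs (pixel values) and correct labels

    # Hold out some labeled pairs to check how well the learned patterns generalize.
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = LogisticRegression(max_iter=2000)  # one possible supervised learner
    model.fit(X_train, y_train)                # learn patterns relating images to labels
    print("held-out accuracy:", model.score(X_test, y_test))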

Learning how to perform tasks in this way, by explicit example, is relatively easy to understand and straightforward to implement, but there is a crucial limitation: we can only do it if we have access to a dataset of correct input-output pairs. In the handwriting example, this means that at some point a human has to label the images in the training set. This is laborious and often infeasible, but where such data does exist, supervised learning algorithms can be extremely effective at a broad range of tasks.

Supervised machine learning tasks can be broadly classified into two subgroups: regression and classification. Regression is the problem of estimating or predicting a continuous quantity. What will the value of the S&P 500 be one month from today? How tall will a child be as an adult? How many of our customers will leave for a competitor this year? These are examples of questions that fall under the umbrella of regression. To solve such a problem in a supervised machine learning framework, we would gather past examples of “right answer” input/output pairs for the same problem. For the inputs, we would identify features that we believe carry predictive information about the outcome we wish to predict.

For the first problem, we might gather as features the historical prices of the stocks in the S&P 500 on given dates, along with the value of the S&P 500 one month after each date. This would form our training set, from which the machine would try to determine some functional relationship between the features and the eventual S&P 500 value.
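
As an illustrative sketch (the feature matrix below is synthetic placeholder data, not real market data, and scikit-learn is again an assumed dependency), fitting a regression model to such input/output pairs might look like this:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Synthetic stand-ins for the training set: each row holds the feature values
    # observed on one past date, and the target is the index value one month later.
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(200, 5))        # 200 past dates, 5 features each
    true_weights = np.array([0.5, -1.0, 2.0, 0.0, 0.3])
    y_train = X_train @ true_weights + rng.normal(scale=0.1, size=200)

    model = LinearRegression()
    model.fit(X_train, y_train)                # learn a functional relationship

    X_today = rng.normal(size=(1, 5))          # feature values observed today
    print("predicted value one month out:", model.predict(X_today)[0])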

Classification deals with assigning observations to discrete categories rather than estimating continuous quantities. In the simplest case there are two possible categories; this case is known as binary classification. Many important questions can be framed as binary classification. Will a given customer leave us for a competitor? Does a given patient have cancer? Does a given image contain a hot dog? Algorithms for binary classification are particularly important because many algorithms for the more general case, where there are arbitrarily many categories, are simply collections of binary classifiers working together. For instance, a simple solution to the handwriting recognition problem is to train one binary classifier per digit: a 0-detector, a 1-detector, a 2-detector, and so on, each of which outputs its certainty that the image shows its respective digit. The overall classifier then outputs the digit whose detector reports the highest certainty.
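
One sketch of this one-vs-rest construction (again assuming scikit-learn and its digits dataset; the original text does not prescribe a particular library) is to train ten binary detectors and report the most confident one:

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression

    digits = load_digits()
    X, y = digits.data, digits.target

    # Train one binary "is this digit d?" detector per class.
    detectors = []
    for d in range(10):
        clf = LogisticRegression(max_iter=2000)
        clf.fit(X, (y == d).astype(int))       # label is 1 if the image shows digit d, else 0
        detectors.append(clf)

    def classify(image):
        # Each detector reports its certainty that the image is of its own digit;
        # the combined classifier returns the digit with the highest certainty.
        certainties = [clf.predict_proba(image.reshape(1, -1))[0, 1] for clf in detectors]
        return int(np.argmax(certainties))

    print("predicted:", classify(X[0]), "actual:", y[0])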