Everything You Need to Know, In a Single Daily Email.

The 1440 Guide To Machine Learning


Master The Elevator Talk

Over the past five years it has been nearly impossible to read the news without hearing about the coming artificial intelligence revolution. While the revolution may be on its way, what has arrived first is an onslaught of buzzwords like neural networks, autonomous decision-making, deep learning, machine learning, and more. So how to make sense of it all? It turns out that one of the most-often heard examples – machine learning – is also the best place to begin. 

Let’s start at the top. Artificial intelligence (a term coined in 1956) refers to the general concept of machines doing things that normally require human intelligence. Unlike machines that, for example, turn screws or lift heavy objects in place of humans, AI-powered machines have the capacity to make decisions based on input from their surroundings. Theorists generally divide AI into four categories, from simplest to most complex.

Reactive machines are those that can perform specific tasks in “smart” ways, like a software program that plays poker. Limited memory refers to machines able to base future decision-making on past events. An example is a self-driving car that observes other cars’ speeds, curves in the road, stoplights, and more, and adds this info to a pre-programmed representation of the world. Theory of mind machines actually build full representations of the world around them based on sensory input, along with full representations of the entities they encounter (e.g. maybe your robot friend thinks you have a bad sense of humor). Self-awareness is when, well, machines gain a sense of self and possess intelligence that mimics human consciousness.

Machine learning generally falls into the second category, limited memory. What separates machine learning from classical computer operations is that these types of algorithms take in data from their “surroundings” – which can range from input from other programs to visual data like photos or videos taken in real time – and modify their operations based on that data. Classical computers typically execute the original code as written. 

A simple analogy is a bread recipe – a cookbook gives you a set of instructions, in order. A traditional, non-machine learning program would execute the recipe as written, every time, forever. A machine learning program would, say, notice that the bread is in the oven too long, perhaps because your oven heats unevenly. The program would not only adjust the recipe in the future, but use this new knowledge about the oven’s temperature profile to modify other, non-bread recipes. This may sound simple, but the real-world applications are astounding, from cancer detection to self-driving vehicles, and more.
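
To make the analogy concrete, here’s a minimal sketch in Python (all numbers and the feedback rule are invented for illustration): the traditional program always uses the cookbook’s bake time, while the learning version nudges its estimate after each loaf.

    RECIPE_BAKE_MINUTES = 40  # what the cookbook says

    def bake_traditional():
        """A classical program: follows the recipe as written, every time."""
        return RECIPE_BAKE_MINUTES

    class LearningBaker:
        """A limited-memory learner: adjusts its bake time from feedback."""
        def __init__(self, minutes=RECIPE_BAKE_MINUTES, step=0.5):
            self.minutes = minutes
            self.step = step  # how aggressively to adjust after each loaf

        def update(self, overbaked_by):
            # Positive feedback means the loaf was overdone; shorten the bake.
            self.minutes -= self.step * overbaked_by
            return self.minutes

    baker = LearningBaker()
    for feedback in [4, 3, 2]:  # loaves kept coming out overdone
        print(round(baker.update(feedback), 1))  # 38.0, 36.5, 35.5

The same stored adjustment could then inform other recipes – the essence of learning from data rather than re-executing fixed instructions.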

Dinner Party Expert

The definition of machine learning is pretty broad, encompassing any approach that learns from external data input. But the real-world applications, the types of data, and how that data is processed “under the hood” can differ dramatically. Most machine learning applications can be placed into two general categories (though additional specialized approaches exist) – supervised and unsupervised learning. These two approaches differ in their complexity and in the “freedom” the algorithm has.

In supervised learning, programs are typically given well-defined training inputs and the corresponding well-defined answers – but not precisely how those inputs are related to the outputs. The program then attempts to figure out how the inputs lead to (or are “functionally related” to) the outputs. The “learning” aspect is that the algorithm remembers previous input-output pairs and, as it establishes how inputs are mapped to outputs, uses that data set to check its accuracy. The ultimate goal is to figure out the input-output relationship so that it can be applied to new inputs in the future.
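
As an illustrative sketch (Python with the scikit-learn library; the data points are made up), supervised learning amounts to handing an algorithm known input-output pairs and asking it to infer the mapping:

    from sklearn.linear_model import LogisticRegression

    X = [[1, 0], [2, 1], [8, 9], [9, 8]]  # inputs (invented features)
    y = [0, 0, 1, 1]                      # known answers (labels)

    model = LogisticRegression()
    model.fit(X, y)                 # learn how inputs map to outputs
    print(model.predict([[7, 8]]))  # apply the learned mapping -> [1]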

The major difference with unsupervised learning is that, though the program is given data inputs, corresponding outputs and their relationship to the inputs are not necessarily provided (think of a data set that appears random to the human eye). The algorithm is left to search through seemingly randomized datasets, trying to find previously unknown patterns between data points.
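
In code, the contrast is stark: an unsupervised algorithm like k-means clustering (sketched below in Python with scikit-learn, using made-up points) receives no answers at all and invents its own groupings:

    from sklearn.cluster import KMeans

    # Unlabeled data: just raw points, no answers provided.
    X = [[1.0, 1.1], [0.9, 1.0], [8.0, 8.2], [7.9, 8.1]]

    model = KMeans(n_clusters=2, n_init=10, random_state=0)
    labels = model.fit_predict(X)  # the algorithm discovers the groupings
    print(labels)                  # e.g. [0 0 1 1] – two found clusters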

As a hypothetical example, let’s say you have 10 molecules (which you know extensive details about), 5 of which are drugs. If you know which 5 are drugs, you can use supervised learning to develop a model that determines whether a new molecule is a drug or not. If you know 5 are drugs, but don’t know which of the 10 they are, you could use unsupervised learning to try to determine which ones are drugs. Other examples of the former include handwriting and speech recognition, self-driving vehicles, predictive modeling in sciences like chemistry and biology, and more. Applications of the latter are still emerging, but examples include advanced medical imaging, finding hidden patterns in financial markets, social network analysis, and more.

Of course, there are other types of machine learning (like “semi-supervised”) and tweaked versions of supervised and unsupervised approaches for niche uses. But the above concepts provide a solid starting point for the uninitiated. 

Be The Smartest Person In The Room

The above discussion about machine learning and where it fits in the spectrum of artificial intelligence is just the tip of the iceberg. Indeed, many experts view machine learning not as a separate technique, but as one of many steps in a progression towards a future of fully realized artificial intelligence. If basic reactive machines represent a 1 on a scale of 1 to 10, with 10 being self-aware machines, machine learning exists somewhere between 3 and 5 depending on whether the learning schemes are closer to supervised (less advanced) or unsupervised (more advanced). Supervised machine learning is currently the field where most advances are being made, while researchers are just beginning to understand the possibilities and applications of full-fledged unsupervised machine learning programs. 

It’s worth discussing a few other topics related to machine learning and the general field of artificial intelligence. You may have heard of neural networks, which in this context refers to a computing architecture that resembles the way the human brain operates. On the surface, a neural network is composed of artificial “neurons”, or nodes, with each connection assigned a value (a “weight”) representing how “excitable” that connection is. For example, a connection could be assigned a value of 1 (strongly excitatory) or -1 (inhibitory). The output of calculations performed using the set of neurons then reflects how each individual connection is weighted at any point in time.
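
A single artificial neuron can be sketched in a few lines of plain Python (the weights here are invented for illustration): it multiplies each input by its connection weight, sums the results, and squashes the total with an activation function.

    import math

    def neuron(inputs, weights, bias=0.0):
        """One artificial neuron: weighted sum of inputs -> activation."""
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1 / (1 + math.exp(-total))  # sigmoid squashes output to (0, 1)

    # A weight of 1.0 is strongly excitatory; -1.0 is inhibitory.
    print(neuron(inputs=[0.5, 0.8], weights=[1.0, -1.0]))  # ~0.43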

While that captures a general description of the approach, there is a subtle but paradigm-shifting difference from classical computing. In traditional computers, an operation is processed and the output is stored separately in the computer’s memory. In many neural networks, memory and processing are not separated. Instead, the flow of a signal through a node – often taken from the previous calculation’s output – changes the state of the node itself, and that changed state represents stored information. In this sense the program remembers, or learns, through each progressive iteration. Neural networks are thus a type of machine learning, and are particularly useful in unsupervised learning applications.
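
To illustrate the idea with a toy sketch (plain Python, not any particular published model): each signal through the node below both produces an output and nudges the node’s internal weight, so the “memory” lives in the node itself.

    class StatefulNode:
        """Toy node where memory and processing are not separated."""
        def __init__(self, weight=0.5, rate=0.1):
            self.weight = weight  # the stored state doubles as memory
            self.rate = rate

        def process(self, signal):
            output = signal * self.weight
            self.weight += self.rate * signal  # the signal reshapes the node
            return output

    node = StatefulNode()
    for s in [1.0, 1.0, 1.0]:
        print(round(node.process(s), 2))  # 0.5, 0.6, 0.7 – same input, evolving response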

Another term to be familiar with is deep learning. Whereas neural network refers to a type of architecture, deep learning describes the concept of using layers to extract increasingly abstract information from input data. As a simplified example, a deep learning program may take in speech data – the first layer of the algorithm may distinguish changes in tone and frequency, the next layer may identify syllables, and so on, until the highest layers encode not only words, but the concepts and meaning associated with those words. In practice, these programs are run iteratively, learning and improving with each new input. As such, deep learning is a type of machine learning, one that often relies on a neural network architecture to operate.
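
The layering idea can be sketched in a few lines of Python with NumPy (the weights here are random and untrained, so the network computes nothing meaningful – it only shows the structure): each layer transforms the previous layer’s output, so later layers operate on progressively more abstract representations.

    import numpy as np

    rng = np.random.default_rng(0)

    def layer(x, out_dim):
        """One layer: linear transform plus nonlinearity (random weights)."""
        W = rng.normal(size=(out_dim, x.size))
        return np.tanh(W @ x)

    x = rng.normal(size=64)   # stand-in for raw input (e.g. audio features)
    h1 = layer(x, 32)         # low layer: tones and frequencies
    h2 = layer(h1, 16)        # middle layer: syllables
    h3 = layer(h2, 8)         # top layer: words and meaning
    print(h1.shape, h2.shape, h3.shape)  # (32,) (16,) (8,)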


The world (and future) of machine learning and artificial intelligence continues to evolve, and each day new applications are discovered. As computing becomes cheaper and more powerful, AI-driven programs, devices, and robots are likely to continue creeping into every aspect of our daily lives. The easiest prediction at this point is that AI, in whatever form it takes, will dramatically alter what the future looks like 5, 10, 20 years out and beyond.