
Machine Learning’s Evolution in the 20th Century


Machine learning is more widely recognized and better understood today than ever before, but we wanted to take a look back at what made it what it is today.

So, why are we looking back on all of this? You might be an accomplished data scientist, a Head of Machine Learning, or a CTO who has been living and breathing machine learning for a significant amount of time.

Well, the evolution of AI and machine learning is an ongoing process, one made up of many different (and often moving) parts. So here we're going to look back at what was researched and discussed in the field several decades ago, and at how those developments contributed to machine learning as we know it today.

Cracking Codes 

Alan Turing is a key figure in the early stages of computer science and indeed artificial intelligence. As part of his studies at the University of Cambridge, UK, he wrote his landmark paper on computable numbers and the Entscheidungsproblem (decision problem). Intrigued by the evolving science of computing, he then went on to complete a PhD in mathematical logic at Princeton University.

Turing went on to create the eponymous universal Turing machine and, most famously, was part of the team responsible for designing the Bombe, an electromechanical code-breaking machine that helped decipher German Enigma-encrypted messages during the Second World War. The Bombe allowed Turing and his team to decipher a staggering 84,000 encrypted messages per month.

Referred to as a pioneer of modern-day artificial intelligence, Turing likened the human brain to a digital computing machine, believing that the cortex goes through a process of training and learning throughout a person's life, much like the training of a computer that we know today as machine learning.


Getting Into Gear

Beyond Turing's personal contribution to the field of computer science, we saw several interesting developments in subsequent decades.

In the late 50s, Frank Rosenblatt, an American psychologist, created the Perceptron, an algorithm for the supervised learning of binary classifiers, while working at the Cornell Aeronautical Laboratory. The Perceptron paved the way for the concept of neural networks.
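
At its core, the Perceptron is a simple learning rule: predict with a linear boundary and nudge the weights whenever a prediction is wrong. A minimal sketch in Python (the function name and toy data below are our own, purely illustrative):

```python
import numpy as np

def train_perceptron(X, y, epochs=10, lr=0.1):
    """X: (n_samples, n_features); y: labels in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # Predict with the current linear boundary.
            pred = 1 if xi @ w + b >= 0 else -1
            # Update the weights only when the prediction is wrong.
            if pred != yi:
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Toy usage: learn logical AND, a linearly separable problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([-1, -1, -1, 1])
w, b = train_perceptron(X, y)
```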

In the late 60s, the Nearest Neighbor algorithm was conceived, initially used to map routes, and it went on to become a bedrock of the pattern recognition utilized in machine learning today.
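
The idea is easy to state: classify a new point with the label of the closest known point. A minimal 1-nearest-neighbor sketch (the names and toy data are ours, not from any particular implementation):

```python
import numpy as np

def nearest_neighbor(X_train, y_train, x_query):
    # Euclidean distance from the query to every training point.
    distances = np.linalg.norm(X_train - x_query, axis=1)
    # Return the label of the closest one.
    return y_train[np.argmin(distances)]

X_train = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
y_train = np.array(["a", "a", "b"])
print(nearest_neighbor(X_train, y_train, np.array([4.2, 4.8])))  # -> "b"
```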

Subsequently, the Finnish mathematician and computer scientist Seppo Linnainmaa published his seminal paper on what became known as backpropagation, at the time called reverse-mode AD (automatic differentiation). The method has since become widely used in deep learning, with applications such as the backpropagation of errors in multi-layer perceptrons.
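
Reverse-mode differentiation is easiest to see on a tiny network: run a forward pass, then propagate gradients backwards through each step with the chain rule. A minimal sketch (the shapes, names, and squared-error loss are our illustrative choices, not Linnainmaa's notation):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(3,))      # input
W1 = rng.normal(size=(4, 3))   # hidden-layer weights
W2 = rng.normal(size=(1, 4))   # output-layer weights
t = np.array([1.0])            # target

# Forward pass.
h = np.tanh(W1 @ x)
y = W2 @ h
loss = 0.5 * np.sum((y - t) ** 2)

# Backward pass: reverse-mode differentiation via the chain rule.
dy = y - t                          # dL/dy
dW2 = np.outer(dy, h)               # dL/dW2
dh = W2.T @ dy                      # dL/dh
dW1 = np.outer(dh * (1 - h ** 2), x)  # dL/dW1, using tanh'(z) = 1 - tanh(z)^2
```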

In the late 80s, British academic Christopher Watkins developed the Q-Learning algorithm, a straightforward way for agents to learn how best to act in controlled environments. Q-Learning greatly improved the practicality of reinforcement learning, a key element of machine learning, given that learning from experience is an essential aspect of intelligence, both human and artificial.
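
The heart of Q-Learning is a single update: move the value of a state-action pair toward the observed reward plus the discounted value of the best next action. A minimal sketch (the toy environment, constants, and names here are hypothetical; only the update rule is Watkins'):

```python
import numpy as np

# Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9  # learning rate and discount factor

def q_update(s, a, r, s_next):
    """One temporal-difference update toward the observed outcome."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])

# Example transition: in state 0, action 1 gave reward 1.0 and led to state 2.
q_update(0, 1, 1.0, 2)
```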

Building Momentum

In 1995, computer scientist Tin Kam Ho published a paper describing random decision forests, which laid the groundwork for the random forest method widely used in data science today.

In the same year, Corinna Cortes, a Danish computer scientist who now heads Google Research, and Vladimir Vapnik, a Russian computer scientist, published a paper on support-vector networks, which gave rise to support-vector machines (SVMs): supervised learning methods used today for classification, regression, and outlier detection.
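
To make that concrete, here is how SVMs are commonly used for classification today via scikit-learn (a modern implementation we've chosen for illustration, not the code from the 1995 paper):

```python
from sklearn import svm

# Toy 2-D data with two classes.
X = [[0, 0], [1, 1], [2, 2], [3, 3]]
y = [0, 0, 1, 1]

# Fit a maximum-margin linear classifier.
clf = svm.SVC(kernel="linear")
clf.fit(X, y)

print(clf.predict([[2.5, 2.5]]))  # -> [1]
```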

Later on in the decade, the German computer scientists Sepp Hochreiter and Jürgen Schmidhuber invented long short-term memory (LSTM), a recurrent neural network architecture utilized in deep learning. LSTMs can learn order dependence in sequence prediction problems and are used today in areas such as machine translation and speech recognition.

In 1998, a team led by French computer scientist Yann LeCun released the MNIST (Modified National Institute of Standards and Technology) database, a large dataset of handwritten digits. The database has since become a benchmark for evaluating handwriting recognition and for training various image processing systems.
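
For a sense of its shape, here is one common way to load MNIST today, via the loader bundled with Keras (a modern convenience, not part of the original 1998 release):

```python
from tensorflow.keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
print(x_train.shape)  # (60000, 28, 28): 60,000 training images of 28x28 pixels
print(y_test.shape)   # (10000,): digit labels 0-9 for the 10,000 test images
```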


That marks the end of our first overview of developments in machine learning in the 20th century. We think it’s important to go back to the roots of machine learning and to acknowledge the various figures and academics who’ve contributed to the field over the years. 

Looking back at how far machine learning has come over the decades allows us to appreciate and acknowledge its ongoing evolution. In a future post, we'll look at the more recent significant developments in AI from the 21st century.
