The Evolution of AI: Three Big Shifts

Karen Hao writes MIT Technology Review’s “The Algorithm” newsletter, and in the latest edition she presents the very interesting results of an analysis she completed of 16,625 AI research papers from the arXiv preprint server. She finds three major trends: “a shift toward machine learning during the late 1990s and early 2000s, a rise in the popularity of neural networks beginning in the early 2010s, and the growth in reinforcement learning in the past few years.” For those of you who do not get her newsletter, below are her major conclusions.


Hao:

In the ’80s, knowledge-based systems amassed a popular following thanks to the excitement surrounding ambitious projects that were attempting to re-create common sense within machines. But as those projects unfolded, researchers hit a major problem: there were simply too many rules that needed to be encoded for a system to do anything useful. This jacked up costs and significantly slowed ongoing efforts.

Machine learning became an answer to that problem. Instead of requiring people to manually encode hundreds of thousands of rules, this approach programs machines to extract those rules automatically from a pile of data. Just like that, the field abandoned knowledge-based systems and turned to refining machine learning.
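To make that contrast concrete, here is a minimal, hypothetical sketch in Python using scikit-learn (the task and data are made up for illustration and have nothing to do with Hao’s analysis). Where a knowledge-based system would require someone to hand-write the rules, a learning algorithm extracts them from labeled examples:

```python
# Toy "spam filter" example: instead of hand-coding rules such as
# "if a message has more than 5 links, flag it," we let a decision
# tree infer the rules from a pile of labeled data.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row describes a message: [number of links, number of ALL-CAPS words]
X = [[0, 0], [1, 0], [8, 5], [6, 7], [0, 1], [9, 2]]
y = [0, 0, 1, 1, 0, 1]  # labels: 1 = spam, 0 = not spam

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

# The "rules" were never written by a person; they were extracted from data.
print(export_text(model, feature_names=["links", "caps_words"]))
print(model.predict([[7, 3]]))  # a new message with 7 links -> [1] (spam)
```

The shift is in where the rules come from: the programmer supplies examples, and the system derives the logic.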


Hao:

Competition stayed steady through the 2000s before a pivotal breakthrough in 2012 led to another sea change. During the annual ImageNet competition, intended to spur progress in computer vision, a researcher named Geoffrey Hinton, along with his colleagues at the University of Toronto, achieved the best accuracy in image recognition by an astonishing margin of more than 10 percentage points.

The technique he used, machine learning through deep neural networks (better known as deep learning), sparked a wave of new research, first within the vision community and then beyond. As more and more researchers began using it to achieve impressive results, its popularity—along with that of neural networks—exploded.
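For readers who have never looked under the hood, here is a heavily simplified, hypothetical sketch in Python using PyTorch. It is nowhere near the scale of Hinton’s 2012 ImageNet network (AlexNet), but it shows the same basic idea: stacked (“deep”) layers that transform raw pixels into class scores:

```python
# A minimal convolutional network in the spirit of the 2012 ImageNet
# result (vastly simplified; the architecture here is illustrative only).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # learn low-level image features
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # compose them into higher-level ones
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),                    # map learned features to 10 class scores
)

# One fake 32x32 RGB image through the network -> 10 class logits
logits = model(torch.randn(1, 3, 32, 32))
print(logits.shape)  # torch.Size([1, 10])
```

The depth is the point: each layer builds on the representations learned by the one before it, which is what let these models finally outperform hand-engineered vision features.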


Hao:

As well as the different techniques in machine learning, there are three different types: supervised, unsupervised, and reinforcement learning. Supervised learning, which involves feeding a machine labeled data, is the most commonly used and also has the most practical applications by far. In the last few years, however, reinforcement learning, which mimics the process of training animals through punishments and rewards, has seen a rapid uptick of mentions in paper abstracts.

The idea isn’t new, but for many decades it didn’t really work. Then, just as with deep learning, one pivotal moment suddenly placed it on the map. That moment came in October 2015, when DeepMind’s AlphaGo, trained with reinforcement learning, defeated the world champion in the ancient game of Go. The effect on the research community was immediate.
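To illustrate the reward-and-punishment idea in its simplest form, here is a toy, hypothetical sketch of tabular Q-learning in Python, one classic reinforcement-learning algorithm. It is emphatically not how AlphaGo worked (AlphaGo combined reinforcement learning with deep neural networks and tree search), but the trial-and-error loop Hao describes is the same in spirit:

```python
# Tiny reinforcement-learning example: an agent on a 5-cell line must
# learn, purely from rewards and penalties, to walk right to the goal.
import random

N_STATES, GOAL = 5, 4          # states 0..4; reaching state 4 is rewarded
actions = [-1, +1]             # step left or step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.5   # learning rate, discount, exploration

for _ in range(500):           # 500 episodes of trial and error
    s = 0
    while s != GOAL:
        if random.random() < epsilon:
            a = random.choice(actions)                 # explore
        else:
            a = max(actions, key=lambda a: Q[(s, a)])  # exploit best-known action
        s2 = min(max(s + a, 0), N_STATES - 1)          # move, staying on the line
        r = 1.0 if s2 == GOAL else -0.01               # reward at goal, tiny penalty elsewhere
        # Standard Q-learning update toward the observed reward
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
        s = s2

# After training, the greedy policy should step right (+1) from every state
print([max(actions, key=lambda a: Q[(s, a)]) for s in range(GOAL)])  # -> [1, 1, 1, 1]
```

No one tells the agent which moves are good; it discovers them by acting, being rewarded or penalized, and updating its estimates, which is the training-by-reward process Hao compares to training animals.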

The Future

Hao’s conclusions about the past few decades and what they suggest about future research are interesting:

Every decade, in other words, has essentially seen the reign of a different technique: neural networks in the late ’50s and ’60s, various symbolic approaches in the ’70s, knowledge-based systems in the ’80s, Bayesian networks in the ’90s, support vector machines in the ’00s, and neural networks again in the ’10s.

The 2020s should be no different…meaning the era of deep learning may soon come to an end. But characteristically, the research community has competing ideas about what will come next—whether an older technique will regain favor or whether the field will create an entirely new paradigm.

It’s fascinating to consider that the technique most in vogue at any given moment in AI will in all likelihood be obsolete in a decade or so. It’s a humbling thought but also one worth considering carefully, since each evolution not only dramatically increases the power of AI but also shifts its focus. In other words, just as it’s hard to predict how AI will work twenty years from now, it is equally hard to predict what its purpose and focus will be. This conclusion reminds me of a discussion I had last year with a CIO, who noted that no one can predict how AI, and the personal data companies collect about us today, will be used in the future. Hao’s interesting analysis confirms once again the need to think carefully about not just the current but also the potential future uses of private data.

Carlos Alvarenga

Founder and CEO at KatalystNet and Adjunct Professor in the Logistics, Business and Public Policy Department at the University of Maryland’s Robert E. Smith School of Business.
