The rise of A.I.: How far has artificial intelligence come since 2010?
2010 does not seem that long ago. Facebook was already a gigantic, time-consuming leviathan; smartphones and the iPad were part of people’s daily lives; The Walking Dead was a big hit on television throughout America; and the most talked-about popular musical artists were Taylor Swift and Justin Bieber. So, pretty much like life as we enter 2020, then? Maybe in some ways.
However, one area where things have definitely changed in leaps and bounds is the field of artificial intelligence. Over the past decade, A.I. has made a number of huge advances, both technically and in the public consciousness, that mark this as one of the most important ten-year stretches in the history of the field. What have been the biggest advances? Funny you should ask; I have just written a list on exactly that topic.
IBM Watson triumphs at Jeopardy!
For most people, few things say “A.I. is here” quite like an artificial intelligence beating two champion Jeopardy! players on prime-time television. That is exactly what happened in 2011, when IBM’s Watson computer routed Brad Rutter and Ken Jennings, the two highest-earning American game show contestants of all time, during the popular quiz show.
It is easy to dismiss striking public displays of machine intelligence as more hype-driven spectacle than serious, objective demonstration. What IBM had developed, however, was genuinely impressive. Unlike a game such as chess, with its rigid rules and limited board, Jeopardy! is far less predictable. Questions can be about anything and often involve complex wordplay, such as puns.
“I had taken A.I. classes and knew that the kind of technology that could beat a human at Jeopardy! was still decades away,” Jennings told me when I was writing my book Thinking Machines. “Or at least I thought it was.” At the end of the match, Jennings scribbled a sentence on his answer board and held it up for the cameras. It read: “I, for one, welcome our new computer overlords.”
Here come the smart assistants
October 2011 is remembered by most Apple fans as the month in which co-founder and CEO Steve Jobs died at the age of 56. However, it was also the month in which Apple revealed its A.I. assistant Siri with the iPhone 4S.
The concept of an A.I. you could communicate with through spoken words had been dreamed about for decades. Remarkably, a former Apple CEO had predicted a Siri-style assistant back in the 1980s, getting the date of Siri’s arrival right almost to the month. But Siri was still a remarkable achievement. Admittedly, its initial implementation had some notable shortcomings, and Apple has arguably never quite succeeded in offering a flawlessly smart assistant. Nevertheless, it introduced a new type of technology that was quickly picked up on by everything from Google Assistant to Microsoft’s Cortana to Samsung’s Bixby.
Of all the tech giants, Amazon has arguably done the most with the A.I. assistant since then. Its Alexa-powered Echo speakers have not only shown the potential of these A.I. assistants; they have demonstrated that the assistants are compelling enough to exist as standalone hardware. Nowadays, voice assistants are so common that they barely register. Ten years ago, most people had never used one.
Deep learning goes into overdrive
Deep neural networks are not entirely an invention of the 2010s. The basis for today’s artificial neural networks can be traced back to a 1943 paper by researchers Warren McCulloch and Walter Pitts. Much of the theoretical work underlying neural networks, such as the groundbreaking back-propagation algorithm, was pioneered in the 1980s. Some of the advances that led directly to modern deep learning were in place as early as the 2000s, with work such as Geoff Hinton’s advances in unsupervised learning.
But the 2010s are the decade in which the technology became mainstream. In 2010, researchers George Dahl and Abdel-rahman Mohamed demonstrated that deep learning speech recognition tools could outperform the industry’s state-of-the-art approaches. Then the floodgates opened. From image recognition (for example, the famous Jeff Dean and Andrew Ng work on identifying cats) to machine translation, barely a week went by without the world being reminded of how powerful deep learning could be.
Nor was it just a matter of better PR, like an unknown artist finally finding fame and fortune after decades of toiling away at the same work in obscurity. The 2010s are the decade in which the amount of available data exploded, making it possible to harness deep learning in a way that simply would not have been possible at an earlier point in history.
DeepMind blows our mind
Of all the companies doing great A.I. work, DeepMind deserves its own mention on this list. Founded in September 2010, the deep learning company was one most people had never heard of until Google purchased it for what seemed like an insane $500 million in January 2014. DeepMind has more than made up for that obscurity in the years since.
Much of DeepMind’s most public work has involved the development of game-playing A.I.s capable of mastering computer games, ranging from classic Atari titles such as Breakout and Space Invaders (using reinforcement learning algorithms) to, more recently, StarCraft II and Quake III Arena.
These game-playing A.I.s demonstrated the core principle of machine learning: they got better the more they played. In the process, they developed new strategies that in some cases even their human creators were unfamiliar with. All this work contributed to DeepMind’s greatest success of all…
Beating humans at Go
As this list has already shown, there is no shortage of examples of A.I. beating human players at various games. But Go, a Chinese board game in which the aim is to surround more territory than your opponent, was different. Unlike games in which players can be defeated simply by calculating faster than humans can, the total number of board positions possible in Go is staggeringly large: far more than the total number of atoms in the observable universe. That makes brute-force attempts to calculate answers virtually impossible, even with a supercomputer.
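To get a feel for the scale, here is a quick back-of-the-envelope calculation. Each of the 361 points on a 19×19 board can be empty, black, or white, which gives 3^361 as a loose upper bound on the number of board configurations (the number of legal positions is lower, but still astronomically large). The 10^80 figure below is only the common rough estimate of atoms in the observable universe:

```python
import math

# Each of the 19x19 = 361 points can be empty, black, or white,
# giving a loose upper bound on the number of board configurations.
board_points = 19 * 19
upper_bound = 3 ** board_points

# Common rough estimate of the number of atoms in the observable universe.
atoms_in_universe = 10 ** 80

print(f"~10^{math.log10(upper_bound):.0f} board configurations")  # ~10^172
print(upper_bound > atoms_in_universe)                            # True
```

Even this crude count overshoots the atoms estimate by more than ninety orders of magnitude, which is why exhaustive search was never an option.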
Yet DeepMind succeeded. In October 2015, AlphaGo became the first computer Go program to beat a human professional Go player without a handicap on a full-sized 19×19 board. The following year, 60 million people tuned in live to watch the world’s top Go player, Lee Sedol, take on AlphaGo. By the end of the series, AlphaGo had won four of the five games against Sedol.
In November 2019, Sedol announced his intention to retire as a professional Go player. He cited A.I. as the reason: “Even if I become number one, there is an entity that cannot be defeated,” he said. Imagine LeBron James announcing he was quitting basketball because a robot was better at shooting hoops than he was. That is the equivalent!
Cars that drive themselves
In the early years of the twenty-first century, the idea of an autonomous car seemed destined never to make it beyond science fiction. The 2004 book The New Division of Labor, by MIT and Harvard economists Frank Levy and Richard Murnane, described driving a vehicle as too complex a task for machines to perform. “Executing a left turn against oncoming traffic involves so many factors that it is hard to imagine discovering the set of rules that can replicate a driver’s behavior,” they wrote.
In 2010, Google officially unveiled its autonomous car program, now called Waymo. In the decade that followed, dozens of other companies (including tech heavy hitters such as Apple) began developing their own self-driving vehicles. Together, these cars have driven thousands of kilometers on public roads, apparently proving less accident-prone than human drivers in the process.
Infallible full autonomy is still a work in progress, but this was nevertheless one of the most visible demonstrations of A.I. in action during the 2010s.
The emergence of generative adversarial networks
The dirty secret of much of today’s A.I. is that its core algorithms, the technologies that make it work, were developed decades ago. What has changed is the processing power available to run those algorithms and the huge amounts of data they need to train on. Hearing about a genuinely original approach to building A.I. tools is therefore surprisingly rare.
Generative adversarial networks certainly qualify. Often abbreviated as GANs, this class of machine learning system was invented by Ian Goodfellow and colleagues in 2014. No less an authority than A.I. expert Yann LeCun has described it as “the coolest idea in machine learning in the last twenty years.”
Conceptually, the idea behind GANs is fairly simple: take two artificial neural networks and pit them against each other. One network creates something, such as a generated image. The other network then tries to work out which images are computer-generated and which are not. Over time, this adversarial process pushes the “generator” network to become good enough at creating images that it can consistently fool the “discriminator” network.
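The alternating loop described above can be sketched in miniature. The following is a deliberately tiny toy version, not anything like a real GAN implementation: the “generator” here is a single linear unit emitting numbers, the “discriminator” is a one-variable logistic classifier, and the “real” data is just numbers clustered around 4.0. The hand-derived gradient updates follow the standard GAN losses, but every concrete choice (the 1-D data, the shapes, the learning rate) is invented purely for illustration:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # Numerically safe logistic function.
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    ex = math.exp(x)
    return ex / (1.0 + ex)

def real_sample():
    # "Real" data: numbers clustered around 4.0.
    return random.gauss(4.0, 0.5)

w, b = 1.0, 0.0  # generator: G(z) = w*z + b, fed uniform noise z
a, c = 1.0, 0.0  # discriminator: D(x) = sigmoid(a*x + c)

lr = 0.05
for step in range(5000):
    z = random.uniform(-1.0, 1.0)
    x_real = real_sample()
    x_fake = w * z + b

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(a * x_real + c)
    d_fake = sigmoid(a * x_fake + c)
    a -= lr * (-(1 - d_real) * x_real + d_fake * x_fake)
    c -= lr * (-(1 - d_real) + d_fake)

    # Generator update: push D(G(z)) toward 1, i.e. fool the discriminator.
    d_fake = sigmoid(a * (w * z + b) + c)
    w -= lr * (-(1 - d_fake) * a * z)
    b -= lr * (-(1 - d_fake) * a)

# After training, the generator's output should have drifted toward the
# real data's mean of 4.0, because that is what fools the discriminator.
fake_mean = sum(w * random.uniform(-1, 1) + b for _ in range(1000)) / 1000
print(f"mean of generated samples: {fake_mean:.2f}")
```

The structure is the whole point: each iteration trains the discriminator to separate real from fake, then trains the generator against the discriminator's current judgment, and neither side ever sees a label beyond “real or generated.”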
The power of generative adversarial networks was seen most vividly when a collective of artists used them to create an original A.I.-generated “painting.” The result sold for a shockingly large sum at a Christie’s auction in 2018.