The world is abuzz with rhetoric about artificial intelligence and machine learning. The two terms are often used interchangeably, and the perception that they are one and the same can lead to confusion. So, what are the differences?
First, let’s consider what AI is not. It is not Skynet (yet), and it is not HAL 9000 (yet), although sometimes IBM Watson appears to be getting there.
In the broadest sense of the term, artificial intelligence is the concept of computers examining data and figuring out for themselves the best way to accomplish a task, or improving on an existing method for doing so. Machine learning is currently the most prominent of these techniques.
So, basically, AI is an all-encompassing term for algorithms that look at data. However, this is too simplistic an idea.
Computers are already “more intelligent” than humans in narrow ways. A computer can calculate pi to many more decimal places than any human can. Without computers, humans would never have been able to sequence the human genome. Yet we do not call those devices AI: they are just computer programs. When a chess program beats a world champion, we do not call the machine “intelligent,” because we understand the algorithms behind its decisions.
What about Cortana, Siri, and Google Assistant? These are tools we interact with by voice to search the web, play music, shop on Amazon, and so on. Are these true science fiction–level AI? Sometimes it may feel like they know what you need, but they are certainly fallible. Then again, so are humans, and we do not question their intelligence.
Does this mean that AI is reached when we no longer understand the process by which a machine arrives at a decision? With this in mind, we look to Google, which is conducting some serious research into learning machines. I do not mean the AI games that it has published, but rather DeepMind, the research company it acquired. DeepMind’s agents have displayed some disturbing behaviours: they tend to become aggressive when playing competitive games.
Facebook, too, is working on AI. Famously, it shut down one experiment when the two bots involved developed their own language to communicate, one that the researchers running the experiment could not decipher. What was interesting was that the bots developed their language in a manner similar to that of humans, creating shortcut versions of phrases to optimise communication. For example, saying the same word five times meant “I want 5 of these, please.” Now, although this may seem a pointless experiment to undertake, it does have pertinence. Machines that can learn idioms and slang will be much more efficient at interacting with humans, to the point that humans will not know they are talking to a machine: passing, in effect, the Turing test.
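To make the shortcut idea concrete, here is a minimal sketch of how a repetition-based encoding could be decoded. This is purely illustrative: the actual protocol the Facebook bots converged on was not published in this form, and the function name and scheme here are assumptions based on the “same word five times” example above.

```python
from collections import Counter

def decode_repetition(message: str) -> dict:
    """Decode a hypothetical repetition-based shortcut message.

    Repeating a word n times is read as a request for n of that item,
    so "ball ball ball ball ball" decodes as a request for 5 balls.
    """
    return dict(Counter(message.split()))

# The shortcut message decodes to item quantities:
print(decode_repetition("ball ball ball ball ball"))  # {'ball': 5}
```

The point of such a scheme is efficiency from the machines’ perspective: quantity is encoded directly in the message structure, with no grammar needed, which is exactly why the result was opaque to the humans observing it.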
Then we have Tay from Microsoft, a Twitter bot designed to interact with people. Tay had to be shut down because, over the course of the experiment, it learned to make racist and inflammatory statements. It is obvious that the machine was not coded to be racist and inflammatory; rather, the bot lacked the boundaries to understand that what it was learning was inappropriate behaviour. Microsoft has deleted the vast majority of the account’s tweets, but you can find a fair representation of the debacle here.
What is slightly worrying is that artificial intelligence appears to display a propensity toward the darker aspects of human nature: cheating, taking shortcuts, and racism. I would have hoped that logic-driven entities would develop compassion before aggression. Then again, AI was created in the image of its creator, so perhaps it is to be expected that it will share the frailties humans have. Perhaps we are destined for a dystopian machine overlord with a superiority complex like HAL or Ultron, rather than a Bigweld or a Jarvis.
Now, this article is a little tongue in cheek, but I personally think that machine learning is as far as we should go until we can build a learning computer that rises above its programming without developing a racist, aggressive, and flawed personality.