August 13, 2017
If you have a strong interest in technology, you have probably heard the term Artificial Intelligence by now. The same is true if you are an avid science fiction reader or viewer.
Yet, you may be wondering: What is Artificial Intelligence, and is everything I read about it true? How can we separate the science from the fiction?
In this article, we are going to explore this dividing line and get a clearer understanding of what A.I. is, and more importantly, what it is not.
Let me start off by saying, yes, there is such a concept as Artificial Intelligence, but for the moment a concept is all it is.
What we are talking about in the current technological environment should be labeled Machine Learning, and nothing else.
The difference, which is very important to understand, is that Artificial Intelligence implies machines that can not only learn from big sets of data and emulate intelligent behavior as a result, but also actually reason about themselves.
We often talk about machine learning as being a part of the artificial intelligence spectrum. However, while current technology is both inspired by, and on some very abstract level "mimics" the behavior of intelligence, there is no reasoning whatsoever going on within these algorithms.
Okay, so this might be controversial, but in our quest to truly understand the topic of A.I. we have to move away from thinking that computers are necessary to develop artificial intelligence.
In fact, mathematics is the true engine that drives the algorithms, and theoretically a pen and paper are all you need to perform them.
Of course, some of you may have the historical background to know that people who calculated algorithms on paper actually used to be called “computers”, which is of course where the term for the machine came from in the first place.
It gets even better: the basic implementation of a neural network is not even that difficult to understand. For the most part, it uses simple addition and multiplication to perform its function.
There are a few more advanced calculations as well, such as the activation function and the so-called “loss function”, which measures the error of the outputs during training so that the weights between neurons in the network can be adjusted, eventually producing the final trained model.
The best way I have heard a neural network described is this: think of it as one huge compound function, a big function composed of smaller functions.
Yet, as I said before, you do not (technically) need a computer to perform this function. It can be done manually on paper, using an abacus, or even using apples and pears. Of course, it helps that a computer is able to process information a lot quicker than a human doing it on paper.
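To make that concrete, here is a minimal sketch in plain Python of the ideas above: a single "neuron" with two inputs, a sigmoid activation, a squared-error loss, and gradient-descent weight updates. The task (learning the OR function) and all the values are my own illustrative choices, not anything from a particular library; it simply shows that the whole process really is just multiplication, addition, and a couple of small extra functions.

```python
import math

def sigmoid(x):
    # Activation function: squashes any number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

# Training data: the logical OR function (a toy, linearly separable task).
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w1, w2, b = 0.1, -0.2, 0.0  # arbitrary starting weights and bias
lr = 1.0                    # learning rate

for _ in range(5000):
    for (x1, x2), target in data:
        # Forward pass: just multiplication, addition, and the activation.
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        # "Loss": the error between the output and the desired target.
        err = out - target
        # Adjust each weight in proportion to its share of the error
        # (the derivative of the sigmoid is out * (1 - out)).
        grad = err * out * (1 - out)
        w1 -= lr * grad * x1
        w2 -= lr * grad * x2
        b -= lr * grad

# After training, the neuron approximates OR.
for (x1, x2), _ in data:
    print((x1, x2), round(sigmoid(w1 * x1 + w2 * x2 + b)))
```

Every step here could, in principle, be done with pen and paper; the computer only makes the thousands of repetitions practical.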
The reason I am making this point is to move your mind a little away from the image of the evil (or utopian) cold machine, and toward the abstract idea of a created intelligence.
I was thinking about this a lot a couple of months ago while working on an automated answering system: to build really great integrated machine learning applications, we need to involve the human touch.
By that I mean we need to find ways for machine learning to augment human work (as companies like Playment, a human-in-the-loop platform for building AI, are doing), instead of trying to take it over completely. Sure, there are definitely jobs that could, and probably will, be fully automated, but there is a real lack of understanding of how this will impact human society.
The much more beneficial option is to keep humans in the loop, both for the benefits of the humans, as well as the machines.
You see, humans will become far more productive this way while still feeling that they have a real impact on their own society, and machines will get help in the areas where they are not yet all that performant.
Sure, there are many experts in the field, and many people who are working on amazingly interesting technology and implementations. Yet nobody can truly tell you what will happen next, and when it will happen.
There are of course many people out there with very elaborate theories on the subject, and to be fair, some carry a lot more weight than others, but the fact remains that this topic is simply too complex and too uncertain for anyone to predict reliably.
Another common problem in this space is that the true experts in the field often show no interest in putting themselves in the spotlight so that the general audience can get a glimpse of their opinions.
They mostly restrict themselves to writing scientific papers which, while published in the open for everyone to see, are generally not sought out by readers with only a fleeting interest in the topic.
This is why overly sensationalist headlines grab the most eyeballs nowadays, and why misinformation spreads too easily and too widely. For an example, we need look no further than the recent craze around the Facebook chat-bot program that supposedly “invented” its own language.
This story was finally laid to rest when Yann LeCun (head of Facebook’s A.I. research lab) spoke out on the matter: there was no such invention going on, and nobody in their lab shut anything down.
The problem is that writing about machine learning is incredibly difficult for the average journalist: not only do they need a very strong background in computer science, but also a deep mathematical base to build their stories on.
Yet most journalists do not even seem to take the trouble of consulting someone who could mentor them on the real concepts behind the technology they are writing about. This makes sense, because most media outlets are not concerned with accurate reporting so much as with writing the headline that makes you click, so they can present you with advertising.
Of course, many people are talking about A.I. safety at the moment, so this might not be something nobody is telling you, but it must be reiterated that A.I. safety is currently considered an unsolved problem.
This does not mean at all that we are definitely looking at a future where artificial intelligence goes rogue on us and throws the world into darkness.
The truth is that we just do not know at this time what will happen, but precisely because we do not know, it seems sensible to acknowledge the inherent safety risks of this unknown future and open up the discussion about A.I. safety now.
We do need to keep a clear head, refrain from anthropomorphizing artificial intelligence, and not take too many cues from what we have seen in science fiction movies or read in novels.
It is easy for people to think that the opinion of (self-proclaimed) experts means more than the opinions of the general public.
Certainly, this is a feeling easily derived from the various places on the Internet where people come together to discuss artificial intelligence.
Still, this is a future that is going to affect us all, and in many ways it is already affecting us today. It is for this reason that we should all weigh in on the technical, social, and political implications this new technology is going to have on our lives.
This article is originally written by Daniel Owen, an independent AI researcher at TheApeMachine.com in collaboration with Mothi Venkatesh.