This week, I read an article on the website of the German magazine Technology Review. I really liked it because it neatly summarizes several things that I also believe. And it is not just another sceptical piece: it is written by an expert from the field, with insight and thoughtful opinions, not by someone publishing a rant on a hype topic.
Essay: Die sieben Todsünden der KI-Vorhersagen ("The Seven Deadly Sins of AI Predictions") – Heise-Verlag / Technology Review (Germany)
Original Article: The Seven Deadly Sins of Predicting the Future of AI – by Rodney Brooks
A couple of personal thoughts on the topic: intelligence itself is not actually understood yet. This starts with the debate around animals: which ones are intelligent, which are not, and which behaviour is a sign of intelligence rather than a learned reaction to a specific cause or trigger.
Arthur C. Clarke wrote many years ago: "Any sufficiently advanced technology is indistinguishable from magic." What I mean here is that many recent advancements in AI are certainly impressive, but in the end they are just "technology". Assigning them the label "intelligence" is just an opinion, and one I personally would not agree with.
One specific aspect of the essay is "exponentialism". We human beings struggle to really grasp it. When chess computers were developed and finally even beat the world champion, the triumph of AI was predicted to be imminent. What an over-estimation. Now, with AlphaGo, the same thing is happening again. I certainly appreciate the advancement of this technology, but this machine can only play Go. Yes, it is "clever" enough to learn other board games, but just that. If we humans play a game and someone decides to slightly change a rule (think of the many poker variants), we can easily adjust. AlphaGo has to relearn and adapt in a very lengthy process. Human intelligence is exponentially above those technologies.
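To make the "exponentialism" fallacy concrete, here is a minimal toy sketch (my own illustration, not from the essay) of what happens when early rapid progress is naively extrapolated as if it kept doubling forever. The "capability" units and doubling assumption are entirely hypothetical.

```python
# Toy illustration of "exponentialism": naively extrapolating
# early progress as if it doubled every year, forever.
capability = 1.0  # hypothetical benchmark score, arbitrary units
for year in range(2017, 2027):
    print(year, capability)
    capability *= 2  # the fallacy: assume the doubling never stops

# The naive forecast ends up 2**10 = 1024 times the starting point
# after ten years, whereas real technologies typically follow an
# S-curve: fast early gains that eventually flatten out.
print("naive 10-year forecast factor:", capability)
```

The point is not the specific numbers but the shape of the error: a few years of genuine doubling makes any long-range straight-line-on-a-log-plot prediction look plausible, right up until the curve bends.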
This is what makes it so hard for AI to survive in the real world… we humans don't just follow rules; we are very creative in changing, adapting, or breaking them all the time. Take traffic: if you drive just 5 km to work, you easily need to follow, let's say, 100 rules. But if you watch closely, you will see that we also constantly break many of these rules. Other humans can easily adjust to that behaviour, and still almost nothing happens. OK, almost. I am very sceptical whether we will really see self-driving cars around in five years… and if we do, then only in controlled areas, e.g. on highways.
But this specific example brings us to another aspect… the under-estimation, or more importantly, the expectations placed on technology. Self-driving cars are expected to drive flawlessly. Guys, that will never happen. Of course there will be accidents caused by such cars. But then again: we humans are, overall, terrible drivers… 3,177 people died in car accidents in Germany in 2017, with no self-driving car around yet. Although this sounds like inhuman statistics, the real question will be whether self-driving cars cause fewer accidents than humans do. And I guess that is a likely outcome… in several years… t.b.d.
In summary: I certainly appreciate the recent advances in AI, automated machine learning, and the like, and I am fascinated by some of these technologies. But I am far from hype mode, believing that humans will become 50% or even 80% "obsolete" in many areas, as some people claim. Maybe something like that will happen in the distant future, but I would not bet on it. And I don't believe it will happen in the next 10 years, and likely not during my lifetime.