The Future of Artificial Intelligence

By Benjamin Rudolph

Today’s “artificial intelligence” is a limited form, often called “narrow” or “weak” AI, which can only solve the specific problems it was designed for. A good example of narrow AI is Siri: it has no genuine intelligence and works from pre-defined patterns. Nevertheless, weak AI is becoming more and more impressive, especially at bridging the gap between what a human says and what they really want.

The next stage we’re looking forward to is the so-called “Artificial General Intelligence,” also known as “strong AI” or “AGI,” which would perform at the same, or even higher, cognitive level as humans.

Is it really worth the risk to create such a powerful artificial intelligence?

If you think about it logically, such a system could solve several problems practically “in no time,” compared with the effort humans would need for complex problems like climate prediction, understanding the universe, or many other unsolved questions.

But many researchers, experts and public figures, like Elon Musk, Bill Gates and Stephen Hawking, agree that AI could be extremely dangerous if it gets out of control. Even our narrow AI can cause severe trouble; for example, the so-called “flash crash” of May 2010 caused a big dip in the market.

Let’s take a look at two scenarios which I think are most likely to happen:

  1. AI doing something disastrous.

AI-driven autonomous weapons, programmed to kill, could easily cause mass destruction and mass casualties. It would be very difficult to simply find a “turn-off button” for weapons designed to resist being disabled by the enemy.

  2. Achieving a goal, no matter the cost.

This can happen if the AI’s goals are not aligned with ours. Imagine you are sitting in a superintelligent car and ask it to take you to the airport as fast as possible. It might get you there by helicopter; that wasn’t what you wanted, but it still fulfilled its task.
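The misalignment in this scenario can be sketched in a few lines of Python. The options and their scores are invented purely for illustration: the point is that an objective which says only “fastest” will happily pick an option the human never intended, while an objective that also encodes the human’s real constraints will not.

```python
# Hypothetical travel options; the numbers and flags are made up for illustration.
options = {
    "car":        {"minutes": 45, "acceptable": True},
    "helicopter": {"minutes": 10, "acceptable": False},  # fast, but not what you meant
}

# Naive goal: minimize travel time, and nothing else.
naive_choice = min(options, key=lambda o: options[o]["minutes"])

# Aligned goal: minimize travel time among options the human would actually accept.
aligned_choice = min(
    (o for o in options if options[o]["acceptable"]),
    key=lambda o: options[o]["minutes"],
)

print(naive_choice)    # helicopter
print(aligned_choice)  # car
```

The toy example shows why alignment is a specification problem: the “helicopter” outcome isn’t a malfunction, it is the literal optimum of the goal we wrote down.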

I personally think that creating artificial intelligence with the same or even better intellectual and cognitive abilities than humans have is a kind of evolutionary step that mankind needs to undergo.

Benjamin Rudolph

Student at Technikum Wien University
