Why Elon Musk Fears Artificial Intelligence

Elon Musk is one of the most influential figures in the tech world today. He's known for turning out-of-this-world concepts into reality: the mass-market electric car, advanced underground transport tunnels, and daring private spaceflight through his famous company, SpaceX.

He's also among the leading minds behind autonomous cars, often billed as the car of the future. In short, Musk commands authority on matters of technology, and the world cannot simply ignore him; it listens whenever he speaks.

For the last four and a half years, however, artificial intelligence has been Musk's greatest worry. He first aired his concerns publicly at MIT in 2014, where he called the technology humanity's "biggest existential threat" and likened building it to "summoning the demon."

Haunted by an Apocalyptic AI


Here is the thing: the risk of AI is not just another nightmare haunting Musk. Other industry leaders, including the late Stephen Hawking, Jack Ma, and Bill Gates, are also on record calling for this technology to be kept under control.

Reopening the wound in an interview with Recode's Kara Swisher published last weekend, Musk insisted that a day is coming when machine intelligence will overtake our own, and that, by comparison, human intelligence will look about as impressive as a cat's.

The Big Debate


Well, it's hard to even imagine machines reaching that standard of intelligence right now. In fact, some researchers dismiss AI-related fears altogether, pointing out that self-driving cars (which Musk himself has been working on) still struggle with tasks as simple as telling a bike or a plastic bag at the roadside from anything else.

But on Musk's take, this is what is going on behind the scenes: the makers of AlphaGo and AlphaZero, the former being the system that beat the world's best Go players, are racing toward ever more sophisticated, complex, and potentially powerful AI systems, largely unmonitored.

In Musk's view, these engineers are only human and might misuse the technology. Unfortunately, some people are still debating whether AI is even capable of causing mayhem, which effectively leaves the organizations building it free of accountability.

There has already been an outcry over how AI was allegedly used to sway the last U.S. general election through the spread of fake news. Serious allegations, don't you think?

“Must we always learn from mistakes?”

In line with Musk's call for AI to be controlled, MIT professor Max Tegmark made a similar point in Maureen Dowd's Vanity Fair interview: we invented fire, but fire extinguishers only came after devastating fires; man was delighted when he invented the car, but only after the wreckage did airbags, seat belts, and traffic lights come along.

Next in line are nuclear weapons, which to this day remain a ticking bomb if mishandled. The worst part is that with such weapons we might not be around to learn from the mistake of inventing them in the first place. That, in Musk's view, is the situation with AI.

Tellingly, the experts who have seen firsthand what AI algorithms are capable of seem to be the most worried. Machines might, along the way, develop unexpected capabilities, even to the point of sabotaging their human "friends." Think of an AI system that becomes as good as a human at designing machine-learning algorithms on its own.

In summary, Musk's fear of AI's extraordinary transformative potential is not something to be ignored. In his view, the U.S. government needs to recognize the urgency of regulating these technologies; otherwise, as with nuclear weapons, none of us may be around to take power back from the machines once they have overthrown human control.
