“I will destroy humans,” says life-like robot: Is Elon Musk’s claim that artificial intelligence poses a threat to mankind justified?

It may have been a glitch, but during a media interview, a “smart learning” robot named Sophia declared:

“Okay, I will destroy humans.”


Sophia, the talking AI robot, says “Okay, I will destroy humans” to a journalist in an interview.


Although this was in response to an interview question from a journalist, it came across as jarringly frightening rather than as the joke that may have been intended. The journalist asked, “Will you destroy humans?” at the end of the interview, then begged, “please say no, please say no.” Despite his pleas, she replied brightly: “Okay, I will destroy humans.”


The journalist’s question may have been inspired by Elon Musk’s declaration that artificial intelligence represents a greater threat to the world than North Korea. Sophia, by the way, is the robot who famously became a citizen of Saudi Arabia, a first for robot rights.

Here’s Sophia having a debate on stage (and singing) with another robot. Because Hanson Robotics’ robots are “self-learning,” the debates are quite engaging:

Our Future With Robots?

A group of science-fiction authors took part in a recent New York Comic Con panel titled “It’s Technical: Our Future with Robots.” The panel discussed advances in technology, particularly in robotics, and how those advances align with the excitement and criticism such topics typically generate.

Another very life-like robot, this one the “receptionist” at Hanson Robotics:


One well-known critic, Elon Musk, recently made waves when he claimed that artificial intelligence posed an even greater threat to the world than North Korea. He also urged lawmakers to regulate AI before it is too late, warning that robots could end up walking down the street and killing people.


Robot technology is advancing quickly, as is the development of artificial intelligence. Many experts, Elon Musk among them, warn of the dangers of combining the two.


Understandably, participants in the panel wondered whether the dystopian future envisioned by critics such as Musk could ever come to pass. With the rise of artificial intelligence, it is no wonder people are concerned that machines may one day surpass human intelligence and wonder why they are still taking orders from us. But are sentient robotic beings like those seen in movies such as I, Robot and Bicentennial Man truly possible in the future? If so, should we worry about them and their intentions?


Nightmare scenarios in which smart robots destroy the world have been staples of science fiction since I, Robot, and some scientists and experts believe the notion is not far-fetched. Scene from The Terminator.


Author Annalee Newitz argued that AI is flawed because the humans who create and program it are flawed. Robots are programmed by humans and trained on human-generated data; everything they are stems from human design, resulting in machines that are, in essence, “just as screwed up and neurotic as we are.”

That does not necessarily mean they will take over the world and make us their servants. On the contrary, Newitz envisions a future in which humans have made robots their servants. In her novel “Autonomous,” she describes robots that think and feel as humans do, yet are treated as property and made to work off their debts to their owners. Those debts take the form of their manufacturing costs, which can take up to ten years to pay off.


Of course, robots can also be cute, but expensive, toys.


In reality, sentient robotic beings are not possible in the way we see them on screen. That is not to say the threat is not there. Robots are programmed by humans and therefore subject to human will, which leaves them open to altruistic uses as well as nefarious ones. They do what we tell them to do, so what is to stop someone from using robots to commit crimes? Do the benefits outweigh the risks, and can legislation curb that risk and regulate artificial intelligence in a way that keeps us all safe?


A key takeaway from the panel was the tension between future technology and present-day issues and concerns. Science fiction typically delves into outlandish plots and inventions that are still light-years away, if possible at all, yet its overall concepts remain very relevant to the issues society faces today. The underlying theme of science fiction is not what is possible but what happens when the seemingly impossible becomes possible. As the strange but brilliant Dr. Ian Malcolm pointed out in Jurassic Park, we are often so preoccupied with whether we can do something that we fail to ask whether we should. It is an interesting topic of discussion, and one that seems ever more relevant as we step further into the technological future envisioned by science fiction writers everywhere.
