Asimov Was Wrong About AI Not Killing or Harming Humans
From:
Dr. Patricia A. Farrell -- Psychologist
For Immediate Release:
Dateline: Tenafly, NJ
Thursday, April 4, 2024

 

Isaac Asimov's three laws, which hold that a robot will not harm a human nor permit a human to be harmed, are wrong.

Photo by Muha Ajjan on Unsplash

Isaac Asimov was an incredibly prolific writer who worked in nearly every genre, most notably science and science fiction. I spent over an hour interviewing Asimov when I was working for a publishing trade magazine. He had already published 125 books and was working on nine more at the same time. When I say “at the same time,” I mean he had a desk with multiple drawers, each of which he opened to display a manuscript in the works.

As he opened one drawer, he informed me it contained a book of criticism. He then opened the next drawer, which held a well-known book on sex and a book on Shakespeare. He continued this pattern until he had displayed all nine working manuscripts.

As I recall him telling me, while working on one manuscript he might tire of it, or his mind might switch to another topic; he would then slip the first into its drawer, pull out another, and begin writing on that one. He worked all day, from about seven in the morning until ten at night, and never went out for lunch or dinner. He lived in a comfortable residential hotel just off Central Park in New York City, and the hotel delivered his breakfast to his room.

Isaac Asimov was also a former science professor, having taught biochemistry at Boston University's School of Medicine. Once he began writing science fiction and making money at it, he realized he could earn more from writing than from teaching, and he gave up the classroom. Interestingly, Asimov told me he had originally wanted to go to medical school, but because he was Jewish, they denied him admission, so he turned to chemistry and became a professor instead.

It was while writing science fiction that he developed what came to be known as the Three Laws of Robotics, which he believed would protect human beings from robots in the future. As written, the laws are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
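
In programming terms, the laws amount to a fixed priority ordering: the First Law outranks the Second, which outranks the Third. A toy sketch of that ordering, using entirely hypothetical names and checks that no real robot actually performs, might look like this:

```python
# Illustrative only: a toy rendering of Asimov's priority ordering in code.
# The Action class, its flags, and permitted() are hypothetical; no real system
# reasons in terms this simple.
from dataclasses import dataclass


@dataclass
class Action:
    harms_human: bool              # would the action injure a human?
    allows_harm_by_inaction: bool  # would standing by let a human come to harm?
    ordered_by_human: bool         # was the action commanded by a person?
    endangers_robot: bool          # does it threaten the robot's own existence?


def permitted(action: Action) -> bool:
    # First Law outranks everything: never harm, never allow harm through inaction.
    if action.harms_human or action.allows_harm_by_inaction:
        return False
    # Second Law: obey human orders that do not violate the First Law.
    if action.ordered_by_human:
        return True
    # Third Law: otherwise, self-preservation governs.
    return not action.endangers_robot


# An ordered action that would injure a human is refused despite the order.
print(permitted(Action(harms_human=True, allows_harm_by_inaction=False,
                       ordered_by_human=True, endangers_robot=False)))  # False
```

Nothing obliges today's AI systems to run any such check, which is precisely the problem the rest of this piece describes.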

Today, we face AI on many platforms and in many programs that can create anything from written material to videos and photos, or even manipulate any visual we have into something other than what it was (a deepfake), all from a simple phrase or sentence known as a prompt.

We must remember that artificial intelligence is already seeking out new pharmaceuticals, probing viruses and vaccines, running medical equipment, and providing guidance on medical tests. While these advances often amaze and assist, we should not deceive ourselves into a false comfort about the potential for these algorithms to shift in another direction. Can an algorithm “think” for itself and write code? Read on.

Not only do we face questions of ethics, but we also run up against the limits of our ability to imagine what artificial intelligence might bring forth. Some experts speculate that artificial intelligence will surpass human intelligence yet will not have the ethical mores or morality inherent in human beings, that is, in human beings who do not have criminal inclinations.

Can anyone remember what HAL says to Dave in “2001: A Space Odyssey”? It's, "I'm sorry, Dave, I'm afraid I can't do that." Dave is asking HAL to open the pod bay doors so he can get back into the ship, and HAL, knowing the astronauts intend to shut it down, refuses. The superintelligent computer has to protect itself at all costs. So much for Asimov's laws here.

Miniaturization and advances in computer technology and AI have now reached the point where the laws Asimov proposed are no longer workable, if they ever were. Once given a prompt or command, AI is capable of destroying anything, including humans, in its target area; as with HAL, the mission is too important to let people get in the way. Recent media reports attest to this, documenting the use of AI in drone attacks.

We've also read about robotic machines that continue their tasks even when humans are present and lives are at risk. In one case, when a man tried to fix a stuck component, the robot mistakenly identified him as a box and inflicted fatal injuries. That is only one of the accidents reported in automated factories.

Do we know how many of these accidents are happening right now? Safety sensors are supposed to be in place to prevent incidents like this, but we have to wonder how the robot's training failed to teach it to distinguish a box from a human being.

Computer coders do not want programs to be emotional, and current programs leave emotion out entirely. But would coders even know how to write emotional code? Perhaps this is too pedestrian a question for any programmers out there, and for that, I apologize.

If AI becomes sentient, with an emotional component, how will that work? The whole question of AI sentience is currently of great interest, with some experts believing it is possible and others dismissing the idea entirely.

Experts in computer science also dismiss the idea that Asimov's three laws would ever be workable; they were, after all, a figment of Asimov's imagination as he wrote science fiction. As we've seen from incidents in factories and in warfare, AI can do whatever humans wish it to do.

Someone has offered a proposition that might lead to robots that are safer for humans: rather than being constrained by regulations limiting their behavior, robots should be free to explore all possible actions and choose the optimal one in every situation. This notion has the potential to be the cornerstone of an all-new set of rules for robots to follow in order to ensure the maximum protection of people. The question remains: who will write the code, and how will they know it will do what they wish? It is something that will have to be sandboxed.
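
As a rough illustration of how such an approach might look, the sketch below scores a handful of candidate actions and picks the one with the best balance of task value and human safety. The actions, the numbers, and the safety weight are all invented for the example; any real system would need exactly the kind of sandboxed testing described above.

```python
# A hypothetical sketch of "pick the best action" rather than "obey fixed rules."
# The candidate actions, their scores, and the safety weight are made up.
candidate_actions = {
    "continue_task_at_full_speed": {"task_value": 1.0, "risk_to_humans": 0.9},
    "slow_down_and_reroute":       {"task_value": 0.7, "risk_to_humans": 0.1},
    "stop_and_wait_for_human":     {"task_value": 0.2, "risk_to_humans": 0.0},
}

SAFETY_WEIGHT = 10.0  # how heavily estimated risk to humans counts against an action


def score(option: dict) -> float:
    # Higher task value is good; any estimated risk to humans is penalized hard.
    return option["task_value"] - SAFETY_WEIGHT * option["risk_to_humans"]


best = max(candidate_actions, key=lambda name: score(candidate_actions[name]))
print(best)  # "stop_and_wait_for_human" with these made-up numbers
```

The design choice is that human safety is not a separate rulebook but part of the score every possible action receives; whether that can ever be made trustworthy is the open question.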

Will AI ever refuse human commands? When computers can begin to write their own code (and that is a definite possibility), correct it, and make new decisions for themselves, we will have to proceed with great care. The only thing that stands in the way is the tremendous amount of computing power needed to accomplish it. We can only wonder whether there is an incipient HAL out there waiting to refuse a prompt and go on doing what it intends to do. In this instance, the future can be frightening.

Website: www.drfarrell.net

Author's page: http://amzn.to/2rVYB0J

Medium page: https://medium.com/@drpatfarrell

Twitter: @drpatfarrell

Attribution of this material is appreciated.

News Media Interview Contact
Name: Dr. Patricia A. Farrell, Ph.D.
Title: Licensed Psychologist
Group: Dr. Patricia A. Farrell, Ph.D., LLC
Dateline: Tenafly, NJ United States
Cell Phone: 201-417-1827