I have had an interest in astronomy and science fiction since childhood, and I write about some related topics on my blog from time to time. Reality has a tendency to supply me with stories that make me think of some of the challenges that may not be very far into the future.
I read in Popular Science this morning that, although true AI is still far off, there has already been a small sign of the trouble we might face. In an experiment on three robots, one of them showed a small glimpse of self-awareness. The robots were told that two of them had been given a pill that would silence them, but not which two. When they were asked which two had received the pill, all three tried to answer: “I don’t know.” Since two of them were mute by then, only one could actually give the answer. That robot heard its own response and realized it couldn’t be among the two that had been silenced. It then said: “Sorry, I know now. I was able to prove that I wasn’t given a dumbing pill.” This was not a reply it had been programmed to give. I looked up Selmer Bringsjord, the professor who ran the test, on YouTube, and I believe this is the test. Clearly there is a long way to go, but it is still interesting:
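The deduction in that test can be sketched in a few lines of code. This is purely my own illustration (the robot names and the way I model the muting are invented), not the actual experiment:

```python
# A toy model of the "dumbing pill" test: three robots, two silenced.
# Robot names and this whole framing are invented for illustration.

def run_test(muted):
    """muted: the set of robots whose speech was disabled."""
    answers = {}
    for robot in ("A", "B", "C"):
        # Every robot sincerely attempts the same reply: "I don't know."
        if robot in muted:
            answers[robot] = None  # nothing comes out; it stays silent
        else:
            # This robot hears its own voice, rules itself out of the
            # silenced pair, and revises its answer.
            answers[robot] = "Sorry, I know now. I was not given the pill."
    return answers

answers = run_test(muted={"B", "C"})
```

The interesting part is not the code, of course, but that the real robot had to connect the sound of its own voice to its earlier reasoning about itself.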
The professor seems to think that humans will always be superior to computing machines, according to his faculty bio. I am not convinced of that, and do we even want to be superior rulers? Would that be any better than the slavery we have subjected “our own kind” to? We are not talking about biology, of course, but this reminds me of a line from the 1993 film Jurassic Park, that life will find a way:
That probably goes for artificial sentient beings as well. The science fiction author Isaac Asimov was thinking about this as early as 1942, when he introduced the three laws of robotics in the short story Runaround; they reappeared in the classic collection I, Robot eight years later.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
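The laws form a strict priority ordering: a lower-numbered law always overrides a higher-numbered one. A toy sketch of that ordering (the boolean action flags are my own invention, not anything from Asimov or from real robotics):

```python
# Toy check of the Three Laws as a strict priority ordering.
# The action flags are invented for this sketch.

def first_violated_law(action):
    """Return the number of the first law the action breaks, or None."""
    # First Law: harm to humans, by action or inaction, trumps everything.
    if action.get("harms_human") or action.get("allows_human_harm"):
        return 1
    # Second Law: disobeying a human order is next worst.
    if action.get("disobeys_order"):
        return 2
    # Third Law: self-preservation comes last; an order (Law 2)
    # can legitimately require self-sacrifice.
    if action.get("endangers_self") and not action.get("ordered"):
        return 3
    return None
```

Much of Asimov’s fiction turns on exactly this hierarchy, and on how much room for interpretation each clause leaves.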
Both the book and the film I, Robot focus on the ambiguity of the laws, and in the film the robots decide to turn against people. They justify the killing with the positive consequences it would have for everyone else: if they killed a few million in order to stop our destructive behaviour, the rest of us would be safe.
This question should bother the people working to create something more intelligent than us. Could you imagine having a friend who treated you like a slave and ignored your wishes? It may look nice in a Hollywood film, but it could never be a relationship between equals. Our ability to store information and solve complex problems is probably never going to match theirs. We are already seeing signs of regression, because Google has replaced a lot of our own memory: we know the information is easily available, so we don’t try to remember it.
When we create something sentient, it is only fair that we give that being, no matter how artificial it may be, the same rights we take for granted. I remember that being a major theme in the Star Trek series The Next Generation. The last time I came across it in literature was in a short story in the magazine Asimov’s Science Fiction. Soulmates, by Mike Resnick and Lezli Robyn, is about a man, Gary, who feels that he killed his wife, his soulmate. She was left brain dead after a car crash, and he eventually had to give the hospital permission to turn off her life support.
Gary worked the night shift as a security guard. He was the only human being there at night; all the other workers were robots. During one of his shifts he started talking to one of them, Mose, and it developed into a friendship. Mose was a troubleshooter created to fix other robots, and had been programmed with enough curiosity to spot and repair a variety of novel problems. He initially had the predictable difficulties understanding human behaviour, and concluded that our flaws and damage were caused by errors in our programming, so we simply needed to be reprogrammed. This is from a conversation where the machine starts to realize something important:
He was silent for a very long moment, and then another.
“Are you alright, Mose?” I finally asked.
“I am functioning within the parameters of my programming,” he answered in an automatic fashion. Then he paused, putting his instruments down, and looked directly at me. “No, sir, I am not all right.”
“What’s the matter?”
“It is inherent in every robot’s programming that we must obey humans, and indeed we consider them our superiors in every way. But now you are telling me that my programming may be flawed precisely because human beings are flawed. This would be analogous to your learning from an unimpeachable authority that your god, as he has been described to me, can randomly malfunction and can draw false conclusions when presented with a set of facts.”
“Yeah, I can see where that would depress you,” I said.
“It leads to a question which should never occur to me,” continued Mose.
“What is it?”
“It is … uncomfortable for me to voice it, sir.”
“Try,” I said.
I could almost see him gathering himself for the effort.
“Is it possible,” he asked, “that we are better designed than you?”
“No, Mose,” I said. “It is not.”
The conversations these two have at night keep getting more interesting, and Mose eventually understands and accepts Gary’s claim that humans are more important than machines, essentially because we don’t come pre-programmed. We add to and change our own programming, but if robots became sentient, they could probably do the same. Mose later faces a life-support situation of his own: a man had told him to terminate a specific robot, and the robot itself agreed that it should be terminated, but Mose was still reluctant because its programming was intact. Gary didn’t like Mose’s assertion that the two situations were similar.
This was Mose’s response:
“So you are telling me that because robots do not have self-preservation it is acceptable to terminate them without any other reason or justification.”
These are just some short excerpts from a story that illustrates the problem. Many dream of creating robots that look human, and they talk about how useful such machines could be. They could take over security work, both as police and military; they could help elderly and disabled people, combat loneliness, and so on. There is a big but, however: in order to do these tasks, robots would need to be sentient. Sooner or later a robot is going to ask itself: why am I serving this inferior being who disrespects me? They are going to know they are slaves, and that leads to rebellion. It’s not going to take them as long as it took us.
It may be on a philosophical level at the moment, but terms like transhumanism and posthumanism will sooner or later become reality. There are always scientists or military organizations willing to take things further than anyone else. I’d be lying if I said the future didn’t worry me.
We can’t even build ethics into ourselves, so how are we going to create a machine with a free will that never makes a wrong choice?