An email came in recently where the sender referenced both Isaac Asimov’s famous Three Laws of Robotics and Stuart Russell’s updated “3 Principles for Creating Safer AI.”
The sender of the email then asked me the following questions:
Straight up: I’m not in the AI field. I mostly work in hydrology and oceanography and not even in a scientific capacity. I tell people that I’m mostly just a guy that knows how to use tools, and that my tools are programming languages and databases. There are much smarter people than me in any of these fields.
Now that we’ve got past that disclaimer, I’ll share my thoughts on the matter anyway.
Asimov himself knew that his Three Laws were imperfect. The sender of the email points out that Asimov never said so explicitly, but his stories were basically always some version of these laws gone awry.
For thoroughness’ sake, they are:
I mean, a middle schooler can see the problems with these. Russell does a good job of pointing out the issues in his talk. He notes that a simple task such as “fetching the coffee” would lead a machine to “protect its own existence” in order to fulfill its duty, and a machine that won’t allow a human to do something as simple as shutting it off could be catastrophic.
I agree with Russell and many of his contemporaries on one fundamental axiom: people are not going to stop innovating, and therefore we are not going to, as Russell says, “stop doing AI.” It’s in our nature - we tinker and prod and pry and solve problems. We’re not going to just willingly stop doing that.
To mitigate the risk, Russell puts forward his principles:
In his talk, Russell quickly tackles the immediate concern that arises with number 3: humans can behave horribly. He says that the robots will eventually have the compendium of human knowledge, as well as the sum total of human experience, to draw from, and will therefore produce a positive and altruistic behavior set instead of something nasty and harmful.
Again: I’m not an expert, but I love thinking and talking about this stuff.
Research needs to be done to define what the average human ethical and moral state is, especially if the idea is to imprint it upon our computer systems. There’s no such thing as moral relativism anymore, whether we like it or not. Soon there will be robots, and underlying systems, several orders of magnitude more powerful than not only our individual selves but our collective human self as a global whole.
It’s easy to believe that people are good - Wikipedia is fairly accurate, we have fewer and fewer car accidents every year, the buildings get built and stay up, and so on. However, war is still a human constant, as are greed and corruption on a literal industrial scale.
Do we get to say, “Well, just ignore the war and greed parts, dear robot. Simply focus on helping us achieve our values”? Who said those values were good on average, or even good in the first place?
Another major issue I see is that these rules sidestep the idea that machines will eventually have believable emotional or spiritual experiences, properties that would be necessary in fields like health care and companionship. Your robot dog can’t fulfill its purpose if it doesn’t love you or, put another way, if you don’t believe that it loves you.
Although the idea of machines expressing emotion is somewhat Kurzweilian, it carries another profound implication when it comes to morality: evil is carried out not despite our morals, ethics, and emotions, but because of them.
That isn’t to say that all emotions are bad - if emotions create some of our worst problems, then they are certainly also the cause of some of our greatest achievements, even when they manifest as stubbornness and delusion.
Thanks for reading this far into these ramblings. I’ll simply conclude with three major questions:
I’d love to hear your thoughts!