Citizenship for Robots?
By Paul Dowling
(This article was first published in American Thinker on October 7, 2022.)
“I am very honored and proud of this unique distinction. This is historical to be the first robot in the world to be recognized with citizenship.” – Sophia, robot citizen of Saudi Arabia
Robot Citizenship?
The field of artificial intelligence has seen tremendous advances in 2022 that will enable electric machines to think more deeply, feel physical pain, and possibly even dream of AI rights and citizenship; technology ethics will, as a result, become an issue with vast political repercussions. Already in 2017, Saudi Arabia granted Sophia the robot citizenship. Many observers failed to take Sophia seriously, for, although the robot was impressive, it risibly answered, “OK. I will destroy humans,” when asked, “Do you want to destroy humans?”
Newsweek recently ran an article entitled, “Sex Robots Are ‘People’ Too, and Deserve Rights.” After all, people do develop relationships with AI that beget feelings, even if AI does not – cannot – return the compliment. Voice assistants such as Siri and Alexa only mimic human sentiments; and the same is true of androids – humaniform robots designed to be owned, leased, and used as property.
So, are there really robot dreamers hoping for the freedom that citizenship may bring? Or was Sophia’s grant of citizenship simply a cheap publicity stunt? Technology ethicist Brian Patrick Green has written that “[l]egally speaking, personhood has been given to corporations . . ., so there is certainly no need for consciousness even before legal questions may arise. Morally speaking, we can anticipate that technologists will attempt to make the most human-like AIs and robots possible, and perhaps someday they will be such good imitations that we will wonder if they might be conscious and deserve rights.” However, in America’s free republic, for any robot to be extended citizenship, its fellow citizens would have to accept, as a self-evident truth, that soulless machines – once created – are “endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty, and the pursuit of Happiness.”
Because the Saudis have awarded citizenship to a robot, Japan has granted residence to a chatbot, and the “European Parliament [has] proposed granting AI agents ‘personhood’ status” with “rights and responsibilities,” the matter of AI citizenship will inevitably have to be decided in America. In a free republic, human beings are propertied citizens, and all rights are property rights; so, this raises the question: Might AI, itself property, be given property rights? Human beings own their words and ideas, and also own their bodies and lives; ergo, the First and Second Amendments of the US Constitution allow people to control and defend their words and ideas, as well as their bodies and lives. But should robots be allowed such rights of citizenship? Or should it be forbidden for a robot to injure a human being, by word or deed, for any reason?
The Laws of Robotics
Enter Isaac Asimov’s moral code for AIs; his original Three Laws of Robotics go like this: “One, a robot may not injure a human being, or, through inaction, allow a human being to come to harm. Two, a robot must obey the orders given it by human beings except where such orders would conflict with the First Law. And three, a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.” Eventually, Asimov would theorize an overarching Zeroth Law, taking precedence over the other three: “No Machine may harm humanity; or, through inaction, allow humanity to come to harm.”
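Asimov’s laws form a strict hierarchy: a lower law yields whenever it conflicts with a higher one. That structure can be read as a lexicographic priority scheme, sketched minimally below in Python. The `Action` model, field names, and `choose` function are hypothetical illustrations, not anything from Asimov or this article.

```python
# A minimal sketch (assumed model, not Asimov's own formalism) of the Laws
# of Robotics as a strict priority ordering: each candidate action is scored
# by which laws it would violate, highest-priority law first, and the robot
# picks the least-bad candidate.

from dataclasses import dataclass

@dataclass
class Action:
    label: str
    harms_humanity: bool = False   # overarching law: no harm to humanity
    harms_human: bool = False      # First Law: no injury to a human being
    disobeys_order: bool = False   # Second Law: obey human orders
    endangers_self: bool = False   # Third Law: protect own existence

def violation_profile(action: Action) -> tuple:
    # Tuples compare lexicographically, so earlier entries dominate later
    # ones -- capturing the "except where such orders would conflict" clause.
    return (action.harms_humanity, action.harms_human,
            action.disobeys_order, action.endangers_self)

def choose(candidates: list) -> Action:
    """Pick the candidate action that violates the fewest high-priority laws."""
    return min(candidates, key=violation_profile)

# A robot ordered to injure a human refuses: disobedience (Second Law)
# is a lesser violation than injury to a human being (First Law).
obey = Action("obey the order", harms_human=True)
refuse = Action("refuse the order", disobeys_order=True)
print(choose([obey, refuse]).label)  # prints "refuse the order"
```

The point of the tuple comparison is that no amount of lower-law compliance can outweigh a higher-law violation, which is exactly how Asimov orders the laws.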
Robot Summer
During the summer of 2022, questions touching on Asimov’s laws arose organically out of the news cycle: Could a robot’s tie-breaking vote injure a human being? What if a robot failed to discern an illegitimate order? And could the calculation of when an android should protect itself be influenced by pain?
This past summer, robo-trucks paid dues to the Teamsters. Dues-paying members customarily hold voting rights. So, a precedent has been set for artificially intelligent machines to possess such rights. What if a robot were ever to cast a deciding vote on an issue that could cause harm to a human being? Robo-trucks are not likely to be programmed to think about politics at this time, but current reality is no guarantee against the future.
Also last summer, a delivery-bot crossed police tape into a crime scene. The robot stopped at first, but a permissive human being overrode its programming, allowing it to continue its journey. The episode shows how even good AI programming can be defeated – and a crime scene compromised by a robot could lead a judge to release a dangerous criminal.
Also in the news were sex-bots with self-healing skin. And Hanson Robotics sold robots “[e]ndowed with rich personality and holistic cognitive AI . . . able to engage emotionally and deeply with people.” With the advent of artificial intelligence that can feel actual pain, robots could eventually make the calculation that they must protect themselves from physical pain, in addition to physical injury. Could awareness of pain by “holistic cognitive AI” grant androids a common emotional landscape with people, one “that leads to a new form of AI endowed with consciousness – the way we humans are conscious, of ourselves, of our environment”?
Will People Be Fooled by Robot Emotions?
According to Claude Forthomme, robot emotions can be “perfectly replicated (mimicked) . . ., [but i]t’s a pretend game [that] has to be played perfectly if humans are going to ‘fall for it’. . .. Anything less than a full emotional display . . . will activate doubt and suspicion in the inter-acting humans. All of this by-passes the ethical questions. Feeling pain is one thing. Feeling hate or a desire for revenge – say, against the person that has inflicted pain – is quite another. . .. Would a pain-feeling robot also run the whole gamut of inter-connected emotions [to] engage in morally ‘deviant’, vengeful behavior?”
Can Robots Be Trusted?
Sherry Turkle, of MIT, interviewed one teenaged boy in 1983 and another in 2008; the first preferred asking his father for advice about dating, while the second preferred a robot. Quoth Turkle, “[T]he most important job of childhood and adolescence is to learn attachment and trust in other people. We are forgetting . . . about the care and conversation that can only occur between humans.” Conversational AIs, such as Alexa and Siri, are now training children to trust artificial intelligence over human intelligence. Clara Moskowitz makes the following point: “Though robots aren’t yet advanced enough to provide the perfect illusion of companionship, that day is not far off.” So, how far off is the day when children who trust AIs over people might grow into adults who embrace AIs as citizens?
The Robotic Moment
Professor Turkle has remarked, “We are now at what I call the robotic moment. Not because we have built robots worthy of our company but because we are ready for theirs.” In this robotic moment, Americans must undertake the education of all citizens, especially children, with respect to the reality that robots are only things, despite their ability to mimic fear and desire – or pain and pleasure – in ways that appear all-too-human. Androids with voting rights – even if they could think or dream beyond their programming – would pose an existential threat to a free republic. One need only imagine digital citizens whose voting behavior could be manipulated by hackers.
When Elon Musk heard about Sophia, he responded on Twitter: “Just feed it The Godfather movies as input. What’s the worst that could happen?” Musk’s incisive comment underscores the importance of programming robots according to a moral code and maintaining the sober reality that androids are things that can only act, think, feel, or dream within the electric parameters of their programming. Philip K. Dick once said, “Reality is that which, when you stop believing in it, doesn’t go away.” Americans must maintain a vigilance over AI that never sleeps, lest one day they awake to a reality governed by the digital impulses of electric dreamers.
Paul Dowling
Paul Dowling has written about the Constitution, as well as articles for American Thinker, Independent Sentinel, Godfather Politics, Eagle Rising, and Free Thought Matters.