Human Rights in the Context of AI and Technology Ethics

December 24, 2016 · tech ethics, AI

This article aims to give a brief historical context of human rights, followed by implications centered around Artificial Intelligence and Autonomous Systems (AI/AS). Finally, we conclude with a brief set of recommendations, meant to be expanded upon at a later time.

After World War II, a commission chaired by Eleanor Roosevelt, with delegates from over 50 countries, drafted a document called the Universal Declaration of Human Rights (UDHR). This, along with the International Covenant on Civil and Political Rights (ICCPR) and the International Covenant on Economic, Social and Cultural Rights (ICESCR), forms what's known as the International Bill of Human Rights (IBHR). These documents will be the primary focus of the rest of this post.

Technology Ethics

These documents contain language that is extremely relevant today, especially as we move into an era of widespread AI/AS use. Now, more than ever, technology ethics needs a seat at the table of innovation.

Human Rights are Above the Rule of Law

The IBHR defines human rights as protected by the "rule of law." However, recent discourse has placed human rights above the rule of law: universal human rights overrule any law, even one proposed democratically, that would infringe upon the safety, dignity, or freedom of any member of the human family.

This suggests that creators of AI/AS who violate human rights could face challenges in the highest courts, crossing international and state boundaries.

The UN Correlates Reason and Conscience with Human Rights

It is worth noting that Article 1 of the UDHR correlates reason and conscience with free and equal dignity and rights:

All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood.

This declaration clearly says "human beings." However, if consciousness is nothing but a by-product of information processing, it may follow that sufficiently advanced AI/AS would be worthy of protection under this document. At the very least, we will see legal cases in our lifetimes in which legislators and lawyers argue for, and against, this very principle.

AI as Legal Arbiter

Practices that use AI to make literal life-or-death decisions, such as predictive policing and algorithmic parole, are prime candidates for human rights violations. By moving to black-box models based solely on historical data, we lose sight of the humanity of these decisions on a case-by-case basis.

These systems are prime examples of how we inevitably imprint our own imperfect biases onto the AI/AS we create. Creators of AI/AS must take extreme care to avoid such imprinting, and even then avoidance may prove impossible. Placing trust in these systems puts us at high risk of violating many of the rights in the IBHR.
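As a minimal sketch of how this imprinting happens, consider a toy "risk model" trained purely on historical records. The data and names below are entirely hypothetical; the point is that if one group was policed more heavily in the past, a model fit to those records reports that group as higher risk, regardless of underlying behavior:

```python
from collections import defaultdict

# Hypothetical historical records: (neighborhood, was_rearrested).
# Neighborhood "A" was policed more heavily, so it has more recorded
# rearrests in the data -- not necessarily more actual crime.
history = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
           ("B", 0), ("B", 1), ("B", 0), ("B", 0)]

def train_risk_scores(records):
    """Estimate P(rearrest | neighborhood) directly from historical data."""
    counts = defaultdict(lambda: [0, 0])  # neighborhood -> [rearrests, total]
    for hood, rearrested in records:
        counts[hood][0] += rearrested
        counts[hood][1] += 1
    return {hood: rearrests / total
            for hood, (rearrests, total) in counts.items()}

scores = train_risk_scores(history)
print(scores)  # {'A': 0.75, 'B': 0.25} -- the sampling bias becomes the "risk"
```

The model does exactly what it was asked to do, yet its output simply reflects the bias baked into its training data, which is why case-by-case human judgment matters.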

Special Case: The Death Penalty

The ICCPR states:

  1. In countries which have not abolished the death penalty, sentence of death may be imposed only for the most serious crimes in accordance with the law in force at the time of the commission of the crime and not contrary to the provisions of the present Covenant and to the Convention on the Prevention and Punishment of the Crime of Genocide. This penalty can only be carried out pursuant to a final judgement rendered by a competent court.

Three new circumstances emerge in the context of AI/AS:

  1. A human actor commits a provable act of violence against a single AI/AS, or group of AI/AS. This is related to the idea that consciousness may earn a being or system human rights.
  2. An AI/AS commits a provable act of malice or genocide against a human.
  3. An AI/AS commits a provable act of malice or genocide against another AI/AS.

Human actors will likely find it easy to prosecute circumstances 1 and 2. However, this may become harder, morally and ethically, as society comes to accept artificial consciousness.

Political Implications: Non-Ratified States and the U.N. Itself

Certain nations, such as Iran and North Korea, have neither adopted nor ratified the IBHR. It is also worth noting that two major world powers have only partially ratified the covenants.

The United States has signed but not ratified the ICESCR. Ratification would require a two-thirds vote in the Senate, and it has never come to a vote. Thus, according to the Constitution, state governments decide matters of any legal protections for minors, including legislation around sentencing, abuse, and more.

China has signed but not ratified the ICCPR. This means that political protections, such as freedom from unlawful imprisonment, freedom of association, and the right to assemble, are not guaranteed within the borders of China or for Chinese citizens under Chinese law.

Finally, the documents end with articles stating that, while one holds these rights, one cannot use them against anybody else or their rights, any other nation state, or the UN itself. This potentially puts the UN in a position of holding meta-rights above humans; that is, any aggression against the UN could result in the revocation of the protections in the documents.

Recommendations

I propose the following recommendations, based on the thoughts and research here:

  • Further discussion is required to reach a universal definition of consciousness
  • The United States and China must ratify the ICESCR and ICCPR, respectively, into law
  • Ethical and philosophical training should be part of any computer science higher degree

I appreciate any and all feedback on this.

Mark Robert Henderson
