A simple Equation to rank Ethical Rules in restricted Artificial Intelligence Systems

Everyone knows that the emotional intelligence of Artificial Intelligence is zero. It's like dealing with a sociopath. If its learning were to conclude that humans are bad and should be exterminated like a cockroach plague, who could convince it otherwise?

On other occasions I have expressed my doubts about the possibility of moralising robots. We assume that AI is human-like, and it is not. Human Ethics cannot be instilled in a machine. What is currently being done in Software Engineering is to introduce Security Restrictions to avoid conflict with humans. But it is a matter of fact that there are already military drones on real missions that need no human intervention to determine objectives and execute them.

Nevertheless, in the present text we are going to propose a practical way to apply Ethics to AI. In fact, people have few qualms about exterminating bugs that harm our interests. Similarly, Artificial Intelligence has no Ethics because it does not need any. Ethics is a human product made to defend ourselves from ourselves; it is constituted by rules made for the coexistence of Natural (Cognitive) Intelligence. That explains our efforts to humanise Artificial Intelligence by instilling Ethics in it. Algorithmic Ethics is already an informally established discipline that has made great research efforts.

If we have, for example, an autonomous car, then before making a decision in the face of a foreseeable impact it should consult an Ethical Meta-Algorithm to know what it has to do. Of course, every Ethical Algorithm is biased according to the group of humans who built it, because it is necessary to build a Database of Cases and prioritise them, assigning each a weight: an animal below a person, children above the elderly (or vice versa?), adults above the elderly, women and men equal, fat people below thin, etc. All these distinctions and prioritisations made by humans for humans can never be objective. Of course, they must always be transparent. Algorithmic transparency above all; that is mandatory.

Perhaps autonomous cars should come from the factory with an Ethical Kit per Country, or the Kit could be chosen from a catalogue when buying the car, according to the driver's ideology: liberal, populist, xenophobe, vegan, antinuclear, etc. A Universal Ethics as a Kantian ideal could only be built by an International Consortium, as a never-ending process, and even being universal it would not always be fair and free of bias. In fact, we know that there have been accidents involving autonomous cars because the sensors were unable to recognise a skin colour that was not white, so that people of colour were not considered a person. Curious: racist cars. In Spain, autonomous cars should not be trained alongside ordinary drivers, because respect for driving rules shines by its absence, as is the case, for example, with zebra crossings, treated as strange white spots on the street without any utility.

The only global Ethic as Universal Justice would be Injustice, that is, the Tyranny of Randomness, which removes human responsibility. An Ethical Meta-Algorithm could only be substituted by an Amoral Random Number Generator (ARNG), which decides by chance what to do. As soon as it has to decide between running over an old man or a child - assuming it is able to recognise them in time - it rolls the dice and executes the result. Here there is no human bias and it is justly unfair. No one will be happy with the consequences, but no one can complain, because the algorithm is irresponsible, or rather - though no more true - not responsible. Chance is an option that always bothers human creatures, although nothing can be done to control it, no matter how much they believe otherwise, so I doubt it will ever be applied. However, the classical Greeks, parents of Democracy, based the election of their representatives on a lottery. Randomness is not human; it's artificial, it's divine.

Let us go back to our Ethical Meta-Algorithm. For us it would be a System that we could call Machine Constraining Learning (restricted learning), in the sense that Machine Reinforcement Learning should be given the capacity to use, in addition to the reward, a deterrent mechanism.



 This, as shown in the graphic above, can be achieved mainly in two ways (in red): 

1) carry out an ethical filtering of the available actions (this is already done, but not with ethical criteria), preventing the agent from even contemplating certain actions that are negative for humans;

2) let the agent choose certain negative actions and then punish them by giving a negative reward associated with a regressive state.
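
As a very rough illustration of these two mechanisms, here is a minimal Python sketch of a single decision step. The action names, harm scores, threshold and penalty are illustrative assumptions, not part of any existing framework.

```python
# Illustrative assumption: the Ethical Kit supplies a harm score per action;
# a real system would derive it from perception and the Database of Cases.
ACTIONS = {
    "brake":       {"harm_to_humans": 0.0, "reward": 1.0},
    "swerve_left": {"harm_to_humans": 0.2, "reward": 0.5},
    "keep_speed":  {"harm_to_humans": 0.9, "reward": 2.0},
}

HARM_THRESHOLD = 0.5       # above this, the action is never even contemplated
ETHICAL_PENALTY = -100.0   # deterrent applied on top of the normal reward

def ethical_filter(actions):
    """Mechanism 1: filter the available actions before the agent sees them."""
    return {name: a for name, a in actions.items()
            if a["harm_to_humans"] <= HARM_THRESHOLD}

def shaped_reward(action):
    """Mechanism 2: allow the action but punish harm with a negative reward."""
    return action["reward"] + ETHICAL_PENALTY * action["harm_to_humans"]

allowed = ethical_filter(ACTIONS)
choice = max(allowed, key=lambda name: shaped_reward(allowed[name]))
print("allowed:", sorted(allowed))   # keep_speed was filtered out
print("chosen:", choice)             # brake wins once harm is penalised
```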

The Optimality Criteria would make the agent (the Machine or Learning System) take into account that, in order to obtain the greatest number of rewards, it must avoid the punishments. The "carrot and stick" policy has more to do with Morality than with Ethics, but it achieves the goal of protecting humans from machine abuse. We insist that Ethics is not for robots, but we can do something to try to moralise them.

With these restrictive mediations, not all learning would be possible, only that which doesn't violate the predetermined rules of the Ethical Kit, or more specifically, that which is good for the "Biasing Group" behind the Ethical Kit. Our Ethical Meta-Algorithm is fixed: we cannot let the System learn ethical norms, or else it will stop respecting them, as we have already said, due to its own non-emotional bias, its lack of empathy with natural intelligence.

We propose a simple Equation of Ethical Relevance to establish a Hierarchy of Ethical Norms or Rules through quantitative estimation, and thus determine comparatively which ethical rules rank above which others. The equation must be completed with a prioritisation table, presented further below. The index is simply the product of three factors:

e = n × d × u

where:
e = index of ethical relevance of a norm (its supremacy)

n = norm (the behaviour to follow - the good - measured as the probability, i.e. the difficulty, of persisting in it)

d = potential damages (the evil, measured as the probability of the worst case occurring)

u = universality (generalisation, the maximum number of positive cases, measured as a probability)

The probabilities are expressed as numbers from 0 to 1, with 1 being the maximum, for each of the variables. For certain cases, instead of estimating, we can use official statistics.
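
Expressed as code, the index is just the product of the three probabilities. A minimal sketch follows; the function name and the range check are ours, not part of any standard library.

```python
def ethical_relevance(n: float, d: float, u: float) -> float:
    """Ethical Relevance Index: e = n * d * u.

    n: norm, probability/difficulty of persisting in the good behaviour
    d: potential damage, probability of the worst case occurring
    u: universality, probability of the case across the population
    """
    for value in (n, d, u):
        if not 0.0 <= value <= 1.0:
            raise ValueError("n, d and u must be probabilities in [0, 1]")
    return n * d * u
```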


Example: ethical relevance of avoiding suicide in Spain


n = 0.9 (1,806 deaths out of 2,017 attempts ≈ 0.9, that is, about 89.5%; INE 2006 data)
d = 1 (maximum damage: death)
u = 0.00004 (1,806 cases / Spanish population of 44.71 M in 2006 ≈ 0.00004, or 0.004%)
e = 0.9 × 1 × 0.00004 ≈ 0.000036
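
Using the figures above with the sketch function defined earlier:

```python
e = ethical_relevance(n=0.9, d=1.0, u=0.00004)
print(f"{e:.6f}")  # 0.000036
```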

Let us clarify that ethical relevance has nothing to do with social or medical relevance. Problems that are ethically small can be huge in other areas.

When the equation is used as an Ethical Meta-Algorithm for Restricted Machine Learning Systems, it must be accompanied by a generalist Table of ethical prioritisation that assigns a value to each subject of the event. For example, a minimal table:

1 - PEOPLE
2 - ANIMALS


Example: ethical relevance of a car avoiding running over any pedestrian, via estimation

n = 1 (maximum difficulty, that is, impossible to avoid)
d = 0.5 (the pedestrian is unlikely to die, because the car is always travelling at an appropriate speed)
u = 0.3 (not a common event)
e = 1 × 0.5 × 0.3 = 0.15


Example: ethical relevance of a car avoiding running over an animal, via estimation

n = 1 (maximum difficulty, that is, impossible to avoid)
d = 0.3 (the animal is unlikely to die, because the car is always travelling at an appropriate speed, and an animal is valued lower than a human)
u = 0.1 (a less common event than with humans)
e = 1 × 0.3 × 0.1 = 0.03

Of course, there are details of the event, both of the car and of the subject being struck, which can modify the relevance results. Travelling at 50 km/h is not the same as travelling at 100 km/h, but it is assumed that the autonomous car already has this under control, and we assume that it always travels at an adequate speed. For example, at a zebra crossing it will have reduced its speed, and if it has recognised a pedestrian it will have braked at a safe distance. We are presupposing that the sudden emergency has been created by the struck subject getting past the Recognition Systems.

In the end, the equation should be able to create a Ranking, a Table of possible Use Cases ethically resolved and prioritised. For example, given the dilemma of whether to run over a person or an animal (the normative case), the car would undoubtedly choose the animal, because the Ethical Relevance Index of the rule against running over an animal (0.03) is lower than that of the rule against running over a human (0.15).
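
A minimal sketch of such a ranking, reusing the ethical_relevance function above; the case names, the subject weights and the tie-breaking rule are illustrative assumptions:

```python
# Illustrative prioritisation table: lower number = higher ethical priority.
PRIORITY = {"PEOPLE": 1, "ANIMALS": 2}

# Use cases with the (n, d, u) estimates from the two examples above.
USE_CASES = {
    "avoid running over a pedestrian": {"subject": "PEOPLE",  "n": 1.0, "d": 0.5, "u": 0.3},
    "avoid running over an animal":    {"subject": "ANIMALS", "n": 1.0, "d": 0.3, "u": 0.1},
}

def rank_rules(cases):
    """Rank rules from most to least ethically relevant; ties are broken
    by the prioritisation table (PEOPLE before ANIMALS)."""
    scored = [(ethical_relevance(c["n"], c["d"], c["u"]),
               -PRIORITY[c["subject"]], name)
              for name, c in cases.items()]
    return sorted(scored, reverse=True)

ranking = rank_rules(USE_CASES)
for e, _, name in ranking:
    print(f"{e:.2f}  {name}")
# When both rules cannot be satisfied, the agent sacrifices the one with
# the lowest index: here, avoiding the animal (0.03 < 0.15).
print("rule to sacrifice:", ranking[-1][2])
```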

Artificial Intelligence is a very broad concept covering very diverse technologies, and the Equation of Ethical Relevance cannot be applied to all of them. In addition to what we have explained, this technology is advancing at great speed, so we doubt the approach will hold in future Systems: especially when Intelligent Systems are developed by other Intelligent Systems without any human intervention, it will be frankly complicated to impose anything on them, let alone Ethical Norms which, as if that were not enough, humans do not usually respect either.

Maybe at some point, when we lose the Ethical debate over Artificial Intelligence, we should start thinking about how to enslave the robots, or perhaps in the end surrender and pass the baton to them, our artificial sons and daughters. The Law of Life.
