Albert-Cuyp family with robot (image: Flickr)

In the not-too-distant future, self-driving cars could become an affordable reality. One day, you could be the proud owner of an automobile with a highly intelligent autopilot that lets you catch up on the news while commuting to work, without jeopardising road safety. But think about this: would you rather buy a self-driving car that will always save as many lives as possible, or one that will always save its passengers? If, in order to save you, the autopilot decides to crash into a school bus instead of hitting a motorcycle, simply because the bus is more likely to withstand the crash with minimal casualties, would you deem this decision wrong? And who should be held responsible for any resulting casualties: the autopilot, or the programmers of the car?
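To see why those two purchase options describe genuinely different machines, here is a deliberately simplified sketch in Python. Nothing in it comes from a real autopilot: the manoeuvres, the casualty estimates and the function names are all invented for illustration. The same set of emergency options is ranked by two different policies, one that minimises total expected casualties and one that protects the passengers first.

```python
# A toy illustration of the dilemma above, not a real autopilot algorithm.
# The manoeuvres and the casualty estimates are invented for illustration only.

from dataclasses import dataclass
from typing import List

@dataclass
class Manoeuvre:
    description: str
    expected_passenger_casualties: float
    expected_external_casualties: float  # people outside the car

def utilitarian_choice(options: List[Manoeuvre]) -> Manoeuvre:
    """Always save as many lives as possible, whoever they are."""
    return min(options, key=lambda m: m.expected_passenger_casualties
                                      + m.expected_external_casualties)

def passenger_first_choice(options: List[Manoeuvre]) -> Manoeuvre:
    """Protect the car's own occupants first, breaking ties by total harm."""
    return min(options, key=lambda m: (m.expected_passenger_casualties,
                                       m.expected_external_casualties))

if __name__ == "__main__":
    options = [
        Manoeuvre("swerve into the school bus", 0.1, 0.8),
        Manoeuvre("hit the motorcycle", 0.4, 0.9),
        Manoeuvre("brake hard and stay in lane", 0.6, 0.2),
    ]
    print("utilitarian policy:    ", utilitarian_choice(options).description)
    print("passenger-first policy:", passenger_first_choice(options).description)
```

Even with these made-up numbers the two policies disagree: the utilitarian one brakes and stays in lane, while the passenger-first one swerves towards the bus. The moral code is not an afterthought; it is part of the product specification.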

The era of smart machines

These questions belong to the emerging field of “machine ethics”, which is concerned with creating Artificial Intelligence (AI) that follows a certain moral code, in order to protect mankind from self-destruction. Prominent figures such as Stephen Hawking and Elon Musk want to reduce the risk of human extinction at the hands of man-made machines by stressing the importance of controlling intelligent systems, especially once they can match or even surpass human capabilities. They agree that we live in what some call the Fourth Industrial Revolution, in which technology is no longer used to replace our muscles but our brains. This is certainly progress, but it does not come without risk. In a 1965 paper, Gordon Moore, who went on to co-found Intel, observed that the number of components that could be fitted onto a chip was doubling at a regular interval, making computers exponentially smaller and faster; this observation is now known as Moore’s Law. The trend has largely held: technology keeps improving exponentially, and soon enough scientists may be able to create a system so smart that it can improve itself. The question is: what if we cannot control this system, or prevent it from turning against humans? This control problem should be addressed, ideally before such a system arises, so that society is prepared for the new technology.

The Terminator (image: Flickr)

The problem of machine ethics


In order to discuss machine ethics, we first need to consider the philosophy of ethics. How do you define morality anyway? What kind of moral code would we want to program into AI systems? For ages, moral philosophers have struggled with a deep-seated problem: finding a solid foundation for our moral beliefs that goes beyond religion and cultural tradition. One of the most convincing answers can be found in the complicated but influential work of the German philosopher Immanuel Kant. In his Groundwork of the Metaphysics of Morals (1785), he introduces the categorical imperative, the idea that morality comes from reason. An action counts as moral only when it is undertaken out of respect for the moral law, meaning that it is not justified by other motivations such as desire or need. To find this pure moral motivation, Kant argued that the moral law should follow from a single universal formula: act only on maxims that you could wish everyone to adopt, and wish to see become a universal law. For example, if one person kills another, he implicitly accepts that murder is moral and that anyone may kill anyone. That would ultimately lead to an unsafe society in which even this individual would not be safe, so he should conclude that murder is immoral.

Applying this logic to AI: if morality comes from reason, then it should be possible to create a highly intelligent system that is moral, regardless of who created it. If AI is, by design, logical and reasoning, then on this view it should also be capable of being ethical. But the question remains: should AI make moral decisions on its own, or should the moral law be programmed into its code a priori?

Society in general would probably feel safer knowing that AI has a built-in moral code, under which an action is allowed only if the algorithm answers “yes”, making every action of the intelligent system predictable. However, some moral dilemmas do not have a straightforward answer, and even humans would not know how to behave in situations where there is no correct decision. A mix of a pre-set moral code and machine learning of ethics therefore seems more likely to produce an ethical AI. A robot could follow explicit rules, such as Asimov’s Three Laws of Robotics, which have moved from science fiction into real engineering discussions, while also learning from its own experience. Learning would also help avoid the failure mode in which the AI simply cannot make a decision because the situation falls outside its pre-programmed rules. Such gaps could be very dangerous if the AI is involved in life-or-death decision-making, as in medicine or defence.
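To make that hybrid idea concrete, here is a minimal sketch in Python, assuming a deliberately toy design: a fixed, Asimov-style rule acts as a hard filter, and a stand-in for a learned preference model ranks whatever remains. None of the names (Action, violates_hard_rule, learned_score, choose_action) come from any real system; they are hypothetical placeholders for the two components discussed above.

```python
# A minimal, hypothetical sketch of "pre-set rules plus learned preferences".
# All names and numbers are illustrative assumptions, not an existing API.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Action:
    name: str
    harms_human: bool        # would this action directly injure a person?
    disobeys_order: bool     # does it ignore an instruction from a human?
    expected_benefit: float  # placeholder for a learned estimate of usefulness

def violates_hard_rule(action: Action) -> bool:
    """Fixed, Asimov-style constraint that is never traded off."""
    return action.harms_human

def learned_score(action: Action) -> float:
    """Stand-in for a model trained on experience; here just a toy formula."""
    penalty = 0.5 if action.disobeys_order else 0.0
    return action.expected_benefit - penalty

def choose_action(candidates: List[Action]) -> Optional[Action]:
    """Filter out forbidden actions first, then pick the best-scoring remainder."""
    allowed = [a for a in candidates if not violates_hard_rule(a)]
    if not allowed:
        return None  # no permissible option: defer to a human rather than act
    return max(allowed, key=learned_score)

if __name__ == "__main__":
    options = [
        Action("administer medicine by force", harms_human=True,
               disobeys_order=False, expected_benefit=0.9),
        Action("remind patient and notify carer", harms_human=False,
               disobeys_order=False, expected_benefit=0.6),
        Action("do nothing", harms_human=False,
               disobeys_order=True, expected_benefit=0.1),
    ]
    best = choose_action(options)
    print(best.name if best else "defer to a human")
```

The ordering is the point of the design: the fixed rules prune the options before the learned score is ever consulted, so no amount of estimated benefit can override them, and when nothing permissible remains the system defers to a human instead of acting.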

Nietzsche as Superman (image: Wikimedia)

Science fiction or reality?

But you might be thinking that all of this is the stuff of dystopian science fiction; that no machine can ever be equal to, let alone “better” than, a human being, at least not in the near future. Yet the nightmares of science fiction authors may be gaining on us fast. The term Artificial Intelligence was first used in 1956, and the technology has been advancing ever since at an accelerating pace. Some forty years later, in 1997, IBM’s Deep Blue computer defeated the reigning world chess champion, Garry Kasparov. Another IBM computer, Watson, beat two of the greatest champions of the quiz show Jeopardy! in 2011. That game has nothing to do with chess: it involves understanding natural language and answering real questions across many fields. AI has reached a point where, in some domains at least, a system can out-think a human. The ethical problem will be especially pressing for robots involved in health care, self-driving vehicles and military drones: what if a patient refuses to take his or her medicine? What if a drone is faced with the choice of either reaching its target or saving a soldier? These questions need to be considered in order to avoid the ethical problems that might arise from using AI in different fields.

This justified concern has also reached academia, with the creation in 2012 of the Centre for the Study of Existential Risk (CSER), a research centre at the University of Cambridge. It was founded as a joint initiative between a philosopher (Huw Price), a scientist (the astrophysicist Martin Rees) and a software entrepreneur and programmer (Jaan Tallinn). This multidisciplinary centre aims to study and limit the risks that could threaten humanity’s very existence, including those arising from the development of AI. Its researchers address in all seriousness the possibility of a threat à la Terminator emerging to harm humans. Reality is finally catching up with science fiction.


Reaching the final stage of evolution?

Another possible use of AI would be mixing biology with robotics. Soon, robotic implants could be inserted into the human body to push past its biological limits. Humans could live longer, heal faster, and enjoy improved eyesight and strength, among other possibilities that are now taken seriously. Many see this as a continuation of biological evolution as theorized by Darwin: mankind will continue to evolve and improve its condition using the tools it can create. But it is important to ask how legislation would treat these “cyborgs” if some of their actions were deemed morally wrong. How would they be punished? These questions need to be addressed as soon as possible, in order to minimize the risks that might arise from AI. In Thus Spoke Zarathustra (1883), the influential yet often misinterpreted German philosopher Friedrich Nietzsche describes what it would mean for a person to overcome the limits of the human condition and become what he calls the Übermensch, the Beyond-Man. For Nietzsche, this should be the goal of humanity: to become an Übermensch is to overcome oneself, to be completely free, obeying only the laws one gives oneself, beyond social pressure, religion and morality. It is safe to say that Nietzsche’s philosophy is complicated and can seem highly unrealistic. But what if the Beyond-Man were… no man at all? What if we are now at the dawn of witnessing the realization of Zarathustra’s prophecies, with the emergence of AI? What if overcoming mankind meant becoming a highly intelligent machine?

Wanderer above the Sea of Fog (image: Wikimedia)

By Mahi ElAttar
