No data scientist interested in intelligent systems can now afford to ignore the ethical angle – but do we know what it is?
In his 1942 short story “Runaround”, the legendary sci-fi writer Isaac Asimov explored the potential risks a robot could pose. It was still the early days of automation, and sci-fi plots generally imagined robots as independent thinking machines. In the story, Asimov laid down the Three Laws of Robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Although part of a fictional work, these three rules formed the basis of later principles in automation and robotics. They can be considered the first code of ethics designed to counter the risks inherent in intelligent systems.
The risks of AI
Eighty years later, we are entering an age of human-machine interaction on a near-reciprocal footing. While human-to-human engagements are limited in time and governed by various emotional and ethical parameters, a machine could remain engaged indefinitely and without any emotional or ethical restraints – unless these are specifically fed into it. This means artificial bots can channel virtually unlimited resources into building relationships, leading to a new facet of human addiction – tech dependency. Imagine the harm possible if such a powerful attachment were exploited with malicious intent!
Artificial Intelligence (AI) is a technology developed to replicate, augment, or replace human intelligence. Such systems are trained on a huge corpus of sample data to develop insights that help them make decisions. The rapid deployment of AI over the past decade has spurred groups of experts to develop safeguards against the risks AI poses to humans. The reason is simple: AI systems trained on incorrect, insufficient, or prejudiced data can have unintended and harmful consequences, because the decisions such systems make are based on faulty inputs. Moreover, as algorithmic systems grow increasingly complex, the logic the AI employs often remains hidden from us. So, in effect, humans are entrusting AI systems with impactful decisions without knowing the exact chain of cause and effect that led to the conclusion. This is not a happy situation. First, the creators may not have control over their systems; second, decisions are being made by systems that are not inherently governed by socially acceptable human ethics.
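The point about prejudiced training data can be made concrete with a minimal sketch. The data and the rule-based “model” below are entirely hypothetical, and a real system would use a proper learning algorithm – but even this toy version shows how a system trained on biased historical records faithfully reproduces the bias in its decisions.

```python
# Hypothetical historical hiring records: (group, qualified, hired).
# In this invented history, group "B" candidates were rejected
# regardless of qualification.
history = [
    ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", False, False),
]

def train(records):
    """'Learn' the historical hire rate per group -- a stand-in for a real model."""
    rates = {}
    for group in {r[0] for r in records}:
        outcomes = [r[2] for r in records if r[0] == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def predict(rates, group):
    """Recommend hiring only if the learned hire rate for the group exceeds 50%."""
    return rates[group] > 0.5

model = train(history)
print(predict(model, "A"))  # True  -- group A is favoured, qualified or not
print(predict(model, "B"))  # False -- the historical prejudice is learned as a rule
```

The model never sees the candidate’s actual qualifications; it simply encodes the pattern in its faulty inputs, which is exactly the failure mode described above.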
With the snowballing use of AI in business, organisations are fast realising that AI doesn’t just scale solutions – it also scales risk. In such a scenario, all stakeholders need a clear plan for dealing with the ethical quandaries this new technology is introducing. An AI ethics framework that highlights the risks and establishes guidelines for responsible use thus becomes a practical necessity rather than a theoretical debate.
Why is ethics important in AI?
AI ethics is a set of moral principles and techniques that guide and regulate the development of artificial intelligence technology for responsible use. Generally, the ethics of artificial intelligence is a branch of the ethics of technology with a two-pronged concern: (i) robot ethics (or roboethics), which deals with the moral behaviour of humans as they design, use, and treat artificially intelligent systems; and (ii) machine ethics (or machine morality), which is concerned with the behaviour of the machines themselves.
Robot ethics considers the use of machines to harm or benefit humans, how they can affect individual autonomy, and how they influence social justice. It intersects with AI ethics rather than coinciding with it, since not all robots are AI systems and not all AI systems are robots. In a way, Asimov’s Three Laws of Robotics cover both robot ethics and machine morality.
It is true that the more powerful a technology becomes, the more it can be used for harmful ends. And with AI, we are dealing with systems that are faster and more capable than us to an unimaginable extent. Moreover, evolving machine learning systems built on neural networks can make decisions that cannot be explained even by the humans who programmed them. Left unaddressed, this opacity can allow biased AI systems to go undetected. It takes things out of human control and into a grey area, where it is difficult to determine whether such decisions are fair and trustworthy. That is why some countries are demanding laws mandating explainable artificial intelligence.
The curious notion of robot rights
In October 2017, Saudi Arabia granted “honorary citizenship” to Sophia – a social humanoid robot developed by Hong Kong-based Hanson Robotics. Although this was widely interpreted as more of a publicity stunt than a meaningful legal recognition, it opened up a whole new debate. While some lauded it as a futuristic measure, others criticised the move as undermining the basic tenets of human rights and law.
But if machines can have “intelligence”, should they not have “rights” as well? While technology is making every attempt to grant autonomous decision-making powers to AI systems, should they be treated like inanimate objects, or at most like animals or slaves with no independent dignity, feeling, or suffering? If we consider machines to be intelligent entities, then the next logical step might be to consider their legal status.
This gives rise to “robot rights” – the concept that people should have moral obligations towards their machines, similar to human rights or at least animal rights. These could include a right to exist and perform one’s own mission, possibly linked to duties to serve humanity, and might extend to the right to life and liberty, freedom of thought and expression, and equality before the law.
Experts disagree on how soon such laws could be necessary. In 2007, a group of scientists speculated that at least 50 years would have to pass before any system sufficiently advanced to merit legal status would exist. Nevertheless, the issue of robot rights has already been considered by the Institute for the Future and the U.K. Department of Trade and Industry.
In our next episode, we shall consider the key ethical issues for AI and the global efforts to formulate an effective AI Code of Ethics.
(To be continued)