While taking crucial decisions that impact society, AI systems are expected to factor in ethical dilemmas. But are they ready for it yet?
As artificial intelligence (AI) is increasingly being used in decision-making, certain doubts have begun to accumulate in various quarters. Real-life decision-making relies on multiple factors when it involves humans. Mathematical logic is never enough for such scenarios. And ethics – a factor that can hardly be defined within the parameters of formulae – is one grey area that machine algorithms have yet to come to grips with. This is where the question pops up: can AI really ever be ethical?
It is not that machine logic works on unethical principles. At the core of the problem is the fact that any logic, driven by mathematical possibilities, is not concerned with the ethical consequences of activities. Such systems are neither moral nor immoral – they are simply “amoral”, and considerations based on human views on ethical dilemmas are beyond their frame of reference.
And this is where doubts creep in. In an age when machine learning aims to replace key human contact points to reduce uncertainties and automate processes, can we allow algorithms to decide on situations that might involve ethical considerations? As long as AI was just another component on manufacturing shop floors and in supply-delivery logistics chains, this issue did not arise. But now AI is getting more and more involved in human decisions – from job interviews to content generation to granting parole or financial disbursals. Each of these activities carries societal and human implications, where decisions might not be possible based on a binary data-sheet of zeroes and ones. A human mind would naturally factor in moral and ethical considerations while making the right choices. How would AI do that when it is totally oblivious of ethical parameters?
The misadventure that Microsoft suffered with its chatbot Tay is exemplary. The bot was released on Twitter in 2016 to showcase Microsoft’s progress in natural language processing (NLP). It was meant to be a fun release where people could engage in online conversations with the bot. Within hours, mischievous and tech-savvy netizens had manipulated the threads of conversation such that Tay was tweeting a series of politically and/or socially unacceptable comments – like outright pro-Nazi or anti-feminist statements. Microsoft had no alternative but to hurriedly withdraw the bot.
It is obvious that Tay did not have a “mind” and hence it had no personal “views” on the controversial topics – unethical or ethical. Like all AI systems, it simply lacked the concepts of “right” and “wrong.” The offensive comments it posted were the output of mindless statistical analysis; they were the result of offensive input data it ingested from the Internet. The statistically high occurrence of these statements led the bot to accept them as the most probable responses – completely missing their ethical or social significance.
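The failure mode is easy to see in miniature. The sketch below (a deliberately simplified, hypothetical model – real chatbots are far more sophisticated) picks whichever reply occurred most often for a given prompt. Frequency alone decides the answer; there is no ethics check anywhere in the loop, so a coordinated flood of toxic replies becomes the “best” response.

```python
from collections import Counter

def most_probable_reply(prompt, conversation_log):
    """Return the reply seen most often after this prompt in past chats.
    Purely frequency-driven: the code has no notion of right or wrong."""
    replies = [reply for (p, reply) in conversation_log if p == prompt]
    if not replies:
        return None
    # most_common(1) yields the single highest-frequency reply
    return Counter(replies).most_common(1)[0][0]

# Toy training data: coordinated users have flooded the log with a
# toxic reply, so sheer repetition makes it the most probable answer.
log = [
    ("hello", "hi there"),
    ("what do you think?", "OFFENSIVE SLOGAN"),
    ("what do you think?", "OFFENSIVE SLOGAN"),
    ("what do you think?", "I am not sure"),
]
print(most_probable_reply("what do you think?", log))  # OFFENSIVE SLOGAN
```

Nothing in the statistics flags the winning reply as unacceptable; that judgement would have to be bolted on from outside the model.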
Such lapses in ethical understanding have a parallel in algorithms being unable to grasp the relationship between cause and effect. AI systems detect statistical patterns, but understanding the causal mechanisms in real-world dynamics that underlie those patterns is beyond their grasp. As AI expert Judea Pearl succinctly puts it: “The language of algebra is symmetric: If X tells us about Y, then Y tells us about X…. [however] Mathematics has not developed the asymmetric language required to capture our understanding that if X causes Y that does not mean that Y causes X.”
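Pearl’s point about symmetry can be demonstrated in a few lines. The sketch below uses illustrative made-up numbers: altitude plausibly causes temperature to drop, never the reverse, yet the correlation statistic – the kind of association machine learning feeds on – is identical in both directions and so carries no trace of which variable is the cause.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Illustrative data: altitude (metres) causes temperature (Celsius)
# to fall; the reverse causal direction is physically meaningless.
altitude = [0, 500, 1000, 1500, 2000]
temperature = [30, 27, 24, 21, 18]

# The statistic is perfectly symmetric, so it cannot distinguish
# "altitude causes temperature" from "temperature causes altitude".
print(pearson(altitude, temperature) == pearson(temperature, altitude))  # True
```

Recovering the asymmetry – knowing that intervening on altitude changes temperature but not vice versa – requires causal knowledge that lies outside what the data alone can supply.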
Causal reasoning is an essential part of human intelligence. It governs how we decode the world around us and interact with it. Unless AI can grasp this connection, it will never fully capture the world or be able to communicate with us on our terms.
AI systems follow predetermined rules, but this rule-based approach only gets us so far. It is impossible to enumerate all possible conditions, and consequent implications, to derive a set of rules that, taken collectively, would guarantee ethical conclusions. This is because human values are nuanced, amorphous, often apparently contradictory – and practically impossible to reduce to a predetermined set of definitive maxims. That is why we are humans, after all!
Little headway has been made in developing ethically sensitive AI systems. For now, human-machine collaboration looks like the best compromise.