The Fairness Code

The task of ensuring the fairness of AI models cannot be automated, but a collaboration between AI and humans can enable both parties to offer their best

The Facebook whistle-blower episode has sent shudders through Big Tech, as the algorithms that run their businesses come under heightened scrutiny, leading to ever-tightening and increasingly complex regulations from governments across the world. While some say this is the Big Tobacco moment for technology companies, others view it as a natural process in the evolution of artificial intelligence (AI) – the maturing of a set of technologies that provide the foundation for every business that has digitally transformed itself.

As the debate rages across technological, legal, ethical, and international boundaries, the outcomes will no doubt shape the business models of every tech company, social media platform, and organisation that collates, processes, and analyses data.

The Risk of Unfair Bias

As organisations automate or augment their decision-making with AI, there is a high risk that the resultant decisions will either create or reinforce unfair bias. The negative impact of bias and unfairness in AI does not affect individual victims alone. Organisations that design, develop, and deploy AI can face serious repercussions such as brand and reputational damage, negative sentiment among employees, potential lawsuits or regulatory penalties, and loss of trust from all stakeholders, including customers and the public. Out of this turmoil, an interesting goal has emerged for data scientists – to create the ‘Fairness Code’, the holy grail of data science.

Microsoft’s ‘FATE (Fairness, Accountability, Transparency, and Ethics) in AI’ project claims to study the complex societal implications of artificial intelligence (AI), machine learning (ML), and natural language processing (NLP). Its aim, says the company, is to facilitate computational techniques that are both innovative and responsible, prioritising issues of fairness, accountability, transparency, and ethics as they relate to AI, ML, and NLP. To do so, it draws on fields with a sociotechnical orientation, such as human-computer interaction (HCI), information science, sociology, anthropology, science and technology studies, media studies, political science, and law.

What is the Fairness Code?

So what’s the Fairness Code all about? The World Economic Forum, in a whitepaper, has tried to clear up the confusion around fairness. It states, “decisions made by AI systems are said to be fair if they are objective with regard to protected indicators such as gender, ethnicity, sexual orientation or disability and do not discriminate among various people or groups of people. For example, an AI-based hiring system may recommend candidates who are more outgoing or extroverted because many extroverted candidates were hired in the past. However, this decision does not take into account whether introverted mannerisms could be a result of cultural differences. This could be an unfair outcome of a technically accurate AI system.”
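To make the definition concrete, here is a minimal sketch of the kind of check it implies – comparing a model’s selection rate across a protected indicator. The data, column names, and threshold-free comparison are purely illustrative assumptions, not a prescribed method.

```python
import pandas as pd

# Hypothetical hiring data: one row per candidate, with a protected
# attribute ("gender") and the model's recommendation (1 = recommend).
df = pd.DataFrame({
    "gender":    ["F", "F", "F", "M", "M", "M", "M", "F"],
    "recommend": [0,    1,   0,   1,   1,   0,   1,   1],
})

# Selection rate per group: the share of each group the model recommends.
selection_rates = df.groupby("gender")["recommend"].mean()
print(selection_rates)

# Demographic-parity gap: a large absolute difference between groups
# suggests the system is not objective with regard to the protected
# indicator, even if it is technically accurate overall.
gap = selection_rates.max() - selection_rates.min()
print(f"Demographic parity gap: {gap:.2f}")
```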

Along with fairness, Amazon promises to give users control over their information. It has an entire research area devoted to security, privacy, and abuse prevention. The company is working on “creating a secure suite of hardware, software, and services with privacy, while giving you control over your information.” Amazon has turned bias detection in data models into a business proposition. Its SageMaker Clarify solution helps detect statistical bias in data and machine learning models. It also helps explain why those models are making specific predictions. Achieving this requires the application of a collection of metrics that assess data for potential bias. One Clarify metric in particular – conditional demographic disparity (CDD) – drew upon research done by the Oxford Internet Institute (OII) at the University of Oxford.
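A minimal sketch of the arithmetic behind these two metrics follows. It is an illustration in plain pandas, not Amazon’s implementation; the dataset, column names, and the zero return for empty strata are simplifying assumptions made for the example.

```python
import pandas as pd

def demographic_disparity(df, facet_col, facet_value, label_col):
    """DD for one facet: the facet's share of rejections minus its
    share of acceptances. Positive values mean the facet receives
    proportionally more rejections than acceptances."""
    rejected = df[df[label_col] == 0]
    accepted = df[df[label_col] == 1]
    if len(rejected) == 0 or len(accepted) == 0:
        return 0.0  # simplification: skip strata with no split
    p_rejected = (rejected[facet_col] == facet_value).mean()
    p_accepted = (accepted[facet_col] == facet_value).mean()
    return p_rejected - p_accepted

def conditional_demographic_disparity(df, facet_col, facet_value,
                                      label_col, group_col):
    """CDD: the size-weighted average of DD computed within each
    stratum of a conditioning attribute (e.g. income band or role)."""
    total = len(df)
    cdd = 0.0
    for _, stratum in df.groupby(group_col):
        cdd += len(stratum) / total * demographic_disparity(
            stratum, facet_col, facet_value, label_col)
    return cdd

# Hypothetical loan data: outcome (1 = approved), a protected facet,
# and a conditioning attribute.
df = pd.DataFrame({
    "gender":      ["F", "F", "M", "M", "F", "M", "F", "M"],
    "income_band": ["low", "low", "low", "low",
                    "high", "high", "high", "high"],
    "approved":    [0, 1, 1, 1, 1, 1, 0, 1],
})

print(conditional_demographic_disparity(
    df, facet_col="gender", facet_value="F",
    label_col="approved", group_col="income_band"))
```

The point of conditioning on strata is that an aggregate disparity can shrink, vanish, or even reverse within subgroups (the pattern familiar from Simpson’s paradox), so a conditional measure gives a fairer reading of where any disadvantage actually arises.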

Fairness cannot be automated

Nevertheless, there is a growing opinion that ensuring fairness of AI models cannot be entrusted to AI – in other words: fairness cannot be automated. Fairness is a social construct that humans use to coordinate their interactions and subsequent contributions to the collective good, and it is subjective. An AI decision-maker should be evaluated on how well it helps people connect and cooperate; people will consider not only its technical aspects but also the social forces operating around it.

Humans as the Devil’s Advocate

A Harvard Business Review paper recommends that the fairness issue is best addressed when the AI model is evaluated by a human devil’s advocate – a reviewer who deliberately provokes debate to test the strength of the arguments generated by the model. Although humans are much less rational than machines and are to some extent blind to their own inappropriate behaviours, research shows that they are less likely to be biased when evaluating the behaviours and decisions of others.

In view of this insight, the best strategy for achieving AI fairness must involve collaboration between AI and humans. Both parties can bring their best abilities to the table to create an optimal prediction model adjusted for social norms.
