Algorithmic Hygiene → Trustworthy AI

Mitigation of biases crucial to AI longevity

The myriad ways in which artificial intelligence has already seeped into our lives may come as a surprise to many. The mass-scale digitisation of data, along with the host of emerging technologies that use it, has transformed the way we do things across most sectors, including retail, transportation, energy and advertising. And it is not just economic spheres that are changing: AI has been deployed across several aspects of governance and democracy as well, with the aim of making decision-making more objective and accurate.

Whilst it has helped automate both simple and complex decision-making processes in the private and public sectors, algorithmic decision-making comes with its own set of caveats that must be handled carefully if these systems are to remain effective in the long run.

On Bias and Its Handling

Before algorithmic models became central to decision-making, decisions were typically shaped by state, federal and local laws designed to maximise fairness, equity and transparency. These processes have today been replaced by algorithmic ones with far greater statistical rigour and efficiency. Massive volumes of micro and macro data are fed into these models, influencing decisions that range from movie recommendations and policy setting to banks determining credit-worthiness.

However, because machines can treat similarly situated people and objects differently on the basis of even slight variations in parameters, the risk of amplifying pre-existing human biases is often high, particularly where protected groups are affected. For example, according to research by Brookings, “automated risk assessments used by U.S. judges to determine bail and sentencing limits can generate incorrect conclusions, resulting in large cumulative effects on certain groups, like longer prison sentences or higher bails imposed on people of colour.”

This is a prime example of what is known in the community as ‘bias’: outcomes that are systematically less favourable to individuals or groups without any relevant difference that justifies the harm. Algorithmic bias can emanate from several sources, but its primary causes are the use of incomplete or unrepresentative training data in model development, or the training of models on data that reflects pre-existing inequalities. If left unchecked, biased algorithms can have a major collective disparate impact on certain groups, even without any intention on the programmer’s part to discriminate.
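To make the point about unrepresentative training data concrete, here is a minimal sketch (not from the article) of a representativeness check in Python. It assumes a pandas DataFrame with a hypothetical group column and assumed reference population shares; all names and figures are illustrative.

    import pandas as pd

    # Hypothetical training set with a demographic "group" column (illustrative only).
    train_df = pd.DataFrame({
        "group": ["A"] * 900 + ["B"] * 100,
        "label": [1, 0] * 500,
    })

    # Assumed shares of each group in the population the model will actually serve.
    population_share = {"A": 0.70, "B": 0.30}

    # Compare each group's share of the training data with its population share;
    # a large shortfall flags the data as unrepresentative of that group.
    train_share = train_df["group"].value_counts(normalize=True)
    for group, expected in population_share.items():
        observed = train_share.get(group, 0.0)
        print(f"group {group}: train share {observed:.2f} vs population {expected:.2f}")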

According to Brookings: “Amazon made a corporate decision to exclude certain neighbourhoods from its same-day Prime delivery system. Their decision relied upon the following factors: whether a particular zip code had a sufficient number of Prime members, was near a warehouse, and had sufficient people willing to deliver to that zip code. While these factors corresponded with the company’s profitability model, they resulted in the exclusion of poor, predominantly African-American neighbourhoods, transforming these data points into proxies for racial classification. The results, even when unintended, discriminated against racial and ethnic minorities who were not included.”

The exploration of the intended and unintended consequences of algorithms is both necessary and timely, particularly since current public policies may not be sufficient to identify, mitigate, and remedy consumer impacts.

Maintaining Hygiene

In the face of the unintended biases that algorithms may propagate, it is essential for all parties involved to maintain a proper degree of algorithmic hygiene, in which specific causes of bias are identified and neutralised. In the public policy sphere, this means updating civil rights and non-discrimination laws to cover digital practices, using regulation to encourage anti-bias experimentation, and exercising prudence in the use of sensitive information for mitigating bias.

When tackling issues such as cleaning training data and accruing more rounded, representative datasets, considerable prudence is still needed, as even attempts to correct biases in training data can produce problematic results. It is thus essential for data scientists to build a ‘bias detection’ stage into the algorithmic life cycle, in which all sources of bias are carefully scrutinised and mitigated.
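A bias-detection pass need not be elaborate to be useful. As a minimal sketch, assuming the model’s decisions are already collected in a pandas DataFrame (the column names below are hypothetical), one could compare how often each group receives the favourable outcome:

    import pandas as pd

    # Hypothetical scored decisions: one row per applicant (illustrative data only).
    results = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B"],
        "approved": [1,   1,   0,   1,   0,   0],
    })

    # Favourable-outcome (approval) rate per group.
    rates = results.groupby("group")["approved"].mean()
    print(rates)

    # A simple demographic-parity gap: the spread between the best- and
    # worst-treated groups. A large gap is a flag for closer scrutiny,
    # not proof of discrimination on its own.
    print(f"demographic parity gap: {rates.max() - rates.min():.2f}")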

Sensitive information must likewise be handled with the utmost care: research has found that sensitive attributes are themselves a source of algorithmic bias in several cases. At the same time, blinding algorithms to sensitive data in the name of privacy can end up exacerbating certain kinds of bias, because other features can act as proxies for the attribute that was removed. Data scientists and algorithm operators should therefore be transparent about how sensitive information is handled.
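The point about blinding can be illustrated with a small sketch: even after a sensitive column is dropped, another feature may still encode it as a proxy, much like the zip codes in the Amazon example above. All column names and values here are hypothetical.

    import pandas as pd

    # Hypothetical records in which neighbourhood closely tracks the sensitive attribute.
    df = pd.DataFrame({
        "sensitive_attr": ["X", "X", "X", "Y", "Y", "Y"],
        "neighbourhood":  ["N1", "N1", "N2", "N3", "N3", "N3"],
    })

    # Dropping the sensitive column "blinds" any model trained on the remaining features...
    features = df.drop(columns=["sensitive_attr"])
    print(features.columns.tolist())

    # ...but a cross-tabulation shows the remaining feature still reveals the attribute,
    # so the bias can persist through the proxy.
    print(pd.crosstab(df["neighbourhood"], df["sensitive_attr"], normalize="index"))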

Although there is much research on the potential trade-offs between model accuracy and bias mitigation, what is needed most urgently is an ethical framework with guardrails for machine learning systems and tasks. Recently released guidelines from the European Union stress the following aspects of ‘Trustworthy AI’: (1) human agency and oversight, (2) technical robustness and safety, (3) privacy and data governance, (4) transparency, (5) diversity, non-discrimination and fairness, (6) environmental and societal well-being, and (7) accountability. These principles interpret fairness through the lenses of equal access, inclusive design processes and equal treatment. Such aspects will be crucial in assuring the longevity of AI systems.
