Let’s talk Ethics, again

The lack of a robust ethical framework will hurt organisations sooner rather than later – here’s why

We’ve been discussing ethics in AI for months now: hundreds of organisations, ranging from BMW to Google to even the Government of Canada, have already built it into their operational structures. Yet the topic keeps cropping up, primarily because the principles in current ethical frameworks for artificial intelligence remain ill-defined. Although ‘environmental well-being, transparency and human agency’ sound good on paper, implementing them in a well-defined and structured framework remains a problem. With global organisations embracing AI at increasing speed, adopting a robust ethical framework will be crucial in making sure AI does not cause unintended harm and remains a driver of innovation for the long term.

In the Realm of Paper Tigers

In a research project titled ‘In the realm of paper tigers – exploring the failings of AI ethics guidelines’, German non-profit research and advocacy organisation AlgorithmWatch launched the AI Ethics Guidelines Global Inventory in order to compile “frameworks and guidelines that seek to set out principles of how systems for automated decision-making (ADM) can be developed and implemented ethically.” The strong response to the project has allowed them to expand the database and arrive at some rather interesting observations.

IMAGE: Global Ethics frameworks, AlgorithmWatch (2020)

According to AlgorithmWatch, of the 160 global framework documents in its database, only 6% have practical enforcement mechanisms. Most private and public sector policies are still voluntary – or just general recommendations. They write: “Strikingly, the private sector relies heavily on voluntary commitments, while state actors mainly make recommendations for administrative bodies. Many guidelines contain wording that plays down the scope of the document, presenting them as an orientation aid or proposal.”

In such a scenario, it is rather tricky for technical personnel to translate high-level guidance into practice – primarily because the framework itself isn’t robust or specific enough. Ethical AI then ends up being little more than a good marketing campaign, doing little to actually prevent the harms associated with AI – the very reason the framework was required in the first place. In fact, many argue that creating frail frameworks does organisations more harm than good, creating a false sense of risk mitigation at a time when risk still looms large.

A Roadmap for the Future

According to the Harvard Business Review, going forward, organisations must ensure that ethical AI frameworks “are also developed in tandem with a broader strategy for ethical AI that is focused directly on implementation, with concrete metrics at the centre. Every AI principle an organization adopts, in other words, should also have clear metrics that can be measured and monitored by engineers, data scientists, and legal personnel.”
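To make that idea concrete, consider what one such monitored metric could look like in practice. The Python sketch below is purely illustrative – the data, grouping and threshold are assumptions, not part of the HBR guidance – but it shows how a principle like ‘fairness’ can be reduced to a single number that engineers can track:

import numpy as np

def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in positive-prediction rates between any two groups."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Hypothetical monitoring run: model decisions and a protected attribute.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

gap = demographic_parity_gap(preds, groups)
THRESHOLD = 0.2  # illustrative policy limit agreed with risk and legal teams
print(f"Demographic parity gap: {gap:.2f} (limit {THRESHOLD})")
if gap > THRESHOLD:
    print("Metric breached - escalate for review")

The detail that matters here is not the arithmetic but the explicit threshold: once a principle is expressed as a number with a limit, it can be versioned, logged and audited alongside the principle it operationalises.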

The challenge, however, is considerable. Applying principles of ethical AI in real-world deployment environments will require a significant investment of time and resources spanning data science, risk and legal departments – and, in some cases, external expertise. At the same time, there is no one-size-fits-all approach that can quantify potential risks for every organisation at once. Each organisation’s ethical AI framework must be unique, built on a thorough analysis of its AI use cases and regulatory jurisdictions using a combination of existing research, technical standards and legal precedents.

According to an HBR report: “In the world of privacy, there are a host of metrics that organizations can adopt to quantify potential privacy violations as well. While there are numerous examples of research on the subject […], a set of techniques called “privacy-enhancing technologies” may be one of the best places to start for operationalizing principles related to privacy. Methods like differential privacy, which have open-source packages that data scientists can adopt out of the box, are based upon the explicit notion that privacy can be quantified in large data sets and have been deployed by many tech giants for years. Similar research exists in the world of AI interpretability and security as well, which can be paired with a host of commonly espoused AI principles like transparency, robustness, and more.”
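As an illustration of just how quantifiable that notion is, here is a minimal sketch of the Laplace mechanism, the textbook building block behind differential privacy. The query, sensitivity and privacy budget below are hypothetical; a real deployment would reach for a vetted open-source package (such as OpenDP or Google’s differential privacy library) rather than hand-rolled noise:

import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release a numeric query result with epsilon-differential privacy.

    Noise is drawn from Laplace(0, sensitivity / epsilon): the smaller the
    privacy budget epsilon, the more noise and the stronger the guarantee.
    """
    return true_value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# Hypothetical query: count of records matching a sensitive condition.
true_count = 128    # illustrative value
sensitivity = 1.0   # adding or removing one person changes a count by at most 1
epsilon = 0.5       # privacy budget chosen by the data-governance team

print(f"Privately released count: {laplace_mechanism(true_count, sensitivity, epsilon):.1f}")

The point is that ‘privacy’ here becomes a measurable parameter – epsilon – that can be set, logged and audited, which is exactly the kind of concrete metric the HBR guidance calls for.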

The question isn’t if the lack of ethical AI will cause adverse effects for an organisation – the question is when. In such a case, prevention will serve a far greater purpose than cure.

Reference: https://inventory.algorithmwatch.org/
