The Zen of AI

Buddhism may provide enlightenment for technology as AI ethics takes centre stage in 2021

One of the world’s largest organizations, Google, has been in the news for the wrong reasons, and the cause is ethics in technology, especially Artificial Intelligence (AI). An ethics crisis hit the organization and snowballed into technology employees coalescing to form the first-ever trade union in the industry. The incident, which followed the sacking of Timnit Gebru, a Google researcher who had co-authored a landmark work on the discriminatory nature of facial recognition, put the ethics debate back in the spotlight, making it one of the most critical topics for the future of AI in 2021.

AI is often considered a black box, a mysterious technology that divulges few clues about how findings are reached. Despite general acceptance that AI can identify patterns and uncover trends that are difficult for humans to discern (thereby boosting an organisation’s operational efficiency and agility), decision-makers are often reluctant to act on AI-driven insights. Business users are at times skeptical or distrustful of the information gained from AI models because they don’t understand how the results are obtained.

Unfortunately, this lack of clarity can make enterprises slow to adopt AI on a broad scale or to make significant changes to business processes based on AI-driven findings. The efficiency- and productivity-enhancing potential of the technology is therefore missed. In response, AI platform providers are developing tools to foster greater trust in AI models. They recognize that for AI to be more broadly adopted, business users who aren’t AI specialists need to understand the factors that lead to the findings generated by AI models, as well as the safeguards that ensure model quality.

Line-of-business leaders don’t need a detailed methodology, but they do require basic insights into why a model yields the results it does, assurance that it is fair, and confirmation that it doesn’t suffer from drift. AI platform providers are addressing this need by creating new tools, such as those that identify the factors which contribute to model findings or help ascertain data lineage. Key to the effectiveness of these tools is the degree to which complex information can easily be communicated to line-of-business users.
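To illustrate the kind of analysis such tools perform, here is a minimal sketch of permutation importance, one common model-agnostic way of identifying the factors that contribute to a model’s findings. The scikit-learn model and the public demonstration dataset are assumptions chosen for illustration, not a depiction of any vendor’s actual product:

```python
# Minimal sketch: surfacing the factors behind a model's findings
# via permutation importance. The dataset and model are illustrative
# stand-ins, not a real business application.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Rank and report the top factors driving the model's predictions.
ranked = sorted(
    zip(data.feature_names, result.importances_mean),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Explainability platforms wrap analyses like this in dashboards and plain-language summaries, so line-of-business users can act on the results without reading code.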

Concern over ethics and AI has also opened business opportunities for start-ups like Parity and Fiddler, which help organizations adopt responsible AI practices. The growing crop of start-ups in this space promises organizations ways to develop, monitor, and fix their AI models. They offer a range of products and services, from bias-mitigation tools to explainability platforms. Initially, most of their clients came from heavily regulated industries like finance and health care. But increased research and media attention on issues of bias, privacy, and transparency have shifted the focus of the conversation. New clients are often simply worried about being responsible, while others want to “future-proof” themselves in anticipation of regulation.
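To make the idea of a bias-mitigation tool concrete, here is a minimal sketch of one widely used fairness check, the demographic parity difference: the gap in positive-prediction rates between groups. The predictions and group labels below are invented purely for illustration:

```python
# Minimal sketch of one common fairness check: demographic parity
# difference, the gap in positive-outcome rates between groups.
# Predictions and group labels are invented for illustration.
import numpy as np

predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # model decisions
groups = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

def demographic_parity_difference(y_pred, group):
    """Difference between the highest and lowest positive-prediction
    rates across groups; 0.0 means equal rates for all groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

gap = demographic_parity_difference(predictions, groups)
print(f"Demographic parity difference: {gap:.2f}")  # 0.20 for this data
```

Real bias-mitigation tools compute many such metrics across protected attributes and flag models whose gaps exceed a chosen threshold.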

The European Union, Japan, and the US have been coming up with regulations on the ethical use of AI. However, MIT Technology Review argues that most AI ethics guidelines are being written in Western countries, which means that the field is dominated by Western values such as respect for autonomy and the rights of individuals, especially since the few guidelines issued in other countries mostly reflect those in the West.

It introduces an interesting concept: that Buddhist teachings may be better suited to laying the foundations of an ethical AI. Insights derived from Buddhist teaching, says MIT Technology Review, could benefit anyone working on AI ethics anywhere in the world, not only in traditionally Buddhist cultures (which are mostly in the East and primarily in Southeast Asia). Buddhism proposes a way of thinking about ethics based on the assumption that all sentient beings want to avoid pain. Thus, the Buddha teaches that an action is good if it leads to freedom from suffering.

The implication of this teaching for artificial intelligence is that any ethical use of AI must strive to decrease pain and suffering. For example, facial recognition technology should be used only if it can be shown to reduce suffering or promote well-being. Moreover, the goal should be to reduce suffering for everyone, not just those who directly interact with AI.

We can of course interpret this goal broadly to include fixing a system or process that is unsatisfactory, or changing any situation for the better. Using technology to discriminate against people, or to spy on and repress them, would clearly be unethical. When there are grey areas or the nature of the impact is unclear, the burden of proof would fall on those deploying a particular application of AI to show that it does not cause harm.

A Buddhist-inspired AI ethics would also recognize that living by these principles requires self-cultivation. This means that those who are involved with AI should continuously train themselves to get closer to the goal of totally eliminating suffering. Attaining the goal itself is not so important; what counts is undertaking the practice to attain it. Designers and programmers should practise by recognising this goal and laying out the specific steps their work would take for their product to embody the ideal. That is, the AI they produce must be aimed at helping the public eliminate suffering and promote well-being.

For any of this to be possible, companies and government agencies that develop or use AI must be accountable to the public. Accountability is also a Buddhist teaching, and in the context of AI ethics it requires effective legal and political mechanisms as well as judicial independence. These components are essential in order for any AI ethics guideline to work as intended.
