Time to Leash the Demon

Governments across the world are planning regulatory interventions as the toxic roles of misused AI and doctored social media content inflict grave consequences. Smart control is the need of the hour.

Recent developments at the White House clearly indicate that US President Joe Biden and his administration are contemplating regulatory control over the AI industry. Last week, an article by White House officials plainly stated that the new administration is aware of the detrimental role of misused AI technologies. It was co-authored by Eric Lander – the President’s science advisor and director of the White House Office of Science and Technology Policy (OSTP) – and Alondra Nelson – OSTP deputy director for science and society – and published in Wired. The authors make a strong case when they write: “Powerful technologies should be required to respect our democratic values and abide by the central tenet that everyone should be treated fairly…Codifying these ideas can help ensure that.”

This assumes additional significance after former President Trump’s use of social media remained at centre stage of constant criticism. The article cited recent cases where biased AI facial recognition systems led to false arrests, or healthcare algorithms discounted certain diseases in marginalised groups. The authors also pointed out that while some cases involved deliberate abuse of technology, the majority stem from unintentional biases – technological loopholes waiting to be plugged. They admitted that the world is yet to have rules or safeguards governing the use of AI technology, and that the need of the hour is a “Bill of Rights for AI,” steep as the ask may be.

In a parallel development, the White House Office of Science and Technology Policy has officially put out a “public request for information,” inviting AI experts and anyone else interested to reach out via email to [email protected].

A Watershed Moment?

Both Facebook and Twitter finally banned the outgoing President Trump in January this year. While Facebook banned him indefinitely, the Twitter ban is understood to be permanent. This was a step both social media giants were compelled to take in the face of rising public discontent against the propagation of inflammatory and doctored content. Harsh as it might seem, banning a President could be that watershed moment in the history of the social media industry. If taken ahead in the right spirit, this could well be the dawn of a new era of reforms, with course corrections made both through control policies and self-imposed moderation.

For a long, long time, social media companies have professed that all the content they host is user-generated, over which they have no ownership or control. Some even vouched for upholding freedom of expression. However, as things got murkier and hate posts started causing considerable damage, we have witnessed editorial censorship applied to posts – or to individuals propagating such posts – on a case-by-case basis. It goes without saying that most of these cases involved high-profile personalities or establishments, ruffling whose feathers could have harmed the company. It is time they also considered the harm being caused to society overall; excuses will no longer do. It is time for serious regulation, as the ramifications are far-reaching.

Manipulating choices and swaying elections

The recent case of Ms. Frances Haugen – former product manager at Facebook who acted as a whistle-blower – highlighted how Facebook, WhatsApp and Instagram allowed hateful content to spread unchecked, triggering a chain of toxicity that affected 3.5 billion people across the world.

But this is not the first time that Facebook has been accused. In September 2020, Sophie Zhang penned a devastating 8,000-word exit memo to Facebook. Employed as a data scientist, Zhang became consumed by the task of finding and taking down fake accounts and likes that were being used to sway elections globally. She had identified dozens of countries, including India, Mexico, Afghanistan, and South Korea, where such abuse was observed. The company did little despite Zhang’s repeated efforts to bring the issue to the attention of leadership.

The biggest privacy breach in Facebook’s history came to light in March 2018, when it emerged that Cambridge Analytica, a consultancy involved in Donald Trump’s 2016 presidential campaign, had secretly siphoned the personal data of millions of Americans from their Facebook accounts to influence how they voted. Although Facebook admitted to the breach, little action was evident. More recently, we again saw far-right agitators posting openly about plans to storm the US Capitol before doing just that on January 6.

Wilful evasion must stop

Something which these platforms could do right away is stop promoting hateful posts to grab user attention. Consider the following cases:

  • CASE 1: Before quitting in May this year, Haugen combed through Facebook Workplace, the company’s internal employee social media network, and gathered a wide swath of internal reports and research in an attempt to demonstrate conclusively that Facebook had wilfully chosen not to fix the problems on its platform.
  • CASE 2: Samidh Chakrabarti, the former leader of Facebook’s Civic Integrity team on which Haugen had worked, left Facebook in August 2021. He said on Twitter that although Facebook employs some of the most advanced specialists in the world to research the impact of its products on users, democracy, and vulnerable groups, their findings are often ignored because the company’s interests could be hurt if the proposed fixes were applied.
  • CASE 3: Yael Eisenstat joined Facebook as the global head of election integrity operations for political advertising in 2018. She left after a project she worked on – a tool to scan political ads for misinformation and subject them to fact-checks – was rejected by senior leaders.

It is common practice to promote sensational (read: biased and inflammatory) content by pushing it to the top of a thread, so as to maximise user engagement. This is a kind of bias the world can do without. Why not follow a strict chronological timeline for all posts? Users would then have to specifically search for a hateful post – instead of having controversial content wilfully thrust in their faces. This might not stop bias altogether, but at least deliberate promotion can be prevented. The sketch below illustrates the difference between the two orderings.
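To make the contrast concrete, here is a minimal, hypothetical sketch in Python. It assumes a toy Post record with an engagement_score field; neither the field names nor the scores reflect any platform’s actual schema or ranking model.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical post record; field names are illustrative assumptions,
# not any platform's real schema.
@dataclass
class Post:
    author: str
    text: str
    created_at: datetime
    engagement_score: float  # stand-in for likes/shares/comments a ranking model might optimise

def rank_by_engagement(posts: list[Post]) -> list[Post]:
    # Engagement-first ordering: the most provocative content floats to the top.
    return sorted(posts, key=lambda p: p.engagement_score, reverse=True)

def rank_chronologically(posts: list[Post]) -> list[Post]:
    # Neutral ordering: newest first, regardless of how "viral" a post is.
    return sorted(posts, key=lambda p: p.created_at, reverse=True)

posts = [
    Post("a", "calm personal update", datetime(2021, 10, 5, tzinfo=timezone.utc), 12.0),
    Post("b", "inflammatory rumour", datetime(2021, 10, 1, tzinfo=timezone.utc), 950.0),
    Post("c", "routine news link", datetime(2021, 10, 4, tzinfo=timezone.utc), 40.0),
]

print([p.text for p in rank_by_engagement(posts)])    # rumour surfaces first
print([p.text for p in rank_chronologically(posts)])  # newest first, content-neutral
```

Even in this toy example, the engagement-first sort surfaces the most provocative post, while the chronological sort remains content-neutral.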

Skeletons in every cupboard

No use blaming Facebook alone, though. In December 2020, Dr Timnit Gebru, an esteemed researcher in the field of AI ethics, was allegedly fired by Google after sending an internal email that accused the company of “silencing marginalised voices”. Dr Gebru is well-known for her work on racial bias in applied technology and has been a strong critic of AI systems that fail to recognise black faces. She said she was fired by Google after she questioned an order to retract a research paper which concluded that AI systems mimicking language could hurt marginalised populations.

Although the company disputed her version, scores of professionals in the field, including her co-workers, spoke out in Dr Gebru’s support. Hundreds of colleagues signed a message accusing Google of racism and censorship. Joy Buolamwini, her co-author on another paper, told the press plainly that “Ousting Timnit for having the audacity to demand research integrity severely undermines Google’s credibility for supporting rigorous research on AI ethics and algorithmic auditing.”

Before the dust had settled, Google fired staff scientist Margaret Mitchell in February 2021. The company’s version was that Mitchell had violated organisational policies by moving electronic files outside the company. However, it was Mitchell who had started the ethical AI team at Google, co-led it with Dr Gebru for two years, co-authored the paper that reportedly led to Gebru’s dismissal, and publicly criticised the company for firing Gebru and ignoring her findings. Naturally, Mitchell’s sacking was wide open to interpretation.

AI regulatory scenario

Since 2016, many national and international authorities have been adopting strategies, action plans, and policy papers on AI. Let’s cast a quick glance at the global AI regulatory scenario.

  • Global: In 2018, Canada and France proposed a G7-backed international panel to study and steer the global impact of AI on individuals and economies. It was launched in June 2020 as the Global Partnership on Artificial Intelligence. The founding members were Australia, Canada, the European Union, France, Germany, India, Italy, Japan, the Republic of Korea, Mexico, New Zealand, Singapore, Slovenia, the USA, and the UK. Its stated objective is to develop AI in accordance with human rights and democratic values.
  • Council of Europe: The CoE is an international organisation of 47 member states, including all 29 signatories of the EU’s Declaration of Cooperation on Artificial Intelligence. Its members share a legal obligation to uphold common human rights standards and aim to identify areas where AI encroaches on existing standards of human rights, democracy and law.
  • European Union: Most EU member countries have their own AI strategies, which are broadly guided by the European Strategy on Artificial Intelligence, supported by the EU Commission’s High-Level Expert Group on Artificial Intelligence. This group focuses on Trustworthy AI and has published reports on AI safety, liability, and ethics. Since 2020, the Commission has been working on an AI-legislation proposal.
  • Canada: The Pan-Canadian Artificial Intelligence Strategy (2017) aims at increasing the number of outstanding AI researchers and skilled graduates in Canada, as well as developing ‘global thought leadership’ on the economic, ethical, policy and legal implications of AI.
  • United Kingdom: On AI issues, the UK is guided by its Digital Economy Strategy 2015-2018. The Department for Digital, Culture, Media and Sport advises on data ethics, while the Alan Turing Institute provides guidelines for the design and implementation of responsible AI systems.
  • United States of America: The Obama administration had considered the risks and regulation of artificial intelligence as early as 2016, when the US National Science and Technology Council published its report Preparing for the Future of Artificial Intelligence. The National Security Commission on Artificial Intelligence was set up in 2018 to examine AI in the context of national security. In January 2020, the White House Office of Science and Technology Policy drafted a Guidance for Regulation of Artificial Intelligence Applications, with ten principles for United States agencies on whether and how to regulate AI. The Biden administration’s current initiative seems to be the latest effort towards that end.
  • India: No specific law has yet been formulated by the Indian government on artificial intelligence or machine learning. So far, the focus has been more on promoting AI systems to maximise the benefits of their application. However, in February this year, the NITI Aayog published an Approach Document listing the Principles for Responsible AI. The paper identifies the following broad principles for responsible management of AI:

1. Safety and Reliability
2. Equality
3. Inclusivity and Non-discrimination
4. Privacy and Security
5. Transparency
6. Accountability
7. Protection and reinforcement of positive human values

The full paper is available online at: https://www.niti.gov.in/sites/default/files/2021-02/Responsible-AI-22022021.pdf

It looks like authorities worldwide are finally waking up to the menace of online hate, and we can expect regulatory developments soon. However, the policy landscape for AI is still emerging, with complex ramifications spanning multiple jurisdictions across the globe.
