The Algorithmic War

Part 1

How the ‘Governments vs. Big Tech’ war is taking on sinister proportions and could alter the approach of the technology industry 

A high-stakes game is unfolding at blazing speed between big tech and governments around the world. While China has cracked down in its characteristically heavy-handed authoritarian style, the US Senate is debating the issue with growing urgency after a Facebook whistle-blower alleged that the social media behemoth, which also owns WhatsApp and Instagram, uses algorithmic models that increase engagement and revenues by amplifying hateful and harmful messages. It was only a matter of time before governments acted to seize back control from big tech and social media, which had gained unprecedented power to influence citizens and sway political currents – the pace has simply become frenzied as data governance and ethics take centre stage.

Authorities in at least 48 countries pursued new rules for tech companies on content, data, and competition over the past year. With a few positive exceptions, the push to regulate the tech industry – which stems in some cases from genuine problems like online harassment and manipulative market practices – is also being exploited by some governments with authoritarian tendencies to subdue free expression and gain greater access to private data. The Facebook episode reveals how such data has been misused, and simultaneously provides a reason for authorities to intervene with harsher regulations.

Archaic rules

Some privacy laws are so archaic – drafted even before the age of social media – that they must be rewritten to address the urgent issues cropping up in the wake of the incredible, seemingly unstoppable march of social media and eCommerce platforms. There were 4.48 billion social media users around the world in July 2021 – 57% of the total global population. More than 9 in 10 internet users now use social media each month. User numbers have also surged in the past 12 months, with 520 million new users joining in the year to July 2021. That translates to annualized growth of 13.1%, or an average of roughly 16½ new users every second. Adobe forecasts that global e-commerce sales will reach US$4.2 trillion this year, with US consumers accounting for close to one-quarter of that spending.
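To see how these headline numbers fit together, here is a quick back-of-the-envelope check in Python, using only the figures cited above (4.48 billion users in July 2021 and 520 million new users over the preceding year):

```python
# Back-of-the-envelope check of the growth figures cited above.
# Assumes 4.48 billion users in July 2021 and 520 million new users
# over the preceding 12 months, as stated in the paragraph above.
users_july_2021 = 4.48e9
new_users = 520e6
users_july_2020 = users_july_2021 - new_users

annual_growth = new_users / users_july_2020      # ~0.131, i.e. ~13.1%
per_second = new_users / (365 * 24 * 60 * 60)    # ~16.5 new users per second

print(f"Annualized growth: {annual_growth:.1%}")
print(f"New users per second: {per_second:.1f}")
```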

Against this tsunami of social media and eCommerce, accelerated by the pandemic, regulation of the artificial intelligence (AI) algorithms that drive these business models has not kept pace. Several regulators have instead taken an ex-post approach, favouring technology-neutral rules or limiting themselves to ethics, privacy, and security guidelines.

Moving from policy to implementation

Initiatives to regulate the use of AI are now trying to move urgently from policy development to implementation. At the same time, many regulators are wary of over-regulating the AI sector before it has fully matured, as this could stifle innovation. Developing trustworthy and robust AI systems will involve multiple professional stakeholders, such as developers, statisticians, academics, and data cleansers.

While the European Union took the lead with its General Data Protection Regulation (GDPR), the US – especially in the aftermath of the Facebook whistle-blower allegations – is considering a slew of tough measures to protect privacy and, most importantly, to clamp down on the very business model of social media and eCommerce platforms: algorithms that boost the engagement of certain types of posts and digital interactions. This covers everything from the posts Facebook pushes up our timelines to the recommendations eCommerce platforms serve, based on machine learning models that learn our preferences and push advertisements accordingly.
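To illustrate the kind of mechanism regulators are targeting, here is a deliberately simplified, hypothetical Python sketch of engagement-based feed ranking. It is not Facebook's actual algorithm; the weights, field names, and scoring function are invented purely to show how optimising for predicted engagement pushes certain posts to the top regardless of their content:

```python
# Hypothetical sketch of engagement-based ranking, for illustration only.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_reactions: float   # model-estimated likes/reactions
    predicted_comments: float    # model-estimated comments
    predicted_shares: float      # model-estimated shares

def engagement_score(post: Post) -> float:
    # Invented weights: interactions that spread content further
    # (comments, shares) count more than passive reactions.
    return (1.0 * post.predicted_reactions
            + 5.0 * post.predicted_comments
            + 10.0 * post.predicted_shares)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Order the feed purely by predicted engagement, highest first.
    return sorted(posts, key=engagement_score, reverse=True)
```

A system like this has no notion of whether a post is hateful or harmful; it only sees that such posts tend to score highly on predicted interactions, which is precisely the concern raised in the whistle-blower's allegations.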

GDPR penalties surge 113%

The GDPR’s primary aim was to enhance individuals’ control and rights over their personal data. As one of the first pieces of regulation in this space anywhere in the world, it will provide the roadmap for future rules. Since the GDPR took effect in May 2018, over 800 fines have been issued across the European Economic Area (EEA) and the UK. Enforcement started off slowly, but between July 18, 2020, and July 18, 2021, both the size and the number of fines increased markedly, with total penalties surging by around 113.5%.

Amazon’s gigantic US$877 million GDPR fine, announced in the company’s July 2021 earnings report, is nearly 15 times larger than the previous record. Google was fined US$50 million in 2019; the case related to how Google provided privacy notices to its users and how it requested their consent for personalized advertising and other types of data processing. The regulator found that Google should have given users more information in its consent policies and granted them more control over how their personal data is processed.

Facebook paid a $5 billion penalty to the Federal Trade Commission to resolve a sweeping investigation into its privacy practices, as well as a £500,000 (about $643,000) fine to the UK government over the Cambridge Analytica scandal in which user data from the social media site was used to influence political outcomes. But critics said the FTC fine, while the largest privacy settlement in the agency’s history, amounted to a slap on the wrist, given that it equated to about a month of revenue for Facebook.

From a slap to a wallop – The government is angry

This time things look different, as the slap might turn into a mighty wallop from the government if it is proved that the social media behemoth was responsible for unforgivable crimes such as allowing hate speech to fuel genocides in countries like Myanmar and Ethiopia. Things look ominous because some of the allegations draw on Facebook’s own internal research. After the whistle-blower interview was aired on television, Connecticut Sen. Richard Blumenthal wrote on Twitter, “Facebook’s actions make clear that we cannot trust it to police itself. We must consider stronger oversight, effective protections for children, & tools for parents, among the needed reforms.” Moreover, eight complaints filed with the US Securities and Exchange Commission charge Facebook with misleading investors, which is a crime under US securities law.

Artificial intelligence observatories

While the Facebook scandal will play out over the next few months, some countries have already set the ball rolling by establishing AI observatories and knowledge centres, which act as collaborative platforms for all stakeholders in the AI space. The key objectives of these regulatory centres are to share insights, analyse best practices for shaping AI-related policy, and identify legal barriers to AI adoption. For example, the Czech Republic launched an AI Observatory and Forum (AIO&F), which is responsible for monitoring the legal aspects of AI and identifying any specific legal barriers to the development of AI technologies.

The EU is concerned about how consumers can be protected from harm as the technology develops. The European Commission (EC) has been at the forefront, having published the first-ever draft AI regulatory framework in April 2021. It adopts a risk-based approach that imposes prohibitions and obligations on AI systems according to their risk level, while the regulatory action plans of a few countries also stipulate sector-specific approaches to AI regulation.
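As a rough illustration of what a risk-based approach means in practice, the sketch below maps the risk tiers described in public summaries of the EC’s April 2021 draft to the kind of treatment each tier receives. The code, the tier names, and the treatment strings are illustrative only, not a legal classification:

```python
# Simplified sketch of risk-tiered treatment under a risk-based AI framework.
# Tier names follow public summaries of the EC's April 2021 draft; the
# descriptions are paraphrased for illustration, not legal text.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited outright
    HIGH = "high"                   # allowed, subject to strict obligations
    LIMITED = "limited"             # transparency obligations (e.g. chatbots)
    MINIMAL = "minimal"             # largely unregulated

def regulatory_treatment(tier: RiskTier) -> str:
    # Map each tier to a one-line summary of its treatment.
    return {
        RiskTier.UNACCEPTABLE: "Prohibited from the market",
        RiskTier.HIGH: "Conformity assessment, documentation, human oversight",
        RiskTier.LIMITED: "Must disclose that users are interacting with an AI system",
        RiskTier.MINIMAL: "No additional obligations under the draft framework",
    }[tier]

print(regulatory_treatment(RiskTier.HIGH))
```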

More on this regulatory approach in our next episode.

(To be continued)
