The case opens up a Pandora’s box about recommendation engines that power billions of dollars of digital ad revenues
Gonzalez v. Google, the case before the US Supreme Court, has riveted worldwide attention as it considers whether Internet platforms like Google, YouTube, etc. are liable for content posted by their users. More importantly, the case also opens up a Pandora’s box about the recommendation engines that power the billions of dollars of ad revenues of companies like Facebook, Google, Apple, Twitter, YouTube, Instagram, et al. Per Statista, worldwide revenue from digital advertising was US$616 billion in 2022 and is expected to grow to US$1,005 billion by 2027.
It is an open secret that social media platform algorithms push up posts with higher engagement that attract more eyeballs and hence are good for their advertising models. Algorithms are designed mainly to optimise engagement. For instance, a recommender engine could be directed to maximise ad revenue on certain content, or to suppress low-quality content that draws few “likes” and “shares”.
Recommendation algos find needles in the largest haystack
If engagement with specific content is deemed valuable to a platform, its algorithm calculates the probability of a user liking, commenting on or sharing it. If that calculated value meets a certain threshold, the content will be displayed in the user’s feed. Algorithms by their very construct do not take “ethical” decisions on whether content is good or bad. The ugly truth is that hateful posts often have higher engagement, and algorithms will lift those up to get even more engagement. Google argues that recommendation algorithms are making it possible to find “the needles in humanity’s largest haystack”.
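The thresholding logic described above can be sketched roughly as follows. This is a simplified illustration, not any platform’s actual system: the scoring formula, feature names and threshold value are all hypothetical stand-ins for what would in practice be a large learned model.

```python
# Simplified sketch of engagement-threshold feed selection.
# The "engagement model" below is a toy hand-written formula;
# real platforms use large machine-learned models instead.

def predicted_engagement(post: dict) -> float:
    """Toy probability that a user will like, comment on or share a post."""
    # Hypothetical features: historical like-rate and share-rate of the post.
    return min(1.0, 0.6 * post["like_rate"] + 0.4 * post["share_rate"])

def select_for_feed(posts: list[dict], threshold: float = 0.5) -> list[dict]:
    """Keep only posts whose predicted engagement meets the threshold,
    ranked highest-engagement first."""
    eligible = [p for p in posts if predicted_engagement(p) >= threshold]
    return sorted(eligible, key=predicted_engagement, reverse=True)

posts = [
    {"id": "a", "like_rate": 0.9, "share_rate": 0.8},  # high engagement
    {"id": "b", "like_rate": 0.2, "share_rate": 0.1},  # low engagement
]
feed = select_for_feed(posts)
```

Note that nothing in this selection step inspects what the post actually says; only the predicted engagement matters, which is precisely the “ethically blind” behaviour described above.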
The family of Nohemi Gonzalez, a 23-year-old US citizen who was among at least 130 people killed in coordinated attacks by the Islamic State in Paris in November 2015, is arguing that YouTube, owned by Google, helped recruitment by terrorist organisations that led to this attack.
In Gonzalez v. Google LLC, the court will determine whether Google LLC’s YouTube LLC is liable for content the platform algorithmically recommends to users. It is the first time the US high court has agreed to hear a challenge to Section 230 of the Communications Decency Act, a landmark piece of legislation that protects internet platforms from civil and criminal liability for user-created content.
An old law in a troubled new world
But the issue is that Section 230 was passed nearly 30 years ago, when websites were young and perceived to be vulnerable. The provision ensured that the companies that hosted them would not get bogged down in lawsuits if users posted material to which others might object, such as bad restaurant reviews or complaints about neighbours. The law has been interpreted by federal courts to do two things. First, it immunises both “provider[s]” and “user[s]” of “an interactive computer service” from liability for potentially harmful posts created by other people. Second, it allows platforms to take down posts that are “obscene…excessively violent, harassing or otherwise objectionable”—even if they are constitutionally protected—without risking liability for any such content they happen to leave up.
A business model under threat
If the court forces a redrafting of Section 230, it will have a drastic effect on the automated recommendation algorithms of digital platforms, threatening their very ability to target advertising based on post engagement. Recommender algorithms drive a variety of traffic on today’s internet platforms by, for example, suggesting items to buy on e-commerce platforms, videos to watch on streaming platforms or sites to visit in search engine results. In Google’s case, a recommender algorithm drives the functionality of a Google or YouTube search by finding an optimal URL or video for users based on what is typed into the search bar.
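The query-driven ranking just described can be illustrated with a deliberately tiny sketch. The scoring here is simple term overlap between the query and page titles; real search ranking uses far richer signals (links, freshness, engagement history, user data), and the URLs and titles below are invented examples.

```python
# Toy illustration of query-based ranking: score each candidate page by
# how many query terms its title shares, then return the best matches.

def score(query: str, title: str) -> int:
    """Count query terms that also appear in the page title."""
    q_terms = set(query.lower().split())
    t_terms = set(title.lower().split())
    return len(q_terms & t_terms)

def rank(query: str, pages: list[dict], top_k: int = 2) -> list[str]:
    """Return the URLs of the top_k best-scoring pages for the query."""
    ordered = sorted(pages, key=lambda p: score(query, p["title"]), reverse=True)
    return [p["url"] for p in ordered[:top_k]]

pages = [
    {"url": "https://example.com/cats", "title": "All about cats"},
    {"url": "https://example.com/dogs", "title": "Dog training tips"},
    {"url": "https://example.com/cat-food", "title": "Best cat food guide"},
]
results = rank("cat food", pages)
```

The point of the sketch is structural: the ranking function is just an optimisation over candidates, and the lawsuit asks whether the platform bears responsibility for what that optimisation surfaces.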
There is increasing demand across the world for digital platforms to make their recommendation algorithms transparent, interpretable and explainable, but that level of transparency has commercial and competitive implications. Google has been secretive about its algorithms because there is an incentive to game them: if one understands the Google algorithm, one can push one’s content higher in the recommendations.
In April 2021, the European Commission proposed the AI Act (EC Proposal). Later, in December 2022, the Council of the EU adopted its Common Position on the proposed AI Act, marking a milestone on the EU’s path towards comprehensive regulation of AI systems. The proposed regulation follows a risk-based approach which classifies AI into three categories: (i) those that create an unacceptable risk, (ii) those that create a high risk, and (iii) those that create a low or minimal risk.