With antitrust authorities increasingly concerned about the market power of Big Tech, these giants are now employing AI models to solidify their dominance. To ensure that everyone benefits from the technology, policymakers must establish some essential ground rules.
Artificial Intelligence (AI) is advancing rapidly. Generative AI and large language models (LLMs) are being utilised to create novel services and enhance existing tasks, with the technology itself evolving at a breakneck pace. As Nobel laureate economist Michael Spence points out, this wave of adoption may bring substantial productivity gains after nearly two decades of lacklustre growth. Each day, we hear about breakthroughs like Google’s recent announcement that its AI has helped American Airlines reduce contrails by 54%, thereby lessening the environmental impact of each flight.
However, the news is not entirely positive. Currently, AI is more likely to further entrench the dominance of Big Tech companies. They possess the resources to develop and maintain the most formidable AI models, and they are rapidly integrating LLMs into their existing services. This occurs at a time when antitrust authorities worldwide are growing increasingly concerned about the market power wielded by tech companies.
Certainly, some observers, including a Google engineer in an internal memo, argue that these concerns are exaggerated, because open-source LLMs theoretically allow anyone to enter the market. Nevertheless, even with a proliferation of smaller newcomers, the dominance of Big Tech seems unshaken. A recent study comparing open-source models to the AI application programming interface (API) services that Big Tech companies offer to third parties indicates that the latter outperform the former on most measures.
While this situation may change in the future, the top LLMs currently continue to improve with increased investment, potentially reaching tipping points at which they unveil new and unexpected capabilities. Deep pockets do matter.
A challenge for policymakers
Given the sheer power of Big Tech in many countries, it is unsurprising that policymakers are grappling with the challenge of crafting forceful, effective, and coherent responses. In some jurisdictions, policymakers and industry leaders are embroiled in political standoffs. For instance, Meta (Facebook) recently blocked news links originating from Canada in response to the Canadian government’s requirement for platforms to compensate news publishers. A similar dispute took place earlier in Australia, where the government subsequently announced plans to fine online platforms for facilitating the spread of misinformation.
In the United Kingdom, the widely criticised Online Safety Bill has led some tech companies to consider exiting the market altogether. In the United States, Congress has contemplated pro-competition interventions such as the proposed Open App Markets Act, and the Biden administration’s newly activist antitrust authorities have filed various lawsuits against Google, Amazon, Meta, and Apple.
However, many policymakers have limited knowledge of AI, and their expertise tends to be narrow. Few decision-makers have a comprehensive understanding of the issue, making it difficult to formulate effective policies. Given this knowledge gap and the inherent information asymmetry between regulators and the regulated, policy responses to specific issues are likely to remain inadequate, susceptible to lobbying, or hotly contested.
So, what can be done? Perhaps the best approach is to adopt a more principles-based policy. This strategy has already gained traction in addressing issues like misinformation and trolling, where many experts and advocates argue that Big Tech companies should have a general duty of care, signifying a default orientation toward caution and harm reduction.
In some countries, similar principles already apply to news broadcasters, who are obligated to prioritise accuracy and impartiality. While enforcement in these domains can be challenging, the advantage is that a legal basis already exists for compelling less socially harmful behaviour from technology providers.
Regarding competition and market dominance, telecoms regulation offers a workable model with its principle of interoperability. Users of competing service providers can still communicate with one another because telecom companies must adhere to common technical standards and reciprocity agreements. The same principle applies to ATMs: you may incur a fee, but you can still withdraw cash from any bank’s machine.
In the realm of digital platforms, a lack of interoperability has largely been created by design as a means to lock in users and establish “moats.” This is why discussions about improving data access and ensuring access to predictable APIs have made little progress. However, there is no technical reason why some level of interoperability could not be reintroduced. After all, Big Tech companies don’t seem to face significant challenges in integrating new services when they acquire competitors.
In the case of LLMs, interoperability may not be feasible at the model level itself, as even their creators don’t fully understand their inner workings. However, it can and should be applied to interactions between LLMs and other services, such as cloud platforms.
For instance, if I subscribe to Microsoft 365, I should still have the option to use Google’s PaLM2, rather than being compelled to use Microsoft’s Copilot or GPT-4 add-on. This principle was established in the landmark 2001 US antitrust ruling against Microsoft’s bundling of Internet Explorer, and again in the 2007 European court verdict against its bundling of Windows Media Player.
A steadfast commitment to applying these principles to LLMs would go a long way in preventing further market concentration. For AI to fulfil its potential for society, it must be readily accessible to all and subject to the enhancements that arise from free and fair competition.