Governments rush to regulate AI

To reduce risks, everyone has an interest in AI research being conducted carefully, safely and with proper oversight and transparency

As adoption of artificial intelligence becomes widespread, concern about how AI makes decisions has increased all over the world. Governments are worried about its risks and are grappling with how to control its ability to do harm – before it actually does so. Big Tech companies like Alphabet and Microsoft are already lobbying to ensure that the regulations do not stifle innovation. At the same time, whistle-blowers from within these organisations are raising red flags about the dangers of AI spinning out of control.

A 2022 McKinsey survey shows that AI adoption has more than doubled over the past five years, and investment in AI is increasing apace. It’s clear that generative AI tools like ChatGPT and DALL-E (a tool for AI-generated art) have the potential to change how a range of jobs are performed. The full scope of that impact, though, is still unknown – as are the risks.

China, EU leave US behind in AI guardrails

The United States appears to be behind both allies and adversaries on AI guardrails. While officials in Washington talk about delivering user rights and urge CEOs to mitigate risks, Beijing and Brussels are actually delivering rights and mitigating risks. Generative AI’s breakthroughs have happened in the U.S., but adoption of consumer-facing AI is widespread in China. Xiaoice, Microsoft’s China-focused chatbot, has 660 million users, and Beijing is betting its faster-paced AI regulation efforts will be a driver for further uptake.

A couple of weeks ago the White House summoned the bosses of the biggest AI companies to explore the benefits and perils of the technology before outlining future guidelines. The EU and China are already well advanced in drawing up rules and regulations to govern AI. And the UK’s competition authority is to conduct a review of the AI market.

Hundreds of technologists and researchers have warned about the dangers of AI in multiple open letters, with one published in late March advocating a six-month “pause” on the development of new AI models. More recently, veteran scientist Geoffrey Hinton, often referred to as the godfather of AI, stepped down from his role at Google with a similarly dire prognosis.

EU enacting sweeping laws

Europe is struggling to agree on new rules to govern AI, revealing how much policymakers around the world still have to learn about the technology. The European Parliament is nearing a sweeping political agreement that would outline its vision for regulating AI, including an outright ban on some uses of AI, such as predictive policing, and extra transparency requirements for AI judged to be high-risk.

The EU designates certain applications of artificial intelligence as “high risk” – such as those in law enforcement, critical infrastructure, education, and employment – and companies that make and deploy those applications will be subject to more stringent compliance and testing requirements.

However, this is only the start of a long process: once Members of the European Parliament vote on the agreement later this month, it will need to be negotiated all over again with EU member states. Discussions about risk should not focus solely on existential threats to the future of humanity, because there are major issues with the way AI is being used right now. At the core of the debate about regulating AI is the question of whether it is possible to limit the risks it presents to societies without stifling the growth of a technology that many politicians expect to be the engine of the future economy.

The EU has increasingly been an early mover on efforts to regulate the internet. Its privacy law, the General Data Protection Regulation, came into force in 2018, putting limits on how companies could collect and handle people’s data. Last year, MEPs agreed on new rules designed to make the internet safer as well as more competitive. These laws often set a global standard – the so-called “Brussels effect”. As the first piece of omnibus AI legislation expected to pass into law, the AI Act will likely set the tone for global policymaking efforts surrounding artificial intelligence.

China releases first draft rules

China released its draft AI regulations in April, and Canada’s Parliament is considering its own hotly contested Artificial Intelligence and Data Act. In the US, several states are working on their own approaches to regulating AI, while discussions at the national level are gaining momentum. White House officials, including Vice President Kamala Harris, met with Big Tech CEOs in early May to discuss the potential dangers of the technology. In the coming weeks, US Senator Ron Wyden of Oregon will begin a third attempt to pass a bill called the Algorithmic Accountability Act, a law that would require testing of high-risk AI before deployment.

Algorithmic Accountability

While decision automation is widespread in industry, consumers and regulators lack insight into where these “automated critical decision processes” are being used. This makes it difficult to hold companies accountable and for consumers to make informed choices. The American public and government need more information to understand where and why automation is being used, and companies need clarity and structure to make the impact assessment process effective. The Algorithmic Accountability Act of 2022 would require companies to assess the impacts of the automated systems they use and sell, create new transparency about when and how automated systems are used, and empower consumers to make informed choices about the automation of critical decisions.

AI is a public good, given its potential to complete tasks far more efficiently than human operators: everything from diagnosing patients by analysing medical data to taking over high-risk jobs in the military or improving mining safety.

But both its benefits and dangers will affect everyone, even people who don’t personally use AI. To reduce AI’s risks, everyone has an interest in the industry’s research being conducted carefully, safely and with proper oversight and transparency. For example, misinformation and fake news already pose serious threats to democracies, but AI has the potential to exacerbate the problem by spreading “fake news” faster and more effectively than people can.


© 2024 Praxis. All rights reserved.