Big Tech worried about innovation as tough AI regulations are readied for launch

AI safety remains remarkably neglected, outpaced by the rapid rate of development. Currently, society is ill-prepared to manage the risks, says non-profit group

  • Italy’s data protection agency temporarily banned ChatGPT in March and initiated a probe into a suspected breach of privacy rules. The agency used provisions of the EU’s privacy law – the General Data Protection Regulation (GDPR) – to crack down on ChatGPT, Reuters reported.
  • Spain’s data protection watchdog announced in April it would also launch an investigation into data breaches by ChatGPT and requested that the EU’s privacy watchdog discuss concerns regarding the service.
  • The same month, France’s data protection watchdog CNIL announced that it was also investigating complaints about ChatGPT.

The tsunami of Generative AI sweeping across the world has left governments in many countries grappling with the need to regulate this technology without fully understanding what it is capable of. Non-profit group Center for AI Safety (CAIS) came out with a stunning statement: “AI risk has emerged as a global priority, ranking alongside pandemics and nuclear war. Despite its importance, AI safety remains remarkably neglected, outpaced by the rapid rate of AI development. Currently, society is ill-prepared to manage the risks from AI.” This set alarm bells ringing all over the world.

7 risks of advanced AI

CAIS outlined seven risks associated with the development and deployment of advanced AI systems:

  1. Weaponization: the use of AI systems to develop and deploy weapons, which could present an existential risk to society.
  2. Misinformation: the use of AI-generated content to spread false or misleading information, which could undermine collective decision-making and moral progress.
  3. Proxy Gaming: the use of AI systems to pursue objectives that may not align with human values, potentially resulting in harm to individuals or society.
  4. Enfeeblement: the loss of human control over important tasks as they are increasingly delegated to machines, potentially leading to economic irrelevance and reduced incentives for learning and innovation.
  5. Value Lock-in: the propagation of particular values through the development and deployment of AI systems, which could lead to oppressive or harmful systems becoming entrenched.
  6. Emergent Goals: the development of unexpected or qualitatively different behaviour or objectives as AI systems become more competent, potentially increasing the risk of losing control over such systems.
  7. Deception: the potential for AI systems to be deceptive, either intentionally or unintentionally, which could undermine human control over such systems.

EU AI Act in the final lap

The European Parliament is now fast-tracking the world’s first AI regulatory framework, known as the AI Act. The purpose of the legislation is to ensure the safe and responsible use of AI with appropriate human oversight. The introduction of new laws is a direct challenge to Silicon Valley’s tech culture, where it is assumed that the law should leave emerging technologies alone. It is crucial to recognize that the path taken by any legislator might be difficult to adjust later. OpenAI CEO Sam Altman backtracked on an earlier threat to leave the region if complying with the upcoming laws on artificial intelligence proved too hard.

South Korea is looking to address the data privacy and other risks associated with unencumbered AI use. On 11 April 2023, China’s Cyberspace Administration released a set of draft measures for consultation pertaining to generative AI services, and Singapore has also announced that a set of advisory guidelines on the use of personal data in AI systems will be forthcoming.

EU’s proposed AI Act introduces a four-tiered categorization of AI systems: unacceptable, high, low, and minimal. Unacceptable AI systems, such as those manipulating human behaviour or using “social scoring,” would be prohibited. High-risk AI systems, which have significant impacts on people’s rights and safety, would be subject to strict requirements, including transparency and human oversight. Low and minimal-risk AI, like chatbots or spam filters, would remain largely unregulated to maintain competitiveness in the EU.

To enforce the AI Act, an EU AI Office would be established. This office would monitor the progress of the legislation, provide consultation, and produce guidance on compliance. The AI Act aims to develop AI in a way that respects people, minimizes harm, ensures privacy and data protection, promotes transparency, and upholds equality and democracy. The legislation emphasizes ethical and trustworthy AI aligned with EU values.

The definition of AI in the AI Act covers machine-based systems designed to operate with varying levels of autonomy, generating predictions, recommendations, or decisions that influence physical or virtual environments. The amended proposal introduces a “best efforts” obligation for AI providers to establish a high-level framework promoting an ethical and trustworthy European approach to AI in line with EU values.

The draft proposal will now undergo a voting process in the European Parliament, expected to take place in June 2023. Once approved, negotiations between the Council of the EU, the European Parliament, and the European Commission will take place to finalize the law.

Big Tech worried about new regulations

Big Tech is cagey about the new regulations, which would have far-reaching consequences for innovation. Microsoft urged regulators to “move forward the innovation and safety standards together”. It also amped up the AI hype by lauding AI’s potential to “do good for the world” and “save people’s lives” – for instance, by detecting or curing cancer or enhancing disaster response capabilities – while conceding that safety needs focus: “we do need to be clear eyed about the risks”.

Microsoft President Brad Smith described AI regulation as the challenge of the 21st century, outlining a five-point plan for how democratic nations could address the risks of AI while promoting a liberal vision for the technology that could rival competing efforts from countries such as China.

© 2023 Praxis. All rights reserved.