
As the technology landscape evolves, experts caution against the blind implementation of AI into every product and service. Exaggerated or misleading claims about the use of AI technology pose a significant threat to the credibility of the industry.

 

“AI washing” is shaking up the technology industry as regulatory authorities, especially in the US, are coming down hard on companies making deceptive or exaggerated claims about using artificial intelligence (AI) in their products. The US Federal Trade Commission has stated that simply using an AI tool in development does not justify claiming the product is “powered by AI”.

Gary Gensler, chairman of the Securities and Exchange Commission (SEC), has noted that new technologies like AI can create hype and lead to false claims by companies. The SEC has taken enforcement actions against investment firms for engaging in “AI washing” and other unlawful practices.

As the rapid advancements in artificial intelligence (AI) continue to captivate the public imagination, a growing concern has emerged around “AI washing.” This practice, where companies make exaggerated or misleading claims about their use of AI technology, poses a significant threat to the credibility of the industry and the trust of consumers.

The recent case of Amazon’s “Just Walk Out” technology in its physical grocery stores is a prime example of AI washing in action. While the system utilizes AI-powered sensors to automate the checkout process, it was revealed that the company relied on a sizable workforce in India to manually review a significant portion of the transactions. This revelation highlighted the disconnect between the company’s claims and the actual implementation of the technology.

Far-reaching implications

“AI washing can have concerning impacts for businesses, from overpaying for technology and services to failing to meet operational objectives the AI was expected to help them achieve,” warns Dougal Dick, UK head of emerging technology risk at KPMG, in an article published on the BBC website.

The problem extends beyond individual companies, as evidenced by a 2019 study by MMC Ventures which found that 40% of new tech firms describing themselves as “AI start-ups” used little to no AI at all. The trend has only intensified in recent years, with a significant increase in the number of startups mentioning AI in their funding pitches, often without the tangible results to back up their claims.

“Some founders seem to believe that if they don’t mention AI in their pitch, this may put them at a disadvantage, regardless of the role it plays in their solution,” explains Sri Ayangar, a team member at the investment firm OpenOcean.

The implications of AI washing extend beyond the companies themselves. Investors face greater difficulty in identifying genuinely innovative firms, while consumers may be left with unmet expectations and a growing distrust in the industry’s capabilities.

Regulatory bodies have taken notice, with the U.S. Securities and Exchange Commission (SEC) recently charging two investment advisory firms for making false and misleading statements about their use of AI. In the UK, the Advertising Standards Authority (ASA) has also clamped down on deceptive AI claims in marketing communications.

As the technology landscape evolves, experts caution against the blind implementation of AI into every product and service. “AI does not grow on trees… the technology already contributes more to climate change than aviation. We have to move away from this one-sided overhyped discussion, and really think about specific tasks and sectors that AI can be beneficial for, and not just blindly implement it into everything,” warns Sandra Wachter, a professor of technology and regulation at Oxford University.

The solution to the AI washing problem lies in a more transparent and responsible approach to the development and implementation of AI technology. Companies must be held accountable for their claims, while consumers and investors must remain vigilant in separating fact from fiction. Only then can the true potential of AI be realized, without the distortion of misleading hype.

Building trust and credibility

Nevertheless, AI washing is hurting companies that genuinely use AI in their products or services. Here are some effective ways for companies to build trust and credibility around their use of AI:

  1. Transparency and Disclosure:
  • Provide clear, detailed, and easily accessible information about the AI systems being used, including their capabilities, limitations, and underlying technologies (see the sketch after this list for one way such a disclosure could be structured).
  • Disclose the data sources, training processes, and potential biases in the AI models.
  • Explain how the AI systems are monitored, tested, and updated to ensure ongoing performance and safety.
  2. Accountability and Governance:
  • Establish clear policies and procedures for the responsible development and deployment of AI.
  • Designate executive-level ownership and accountability for the company’s AI initiatives.
  • Implement robust governance frameworks to oversee the ethical and responsible use of AI.
  3. Third-Party Validation and Audits:
  • Engage reputable third-party experts to assess and validate the company’s AI claims and implementations.
  • Undergo regular independent audits to verify the accuracy and reliability of the AI systems.
  • Publish the results of these audits and assessments to demonstrate transparency.
  4. Hands-on Demonstrations and Pilot Programs:
  • Offer customers and stakeholders opportunities to directly experience and test the AI capabilities.
  • Conduct pilot programs or limited deployments to gather feedback and demonstrate the real-world performance of the AI systems.
  • Actively solicit input and incorporate user feedback to continuously improve the AI offerings.
  5. Investing in AI Talent and Expertise:
  • Assemble a strong team of AI experts, data scientists, and ethicists to drive the development and deployment of the company’s AI initiatives.
  • Showcase the company’s investment in building in-house AI capabilities and thought leadership.
  • Encourage employees to participate in industry events, conferences, and publications to share their expertise.
  6. Collaborative Partnerships and Industry Engagement:
  • Partner with recognized academic institutions, research organizations, or industry leaders to co-develop and validate the AI technologies.
  • Actively engage with industry associations, standards bodies, and regulatory agencies to contribute to the development of best practices and guidelines.
  • Demonstrate the company’s commitment to responsible AI by participating in industry initiatives and thought leadership forums.
  7. Ongoing Communication and Transparency:
  • Regularly communicate updates, progress, and learnings around the company’s AI initiatives to customers, stakeholders, and the broader public.
  • Prioritize open and honest dialogue to address concerns and resolve any issues or challenges that arise.
  • Establish dedicated channels for customers and the public to provide feedback and raise questions about the company’s AI practices.
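
To make the transparency and disclosure practices in point 1 more concrete, here is a minimal sketch of what a structured, machine-readable AI disclosure could look like, loosely inspired by the “model card” idea. The field names and example values are assumptions chosen for illustration, not a format required by the SEC, the FTC, or any other regulator.

```python
# Illustrative sketch only: the field names and example values below are
# assumptions loosely inspired by the "model card" idea, not a standard
# required by any regulator.
from dataclasses import dataclass, asdict
import json


@dataclass
class AIDisclosure:
    """A minimal, machine-readable summary of how a product actually uses AI."""
    system_name: str
    intended_use: str
    ai_components: list          # where AI is genuinely used in the product
    human_in_the_loop: str       # e.g. manual review of low-confidence cases
    training_data_sources: list
    known_limitations: list
    last_independent_audit: str  # date of the most recent third-party audit


# A hypothetical example of a candid disclosure for a cashier-less store
disclosure = AIDisclosure(
    system_name="Automated checkout",
    intended_use="Cashier-less billing in physical grocery stores",
    ai_components=["computer-vision item tracking"],
    human_in_the_loop="A share of transactions is manually reviewed by staff",
    training_data_sources=["anonymised in-store camera footage"],
    known_limitations=["accuracy drops in crowded aisles"],
    last_independent_audit="2024-01-15",
)

print(json.dumps(asdict(disclosure), indent=2))
```

Publishing a plain-language version of such a summary alongside marketing claims gives customers, investors, and auditors something concrete to check those claims against.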

By implementing these strategies, companies can build trust, credibility, and a reputation as a responsible and transparent leader in the development and deployment of AI technologies.

Acknowledgement: www.bbc.com

 
