As gen AI advances, regulators—and risk functions—rush to keep pace

Weekly Business Insights from Top Ten Business Magazines | Week 330

Extractive summaries and key takeaways from the articles curated from TOP TEN BUSINESS MAGAZINES to promote informed business decision-making | Since September 2017 | Week 330 | January 5-11, 2024

Shaping Section | 2

By Andreas Kremer | McKinsey & Company | December 21, 2023

Extractive Summary of the Article

Generative AI (gen AI), AI’s breakthrough advancement, has quickly captured the public’s interest, with ChatGPT becoming one of the fastest-growing platforms ever, reaching one million users in just five days. The acceleration comes as no surprise given the wide range of gen AI use cases, which promise increased productivity, expedited access to knowledge, and an expected total economic impact of $2.6 trillion to $4.4 trillion annually.

There is, however, an economic incentive to get AI and gen AI adoption right. Companies developing these systems may face consequences if the platforms they build are not sufficiently polished, and a misstep can be costly. Major gen AI companies, for example, have lost significant market value when their platforms were found to be hallucinating (generating false or illogical information). The proliferation of gen AI has also increased the visibility of its risks. Key gen AI concerns include how the technology’s models and systems are developed and how the technology is used.

The goal is to establish harmonized international regulatory standards that would stimulate international trade and data transfers. On one point, a consensus has already emerged: even the gen AI development community has been at the forefront of advocating for some regulatory control over the technology’s development as soon as possible. The question at hand is not whether to regulate, but how.

While no country has passed comprehensive AI or gen AI regulation to date, leading legislative efforts include those in Brazil, China, the European Union, Singapore, South Korea, and the United States. The approaches taken by the different countries vary from broad AI regulation supported by existing data protection and cybersecurity regulations to sector-specific laws and more general principles or guidelines-based approaches. Each approach has its own benefits and drawbacks, and some markets will move from principles-based guidelines to strict legislation over time.

While the approaches vary, common themes in the regulatory landscape have emerged globally: transparency; respect for human dignity and personal autonomy; awareness of responsibilities and accountability; robustness, meaning AI systems operate as expected, remain stable, and can rectify user errors; freedom from bias; privacy and data governance; and social and environmental well-being.

Organizations may be tempted to wait and see what AI regulations emerge, but the time to act is now. Those that do not act swiftly may face large legal, reputational, organizational, and financial risks, including fines from legal enforcement and financial loss from a falloff in customer or investor trust. Preventive actions by organizations can be grouped into four key areas: transparency; governance; data, model, and technology management; and individual rights.

3 key takeaways from the article

  1. The rapid advancement of generative AI (gen AI) has regulators around the world racing to understand, control, and guarantee the safety of the technology—all while preserving its potential benefits. Across industries, gen AI adoption has presented a new challenge for risk and compliance functions: how to balance use of this new technology amid an evolving—and uneven—regulatory framework.
  2. While the approaches vary, common themes in the regulatory landscape have emerged globally: transparency; respect for human dignity and personal autonomy; awareness of responsibilities and accountability; robustness, meaning AI systems operate as expected, remain stable, and can rectify user errors; freedom from bias; privacy and data governance; and social and environmental well-being.
  3. Preventive actions by organizations can be grouped into four key areas: transparency; governance; data, model, and technology management; and individual rights.

Full Article

(Copyright lies with the publisher)

Topics:  Technology, Artificial Intelligence, Regulation
