Weekly Business Insights from Top Ten Business Magazines

Week 320 | Shaping Section

Extractive summaries and key takeaways from the articles curated from TOP TEN BUSINESS MAGAZINES to promote informed business decision-making | Since September 2017 | Week 320 | October 27-November 2, 2023

Governments must not rush into policing AI

The Economist | October 26, 2023

Extractive Summary of the Article

Will artificial intelligence kill us all? Some technologists sincerely believe the answer is yes. In one nightmarish scenario, AI eventually outsmarts humanity and goes rogue, taking over computers and factories and filling the sky with killer drones. In another, large language models (LLMs) of the sort that power generative AIs like ChatGPT give bad guys the know-how to create devastating cyberweapons and deadly new pathogens.

It is time to think hard about these doomsday scenarios. Not because they have become more probable—no one knows how likely they are—but because policymakers around the world are mulling measures to guard against them. The European Union is finalising an expansive AI act; the White House is expected soon to issue an executive order aimed at LLMs; and on November 1st and 2nd the British government will convene world leaders and tech bosses for an “AI Safety Summit” to discuss the extreme risks that AI models may pose.

Governments cannot ignore a technology that could change the world profoundly, and any credible threat to humanity should be taken seriously. Regulators have been too slow in the past. Many wish they had acted faster to police social media in the 2010s, and are keen to be on the front foot this time. But there is danger, too, in acting hastily. If they go too fast, policymakers could create global rules and institutions that are aimed at the wrong problems, are ineffective against the real ones and stifle innovation.

The idea that AI could drive humanity to extinction is still entirely speculative. No one yet knows how such a threat might materialise. No common methods exist to establish what counts as risky, much less to evaluate models against a benchmark for danger. Plenty of research needs to be done before standards and rules can be set. This is why a growing number of tech executives say the world needs a body to study AI.

A rush to regulate away tail risks could distract policymakers from less apocalyptic but more pressing problems. Hasty regulation could also stifle competition and innovation. Regulators must be prepared to react quickly if needed, but should not be rushed into setting rules or building institutions that turn out to be unnecessary or harmful. Too little is known about the direction of generative AI to understand the risks associated with it, let alone manage them. The best that governments can do now is to set up the infrastructure to study the technology and its potential perils, and ensure that those working on the problem have adequate resources.

3 key takeaways from the article

  1. Will artificial intelligence kill us all? Some technologists sincerely believe the answer is yes. It is time to think hard about these doomsday scenarios. Not because they have become more probable—no one knows how likely they are—but because policymakers around the world are mulling measures to guard against them.
  2. A rush to regulate away tail risks could distract policymakers from less apocalyptic but more pressing problems. Hasty regulation could also stifle competition and innovation. Too little is known about the direction of generative AI to understand the risks associated with it, let alone manage them.
  3. The best that governments can do now is to set up the infrastructure to study the technology and its potential perils, and ensure that those working on the problem have adequate resources.

Full Article

(Copyright lies with the publisher)

Topics: Technology, Artificial Intelligence, Uncertainty
