Extractive summaries of, and key takeaways from, the articles curated from TOP TEN BUSINESS MAGAZINES to promote informed business decision-making | Week 290 | March 31-April 6, 2023
An early guide to policymaking on generative AI
By Tate Ryan-Mosley | MIT Technology Review | March 27, 2023
Though GPT-4 is the standard-bearer, it’s just one of many high-profile generative AI releases in the past few months: Google, Nvidia, Adobe, and Baidu have all announced their own projects. And though the tech is not new, its policy implications are months if not years from being understood.
Despite all the excitement, generative AI comes with significant risks. The models are trained on the toxic repository that is the internet, which means they often produce racist and sexist output. They also regularly make things up and state them with convincing confidence. That could be a nightmare from a misinformation standpoint and could make scams more persuasive and prolific. Generative AI tools are also potential threats to people’s security and privacy, and they have little regard for copyright laws. Companies using generative AI that has stolen the work of others are already being sued.
Alex Engler, a fellow in governance studies at the Brookings Institution, has considered how policymakers should think about this and sees two main types of risks: harms from malicious use and harms from commercial use. Malicious uses of the technology, like disinformation, automated hate speech, and scamming, “have a lot in common with content moderation,” he says, “and the best way to tackle these risks is likely platform governance.” Policy discussions about generative AI have so far focused on the second category: risks from commercial use of the technology, like coding or advertising. Here the US government has taken small but notable actions, primarily through the Federal Trade Commission (FTC), which issued a warning statement to companies last month urging them not to make claims about technical capabilities that they can’t substantiate, such as overstating what AI can do.
The EU, meanwhile, is staying true to its reputation as the world leader in tech policy. It has designed a set of rules that would prevent companies from releasing models into the wild without disclosing their inner workings, which is precisely what some critics accuse OpenAI of doing with the GPT-4 release.
For policy folks in Washington, Brussels, London, and offices everywhere else in the world, it’s important to understand that generative AI is here to stay.
3 key takeaways from the article
- Though generative AI is not new, its policy implications are months if not years from being understood.
- Policymakers should think about two main types of risks: harms from malicious use and harms from commercial use. For malicious uses of the technology, like disinformation, automated hate speech, and scamming, the best way to tackle these risks is likely platform governance. For risks from commercial use of the technology, like coding or advertising, the US government has so far taken small but notable actions, including an FTC warning statement urging companies not to make claims about technical capabilities that they can’t substantiate.
- For policy folks in Washington, Brussels, London, and offices everywhere else in the world, it’s important to understand that generative AI is here to stay.
Topics: Technology, Artificial Intelligence, Policy-making