Weekly Business Insights from Top Ten Business Magazines | Week 327

Extractive summaries and key takeaways from the articles curated from TOP TEN BUSINESS MAGAZINES to promote informed business decision-making | Since September 2017 | Week 327 | December 15-21, 2023

Shaping Section 3

Four trends that changed AI in 2023

By Melissa Heikkilä | MIT Technology Review | December 19, 2023

Extractive Summary of the Article

This has been one of the craziest years in AI in a long time: endless product launches, boardroom coups, intense policy debates about AI doom, and a race to find the next big thing. But we’ve also seen concrete tools and policies aimed at getting the AI sector to behave more responsibly and hold powerful players accountable. That gives us a lot of hope for the future of AI. Here’s what 2023 taught us: 

  1. Generative AI left the lab with a vengeance, but it’s not clear where it will go next.  The year started with Big Tech going all in on generative AI. The runaway success of OpenAI’s ChatGPT prompted every major tech company to release its own version. This year might go down in history as the year we saw the most AI launches.  But despite the initial hype, we haven’t seen any AI applications become an overnight success. Rather, the fundamental flaws in language models, such as the fact that they frequently make stuff up, led to some embarrassing (and, let’s be honest, hilarious) gaffes. There is now a frenetic hunt for a popular AI product that everyone will want to adopt. 
  2. We learned a lot about how language models actually work, but we still know very little.  Even though tech companies are rolling out large language models into products at a frenetic pace, there is still a lot we don’t know about how they work. Generative models can be very unpredictable, and this year there were many attempts to make them behave as their creators want them to.  We also got a better sense of AI’s true carbon footprint. 
  3. AI doomerism went mainstream.  Chatter about the possibility that AI poses an existential risk to humans became familiar this year. Hundreds of scientists, business leaders, and policymakers have spoken up.  But not everyone agrees with this idea.  Nevertheless, the increased attention on the technology’s potential to cause extreme harm has prompted many important conversations about AI policy and spurred lawmakers all over the world to take action. 
  4. The days of the AI Wild West are over.  Thanks to ChatGPT, everyone from the US Senate to the G7 was talking about AI policy and regulation this year.  One concrete policy proposal that got a lot of attention was watermarks: invisible signals in text and images that computers can detect, used to flag AI-generated content (a toy sketch of the idea follows after this list). These could be used to track plagiarism or help fight disinformation, and this year we saw research that succeeded in applying them to AI-generated text and images.  It wasn’t just lawmakers who were busy, but lawyers too. 
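
To make the watermarking idea above a little more concrete, here is a minimal Python sketch of how a statistical text watermark might be detected. It is only an illustration of the general principle, not the method from the research the article mentions; the secret key, the "green list" fraction, the whitespace tokenization, and the helper names (is_green, green_score) are all assumptions made for this toy example.

```python
# Toy sketch of a statistical text watermark (hypothetical, simplified).
# Idea: a secret key deterministically marks a "green" subset of possible
# tokens at each position; a generator that quietly favors green tokens
# leaves a statistical signal that a detector holding the same key can
# test for, without visibly changing the text.

import hashlib
import math

SECRET_KEY = "demo-key"   # assumption: shared between generator and detector
GREEN_FRACTION = 0.5      # assumed fraction of tokens marked "green" per step

def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-randomly assign `token` to the green list, seeded by the
    previous token and the secret key."""
    digest = hashlib.sha256(f"{SECRET_KEY}|{prev_token}|{token}".encode()).digest()
    return digest[0] / 255.0 < GREEN_FRACTION

def green_score(text: str) -> float:
    """Return a z-score measuring whether `text` contains more green
    tokens than chance would predict."""
    tokens = text.split()  # crude whitespace tokenization, for illustration only
    if len(tokens) < 2:
        return 0.0
    n = len(tokens) - 1
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    expected = GREEN_FRACTION * n
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / std

if __name__ == "__main__":
    # Ordinary, unwatermarked text should score near zero; text produced by a
    # generator biased toward green tokens under the same key would score
    # well above it, which is the signal a detector flags.
    print(round(green_score("the quick brown fox jumps over the lazy dog"), 2))
```

Published approaches typically operate on a model’s token probabilities rather than raw strings, but the detection step is the same kind of statistical test sketched here.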

2 key takeaways from the article

  1. This has been one of the craziest years in AI in a long time: endless product launches, boardroom coups, intense policy debates about AI doom, and a race to find the next big thing. But we’ve also seen concrete tools and policies aimed at getting the AI sector to behave more responsibly and hold powerful players accountable. That gives us a lot of hope for the future of AI. 
  2. Here’s what 2023 taught us: Generative AI left the lab with a vengeance, but it’s not clear where it will go next. We learned a lot about how language models actually work, but we still know very little. AI doomerism went mainstream – but not everyone agrees with this idea.  The days of the AI Wild West are over. 

Full Article

(Copyright lies with the publisher)

Topics:  Technology, Artificial Intelligence, Creativity
