Extractive summaries and key takeaways from the articles curated from TOP TEN BUSINESS MAGAZINES to promote informed business decision-making | Since September 2017 | Week 317 | October 6-12, 2023
Are we ready to trust AI with our bodies?
By Melissa Heikkilä | MIT Technology Review | October 10, 2023
Extractive Summary of the Article
Over the next few years, artificial intelligence is going to have a bigger and bigger effect on us and the way we live. We’re already pretty used to tracking our bodies through wearables like smart watches. Getting a pep talk from an AI avatar doesn’t feel like much of a stretch. People are also using ChatGPT to come up with workout plans.
And it’s not just AI for working out. Waitrose, a posh chain of grocery stores in the UK, used generative AI to create recipes for its range of Japanese food. Others are using it to generate books, which are flooding Amazon, including instruction manuals for mushroom foraging. For her birthday last year, a dear friend gave the author a perfume with AI-generated notes. Even the White House wants us to use AI to help with our health. This makes sense. Neural networks are excellent at analyzing data and recognizing patterns. They could help speed up diagnoses, spot things humans might have missed, or help us come up with new ideas.
But as AI enters ever more sensitive areas, we need to keep our wits about us and remember the limitations of the technology. Generative AI systems are excellent at predicting the next likely word in a sentence, but they don’t have a grasp on the wider context and meaning of what they are generating. Neural networks are competent pattern seekers and can help us make new connections between things, but they are also easy to trick and break and prone to biases.
And most important, it’s crucial to remember these systems have no knowledge of what exercise feels like, what food tastes like, or what we mean by “high quality.” AI workout programs might come up with dull, robotic exercises. AI recipe makers tend to suggest combinations that taste horrible, or are even poisonous. Mushroom foraging books are likely riddled with incorrect information about which varieties are toxic and which are not, which could have catastrophic consequences.
Humans also have a tendency to place too much trust in computers. It’s only a matter of time before “death by GPS” is replaced by “death by AI-generated mushroom foraging book.” Including labels on AI-generated content is a good place to start. In this new age of AI-powered products, it will be more important than ever for the wider population to understand how these powerful systems work and don’t work. And to take what they say with a pinch of salt.
3 key takeaways from the article
- Over the next few years, artificial intelligence is going to have a bigger and bigger effect on us and the way we live. People are using ChatGPT to come up with workout plans, create recipes, generate books, and compose perfume notes, and even the White House wants us to use AI to help with our health. This makes sense.
- But as AI enters ever more sensitive areas, we need to keep our wits about us and remember the limitations of the technology. Generative AI systems are excellent at predicting the next likely word in a sentence, but they don’t have a grasp on the wider context and meaning of what they are generating.
- In this new age of AI-powered products, it will be more important than ever for the wider population to understand how these powerful systems work and don’t work. And to take what they say with a pinch of salt.
(Copyright lies with the publisher)
Topics: Technology, Artificial Intelligence, Humans