Extractive summaries and key takeaways from the articles curated from TOP TEN BUSINESS MAGAZINES to promote informed business decision-making | Since September 2017 | Week 316 | September 29-October 5, 2023
Why Big Tech’s bet on AI assistants is so risky
By Melissa Heikkilä | MIT Technology Review | October 3, 2023
Extractive Summary of the Article | Listen
Since the beginning of the generative AI boom, tech companies have been feverishly trying to come up with the killer app for the technology. First it was online search, with mixed results. Now it’s AI assistants. Last week, OpenAI, Meta, and Google launched new features for their AI chatbots that allow them to search the web and act as a sort of personal assistant.
OpenAI unveiled new ChatGPT features that include the ability to have a conversation with the chatbot as if you were making a call, allowing you to instantly get responses to your spoken questions in a lifelike synthetic voice. OpenAI also revealed that ChatGPT will be able to search the web.
Google’s rival bot, Bard, is plugged into most of the company’s ecosystem, including Gmail, Docs, YouTube, and Maps. The idea is that people will be able to use the chatbot to ask questions about their own content—for example, by getting it to search through their emails or organize their calendar. Bard will also be able to instantly retrieve information from Google Search. In a similar vein, Meta too announced that it is throwing AI chatbots at everything.
This is a risky bet, given the limitations of the technology. Tech companies have not solved some of the persistent problems with AI language models, such as their propensity to make things up or “hallucinate.” But what should concerns the most is that they are a security and privacy disaster, as the author wrote earlier this year. Tech companies are putting this deeply flawed tech in the hands of millions of people and allowing AI models access to sensitive information such as their emails, calendars, and private messages. In doing so, they are making us all vulnerable to scams, phishing, and hacks on a massive scale.
Now that AI assistants have access to personal information and can simultaneously browse the web, they are particularly prone to a type of attack called indirect prompt injection. It’s ridiculously easy to execute, and there is no known fix. With this new generation of AI models plugged into social media and emails, the opportunities for hackers are endless.
3 key takeaways from the article
- Since the beginning of the generative AI boom, tech companies have been feverishly trying to come up with the killer app for the technology. Now it’s AI assistants. Last week, OpenAI, Meta, and Google launched new features for their AI chatbots that allow them to search the web and act as a sort of personal assistant.
- This is a risky bet, given the limitations of the technology. Tech companies have not solved some of the persistent problems with AI language models, such as their propensity to make things up or “hallucinate.” But what should concerns the most is that they are a security and privacy disaster.
- Tech companies are putting this deeply flawed tech in the hands of millions of people and allowing AI models access to sensitive information such as their emails, calendars, and private messages. In doing so, they are making us all vulnerable to scams, phishing, and hacks on a massive scale.
(Copyright lies with the publisher)
Topics: Technology, Artificial Intelligence, Hacking
Leave a Reply
You must be logged in to post a comment.