Informed i’s Weekly Business Insights
Extractive summaries and key takeaways from the articles carefully curated from TOP TEN BUSINESS MAGAZINES to promote informed business decision-making | Since 2017 | Week 382 | January 03-09, 2025 | Archive
How to trust a GenAI agent: four key requirements
By Shiven Ramji | Fortune Magazine | January 7, 2025
Extractive Summary of the Article
3 key takeaways from the article
- When ChatGPT debuted in late 2022, it took the world by storm. Two years later, the era of GenAI agents has arrived. These supportive sidekicks are increasingly capable of performing tasks and making decisions autonomously. Nearly every week, a new agent seems to hit the market. Predictions suggest these agents will see rapid adoption, likely making up a third of all GenAI interactions by 2028. All of this raises the question: how can we ensure they are secure?
- Some of the vulnerabilities can be addressed with a thoughtful, identity-based approach. There are four key considerations for secure AI integration: user authentication, secure APIs, async authentication, and access controls.
- To fully take advantage of GenAI's potential, organizations must securely integrate GenAI into their applications and keep all four of these requirements in mind. However, finding ways to protect against these unique risks shouldn't get in the way of innovation or of deploying GenAI agents even faster.
(Copyright lies with the publisher)
Topics: Human & Technology, Artificial Intelligence, AI Agents
When ChatGPT debuted in late 2022, it took the world by storm. Users were stunned by the app's ability to give helpful, human-like responses to their input. Overnight, generative AI (GenAI) went mainstream. Two years later, the era of GenAI agents has arrived. These supportive sidekicks are increasingly capable of performing tasks and making decisions autonomously. Nearly every week, a new agent seems to hit the market, with recent debuts from Microsoft, Salesforce, ServiceNow, and Priceline. Gartner says these agents should see rapid adoption, likely making up a third of all GenAI interactions by 2028. All of this raises the question: how can we ensure they are secure?
Some of the vulnerabilities can be addressed with a thoughtful, identity-based approach. Here are four key considerations for secure AI integration:
- User authentication. Before an agent can display a user’s chat history or customize its replies based on their age, it needs to know who they are. This will require some form of identification, which can be done with secure authentication.
- Secure APIs. AI agents need to interact with other applications via APIs to take actions on a user’s behalf. As GenAI apps integrate with more products, calling APIs on behalf of end users — and doing so securely — will become critical.
- Async authentication. To complete complex tasks or wait for certain conditions to be met (like booking airfare only when it drops below $200), AI agents need extra time. That means running in the background for minutes, hours, or even days, with humans acting as supervisors who approve or reject actions only when notified by the chatbot, helping prevent excessive agency.
- Access controls. Most GenAI apps use a process called Retrieval Augmented Generation (RAG) to enhance the output of LLMs with knowledge from external resources, such as company databases or APIs. To avoid sensitive information disclosure, the retrieved content should only be data that the user can access. With proper authorization and access controls, you can prevent users from getting and sharing data they shouldn’t.
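The access-control requirement above can be sketched in a few lines. This is a minimal illustration, not a production design: it assumes a hypothetical in-memory document store and role sets, where a real system would enforce permissions in a vector database and an authorization service. The key point it demonstrates is that authorization is checked before retrieved content ever reaches the LLM prompt.

```python
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    allowed_roles: frozenset  # roles permitted to read this document

@dataclass
class User:
    name: str
    roles: frozenset

def retrieve_for_user(user: User, docs: list, query: str) -> list:
    """Return only documents the user is authorized to read.

    The permission check happens *before* content is handed to the
    LLM, so the model cannot disclose data the user couldn't access."""
    visible = [d for d in docs if user.roles & d.allowed_roles]
    # A naive substring match stands in for vector similarity search.
    return [d for d in visible if query.lower() in d.text.lower()]

docs = [
    Document("Q3 salaries spreadsheet", frozenset({"hr"})),
    Document("Public Q3 earnings report", frozenset({"hr", "employee"})),
]

alice = User("alice", frozenset({"employee"}))
print([d.text for d in retrieve_for_user(alice, docs, "q3")])
# Alice sees only the public report, not the HR-restricted spreadsheet.
```

Filtering at retrieval time (rather than asking the model to withhold information) is the design choice that matters: prompt-level restrictions can be bypassed, while data that never enters the context window cannot leak.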
To fully take advantage of GenAI's potential, organizations must securely integrate GenAI into their applications and keep all four of these requirements in mind. However, finding ways to protect against these unique risks shouldn't get in the way of innovation or of deploying GenAI agents even faster.
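The async-authentication pattern described earlier (an agent waiting for a condition, then pausing for human approval before acting) can also be sketched. This is a toy model under stated assumptions: the price feed, threshold, and approval channel are all hypothetical stand-ins for a real notification system and long-running job runner.

```python
import queue
import threading

approvals = queue.Queue()  # stands in for a human notification/approval channel

def request_approval(message: str) -> bool:
    """Notify the supervising human and block until they respond."""
    print("NOTIFY:", message)
    return approvals.get(timeout=5)

def agent_task(price_feed, threshold, on_approved):
    """Background agent: watches prices, but never acts without approval."""
    for price in price_feed:          # e.g. a stream of airfare quotes
        if price < threshold:
            # Condition met: pause and ask the human instead of booking.
            if request_approval(f"Book flight at ${price}?"):
                on_approved(price)
            return

booked = []
t = threading.Thread(target=agent_task,
                     args=([250, 230, 195], 200, booked.append))
t.start()
approvals.put(True)   # the human approves via the notification channel
t.join()
print(booked)         # the agent booked only the approved $195 fare
```

The approval gate is what limits excessive agency: the agent can run unattended for as long as it needs, but the irreversible action is deferred to a human decision.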