Informed i’s Weekly Business Insights
FREE weekly newsletter | sharing knowledge briefs from TOP TEN BUSINESS MAGAZINES, to keep you ‘relevant’ | Since 2017 | Week 449 | April 17-23, 2026 | Archive

How LLMs could supercharge mass surveillance in the US
By Grace Huckins | MIT Technology Review | April 21, 2026
Extractive Summary of the Article
3 key takeaways from the article
- There are pieces of your life scattered all over the internet, and some of them are for sale. Data brokers amass web searches, financial records, and location data from millions of individuals and sell them to various clients, including the US government. Information on your recent online purchases or the route that you take to work could be sitting on hard drives around the world, waiting to be used.
- While reassembling those pieces isn’t trivial, there is early evidence that LLMs might make it far easier. LLM agents could potentially do the work of intelligence analysts in a fraction of the time and for a fraction of the cost, which would enable the state to aim its all-seeing eye toward anyone, not just its highest-priority targets.
- Worries over how LLMs could facilitate mass surveillance recently made headlines around the world when contract negotiations between Anthropic and the US Department of Defense fell apart in late February: Anthropic balked when the DOD demanded leeway to use the company’s models to analyze commercially available data on US citizens. There is plenty of precedent for AI being used for mass surveillance, most notably governments worldwide using facial recognition to track citizens and noncitizens alike. And government surveillance is not the only concern: private companies could just as easily purchase bulk data and analyze it with LLM agents, and they face fewer legal constraints and less public opposition, especially if they aren’t household names.
(Copyright lies with the publisher)
Topics: AI & Surveillance, Anthropic, OpenAI
There are pieces of your life scattered all over the internet, and some of them are for sale. Data brokers amass web searches, financial records, and location data from millions of individuals and sell them to various clients, including the US government. Information on your recent online purchases or the route that you take to work could be sitting on hard drives around the world, waiting to be used.
While reassembling those pieces isn’t trivial, there is early evidence that LLMs might make it far easier. LLM agents could potentially do the work of intelligence analysts in a fraction of the time and for a fraction of the cost, which would enable the state to aim its all-seeing eye toward anyone, not just its highest-priority targets.
Worries over how LLMs could facilitate mass surveillance recently made headlines around the world. According to reporting from the New York Times and the Atlantic, contract negotiations between Anthropic and the US Department of Defense fell apart in late February because Anthropic balked when the DOD demanded leeway to use the company’s models to analyze commercially available data on US citizens. When Anthropic’s rival OpenAI agreed to a DOD deal mere hours later, OpenAI faced an immediate wave of public backlash for apparently swanning past Anthropic’s red lines. Under pressure, OpenAI and the DOD later revised the contract terms.
For avid followers of Anthropic CEO Dario Amodei, the company’s firm stance probably didn’t come as a surprise. In a lengthy essay published to his personal website in January, Amodei had argued that AI-enabled mass surveillance could constitute a crime against humanity. The core concern underlying his dispute with the DOD was that the government might use LLM-based systems such as Claude to analyze reams of data obtained from brokers and build detailed profiles of individual Americans at scale.
There’s plenty of precedent for AI being used for mass surveillance: Most notably, governments worldwide use facial recognition to track citizens and noncitizens alike, and recent reporting indicates that US Immigration and Customs Enforcement (ICE) agents have leaned heavily on facial recognition apps to carry out the Trump administration’s mass deportation campaign. While there’s not yet any smoking-gun evidence that the US government (or anyone else) is using LLMs to conduct surveillance in the way that Amodei warns about, there’s a clear appetite for such capabilities.
Few organizations would voluntarily make their own procedures less efficient, but Congress could force the government down that path. Shortly after the Anthropic debacle, a bipartisan group of senators and representatives introduced a bill that would require the government to obtain a warrant before purchasing data from data brokers. Public outcry, too, seems to have had an effect: After OpenAI was overwhelmed by opprobrium for accepting DOD contract terms that Anthropic had rejected, the company and the Pentagon modified the contract to include additional surveillance protections.
But government surveillance is not the only concern. Private companies could just as easily purchase bulk data and analyze it with LLM agents, and they are less subject to legal constraints and public opposition, especially if they aren’t household names.
In the absence of legislation preventing such uses, we might need to rethink how we understand our own privacy. It has always been possible that someone online might unearth your address or connect you with your pseudonymous accounts, but given the effort that would take, it was easy to feel safe. Even in the wake of Edward Snowden’s 2013 revelations about the National Security Agency’s extensive surveillance of US citizens, many people reassured themselves that their privacy was still intact because the government had no reason to look into their lives.
That kind of privacy depends entirely on friction: the time and effort required to link a secret social media account with its real-life owner, or the skill and resources needed to analyze bulk datasets. Stay under the radar, and no one will care enough to overcome that friction. But LLM agents could lessen that effort, or remove it entirely. If the government and other organizations can construct detailed profiles of millions of people at the drop of a hat, no one is beneath their notice.