Weekly Business Insights from Top Ten Business Magazines
Extractive summaries and key takeaways from the articles curated from TOP TEN BUSINESS MAGAZINES to promote informed business decision-making | Since September 2017 | Week 350 | May 24-30, 2024
Five ways criminals are using AI
By Melissa Heikkilä | MIT Technology Review | May 21, 2024
Extractive Summary of the Article
Artificial intelligence has brought a big boost in productivity—to the criminal underworld. Generative AI provides a new, powerful tool kit that allows malicious actors to work far more efficiently and internationally than ever before, says Vincenzo Ciancaglini, a senior threat researcher at the security company Trend Micro.
Most criminals are “not living in some dark lair and plotting things,” says Ciancaglini. “Most of them are regular folks that carry on regular activities that require productivity as well.”
Cybercriminals have mostly stopped developing their own AI models. Instead, they are opting for tricks with existing tools that work reliably. That’s because criminals want an easy life and quick gains, Ciancaglini explains. For any new technology to be worth the unknown risks associated with adopting it—for example, a higher risk of getting caught—it has to be better and bring higher rewards than what they’re currently using. Here are five ways criminals are using AI now.
- Phishing. The biggest use case for generative AI among criminals right now is phishing, which involves trying to trick people into revealing sensitive information that can be used for malicious purposes. Thanks to better AI translation, different criminal groups around the world can also communicate with one another more easily. The risk is that they could coordinate large-scale operations that span beyond their home countries and target victims abroad.
- Deepfake audio scams. Generative AI has allowed deepfake development to take a big leap forward, with synthetic images, videos, and audio looking and sounding more realistic than ever. This has not gone unnoticed by the criminal underworld.
- Bypassing identity checks. Another way criminals are using deepfakes is to bypass “know your customer” verification systems. Banks and cryptocurrency exchanges use these systems to verify that their customers are real people. They require new users to take a photo of themselves holding a physical identification document in front of a camera. But criminals have started selling apps on platforms such as Telegram that allow people to get around the requirement.
- Jailbreak-as-a-service. If you ask most AI systems how to make a bomb, you won’t get a useful response. That’s because AI companies have put in place various safeguards to prevent their models from spewing harmful or dangerous information. Instead of building their own AI models without these safeguards, which is expensive, time-consuming, and difficult, cybercriminals have begun to embrace a new trend: jailbreak-as-a-service. Most models come with rules around how they can be used. Jailbreaking allows users to manipulate the AI system to generate outputs that violate those policies—for example, to write code for ransomware or generate text that could be used in scam emails.
- Doxxing and surveillance. AI language models are a perfect tool not only for phishing but also for doxxing (revealing private, identifying information about someone online), says Balunović. This is because AI language models are trained on vast amounts of internet data, including personal data, and can deduce, for example, where someone might be located.
3 key takeaways from the article
- Artificial intelligence has brought a big boost in productivity—to the criminal underworld. Generative AI provides a new, powerful tool kit that allows malicious actors to work far more efficiently and internationally than ever before.
- Most criminals are not living in some dark lair and plotting things. Most of them are regular folks that carry on regular activities that require productivity as well. Cybercriminals have mostly stopped developing their own AI models. Instead, they are opting for tricks with existing tools that work reliably.
- Five ways criminals are using AI now: phishing, deepfake audio scams, bypassing identity checks, jailbreak-as-a-service, and doxxing and surveillance.
(Copyright lies with the publisher)
Topics: Technology, Artificial Intelligence, Cyber Crimes, Phishing, Deepfake, Identity checks, Jailbreak-as-a-service, Doxxing, Surveillance