How Ferrari Hit the Brakes on a Deepfake CEO

Informed i’s Weekly Business Insights

Extractive summaries and key takeaways from the articles carefully curated from TOP TEN BUSINESS MAGAZINES to promote informed business decision-making | Since 2017 | Week 385 | January 24-30, 2025

By Sandra Galletti and Massimo Pani | MIT Sloan Management Review | January 27, 2025

Extractive Summary of the Article

3 key takeaways from the article

  1. In July 2024, an executive at luxury sports car manufacturer Ferrari received several messages that appeared to have been sent by CEO Benedetto Vigna on the messaging and calling platform WhatsApp. The messages, which originated from an unfamiliar number, mentioned an impending significant acquisition, urged the executive to sign a nondisclosure agreement immediately, and claimed that Italy’s market regulator and the Italian stock exchange had already been informed about the transaction.
  2. The attempt was later discovered to be a scam. The Ferrari deepfake incident highlights the evolving sophistication of cyberthreats and the growing trend of using deepfake technology to impersonate corporate leaders.
  3. As the threat of deepfake scams grows, executives should prioritize the following actions to protect their organizations: emphasize vigilance, enact strong verification protocols, promote digital literacy and AI awareness, incorporate cognitive bias awareness, enhance communications security, implement a multilayered security approach, and continually improve fraud detection systems.

Full Article

(Copyright lies with the publisher)

Topics: Leadership, Cybercrimes, Deepfake, Artificial Intelligence

In July 2024, an executive at luxury sports car manufacturer Ferrari received several messages that appeared to have been sent by CEO Benedetto Vigna on the messaging and calling platform WhatsApp. The messages, which originated from an unfamiliar number, mentioned an impending significant acquisition, urged the executive to sign a nondisclosure agreement immediately, and claimed that Italy’s market regulator and the Italian stock exchange had already been informed about the transaction.

Despite the convincing nature of the messages, which also included a profile picture of Vigna standing in front of the Ferrari logo, the executive grew suspicious. Although the voice mimicked Vigna’s Southern Italian accent, the executive noticed slight inconsistencies in tone during a follow-up call in which he was again urged to assist with the confidential and urgent financial transaction.

Sensing that something was amiss, the executive asked the caller a question that only Vigna would know the answer to — the title of a book Vigna had recommended days earlier. Unable to answer the question, the scammer abruptly ended the call. The executive’s simple test prevented what could have been a major financial loss and reputational damage for Ferrari.
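The executive's test amounts to a simple challenge-response check: before acting on an urgent request, verify the counterparty with a question whose answer only the real person would know. A minimal sketch of that idea follows; the challenge question, the stored answer, and the function names are all hypothetical illustrations, not Ferrari's actual procedure.

```python
# Hypothetical sketch of an out-of-band challenge-question check, in the
# spirit of the test the Ferrari executive used. All names and data here
# are illustrative assumptions, not the company's real process.

import hmac

# Context known only to the two real parties, e.g. a detail from a recent
# private conversation such as a recommended book title (illustrative).
KNOWN_ANSWERS = {
    "title of the book you recommended last week": "decisions and dissent",
}

def normalize(text: str) -> str:
    """Reduce a spoken/typed answer to a case- and whitespace-insensitive form."""
    return " ".join(text.lower().split())

def verify_caller(challenge: str, response: str) -> bool:
    """Return True only if the response matches the privately shared answer.

    hmac.compare_digest performs a constant-time comparison; overkill for a
    phone call, but idiomatic whenever code compares a secret value.
    """
    expected = KNOWN_ANSWERS.get(challenge)
    if expected is None:
        return False
    return hmac.compare_digest(normalize(expected), normalize(response))

# A deepfaked caller cannot answer the private challenge:
assert verify_caller("title of the book you recommended last week",
                     "The Art of War") is False
# The genuine counterparty can:
assert verify_caller("title of the book you recommended last week",
                     "Decisions and Dissent") is True
```

The design point is that the secret lives outside the compromised channel: no amount of voice cloning or message spoofing helps an attacker who lacks the shared private context.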

The attempt to exploit Ferrari is an example of a deepfake — a highly realistic video, image, text, or voice that has been fully or partially generated using artificial intelligence algorithms, machine learning techniques, and generative adversarial networks, or GANs.

Financial losses attributed to AI are expected to rise: Deloitte’s Center for Financial Services predicts that fraud enabled by generative AI could reach $40 billion in losses in the United States by 2027, up from $12.3 billion in 2023. Given how realistic many deepfakes appear and the ease with which scammers can produce them, organizations must increase employee awareness and take proactive measures to protect against this emerging threat.

As the threat of deepfake scams grows, executives should prioritize the following actions to protect their organizations: emphasize vigilance, enact strong verification protocols, promote digital literacy and AI awareness, incorporate cognitive bias awareness, enhance communications security, implement a multilayered security approach, and continually improve fraud detection systems.
