AI Cyberattacks on HNWI: Prime Targets
Author: Samuel Watterson
As artificial intelligence becomes more accessible, cybercriminals are rapidly leveraging it to launch increasingly sophisticated attacks, especially against high-value individuals and their mobile devices. In 2025, the convergence of advanced AI tools and a growing illegal market for personal data has created a perfect storm: one where mobile phones, often the most vulnerable and least protected endpoint, become primary targets.
High net worth individuals (HNWIs) are especially exposed to identity theft, financial fraud and data exploitation tailored by AI-enhanced tactics. This blog explores the evolving landscape of AI-enhanced cybercrime targeting HNWIs: the specific threats to mobile devices, real-world AI-augmented attack techniques, and actionable strategies to defend against this escalating threat in 2025 and beyond.
A Double-Edged Sword
Large Language Models (LLMs), and other machine learning models, are transforming the ways we live and do business. They are also powerful tools that cyber criminals are leveraging to scale, personalise and automate their attacks with alarming effectiveness.
In early 2025, research by Cisco revealed that underground ‘jailbroken’ or custom LLMs such as WormGPT, FraudGPT and DarkGPT are being used to generate malware, craft hyper-personalised phishing scripts and automate social engineering at scale. LLMs can now run end-to-end spear-phishing campaigns: identifying targets, scraping public information, generating bespoke lures, dispatching them and iterating based on click-through data.
Cybercriminals are using the technology to rapidly create both audio and video deepfakes at scale, impersonating trusted individuals such as executives, public figures, family members, financial advisors or business associates. Over 28% of UK adults believe they were targeted by an AI voice scam in 2024, with criminals requiring as little as three seconds of audio to clone a voice.
Recent high profile deepfake cases include:
An "unknown actor" is alleged to have used an artificially generated voice of Marco Rubio to contact officials via the Signal messaging app, reaching at least five individuals, including foreign ministers, a US governor and a member of Congress.
A sophisticated 2024 deepfake video attack in which an Arup employee joined what they believed was a video call with senior executives and was duped into transferring £20m to five local bank accounts across 15 transactions.
A deepfake scam impersonating a top City analyst. This convincing attack featured an AI-cloned voice and AI-generated likeness of City analyst Michael Hewson, which urged viewers to invest in a fraudulent trading platform.
Spear Phishing at Scale
The first phase of every cyberattack is reconnaissance, with attackers searching for vulnerabilities, assets and weak links to target. AI tools can rapidly analyse social media, corporate websites, public records and data breaches to build detailed profiles of high value individuals.
This intelligence, including personal habits, relationships and routines, is then used to craft highly personalised messages designed to deceive the target.
Generative AI enables cybercriminals to efficiently craft convincing, hyper-personalised phishing messages that mimic writing styles and reference relevant information. These messages are far more difficult to detect because they lack the translation, spelling and grammatical mistakes that often reveal traditional phishing attacks. In fact, AI agents can now out-phish elite human red teams at scale, with AI-generated simulated phishing campaigns proving 24% more effective than those crafted by their human counterparts.
Why HNWI are Prime Targets
High net worth individuals are attractive targets for cybercriminals due to the sheer value of the information they possess. In addition to substantial financial assets, they often have access to confidential business data, legal documents, investment strategies, and other sensitive communications. Their networks frequently include other influential individuals, making them gateways to broader high-value targets. While many HNWIs maintain a deliberately low digital profile, the data they do share, intentionally or otherwise, can be utilised to craft highly personalised and convincing attacks.
The cybersecurity posture of those within their immediate circle, such as family members, household staff and personal assistants, is also often less robust, providing attackers with multiple entry points. Cybercriminals exploit these peripheral vulnerabilities through social engineering, phishing and AI-powered attacks that mimic trusted contacts or communication patterns.
Mitigating the Risk
Protecting yourself and those around you from sophisticated cyber threats requires a proactive and layered approach. From strengthening digital hygiene to limiting exploitable information online, each measure plays a critical role in minimising risk.
Cyber Security Awareness
In addition to your own awareness, ensuring that your staff, family members and close associates understand emerging cyber threats is a crucial component of a proactive security strategy. This includes developing the ability to recognise subtle cues, such as tone, language patterns and inconsistencies, which may help distinguish a genuine phone call from an AI-generated voice clone impersonating a loved one.
Establish Secure Communication Protocols
Implement secure identity-verification processes to protect against impersonation and fraud. Consider creating a confidential code word or phrase, shared exclusively among trusted family members and staff, to confirm identity during phone or digital communications.
In the absence of such a measure, always verify a caller’s legitimacy by ending the call and independently sourcing and dialling the official contact number of the bank or organisation they claim to represent. Never transfer funds, share sensitive information, or provide assets, such as gift cards or cryptocurrency, to individuals you have not met in person or whose identity cannot be independently confirmed.
Minimise Online Exposure
Controlling the amount of personal information shared online - across social media, corporate websites, and other digital platforms - helps reduce the likelihood of becoming a target for AI-enabled impersonation and social engineering attacks. Limit the publication of content featuring your image or voice, keep social media accounts private, and allow access only to individuals you know and trust.
Secure Your Accounts
Passwords alone are increasingly vulnerable. In fact, 81% of hacking-related breaches stem from weak or stolen credentials. Implementing multi-factor authentication (MFA) significantly enhances security by requiring an additional verification step: something only the legitimate user possesses. Even if a cybercriminal gains access to your password, MFA can prevent unauthorised access in over 99.9% of attempted account compromises.
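For the technically curious, the rotating six-digit codes produced by authenticator apps are not proprietary magic: most follow the open TOTP standard (RFC 6238), which derives a short-lived code from a shared secret and the current time. The sketch below is illustrative only (the example secret is the published RFC test value; real apps rely on vetted libraries):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestamp=None, digits=6, period=30):
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32.upper())
    # Number of complete time steps elapsed since the Unix epoch
    counter = int((time.time() if timestamp is None else timestamp) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): low nibble of the last byte picks the offset
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" (base32 below), T = 59s
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", timestamp=59))  # -> "287082"
```

Because the code changes every 30 seconds and is derived from a secret that never travels over the network, a stolen password alone is not enough to log in.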
In a world where AI is accelerating the scale and precision of cyberattacks, staying ahead requires not just awareness, but a proactive and tailored defence strategy.
Want to assess your household cyber risk exposure? Get in touch with coc00n today to explore a bespoke, discreet, and high-impact approach to securing your family’s digital life - across mobile devices, networks, and the people you trust most.
About the author
Samuel Watterson is a coc00n Cyber Security Advisor with a legal background in commercial litigation. With sharp analytical skills and the ability to distil complex issues into clear, actionable advice, Samuel helps individuals and organisations strengthen their cyber security posture.