AI-generated deepfakes, voice clones, and hyper-personalised phishing are bypassing every technical safeguard you have. The only defence left is a trained human mind.
Criminals no longer need technical sophistication. With consumer-grade AI tools, they can impersonate your CEO on a video call, clone a colleague's voice from a podcast, or craft a phishing email indistinguishable from the real thing.
Real-time face and voice manipulation lets attackers impersonate executives on live video conferences and talk employees into authorising fraudulent transfers.
A few seconds of publicly available audio is enough to clone someone's voice with near-perfect accuracy, enabling convincing phone-based fraud.
Generative AI creates perfectly written, hyper-personalised phishing emails in seconds — with dramatically higher success rates than human-crafted attacks.
A structured approach that moves from awareness to action to ongoing vigilance — building genuine psychological self-defence across your workforce.
A high-impact presentation featuring live demonstrations of AI-generated impersonation. Audiences see a convincing deepfake of a trusted figure — then experience the reveal. This is the moment that creates urgency.
Hands-on workshops exploring how the human mind processes information, creates trust, and becomes vulnerable to manipulation under pressure.
Controlled, bespoke deepfake attacks deployed across the organisation over the following months. Those who are caught out receive targeted refresher training.
eLearning can teach facts. It cannot change behaviour under pressure. That requires a very different kind of training.
Psychological self-defence isn't a concept you learn — it's a reflex you build. Live facilitation creates the emotional context needed for real behavioural change. eLearning creates awareness. We create resilience.
When a team goes through a deepfake reveal together, it becomes a shared reference point — a moment they talk about. That cultural memory is what keeps people vigilant long after the training ends.
We remember what shocks us. A live demonstration where your own colleague — or your own voice — appears as a deepfake creates the kind of visceral understanding that no video module ever could.
A skilled facilitator reads the room, adapts to your industry's specific threat profile, and answers the questions your people are actually asking. No algorithm does that.
Role-play scenarios, live simulations, and group challenges let people practise the “pause and verify” habit in a safe environment — so it becomes instinctive when it matters most.
Our simulated deepfake testing phase provides hard data on vulnerability before and after training — giving your board concrete evidence that the investment is working.
The uncomfortable truth about AI-generated attacks is that intelligence is no protection. Fraudsters don't exploit ignorance — they exploit the very cognitive shortcuts that make high-performing people effective.
Understanding your own psychological architecture is the first step to defending it.
When a request appears to come from the CEO, the brain downregulates scepticism. AI attackers exploit this by creating flawless impersonations of senior leadership, deliberately triggering deference before rational thought kicks in.
Phrases like “this must be done today” or “don't discuss this with anyone yet” are designed to bypass deliberate thinking. Urgency narrows attention, suppresses doubt, and dramatically increases compliance.
Attackers study their targets for weeks before striking — referencing real projects, known colleagues, and recent events. Familiarity creates comfort. Comfort suspends verification. That's the attack vector.
Once we believe something is legitimate — a credible email, a familiar face on a call — we unconsciously filter out signals that contradict it. Our training builds the habit of actively seeking disconfirming evidence.
The goal isn't to make your people paranoid. It's to give them the intellectual humility to pause, the emotional presence to notice when something feels off, and the confidence to verify — even when under pressure from apparent authority.
Traditional cybersecurity training tells people what to watch for. But AI-generated attacks don't look like attacks. They look like Tuesday.
Our programme is built on a different principle: training the human mind to stay emotionally present under pressure, to question authority with intellectual humility, and to build verification reflexes that become second nature.
Because when urgency, authority, and realism combine, technical awareness isn't enough. You need psychological resilience.
In testing, people identify deepfake videos with just 24.5% accuracy, worse than a coin flip. The answer isn't better eyes. It's better thinking.
“The idea is to keep people emotionally present, taking responsibility with intellectual humility and being nimble enough to avoid these attacks.”
Nick Smallman — Founder, Working Voices
In January 2024, an employee at engineering firm Arup joined what appeared to be a routine video call with senior colleagues. Every face, every voice was AI-generated.
Publicly available LinkedIn profiles, conference videos, podcast appearances, and company announcements are scraped to build detailed profiles of key executives. This is spear-phishing at its most targeted.
The target receives credible emails that appear to come from senior leadership, establishing context for an upcoming “confidential” financial discussion. Nothing seems unusual.
The employee joins a video call where the CFO and multiple colleagues appear to be present. Real-time face and voice manipulation makes the impersonation convincing. Transfers totalling £20 million are authorised.
The money moved across 15 transactions before the fraud was discovered. The employee had followed what appeared to be legitimate instructions from trusted superiors.
Particularly relevant for defence, finance, professional services, and any organisation where a single fraudulent authorisation could cause catastrophic loss.
Where information security is paramount and state-sponsored actors use increasingly sophisticated AI tools for social engineering.
$2.77 billion in Business Email Compromise losses reported to the FBI in 2024 alone. Finance teams are the primary target for AI-enabled fraud.
Any organisation where executives are public-facing, decisions move fast, and a single compromised employee can authorise significant transactions.
Book the Accelerator Talk for your leadership team and see how AI-generated attacks could target your people — before a real attacker does.
Get in Touch → robert@workingvoices.com