Deepfake video calls, voice cloned payment requests, AI written phishing emails. This free module teaches your team the one rule that stops every attack before any money leaves. Enter your work email and it opens immediately.
No password. No account. Share this module with your whole team today.
A Taranaki woman lost $224,000 to a scheme that used a deepfake video of Prime Minister Christopher Luxon. It looked real. It sounded real. New Zealand's Financial Markets Authority has also flagged deepfake impersonations of Winston Peters, Kiwibank CEO Steve Jurkovich, and Westpac CEO Catherine McGrath. All were used to push fraudulent investment platforms carrying the logos of trusted NZ media outlets like RNZ and the NZ Herald.
This is not a future threat. It is happening right now, in Aotearoa, targeting real people and real organisations.
For years, spotting a scam was relatively straightforward. Poorly written emails. Obvious fake sender addresses. Requests that felt off. Your instincts were enough.
That world no longer exists.
AI has not invented new types of fraud. It has made existing fraud dramatically more convincing, faster to produce, and cheaper to run. The scams targeting NZ organisations in 2026 are built with tools attackers can access for free, produce in seconds, and refine endlessly until they work.
You need to know what these actually look like. Not in theory. In practice, the way they show up in your inbox, your phone, and your video calls.
Human beings are remarkably bad at detecting high quality deepfakes. Research shows that human detection accuracy for high quality video deepfakes is only 24.5%. That is worse than a coin flip.
Voice cloning is even harder. Our brains are wired to trust familiar voices. When you hear what sounds like your CEO or your manager, you do not naturally question whether it is real. That is precisely what the attackers count on.
Your instincts were built for a world where hearing someone's voice meant it was them. That world ended when AI voice cloning became available for free online. The only defence is procedural, not perceptual. You cannot train your ear to spot a good deepfake. You can build a process that makes the deepfake irrelevant.
Here is how this plays out in a real NZ workplace. Read it carefully because this is not hypothetical. Variations of this scenario have happened at NZ organisations.
9.47am. Sarah is the finance coordinator at a Christchurch construction firm with around 80 staff. She is processing invoices when her phone rings. The number shows as the CEO's mobile. She answers.
The voice is unmistakably her CEO, David. Same tone, same way he says her name, same slight Canterbury accent. He sounds slightly stressed. He explains there is a supplier payment that needs to go out today or the firm loses a critical materials order for a major project. It is $18,500. He needs it processed in the next 20 minutes before he goes into a client meeting. He says not to mention it to anyone else yet because the contract negotiation is still being finalised.
Sarah feels the pressure immediately. The urgency. The secrecy. The trust she has in David's voice. Every instinct says this is real.
Then she remembers the red flags. Urgency. Secrecy. Money. All three together. She tells the caller she will call him back in five minutes to confirm the details. The caller says there is no time and pushes back.
That pushback is the signal. A legitimate CEO would say of course, call me back. Sarah hangs up, looks up David's number in her internal directory, and calls him directly. David has no idea what she is talking about. He is in a meeting and has not called anyone.
The voice had been cloned from a business interview David gave to a Canterbury trade publication three months earlier. The attackers needed less than 60 seconds of that audio to build a convincing replica, and the entire call cost them almost nothing to produce.
Sarah saved her organisation $18,500, and potentially much more, by doing one thing: she paused, used a known number, and verified through a separate channel. That is all it took.
Even though AI makes scams more convincing, the old patterns still show up: urgency, secrecy, and a request involving money or sensitive information. These are not foolproof detectors, but any one of them should trigger your verification protocol immediately.
The rule is simple: before acting on any request involving money, credentials, or sensitive data, verify it through a separate channel using contact details you already have. This single rule makes voice cloning, deepfake video calls, fake emails, and AI phishing irrelevant. It does not matter how convincing the fake is. If you verify through a separate channel before acting, the fraud fails.
If your organisation receives or falls victim to an AI assisted scam, report it immediately. Do not wait and do not be embarrassed. These attacks are sophisticated and reporting helps protect other NZ organisations.
CERT NZ: cert.govt.nz or 0800 CERTNZ. Report cyber security incidents including phishing and fraud. CERT NZ is the primary point of contact for cyber incidents affecting NZ businesses.
NZ Police: Report financial fraud at police.govt.nz or call 105. For anything involving immediate financial loss, report immediately so potential account freezing can happen fast.
Financial Markets Authority: fma.govt.nz for investment scams and anything involving fake financial products or fake endorsements by NZ public figures.
Netsafe: netsafe.org.nz or 0508 NETSAFE for general online safety advice and support after an incident.
You do not need a large budget or a dedicated IT team to meaningfully reduce your organisation's exposure. These three actions cost nothing and take less than an hour.
This is Module 8. Modules 1 through 7 cover what AI actually is, which tools to use for which tasks, how to use AI for writing and admin, and the NZ Privacy Act in plain language. Modules 9 through 12 cover copyright, AI and your career, building team AI habits, and what becomes possible when AI is connected directly to your organisation's workflows. The full programme is $3,000 per year for NFPs and charities, or $5,000 per year for businesses and councils. One flat fee covers your whole organisation.
Print this, stick it up near the people who handle payments and approvals, and share it with your whole team. It costs nothing and it works.
You now understand the verification protocol that stops most AI enabled fraud attacks. Most NZ organisations have never formally trained their staff on this. You have. That matters.
People who fall for these scams are not careless. They are responding under pressure using the information available to them, and the attackers rely on that. What you have just learned is not obvious. It runs against instincts that have served people well for decades. The fact that you now know to pause and verify is genuinely protective. Share this module with your team today.
You now know how to spot AI fraud. But research shows most NZ organisations are already committing Privacy Act breaches with free AI tools right now, without realising it. Module 7 covers exactly what data must never go into ChatGPT or Gemini, in plain language, with real NZ examples. It takes 20 minutes. The legal exposure it closes is significant.
Or email Lee directly at lee@purelayer.co.nz to ask about pricing for your organisation.
Share this free module with your team