In cybersecurity circles, it's easy to focus on the dramatic.
A rogue employee steals sensitive data. A hacker worms their way past hardened defences. These are compelling stories—but they are also distracting. In reality, the majority of cyber incidents aren’t caused by malicious insiders or sophisticated adversaries. They are caused by ordinary staff members making entirely preventable mistakes.
More often than not, the culprits are not villains but victims of circumstance.
Staff who were under-trained, over-trusted, or simply unaware. Whether it’s an accidental email sent to the wrong recipient, a misconfigured cloud storage bucket, or a well-intentioned click on a phishing link, the damage is real. According to the Verizon 2024 Data Breach Investigations Report, more than two-thirds of breaches involved the human element. That should change how we think about cyber risk.
Organisations invest millions in technology.
Firewalls, encryption, endpoint protection, and AI-driven threat detection. Yet the same organisations often overlook the most vulnerable attack surface: the people who use these systems. It's not that staff are unintelligent. It's that they haven't been trained to identify subtle risks in an increasingly deceptive threat landscape. The result is a steady drip of errors, any one of which has the potential to escalate into a crisis.
Consider, for example, the now-notorious 2021 Waikato DHB ransomware attack, which crippled hospital services across the Waikato region. While malicious software played a central role, the broader issue was systemic: legacy systems, under-prioritised upgrades, and a lack of security training and awareness created an environment ripe for compromise. The attack wasn't caused by deliberate sabotage. It was enabled by organisational neglect.
This is a pattern we see repeatedly.
Threat actors are opportunistic. They rely on someone, somewhere, to make a mistake. One weak password. One overlooked software patch. One link clicked without thinking. Each mistake is a crack in the dam. And collectively, they represent a significant threat to operational resilience.
What’s the solution?
Technology alone won’t suffice. A secure system, poorly operated, is still insecure. This is where a mature staff training strategy becomes not just beneficial, but vital. Training must go beyond basic annual compliance videos. It must be immersive, continuous, and adaptive to the specific threats an organisation faces.
Effective training changes behaviour.
It builds muscle memory around caution. It embeds good judgment into daily workflows. This is not just about teaching employees to identify phishing emails; it’s about creating a culture where security is instinctive, not reactive. In other words, organisations must invest in turning their staff from the weakest link into the first line of defence.
That transformation starts at the top.
Executive leadership must model the value placed on security. When training is viewed as a tick-box exercise or delegated without follow-through, staff take note. Conversely, when leadership visibly supports and participates in security awareness initiatives, it legitimises the effort across the organisation. Culture follows example.
Equally important is understanding that training is not one-size-fits-all. Front-line staff need different guidance than system administrators or executives. Cybersecurity awareness must be contextual and relevant. A staff member in finance might be trained to identify invoice fraud, while someone in HR needs to understand risks around data privacy.
Further, training should include realistic simulations. Controlled phishing exercises, role-specific scenarios, and immediate feedback loops are proven methods to drive learning and retention. Mistakes should be treated as learning opportunities, not as failures. The aim is not to punish but to prepare.
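To make the idea of an immediate, non-punitive feedback loop concrete, here is a minimal sketch of how a phishing simulation's results might trigger a short, role-specific refresher rather than a reprimand. The campaign data, module names, and the send_training_nudge helper are purely illustrative assumptions, not any particular platform's API.

```python
"""Minimal sketch of a phishing-simulation feedback loop (illustrative only)."""

from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class SimulationResult:
    """Outcome of one simulated phishing email sent to one staff member."""
    recipient: str
    role: str                     # e.g. "finance", "hr", "sysadmin"
    clicked: bool
    reported: bool
    timestamp: datetime = field(default_factory=datetime.now)


# Role-specific micro-training modules (hypothetical names).
FOLLOW_UP_MODULE = {
    "finance": "spotting-invoice-fraud",
    "hr": "handling-personal-data-requests",
    "sysadmin": "credential-phishing-and-mfa",
}


def send_training_nudge(recipient: str, module: str) -> None:
    """Placeholder for whatever delivery channel the organisation uses."""
    print(f"[nudge] {recipient}: please complete '{module}'")


def process_results(results: list[SimulationResult]) -> None:
    """Immediate feedback: a click triggers a short, role-relevant refresher;
    a report earns positive reinforcement. Nobody is punished."""
    for r in results:
        if r.clicked:
            module = FOLLOW_UP_MODULE.get(r.role, "phishing-basics")
            send_training_nudge(r.recipient, module)
        elif r.reported:
            print(f"[thanks] {r.recipient}: reported the simulation - well done")


if __name__ == "__main__":
    demo = [
        SimulationResult("a.jones@example.org", "finance", clicked=True, reported=False),
        SimulationResult("b.smith@example.org", "hr", clicked=False, reported=True),
    ]
    process_results(demo)
```

The design point is the tight loop: the lesson arrives minutes after the mistake, tailored to the person's role, which is what drives retention.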
Of course, training alone cannot compensate for poor system design or lack of technical controls. Security must be baked into infrastructure. Strong identity management, least-privilege access, and monitoring tools can support staff in making safer decisions. But even the best-designed systems will fail if misused. Which brings us back to people.
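As a rough illustration of what least-privilege access means in practice, the sketch below shows a default-deny permission check: anything not explicitly granted is refused. The roles, action names, and is_allowed helper are hypothetical, not a specific product's policy format.

```python
"""Minimal sketch of a default-deny, least-privilege access check (illustrative only)."""

# Each role is granted only the narrow set of actions it needs.
ROLE_PERMISSIONS: dict[str, set[str]] = {
    "finance": {"invoices:read", "invoices:approve"},
    "hr": {"employee-records:read"},
    "sysadmin": {"servers:patch", "servers:restart"},
}


def is_allowed(role: str, action: str) -> bool:
    """Deny by default: an action is permitted only if explicitly granted."""
    return action in ROLE_PERMISSIONS.get(role, set())


if __name__ == "__main__":
    # A finance user can approve invoices...
    assert is_allowed("finance", "invoices:approve")
    # ...but cannot touch server infrastructure, and unknown roles get nothing.
    assert not is_allowed("finance", "servers:restart")
    assert not is_allowed("contractor", "invoices:read")
    print("least-privilege checks passed")
```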
Ignorance can be addressed.
Stupidity, in the form of habitual negligence, can be mitigated through accountability and reinforcement. Malice, while comparatively rare, can be contained through controls and auditing. But only if the organisation accepts that these human-centred risks are as real as any zero-day exploit or ransomware strain.
In our work with organisations navigating legacy system upgrades, CyberForensics has found that the most effective security investments are those that combine infrastructure modernisation with human-centred risk strategies. Virtualisation, for instance, can extend the safe use of outdated but critical applications—but only when paired with secure access protocols and staff who understand the stakes.
Ultimately, risk management is not just a technical problem.
It’s a human one. Every employee represents a potential point of failure, or a potential line of defence. The difference is training. The difference is culture. And the difference is whether we treat cybersecurity as something for IT to handle, or something that belongs to everyone.
As regulatory landscapes evolve and public scrutiny increases, organisations must ensure they are not caught off guard by the most predictable threat of all: their own people, left unprepared.
Ignorance is not a defence.
But it is a risk that we can manage—through training, culture, and leadership that puts people at the centre of the security conversation.