Fortura | Industries

Education & Research

In education and research, cyber attacks don’t just lock up systems. They interrupt learning and leak the ideas you haven’t published yet.

Schools, universities, TAFEs, training providers and research institutes across Australia and New Zealand have become fully digital operations: learning platforms, online assessments, research data lakes, student information systems, cloud HR and finance, and a sprawling ecosystem of EdTech and collaboration tools.

Attackers see a sector that:

  • Holds identity data on millions of students, staff and alumni
  • Manages valuable intellectual property and sensitive research
  • Operates with tight budgets and diverse, decentralised IT
  • Has to keep the doors open, even when under attack

In 2025, IBM found the global average cost of a data breach across sectors was USD 4.44 million—the first decline in five years—driven largely by faster detection and AI-assisted defence in better-resourced organisations.

Comparitech research shows 251 ransomware attacks on schools, colleges and universities worldwide in 2025—similar in number to 2024—but records known to be breached jumped to almost 3.96 million, up 27% on the previous year.

Major higher-education incidents like the 2025 University of Phoenix breach affecting an estimated 3.5 million people after a zero-day exploit show how a single event can dominate headlines and erode trust in digital learning.

Fortura exists so that your institution doesn’t become the next case study.

Our Focus

Who We Work With

This page is for education and research organisations across ANZ and globally, including:
  • Universities and research-intensive institutions
  • TAFEs and vocational training organisations
  • K–12 and independent schools
  • Research agencies and labs
  • Online and hybrid education providers
  • EdTech and learning-platform companies that serve them

If you issue qualifications, run research, manage grants or host learning online, this is your threat model.

Rising Expectations

The New Operating Reality for Education & Research

Education and research leaders are being pulled in three directions at once:
01

Always-on digital learning

Learning management systems, online assessments, video platforms and student portals must work 24×7, across devices and geographies.

02

Cloud-first research and collaboration

High-value research now lives in shared drives, cloud storage, Git repositories, SaaS tools and collaboration platforms, often across multiple jurisdictions.

03

AI everywhere

Students use AI to learn, write and code. Staff use it to draft materials, mark work and analyse data. Researchers build models and agents as part of their projects.

Meanwhile:

  • Global ransomware data shows education remains one of the most targeted sectors, with 251 attacks in 2025 alone.
  • Ransomware attacks against education plateaued in volume but grew in impact, with more records exposed per incident and attackers experimenting with data-leak and extortion-only models.
  • At the same time, providers are getting better: in 2025, Sophos found that the percentage of attacks stopped before data was encrypted jumped from 14% to 67% in lower education and from 21% to 38% in higher education, suggesting defences are improving but from a low base.
The reality in 2026 is not that education is defenceless. It’s that attackers are treating your sector as a long-term campaign, not a one-off opportunity.

Inside The Attack

How Attacks Really Happen in Education & Research

Most attacks start with everyday behaviour, not exotic exploits:

Initial access

A lecturer reuses the same password for a personal app and institutional SSO; a research group leaves a cloud bucket world-readable; a school administrator clicks an AI-written “payroll update” phish.

01

Identity and access compromise

Attackers steal credentials via phishing, info-stealers or reuse from previous breaches. IBM’s 2025 report describes attackers “logging in rather than hacking in” as one of the defining features of modern breaches.

02

Pivot into core platforms

Student information systems, HR/payroll, research data stores, VDI environments, email and collaboration tools.

03

Target high-value data and systems

  • Student and staff identity data
  • Research data and IP (especially in STEM, health, defence-adjacent and commercial partnerships)
  • Financial systems tied to tuition, grants and payroll

04

Monetise or leverage

  • Ransomware across storage and VMs
  • Data extortion and leaks (particularly impactful where sensitive research or personal data is involved)
  • Long-term persistence to steal research over time

Because universities and schools tend to be open, federated environments, compromise of a single account can give attackers leverage across faculties, campuses and even partner institutions.

Emerging Risk

The AI Shift: Cheating Tool or Attack Surface?

AI is transforming education and research—but not always safely.

How it’s being used legitimately

  • Students using generative AI to draft essays, summarise content and debug code
  • Academics and administrators using AI for marking assistance, content generation and policy drafting
  • Researchers building or consuming models as part of projects
  • Institutions piloting AI chatbots and copilots for student support and IT help

Where it’s creating risk

  • IBM highlights an “AI oversight gap”: shadow AI adds around USD 670,000 to the average breach cost; 97% of organisations that reported breaches of AI systems lacked proper access controls.
  • Staff pasting exam content, student records or draft papers into public AI tools
  • Research code and data sent to external APIs without clear data-handling terms
  • AI copilots in IDEs and office tools exposing snippets of proprietary material

How attackers use the same technology

  • AI-written phishing that mimics institutional language and branding
  • Deepfake voices impersonating senior staff on calls and voice messages
  • Automated discovery of exposed research stores and misconfigured cloud resources

If your AI policy fits on a slide and your AI logs fit on nothing, you don’t have an AI strategy. You have an AI risk.

Fortura’s work with education and research organisations assumes AI is here to stay. The question is how to govern it, not how to stop it.

Compliance

Frameworks, Compliance and the “Trust Contract”

Unlike heavily regulated sectors, education and research often operate under a patchwork of obligations and expectations:

Privacy & data protection

  • Australian Privacy Act and Notifiable Data Breaches scheme
  • NZ Privacy Act
  • GDPR and other regimes for international students and collaborations

Research and funding

  • Security expectations from government funders and industry partners
  • Export controls and national-security restrictions for certain research areas
  • Ethics and data-sharing requirements for human and health research

Security frameworks

  • NIST Cybersecurity Framework (NIST CSF)
  • ISO 27001 (and sometimes 27701, 27017/27018)
  • ACSC Essential Eight for baseline uplift in ANZ
  • OWASP and cloud security best practice for EdTech platforms

From Fortura’s point of view, these frameworks only matter when they change what actually happens in labs, lecture theatres, admin offices and cloud environments.

Costs of Compromise

The Cost Side: Beyond Ransom and Remediation

For education and research, the cost of a breach goes far beyond the immediate bill.

Operational disruption

Cancelled classes, delayed exams, disconnected remote learning, inaccessible resources.

Research impact

Destroyed or tainted data sets, delayed publications, damaged collaborations.

Reputation and recruitment

Fewer international students choosing your institution, partners re-evaluating collaborations, staff and students losing trust in digital services.

Regulatory and funding risk

Investigations, additional reporting requirements, and funders questioning your ability to safeguard sensitive projects.

For schools and smaller providers, a single significant incident can absorb years of “modernisation” budget. For universities and research agencies, it can derail strategic projects and damage institutional brand in key international markets.

The real question for education leaders isn’t “Can we afford cyber?” It’s “Can we afford to be seen as careless with our students’ and researchers’ future?”

Horizon

The Next 3–5 Years for Education & Research Cyber

Looking ahead, several forces will shape how institutions defend teaching and research:
2027

Ransomware and extortion stay structural

Attack volume may plateau, but impact per incident and data-leak/extortion-only models will keep pressure on institutions with valuable IP and personal data.

2028

AI governance becomes non-negotiable

Leaders will be expected to show how AI is governed in teaching, marking, research code and third-party tools—not just that pilots exist.

2029

Cross-border data and collaboration scrutiny

International partnerships, cloud regions and student mobility will increase privacy and security reviews on how and where data is stored and processed.

2030

Measurable uplift becomes the baseline

Early detection and containment metrics (like Sophos’s “stopped before encryption” trends) will become board-level indicators, not SOC-only statistics.

Fortura helps institutions prioritise identity, backups, segmentation, incident readiness and AI governance in an order that fits academic realities—not generic corporate playbooks.

FAQ

Education & Research

What is Fortura?

Fortura is a cybersecurity company delivering intelligence-led services today and building security platforms for the future.

Why are education and research organisations such attractive targets?

Because you hold large volumes of personal data and valuable research, and often operate with looser controls and more diverse systems than corporates. Attackers know disruption hits students and research timelines directly, which increases leverage.

Is this only a problem for large universities?

No. Schools and vocational providers are heavily targeted because they hold identity data on minors and often have weaker controls. The goal isn’t perfection—it’s prioritising high-impact controls within your budget.

How do engagements typically start?

Most start with a combined risk & exposure assessment (aligned to NIST CSF / Essential Eight), plus an attack surface and AI/shadow-AI review across key platforms. From there we co-design a roadmap that IT, security, research leadership and executives can all sign up to.

How is AI changing the threat picture?

AI changes both sides: better detection and automation for defenders, but more convincing phishing, deepfakes and automated reconnaissance for attackers. The biggest risk right now is shadow AI—unapproved tools handling sensitive data without proper controls.

How do you work with academics and researchers?

We involve research and teaching staff early—mapping sensitive data flows, understanding how labs and collaborations actually run, and designing controls that respect those realities. Security that doesn’t work for researchers will be worked around.
Work with us

Fortura Supports You Across Every Phase of Your Security Lifecycle

No Sales Scripts. We'll Talk Through Your Situation.

If you're shaping strategy, assessing risk, or preparing for what's next, we'll help you get clear on priorities and act with confidence. Tell us what you're working through, and we'll respond quickly.

Response Time: Within 24 hours
Office Location: Sydney City/Parramatta/Remote

By submitting this form, I understand my personal data will be processed in accordance with Fortura's Privacy Statement and Terms of Use.

Get Insights & Alerts

Get the latest news, research notes, practical guidance, and threat updates written for people making security decisions.