
Understanding what happened, and more importantly what it means for the rest of us, requires moving past the geopolitical headlines and looking closely at three interlocking problems: the catastrophic insecurity of civilian IoT and OT infrastructure; the weaponisation of artificial intelligence for lethal targeting; and the ethical abyss that opens up when these two technologies converge on a battlefield that no one voted to build.
The operation that led to the killing of Iran’s supreme leader Ali Khamenei on February 28, 2026, was months, possibly years, in the making. Reporting and a detailed analysis by Check Point Research indicate that Israeli intelligence, working in coordination with the CIA, had achieved something that sounds like the plot of a techno-thriller but is, in fact, a textbook application of well-understood offensive cyber techniques: they had gained access to nearly all of Tehran’s traffic camera network, and had been quietly mirroring its video feed to servers thousands of kilometres away for an extended period before any missile was fired.
The way they did it matters enormously. It was not accomplished through some classified exploit known only to the world’s most elite signals intelligence units. Rather, it succeeded largely because the systems protecting civilian urban infrastructure are, to put it plainly, terrible.
IoT devices, the broad category that includes everything from smart streetlights to networked CCTV cameras, share a set of structural weaknesses that the security industry has been warning about for well over a decade. Default credentials that are never changed by the administrators who install them. Firmware that ships with known vulnerabilities and receives patches that are never applied, either because the update process is cumbersome or because the organisation responsible simply does not have the resources or the culture to maintain it. Flat network architectures where compromising one device opens the door to an entire connected system. And, critically, the fact that these devices often connect directly to centralised data aggregation servers, which become enormously high-value targets once an attacker has a foothold anywhere in the network.
Check Point’s research, published in early March 2026, documented hundreds of attempts by groups linked to Iranian intelligence to hijack consumer-grade cameras across the Middle East, exploiting five distinct vulnerabilities in Hikvision and Dahua devices. None of those vulnerabilities were sophisticated. All of them had been patched in previous software updates. Some had been publicly known since 2017. The attacks worked because the cameras had never been updated, because their owners had no idea they were exposed, and because the manufacturers had long since moved on to selling the next model. As Sergey Shykevich, head of threat intelligence at Check Point, put it: “Now hacking cameras has become part of the playbook of military activity. You get direct visibility without using any expensive military means such as satellites, often with better resolution.”
The asymmetry here is striking. Deploying a reconnaissance satellite costs hundreds of millions of dollars and requires years of development. Hijacking a traffic camera installed to catch illegal U-turns costs essentially nothing, and the target’s own government paid for the installation. “The adversary’s already done the work for you,” observes Peter W. Singer, a military researcher at the New America Foundation. “They’ve placed cameras all around a city.” And unlike drones, which are detectable and only viable when air defences are minimal, a hacked camera is invisible. It sits in its mounting bracket, does exactly what it was always doing, and silently forwards a copy of everything it sees to whoever is watching.
The OT dimension of this is equally important and even less discussed. Operational technology, the industrial control systems that manage physical infrastructure like power grids, water treatment plants, traffic management systems and telecommunications networks, has been converging with standard IT networks for years, driven by efficiency and the promise of remote management. That convergence has created environments where a software vulnerability in a vendor’s update server can translate directly into physical consequences: a factory line that stops, a power grid that goes dark, a cellular tower that goes offline on demand. The reported jamming and selective shutdown of mobile towers around Khamenei’s compound in the minutes before the strike was not a physically destructive act in itself. It was a cyber-physical operation, one that used access to network infrastructure to create a window of silence in which a kinetic attack could land without warning. The OT layer of a city, the systems that make it actually function in the physical world, had become part of the kill chain.
Having access to a city’s camera network is one thing. Doing something useful with thousands of hours of low-resolution traffic footage from a metropolis of eight million people is an entirely different challenge. The human analysts who traditionally work in signals intelligence and surveillance are extraordinarily skilled, but they are not capable of processing that volume of data in anything approaching real time. The solution, as has become standard across modern intelligence operations, is machine learning.
The Israeli Unit 8200, the country’s signals intelligence corps, is widely regarded as one of the most technically advanced organisations of its kind in the world. The systems its analysts reportedly deployed against Tehran represent a particular application of AI that has become central to modern targeting operations: pattern-of-life analysis. The concept is straightforward even if the implementation is extraordinarily complex. Rather than attempting to locate a specific individual directly, you build a model of their environment. You track the people around them, the vehicles they use, the times at which specific activities occur. You cross-reference video data with cellular metadata, GPS signals, and human intelligence from sources on the ground. Over time, the model learns what normal looks like, and it becomes sensitive to anomalies.
In the case of the operation against Khamenei, reporting suggests that analysts were not primarily tracking the supreme leader himself. They were watching the parking patterns of his security detail on the streets near his compound in Tehran’s Pasteur Street area. A camera that had been installed to monitor traffic flow and issue fines to drivers had, over months of observation, produced a detailed map of the habits of the people whose job was to keep Khamenei alive. When three extra convoys appeared forty-five minutes earlier than usual, and when the cellular signals of senior commanders clustered in the same cell tower coverage area, the algorithm produced a high-confidence prediction that a significant meeting was taking place. That prediction, confirmed by a CIA source on the ground, was sufficient to authorise the strike.
This is what the automation of targeting actually looks like. It is not a robot deciding who to kill. It is a system that processes an ocean of data and surfaces patterns that would be invisible to human observation alone, compressing the time between identification and action to a window measured in minutes rather than hours or days. The speed matters, because a target that remains in a confirmed location for sixty seconds is a viable target in a way that one whose movements are uncertain is not.
The AI involved here is not especially exotic. The underlying techniques, convolutional neural networks for video analysis, graph analysis for social network mapping of personnel, anomaly detection on time-series data, are taught in graduate computer science programmes and have been in industrial use for years. What makes them dangerous in a military context is not their novelty but their scale and their integration. When you combine real-time access to hundreds of cameras with the ability to cross-correlate that data against intercepted communications metadata and ground-level intelligence, you get a system that can maintain persistent surveillance over an entire city and flag targets of interest with a reliability that manual analysis could never match.
Beau Woods, a security researcher and former adviser to the US Cybersecurity and Infrastructure Security Agency, has identified a structural problem that makes the civilian dimension of this particularly troubling. “The manufacturer of the device and the owner of the device are not the victim,” he notes. “So the victim isn’t in a position to control the tool that’s used by the adversary.” The chain of consequence runs from a camera installer who never changed the default password, through a vendor who stopped pushing firmware updates, to an intelligence analyst in a foreign country who is now watching the street outside a world leader’s residence. No single actor in that chain intended to build a targeting system. Together, they did.
What makes the Iran operation unprecedented is not any single technique. Camera hacking, cellular network interference, cyber-attacks on radar and air-defence systems, all of these have precedents. What is new is the degree to which they were integrated into a single, coordinated operation in which the digital and physical components were inseparable.
A Unit 42 threat brief and statements from US defence officials described how the operation had been enabled by months and in some cases years of preparatory work, building what was called the “target set.” The cyber components were described as the “first movers,” disrupting Iran’s ability to see, communicate, and respond before any aircraft entered Iranian airspace. US defence officials said members of the Iranian military were unable to coordinate and respond effectively.
Ukraine has provided a prolonged real-world laboratory for these same techniques. Russian forces hacked security cameras in Kyiv to observe Ukrainian air defences and infrastructure targets, prompting Ukrainian intelligence to attempt to disable tens of thousands of internet-connected cameras that could be exploited for reconnaissance. Ukrainian hackers, for their part, reportedly hijacked Russian cameras to monitor troop movements and the transport of military equipment across the Kerch Bridge. The tactic has become so normalised that security researchers tracking the Iran conflict were unsurprised to find it in use on both sides almost immediately after hostilities escalated.
The feedback loops that emerge from this kind of warfare are dizzying. After Khamenei’s death, Iran imposed a severe internet blackout, cutting off its population from communications at precisely the moment when people most needed to reach their families and understand what was happening around them. According to human rights organisations, the blackout contributed to a civilian death toll that exceeded 1,100 in its first days. Simultaneously, reports indicated Iranian state-linked hacking groups used commercial satellite terminals to maintain communications and continue launching cyberattacks against Western targets, circumventing the very blackout their government had imposed on ordinary citizens. The regime was using network disruption as a weapon against its own population while using commercial satellite networks to keep some offensive cyber operations running. The absurdity of that situation tells you something important about how contemporary warfare actually distributes its costs.
Tal Kollender, a former Israeli military cyber-defence specialist and founder of the cybersecurity platform Remedio, offered a formulation that cuts through a lot of the noise: “Cyber isn’t usually the decisive weapon on its own; it’s a force multiplier that helps shape the information environment and supports operations happening on the ground.” That is accurate, but it perhaps understates how thoroughly cyber operations have been woven into the fabric of conventional military action. The strike on Tehran’s compound did not happen despite the cyber dimension. It happened because of it.
The arguments made by proponents of precision, data-driven military operations are not stupid arguments. They deserve to be engaged seriously before being challenged. The central claim is this: modern targeting technology, when it works as intended, can reduce civilian casualties dramatically compared to the area bombardments that characterised twentieth-century warfare. If intelligence systems can confirm with high confidence that a specific person is in a specific building at a specific time, the alternative to a precision strike is not no strike. It is a wider strike, or a ground invasion, or sustained aerial bombardment. The comparison is not between AI-assisted targeting and a peaceful world. It is between AI-assisted targeting and the methods that flattened Dresden, or levelled Fallujah.
This argument has genuine force. But it carries hidden assumptions that erode quickly under examination.
The first assumption is that the algorithm is accurate. Pattern-of-life analysis is probabilistic, not deterministic. The systems described as producing “99% confidence” assessments are, in practice, systems trained on data that reflects the biases and gaps of the collection process. An anomaly in parking patterns that suggests a high-level meeting might just as plausibly reflect a wedding, a family medical emergency, or a shift change. The consequences of misidentification in a military context are permanent and irreversible. Mariarosaria Taddeo, professor of digital ethics and cyber security at the Oxford Internet Institute, has written extensively about precisely this risk in her book The Ethics of Artificial Intelligence in Defence: AI systems in military contexts create what she describes as a moral distance between action and consequence, a distance that makes it easier to act and harder to bear responsibility for the outcome.
The second assumption is that the capability is stable, that it belongs to responsible actors, and that it will remain so. The techniques used against Tehran are not secret. They are documented, taught, and reproducible by any state, and an increasing number of non-state actors, with sufficient technical resources. The security vulnerabilities in Hikvision and Dahua cameras that Check Point documented in March 2026 were not created for military use. They exist because nobody fixed them. The same cameras are installed in cities across Europe, North America and Asia. The same cellular infrastructure is in use everywhere. The same AI models for pattern-of-life analysis are commercially available and being integrated into urban management systems as legitimate smart-city technology. The capability gap between a superpower intelligence operation and a well-funded terrorist organisation, or an authoritarian government targeting its own dissidents, is not as large as we might hope.
The third assumption, perhaps the most dangerous, is that the ethical and legal framework governing these operations keeps pace with their technical development. It clearly does not. Dr Louise Marie Hurel from the Royal United Services Institute has made the argument that this war represents an opportunity, not a comfortable one, but a real one, for a more public debate about cyber operations as integral components of military campaigns. “If cyber is openly acknowledged as integral to the strike package,” she has written, “it can help sharpen the questions about the laws of armed conflict, proportionality, and what counts as a use of force.” Right now, those questions are being answered by the people carrying out the operations, which is not how the laws of war are supposed to work.
There is also a question that goes beyond the battlefield entirely. Every smart city initiative, every IoT deployment, every decision to connect physical infrastructure to an internet-accessible network creates an attack surface that outlives its original purpose. The traffic cameras in Tehran were installed for traffic management. The cellular towers were built to provide communications. The sensor networks that feed urban analytics platforms were deployed to improve the quality of urban life. None of their designers imagined them as components of a targeting system. But that is what they became, not because of some extraordinary act of malice, but because they were connected, because they were insecure, and because someone patient enough to wait had access to them.
The question for every city that is currently rolling out smart infrastructure, which is to say essentially every city on earth, is whether that deployment is accompanied by a security posture commensurate with the risk. The answer, nearly everywhere, is no. The vendor landscape for IoT devices is fragmented, poorly regulated, and dominated by cost pressures that consistently deprioritise security. Software update cycles are inconsistent. Network segmentation is rare. Incident response planning for smart-city components barely exists. The gap between the sophistication of the attacks that are now demonstrably possible and the defences that most urban infrastructure operators have in place is not a gap. It is a canyon.
There is a final dimension to all of this that tends to get lost in the operational detail, and it is the one that matters most for ordinary people who are not military planners, intelligence analysts, or policymakers. The infrastructure that was weaponised against Tehran is functionally identical to the infrastructure that surrounds us. The cameras above our intersections. The cellular networks our phones connect to. The sensors in the bus we take to work. The smart meters on our homes. Modern cars that continuously transmit location and diagnostic data to manufacturer cloud servers. The ambient digital environment of a connected city is, from a purely technical standpoint, the same kind of environment that intelligence services spent years systematically compromising in Iran.
That does not mean that Western cities are under equivalent surveillance, or that citizens of democracies face the same risks as those living under authoritarian regimes. It does mean that the technical capability to conduct that kind of surveillance exists, is being actively refined, and is not exclusive to any particular set of actors. Security researchers have repeatedly demonstrated that consumer-grade cameras, smart building systems, and industrial control devices accessible from the internet can be compromised by attackers with modest skills and freely available tools. The barriers are not technical. They are economic and organisational, and they are not holding.
What happened in Tehran deserves to be understood as more than a geopolitical event. It is a proof of concept, and a deeply unsettling one, for a world in which the boundary between the digital and the physical has effectively ceased to exist. The question of how we build and defend urban infrastructure going forward is not a niche concern for security professionals. It is a question about what kind of cities we want to live in, who controls the systems that make them function, and whether the convenience of connectivity is worth the risks that come with it.
The answer to that last question may turn out to be yes, on balance. But it should be a choice made with full awareness of what those risks actually are, not an assumption inherited from a decade ago when the threat model was still theoretical. It is not theoretical anymore.
That is exactly why a small and almost invisible Windows 11 artifact deserves more attention than it is getting today. Starting with Windows 11 (22H2+), Microsoft introduced a plain text execution trace tied to the Program Compatibility Assistant service, and it can be surprisingly useful during an investigation.
The location is:
C:\Windows\appcompat\pca\
Inside that directory, investigators may find text files, most notably PcaAppLaunchDic.txt, that record programs launched through Explorer, including the full executable path and a UTC timestamp. No proprietary parser is required. No obscure binary structure needs to be decoded first. In many cases, the evidence is simply sitting there in readable text.
That alone should make DFIR analysts, incident responders, and threat hunters pay attention.
Most Windows investigations still revolve around the usual execution artifacts: Prefetch, Amcache, Shimcache, UserAssist, SRUM, Jump Lists, and event logs. Those sources remain valuable, of course, but they all come with caveats. Some are disabled in certain environments. Some are noisy. Some require careful interpretation. Some are easy to misunderstand if taken in isolation.
What makes the PCA launch dictionary interesting is not that it replaces those artifacts. It does not. What makes it interesting is that it adds a fresh and highly readable layer of evidence that many analysts are not yet including in their workflow. If an attacker, insider, or end user launched a program by double-clicking it in Explorer, there is a chance this artifact captured that action with enough detail to become immediately useful. That includes binaries executed from local folders, removable media, and even network shares.
From an investigative perspective, that creates several opportunities. First, it can help answer a very simple but very important question: was this executable actually launched on this system? Second, it can help connect an alert to user activity. Suppose EDR telemetry flags a suspicious binary in C:\Temp, or a malicious file downloaded to the Desktop disappears before triage begins. If it was launched through Explorer, this artifact may still preserve the path and time of execution even after the file itself has been deleted. Third, it gives responders another way to validate or challenge a timeline. In real cases, confidence often comes from correlation, not from a single log entry.
The artifact is linked to the Program Compatibility Assistant service, also known as PcaSvc. This service has existed since the Windows Vista era and was originally designed to monitor launched applications, detect compatibility issues, and suggest fixes when older software behaved badly on newer Windows versions. With Windows 11 (22H2+), Microsoft added a more persistent text-based tracking mechanism to support that process.
In other words, this was not created for digital forensics. It was created for system functionality. But as often happens in incident response, an operating system feature built for one purpose ends up becoming valuable evidence for another. That is also why this artifact may remain underused for a while. Many analysts focus on the evidence sources they already know well, and new ones tend to spread slowly across the DFIR community. Until a source is documented in popular cheat sheets, supported by mainstream tools, and discussed in conference talks, it often stays in the blind spot.
The file format is plain text, encoded in UTF-16 LE, with one entry per line. Each line contains the full executable path followed by a pipe-separated UTC timestamp. Here is what a raw entry looks like:
C:\Users\Alice\Downloads\Quarterly_Review.pdf.exe|2026-03-15 09:42:11.000
C:\Temp\tool.exe|2026-03-15 09:43:05.000
D:\AUTORUN\payload.exe|2026-03-15 09:44:22.000
The third entry above is particularly interesting: D:\ is a removable drive. The artifact records the full path at the time of execution, which means USB-based delivery is immediately visible from the path prefix alone.
During live response or remote triage, you can read and display the file content directly with PowerShell. Because the file is UTF-16 LE encoded, a standard Get-Content call needs the correct encoding parameter:
Get-Content -Path "C:\Windows\appcompat\pca\PcaAppLaunchDic.txt" -Encoding Unicode
To filter for entries containing suspicious paths like C:\Temp, Downloads, or AppData, you can pipe into Select-String:
Get-Content -Path "C:\Windows\appcompat\pca\PcaAppLaunchDic.txt" -Encoding Unicode |
Select-String -Pattern "Temp|Downloads|AppData|\\Users\\"
For offline or image-based analysis, make sure the file is included in your acquisition or targeted collection. On a live system, a simple copy via cmd works:
copy "C:\Windows\appcompat\pca\PcaAppLaunchDic.txt" %USERPROFILE%\Desktop\PcaAppLaunchDic.txt
For KAPE users, the artifact is available in the !SANS_Triage target collection or can be added manually. The paths to include in a custom KAPE target are:
C:\Windows\appcompat\pca\PcaAppLaunchDic.txt
C:\Windows\appcompat\pca\PcaGeneralDb0.txt
C:\Windows\appcompat\pca\PcaGeneralDb1.txt
Note that the PcaGeneralDb files alternate as active logs and contain additional detail about compatibility errors and application exits, making them a useful companion to PcaAppLaunchDic.txt.
If you want to automate parsing across multiple endpoints or integrate this artifact into a larger pipeline, here is a minimal Python snippet that reads the file, splits each line on the pipe separator, and outputs structured results:
import sys

def parse_pca(filepath):
    """Parse PcaAppLaunchDic.txt into a list of path/timestamp dictionaries."""
    results = []
    with open(filepath, encoding="utf-16-le", errors="replace") as f:
        for line in f:
            # Strip whitespace and a possible BOM on the first line.
            line = line.strip().lstrip("\ufeff")
            if "|" in line:
                path, timestamp = line.rsplit("|", 1)
                results.append({"path": path.strip(), "timestamp": timestamp.strip()})
    return results

if __name__ == "__main__":
    entries = parse_pca(sys.argv[1])
    for e in entries:
        print(f"[{e['timestamp']}] {e['path']}")
Run it as:
python3 parse_pca.py PcaAppLaunchDic.txt
For a more robust implementation with timeline output and CSV export, Harlan Carvey published PCAParse, a dedicated Perl-based parser that converts timestamps to Unix epoch format and supports batch processing.
The strongest use case is straightforward: a user opens Explorer, browses to a file, and launches an executable by double-clicking it. That covers a lot of real-world intrusion activity. Think about how many malicious payloads are still executed through social engineering. A user receives a ZIP archive, extracts it, and opens a fake invoice from the Downloads folder. An operator drops a tool into C:\Temp during hands-on-keyboard activity and launches it manually. A technician runs a utility from a USB drive. A user opens a renamed executable from a network share because it looks like a document. Those are all situations where an artifact like this can become extremely helpful.
Its usefulness grows even more when the original file is gone. Attackers delete tools. Users clean up temporary folders. Automated maintenance jobs remove leftovers. In many cases, the executable that matters most no longer exists by the time the investigation starts. If the PCA launch dictionary still contains the path and timestamp, that missing file stops being a dead end and becomes a concrete event in the timeline.
That persistence also gives this artifact anti-forensic significance. A lot of cleanup activity is built around well-known traces. Adversaries may clear Prefetch, remove LNK files, wipe recent items, or run commodity anti-forensic tools that target the usual evidence stores. Less common artifacts are often left behind simply because the attacker does not know they exist.
Imagine a workstation involved in a suspected phishing incident. The email attachment is gone. The downloaded file is gone. The user insists they only previewed a document and never executed anything. EDR captured a weak signal but not enough to reconstruct the full chain of events. Prefetch is inconclusive because of system noise and retention limits.
During triage, you check the PCA launch dictionary and find:
C:\Users\Alice\Downloads\Quarterly_Review.pdf.exe|2026-03-15 09:42:11.000
That one line changes the investigation. You now have confirmation that the file was executed, the exact path it ran from, and a UTC timestamp to anchor the rest of the timeline.
If the same campaign involved execution from a USB drive or a mapped share, the path prefix (D:\, E:\, or a UNC path like \\server\share\) may immediately reveal the delivery mechanism without any additional pivoting.
As useful as it is, this artifact should not be oversold. The Sygnia research team and others in the community have documented several important constraints.
The first caveat is scope. The launch dictionary is associated with programs launched through Explorer. It is not a universal process execution log. If a binary was started from cmd.exe, PowerShell, WMI, PsExec, a scheduled task, a service, or another non-Explorer mechanism, you should not assume it will appear there.
The second caveat is encoding. The file uses UTF-16 LE encoding. Tools or scripts that assume UTF-8 or ASCII will either fail silently or produce garbage output. Always specify the correct encoding when reading the file programmatically, as shown in the snippets above.
The third caveat is interpretation. Presence of an entry strongly suggests execution through Explorer, but absence does not prove non-execution. This should be treated as one source in a larger body of evidence, not as the single authority on what ran. There is also the usual enterprise reality: endpoint hardening, cleanup routines, virtualization, and imaging practices can all influence what survives and what does not.
The best time to learn a new artifact is before you need it in a live case. If you work in DFIR, SOC operations, threat hunting, malware analysis, or internal investigations, this is worth adding to your checklist today. It takes little effort to inspect, it is easy to explain to stakeholders, and it can produce immediate value in cases involving user-driven execution.
At a minimum, defenders should include the C:\Windows\appcompat\pca\ directory in their targeted collections.
A quick reference video from 13Cubed and the Kaspersky Securelist analysis of Windows 11 forensic artifacts are both excellent starting points if you want to go deeper on this topic and the broader changes that Windows 11 brings to a typical forensic workflow.
This also matters for another reason: once a useful artifact becomes widely known, attackers eventually adapt. Today, many anti-forensic tools do not touch it. Tomorrow, some probably will. That window between discovery and widespread attacker awareness is when defenders get the most value from a source like this.
There is a broader point here beyond one Windows folder. Operating systems are full of small implementation details that create unexpected evidence. Some are documented. Some are barely discussed. Some sit in plain sight for months before the DFIR community realizes how useful they are. The investigators who consistently produce strong results are usually the ones who stay curious about those details. They do not stop at the standard artifact set. They keep testing, correlating, and asking a simple question: what else is this system quietly recording?
The PCA launch dictionary is a good reminder that useful evidence does not always arrive in a sophisticated format. Sometimes it is just a text file in a directory nobody bothered to open. And sometimes that text file is exactly what tells you the attacker really did double-click that payload in C:\Temp.
If you investigate Windows 11 systems and you are not checking this artifact yet, now is a good time to start.
That is the idea behind DFIR Toolkit, a project I built to offer a set of lightweight, browser-based utilities for digital forensics and incident response. The goal is simple: make common DFIR tasks faster, easier, and safer, without forcing users to upload potentially sensitive data to a remote server.

You can find it here: dfir-toolkit.andreafortuna.org
In everyday DFIR work, a few recurring tasks appear over and over again: extracting indicators of compromise from raw text, converting timestamps between formats, hashing files and strings, and making sense of raw email headers.
None of these tasks are especially glamorous. But they are constant, and they are often urgent.
In practice, analysts often use a mix of local scripts, terminal commands, commercial suites, and random online tools. That approach works, at least until it doesn’t. Some tools are too heavyweight for quick triage. Others require installation, accounts, or external uploads. And when you are handling logs, email headers, or investigative notes, sending data to a third-party service is not always a great idea.
So I decided to build something different: a set of small, focused DFIR tools that run entirely in the browser.
DFIR Toolkit is built around a straightforward idea:
fast forensic triage, zero setup, local analysis only.
In practice, that means no installation, no accounts, and no uploads: everything runs locally, and the data you paste never leaves your browser.
You open the page, paste or drop the data, and get the result immediately.
For users, the experience should feel almost disposable: quick to access, quick to use, and quick to trust.
On the technical side, building useful DFIR tools without relying on a backend is an interesting challenge. It forces you to think carefully about browser capabilities, parsing logic, performance, and user experience.
The initial release focuses on four utilities that cover common DFIR and triage scenarios.
This tool takes raw text (logs, reports, notes, copied email bodies, console output) and extracts common indicators of compromise such as IP addresses, domains, URLs, file hashes, and CVE identifiers.
The output is grouped, deduplicated, and structured so that the analyst can immediately copy, export, or pivot into external services such as VirusTotal, AbuseIPDB, Shodan, URLScan, MITRE, or NVD.
It is not meant to replace full enrichment platforms, but to reduce friction between “I have raw text” and “I have structured indicators I can work with.”
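Under the hood, this kind of extraction is mostly careful pattern matching. The sketch below is purely illustrative and is not the toolkit's actual client-side code (which runs in the browser); the regexes and the extract_iocs name are assumptions for the example:

import re

# Illustrative patterns only; a production extractor needs defanging support
# (hxxp, [.]), stricter validation, and more indicator types.
PATTERNS = {
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "sha256": re.compile(r"\b[A-Fa-f0-9]{64}\b"),
    "md5": re.compile(r"\b[A-Fa-f0-9]{32}\b"),
    "url": re.compile(r"\bhttps?://[^\s\"'<>]+", re.IGNORECASE),
    "cve": re.compile(r"\bCVE-\d{4}-\d{4,7}\b", re.IGNORECASE),
}

def extract_iocs(text):
    """Return indicators grouped by type, deduplicated and sorted."""
    return {kind: sorted(set(p.findall(text))) for kind, p in PATTERNS.items()}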
Timestamps are one of those things that quietly eat time in investigation after investigation.
Unix epoch, milliseconds, Windows FILETIME, WebKit timestamps, ISO 8601, odd application-specific date strings: every analyst has lost at least a few minutes to these conversions, usually at the worst possible time.
The Timestamp Converter is meant to solve that quickly. Paste one value or a batch of values, and the tool tries to detect the format and convert it into multiple useful representations.
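The conversions themselves are straightforward once you know each format's epoch and unit: Unix time counts seconds from 1970-01-01, Windows FILETIME counts 100-nanosecond ticks from 1601-01-01, and WebKit/Chrome timestamps count microseconds from 1601-01-01. A small illustrative Python sketch of those conversions (not the tool's implementation):

from datetime import datetime, timezone, timedelta

EPOCH_1601 = datetime(1601, 1, 1, tzinfo=timezone.utc)

def from_unix(seconds: float) -> str:
    # Unix time: seconds since 1970-01-01 UTC.
    return datetime.fromtimestamp(seconds, tz=timezone.utc).isoformat()

def from_filetime(ticks: int) -> str:
    # Windows FILETIME: 100-nanosecond intervals since 1601-01-01 UTC.
    return (EPOCH_1601 + timedelta(microseconds=ticks // 10)).isoformat()

def from_webkit(micros: int) -> str:
    # WebKit/Chrome: microseconds since 1601-01-01 UTC.
    return (EPOCH_1601 + timedelta(microseconds=micros)).isoformat()

print(from_unix(1700000000))                 # Unix seconds
print(from_filetime(133400000000000000))     # Windows FILETIME ticks
print(from_webkit(13340000000000000))        # WebKit/Chrome microseconds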
It is a small utility, but in DFIR, small utilities are often the ones you use every day.
This is another classic: calculating hashes for text or files without opening a shell or relying on a separate desktop tool.
The Hash Calculator supports common algorithms such as MD5, SHA-1, SHA-256, SHA-384, and SHA-512. The interesting part is that it works directly in the browser, using modern browser capabilities for cryptographic operations.
That makes it useful not only for analysts, but also for students, consultants, and anyone who needs a quick integrity check without leaving the browser.
Phishing investigations often begin with a wall of raw email headers that nobody wants to read manually.
The Email Header Analyzer parses that mess into something more structured and understandable, highlighting details such as the delivery chain and the authentication results.
It is not a full mail forensics suite. It is a fast way to go from “paste raw headers” to “I already have a first triage view.”
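For a sense of what that first triage view involves, here is a tiny sketch using Python's standard email library. It is not what the browser-based tool uses; it only illustrates the parsing idea, and the triage_headers name is an assumption for the example:

from email import message_from_string
from email.policy import default

def triage_headers(raw_headers: str) -> dict:
    """Pull out the fields most useful for a first pass over raw headers."""
    msg = message_from_string(raw_headers, policy=default)
    return {
        "from": msg.get("From"),
        "return_path": msg.get("Return-Path"),
        "received_chain": msg.get_all("Received", []),   # first entry = most recent hop
        "auth_results": msg.get_all("Authentication-Results", []),
        "spf": msg.get("Received-SPF"),
    }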
One of the strongest ideas behind this project is that some forensic support tasks are perfectly suited to run client-side.
Not every DFIR workflow belongs in the browser. Memory analysis, disk image parsing, and large-scale evidence handling still require traditional tooling. But there is a wide category of micro-tasks that can absolutely benefit from a browser-first approach.
When a task does fit the browser, a few advantages are immediate: sensitive data never leaves the analyst's machine, there is nothing to install or maintain, and the tool is available from any workstation in seconds.
That combination makes browser-based DFIR utilities much more practical than many people assume.
From the outside, DFIR Toolkit looks simple.
Under the hood, it uses a stack chosen for one reason: to make browser-native analysis practical.
Architecturally, the app is designed to stay as close as possible to a static, client-driven model.
That means no heavy backend, no unnecessary infrastructure, and fewer moving parts. It also means simpler deployment, easier maintenance, and a workflow that fits well with small, focused utilities.
In other words, the stack is not there to look modern; it is there because it fits the job.
I have always liked tools that solve one clear problem well.
Cybersecurity is full of platforms, dashboards, ecosystems, and feature sprawl. Those things have their place, of course. But there is also a lot of value in building software that does one job cleanly and immediately.
DFIR Toolkit follows that philosophy.
It is not trying to replace your forensic suite, SIEM, mail gateway, notebook, or terminal. It is trying to become one of those tabs you keep open because it saves you time several times a week.
That is a much more realistic goal and, in my opinion, a far more interesting one.
This release is only the starting point.
The roadmap already includes ideas for additional privacy-first browser utilities, especially around lightweight DFIR, parsing, and analyst productivity. There is a lot of room to expand this into a broader toolbox while keeping the same design principle: fast, local, and frictionless.
If this first version proves useful, I plan to keep iterating on both the existing tools and the overall library.
For now, the most important thing is simple: ship something practical, let people use it, and improve it based on real workflows.
If you want to test the project, you can access it here:
dfir-toolkit.andreafortuna.org
I built it for analysts, incident responders, consultants, students, and curious defenders who just want a faster way to handle small but frequent DFIR tasks.
Sometimes the best tool is not the biggest one.
Sometimes it is simply the one that opens instantly and gets the job done.
“The cloud is just someone else’s computer.” The old sysadmin joke has held up better than many forecasts from the last decade. After years of cloud-first mandates, digital transformation roadmaps, and hyperscaler marketing, more companies are taking a second look at where their workloads actually belong.
Not out of nostalgia, but because the industry has matured enough to recognize that “cloud-always” was never more rational than “on-premises-always.” In Europe, the reassessment has an added layer: data sovereignty, regulatory compliance, and a growing unease about relying on infrastructure that may not be fully shielded from foreign government access.
The scale of the shift is difficult to dismiss. According to a Barclays CIO Survey from Q4 2024, 86% of CIOs planned to move some workloads from public cloud to private or on-premises environments, the highest figure the survey had recorded. An IDC study from the same year found that roughly 80% of organizations expected to repatriate a share of compute and storage within the following twelve months.
Only 8% were planning a full exit. Nobody is shutting down their AWS accounts en masse; what’s happening is a selective repositioning, driven by a question that should probably have been asked much earlier: which workloads genuinely benefit from cloud, and which ones are just paying a premium for someone else’s hardware?
If the cloud repatriation movement has a poster child, it’s David Heinemeier Hansson (DHH), CTO of 37signals, the company behind Basecamp and HEY. After finding that the company was spending more than $3.2 million per year on AWS, DHH argued for a radical shift. The company invested roughly $600k in Dell servers and cut its compute bill by about $1.5 million to $2 million a year.
But that was the compute side. In later write-ups, 37signals documented the migration of billions of files off S3 and the operational model behind 10 petabytes of data in Pure Storage. The specifics of later savings are harder to pin to a single primary source, but the trajectory is clear: less recurring cloud spend, more direct control.
As DHH put it: “Cloud can be a good choice in certain circumstances, but the industry pulled a fast one convincing everyone it’s the only way.”
Dropbox was doing cloud repatriation before anyone called it that. Between 2013 and 2016, the company migrated the vast majority of its data from AWS to proprietary colocation facilities via an internal project codenamed “Magic Pocket”. According to later technical reporting, that move led to nearly $75 million in savings over two years, along with dramatically improved control over the storage stack.
In 2013, GEICO began migrating more than 600 applications to the cloud. By 2021, it was reportedly spending over $300 million annually on cloud services, and internal stakeholders were struggling to justify the expense. Trade-press coverage of the case emphasized a now-familiar problem: for data-heavy estates, cloud storage costs can dominate the bill surprisingly quickly. GEICO subsequently began a selective repatriation of its most storage-intensive workloads.
Ahrefs laid out the arithmetic in public: in a technical write-up, the company claimed its bare-metal approach had avoided roughly $400 million in cloud costs. The pattern running through all these examples is the same: once workloads become large, steady, and storage-heavy, owning your hardware starts to look very different from renting it.

Cloud was not a mistake. But it is a tool, not a destiny, and like every tool it works well in some contexts and poorly in others.
For workloads with unpredictable traffic (consumer apps, seasonal e-commerce, early-stage startups), the ability to scale on demand without upfront CapEx is hard to beat. You pay for what you use, when you use it.
Cloud-native development (managed Kubernetes, CI/CD pipelines, Infrastructure-as-Code) can dramatically accelerate delivery cycles. Provisioning environments in minutes rather than weeks is a genuine productivity gain for development teams.
Offloading database administration, backups, patch cycles, and hardware lifecycle to a provider creates real savings for organizations without dedicated infrastructure teams. Not every company can run its own data center, and not every company should.
Multi-region failover, global CDN, and built-in disaster recovery that would cost millions to replicate on premises come standard with every major cloud provider.
Managed AI/ML platforms, petabyte-scale data warehouses, and globally distributed databases are extraordinarily hard to build in-house. For many organizations, cloud remains the only practical path to these capabilities.
The Producer Price Index for cloud computing services rose by 6.4% between September 2023 and May 2024. Cloud pricing is not magically trending toward zero. For stable, predictable workloads, the pay-as-you-go model is often more expensive than owned hardware over a 3-5 year horizon. Egress fees (what you pay to move data out of the cloud) are especially easy to underestimate during procurement.
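A back-of-the-envelope comparison makes the point. Every number in the sketch below is an invented placeholder, not a benchmark; the only thing it demonstrates is how a steady monthly bill accumulates against a one-off hardware purchase:

# Toy 5-year comparison for a steady workload; all figures are placeholders.
years = 5
cloud_monthly = 40_000            # steady pay-as-you-go bill
egress_monthly = 5_000            # data transfer out, often underestimated
hardware_capex = 900_000          # servers + storage, bought up front
onprem_opex_monthly = 15_000      # colo space, power, support, staff share

cloud_total = (cloud_monthly + egress_monthly) * 12 * years
onprem_total = hardware_capex + onprem_opex_monthly * 12 * years

print(f"Cloud over {years} years:   ${cloud_total:,}")    # $2,700,000
print(f"On-prem over {years} years: ${onprem_total:,}")   # $1,800,000
# Change the inputs (spiky traffic, shorter hardware life) and the answer flips.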
Migrating into cloud is frictionless by design. Migrating out of cloud, or between providers, is significantly harder. Proprietary APIs, managed service dependencies, data format lock-in, and pricing structures tied to data transfer all create a gravitational pull that makes exit far more expensive than the initial architecture review ever anticipated.
For AI/ML training on large proprietary datasets, real-time industrial IoT processing, or high-frequency financial analytics, the shared nature of cloud infrastructure introduces variance and latency that dedicated hardware simply does not. For consistent, predictable workloads, on-premises infrastructure can offer better and more stable performance at a fraction of the long-term cost.
The shared responsibility model gets a lot of airtime, but in practice it is frequently misunderstood. According to Palo Alto Networks’ State of Cloud Security Report 2025, 53% of organizations identify lax IAM practices as a top challenge and a leading vector for data exfiltration. The Verizon 2025 DBIR found that 30% of breaches now involve third-party components, double the previous year’s figure, a finding that maps directly to cloud supply-chain risk.
The most instructive case is probably Capital One’s 2019 breach. A misconfigured web application firewall on AWS allowed an attacker to exploit the cloud metadata service via SSRF and access over 106 million customer records, including Social Security numbers and bank account details. Amazon’s response was that the vulnerability lay in Capital One’s application layer, not in AWS itself. That distinction is the shared responsibility model in a nutshell: the provider secures the infrastructure, the customer secures everything running on top of it. In practice, the boundary is blurry enough that even large, well-funded security teams can get it wrong.
On-premises environments allow security teams to implement least-privilege at the hardware level, maintain audit trails end to end, and respond to incidents without waiting on a provider’s tooling or disclosure timelines. Repatriating organizations consistently report improved visibility as a secondary benefit. That said, on-prem security is not free: it requires dedicated staff, continuous patching, physical controls, and the discipline to maintain what you now fully own.
Companies fine-tuning large language models or building domain-specific AI on confidential internal data have a strong incentive to keep that work off shared infrastructure. Even if the probability of data leakage is low, the residual risk is often incompatible with IP protection requirements in many sectors. This is becoming one of the fastest-growing reasons to invest in on-premises or private cloud infrastructure.
Repatriation is often framed as a security win, and in many respects it can be. But it would be dishonest to pretend that running your own infrastructure is inherently safer. The real picture is more nuanced.
Cloud providers do some things exceptionally well. The hyperscalers invest billions in physical security, DDoS mitigation, encryption at rest and in transit, and global threat intelligence. Most organizations cannot match that depth of investment on their own. Managed security services (SIEM, SOAR, threat detection) are mature, widely available, and improving rapidly.
The problem is what sits on top. Misconfigurations, overly permissive IAM roles, exposed storage buckets, unrotated secrets, forgotten API keys: these are not infrastructure failures, they are application- and configuration-layer mistakes that happen at the customer level. The CSA Top Threats to Cloud Computing report consistently ranks misconfiguration and inadequate identity management among the top risks. The IBM Cost of a Data Breach Report, published annually with the Ponemon Institute, continues to show that breaches involving cloud environments tend to cost more and take longer to contain than those in purely on-premises estates.
On-premises security gives you full control, but demands the staff to exercise it. Patching cycles, firewall rule management, physical access controls, backup testing, log retention, and 24/7 monitoring all need people. For organizations with experienced security operations teams, that control translates into better posture. For organizations that repatriate workloads without scaling their SecOps capability accordingly, the outcome can be worse than the cloud setup they left behind.
Hybrid architectures create the widest attack surface. This is the part that rarely gets enough attention. An estate split across on-premises, private cloud, and one or more public providers multiplies the number of identity boundaries, network perimeters, and configuration standards that need to be maintained in parallel. Consistent policy enforcement, centralized logging, and unified incident response across all environments require serious tooling and discipline.
The honest conclusion: moving workloads on premises can improve your security posture, but only if you invest in the operational capability to manage it. Repatriation is not a security strategy by itself.
For European organizations, the calculus goes beyond cost. Cloud repatriation is increasingly a matter of legal and regulatory necessity.
The General Data Protection Regulation established strict rules on personal data processing, cross-border transfers, and data subject rights. The Schrems II ruling invalidated the Privacy Shield framework and forced European regulators to look much more closely at US-based cloud providers. The EU-US Data Privacy Framework, adopted in 2023, provides a new legal basis for transfers, but it is still viewed by many practitioners as politically fragile. For that reason, keeping personal data within EU borders remains the easiest posture to defend.
In 2025, a point that legal analysts had raised for years became harder to dismiss in public debate: US cloud providers cannot offer an absolute guarantee that European data will never be reachable through US legal mechanisms. The CLOUD Act is central to that debate. In testimony before the French Senate, Microsoft France’s general manager stated under oath that he could not guarantee French citizens’ data was protected from US authority access. Google, Amazon, and Salesforce have made similar acknowledgments in other contexts. That tension between the CLOUD Act and European data-protection expectations is shaping infrastructure decisions in both the public and private sectors.
The NIS2 Directive, which entered into force in January 2023 and had to be transposed by member states by 17 October 2024, explicitly requires organizations in critical sectors to assess and manage cybersecurity risks introduced by their supply chains, including cloud service providers. In practice, this creates a formal obligation to evaluate concentration risk and, in some cases, to maintain control over critical infrastructure components. The ENISA Threat Landscape, updated annually, provides the European reference framework for these risk assessments and consistently highlights supply-chain attacks and cloud-infrastructure threats among the top concerns. Regulators increasingly expect documented evidence that cloud dependencies have been assessed and that alternatives exist.
The Digital Operational Resilience Act entered into force in 2023 and has applied since 17 January 2025. It requires financial institutions to demonstrate that they can continue operating through severe disruption involving major technology providers. That means maintaining documented exit strategies and preserving business continuity even when a critical ICT provider becomes unavailable. For banks, insurers, and investment firms, this has accelerated investment in hybrid architectures with a credible on-premises fallback layer.
The EU Data Act entered into force on 11 January 2024 and has applied since 12 September 2025. Among other things, it requires providers of data-processing services to reduce barriers to switching and to take legal, technical, and organizational measures against unlawful third-country access to non-personal data held in the EU. That does not eliminate lock-in overnight, but it does make provider switching and eventual exit easier than before.
Taken together, these regulations are reshaping the European compliance environment. Cloud-only architectures face increasing scrutiny, and the ability to demonstrate local control over data is becoming both a competitive and a legal differentiator.
The real problem is that “cloud-first” quietly became “cloud-always”, and many organizations are now paying for that simplification in ways they never fully modeled up front.
The market itself has absorbed this lesson. The hybrid cloud market was valued at approximately $85 billion in 2022 and is projected to reach $262 billion by 2027. Almost no organization repatriating workloads today is abandoning cloud entirely. They are building architectures that put each workload where it belongs. Cloud for elastic, innovation-heavy, globally distributed services. On-premises or private cloud for steady-state, data-intensive, compliance-critical, and proprietary-AI workloads.
The better framework is simpler than it sounds: right workload, right environment. It takes more architectural discipline up front, but it produces better outcomes over time.
For European organizations, it goes beyond good engineering. Under GDPR, NIS2, DORA, and the EU Data Act, it may be the only defensible position left.
If you have been following my open-source work, you probably know MalHunt, the memory forensics tool I built to automate malware hunting on top of Volatility. Yesterday I pushed a significant batch of updates that, taken together, amount to a near-complete rewrite of the project. Here is what changed and why it matters.
The most visible change is structural. The original malhunt.py was a single 317-line script: practical, but not particularly maintainable or extensible. That file is gone. The codebase now lives under src/malhunt/ as a properly organized Python package:
src/malhunt/
├── core.py # orchestration logic
├── volatility.py # Volatility3 wrapper
├── scanner.py # YARA, Malfind, and network scanners
├── artifacts.py # artifact collection and ClamAV integration
├── models.py # data models
├── utils.py # utilities and YARA rule handling
└── __main__.py # CLI entry point
This separation makes each component independently testable and easier to extend. Speaking of testing: the project now ships with a full test suite covering the core logic, the scanner layer, and the Volatility wrapper, something the old script lacked entirely.
The package is installable via both pip and poetry, and dependency management is now handled through a pyproject.toml with a locked poetry.lock file. No more environment guesswork.
If you have been using the old version, the most important thing to know is that MalHunt now targets Volatility3 exclusively. The legacy v0.1 relied on Volatility2 and its --profile= flag; that approach is now gone.
Volatility3 works differently: it does automatic OS and version detection, it exposes plugins with updated names (windows.vadyarascan, windows.malfind, windows.netscan, and so on), and it handles symbol tables rather than profiles. The underlying subprocess management has been rebuilt accordingly, with proper timeout handling and a configurable retry strategy.
A migration guide is available in docs/MIGRATION.md for anyone upgrading from v0.1.
YARA rule management was one of the weakest points of the old tool. The new version addresses it on multiple levels.
Downloading rules from Yara-Forge. Instead of cloning a git repository, MalHunt now fetches the full Yara-Forge rule bundle directly via HTTP, caches it under ~/.malhunt/, and automatically refreshes it when the cache is more than a day old.
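Conceptually, the caching logic is simple. The sketch below shows the approach; the release URL and asset name are assumptions for illustration, not necessarily the exact ones MalHunt uses:

import time
import urllib.request
from pathlib import Path

# Assumed release URL and asset name; check the repository for the real ones.
YARA_FORGE_URL = "https://github.com/YARAHQ/yara-forge/releases/latest/download/yara-forge-rules-full.zip"
CACHE_DIR = Path.home() / ".malhunt"
MAX_AGE_SECONDS = 24 * 60 * 60  # refresh the bundle once a day

def get_rule_bundle() -> Path:
    """Return the cached Yara-Forge bundle, downloading it if missing or stale."""
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    bundle = CACHE_DIR / "yara-forge-rules-full.zip"
    stale = (not bundle.exists()
             or time.time() - bundle.stat().st_mtime > MAX_AGE_SECONDS)
    if stale:
        urllib.request.urlretrieve(YARA_FORGE_URL, bundle)
    return bundle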
Text-based sanitization. Before using any YARA file, MalHunt strips out rule blocks that rely on imports or features not supported by the version of yara-python used by Volatility: import "math", import "cuckoo", import "hash", imphash patterns, and pe.number_of_signatures. This alone prevents a large category of failures on the 3300+ rules included in the Yara-Forge bundle.
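A simplified version of that sanitization idea looks like the sketch below; the rule-block regex is an approximation for illustration, not the exact implementation:

import re

# Module features that the yara-python build bundled with Volatility often
# cannot handle; the import lines are dropped, along with any rule using them.
BAD_TOKENS = ("math.", "cuckoo.", "hash.", "imphash", "pe.number_of_signatures")
BAD_IMPORTS = re.compile(r'^import\s+"(math|cuckoo|hash)"\s*$', re.MULTILINE)
RULE_BLOCK = re.compile(r"(?:private\s+|global\s+)*rule\s+\w+[^{]*\{.*?\n\}", re.DOTALL)

def sanitize(yara_text: str) -> str:
    """Drop unsupported import statements and any rule block that relies on them."""
    text = BAD_IMPORTS.sub("", yara_text)
    return RULE_BLOCK.sub(
        lambda m: "" if any(tok in m.group(0) for tok in BAD_TOKENS) else m.group(0),
        text,
    )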
Compile-and-prune validation. The sanitization pass handles known-bad patterns, but the YARA format is complex enough that a rule file can still fail to compile for other reasons. The new validate_and_prune_yara_rules_file() function takes a different approach: it actually compiles the file using yara-python, and when a compilation error occurs, it locates the offending rule block, removes it, and tries again. This loop repeats until the file compiles cleanly or a maximum iteration count is reached. The result is a YARA file that is guaranteed to work, even when the upstream source contains rules with edge-case syntax or undocumented dependencies.
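In sketch form, the loop looks roughly like this. It is an approximation of the approach, and it assumes yara-python reports the failing line number in its error message (libyara normally formats errors as "file(line): reason"):

import re
import yara
from pathlib import Path

def validate_and_prune(path: str, max_passes: int = 200) -> None:
    """Repeatedly compile a rule file, removing the rule block that breaks compilation."""
    for _ in range(max_passes):
        try:
            yara.compile(filepath=path)
            return  # compiles cleanly, we are done
        except yara.SyntaxError as exc:
            m = re.search(r"\((\d+)\)", str(exc))  # pull the line number out of the error
            if not m:
                raise
            _remove_rule_containing_line(path, int(m.group(1)))
    raise RuntimeError("Too many invalid rules, giving up")

def _remove_rule_containing_line(path: str, lineno: int) -> None:
    """Delete the rule block that spans the given 1-based line number."""
    lines = Path(path).read_text(encoding="utf-8", errors="replace").splitlines(keepends=True)
    start = lineno - 1
    # Walk up to the "rule ..." line that opens the block.
    while start > 0 and not lines[start].lstrip().startswith(("rule ", "private rule ", "global rule ")):
        start -= 1
    end = lineno - 1
    # Walk down to the closing brace of the block.
    while end < len(lines) - 1 and lines[end].strip() != "}":
        end += 1
    Path(path).write_text("".join(lines[:start] + lines[end + 1:]), encoding="utf-8")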
Handling large scans without giving up. YARA scanning over a memory dump is slow. On large images it can easily take 15–20 minutes. The new VolatilityConfig now exposes a dedicated yara_timeout parameter (defaulting to 15 minutes) separate from the general command timeout. If a scan times out and the threshold is still below one hour, MalHunt doubles it and retries automatically. This prevents the tool from aborting unnecessarily on large forensic images, the kind you typically encounter in enterprise incident response.
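The retry logic itself is small. A sketch of the idea, with illustrative parameter names rather than MalHunt's actual ones:

import subprocess

def run_with_growing_timeout(cmd: list[str], timeout: int = 15 * 60,
                             ceiling: int = 60 * 60) -> subprocess.CompletedProcess:
    """Run a slow scan command, doubling the timeout on expiry up to a ceiling."""
    while True:
        try:
            return subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
        except subprocess.TimeoutExpired:
            if timeout >= ceiling:
                raise  # already at the one-hour ceiling, give up
            timeout = min(timeout * 2, ceiling)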
One of the most frustrating aspects of working with Volatility is decoding its error output. MalHunt now puts effort into turning those errors into actionable feedback.
Structured error objects. The VolatilityError exception now carries the plugin name, the return code, and the full stdout and stderr from the failed command. Downstream code, and log files, can show exactly what went wrong and where, rather than just “Volatility command failed.”
Symbol recovery. When Volatility fails because Windows symbol tables (PDB files) are missing, MalHunt now attempts to recover automatically. It parses the error output for download URLs, tries both .pdb and .pd_ filename variants, and downloads the files into the correct directory structure. If the automatic recovery does not succeed, the tool generates a ready-to-run shell script containing all the download commands, so you can fix the problem in one step rather than hunting through Volatility documentation.
YARA dependency detection. A separate check catches the situation where yara-python is not installed in the same Python environment that the vol binary uses. In that case MalHunt raises a specific error:
“YARA backend not available for Volatility. Install yara-python in the same Python environment used by ‘vol’ (or use yara-x), then retry.”
That single sentence saves a lot of time compared to staring at a generic plugin-not-available stack trace.
The project now ships with a proper docs/ directory covering architecture decisions, installation instructions for various environments, a full usage reference with CLI examples, and a 400-line troubleshooting guide. Not the most glamorous part of a release, but probably the one most people will actually use.
If you were using the old version, the upgrade path is straightforward:
pip install --upgrade malhunt
Or from source:
git clone https://github.com/andreafortuna/malhunt.git
cd malhunt
poetry install
Make sure you have Volatility3 (≥2.0.0) installed and accessible as vol in your PATH. ClamAV integration remains optional.
If you were running v0.1 with Volatility2, read the migration guide first: the command-line interface has changed and the profile-based approach no longer applies.
A few things are still on my list: Linux memory dump support has been tested but could use more coverage, the ClamAV integration needs updating to handle newer daemon configurations, and I want to add structured JSON output for easier integration with SIEM pipelines and case management tools. Pull requests and issues are welcome on GitHub.
MalHunt is released under the MIT license. It is intended for authorized forensic analysis and security research only.
The first and most disorienting problem is the complete elimination of
profiles. In Volatility2, a profile tied the tool to a specific OS
version and analysis proceeded with a single --profile=Win10x64_18362
argument. Volatility3 replaces this model with symbol tables, dynamically
resolved compressed JSON files matched against a PDB GUID embedded in the
memory image. In connected environments the framework contacts Microsoft’s
symbol server automatically, downloads the matching PDB, converts it, and
caches the result locally. The first run on a new kernel version is slow but
subsequent ones are instant. For air-gapped environments, the
JPCERT/CC offline usage guide
documents how to pre-populate the cache on a connected machine using
pdbconv.py and transfer the resulting files to the isolated workstation.
The second problem is subtler and harder to diagnose: the automagic PDB
scanner failing to locate the correct kernel base address, causing the familiar
Unsatisfied requirement plugins.*.nt_symbols error even with correctly placed
symbol files. Running vol.py -f image.dmp -vvvv windows.info reveals which
base addresses were attempted and whether the scanner exhausted its candidate
list. In several cases, specifying the kernel virtual offset manually through
the configuration system is the only path forward. If acquisition was performed
inside a virtualised environment, disabling hardware virtualisation in BIOS
before the next capture frequently resolves the issue at the source.

The third problem hits anyone working outside the Windows ecosystem.
Volatility3 has no centralised symbol distribution for Linux, Android, or
macOS. Every kernel version and build configuration requires a custom-generated
symbol table produced with dwarf2json,
a Go utility that processes DWARF debug data from a vmlinux binary and a
System.map file. The kernel must have been compiled with
CONFIG_DEBUG_INFO=y. The kernel images that distributions ship are stripped of
this debug data, but major distributions (Ubuntu, Debian, Fedora, RHEL)
ship debug symbols in separate packages (linux-image-*-dbgsym on
Debian/Ubuntu, kernel-debuginfo on RHEL/Fedora) that contain the unstripped
vmlinux needed by dwarf2json. A full recompile is only necessary when no
matching debug package exists for the target kernel version.
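For reference, generating a Linux symbol table from a matching debug kernel
typically looks something like this (paths and version strings are illustrative):
dwarf2json linux --elf /usr/lib/debug/boot/vmlinux-5.15.0-91-generic > linux-5.15.0-91-generic.json
The resulting JSON (optionally xz-compressed) is then placed under
volatility3/symbols/linux/ or in a directory passed to vol.py with the -s option.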
The fourth problem compounds the difficulty for Android emulator dumps. The
kernel must be compiled from source using the exact toolchain version embedded
in /proc/version, and the resulting vmlinux must produce a banner string
that matches the memory dump character for character. A single invisible
whitespace discrepancy causes Volatility3 to reject the symbol file without
a clear explanation. Running vol.py banners on the dump before any other
command verifies the expected banner and prevents hours of misdiagnosis.
The fifth problem is specific to macOS analysis. A Kernel Debug Kit (KDK)
matching the exact OS build number must be downloaded from Apple’s developer
portal. After running dwarf2json on the kernel DWARF bundle, the
constant_data field in the resulting JSON must be manually populated with a
base64-encoded Darwin banner string extracted from the target memory image.
Forgetting this step or encoding the wrong banner produces the same
“symbol table requirement not fulfilled” error seen on other platforms, but
with no automatic resolution path available.

The sixth problem is one of the most surprising: dramatic, sometimes
catastrophic performance degradation on specific plugins. windows.filescan
on a typical Windows 10 image takes over an hour in Volatility3, versus
under one minute in the previous framework. The timeliner plugin, which
aggregates artefacts from dozens of sources across the entire image, has been
observed running for over a hundred hours on large dumps without completing.
The root cause is often the automagic stacker, which attempts multiple layer
detection strategies sequentially before committing to a format. Each failed
attempt carries measurable overhead. Specifying the layer type explicitly with
--layer-type WindowsIntel bypasses the guesswork and can reduce startup time
from several minutes to a few seconds on the same image. QEMU memory dumps are
the worst-affected format: the layered structure triggers repeated stacker
retries that render most plugins impractical until the dump is converted to raw
using the built-in layerwriter command
(vol.py -f dump.qemu -o output_dir layerwriter.LayerWriter).
The seventh problem is a specific pathological case: windows.memmap.Memmap
entering an infinite page-mapping loop on certain system processes such as
svchost.exe and sihost.exe. As documented in
GitHub issue #1920,
the plugin prints the table header and then consumes 100% CPU indefinitely,
producing no rows and only terminating after several days or a manual interrupt.
For affected PIDs, windows.vadinfo.VadInfo provides the virtual address
descriptor information required in most investigations and completes in a
reasonable time on the same processes that cause the hang.

The eighth problem for practitioners migrating from Volatility2 is the
absence of several plugins they relied on regularly. notepad and clipboard
were not ported to Volatility3. Some,
like notepad, were deliberately excluded because heap structure changes in
modern Windows make the plugin fundamentally unreliable regardless of which
framework hosts it. For text content buried in process memory, windows.strings
fed with an offset-tagged string file produced by strings -o dump.mem > strings.txt
provides an imperfect but functional substitute. However, as tracked in the
long-running issue #876 in the Volatility3 repository,
the offset mapping logic still has edge cases where strings confirmed present
in dumped process memory go undetected, and the issue remains open after years
of active discussion.
The ninth problem concerned the loss of the portable deployment model that was standard practice in incident response: early versions of Volatility3 shipped without a standalone Windows executable. Analysts accustomed to dropping a single binary onto a forensic workstation found themselves with no equivalent option, and the missing binary became the most-commented issue in the entire repository. Official pre-compiled executables are now distributed alongside each tagged release and can be downloaded directly from the GitHub releases page without requiring a local Python installation, restoring the workflow that practitioners had built their field kits around.
The tenth problem is not a single error but a systematic architectural
disruption affecting both daily usage and plugin development. The --profile=
argument is gone. Plugin names are namespaced as windows.*, linux.*, and
mac.*. The calculate() and render_text() pattern that structured every
Volatility2 plugin has been replaced by a _generator() method yielding
rows to a TreeGrid renderer. Class inheritance changed, dependency
declarations moved into a get_requirements() classmethod, and short option flags
were removed entirely. The
official Volatility2 to Volatility3 migration guide
documents these changes comprehensively, but no documentation fully prepares
an analyst for the operational cost of running familiar commands against a
framework that no longer recognises them.
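For plugin developers, the shape of the new model is easier to grasp from a
skeleton than from a description. The following is a minimal sketch of a
Volatility3-style plugin with an invented name and a deliberately trivial
generator; it illustrates the get_requirements() / _generator() / TreeGrid
structure rather than any real analysis logic.
from volatility3.framework import interfaces, renderers
from volatility3.framework.configuration import requirements


class ExampleInfo(interfaces.plugins.PluginInterface):
    """Illustrative skeleton showing the Volatility3 plugin structure."""

    _required_framework_version = (2, 0, 0)

    @classmethod
    def get_requirements(cls):
        # Dependencies are declared here rather than resolved implicitly
        return [
            requirements.ModuleRequirement(
                name="kernel",
                description="Windows kernel module",
                architectures=["Intel32", "Intel64"],
            ),
        ]

    def _generator(self):
        # Each row is (indentation_level, (column_values...)); a real plugin
        # would walk kernel structures via self.context and self.config here
        yield 0, ("kernel module", str(self.config["kernel"]))

    def run(self):
        # No render_text(): the framework renders the TreeGrid itself
        return renderers.TreeGrid(
            [("Key", str), ("Value", str)],
            self._generator(),
        )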

Woven through this architectural transition is a dependency problem that
cripples many first installations. yara-python and pefile
are listed as optional but practically mandatory for production use. Missing
yara-python silently disables yarascan, vadyarascan, and mftscan.
Missing pefile eliminates verinfo, netscan, netstat, and
skeleton_key_check at import time. Running
pip install pefile "yara-python>=3.8.0" capstone pycryptodome immediately
after the base installation closes most of these gaps. On Windows, libyara.dll
must be in the system PATH and the Python architecture must match the YARA
binary architecture exactly, a constraint that silently breaks installations
where system Python is 64-bit and the installed YARA wheel is 32-bit.
Despite all of this, the direction of travel is clear. The Volatility Foundation continues to close the feature gap with each release, and the underlying architecture of Volatility3 is genuinely better suited to modern memory analysis than its predecessor. The investment required to navigate these ten problems is real but not prohibitive, and it pays back quickly once the environment is correctly configured and the new mental model is internalised.
Apple built Face ID around dedicated hardware that most competitors have never replicated at scale. The TrueDepth camera system, introduced with the iPhone X in 2017 and refined across every subsequent generation, uses a dot projector, an infrared camera, and a flood illuminator to cast more than 30,000 invisible infrared points onto the user’s face. The infrared camera reads the distortion of those dots to generate a precise depth map, while the flood illuminator lets the same camera capture an infrared image of the face even in complete darkness. This is not image recognition in the conventional sense; it is a geometric measurement of the physical structure of a human face, performed in three dimensions and entirely independent of ambient light conditions.

Android manufacturers have historically taken a different path. The vast majority of Android phones, including flagship models from Samsung, OnePlus, and many others, perform facial recognition using the front-facing RGB camera. This 2D approach converts a standard photograph into a mathematical representation of facial geometry, measuring relative distances between the eyes, nose, mouth, and jawline. The process is fast and works reliably in daylight, but it is fundamentally different in its security profile. A photograph, a detailed mask, or even a video played on a second screen can, under certain conditions, defeat a purely 2D recognition system. This asymmetry in hardware capability is the root of almost every meaningful security difference between the two ecosystems.
Diagram of the Secure Enclave components. Source: Apple Platform Security.
The depth map generated by Apple’s TrueDepth hardware is converted into a mathematical model, informally referred to as a face vector, which is a compact numerical representation of the three-dimensional structure of the face. This vector captures structural information that no flat image can contain. A neural network running entirely on the device compares each unlock attempt against the stored vector, producing a confidence score; if that score exceeds a defined threshold, access is granted. Critically, neither the raw depth map nor the resulting vector ever leaves the device. Both are stored and processed exclusively within the Secure Enclave, a dedicated cryptographic coprocessor physically isolated from the main application processor and inaccessible even to the operating system itself.
The Secure Enclave is not merely a software boundary. It is a dedicated subsystem within the system-on-chip (SoC) with its own boot process, its own encrypted memory, and communication channels that the main application processor cannot read. Even if the application layer were fully compromised by malware or a privilege escalation exploit, the biometric data stored in the enclave would remain unreachable. Apple reports the probability of a random individual unlocking someone else’s device with Face ID at approximately one in one million, compared to one in fifty thousand for the older Touch ID fingerprint sensor. The system also adapts over time, gradually updating the stored vector to account for natural changes in appearance such as facial hair, eyeglasses, or the slower drift of aging. This continuous learning is performed without transmitting any data externally, maintaining the privacy model alongside the security one.
BiometricPrompt architecture and the Trusted Execution Environment stack. Source: Android Open Source Project.
Google has pursued a different strategy on its recent Pixel line, and the evolution of that strategy illustrates how far software can extend the limits of biometric security without specialized hardware. After abandoning the dedicated 3D sensors of the Pixel 4, subsequent 2D face unlock implementations on Android were widely classified as a weaker biometric modality, acceptable for device unlock but not for authorizing financial transactions or accessing sensitive application data. The Android biometric security framework defines three security classes, from Class 1 (convenience only) to Class 3 (strong authentication), and for years face unlock sat below the threshold required for payment authorization or cryptographic key access.
With the Pixel 8, Google upgraded face unlock to a Class 3 biometric, meaning it meets the same security bar as fingerprint authentication and can be used with the Android Keystore to protect cryptographic keys. The improvement came not from new sensors but from substantial advances in liveness detection, the algorithmic ability to distinguish a live human face from a static image or a three-dimensional replica. Modern liveness detection systems analyze micro-movements, skin texture variations, infrared reflectance patterns on devices equipped with appropriate illuminators, and temporal consistency across multiple frames captured during the brief unlock gesture. The result is a system that remains theoretically more vulnerable than Apple’s hardware-centric approach but is considerably harder to fool than earlier software-only implementations.
The storage model on Android is sandboxed but follows a different architecture. Biometric templates reside within the Trusted Execution Environment (TEE), implemented through ARM TrustZone technology, while cryptographic keys are safeguarded by a dedicated secure element like the Titan M2 chip on Google Pixel devices. The TEE provides strong isolation from the main OS, comparable in concept to Apple’s Secure Enclave, though the depth of that comparison depends on the specific implementation. The key challenge for Android is not any single device but the ecosystem as a whole: across hundreds of manufacturers and thousands of models, the quality of TEE implementation varies in ways that are difficult for end users or even administrators to evaluate without detailed technical documentation.
The most operationally relevant question is not which system is theoretically stronger but which is more difficult to defeat under realistic adversarial conditions. Presentation attacks (using a physical or digital replica of the target’s face to deceive the sensor) represent the primary concern for any face-based biometric.
A 2D face unlock system is inherently vulnerable to attacks using high-resolution photographs, printed or displayed on a screen. Several older Android devices were publicly demonstrated to unlock using nothing more than an image retrieved from a social media profile, or even a video playing on another smartphone.
A popular demonstration by Unbox Therapy showing how the Samsung Galaxy S10’s 2D face unlock could be bypassed using a video.
Apple’s Face ID sets a substantially higher bar: defeating it requires a custom-crafted three-dimensional model of the target’s face with accurate depth representation and credible infrared reflectance properties, a task that is neither trivial nor inexpensive. However, researchers, such as those at the Vietnamese security firm Bkav, have successfully demonstrated proof-of-concept bypasses using highly detailed 3D-printed masks combined with 2D infrared images for the eyes.
Security researchers from Bkav demonstrating a proof-of-concept bypass of Apple’s Face ID using a specially crafted 3D mask. (Video via Reuters)
Law enforcement interaction represents a separate and often underappreciated dimension of the threat model. In documented cases across multiple jurisdictions, authorities have compelled individuals to unlock devices using biometric authentication, on the grounds that biometrics constitute physical evidence rather than testimonial disclosure protected against self-incrimination. The legal framework around this coercion varies by country, but the practical implication is that both Face ID and Android face unlock can be used to access a device without requiring the owner to disclose a passphrase. From this angle, the security difference between the two systems becomes less decisive than the shared vulnerability intrinsic to any biometric access control mechanism: the authenticator is always present and visible.
A subtler attack surface involves the neural network inference layer itself. Adversarial perturbations, carefully crafted and nearly imperceptible modifications to input data, can sometimes cause classification models to produce incorrect outputs. Since Android face unlock relies more heavily on learned feature representations, it is theoretically more exposed to this category of attack. In practice, such techniques require significant expertise and physical proximity to the target device. Apple’s reliance on raw depth measurements, rather than purely learned visual features, offers a degree of inherent resistance to adversarial input manipulation, since the depth sensor produces structured physical data that is harder to perturb than a pixel array.
The security gap between the two systems is real, measurable, and architecturally significant. It does not, however, translate uniformly into elevated risk for every user in every context. For the vast majority of individuals using face unlock to avoid typing a PIN throughout the day, both systems provide adequate protection against casual access by strangers, opportunistic theft, or unsophisticated attackers. The adversary capable of defeating either system at scale remains confined, for now, to well-resourced state actors and highly specialized security researchers.
For organizations evaluating mobile device management policies or assessing risk for regulated industries, the distinction carries considerably more weight. Apple’s uniform hardware implementation across all Face ID devices means that the security properties of the biometric are predictable and consistent across an entire fleet. An IT administrator deploying iPhones in a financial services or healthcare environment can rely on the same depth-sensing architecture regardless of which iPhone model employees carry. Android’s heterogeneous ecosystem makes equivalent assurance difficult to provide. A Samsung Galaxy S25 and a budget device from a lesser-known manufacturer may both advertise face unlock, but the underlying implementation, from sensor quality to TEE integrity to software patch level, can differ dramatically.
The principle of least privilege suggests that sensitive operations, whether authorizing a wire transfer or accessing an encrypted credential vault, should require the strongest available authentication factor. On Apple hardware, Face ID satisfies this requirement natively and consistently. On Android, the answer depends on the specific device, the Android version, and the application’s own implementation of the BiometricPrompt API. Recent flagships from Google and Samsung running fully patched software come close to closing the gap in practical terms. Older or lower-tier devices running outdated Android versions do not. Any organizational security policy that treats Android face unlock as equivalent to Face ID without accounting for this variance is operating on an assumption that the hardware and software stack does not always support.
The face, whether read in three dimensions by a dedicated sensor or interpreted by a neural network trained on millions of images, remains among the most convenient biometric factors available on a consumer device. The technology behind it is not monolithic, and treating it as such is a security mistake. Understanding that projecting 30,000 infrared points onto a face and recognizing its photograph with a standard camera are architecturally different operations, with different attack surfaces and different failure modes, is the foundation of any informed decision about mobile authentication, whether you are choosing a phone for personal use or writing a biometric policy for an enterprise.
The NIS2 Directive (EU) 2022/2555 has fundamentally redefined what it means for a European organization to take cybersecurity seriously. Among its most significant shifts is the elevation of training from a recommended best practice to a binding legal obligation. Article 20 explicitly requires that management bodies of essential and important entities follow cybersecurity training, and encourages organizations to offer similar, regular training to their employees (a requirement further solidified by Article 21). This is not a formality. It is the normative foundation upon which the entire human layer of security now rests.

What makes this requirement particularly demanding is not its breadth, but its depth. Compliance is no longer satisfied by assigning an annual e-learning module and checking a box. Regulators, national supervisory authorities, and auditors now expect organizations to demonstrate that training is meaningful, measurable, and continuously updated. The standard has shifted from “did you train your staff?” to “can you prove the training worked, who received it, when, and how it aligned with your current threat landscape?”
Understanding how to structure a compliant training plan requires reading Article 20 and Article 21 together, not in isolation. Article 20 addresses governance and management accountability: board members and senior executives must personally undergo training to acquire sufficient knowledge to identify risks and assess cybersecurity risk-management practices. The personal liability dimension is crucial. Under NIS2, management can be held individually responsible for infringements, which transforms training from an HR matter into a boardroom imperative.
Article 21 then specifies the technical and organizational measures that entities must implement, listing “basic computer hygiene practices and cybersecurity training” as an explicit requirement within an all-hazards risk management framework. Training must be anchored to the organization’s broader risk posture, covering incident handling, business continuity, multi-factor authentication, backup procedures, and supply chain awareness. The two articles together make it clear that no training plan can be considered compliant if it operates as a standalone activity disconnected from risk assessment processes.
The financial stakes reinforce this reading. Essential entities face maximum fines of at least 10 million euros or 2% of global annual turnover (whichever is higher) for non-compliance, while important entities face maximum fines of at least 7 million euros or 1.4% of turnover (whichever is higher). These figures, combined with the possibility of personal liability for executives, make a defensible training program one of the most consequential investments an organization can make.
A compliant training plan is built on four pillars: scope, content, cadence, and evidence. Each one must be intentionally designed rather than assembled by default.
Scope determines who receives what. Not all employees carry the same risk exposure, and a well-structured plan acknowledges this through segmentation. The general workforce needs foundational cyber hygiene: recognizing phishing, using strong passwords and password managers, understanding how to report incidents, applying secure configurations, and practicing safe remote work habits. Mid-level management adds a layer covering incident response protocols, business continuity fundamentals, and an overview of NIS2 obligations relevant to their function. Senior management and board members require training specifically tailored to their legal obligations, risk oversight responsibilities, and the personal liability framework that Article 20 introduces. A training plan that fails to differentiate between these audiences is unlikely to survive a rigorous audit.

Content must map directly to identified risks. One of the most common gaps auditors identify is a mismatch between the threats documented in the organization’s risk register and the topics covered in its training modules. If a company has identified social engineering as a primary attack vector in its threat assessment, and its training program contains no phishing simulation or social engineering awareness module, that gap is an evidentiary liability. Content should be derived from the risk assessment, reviewed at least annually, and updated whenever the threat landscape shifts materially.
Cadence addresses the persistent misconception that annual training is sufficient. NIS2’s requirement for “regular” training implies a frequency calibrated to the pace of threat evolution and to the organization’s operational reality. Practical interpretations from national authorities and compliance frameworks suggest quarterly awareness touchpoints at minimum, supplemented by role-specific deep dives and just-in-time training triggered by incidents or newly discovered vulnerabilities. Phishing simulations, tabletop exercises, and live scenario walkthroughs are not optional embellishments; they are the mechanisms through which training transitions from passive consumption to active competency.
The shift from policy-based assurance to evidence-based proof is perhaps the most operationally disruptive change NIS2 introduces. An auditor asking for proof of training compliance will not be satisfied by a list of employee names marked “completed.” What they require is granular, timestamped, exportable documentation that answers four specific questions: who was trained, what content they covered, when the training took place, and what their assessed performance was.
This means organizations need to invest in platforms or processes capable of producing this level of detail. Training records should include module-specific completion logs, assessment scores, remedial activity records for those who failed initial assessments, and separate documentation for management-level training that reflects their distinct obligations under Article 20. The ENISA technical implementation guidance reinforces that evidence must demonstrate not just activity, but effectiveness. Dashboards showing improvement trends in phishing simulation click rates, reductions in policy violation incidents, or increases in self-reported suspicious activity are the kind of data that demonstrate a living program rather than a dormant one.
Governance documentation must accompany training records to provide context. Version histories of training content showing how modules evolved in response to updated risk assessments, board meeting minutes confirming that management completed their mandated training, and formal approval signatures on the training plan itself are all components of a defensible evidence package. Without this layer, even an operationally excellent training program may fail to produce the compliance narrative an audit demands.
A training plan is only as defensible as the logical chain connecting it to the organization’s formal risk management framework. Risk-linkage is the principle that transforms a training calendar into a compliance control. It means that every training topic can be traced back to a specific identified risk, that every update to the training curriculum is triggered by a documented change in the risk landscape, and that training outcomes feed back into the organization’s periodic risk reviews as measurable evidence of risk reduction.
In practice, this requires integrating the training program into the same governance cycle as the risk assessment. When a new vulnerability is identified, the training team receives a signal to assess whether existing content addresses it. When a sector-level threat intelligence report is published, relevant modules are reviewed for currency. When an incident occurs, post-incident analysis informs the next training iteration. This recursive loop is what compliance frameworks increasingly describe as the difference between a static program and a resilient one.
Organizations that build their training plans with this architecture, scope differentiated by role and risk, content anchored to threat intelligence, cadence calibrated to regulatory expectations, and evidence structured for auditability, are not simply meeting the minimum requirements of NIS2. They are building the kind of institutional security culture that the directive was designed to foster: one where cybersecurity awareness is not a compliance exercise, but an organizational capability embedded in daily operations and accountable at every level of the hierarchy.
In any enterprise environment, privileged accounts represent the highest-value target for attackers. These are not just administrator credentials; they encompass service accounts, DevOps pipelines, cloud management interfaces, and any identity with elevated permissions over critical systems. When one of these accounts is compromised, the consequences extend far beyond a single machine or dataset. Attackers can move laterally, escalate privileges, and reach the deepest layers of an organization’s infrastructure, often without triggering immediate alerts.

The numbers behind this threat are stark. According to research cited by Keeper Security, the global average cost of a data breach in 2024 reached $4.88 million, a figure that reflects not only the technical remediation but also legal fees, regulatory fines, and lasting reputational damage. The 2024 AT&T breach, which exposed data from more than 65 million former customer accounts, stands as a recent and instructive example of what happens when privileged access is left poorly managed on third-party cloud environments.
These incidents rarely originate from sophisticated zero-day exploits. In the majority of cases, the initial vector is a stolen or misused credential. Attackers rely on phishing, credential stuffing, or exploiting improperly decommissioned accounts to gain a foothold. Once inside, the presence of excessive or poorly monitored privileges allows them to escalate quickly and operate undetected for extended periods. The longer an attacker maintains access, the more costly the breach becomes, both in terms of direct financial impact and long-term erosion of customer trust.
Privileged Access Management (PAM) addresses this problem by establishing structured controls over who can access sensitive systems, under what conditions, and for how long. But PAM alone is no longer sufficient. The modern threat landscape demands that it be combined with the principles of Zero Trust, a security model built on the premise that no user, device, or network segment should ever be trusted by default.
Legacy PAM implementations were designed for a different era: perimeter-based networks where internal users were generally considered trustworthy, and where administrative access was granted on a semi-permanent basis to a small number of system administrators. That model has not survived contact with cloud adoption, remote work, and the explosion of non-human identities in modern IT environments.
One of the most persistent weaknesses in traditional PAM is the concept of standing privileges: accounts that retain elevated permissions continuously, regardless of whether those permissions are actively needed. This approach dramatically widens the attack surface. A compromised account with standing admin rights is immediately dangerous, while a credential with no active privileges at the moment of breach offers an attacker far less leverage.
The problem becomes even more acute in hybrid and multi-cloud environments, where privileged accounts often span multiple platforms with different security models and management interfaces. An administrator who holds standing privileges across AWS, Azure, and on-premises Active Directory presents a single point of failure that, if compromised, grants an attacker access to the organization’s entire technology stack. Without centralized visibility and policy enforcement, security teams are forced to manage these risks in silos, inevitably leaving gaps.
Shadow IT compounds the problem further. Many organizations simply do not have a complete inventory of their privileged accounts. Shared credentials, dormant service accounts, and unmonitored automation pipelines create blind spots that security teams cannot defend. As Splashtop’s analysis of PAM challenges highlights, the lack of automated discovery and continuous classification of privileged accounts is one of the most common and dangerous gaps organizations face today.
The Cloud Security Alliance defines Zero Trust PAM around three foundational assumptions: breaches will occur, trust must be continuously verified, and authentication must be adaptive and context-aware. These principles transform PAM from a static gatekeeping function into a dynamic, risk-responsive framework.
At the operational level, this translates into several concrete practices. Just-in-Time (JIT) access replaces standing privileges with time-bound elevations that are provisioned only when a specific task requires them and revoked automatically once the task is complete. A DevOps engineer, for instance, might be granted temporary root access for a deployment window of thirty minutes, after which the privilege is automatically removed. This model, endorsed by the CISA Zero Trust Maturity Model, shrinks the window of opportunity for attackers to exploit compromised credentials.
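As a concrete illustration of time-bound elevation, the sketch below requests temporary credentials for a deployment role via AWS STS, valid for thirty minutes; the role ARN is a placeholder, and a production JIT workflow would wrap this call in approval and logging steps.
import boto3

def request_deploy_window(role_arn: str = "arn:aws:iam::123456789012:role/deploy-role"):
    sts = boto3.client("sts")
    response = sts.assume_role(
        RoleArn=role_arn,
        RoleSessionName="jit-deploy-window",
        DurationSeconds=1800,  # 30 minutes; the credentials expire automatically
    )
    # Returns AccessKeyId, SecretAccessKey, SessionToken and an Expiration timestamp
    return response["Credentials"]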
Multi-Factor Authentication, particularly phishing-resistant implementations such as FIDO2 hardware tokens and passkeys, adds another layer of defense. Combined with behavior-based anomaly detection, which can flag an administrator logging in from an unrecognized geolocation or accessing systems outside of business hours, adaptive authentication ensures that the verification process is not a one-time event at login but a continuous assessment throughout each session.
Micro-segmentation reinforces these access controls at the network level. By dividing the infrastructure into isolated security zones, each with its own access policies, organizations can contain the impact of a compromised privileged account. Even if an attacker gains elevated access to one segment, micro-segmentation prevents unrestricted lateral movement to other parts of the network. When combined with JIT provisioning and continuous authentication, micro-segmentation creates a layered defense architecture where each layer independently limits the attacker’s reach.
An often overlooked component of Zero Trust PAM is the principle of least privilege by design. Rather than granting broad access and later attempting to restrict it, organizations should define the absolute minimum set of permissions required for each role and function from the outset. This inversion of the default, from “allow unless denied” to “deny unless explicitly allowed”, fundamentally changes the security posture of the organization and reduces the blast radius of any single compromised identity.
PAM does not operate effectively in isolation. Its value multiplies when integrated with Identity Governance and Administration (IGA) systems, Access Management platforms, and Security Information and Event Management (SIEM) tools. This integration creates a unified audit trail across all user activities, privileged and otherwise, enabling security teams to correlate events, detect lateral movement, and respond to incidents with the context they actually need.
The treatment of non-human identities is a particularly critical and often underestimated dimension of this ecosystem. Service accounts, API keys, machine-to-machine tokens, and automated pipelines frequently carry elevated permissions and are rarely subject to the same scrutiny as human users. In many organizations, non-human identities outnumber human users by a factor of ten or more, yet they receive a fraction of the security attention. An attacker who compromises a service account can blend into legitimate traffic patterns, moving through cloud environments and on-premises networks with minimal detection risk.
Applying Zero Trust principles to these identities requires a dedicated strategy. Credentials for service accounts and API integrations should be rotated automatically on a short lifecycle, ideally through a secrets management platform that eliminates the need for hard-coded credentials in source code or configuration files. Each non-human identity should be scoped to the minimum set of permissions required for its function, and its activity should be monitored continuously for deviations from established baselines. Organizations that treat non-human identities as second-class citizens in their PAM strategy are leaving one of their largest attack surfaces effectively unguarded.
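As one concrete example of short-lifecycle rotation, the sketch below enables automatic rotation for a credential stored in AWS Secrets Manager; the secret name and rotation Lambda ARN are placeholders, and equivalent mechanisms exist in other secrets management platforms.
import boto3

def enable_rotation(secret_id: str = "prod/service-account/db-password",
                    rotation_lambda_arn: str = "arn:aws:lambda:eu-west-1:123456789012:function:rotate-db-secret") -> None:
    client = boto3.client("secretsmanager")
    client.rotate_secret(
        SecretId=secret_id,
        RotationLambdaARN=rotation_lambda_arn,
        RotationRules={"AutomaticallyAfterDays": 30},  # short, fixed credential lifetime
    )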
Insider threats require their own layer of attention. The danger does not come only from malicious actors; negligence and misconfiguration account for a significant share of privilege-related incidents. Duo Security’s research on PAM risks emphasizes that even well-designed PAM strategies can introduce new vulnerabilities if they are inconsistently monitored or poorly maintained, underscoring the need for continuous oversight rather than periodic audits.
Translating Zero Trust principles into a functioning PAM implementation requires a structured approach. The starting point is always a comprehensive audit of the existing identity landscape: every privileged account, human or automated, must be discovered, classified, and assessed against the principle of least privilege. Accounts that retain more access than their function requires should be immediately remediated.
From there, organizations should prioritize the implementation of Role-Based Access Control (RBAC) combined with JIT provisioning workflows. Credential vaulting, where privileged passwords and keys are stored in an encrypted, centrally managed repository rather than shared informally or stored in configuration files, eliminates one of the most common vectors for credential theft. Session recording provides forensic value after incidents and serves as a behavioral deterrent during normal operations.
A robust PAM implementation must also account for the full lifecycle of privileged accounts. This includes automated provisioning and deprovisioning tied to HR and organizational events, so that when an employee changes roles or leaves the organization, their privileged access is adjusted or revoked immediately. Too often, accounts persist long after the business justification for their privileges has expired, creating a growing inventory of dormant credentials that attackers can exploit.
Equally important is the investment in security culture and training. Technical controls are only as effective as the people who interact with them. Privileged users should receive targeted training on recognizing phishing attempts, handling credentials securely, and understanding the rationale behind access restrictions. An organization that deploys sophisticated PAM tooling but neglects to educate its administrators risks undermining its own defenses through human error.
Compliance considerations add urgency to this work. Frameworks such as NIST SP 800-207, ISO 27001, and regulatory standards like GDPR, NIS2, and PCI DSS all require demonstrable controls over privileged access. Automated audit logging, which PAM platforms can generate natively and forward to SIEM systems, directly supports compliance reporting and reduces the manual burden on security and legal teams.
The convergence of PAM and Zero Trust is not a one-time project with a defined endpoint. It is an ongoing operational discipline that must evolve in response to new threats, new technologies, and changes in organizational structure. As cloud-native architectures, containerized workloads, and AI-driven automation continue to reshape enterprise IT, the definition of what constitutes a “privileged account” will keep expanding, and so must the controls that govern it.
Organizations that treat PAM and Zero Trust as a living practice, continuously auditing their identity landscape, adapting policies to emerging risks, and investing in both technology and people, will find themselves significantly better positioned against attackers and in front of regulators. Those that treat it as a checkbox exercise will inevitably discover, often at the worst possible moment, that static defenses cannot withstand a dynamic threat environment.
The timing is significant. As geopolitical tensions continue to shape the digital threat landscape, and as regulations such as NIS2 and DORA push EU organizations toward more structured approaches to cyber risk management, the need for a coherent, institution-wide intelligence model has never been more pressing. The CERT-EU framework responds to this need by introducing structured concepts, consistent scoring, and clearly defined threat taxonomies, all calibrated to the specific context of Union entities.
At its core, the Cyber Threat Intelligence Framework defines the analytical and operational standards that CERT-EU uses across its publications, from individual Cyber Briefs to the annual Threat Landscape Report. The fundamental challenge it addresses is deceptively simple: intelligence is only useful if the people producing it and the people acting on it share the same understanding of what terms mean, how severity is measured, and which threats deserve immediate attention versus those requiring only monitoring.
By codifying definitions, scoring criteria, and classification hierarchies, the framework transforms what might otherwise be a subjective, analyst-dependent process into a repeatable and consistent methodology. Primary Operational Contacts (POCs) and Local Cybersecurity Officers (LCOs) at Union entities can now receive CERT-EU alerts knowing that the underlying assessments have been produced according to a transparent, documented standard, rather than relying on implicit institutional knowledge.
The framework is also conceived as a key enabler of what CERT-EU calls its Full-Spectrum Adversary Approach, an internal model for threat-informed defence that supports holistic modelling of threats across both strategic and technical dimensions. By making this approach explicit and reproducible, the framework strengthens situational awareness and ensures that observations translate into structured data capable of driving faster, more coherent operational responses.
One of the framework’s most significant conceptual contributions is the introduction of the Malicious Activity of Interest (MAI) as the central analytical unit. Rather than focusing exclusively on confirmed incidents, an MAI encompasses a broader range of adversary behaviours: confirmed compromises, but also suspicious intrusion attempts, adversarial infrastructure development, and targeted reconnaissance. This expanded scope is deliberate, acknowledging that in modern threat environments, the early stages of an attack cycle carry intelligence value that should not be discarded before a formal incident has been confirmed.
Equally important is the framework’s ecosystem model. CERT-EU does not limit its analytical lens to direct attacks on EU institutions. Instead, it considers the broader environment in which Union entities operate: the countries in which they are active, the sectors they belong to, the software and services they rely on, and the supply chains that underpin their operations. This perspective reflects a crucial insight: a threat does not need to directly target an institution to be operationally relevant. A compromised supplier, a widely exploited vulnerability in commercial software used across EU bodies, or a campaign targeting a sector adjacent to Union entities can all carry systemic implications.
The ecosystem model translates into a more nuanced approach to threat relevance. When CERT-EU analysts assess an MAI, they consider not only whether Union entities are directly targeted, but also how many elements of the ecosystem are affected, and how those effects might cascade. A threat actor whose activity spans multiple ecosystem components will be rated more severely than one whose activity is isolated to a single, peripheral element, even if neither has yet caused a confirmed incident at a Union entity.
The framework introduces two structured scales designed to support consistent prioritization. The threat level scale assesses the criticality and proximity of malicious cyber activity in relation to Union entities: a “High” rating indicates an immediate threat requiring urgent verification and action, “Medium” signals a close threat warranting careful monitoring, and “Low” describes distant or indirect threats with no immediately identified link to Union entities. These levels are applied particularly in the Threat Alerts that CERT-EU provides to its constituents, guiding the urgency and scope of recommended mitigations.
Alongside this, a threat actor level scale classifies adversaries based on their observed behaviour during a defined period of interest. A “Critical” actor is one that has caused at least one significant incident directly affecting Union entities; a “High” actor has been responsible for a qualifying MAI that did not reach the threshold of a significant incident; “Medium” and “Low” actors are distinguished by the breadth of ecosystem elements their activity has touched. This granularity allows decision-makers to contextualize alerts within a broader picture of adversary behaviour over time, rather than reacting to isolated events without context.
Complementing these scales, the framework defines a scoring mechanism for both adversaries and mitigations. The threat score is driven by five components: occurrences, targeting, severity, time period, and a decay factor that progressively reduces the weight of older activity. The mitigation scoring draws on a formula that incorporates the coverage of adversary techniques by available controls, the number of initial access vectors addressed, and alignment with the Essential Eight baseline practices, providing a quantitative basis for defensive planning and resource allocation that goes well beyond intuition-based prioritization.
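To make the decay idea concrete, here is a purely illustrative sketch of how older activity can be progressively down-weighted in a composite score. This is not the formula published in the CERT-EU framework: the components, weights, and half-life are invented for the example.
from datetime import date

HALF_LIFE_DAYS = 90  # invented: an observation loses half its weight every 90 days

def decayed_weight(observed: date, today: date) -> float:
    age_days = (today - observed).days
    return 0.5 ** (age_days / HALF_LIFE_DAYS)

def illustrative_threat_score(observations: list[dict], today: date) -> float:
    # Each observation carries an invented severity (1-5) and targeting factor (0-1)
    return sum(obs["severity"] * obs["targeting"] * decayed_weight(obs["date"], today)
               for obs in observations)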
A defining characteristic of the CERT-EU framework is its deliberate integration of established international standards rather than the development of new parallel ones. For the classification of adversary tactics, techniques, and procedures (TTPs), the framework adopts the MITRE ATT&CK knowledge base, a widely used, behaviour-based taxonomy that links observable adversary actions to known techniques, making threat-hunting and prioritized mitigation systematic and repeatable for analysts and defenders alike.
For the assessment of source reliability and information credibility, the framework employs the Admiralty Code, a NATO-standard system that evaluates these two dimensions independently. Source reliability is rated from A (completely reliable) to F (reliability cannot be judged), while information credibility runs from 1 (confirmed by multiple sources) to 6 (cannot be judged). Crucially, CERT-EU only uses intelligence that meets a specific threshold (A1, A2, B1, or B2 combinations), ensuring that CTI products are grounded in information from sources with a demonstrated track record and with sufficient corroboration or plausibility.
On the question of attribution, the framework adopts a strictly technical stance. CERT-EU does not attribute activity to states or organizations, focusing instead on identifying threat actors through observable technical indicators such as TTPs, infrastructure overlaps, malware artefacts, and targeting patterns. When attribution to a known threat actor proves impossible, the framework designates an Unattributed Threat Actor (UTA) with a numeric suffix (for example, UTA-53), which can later be merged with a known actor or another UTA as additional evidence emerges. This approach, consistent with the best practices promoted by the FIRST community for CTI reporting, ensures that attribution claims remain defensible, evidence-based, and revisable as the analytical picture develops.
CERT-EU has explicitly designed the framework as a dynamic document rather than a static reference. The threat environment changes constantly: new geopolitical pressures emerge, technologies evolve, and regulatory frameworks are updated. The Cyber Threat Intelligence Framework is intended to evolve in step with these shifts, and the organization has published it under TLP:CLEAR precisely to invite feedback from peers and cybersecurity professionals across the broader community. This openness to external input is itself a statement of intent: effective threat intelligence is a collective endeavour, not a closed institutional exercise.
The implications of the framework extend well beyond the walls of EU institutions. National administrations, public bodies, and private organizations that work in cooperation with Union entities, including those already engaged in information-sharing initiatives coordinated through ENISA, now have a shared reference point for aligning their own intelligence processes. Not by replacing their existing frameworks, but by adopting compatible terminology, confidence scales, and scoring approaches that enable genuine interoperability. In a landscape where cyber threats routinely cross organizational and national boundaries, this kind of methodological alignment is a prerequisite for effective collective defence and for the shared situational awareness that complex, interconnected environments increasingly demand.