The post AI agents building security tests – architecture and prompts appeared first on Labs Detectify.
There’s a lot of hype surrounding “AI hacking”. The headlines are full of FUD (Fear, Uncertainty, and Doubt) about autonomous agents breaking into systems. But what’s the reality? Is it truly about LLMs doing the hacking, or is there a more strategic, powerful use for them?
At the same time, the volume of new vulnerabilities is exploding, with over 40,000 new CVEs published in 2024 and an even faster pace in 2025, reaching over 21,500 by June. This continuous surge amounts to an average of 133 new vulnerabilities every day.
Now, imagine using AI agents for a more scalable purpose: automating the weaponization of security vulnerabilities.
To turn this into a reality, we decided to focus on building a system with two core principles:
Our AI Security Researcher, Alfred, is a workflow based on a 10-step process, implemented in Go as a chain of agents backed by OpenAI’s mini models. Alfred takes a vulnerability from a simple data point to a fully functional merge request for a security test. Let’s see how:
Alfred continuously sources vulnerabilities from over 200 sources, including CERTs (like CERT-EU and CERT-SE), public vendor advisories (like Acunetix and Rapid7), and news sites and communities (like Reddit and HackerNews). This creates a broad pool of potential threats, providing a much wider range of vulnerabilities compared to relying solely on the NVD, which has a significant backlog and is not as up-to-date.

Once a vulnerability is identified, Alfred gets all supporting references. This includes scouring GitHub commits, vendor advisories, and even social media mentions to collect every piece of technical information available.

We don’t process everything at once. To ensure we’re focusing on the most critical threats, Alfred sorts all vulnerabilities by their Exploit Prediction Scoring System (EPSS) score. EPSS is a data-driven framework that provides a daily estimate of the probability of a vulnerability being exploited in the next 30 days. This allows us to prioritize what matters most—vulnerabilities that are likely to be weaponized in the wild.
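In code, this prioritization step boils down to a descending sort on the EPSS score. A minimal Go sketch (the `Vuln` struct and its field names are our assumptions, not Alfred’s actual types):

```go
package main

import (
	"fmt"
	"sort"
)

// Vuln is a hypothetical record type; Alfred's real data model is not
// public, so the field names here are illustrative.
type Vuln struct {
	CVE  string
	EPSS float64 // estimated probability of exploitation within 30 days
}

// sortByEPSS orders vulnerabilities so the ones most likely to be
// exploited in the wild come first.
func sortByEPSS(vulns []Vuln) {
	sort.Slice(vulns, func(i, j int) bool {
		return vulns[i].EPSS > vulns[j].EPSS
	})
}

func main() {
	vulns := []Vuln{
		{CVE: "CVE-2025-0001", EPSS: 0.02},
		{CVE: "CVE-2025-0002", EPSS: 0.91},
		{CVE: "CVE-2025-0003", EPSS: 0.40},
	}
	sortByEPSS(vulns)
	fmt.Println(vulns[0].CVE) // prints CVE-2025-0002
}
```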
Alfred fetches all content from all URLs and has an LLM group the content into categories. The LLM uses critical rules to categorize content as a “poc” if executable exploit code is present, or other descriptive categories like “advisory,” “remediation,” or “analysis”.
Categorize this security content related to %s using your best judgment.
CRITICAL RULES:
- You MUST use "poc" if and ONLY if executable exploit code is present with sufficient detail to reproduce the exploit
- For all other content, choose a descriptive category that best represents the content (e.g., "advisory", "remediation", "analysis", "detection", "discussion", etc.)
- Choose a single category that most accurately describes the primary nature of the content
- Be specific and descriptive with your chosen category
- Create a concise title (5-10 words) that accurately summarizes the document's content and its type (e.g., "WordPress RCE Exploit Code" or "Apache Advisory for CVE-2024-1234")
IMPORTANT: "poc" has a strict definition - it MUST contain actual code or commands that could be executed to exploit the vulnerability.
Your response must be a single JSON object with two properties: {"category": "category_name", "title": "Your concise document title"}
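Because the prompt demands a single JSON object with exactly two properties, the Go side only has to decode and sanity-check the reply. A minimal sketch (the function and type names are ours, not Alfred’s actual code):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// Categorization mirrors the two-property JSON object the prompt demands.
type Categorization struct {
	Category string `json:"category"`
	Title    string `json:"title"`
}

// parseCategorization decodes the LLM reply and normalizes the
// category so the strict "poc" check downstream is case-insensitive.
func parseCategorization(raw string) (Categorization, error) {
	var c Categorization
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		return c, fmt.Errorf("LLM reply is not valid JSON: %w", err)
	}
	c.Category = strings.ToLower(strings.TrimSpace(c.Category))
	if c.Category == "" || c.Title == "" {
		return c, fmt.Errorf("missing category or title")
	}
	return c, nil
}

func main() {
	c, err := parseCategorization(`{"category": "POC", "title": "WordPress RCE Exploit Code"}`)
	if err != nil {
		panic(err)
	}
	fmt.Println(c.Category) // prints poc
}
```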
An LLM will learn the exploit and take notes on how it works. Alfred’s task is to analyze content and extract all technical information necessary to understand and potentially reproduce the vulnerability. The analysis is based strictly on the provided content, without adding information from its own knowledge or assumptions. These notes are a precise, exhaustive documentation of the attack vector, prerequisites, and every technical detail needed for reproduction.
Your task is to analyze this content related to vulnerability and extract ALL technical information necessary to understand and potentially reproduce the vulnerability.
IMPORTANT: Base your analysis STRICTLY on the content provided. Do not add information from your own knowledge or assumptions.
Document EXHAUSTIVELY:
- The complete attack vector and exploitation methodology
- ALL technical details about how the vulnerability works
- EVERY prerequisite and environmental requirement
- ALL steps in the exploitation process
- EXACT specifications of any unusual formatting or techniques
- FULL details on target behavior during and after exploitation
(Prompt cut to fit text)
For ANY code, commands, or HTTP requests:
- Include them COMPLETELY and EXACTLY as presented
- Preserve ALL syntax, formatting, and structure
- Document ALL parameters, flags, and options
- Note ALL external dependencies or tools required
REMEMBER: These notes will become your ONLY reference for future analysis of this vulnerability. You will never see this content again, so be exhaustive, precise, and avoid omitting ANY technical details.
Alfred acts as a security analyst to triage how feasible a vulnerability is for implementation. It evaluates the previously documented notes and answers a series of true/false questions based only on the technical details provided. Questions include whether the vulnerability is exploitable, relies on HTTP/HTTPS, requires authentication, or is intrusive.
Your objective is to evaluate and triage notes of a security vulnerability that you previously have been documenting. Base your analysis strictly on the technical details provided in the vulnerability description, without making assumptions about typical exploitation patterns.
IMPORTANT: Your task is to carefully analyze the provided vulnerability information and answer each question with true or false. Accuracy is critical as your responses will be used in an automated system. Your goal is to answer the following questions (pay attention to the quoted prefix to the questions):
"exploitable": Set to false if the provided technical information…
"http": Is this vulnerability carried out over HTTP/HTTPS protocols (including HTTP/2, HTTP/3)
"authenticated": Does this vulnerability require any form of authentication
"multistep": Does this vulnerability require requests to be executed sequentially with dependencies
"time_based": Does the vulnerability detection rely on specific timing intervals, including time-based blind injections
"pingback": Does exploitation require the vulnerable system to initiate a connection back to attacker-controlled infrastructure (including HTTP, DNS, SMTP, LDAP, or internal network callbacks)?
"fingerprint": Does the implementation rely on passive reconnaissance
"manual_configuration": Does successful exploitation require prior knowledge of specific values
"intrusive": Does this vulnerability test include payloads that could cause permanent damage or disruption to the target system? Examples include: Deleting files or data (rm, DROP TABLE) without recovery; Modifying critical system files or configurations that could prevent normal operation
(Prompt cut to fit text)
Alfred selects good candidates for implementation based on a ranking system. The system adds bias for vulnerabilities with proof-of-concepts, newer CVEs, higher EPSS and CVSS scores, and more relevant sources to prioritize the most relevant vulnerabilities.
Preliminary filtering: only act on unauthenticated and network-based (Internet-facing) vulnerabilities. This preliminaryFiltering-repository instance is presorted on EPSS in descending order. Rank all vulnerabilities based on the rules below; higher scores mean that vulnerabilities are more relevant and will be acted upon first.
// Add bias for vulnerabilities with proof-of-concepts
// Add bias and prioritize newer CVEs
// Add the source count
// Add bias for EPSS percentile
// Add bias for CVSSv3 scores
// Add bias for CVSSv2 scores
// Add bias towards more relevant sources
// Add bias for recent sources, so that recent mentions in news-sites, CERTs, etc. are prioritized
// Add slight bias on "source mentions" seen during the past three months
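The bias rules read naturally as a weighted scoring function. A Go sketch with made-up weights (Alfred’s real weights are not public, and the CVSSv2 and source-relevance biases are omitted for brevity):

```go
package main

import "fmt"

// Candidate bundles the signals named in the ranking rules. All
// weights below are invented for illustration only.
type Candidate struct {
	HasPoC         bool
	YearsOld       float64 // age of the CVE in years
	SourceCount    int
	EPSSPercentile float64 // 0..1
	CVSSv3         float64 // 0..10
	RecentSources  int     // mentions in the past three months
}

// rank folds the bias rules into one score; higher means the
// vulnerability is more relevant and acted upon first.
func rank(c Candidate) float64 {
	score := 0.0
	if c.HasPoC {
		score += 50 // bias for vulnerabilities with proof-of-concepts
	}
	score += 20 / (1 + c.YearsOld)        // bias toward newer CVEs
	score += float64(c.SourceCount)       // add the source count
	score += 30 * c.EPSSPercentile        // bias for EPSS percentile
	score += 2 * c.CVSSv3                 // bias for CVSSv3 scores
	score += 5 * float64(c.RecentSources) // bias for recent mentions
	return score
}

func main() {
	withPoC := Candidate{HasPoC: true, EPSSPercentile: 0.9, CVSSv3: 9.8, SourceCount: 4}
	noPoC := Candidate{EPSSPercentile: 0.9, CVSSv3: 9.8, SourceCount: 4}
	fmt.Println(rank(withPoC) > rank(noPoC)) // prints true
}
```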
The next step is development, which happens through rapid iterations until “it works”. Alfred’s goal is to port its technical notes into a standardized JSON specification for a Detectify test module. A computer will parse this output, so exact adherence to the schema is critical. A key requirement is to always use concrete, executable payloads—never placeholders. For command injection, for example, Alfred must use actual commands that work across both Windows and Unix/Linux systems.

Your goal is to port security vulnerability notes to a standardized Unicorn Module JSON specification. This specification describes the format for HTTP requests and assertions to test for specific vulnerabilities.
INPUT: You will receive unstructured notes about a security vulnerability.
OUTPUT: A computer will parse your response, so exact adherence to the schema is critical.
REQUIRED INFORMATION: At minimum, your output must include:
- Valid type and version fields
- Appropriate labels including the CVE identifier (if available)
- At least one request and response signature
- Properly formatted finding metadata
PAYLOAD IMPLEMENTATION REQUIREMENTS:
- Always use concrete, executable payloads - NEVER use template variables like {{command}} or similar placeholders
- For command injection vulnerabilities, include actual commands, not a placeholder
- DO NOT include a 'Host' header in your request modifiers - the system automatically handles this
(Prompt cut to fit text)
CROSS-OS COMPATIBILITY REQUIREMENTS:
- When crafting command injection payloads, use commands or techniques that work across both Windows and Unix/Linux systems
(Prompt cut to fit text)
Here are common errors to avoid:
* When using request modifiers, the HTTP method is specified as a string, not as a JSON array
* When providing the CVSS, the type attribute must be \"cvss\" in lowercase
(Prompt cut to fit text)
(70+ rows with prompts and instructions)
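Pulling the requirements above together, a generated module might look roughly like this. The Unicorn Module schema is internal to Detectify, so every field name below is a guess based on the listed requirements; only the constraints themselves (concrete payload, lowercase "cvss" type, no Host header) come from the prompt:

```json
{
  "type": "unicorn-module",
  "version": "1.0",
  "labels": ["CVE-2024-1234"],
  "requests": [
    {
      "method": "GET",
      "path": "/index.php?page=../../../../etc/passwd"
    }
  ],
  "response_signatures": [
    { "type": "regex", "pattern": "root:.*:0:0:" }
  ],
  "finding": {
    "title": "Example Path Traversal (CVE-2024-1234)",
    "score": { "type": "cvss", "value": 7.5 }
  }
}
```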
Once the module is ready, Alfred opens a merge request in GitLab. This allows our internal team of security researchers to review the generated test and ensure it meets our high-quality standards.

The final step is to fix smaller issues, such as reference title formatting or regex assertions that need extending, and prepare the test for production.
So, what was the outcome of Alfred’s first six months in operation? The results speak for themselves:
Alfred exemplifies how AI agents can be powerful tools for security defenders: it significantly accelerates security research by automating the tedious tasks of sourcing, triaging, and test development. This innovation allows our internal security researchers and Crowdsource community of ethical hackers more time to concentrate on what they do best: discovering complex, high-impact vulnerabilities that demand a creative human touch.
For Detectify customers, this means they get access to vulnerability assessments for relevant CVEs faster than ever before. For us, Alfred is a big help in making the internet a more secure place, one automated test at a time.
The post 2024 Detectify Crowdsource Awards: Meet the Winners appeared first on Labs Detectify.
Our awards are based on several factors that highlight the diverse contributions of our hackers. This year, we’re looking at quality of submissions by considering the validity ratio, the highest number of submissions for high and critical severities, and the most value generated from submissions made in 2024.
Let’s roll out the red carpet:

This award recognizes the hacker with the highest percentage of valid submissions. With an impressive 65% validity ratio, the winner is Geek Freak! Our internal research team truly admires his consistent accuracy and attention to detail.

While all submissions are crucial to help us protect our customers’ attack surfaces, this particular award honors the hacker who submitted the highest number of valid High and Critical submissions as rated by our internal security research team. Congratulations, once again, to Geek Freak for an outstanding 49 valid High and Critical submissions!

This award goes to the hacker whose submission during the year 2024 resulted in the most “hits” (unique times one specific vulnerability has been found in customers’ systems). With a remarkable 71 hits, the winner is cswiers! His work has already helped many customers, with many more to come.

This special award is voted on by Detectify’s internal security researchers. It recognizes a Crowdsource member who has demonstrated exceptional work, high-quality submissions and great collaboration during 2024. This year, the Team Trophy goes to Joshua!
We want to express our deepest gratitude to every member of our Crowdsource community. Your hard work, dedication, and passion for ethical hacking are what make this program successful. The Detectify Crowdsource Awards are just a small way of showing our appreciation for the role you play in securing thousands of attack surfaces.
Inspired by our winners? Learn more about the Detectify Crowdsource program and how you can join our team of ethical hackers.
The post Reducing the attack surface in AWS appeared first on Labs Detectify.
Service control policies (SCPs) control what can and cannot be done in one or more AWS accounts, regardless of what an IAM policy might say (even if you’re root). SCPs can be applied to accounts or organizational units (OUs). By default, everything is allowed, and it’s up to us to deny what we don’t need. SCPs are a defense in depth: they will never trigger if IAM permissions are properly configured, but relying on perfect IAM configuration is not a luxury we can afford to count on. For more information, see the official AWS documentation on SCPs.
Preferably, we would deny any action we’re not using, but that’s most likely an administrative hell, and we’ll end up blocking developers daily, so we decided against even trying it. At Detectify, we’ve gone for a more pragmatic approach:
This has been a good balance between reducing things we need to worry about and keeping developers productive.
The easiest way to find which services you use is by generating an Organization activity report (a.k.a. an organizations access report). This report shows when services were last used, if at all.
The report can be generated based on an OU or a specific account. In our case, we started with generating a report for our Production OU. To generate such a report, we can use AWS CLI:
aws iam generate-organizations-access-report \
  --entity-path "o-XXXXXXXXXX/r-XXXX/ou-XXXX-XXXXXXXX"
Where the entity-path is the AWS Organizations entity path to our Production OU.
Once the report is generated we can get the list of used services with:
aws iam get-organizations-access-report \
  --job-id "JobId-goes-here" \
  --max-items 1000 \
  --query 'AccessDetails[?TotalAuthenticatedEntities > `0`].[ServiceName,ServiceNamespace,LastAuthenticatedTime]'
This gives a list of all services that have been used in the tracking period. It shows the service name, the IAM namespace, and when it was last accessed.
We manually went through this list and filtered out obvious entries that were most likely just someone browsing a specific service in the AWS web console and very old activity.
With the list of used services, we created a policy that denied anything not on the list:
data "aws_iam_policy_document" "allow_only_used_services" {
  statement {
    sid       = "AllowOnlyUsedServices"
    effect    = "Deny"
    resources = ["*"]
    not_actions = [
      <LIST OF NAMESPACES OF USED SERVICES>
    ]
  }
}
This policy denies access to any service not listed in not_actions. We seldom adopt new AWS services, so this method rarely blocks development, while still ensuring a proper process for evaluating new services.
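For reference, the JSON this Terraform document renders has the standard SCP shape: a single Deny statement whose NotAction list exempts the used service namespaces (the three namespaces below are placeholders, not our actual list):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowOnlyUsedServices",
      "Effect": "Deny",
      "NotAction": [
        "cloudwatch:*",
        "ec2:*",
        "s3:*"
      ],
      "Resource": "*"
    }
  ]
}
```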
We only use a few AWS regions, and there is no reason to allow usage of any region outside of that. At first glance, it might seem obvious how one can use SCP to block unused regions, but because some services are global, you cannot fully restrict access to only certain regions.
As described in Detectify’s journey to an AWS multi-account strategy, we use Control Tower. In Control Tower you can configure which regions to allow or deny, which is what we did.
While only allowing specific actions would be too much work, blocking the risky ones is a lot easier.
Exactly which SCPs to apply depends on the organization, but they tend to be far more general than IAM policies, which means they can be shared between organizations. Some places to find SCPs:
There are lots of good resources out there. If you search for aws scp site:github.com you’ll find plenty of examples. We went through many different SCPs for inspiration and picked the (for us) most relevant and least error-prone parts we could find.
Some actions can be very expensive. We have opted to block the most expensive ones we know of. Ian Mckay has a gist with some expensive actions you might want to block to avoid costly mistakes:
data "aws_iam_policy_document" "deny_costly_actions" {
  statement {
    sid       = "DenyCostlyActions"
    effect    = "Deny"
    resources = ["*"]
    actions = [
      "acm-pca:CreateCertificateAuthority",
      "aws-marketplace:AcceptAgreementApprovalRequest",
      "aws-marketplace:Subscribe",
      "backup:PutBackupVaultLockConfiguration",
      "bedrock:CreateProvisionedModelThroughput",
      "bedrock:UpdateProvisionedModelThroughput",
      "dynamodb:PurchaseReservedCapacityOfferings",
      "ec2:ModifyReservedInstances",
      "ec2:PurchaseHostReservation",
      "ec2:PurchaseReservedInstancesOffering",
      "ec2:PurchaseScheduledInstances",
      "elasticache:PurchaseReservedCacheNodesOffering",
      "es:PurchaseReservedElasticsearchInstanceOffering",
      "es:PurchaseReservedInstanceOffering",
      "glacier:CompleteVaultLock",
      "glacier:InitiateVaultLock",
      "outposts:CreateOutpost",
      "rds:PurchaseReservedDBInstancesOffering",
      "redshift:PurchaseReservedNodeOffering",
      "route53domains:RegisterDomain",
      "route53domains:RenewDomain",
      "route53domains:TransferDomain",
      "s3-object-lambda:PutObjectLegalHold",
      "s3-object-lambda:PutObjectRetention",
      "s3:BypassGovernanceRetention",
      "s3:PutBucketObjectLockConfiguration",
      "s3:PutObjectLegalHold",
      "s3:PutObjectRetention",
      "savingsplans:CreateSavingsPlan",
      "shield:CreateSubscription",
      "snowball:CreateCluster",
    ]
  }
}
Some of these actions you might want to be careful with. For example, denying route53domains:RenewDomain could cause problems if it’s applied to the OU or account that manages domains.
If you are, like us, using Control Tower, there are a few dozen SCPs you can enable and let Control Tower manage. In Control Tower you can filter the view to only show SCPs:
Some of the listed SCPs are enabled by default, but plenty of opt-ins exist.
SCPs can be a maximum of 5,120 bytes, including whitespace. We ran into this limit because aws_iam_policy_document does not generate minimized JSON by default.
There are a few ways to work around this:
We went with the last option. There is no built-in function to minimize JSON in Terraform, but you can minimize it by running jsonencode(jsondecode()), so something like:
resource "aws_organizations_policy" "example" {
  name    = "example"
  type    = "SERVICE_CONTROL_POLICY"
  content = jsonencode(jsondecode(data.aws_iam_policy_document.example.json))
}
This allows us to have our policies written in Terraform (which is more readable, allows comments, gives linting, etc.) while still minimizing the number of bytes used. If you browse the SCP via the AWS console, it won’t be minimized, so it’s still readable there too.
There is no dry-run for SCPs which makes any change to them a bit scary since you cannot know if something will break or not.
To reduce the risk of breaking anything important, we first apply our SCPs to our Staging OU. Leaving the SCPs applied in staging for a while lets us see if they pass the scream test. We can also query CloudTrail via Athena to check for relevant SCP errors, for example:
SELECT * FROM cloudtrail_logs WHERE errorcode='AccessDenied' AND errormessage LIKE '%service control%';
So far we’ve been lucky enough to never encounter any issues, and the SCPs have long since been applied to production successfully!
SCPs allowed us to vastly reduce our attack surface and improve our defense in depth. Even though there is no dry run everything went smoothly and we now have much fewer things to keep in mind and worry about!
The post 2023 Detectify Crowdsource Awards: Meet the winners appeared first on Labs Detectify.
Let’s roll out the red carpet and meet our distinguished winners:

Firzen set the bar high in 2023, amassing a staggering 51,500 points and securing the top spot on our leaderboard.

Geek Freak demonstrated that quantity and quality can indeed go hand in hand. He exemplified consistency and impact in 2023, with an impressive tally of over 65 valid submissions throughout the year.

The title of Superiority Submitter goes to Tengeez in 2023. With an outstanding submission validity ratio of 71%, his work exemplified quality and set the benchmark for ethical hackers in our community.

Once again, Geek Freak made his mark by clinching the Serial Submitter award. His dedication to submitting at least one valid finding every month throughout 2023 shows a level of consistency and commitment that is rare and commendable.

In the realm of high-stakes vulnerabilities, Shaikhyaser has emerged victorious in 2023, securing the new Critical Submitter award. With over two-thirds of his submissions being high or critical-severity vulnerabilities, his contributions have been pivotal in enhancing the security landscape of Detectify customers.
Lastly, we’d like to thank the whole community. All members of Detectify Crowdsource deserve a big round of applause for their valuable contributions in 2023 and to making the internet safer.
Inspired by the accolades of our ethical hackers? We invite you to become part of this community. Apply to make the internet safer with us at Detectify Crowdsource and join a journey of discovery, challenge, and impact.
The post Enhancing the Detectify Crowdsource reward system with more continuous and lucrative payouts appeared first on Labs Detectify.
Detectify Crowdsource was launched in 2018 to democratize security research coming from ethical hackers, commonly bound to bug bounty programs that yielded one-time rewards. Our unique approach pioneered the automation of crowdsourced security research, and we’ve created a lucrative reward system where submitters are paid for the impact of their vulnerabilities across our customers’ assets.
Since launching our program, we have issued over USD 500,000 in rewards to our private community of ethical hackers.
On accepted submissions, Crowdsource community members would previously receive a fixed payout, determined by the severity of the vulnerability submitted, and a payout every time that one vulnerability was found in our customers’ systems (pay-per-hit).
From November 1, 2023, fixed payouts will be phased out and replaced by substantial enhancements to the pay-per-hit.
We’re introducing an update to promote higher-quality modules, quicker implementation, and to ensure fair and continuous rewards for our ethical hackers:
For example, with the new reward system, if you submit a critical severity module that obtains 100 unique hits, you will receive 20,000 USD (100 payouts of 200 USD).
Detectify Crowdsource consists of 400+ world-class ethical hackers that have generated over 250 million vulnerability findings across the attack surfaces of our 2000+ customers. This monumental achievement from our community is fueled by their submissions, knowledge, and dedication to making the Internet a safer place. No wonder we are proud of them!
Wondering how you can join our community of leading ethical hackers? Try out our signup challenge to see if you have the experience needed to join Detectify Crowdsource here.
The new payouts will only apply to those modules submitted from November 1, 2023.
In the Detectify CS platform, you can access the list of technologies and versions that have been fingerprinted in Detectify’s customers’ assets in the last 3 months. We’ve identified these technologies as being used by our customers to build their products. You can use this list as inspiration for which technologies are most commonly used by Detectify’s customers, making your submissions more likely to succeed.
Every time your submitted vulnerabilities are found in a unique customer application through the Detectify service, you will receive a payout-per-hit. The amount varies depending on the severity of your module.
Along with the payout-per-hit, you also receive points each time your submitted vulnerability is found in a unique customer asset. These points can help you climb our leaderboard. We offer awards for the users at the top of our leaderboard.
If you submit a critical or high severity 0-day vulnerability, you will receive a 0-day bonus, along with regular payouts for the module. You will receive the 0-day bonus once the module has gone live. Remember to mark your submission as a 0-day in the submission form, and then we will validate the vulnerability and start the 0-day process.
The post Q&A with a Crowdsource hacker: Sebastian Neef a.k.a. Gehaxelt appeared first on Labs Detectify.
In our recent Detectify Crowdsource Awards, Gehaxelt was the winner of the Fabulous Feedbacker award, which acknowledged his constant willingness to help, great attitude, and proactive activity in our internal channels.
Read on as Gehaxelt shares how he started out on his career path, some of his current go-to resources and tools, and valuable pieces of advice for fellow ethical hackers who are looking to further their skills.
Gehaxelt: When I was eight years old, I got my first computer. Back then, I was just playing around with lame, 2D computer games. It wasn’t until I was 14 that my father showed me how to build simple websites using plain HTML and CSS. I was thrilled and became motivated to learn more about this. At some point, it slowly evolved into writing automation bots for the games I was playing. One might not consider this to be hacking, but it helped me begin to think outside the box.
While finishing high school two years later, the infamous hacks by Anonymous were all over the mainstream media. It was at that time that I asked myself, “Is hacking (websites) really that easy, or are these hackers just really skilled?” To find out, I began to investigate and spent many evenings browsing the internet learning about web security and various hacking techniques.
And it turned out that hacking can be really easy if you know a few tricks. At least back then (around 2010 or so), many web frameworks, web developers, and sysadmins weren’t as security-aware as they are today, so classic vulnerabilities like SQL Injection, cross-site scripting (XSS), and others were effortlessly found.
I first came across responsible disclosure and bug bounty programs sometime around 2011/2012, which presented a great opportunity for me: Legally being able to test my skills and knowledge against real-world targets and not just some simulated hacking challenges. Certainly enough, I began to find issues that were worth reporting and placed me in a few halls of fame (including those of Google, PayPal, and Twitter). That was a great feeling and a few programs handed out swag or money in return, which was a nice motivational boost. Especially as a soon-to-be student, it was great to earn some extra money doing what I enjoyed.
When I was hacking during those years, I often imagined a “challenge” between the website’s developers and myself — in other words, can they write code that I won’t be able to hack? Can I be better than them? Many times, the answer to these questions was yes, and the resulting rush of adrenaline did the rest. 
“Responsible disclosure and bug bounty programs presented a great opportunity for me: Legally being able to test my skills and knowledge against real-world targets.”
However, doing it alone can be boring at times. Luckily enough, bug bounty communities were beginning to form on Twitter and various online chat groups, so using those channels, I began to exchange writeups, techniques, and ideas with others. Although it was tough competition, it had a positive feedback loop, and collaborating was fun, too.
In the end, we were helping companies to make their websites more secure and thus protect customer data from being stolen.
Gehaxelt: Unfortunately, not all websites that I frequented had a responsible disclosure or bug bounty program, so in 2012, I founded the project internetwache.org. The main idea behind the project was to see how less security-aware companies or website administrators would react to my vulnerability notifications. Plus, as I often used the service myself, I wanted to have my data secured from bad actors.
Since Germany has “hacking laws” prohibiting security testing without authorization — and I assumed that just sending out emails from “[email protected]” would quickly land me in jail — I needed to be more clear about my intentions. Thus, the project’s domain and website were intended to convey my good, ethical intentions. The site was set up to explain who I am in detail and the fact that with my testing, I’m simply trying to help companies improve their security posture: I test solely for vulnerability symptoms, not exploiting anything or pivoting into things I shouldn’t see.
From a legal perspective, the intent might not have changed anything in front of the court, but I was still a teenager and a bit naive, so I believed that nobody would sue someone trying to help them. But to reiterate: I never tried to exploit a vulnerability; instead, I just looked for the symptoms (i.e. an SQL error message/page behavior when changing parameters, broken HTML when entering some tags, etc.). Also, I ensured that the emails I sent always had a friendly tone and never asked for anything in return.
This appeared to have worked: The majority of people that I contacted responded in a friendly manner, thanked me for pointing out the vulnerability, and sometimes even offered a token of appreciation. The worst experiences that I had were either being ignored or being told to not bother the recipient with such things.
Coming back to the initial question, though, I believe that it’s important to know the boundaries. Over the recent years, we’ve heard a few stories of security researchers going too far and getting in trouble. If one respects the scope of a program, tries not to break things when performing their tests, and doesn’t attempt to extort anything (a.k.a. “bounty plz”), chances are good that it will be a win-win situation.
But of course, my story could totally be survivor-biased and might not work out well for others, so take it with a grain of salt.
“If one respects the scope of a program and tries not to break things when performing their tests, chances are good that it will be a win-win situation.”
In the end, it all boils down to trust. Can you be trusted not to overstep visible (or invisible) boundaries? Can you effectively communicate your good faith effort?
In all cases, it certainly helps to be aware of the rules of a responsible disclosure and bug bounty program and follow them thoroughly. If you’re unsure, ask first.
Gehaxelt: Back in the day, Twitter was a really great resource for this once you followed the right people who frequently shared their findings and techniques. However, I feel like this has changed over time – I now see fewer write-ups on blogs, but on the other hand, there are many publicized bug bounty reports that you can read, understand, and learn something from. There are also other sources of knowledge, like YouTube or podcasts, but in the end, nothing beats hands-on experience.
Personally, I really enjoy reading HackerNews to get a community-curated feed of technical news related to IT security. In terms of keeping my skills and knowledge up-to-date, my go-tos are capture the flag (CTF) competitions. Solving security riddles with a team of like-minded people has been — and still is — an invaluable learning resource for me. Well-organized CTFs usually feature the latest vulnerabilities and hacking techniques, so you won’t be able to avoid them if you want to come in first place.
Talking to other people, collaborating, and attending conferences obviously also helps in staying up to date. Last but not least, as a Ph.D. candidate, I closely follow academic research, which can sometimes be applicable to bug bounties as well.
Gehaxelt: Regarding responsible disclosure and bug bounty programs, the steep competition is a big challenge. It can become quite demotivating if you’re unable to find vulnerabilities or only come across dupes — and this happens more often these days, since modern websites are more secure than they were 10 years ago.
It might be tempting to go out-of-scope and hack on endpoints that others haven’t yet looked at, but in doing so, you’ll void the Safe Harbor Agreement (SHA) that most bug bounty platforms have. You can avoid that by staying inside the scope and keeping true to a program’s rules. Ask the program owners if something is unclear.
Gehaxelt: Speaking of abiding by the scope, Detectify can be a big help in this regard. Once a submitted module is validated and accepted, it will only be run against Detectify customers’ assets and endpoints, which Detectify is authorized to test. This frees up Crowdsource hackers’ time for hacking on other bug bounty targets, and because Detectify runs your modules across its customers’ assets, you get a greater reach, with your checks running against targets you might not otherwise have access to. It’s also great if you don’t have the time for hours-long or all-night bug bounty hunts. You’ll receive a monetary reward every time your module produces a hit.
In a nutshell, what I like about Detectify’s Crowdsource system is that they do the work and I can do the research — it’s a win/win for everyone. 
Detectify Crowdsource embraces the talents of ethical hackers like Gehaxelt. If this work aligns with your interests, we encourage you to learn more about the opportunities made possible by joining Crowdsource.
Additionally, you can keep up with our team’s activities to stay looped in on our Crowdsource hackers’ latest and most significant research.
The post Q&A with a Crowdsource hacker: Sebastian Neef a.k.a. Gehaxelt appeared first on Labs Detectify.
The post 2022 Detectify Crowdsource Awards: Meet the winners appeared first on Labs Detectify.
The latest awards reflect our hackers’ achievements made during 2022. Our selection of awards, which we’ll take an in-depth look at below, is determined by a variety of factors: some awards are based on the number of points the winner has collected, while others take the number and quality of the researcher’s submissions into account.
We also have an award for individuals who have been exceptionally great to work with by using effective communication, being especially helpful, or consistently having a good attitude.
When it comes to the points mentioned above, here’s how they work:
Without further ado, let’s recognize each of 2022’s award-winning Crowdsource hackers!
First up is our Leaderboard Leader award. This title goes to the Crowdsource ethical hacker who has collected the most points during the entire year. Coming in at the top of the leaderboard during 2022 was melbadry9, who managed to earn a whopping 77,229 points!
In addition to this win, melbadry9 also hit 100,000 all-time points – another significant achievement on its own. Congratulations!
Our Substantial Submitter, awarded for earning the highest number of valid submissions, goes to Geekfreak! Coming in at 87 valid submissions during 2022, this accomplishment is no small feat.
Attaining the highest submission validity ratio across 5 or more submissions earns one of our hackers the title of Superiority Submitter. During 2022, we were impressed with the quality of the submissions from none other than Peter Jaric, whose submissions resulted in a 71% validity ratio. Kudos, Peter!
Detectify Crowdsource wouldn’t be where it is today without the feedback and support of our community members. For their constant willingness to help, great attitude, and proactive activity in our internal channels, we’d like to recognize Gehaxelt as 2022’s Fabulous Feedbacker!
The Significant Start award goes to a Crowdsource researcher who accumulated the most valid submissions during their first month in the Crowdsource community. Hardik, who had six valid submissions during his first 30 days in the community, is the much-deserved winner of this title.
Our Serial Submitter is a Crowdsource hacker who has had at least one valid submission during each month of the year. For his great contributions that come in on a consistent basis, we’re proud to present Geekfreak with this award.
The coveted title of Bullseye Bughunter goes to a researcher with a validity ratio of 100%. This is of course very difficult to achieve, so we didn’t have a contributor who fit this award’s criteria for 2022. We’ve got high hopes for next year!

Rounding out the 2022 Detectify Crowdsource Awards is our Team Trophy. This is an especially notable award, since it’s a title that is voted on by Detectify’s internal researchers. It goes to a Crowdsource researcher who has been a strong communicator and reliable, high-quality submitter. During 2022, we had not one but two hackers that stood out to our team’s researchers: j0v and tengeez. Congrats to you both!
We’d like to thank each and every member of our Crowdsource community for their valuable submissions – the Detectify Crowdsource Awards are just a small token of our appreciation for the important work that you do.
Want to know more about Crowdsource? We encourage you to meet the Crowdsource community and find out more about how the program works.
If you’re interested in getting involved, you can apply to hack with us today and become one of our ethical hackers who are passionate about securing modern technologies and making the Internet a safer place.
The post Advanced subdomain reconnaissance: How to enhance an ethical hacker’s EASM appeared first on Labs Detectify.
Many EASM programs limit the effectiveness of subdomain enumeration by relying solely on pre-made tools. The following techniques show how ethical hackers can expand their EASM program beyond the basics and build the best possible subdomain asset inventory.
Subdomain enumeration is commonplace, but root domain enumeration is often ignored. Root domain enumeration can be performed in many ways, including:
Searching for acquisitions can help discover assets previously owned by an acquired company. You can search for acquisitions by visiting Crunchbase, searching for an organization, and scrolling down to “Acquisitions.” Here’s an example of Tesla’s acquisitions:
Going to known in-scope domains and using a web crawler can lead you to new subdomains and root domains owned by the same company. Just remember to confirm that the new domain is in scope.
Using a tool like katana on a target’s seed domain can help you find new subdomains. For example, running katana -u https://tesla.com will find auth.tesla.com, ir.tesla.com, shop.tesla.com, and more.
Using the power of Google can lead to a variety of findings. Try dorking for the company’s copyright using “© [COMPANY]. All rights reserved.” or using the allintext, allintitle, and allinurl tags such as allintitle:”Yahoo”.
If the organization has an ASN or their own IP range, there are tools such as Shodan and dnsx that can check these for domains. You can search an organization’s ASN details with the Hurricane Electric BGP toolkit to find ASNs and IP ranges that can be scanned to discover new assets. Here are just a few of the results from searching “Tesla”:
WHOIS is a protocol mostly used for looking up ownership information of domains. Normally, you provide a domain name and a WHOIS server responds with the domain owner’s details, but some services allow you to do a reverse WHOIS lookup: you provide an organization name or email address, and it returns all of the associated domains.
This information has a very low false positive rate and can yield thousands of domains for larger companies. Some of the better-known reverse WHOIS services are Whoxy, ViewDNS, and WhoisXMLAPI. They all offer APIs that make it easy to automate the process.
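To sketch what that automation might look like, here is a minimal Python example. The response shape below is an assumption modeled loosely on what reverse WHOIS APIs tend to return (a list of records with a domain name field) — check your provider’s API documentation for the real endpoint and field names. The parsing itself is a pure function that is easy to adapt:

```python
import json


def extract_domains(reverse_whois_json: str) -> list[str]:
    """Pull unique, normalized domain names out of a reverse WHOIS response.

    Assumes a hypothetical response shape like:
        {"search_result": [{"domain_name": "tesla.com"}, ...]}
    Adjust the keys to match your provider's actual API.
    """
    data = json.loads(reverse_whois_json)
    domains = {
        entry["domain_name"].lower().rstrip(".")
        for entry in data.get("search_result", [])
        if entry.get("domain_name")
    }
    return sorted(domains)


sample = (
    '{"search_result": [{"domain_name": "tesla.com"},'
    ' {"domain_name": "TESLAMOTORS.COM"}, {"domain_name": "tesla.com"}]}'
)
print(extract_domains(sample))  # ['tesla.com', 'teslamotors.com']
```

Lowercasing and deduplicating up front matters here, because the same domain often appears in several records with inconsistent casing.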
Certificate transparency logs are a great way to find more subdomains that other methods may not discover. Here’s how it’s done:
1. Query crt.sh for a known domain and note the organization name listed on its certificate
2. Take the organization name and query crt.sh for that organization
3. Take all common names found for that organization, and query those too. I used *.dev.ap.tesla.services here as an example.
This process discovered pages of subdomains that other methods could’ve missed. If you want to see a walkthrough of this method, check out Nahamsec’s video.
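The crt.sh lookups above can be scripted against its JSON output (e.g. https://crt.sh/?q=%25.tesla.services&output=json). A sketch of the parsing step is below; it assumes the `name_value` field of each certificate entry, which may contain several newline-separated names, including wildcards like `*.dev.ap.tesla.services`:

```python
import json


def names_from_crtsh(raw_json: str, suffix: str) -> list[str]:
    """Extract unique hostnames ending in `suffix` from crt.sh JSON output.

    crt.sh returns a list of certificate entries; each "name_value" field
    may hold multiple newline-separated names, some of them wildcards.
    Stripping the wildcard prefix yields names you can feed back into
    crt.sh as new queries (step 3 above).
    """
    hosts = set()
    for entry in json.loads(raw_json):
        for name in entry.get("name_value", "").splitlines():
            name = name.strip().lower().lstrip("*.")  # drop wildcard prefix
            if name.endswith(suffix):
                hosts.add(name)
    return sorted(hosts)


sample = (
    '[{"name_value": "tesla.services\\n*.dev.ap.tesla.services"},'
    ' {"name_value": "shop.tesla.services"}]'
)
print(names_from_crtsh(sample, "tesla.services"))
# ['dev.ap.tesla.services', 'shop.tesla.services', 'tesla.services']
```

In practice you would fetch the JSON over HTTP and re-query each newly discovered common name until no new hosts appear.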
Finding subdomains with permutations is a strategy that has gained a lot of traction recently, and it is something I have had a lot of success with. The basic idea is that we take subdomains we know to exist and then use them as seeds to generate permutations. For example, if app.example.com exists, we could test for candidates such as dev.app.example.com, app-dev.example.com, and app2.example.com.
There are many tools for automating asset discovery through subdomain permutations, such as altdns, ripgen and regulator. I recommend testing all the tools and seeing which one suits you.
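The core of what those tools do can be sketched in a few lines. This is a simplified illustration (real tools use much larger wordlists and smarter pattern mining); the word list here is just an example:

```python
def permutations(subdomain: str, words=("dev", "staging", "qa", "test")) -> list[str]:
    """Generate candidate hostnames from a known subdomain seed.

    For a seed like app.example.com, produces patterns such as
    dev.app.example.com, dev-app.example.com, app-dev.example.com,
    and appdev.example.com. Resolve the candidates afterwards (e.g.
    with a mass DNS resolver) to see which ones actually exist.
    """
    label, _, parent = subdomain.partition(".")
    candidates = set()
    for w in words:
        candidates.update({
            f"{w}.{subdomain}",       # prefix as a new label
            f"{w}-{label}.{parent}",  # hyphenated prefix
            f"{label}-{w}.{parent}",  # hyphenated suffix
            f"{label}{w}.{parent}",   # glued suffix
        })
    return sorted(candidates)


for host in permutations("app.example.com", words=("dev",)):
    print(host)
```

The candidate count grows quickly with the wordlist, which is why these tools are usually paired with a fast resolver rather than checked one by one.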
Continuous monitoring is essential because external attack surfaces are constantly changing. The key to effective EASM is monitoring changes in as close to real-time as possible. Use automation to routinely check for new subdomains, newly registered root domains, DNS record changes, and newly exposed services.
This blog has offered helpful techniques for advanced subdomain reconnaissance that you can add to your EASM toolbelt. There are many data sources to evaluate when assessing an organization’s attack surface. As an ethical hacker, the more data sources you can include, the more effective your EASM will be. As I have mentioned in previous articles, organizations themselves should start thinking like us ethical hackers when it comes to assessing their internet-facing assets and continuously monitoring their growing attack surface. As this blog discusses, root domain enumeration can be performed in many ways, from acquisition research, web crawling, and Google dorking to ASN searches and reverse WHOIS lookups.
Why not give it a try?
DNS Hijacking – Taking Over Top-Level Domains and Subdomains
Determining your hacking targets with recon and automation
[New research] Subdomain takeovers are on the rise and are getting harder to monitor
My online alias is G0lden. I am a hacker out of the midwest United States. I came into the hacking world through corporate jobs out of college, and I also do bug bounties. I enjoy finding new ways to hunt bugs and cutting-edge new tools. Making new connections with fellow hackers is the best part of this community for me!
The post Detectify Crowdsource offers ethical hackers more than continuous bounties appeared first on Labs Detectify.
Detectify Crowdsource is our community of ethical hackers, whose findings are used to test Detectify customers around the globe and enable them to secure their external attack surface. Each time a vulnerability is found in a unique customer asset, a bounty is paid to the ethical hacker who submitted the vulnerability.
Earlier this year, we facilitated a survey to learn more about our community of elite ethical hackers. We have subsequently used many of these insights to inform our product roadmap in 2022 and as we plan for next year. We asked a variety of questions, ranging from how many hours per week they spend hacking to what motivates them to keep hacking. We had nearly 200 ethical hackers participate in our survey, most of whom are members of Detectify Crowdsource. We summarized a few learnings from our survey to share with those interested in hacking with Detectify Crowdsource.
Detectify Crowdsource challenges ethical hackers to find vulnerabilities in technologies used most frequently to build web applications. It was no surprise to us to learn that most survey respondents primarily hack technologies associated with web apps. However, we were pleased to learn that over 50% of our community members work as security engineers in their professional lives.
We also learned that 30% of our community of ethical hackers have 5 or more years of experience as ethical hackers. This not only means that Detectify’s EASM customers benefit from vulnerabilities found by experienced security engineers, but that our members also get to learn from other skilled members. We set a high bar to join our community and we are glad to see this reflected in our survey results.
We’re pleased to learn that we have such a talented community of ethical hackers. However, we know it takes more than a compelling reward system to keep our members engaged. While 34% of survey respondents said that earning money is their top reason for hacking, a whopping 36% claimed that they hope to advance their career and learn through ethical hacking.
There are many resources to improve your ethical hacking skills, and while we may be a little biased about some of our own content, we’ve listed some of our favorite resources:
Detectify’s EASM platform tests our customers’ Internet-facing assets for vulnerabilities we’ve crowdsourced from our community of ethical hackers. Each time a vulnerability is discovered in a unique customer asset, the reporter of that vulnerability earns a bounty (with no limit on earnings, as long as the vulnerability is present in our customers’ attack surfaces). From day one, we have prioritized supporting our community, from quickly resolving issues to answering questions. We were reminded of how important this is when nearly 40% of survey respondents claimed that a responsive team is what makes a bug bounty platform most attractive.
Wondering how you can join our community of leading ethical hackers? Try out our signup challenge to see if you have the experience needed to join Detectify Crowdsource here.
The post Determining your hacking targets with recon and automation appeared first on Labs Detectify.
Many ethical hackers struggle because they are hacking the “wrong” types of targets for them. This is especially true for independent researchers or bug bounty hunters. These endeavors only pay for results and findings, not the time invested. Ethical hackers with a good return on their time ensure that their efforts are focused on hacking targets they are comfortable with. A target that is right for you as an ethical hacker could be any of the following:
This list could go on even further, but the idea is that you should always try hacking targets where you already have an advantage. But how do you find these targets and recognize them in a massive list? The answer is recon and automation!
Recon and automation can be powerful tools for ethical hackers. Recon is the step in which asset discovery takes place. The better you perform your recon, the better the results of your hacking are likely to be. There are many ways that recon can be an advantage, such as:
Using recon to find any of the things above will increase the chances you have success when you hack. Using some of these tricks, you can use recon to take your hacking to the next level and ensure you are only hacking the targets that are best for you.
A word of warning when it comes to recon and automation. While it is true that finding more lucrative assets is a good thing, it’s not the end game. Ethical hackers often get stuck doing so much recon work that they never actually hack the targets. Like all things, there is a balance between reconnaissance and hands-on hacking. If you find yourself struggling with this balance, here are some tips:
Utilizing recon correctly can exponentially increase the returns of your hacking efforts, but getting stuck in the recon phase or getting hit with information overload, can actually hold you back. Balance is key!
Using recon to find juicy hacking targets is like digging for gold. To find gold, you have to dig where others are not. For example, many bug bounty hunters use the same subdomain enumeration tools, sources, and techniques as everyone else. There are a few that dig much deeper than this. To go one step further, you can, for example, try fuzzing, brute-forcing, or generating permutations. The deeper you go, the more likely you are to uncover assets that are untouched by others.
“Bug hunters can find more gold while digging by performing recon continuously.”
Another way bug hunters can find more gold while digging is by performing recon continuously. Almost every ethical hacker performs recon once when they first engage a target; once they have finished gathering information, it is never done again. But imagine the target deploys a new domain the very next day. You would never know the domain exists without continuous recon. This is a perfect use case for automation. Writing some scripts that continuously do recon on your target and report new findings is relatively easy, and the benefits are well worth the work. Any new domains found are always worth a look if you are confident you stumbled across them first. It is like digging for gold in an untouched cave!
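The "report new findings" step boils down to diffing today's scan against what you already know. Here is a minimal sketch; the file path and the idea of feeding it from subfinder or amass are just example choices, and a real setup would run this on a schedule and push notifications somewhere you actually look:

```python
from pathlib import Path


def diff_and_record(new_scan: set[str], known_file: Path) -> set[str]:
    """Compare a fresh recon run against a known-assets file.

    Appends anything new to the file and returns the newly discovered
    hosts, so each fresh hit can be flagged for a closer look. Feed
    new_scan from whatever enumeration pipeline you run.
    """
    known = set(known_file.read_text().split()) if known_file.exists() else set()
    fresh = new_scan - known
    if fresh:
        with known_file.open("a") as fh:
            fh.writelines(host + "\n" for host in sorted(fresh))
    return fresh
```

Wrapped in a cron job, this turns a one-off recon pass into the continuous monitoring described above: the first run seeds the file, and every later run only surfaces what changed.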
So far, we have talked mostly about subdomain recon, but it doesn’t end there. Recon can be used to find things that other ethical hackers have yet to find, allowing you the chance to test them first. Some of these things are:
There are many things that recon can uncover. And thankfully, many ethical hackers have already made tools to help look for all of these things. Here are some examples:
Shodan can be used to find all kinds of good targets with technology that you like or enjoy hacking on. There are a lot of repositories on GitHub for Shodan dorks if you want to check them out. A really simple example is looking for Jenkins server dashboards that are publicly available:
This is a very small example of how Shodan could be used to find hacking targets that would be good for you. Also, in the example above I showed the search engine in the browser; this could be automated with the Shodan CLI!
Httpx is an HTTP toolkit created by Project Discovery. Part of this toolkit is the ability to run technology detection using known fingerprints. In fact, right in the readme file in the repository is an example using their subfinder tool and httpx to quickly enumerate subdomains for a target, then grab their status codes, HTTP titles, and run technology detection:
subfinder -d detectify.com -silent | httpx -title -tech-detect -status-code
This example is very basic, yet surprisingly quick and effective for generating a list of a target’s domains along with some useful information about them. Look at the titles, status codes, and technology output, and from there you should be able to discern which domains are best for you.
Arjun can be used to find all available parameters on a page. This can be useful in many situations. This example is going to show a purposely vulnerable page.

This is a small example showing the power of enumerating parameters. Finding all the parameters available to you will open up your attack surface as much as possible. Also, as with the other examples, this can be automated. Below would be a small tool chain using Subfinder, hakrawler, httpx, and Arjun:
subfinder -d detectify.com -silent | httpx -silent | hakrawler -u | grep "detectify.com" > targets.txt && arjun -i targets.txt -oJ data.json
The above tool chain will find subdomains of a target. Use httpx to find which domains have open web ports that are browsable, then spider those domains with hakrawler to find all the pages, and finally run Arjun on each page to find all the parameters for each page. When this stops running, you should have a good list of the pages on a web application, as well as all of the parameters for each page. If you want to take this even further with automation, I would recommend trying to filter your parameters using something like GF and some patterns.
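The final filtering step can also be scripted. The sketch below assumes Arjun's JSON report maps each page URL to a list of parameter names (check your Arjun version's actual -oJ format), and the regex is only an illustrative stand-in for GF-style patterns:

```python
import json
import re

# Illustrative patterns in the spirit of GF's wordlists: parameter names
# that often point at redirects, file reads, or reflected input.
SUSPICIOUS = re.compile(r"(url|redirect|next|dest|callback|return|file|path)", re.I)


def interesting_params(arjun_json: str) -> dict[str, list[str]]:
    """Reduce an Arjun-style report ({page_url: [param, ...]}, an assumed
    shape) to only the parameters whose names look worth testing first."""
    report = json.loads(arjun_json)
    return {
        url: hits
        for url, params in report.items()
        if (hits := [p for p in params if SUSPICIOUS.search(p)])
    }


sample = (
    '{"https://example.com/a": ["redirect_url", "id"],'
    ' "https://example.com/b": ["name"]}'
)
print(interesting_params(sample))  # {'https://example.com/a': ['redirect_url']}
```

This kind of triage is exactly what keeps a big automated pipeline from drowning you in output: the scan stays broad, but your manual attention goes to the parameters most likely to pay off.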
Recon can be one of the strongest tools in a hacker’s tool belt and is a great way to discover hacking targets. I hope something in this post helps you realize there might be a gap in your recon you can fill or that it may be time to automate some of your recon methodologies. I hope you can take something from this and use it to glean more findings. You can easily keep up with new techniques to find hidden assets and hacking targets by following the hacker community on Twitter, Slack, and other communication platforms.
My online alias is G0lden. I am a hacker out of the midwest United States. I came into the hacking world through corporate jobs out of college, and I also do bug bounties. I enjoy finding new ways to hunt bugs and cutting-edge new tools. Making new connections with fellow hackers is the best part of this community for me!