Redpoint Security (https://redpointsecurity.com/): Helping security professionals and developers navigate the infosec world.

Road Trip to Ruin: Navigating the Roads of User Enumeration
https://redpointsecurity.com/navigating-user-enumeration/
Tue, 03 Feb 2026 21:51:40 +0000

The post Road Trip to Ruin: Navigating the Roads of User Enumeration appeared first on Redpoint Security.

Hey! It’s been a while since the last AppSec Travels blog. In this post we wanted to write about a vulnerability that is near and dear to our hearts: user enumeration. In the world of application security, some vulnerabilities are loud and catastrophic, while others are subtle, acting as silent facilitators for more significant attacks. User enumeration, the process of determining valid user accounts, usernames, or email addresses, falls squarely into the latter category. Our vulnerability assessments show that it remains a widespread issue.

The very first vulnerability I had to assess in my young career was email enumeration. The SaaS-based application I was working with did what all SaaS-based applications do and commingled all users in the same database table. This led to an enumeration flaw when a user changed their email address inside the application: the application would alert the user that the email was already in use. I honestly don’t remember how we fixed it, but the fix probably used CAPTCHAs and rate limiting to mitigate the risk while still alerting the user that the email was in use (sometimes you have to meet Product in the middle).

Since then, user enumeration has been part of my daily routine. Whenever we do developer training we talk about user enumeration, and Seth turns to me and asks in his sultry, made-for-radio voice, “Justin, how often do you think we see user enumeration?” My answer is:

My facial expression also matches Severus’s (it’s not actually always, but it might as well be).

Checking the Map: Our Data Shows Detours Ahead

In an analysis of 82 client assessments, our reports revealed that 84.15% of the applications assessed had some type of enumeration vulnerability. In other words, a vast majority of the applications we reviewed are giving away information to potential attackers.

Vulnerability Type | Applications Affected
Login Endpoint Enumeration | 42
Forgot-Password Endpoint Enumeration | 35
Registration Endpoint Enumeration | 28
Specialized Enumeration (SSN, Customer Name, /accountExists&email={}) | 12

With 67% of the total findings rated medium severity, 3% high, and 30% low, the risk is clear and present. The most common vulnerability types are found in the fundamental authentication flows: the login and forgot-password endpoints.

Where the Pitfalls Are Found

User enumeration vulnerabilities are essentially information leaks that occur when an application’s response to an attacker provides a discernible difference between a valid and an invalid user input.

  1. Login Endpoint Enumeration
  • How it shows up: The system provides distinct server responses or different error messages for a valid account versus an invalid one. For example, the error “Invalid Password” confirms the username exists, while “Invalid Username or Password” (for an invalid username) is less revealing.
  • Why: Developers often provide specific feedback for a better user experience (e.g., “we found that email, now enter your password.”) but fail to consider the security implications of this specificity.
  • Example: a valid user is redirected to /login, while an invalid email is redirected to /new.
  2. Forgot-Password Endpoint Enumeration
  • How it shows up: Similar to login, this endpoint often gives unique responses for an existing email address compared to a non-existing one, or provides specific error messages revealing account status.
  • Why: The intent is to inform the user if they typed their email correctly, but in doing so, the application confirms which email addresses are tied to a registered account.
  3. Registration Endpoint Enumeration
  • How it shows up: The system returns different error messages or API responses confirming account existence when a new user attempts to register with an email that is already in use.
  • Why: This is often a feature to prevent duplicate accounts, but it allows an attacker to validate a list of potential customer emails against your system.
  4. Specialized Enumeration (e.g., SSN, Customer ID)
  • How it shows up: This occurs when a unique identifier, like a Social Security Number (SSN) or a specific customer ID, is used in an API or form field, and the response differs based on its validity, allowing for SSN or Customer identifier exposure.
  • Why: These often expose internal data structures or assumptions about data validity, allowing an attacker to probe a known sequence of identifiers to build a list of existing records.
  5. Login Rate-Limit-Based Enumeration
  • How it shows up: The application blocks a valid user’s account after a set number of failed login attempts, and this is communicated to the UI (e.g., “Account blocked”). However, failed login attempts on invalid accounts do not result in this block message, allowing an attacker to distinguish valid accounts from invalid ones based on which input triggers the “blocked” message.
  • Why: This is a common flaw in rate-limiting or brute-force mitigation logic that is only applied to confirmed, existing accounts, thereby exposing the account’s validity status.
  6. Timing Enumeration
  • How it shows up: The application takes a measurably longer time to return a response for a valid account/username compared to an invalid one. This difference, often only a few milliseconds, is due to the server’s internal processing (e.g., performing a password hash for a found user versus immediately failing for an unknown user).
  • Why: This performance difference is a side-effect of the application’s internal processing logic, which executes more steps for a recognized account before returning an error message.
  7. Enumeration as a Service
  • How it shows up: Unauthenticated account lookups or account validation APIs. Generally this is used during the registration process. 
  • Why: Ease of use for users when creating accounts. 
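To make the first flavor above concrete, here is a minimal sketch of the two login behaviors, with the generic-message fix applied. The user store and handler names are invented for illustration, not taken from any assessed application (and a real application would never store plaintext passwords):

```python
# Hypothetical in-memory user store, for illustration only.
USERS = {"alice@example.com": "correct-horse"}

def login_leaky(email, password):
    """Vulnerable: distinct messages reveal whether the account exists."""
    if email not in USERS:
        return "Invalid username"      # attacker learns the account does not exist
    if USERS[email] != password:
        return "Invalid password"      # attacker learns the account exists
    return "OK"

def login_uniform(email, password):
    """Safer: one generic message for every failure mode."""
    if USERS.get(email) == password:
        return "OK"
    return "Invalid username or password"
```

With the uniform handler, a wrong password and an unknown email produce byte-identical responses, leaving the attacker nothing to distinguish.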

Warning Signs: Enumeration as the First Stop on the Attack Roadmap

The core risk of user enumeration is that it provides the necessary groundwork for more severe attacks. An attacker who successfully enumerates valid accounts gains two critical advantages:

  • Targeted Attacks: They can use the list of valid usernames/emails to perform highly effective brute-force or credential-stuffing attacks. They no longer waste time trying random usernames; they can focus their efforts on guessing the password for known accounts. Beyond exposing sensitive data, one of the larger risks of leaking users’ email addresses is creating an avenue for spear-phishing attacks. An attacker now has a valid email for a user of the application, and with a little sleuthing could craft a phishing campaign that appears to come from the very application where the enumeration occurred.
  • Privacy Exposure: In business-to-business (B2B) or specialized applications, knowing that a specific person (identified by their email or ID) has an account with your service can be a security and privacy risk in itself.

Pothole Patrol: Security Measures for a Smooth Authentication Journey

A consistent pattern of revealing account existence through response differences is the root cause of these vulnerabilities. The fix lies in introducing uniform response handling and friction to the authentication process.

Vulnerability Type | Fix/Recommendation
Login, Forgot Password, Registration, and Change Email | Implement Generic Error Messages: The most effective fix is to use a single, vague error message for all invalid input combinations. For example, instead of “Invalid Password,” use “Invalid username or password” for both wrong-username and wrong-password attempts. This makes it impossible for an attacker to distinguish a valid account from an invalid one.
Registration Endpoint Enumeration | Out-of-Band Communication: Return a generic message to the UI stating that an email has been sent (e.g., “You will receive an email shortly”). The email can then either provide a link to complete the registration (for new users) or inform the recipient that they already have an account for the application.
Login, Forgot Password, Change Email | Use Consistent HTTP Status Codes: Ensure your application uses the same HTTP status code (e.g., 401 Unauthorized or 400 Bad Request) regardless of whether the username exists or the password is simply incorrect.
All Endpoints | Add Rate Limiting: Implement rate limiting on all authentication-related endpoints (login, forgot password, registration) to restrict the number of attempts a single IP address or user can make in a given time period. This slows down and ultimately defeats automated enumeration tools.
Brute Force Protections | Out-of-Band Notification: Accounts can be locked after a number of failed login attempts. The lockout threshold should be determined by the data the account protects; an online bank account should lock at a much lower number of attempts than a recipe application account. Do not reveal the lockout in the UI or in server responses. Instead, the application should send a single email or text alerting the user that their account has been locked. It is safe to assume that valid users will use the forgot-password mechanism if they have failed several times to log in to the application.
High-Risk Endpoints | Implement CAPTCHA: Add CAPTCHA challenges to sensitive endpoints, especially forgot-password and registration forms. This provides an additional layer of friction against automated scripts.
Timing Enumeration | Normalize Response Time: Hash the incoming password first, then look up the user with the hash and username/email at the same time, so valid and invalid accounts cost the same amount of work.
General Principle | Avoid Exposing Account Existence Information: Review all authentication-related flows to ensure no specific language or server response reveals the existence of an account to an unauthenticated user.

Off Ramp

User enumeration is everywhere, and it hasn’t gotten better since I started my career in application security over a decade ago. Some instances are easy to understand and fix: just don’t tell the user that the account doesn’t exist, for example. The really hard one is registration. We are often at odds with the product team because their need for a good user experience outweighs our need for a highly secure registration flow. In the end, our job is to mitigate risk as best we can and hope our applications aren’t the lowest-hanging fruit.

If you’d like to discuss the user enumeration vulnerability more or see if Redpoint Security can advise you on securing your application against user enumeration or other attacks, reach out below.

Thoughts on the new OWASP Top Ten
https://redpointsecurity.com/thoughts-on-the-new-owasp-top-ten/
Tue, 25 Nov 2025 17:58:50 +0000

The post Thoughts on the new OWASP Top Ten appeared first on Redpoint Security.

The 2025 OWASP Top 10 is here, and it might be my gray hair speaking, but it seems everything old is new again. For old hats like myself, who relied on the initial 2003 list to guide an early penetration testing career (thank you, Classic ASP, for the good times), the 2025 list offers fewer shocking revelations and more a sobering confirmation: there is a lifecycle to web application security, and it is cyclical. This does not at all dismiss the value of the Top 10 project; it takes herculean amounts of effort to gather and analyze data from multiple consultancies, tool vendors, and security teams. The project shines a light on where web application security should be focusing its efforts, and a shock in the Top 10 list would mean the industry had missed the boat and not been paying attention. The few surprises on the list this year, though, draw on concepts that have been discussed in security research since the 1970s. (Really, just go read the Ware Report if you never have.)

When it comes to information security, the disco era still has advice for us.

As someone who’s been around the block a few times, you start to notice the pattern where attention shifts as defenders fix popular flaws, pushing attackers to focus on less-traveled, yet familiar, vulnerabilities. 

I decided to dive into each of the updated risks (with callouts for various vulnerabilities I see on a regular basis), sharing my feelings as someone who has been around since the inception of the Top 10 (anyone else remember Remote Administration Flaws?) and summarizing some thoughts on each of the identified risks.

[Figure: Mapping categories for the new OWASP Top Ten]

A01:2025 – Broken Access Control

My friend (and co-worker) Justin Larson likes to say that Insecure Direct Object References (IDOR) pay our salaries. He’s not wrong, given the amount of authorization issues we see regularly during assessments. I have a hard time seeing this risk changing in the future, either. While IDOR or simple authorization issues may reduce in number, this category is at its core business logic flaws. Some of the first flaws I found as a clean-shaven, wide-eyed, fresh security consultant involved changing ID integers on portals for customer energy bills to see another user’s energy usage. While nowadays I may need to discover some UUID to perform the same action, the risk and lack of security around these calls just hasn’t changed that much.
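For readers newer to IDOR, here is a minimal sketch of the ownership check whose absence caused the energy-bill flaw described above. The bill store and function names are invented for illustration:

```python
# Hypothetical bill store keyed by sequential ID, for illustration only.
BILLS = {
    101: {"owner": "alice", "kwh": 420},
    102: {"owner": "bob",   "kwh": 655},
}

def get_bill(bill_id, current_user):
    """Return a bill only if the caller owns it."""
    bill = BILLS.get(bill_id)
    if bill is None:
        return None
    # The missing step in the vulnerable portals: verify ownership,
    # not just that the caller is logged in.
    if bill["owner"] != current_user:
        raise PermissionError("not your bill")
    return bill
```

Without the ownership check, any authenticated user could walk bill_id from 101 upward and read everyone’s usage; swapping the integer for a UUID only makes the walk harder, not the flaw gone.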

A02:2025 – Security Misconfiguration

I will trust the team on the numbering with this one, as configuration issues continue to introduce additional risk to most applications. My view into these issues is very dependent on the scope of our current task and what technology supports an application. Web server, cloud, and framework configuration flaws account for quite a few issues and often fall outside a developer’s purview. These lines get crossed, and all of a sudden cloud storage with customer records is exposed to the world at large. Or maybe a software update resets administrative credentials back to the default (ask me over drinks how I know).

A03:2025 – Software Supply Chain Failures

This category is new(ish) for the 2025 list and could be counted as a bit of a surprise. It combines Vulnerable and Outdated Components from 2021 with the risks of attackers performing watering-hole attacks against high-value package repositories and maintainers. Sometimes we need a bit of a shock to the system in these lists, but we should have seen it coming given the great research and breaches happening in this space. If I were building the list, this may even rank higher than configuration flaws. Even a cursory glance at security news (e.g., Shai Hulud, Indonesianfoods) shows the prevalence of and increased scrutiny on this space.

A04:2025 – Cryptographic Failures

Agree or not, the name of this risk still gives me pause, even though I understand the need to include additional CWEs in with Sensitive Data Disclosure. What I don’t like is that the term “Cryptographic” spurs thoughts of encryption keys, token generation, cryptographic algorithms, and NFTs. I end up arguing with engineers and developers about transport layer encryption, when we should really be talking about data protection.

A05:2025 – Injection

2007 called and wants its Injection Flaws back. At least the category now pulls together all of the previous “injection” flaws that once stood on their own: Command Injection, Cross-Site Scripting, Unvalidated Parameters, and SQL Injection have all featured individually in past lists. The risk of each flaw has been reduced by industry maturity, frameworks, and developer experience, so a combined category makes sense.
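As a refresher on why frameworks pushed this risk down the list, a small sqlite3 sketch (invented schema and data) contrasting concatenated SQL with placeholder binding:

```python
import sqlite3

# Toy in-memory database for the sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_unsafe(name):
    # String concatenation: a payload like ' OR '1'='1 rewrites the query
    return conn.execute(
        "SELECT name FROM users WHERE name = '" + name + "'").fetchall()

def find_user_safe(name):
    # Placeholder binding: the payload is treated as data, not SQL
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()
```

The classic tautology payload returns every row from the unsafe query and nothing from the parameterized one, which is exactly the maturity the combined category reflects.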

A06:2025 – Insecure Design

Oh good, a catch-all risk for all of the others and anything else we have seen over the last 25 years. No matter how you look at it, the whole list has commonalities between CWEs and risks. Any AppSec engineer can talk a specific vulnerability into two or three categories depending on where in the SDLC the flaw is introduced. On the positive side, inclusion of this risk in the Top 10 has highlighted the need for Threat Modeling, Security Requirements, and the other tasks that fall under “Shifting Left”.

A07:2025 – Authentication Failures

Okay, so now I want to argue, or at least mildly disagree with a ranking. I see authentication failures in a high percentage of assessments and feel like the industry puts little weight on these risks. Taken by themselves, authentication vulnerabilities may only be low or medium severity, but I have seen them combined to great effect to compromise accounts, expose data, and access administrative functionality. And yes, I still consider the email address/username to be a piece of the authentication puzzle. Enumeration is an often overlooked and underrated vulnerability.

A08:2025 – Software or Data Integrity Failures

The introduction of this flaw in the 2021 list was a yawn for me, maybe because the initial read was related more to software integrity checks during install. I am coming around to agreement that it needs to be included at some level, given the dependence of most software on third-party services and components. Research into subdomain takeovers and dangling domains show how crucial these integrity failures can be. 

A09:2025 – Logging & Alerting Failures

Ah! Vindication! The Crocs & Socks is still here. I get the ranking. It seems that most of the industry, author excepted, ignores logging failures until the breach happens. I was guilty of this during my initial introduction to security, but got a quick lesson in application logs, forensics, and gaps in coverage while trying to identify successful SQL injection attacks in the early 2000s. Logs and alerts matter. Third-party interactions and integrations can introduce gaps most developers won’t think about during development.

A10:2025 – Mishandling of Exceptional Conditions

Finally, the second new risk category for 2025. Anytime an application or infrastructure fails in an insecure manner, a bug bounty researcher writes his first AI-generated submission. Okay, maybe not. Denial of service, failing open, extra parameters, and just generally unexpected behaviors are all accounted for.

Omissions

So what are we missing? The inclusion of Server-Side Request Forgery into Broken Access Control (aka Broken Authorization) means there are no single-vulnerability risks in the Top 10 for the first time. Categories have replaced vulnerabilities, with the majority of the risks even labeled as a collection of failures or conditions. I am not sure if this means we have matured as an industry or if we are missing an opportunity to highlight new and upcoming research. The list has morphed from the original “Top 10 Web Application Vulnerabilities” to “Risks” as practitioners and developers have become increasingly aware of the dangers operating on public networks. While specific vulnerabilities have been “fixed”, developers must still be trained on the dangers of string concatenation, input handling, and output encoding.

Consider the AI of it all

While it is too early to include AI in the list, it doesn’t take a prophet to see that it is going to change the risks of operating an application on the web. Whether that means that vibe coding applications include old vulnerabilities (yes, we have indeed seen a resurgence of SQL injection flaws in the last 12 months) or AI agents find and fix flaws before they make it into production, the AI effect should be reflected in the list in some manner. While it was premature to add an AI-specific risk this time around, it’s going to happen in the next iteration, if not before.

What the Top 10 gets right

The OWASP Top 10 is an awareness document meant as a guide for application security programs. For years, I have harped on the early security triad of AAA (Authentication, Authorization, Accounting) as the baseline we should be using for application security programs. The representation of this triad in the list is confirmation that the security research conducted against computer systems in the 70s and 80s was correct. CIA (Confidentiality, Integrity, Availability) is also well represented. After watching OWASP struggle to find its voice through the Top 10, I think the project is now exactly where it should be.

What’s Next

Take it from someone who has been around the block a few times with these OWASP updates: the OWASP Top 10 project is now (finally) mature, and any expectation that it will change more than 10-20% between iterations is unrealistic. It is no longer surprising that my favorite talking points, errr, specific risks are not reflected directly in the list. Industry-wide lists are helpful for new applications and for training developers, but they will not address every application, language, framework, or organization. The proliferation of the other OWASP Top 10 lists is a testimony to the desire to apply the same sort of thinking to other technologies and development techniques.

So take this list and use it. Develop your own Top 2, 4, 5, 8 or 10. Identify the risks that speak to you and your organization and improve your posture with this new focus. Integrate the categories into training, show examples, look for these flaws in that newly-vibe-coded application. In short, apply the principles and let’s make the web less risky. For everyone.

Seth

SDLC – Managing risk in Software through the compounding effect of control gates
https://redpointsecurity.com/sdlc-managing-software-risk-through-control-gates/
Thu, 06 Nov 2025 22:41:21 +0000

The post SDLC – Managing risk in Software through the compounding effect of control gates appeared first on Redpoint Security.

By Cameron White

[Image: “Civil service police exam hurdle test, 1942” by Seattle Municipal Archives, licensed under CC BY 2.0.]

If you’ve ever watched someone run the hurdles in a track meet, you may share my amazement at their agility to consistently leap each hurdle at speed when the pressure to perform is on. The compounding exertion to clear each barrier is not hard to imagine, and when you’re trying it yourself, the effort can feel surprising.

Maybe that is one of the reasons I’ve always liked the analogy of hurdles when discussing quality gates or security controls. It may be possible to imagine the quality defect or security bug sailing over the first one or two hurdles, but it is a proportionately much rarer thing for a bug to successfully navigate all the control gates when they are properly aligned in support of the final product. There are many industries that take advantage of the compounding effects of control gates. Borrowing from my own experience, I’ve seen them heavily utilized in both the food manufacturing and software development industries. 

Path to Controlled Product Release

In the food manufacturing world, there is no tolerance for safety defects. There are, from time to time, some minor quality issues, but for the most part, manufacturers aim to maximize the amount of high-quality product produced per the given resources. There can be tremendous pressure on a production team to produce the maximum amount of product with the least amount of downtime. Profit margins can be incredibly narrow (sometimes pennies on the dollar) for food products, and a business is often only able to remain in business because of the sheer volume of product produced. In light of that context, it is interesting that food manufacturers often speak of microbial load and critical control points in a not-too-different way than security and software development teams talk about tech/security debt and stop gates.

Because of these similarities, I think it’s worth looking at the theory behind some of the tolerances food product manufacturers build into their processes. And how these controls are designed to support the flow to eventual product release. 

In food manufacturing, controls are built out starting with a description of the end product. Yes, there are common controls to expect, but they are tweaked and designed to align in support of the process to produce the end product. Can someone say Threat Modeling?1 These controls are then strategically inserted into the production flow so they occur at the most efficient junctures. Of note, no single control is designed to cover all risks. They are meant to overlap and offer backup to other controls when possible. It is the cumulative effect of these controls that gives the production team the assurance that the finished product won’t make anyone sick and that it merits the customer’s trust.

For food companies, the risk of a recall is also significant. As mentioned, the margins can be so tight on some products that having to recall everything produced during a specific shift or from a given location can be enough to put a company deep in the red.

In software development, the risks are less likely to directly impact anyone’s health, and are usually limited to the ramifications of important services going down or the loss/exposure of sensitive data. That said, the financial implications of services going down or sensitive information ending up in the wrong hands can be very steep. And similar to the pressure the food industry navigates when scaling safety/quality controls across large quantities of product, there is so much code being written these days that resources for testing and securing each line of code must be thoughtfully scaled. Accordingly, it is in the systemic approach to the “defense in depth” mantra headlined by security teams that information security teams can gain similar confidence that their systems are resilient enough.

Compounding Effect of Security Gates:

Going back to the sprinter/hurdler we mentioned at the beginning, this cumulative yet logarithmic effect can be physically experienced if you have ever tried to clear a series of hurdles while running down a stretch of track. The first hurdle or two can feel like quite an adjustment compared to the stride you had while simply running, but, notably, once you’ve begun to master the technique, seven hurdles doesn’t feel much harder than ten hurdles. 

In manufacturing this effect is expressed as a logarithmic curve (think of the inverse of an exponential curve).

This is because the initial controls will have the highest impact on the overall outcome. However, it is the additional controls that help to ensure there are no gaps, even if a single control fails; the additional coverage reduces the likelihood of a single point of failure creating an unsafe/unfit product. 

These diminishing returns mean there is almost universal agreement on the value of implementing some carefully-thought-out controls for any process, but as the number of controls increases, their value becomes harder to justify. Additional new controls are not all equally impactful in preventing the bug/defect/contaminant. The risk-management side of information security really becomes prominent when discussing the ROI of various security controls with the company’s CFO. How then do we know if we have enough controls in place to cover the gaps? This is how we get to, as Kelly Handerhan is fond of saying, “secure enough”.

Which Security Gates to choose?

Which controls offer the highest impact? Which controls can most easily be validated? The easier it is to validate that a control is working properly, the less need there is to devote additional resources to preparing for the time that control will fail. What if the risk commonly associated with missing a vulnerability via a false negative could be made entirely tolerable by focused effort on ensuring the controls you have in place are working? I would suggest that, besides the obvious compliance check a penetration test can provide, the intrinsic value of security assessments largely comes from the assurance they can provide on whether the controls in place are working as desired.

Identifying which controls to implement for a code-delivery pipeline can feel like quite the array of options, but building with the end in mind can narrow the scope considerably. Consider the following questions as a starting point:

  • What are my software assets?
  • Which team owns or is responsible for that asset?
  • How does code for that asset get promoted to the production environment?
  • Who will be using this solution? And what are their security concerns/requirements?
  • What data types and corresponding regulatory requirements do I need to consider?
  • What steps does every code change go through before reaching the production environment?
  • Where are the natural pauses or handoffs during the code-promotion flow?
  • What security/quality controls do I have in my toolbox?
  • Which controls are so critical that if they failed it would destroy trust in the product? 
  • How are those critical controls validated and regularly re-validated?
  • Which controls support or provide similar/redundant coverage to those critical controls? 
[Figure: SDLC process which foregrounds feedback loops and iterative processes]

The Case for Process Validation

With the answers to these questions in mind, it would be possible to draft a minimum security checklist and map where these controls or ‘checks’ could fit in the software deployment workflow. And logically, if we wanted assurance this process is working, we would need to test it. We would need to send well-defined security bugs through the process and confirm whether our checks catch anything that falls outside the tolerances we have specified.

[Figure: Example of an SDLC in Waterfall with locations for stop gates]

I hope you’ll bear with one last anecdote from food-manufacturing workflows. For large facilities, it is not uncommon for food manufacturers to set up metal detectors or even x-rays near the end of the production line as a final safety precaution, ensuring no stray bolts, metal filings from the machinery, or any other artifact from the production facility inadvertently wound up in the product. That sounds very robust, but because the demand on these machines is quite intense day in and day out, quality assurance teams perform tests periodically throughout each production shift to validate and confirm the machines are working properly. They do this by passing samples of steel, iron, or plastic pellets of specific sizes through the machine and documenting whether the machine successfully detected or failed to detect each artifact.

They may have three to five sample sizes, and as long as the machine detects all of them, it is considered operational. However, if the machine ever fails one of these checks, the time of the failed check is noted, and the tester looks back through the documentation to find when the last successful test occurred. Any product checked by that machine since that last successful test is quarantined until it can be re-scanned by a machine that has been properly calibrated and is performing within specifications. This level of validation, although mostly unknown to the general public, has led to such an absence of contaminants in the final product that the public feels comfortable consuming their next frozen meal from the local grocery store.
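The quarantine rule the testers follow (roll back to the last successful check and re-inspect everything scanned since) translates directly into code. A minimal sketch, with timestamps as plain integers purely for illustration:

```python
def to_quarantine(validation_log, batches, failure_time):
    """Given a log of (time, passed) control checks and a list of
    (time, batch_id) products scanned by the machine, return the
    batches scanned between the last successful check before the
    failure and the failure itself -- those must be re-scanned."""
    last_pass = max(
        (t for t, passed in validation_log if passed and t < failure_time),
        default=0,  # no prior pass: quarantine everything up to the failure
    )
    return [batch for t, batch in batches if last_pass < t <= failure_time]
```

The same logic applies unchanged to a failed security control in a deployment pipeline: every build promoted since the control last demonstrably worked is suspect.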

This same way of thinking (recognizing that not all security controls are equally impactful) must be applied to our most critical software security controls. If validations are synonymous with annual audits at your organization, I would suggest that means your organization is not performing validations but may have unintentionally conflated its compliance program with its validation process.  

What does this look like in practice? It depends on the control, but, much like the use of an EICAR anti-virus test file, it may be as simple as letting the development and security teams partner up to insert a well-defined vulnerable chunk of code and track it through the deployment flow. You may be surprised by what you learn from such an exercise.
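As a hedged sketch of that exercise (every name here is hypothetical), a team might plant a uniquely tagged, deliberately vulnerable snippet and then confirm that the pipeline's scanner report actually flagged it before promotion:

```python
# Hypothetical "security canary" exercise. The snippet below is the kind of
# well-defined vulnerable code a team might deliberately plant; the check
# confirms the pipeline's scanner report caught it before promotion.
CANARY_ID = "SEC-CANARY-0001"

CANARY_SNIPPET = f'''
# {CANARY_ID}: deliberately vulnerable -- string-built SQL (injection)
query = "SELECT * FROM users WHERE name = '" + user_input + "'"
'''

def canary_caught(scanner_findings: list) -> bool:
    """True if any finding in the scanner's report references the canary tag."""
    return any(CANARY_ID in finding.get("snippet", "") for finding in scanner_findings)
```

If `canary_caught` returns False after a run, the gate that should have stopped the canary did not fire, and you have learned something an annual audit would likely never tell you.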

Scaling Code Security

The amount of code produced each year has steadily increased, and with the advent of AI coding agents, it is expected to shoot into the stratosphere! At this scale, the need for coordinated, complementary security controls that give customers assurance becomes a business imperative.

Historically, food manufacturing became ubiquitous following the industrial revolution. With increased urbanization, people became disconnected from their food sources, which increased the need for non-perishable, mass-produced food. But safety problems and the public’s loss of visibility into food production eroded trust in the food being produced. That lack of trust led to the formation of the FDA, and Title 21 of the CFR was written, legally mandating many quality and safety checks.

Common Minimum Security Requirements for Software:

There are a number of security frameworks you can refer to when identifying the most appropriate security controls for your organization’s or team’s code-delivery pipeline. Some highlights include:

From these you could derive a possible short list of must-have controls, but we can also open the question up to our consultant friends, appsec program managers, or you, our learned readers:

  • What are your tests? 
  • When juggling the necessity to ship code at high velocity and meet customer/client’s expectations of quality & security, what series of tests have you found helpful in your experience? 
  • Do you have a short list of must-haves that get you by in a pinch?
  • Have you discovered a way to prevent metaphorical steel pieces from getting into your code pipelines?

We’d love to hear your suggestions! Or, if you’d like us to help implement a more secure process for your organization, use the form below to contact someone at Redpoint.

  1. Software Threat Modeling is a structured approach to identify potential threats and vulnerabilities in a software system. It involves gathering context by analyzing the system’s design and functionality to understand how attackers might compromise it and then developing strategies to mitigate those risks. ↩

The post SDLC – Managing risk in Software through the compounding effect of control gates appeared first on Redpoint Security.

]]>
The Loop that Feeds Itself: How human data trains machines that train us https://redpointsecurity.com/ought-i-bot-or-not/?utm_source=rss&utm_medium=rss&utm_campaign=ought-i-bot-or-not Tue, 22 Jul 2025 23:56:40 +0000 https://redpointsdev.wpenginepowered.com/?p=1574 A decadent introduction by Jade Lee Our world is made up of an astronomical amount of data. Beyond internet traffic and hard drives, we now supply a consistent stream of data through our cars, refrigerators, doorbells, and most importantly, our words. Almost the entire expanse of human thought and home can be translated into vector embeddings, […]

The post The Loop that Feeds Itself: How human data trains machines that train us appeared first on Redpoint Security.

]]>
The data loop that inspired our interns’ Bot-or-Not Project

A decadent introduction by Jade Lee

Our world is made up of an astronomical amount of data. Beyond internet traffic and hard drives, we now supply a consistent stream of data through our cars, refrigerators, doorbells, and most importantly, our words. Almost the entire expanse of human thought and home can be translated into vector embeddings, by which I mean machine-readable data. It’s what enhances the LLMs that supplement everything from ancient language translation to the essays college students forgot were due. There is so much data, in fact, that much of it gets misplaced, is seldom organized, and is impossible to approximate.

We can safely assume that what is stored, however, is stored indefinitely. Our hobbies, political views, internet personas, manner of speaking, likeness; the identities tied to us make up the goods that circulate throughout the attention economy. This inconceivable wealth of information creates a poverty of attention because it exceeds the number of eyes that can focus on it. It is of utmost priority for those in power (i.e. advertisers, political candidates, billionaires who back political candidates, and malicious hackers) that attention is directed and predictable. 

This is why Big Tech keeps LLMs overstuffed with an inflow of information so that when they drive social bots into online spaces, the bots will mirror exactly what is being given attention to, all while omitting the fact that it was computer-generated. 

It is imperative that you are none the wiser. Think about your timeline on your most-used social media platform. How easy is it to scroll past a sponsored ad? To discount it for what it claims? Something that appears classically manufactured has a lower chance of achieving virality. Forget about scientific journals, news articles, or sponsored testimonials; what takes precedence in the online social world is what feels credible/worth our time/funny. Take two different TikToks that endorse a certain brand of mouthwash. Would the “Sponsored ad” post made by the company that makes the mouthwash have a greater chance of promoting the product, or would the viral post made by someone with no prior affiliation? Especially if the post is anecdotal and entertaining because they talk about how the mouthwash saved them from being humiliated on their first date.

The second layer to this is that virality is often difficult to replicate by those who accidentally achieve virality once, which is why social bots are being trained off of every instance of virality that exists and will exist. 

What doesn’t help is that the new normal of LLMs in day-to-day life and the rush to keep the LLM-driven bots up with the online world ushers in a constant onslaught of security risks. After all, artificial intelligence is a mysterious black box of node layers trained for technological breakthroughs, but can easily expose organizations and individuals to devastating security outcomes and compromise the landscape of the human breakthroughs that it stands upon. 

That leaves just one question: When the U.S. Forest Service sets ablaze a strategic wildfire, how exactly do they control where it burns? 

The project

The project is simple, really. Bot? Or not? 

An election post calibrated to gain viral attention. Is it a rage-inducing bot or not?

Who even are you? 

My background can be summed up in five bullet points—

  1. One (1) computational machine learning class 
  2. One (1) UC Berkeley AI Hackathon 
  3. Four (4!) Cybersecurity courses 
  4. One (1) Cloud computing certification 
  5. One (1) minor degree in Sociology. I also have a Bachelor’s in CS, but it isn’t as relevant. 

This was the foundation I had to build upon at the time I started as an intern here at Redpoint. 

When first presented with this project, I felt both ill-prepared and hyperexcited to actually be a part of something, but before I could even start second-guessing my own capability, I realized I first had to confront the project from a non-technical standpoint.

Perhaps others can relate, but I’d been passively harboring nebulous thoughts about AI while learning skills that AI was supposedly going to make obsolete. I was preoccupied with that stressor and therefore had little understanding of social bots coming into this; I had no idea just how far they had come, or how convincing they had become, in the time between starting my degree and realizing that cybersecurity was the niche I wanted to be a part of. It was a no-brainer for me to care about these things, especially in the context of political disinformation, or how bots provide yet another opportunity for bad actors to compromise people’s digital security.

I should preface by saying that I’ve come a long way, but at the start of this I tried to ask the dumbest question so that I could work my way outward. The very first thing I did was look up:

“so what exactly is AI.” 

“how to build AI.”

“llm social bot”

“Twitter X scraping API limits.”

“ways to get around X scraping API limits.” 

“why is it impossible to get any data about literally anything anymore” 

Then, I walked myself through the tedious mathematical equations that make up a neural network. I learned that this technology has long lost the title of a contemporary discovery, dating all the way back to the 1940s when the term “computer” still referred to a human. I then had to reflect upon its journey; the reasons why artificial intelligence is only now blossoming despite the theory having been written out decades ago. This is all just to say that the issues we are facing today are singular, that they can’t be held relative to the concerns people had about computer graphics or the Internet. LLMs have led to a phenomenon where we are now unable to confidently identify what has been created by a human and what is the product of a distinct entity mimicking one. What’s worse is that because it’s becoming harder to tell, we’ve begun to just stop caring. 

So why bother? 

It’s not about moving forward. It’s about what we don’t know and what we can’t know. I can’t tell you what Claude’s weights and biases are, especially since I was raised to never ask a lady’s age—so even if I did know, that’d be downright rude of me to air that out and rude of you to ask! Jokes aside, we can’t control what we can’t know. As for what we don’t know, there are too many things to count, for instance, the biopolitics of LLMs. 

I am probably, most definitely, not the first to think that artificial intelligence’s widespread accessibility is a form of surveillance. In just a few short years, it has managed to worm its way into regular people’s hands, sometimes because an employer encouraged using it for writing emails and reports or for getting answers to badly worded questions. Students are using it to write entire ethics papers. I can’t imagine that such a decision comes only from laziness, because at their core, LLMs promise technical answers for complex human struggles. When its creators assure us that LLMs are constantly being trained, fine-tuned, and improved upon, I can imagine a reality in which a student feeds their insecurities by having the LLM write their assignment, then slowly becomes dependent on it, because they start to embody the idea that the LLM might be better at being the student than they are. The sheer force of computation that LLMs require to be as sophisticated as they are should be taken into consideration as well. There are two kinds I’d like to introduce as food for thought:

  1. The material force – The natural resources that are drained for the compute alone will have devastating environmental consequences that we are sure to see more of in the future. 
  2. The invisible force – the impalpable processes that have made the body machine-readable, that have allowed (Big Tech’s) artificial intelligence to reconfigure what we identify as truth (the AI result is always the top result!) and what we consider to be a problem that requires a technical solution. Ethics papers, apparently.    

In summary, if we can’t know what is making the LLM so powerful, we at least deserve knowledge about what they’re doing for as long as the modern world is going to necessitate the use of LLMs. Human-machine entanglements have existed since the invention of the wheel, since nuclear power, since the cloud. These inventions need someone to take responsibility for their making since they are now unable to be undone. Moving forward in an AI reality isn’t about throwing gasoline everywhere thinking it’ll all burn down anyway, but at least setting a precedent for the fire. Establishing a plan for the fire. Understanding the damage a stray ember is capable of when it’s met with the lightest of breezes. I believe there is a point to that. After all, cybersecurity is about confronting endemic insecurity. 

If tomorrow all datafication halted, if the AI UIs went down and the API keys were no more, the LLMs would inevitably plateau. It astounds me whenever human beings are described the same way one would describe Google Jamboard: pointless and deprecated.

The irony is that LLMs are trying to learn exactly that, aren’t they? How do humans do unprogressive things? How do humans say the wrong thing in the most perfect way you can say the wrong thing? How are humans so good at being mean? How are they so good at being mean to themselves? 

This is why we here at Redpoint have decided that there should at least be a dynamic taxonomy for profiling what social bots are doing in our online world: how they ingest data, how they self-reflect, how they make your organization vulnerable to MCP malware, and, well, in our case, how they piss off some hardcore MLB fans. Stay tuned.

The post The Loop that Feeds Itself: How human data trains machines that train us appeared first on Redpoint Security.

]]>
Navigating the AI Frontier: How Redpoint Security is Integrating Artificial Intelligence into Application Security https://redpointsecurity.com/ai-at-redpoint-security/?utm_source=rss&utm_medium=rss&utm_campaign=ai-at-redpoint-security Wed, 28 May 2025 16:36:13 +0000 https://redpointsdev.wpenginepowered.com/?p=1563 Artificial intelligence is rapidly transforming industries, and application security is no exception. At Redpoint Security, we’ve been on a journey to understand and leverage the power of AI, not just to enhance our own capabilities, but also to help our clients navigate the evolving threat landscape and securely incorporate AI into their own applications and […]

The post Navigating the AI Frontier: How Redpoint Security is Integrating Artificial Intelligence into Application Security appeared first on Redpoint Security.

]]>
How we learned to stop worrying and love the bot.

Artificial intelligence is rapidly transforming industries, and application security is no exception. At Redpoint Security, we’ve been on a journey to understand and leverage the power of AI, not just to enhance our own capabilities, but also to help our clients navigate the evolving threat landscape and securely incorporate AI into their own applications and workflows.

Redpoint Security’s history with AI

Our initial foray into AI began with the development of a machine-learning feature within Surveyor, our proprietary security-analyzer tool. This feature was designed to evaluate scripts for indications of malicious behavior: when a new script appeared in an application being served to a customer, Surveyor could evaluate its potential risk and provide an ML-informed score for the likelihood that the script represented an injection-type attack targeting the customer’s application. This early use of AI in our tool development geared us up to see potential use cases for AI tooling, especially in the last couple of years as these tools have become increasingly powerful.

Using powerful new tools in Redpoint’s day-to-day

In our day-to-day work, we’ve seen the impact of AI tools in speeding up our consultants’ application information gathering and basic risk analysis. To that end, we’ve been actively testing and evaluating various AI models for their efficacy in analyzing application code to identify potential vulnerabilities and risks. This includes exploring how AI can automate or accelerate initial code reviews and flag suspicious patterns. Our work here is increasingly informed by platforms like Arize, which help us rigorously evaluate the performance and reliability of these security-focused AI models.

As AI features become more prevalent in applications – with features such as chatbots, recommendation engines, and automated content generators – the need to secure these features themselves has grown dramatically. Redpoint Security has been at the forefront of performing application security tests specifically on our clients’ AI implementations, such as conversational chatbots. This testing process has matured through repeated engagements. We’ve put together examples of successful exploits, and recently, we’ve found Arcanum Security’s prompt-injection taxonomy to be incredibly valuable in making sure we cover all the bases. It provides a structured approach to identifying potential weak spots and has been instrumental in creating comprehensive checklists for our client engagements, ensuring good coverage of possible vulnerabilities related to adversarial interactions with AI.

Training developers and security professionals to incorporate AI tooling

Beyond testing and analysis, there’s a significant and growing interest from developers and security engineers in understanding how they can safely and effectively incorporate AI into their daily workflows. Recognizing this need, Redpoint is fortunate to have a founder who has consistently been on the cutting edge of security developments. As a co-host of the Absolute AppSec podcast, he stays abreast of innovations across the entire information security industry. Specifically in the area of Large Language Models (LLMs), he and Ken Johnson have developed an industry-leading course, “Harnessing LLMs for AppSec,” a two-day course that dives deep into practical ways security professionals can utilize these powerful new tools in their day-to-day tasks. Interested individuals can sign up for virtual offerings or attend the first in-person session at DEFCON this year. More details can be found at training.absoluteappsec.com. The insights gained from developing and teaching this course give Redpoint Security the experience to offer our clients a range of security training options tailored to their specific needs, and we’ve already provided lunch-and-learn sessions on getting started with AI tooling for a range of our clients. (Reach out to us in the form below if we can help with training your teams of developers and security engineers.)

Redpoint Security Interns’ Bot-or-Not project

We’ve also been committed to working with interns on the latest technologies and on the security concerns raised by new industry developments. As a result, this year Redpoint Security interns have worked on a “Bot-or-Not” project in which they’ve created bots capable of carrying out conversations and arguments of varying degrees of sophistication. From these bots, they hope to use lessons learned to generate checklists of bot indicators, as well as a set of tools that may one day help with bot-spotting out in the wilds of the social-media internet. As one of our interns says,

Redpoint has taken special interest in AI’s ability to skew analytics, spread disinformation, execute spam campaigns, and perform scraping or reconnaissance on apps and data. We are invested in learning how to catch AI’s tricks and, in return, harness its power. We are doing independent research into AI detection methods, studying hallucination patterns and the nuances of bot believability. Understanding how AI fools us, and how we might be able to fool it right back, will be a powerful tool in future security toolkits.

Redpoint Security Intern on the Bot-or-Not Project

AI is making Redpoint Security tooling more effective

Perhaps most excitingly, the latest evolution of Redpoint Security’s proprietary Surveyor tool now integrates AI capabilities to perform sophisticated run-time analysis of applications. Surveyor can dynamically analyze an application as it runs, highlighting potential vulnerabilities related to common classes of bugs, including those listed in the OWASP Top Ten. What sets Surveyor apart is its ability to not only flag potentially vulnerable endpoints and parameters but also provide detailed test instructions for manually verifying these findings. This empowers security teams to quickly investigate and confirm potential issues, improving the efficiency and accuracy of vulnerability detection.

For a hint at what Surveyor is capable of when making use of LLM agents, check out our founder Seth’s recent talk at BSidesSLC, where he highlights how AI is helping find User Enumeration vulnerabilities in running applications.

Faces in the Fog: Identifying Users through Unconventional Means at BSidesSLC – April 2025

In conclusion, artificial intelligence is more than just a buzzword at Redpoint Security; it’s integral to how our services are evolving to meet the challenges of modern application security. From leveraging AI in our initial analysis tools and testing client-side AI features, to providing cutting-edge training and enhancing core products like Surveyor, we are committed to harnessing the power of AI to empower developers and security teams and to protect applications from ever-evolving threats. Please reach out to us here if you’d like to discuss how we can help you, your application, and your security and development teams.


The post Navigating the AI Frontier: How Redpoint Security is Integrating Artificial Intelligence into Application Security appeared first on Redpoint Security.

]]>
Breaking Bad: How to Identify and Overcome Destructive Fatigue https://redpointsecurity.com/identify-and-overcome-destructive-fatigue/?utm_source=rss&utm_medium=rss&utm_campaign=identify-and-overcome-destructive-fatigue Thu, 06 Mar 2025 18:45:10 +0000 https://redpointsdev.wpenginepowered.com/?p=1556 Introduction In fields that require constant analysis, critique, and problem-solving—such as information security, auditing, and quality assurance—there’s a unique form of burnout that many professionals experience: destructive fatigue. Unlike traditional burnout, which is often tied to excessive workload, destructive fatigue stems from the mental toll of constantly tearing things down without opportunities to build. This […]

The post Breaking Bad: How to Identify and Overcome Destructive Fatigue appeared first on Redpoint Security.

]]>

Introduction

In fields that require constant analysis, critique, and problem-solving—such as information security, auditing, and quality assurance—there’s a unique form of burnout that many professionals experience: destructive fatigue. Unlike traditional burnout, which is often tied to excessive workload, destructive fatigue stems from the mental toll of constantly tearing things down without opportunities to build. This can lead to cynicism, disengagement, and a loss of motivation, making even the most skilled professionals feel stuck in a cycle of negativity.

I have been suffering from this for the last few years and didn’t know it until I entered what I was feeling into ChatGPT. I knew I was growing tired of breaking things and essentially telling developers that their babies were ugly, but I didn’t know why. Putting how I was feeling into ChatGPT opened my eyes to the problem and also started me down the path to a solution.

I am not trying to disparage what I do for work; I truly love what I do, but it does take a toll on me. I think information security in general has this problem, and it’s why I hear so many people in the industry fighting the urge to quit and start a hobby goat farm (maybe that’s just me). When I was a teenager I worked a few trade jobs, and there was always a great sense of accomplishment when we’d finish a building. To this day we drive by buildings I helped build during that time, and my wife hates that I remind her of it. I do feel like we can have that sense of accomplishment in pentesting, though it’s a little more short-lived. At Redpoint we celebrate finding hard vulnerabilities by sharing in Slack what we find and how we figured it out, or by using them as examples in the trainings we give. I have worked at other places where everyone was so used to finding vulnerabilities, or was suffering from fatigue, that they didn’t care about finding cool stuff; it was just viewed as more work to put another issue into Jira.

I know some of my symptoms include cynicism and loss of motivation. It’s hard to be positive and to keep pushing in the job after seeing some of the things I’ve seen relating to code and applications. It does make me feel at least a little bit safe that LLMs have a few years to go before they will replace all of us. Another symptom I have noticed is increased levels of imposter syndrome. There is always a certain level of imposter syndrome for me, but I’ve definitely felt it increase during this fatigue. I have also suffered from avoidance or procrastination. This has more effect on things outside of work like the need to clean out the garage or fix my son’s door that has been off for over a month (sorry, dude). 

Although, without really knowing it, I was already executing some of the recommended solutions. At Redpoint we get time to code, which I have noticed is the work where it’s easiest for me to stay focused and motivated. Coding during work is a great way to help overcome destructive fatigue. Outside of work, the best thing I did was to start building myself through weightlifting. I originally started lifting for the physical benefits, but I have since found it has an even more positive effect on my mental health. I really feel lost on days when I don’t get to work out in the morning; it puts me in such a better mood to handle the day, and I think my wife and kids would agree. Additionally, mountain biking provides the same benefits, allowing me to get outdoors, break a sweat, and soak up vitamin D. The only downside is that it’s seasonal, and I am not ready to fat bike in the snow. As I look back over the last year or so, home projects like building shelves for the garage or a pergola for the backyard also helped me break away from the destructive fatigue of work.

An image from Justin’s recommended dirt therapy for combatting destructive fatigue

At the end of the day we all deal with a lot of stress, and we need outlets or else it will bottle up and slowly kill us. These things have helped me. I know some people don’t want anything to do with weight training or going outside, so just do you, but find something that can help overcome the destruction. Side note: I know there is a lot of crossover between these symptoms and depression. I am not a doctor, and neither is ChatGPT; if you feel truly depressed, seek medical help. I asked ChatGPT for the symptoms of and solutions for destructive fatigue, and below is that list. It helped me see that some of my issues were a result of fatigue and not because I suck as a human.


Signs and Symptoms of Destructive Fatigue

Destructive fatigue manifests in various ways, affecting mental, emotional, and physical well-being. Here are some common signs:

Mental & Emotional Signs:

  • Lack of Motivation – You no longer feel excited about your work or personal projects.
  • Cynicism or Jadedness – You find yourself believing that all code is bad and security is hopeless.
  • Mental Exhaustion – You struggle to focus, often zoning out even during important tasks.
  • Loss of Interest in Building – Coding and creative pursuits no longer feel fulfilling.
  • Feeling Stuck in a Negative Loop – You see problems everywhere but feel powerless to change them.
  • Irritability & Short Temper – Even minor issues feel frustrating and overwhelming.

Physical Signs:

  • Constant Fatigue – Waking up tired, even after a full night’s sleep.
  • Sleep Problems – Insomnia or difficulty staying asleep due to stress.
  • Body Aches & Tension – Particularly in the shoulders, neck, and back from prolonged screen time and stress.
  • Digestive Issues – Chronic stress can lead to appetite changes, nausea, or stomach pain.
  • Increased Heart Rate or Anxiety Symptoms – Feeling on edge, like your nervous system is in overdrive.

Behavioral Signs:

  • Procrastination & Avoidance – Putting off security assessments or delaying reports.
  • Doomscrolling or Excessive Screen Time – Using social media or distractions as an escape.
  • Loss of Hobbies or Interests – No longer engaging in things you used to enjoy.
  • Neglecting Health or Fitness – Skipping workouts or eating poorly.

Why Destructive Fatigue is Harmful

If left unchecked, destructive fatigue can have long-term consequences, both personally and professionally:

  1. Reduced Work Performance – Fatigue affects concentration, leading to overlooked vulnerabilities or mistakes.
  2. Career Dissatisfaction – The constant cycle of breaking things without creating can make a once-passionate job feel empty.
  3. Mental Health Decline – Persistent stress can contribute to anxiety, depression, and burnout.
  4. Physical Health Problems – Chronic fatigue and stress-related ailments can take a toll on the body.
  5. Strained Relationships – Irritability and disengagement can impact both professional and personal relationships.

How to Overcome Destructive Fatigue

The key to overcoming destructive fatigue is balancing critique with creation and stress with recovery. Here are strategies to help:

1. Shift Focus from Breaking to Building

  • Work on personal coding projects that allow for creativity.
  • Contribute to open-source tools or build scripts that improve security workflows.
  • Explore anything that fuels curiosity.

2. Engage in Hands-on Creativity

  • Try woodworking, 3D printing, or music production.
  • Experiment with digital art, graphic design, or photography.
  • Build something physical—tangible projects can be more rewarding than digital ones. (Build that deck your spouse has been bugging you about) 

3. Prioritize Physical Activity

  • Weightlifting (especially if you enjoy it) can provide structure and a sense of progress.
  • Consider activities like martial arts, climbing, or cycling that challenge both mind and body.
  • Even simple habits like daily walks can help reset your mental state.

4. Redefine Your Role in Security

  • Focus on mentorship and teaching—help developers improve security proactively.
  • Transition into purple team or security engineering roles, where you help build defenses.
  • Engage in threat modeling or DevSecOps to move from finding problems to solving them.

5. Set Boundaries and Reduce Mental Load

  • Limit work outside of work—disconnect from security discussions after hours.
  • Avoid excessive doomscrolling and security drama on social media.
  • Take structured breaks throughout the day to reset your focus.

6. Connect with Like-Minded People

  • Join or start a local maker space, coding club, or security meetup.
  • Play cooperative games, board games, or join a D&D group—something collaborative.
  • Spend time with people outside of tech to shift your perspective.

Final Thoughts

If you’re in a role that focuses heavily on critique, it’s essential to balance that with creation and progress. Recognizing destructive fatigue is the first step in preventing burnout and keeping your passion for security (or any field) alive. Whether it’s coding, lifting weights, building things, or shifting your role, finding ways to build instead of just break will help restore your energy and motivation.


References

  1. Maslach, C., & Leiter, M. P. (2016). Burnout: The Cost of Caring. Psychology Today.
  2. Newport, C. (2016). Deep Work: Rules for Focused Success in a Distracted World. Grand Central Publishing.
  3. Selye, H. (1976). The Stress of Life. McGraw-Hill.
  4. McGonigal, K. (2015). The Upside of Stress: Why Stress Is Good for You, and How to Get Good at It. Avery.
  5. National Institute for Occupational Safety and Health (NIOSH). (2019). Job Stress & Occupational Health Report.

The post Breaking Bad: How to Identify and Overcome Destructive Fatigue appeared first on Redpoint Security.

]]>
How AI and LLMs Will Shape AppSec in 2025 https://redpointsecurity.com/how-ai-and-llms-will-shape-appsec-in-2025/?utm_source=rss&utm_medium=rss&utm_campaign=how-ai-and-llms-will-shape-appsec-in-2025 Fri, 17 Jan 2025 02:05:34 +0000 https://redpointsdev.wpenginepowered.com/?p=1550 Four Predictions for AppSec in 2025 By Ken Johnson and Seth Law In this joint blog from Seth Law at Redpoint and Ken Johnson at DryRun Security, we highlight how 2025 will be a pivotal year for large language models (LLMs) in AppSec. Building on the momentum of 2024, LLMs are moving from novelty to […]

The post How AI and LLMs Will Shape AppSec in 2025 appeared first on Redpoint Security.

]]>
Four Predictions for AppSec in 2025

By Ken Johnson and Seth Law

In this joint blog from Seth Law at Redpoint and Ken Johnson at DryRun Security, we highlight how 2025 will be a pivotal year for large language models (LLMs) in AppSec. Building on the momentum of 2024, LLMs are moving from novelty to necessity, enabling deeper code analysis, automating security workflows, and providing real-time developer assistance. Organizations that adopt AI-driven AppSec will find and fix vulnerabilities faster, freeing security teams to focus on high-value tasks—and ultimately ship more secure code.

It’s no secret that artificial intelligence (AI)—particularly large language models (LLMs)—has taken the tech world by storm. While 2024 saw significant strides in how security practitioners applied AI to scanning and development workflows, 2025 looks poised to be even more transformative. Below are a few key predictions on how AI will influence application security (AppSec) and why you should pay attention.

1. AppSec Will Fully Embrace LLMs

A year or so ago, discussions about AI’s relevance in AppSec ranged from mild curiosity to outright skepticism. That skepticism has largely disappeared. Now, more and more AppSec professionals see how LLMs can automate tedious processes and supplement expert-driven reviews. In 2025, we’ll see:

Shifting from “Is AI worth it?” to “Where can we apply AI next?”
Rather than questioning whether LLMs have a place, AppSec teams will begin embedding AI-driven tools wherever they can bring tangible benefits—such as pulling in threat data, analyzing code changes, and generating automated patch suggestions.

A rethinking of traditional security approaches:
LLMs excel at contextual understanding, especially as context windows grow larger and models become “agentic,” meaning they can iterate through multiple steps or queries. Security teams will rethink processes (like threat modeling) that used to be manual, consolidating them into fluid, AI-backed workflows.

2. More Nuanced, Context-Rich Analysis

One of the historical pain points with AI-based code analysis was context window length. If an LLM or AI agent couldn’t handle your entire codebase all at once, you ended up with either incomplete or inaccurate results. In 2025:

Long Context Windows and Agentic AI Will Be the Norm
When an LLM can “remember” and analyze vast portions of your code, it becomes far more capable of spotting both common and niche vulnerabilities. Agentic AI effectively chains tasks, learns from each query, and can refine results over time.

Security Practitioners Will Rely on AI to Get Deeper Insights
Manual techniques like static analysis or searching for known vulnerabilities will be augmented by LLMs that can correlate multiple parts of a project. Rather than triaging individual findings, developers and AppSec engineers will benefit from LLMs that surface complex vulnerability chains or logic flaws hidden deep in the code.

3. The Rise of “Agentic” Security Orchestration

Security platforms have long offered drag-and-drop workflows for event handling—think of solutions like Tines, which provide no-code automation for everything from creating Jira tickets to sending out alerts. As LLMs become more powerful:

AI-Driven Orchestration Tools Will Emerge
Imagine chaining multiple specialized AI “agents” together, where one agent monitors new CVE data while another checks your repositories for vulnerable dependencies. A third agent might spin up automated proof-of-concept exploits if it suspects an issue. This isn’t far off—several startups and open-source tools are already heading in that direction.
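To make the chaining idea concrete, here is a toy Python sketch of that kind of agent pipeline. Everything here is hypothetical—the agent names, the CVE, the repository—and real implementations would call LLMs and live data feeds rather than the stubs shown:

```python
# Toy sketch of "agentic" chaining: each agent is a step that enriches
# a shared context dict and hands it to the next agent in the pipeline.

def cve_monitor_agent(ctx):
    ctx["new_cves"] = ["CVE-2025-0001"]  # stub: would poll a CVE feed
    return ctx

def dependency_agent(ctx):
    # stub: would cross-check repositories against ctx["new_cves"]
    ctx["affected_repos"] = ["payments-api"] if ctx["new_cves"] else []
    return ctx

def triage_agent(ctx):
    # stub: would open tickets or draft proof-of-concept checks
    ctx["tickets"] = [f"Investigate {cve} in {repo}"
                      for cve in ctx["new_cves"]
                      for repo in ctx["affected_repos"]]
    return ctx

def run_pipeline(agents, ctx=None):
    ctx = ctx or {}
    for agent in agents:
        ctx = agent(ctx)
    return ctx

result = run_pipeline([cve_monitor_agent, dependency_agent, triage_agent])
print(result["tickets"])  # -> ['Investigate CVE-2025-0001 in payments-api']
```

The design choice worth noting is the shared context: each agent only needs to understand the dict, so agents can be added, removed, or swapped for LLM-backed versions without rewiring the pipeline.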

Developers Get Security Assistance Without Leaving Their IDE
By 2025, many dev environments will likely come with built-in AI “assistants” that can provide context-specific security advice in real time. We’ll see a blending of DevOps, security automation, and AI, helping dev teams ship secure code faster.

4. 2025: A Pivotal Year for AI-Driven AppSec

Above all, there’s a sense of optimism and excitement in the air. If 2024 was the year LLMs finally got a foot in the door, 2025 is when they’ll take center stage.

Innovation Explosion
As more companies see tangible ROI—like fewer vulnerabilities making it to production, or significant time savings in code reviews—expect a new wave of startups and product features. We’ll likely see everything from AI-based threat modeling to continuous compliance checks that happen entirely behind the scenes.

A Culture Shift
Security teams will have less manual busywork and more time to focus on high-value tasks: deeper analysis, custom threat research, and meaningful engagement with developers. Once seen as a “bolt-on” solution, AI tools will become an integral part of the AppSec culture.

Conclusion

Put simply, 2025 will be the year we stop treating LLMs and agentic AI as experiments and start embracing them as must-have elements of our security strategy. From deeply contextual analyses of codebases to automated orchestrations of AppSec workflows, these tools will reshape how we plan, develop, and protect software.

The only question left is: How will you use them?

Now is the perfect time to engage with a trusted security partner to evaluate the use of LLMs to solve application security challenges. The possibilities are endless, and as the technology matures, companies that adopt intelligent workflows will be the ones setting the standard in AppSec. Reach out to see how Redpoint Security can help you incorporate AI tools into your security instrumentation and improve the results you get from the LLM tools you already have in place. If you would like help now from a team that is already using AI in its own workflows, send us a message and we will schedule time to discuss options for getting started.

The post How AI and LLMs Will Shape AppSec in 2025 appeared first on Redpoint Security.

]]>
Redpoint Security Interns at DEFCON32 https://redpointsecurity.com/redpointsec-interns-defcon32/?utm_source=rss&utm_medium=rss&utm_campaign=redpointsec-interns-defcon32 Thu, 15 Aug 2024 17:16:34 +0000 https://redpointsdev.wpenginepowered.com/?p=1487 Adelyn Wengreen, a first-time Def Con attendee My first experience at DEF CON was awesome. As someone still new to this industry, I had no idea what to expect going in, but I really enjoyed the whole weekend. My favorite presentation was My Conversations With a GenAI-Powered Virtual Kidnapper by Perry Carpenter. He talked about […]

The post Redpoint Security Interns at DEFCON32 appeared first on Redpoint Security.

]]>
Redpoint Security interns, Addey and Norah, take in DEFCON for the first time.

My first experience at DEF CON was awesome. As someone still new to this industry, I had no idea what to expect going in, but I really enjoyed the whole weekend. My favorite presentation was My Conversations With a GenAI-Powered Virtual Kidnapper by Perry Carpenter. He talked about how he manipulated different AI systems to go beyond their programmed restrictions and censorship and how easy it is for people to use these large language models to carry out just about any task.

In his presentation, he also touched on deepfakes and how they’re slowly taking hold of the internet. It’s wild to think about how, as a society, we’re already so desensitized by social media that it’s becoming harder and harder to spot fake images, videos, and voiceovers. Hearing Carpenter’s take on the darker side of these emerging technologies was really interesting. He even managed to talk multiple AI systems into voicing over a fake kidnapping call that sounded so realistic it could easily scare anyone into doing whatever the “kidnapper” demanded. By simply rewording a few phrases or assuring the AI that this was just a “simulation,” the tools were more than happy to comply and play along. It’s crazy to think we’re living in a world where we have to trick computers into believing in THE simulation! (If you’d like to read some of Perry’s impressions regarding DEFCON’s Social Engineering village and AI, check out his post here.)

Aside from Perry Carpenter’s presentation, I really enjoyed the overall DEF CON experience. The hands-on activities were super engaging and fun. I’ve always wanted to learn how to pick a lock, and I’m happy to report that I now know how. The Retro Village had some computers and video games older than me, which was cool to see. On the last day, I made it into the Social Engineering Village, where people volunteered to make live phishing calls to real companies. That was super fun to watch. Another highlight for me was the War Stories Village. Hearing real stories of cybersecurity going wildly wrong or wildly right was so cool.

But honestly, the coolest part of the whole experience was the sense of community. Among the 40,000 people who attended DEF CON, everyone I met was so nice and inclusive. People were excited for each other, encouraging in the competitions, helpful to those of us who were new (like me), and just generally happy to be there. It was such a welcoming environment, and I feel fortunate to have had the opportunity to attend. I highly recommend the DEF CON experience to anyone who gets the chance to go!

To my own surprise, DEFCON 32 proved to be worth the torturous Vegas heat. 

There is so much to do and explore here, from rooting on the cold call competitors in Social Engineering Village to exploring the immersive world inside AIxCC Village or even testing your spy skills in the Physical Security Village. DEFCON has everything: countless incredible speakers, yes, but also hands-on opportunities to do what hackers ultimately love to do— to meddle. Want to try hacking a car? Great, head on down to the car-hacking village. Video games? There’s a place for that. Want to poke around on an early-generation Mac? They have a corner full of retro tech for your pleasure. It was a playground of exploration, friendly competition, and inspiration to learn something new. 


Norah playing around on classic machines at DEFCON

For the CON, downloading the Hacker Tracker app is a must. There are just so many opportunities at your fingertips that it can be hard to keep track of them all. The app helped us prioritize the talks we wanted to see and showed us events that pushed us into villages we might otherwise not have explored. 

A highlight for me was the uniqueness of the War Stories village, which platformed unusual stories of hacker history, covering FBI plots, security disasters, and more. The talks were incredibly engaging, given by genuine experts who offered new perspectives alongside their history lessons. By the end of these talks, I walked away feeling more knowledgeable about this field’s impact on national and global history.

Speaking of impact: while the War Stories village reminded me of security history, Cory Doctorow’s keynote called this community to action, encouraging us to invest in the future of what he calls the ‘new, good internet.’ His talk, entitled ’Disenshittify or die,’ traced the route the internet has taken to find itself here: a place where money is placed above user needs and cybersecurity experts are consistently disregarded. (Doctorow is a special advisor to the Electronic Frontier Foundation, where you can find more details about his keynote.) He encouraged hackers to stand up to leadership that chases only allocated value, and instead to work toward a safer, better internet. Doctorow is a well-practiced speaker and a known face at DEFCON. The room was packed for his talk, and for good reason: he passionately believes in the power of this community and the good it can do. His talk ended up being one of my favorites. I have been following the EFF since the beginning of my passion for digital advocacy, so it was cool to watch Doctorow connect hackers to the EFF’s mission.

As a newbie in the industry, I was amazed at the diversity and eccentricity the conference embraced. This space is one of inclusivity, with resources for marginalized hackers to be supported and physical spaces dedicated to fostering their voices. DEFCON is evidence of how invested this community is in supporting and inspiring one another. See you at DEFCON 33!

– Norah Law, a new DEFCON enthusiast

Redpoint Security runs a summer internship program for High School and College-Aged students who are interested in various aspects of the application security industry, including gathering technical know-how regarding application penetration testing, application and mobile app development, as well as business operations and marketing. If you would like to find out more, fill out the form below and we’ll contact you.

The post Redpoint Security Interns at DEFCON32 appeared first on Redpoint Security.

]]>
The experience of a beginner in the field of Appsec. https://redpointsecurity.com/early-appsec-career-experiences/?utm_source=rss&utm_medium=rss&utm_campaign=early-appsec-career-experiences Fri, 09 Aug 2024 18:13:01 +0000 https://redpointsdev.wpenginepowered.com/?p=1483 My name is Trevon Greenwood, and I am a Junior Security Analyst at Redpoint Security. This post outlines my experience as a beginner in the field and what a day at work looks like for me. I have been with Redpoint for just over a year now, so I think I’ve accrued enough experience as […]

The post The experience of a beginner in the field of Appsec. appeared first on Redpoint Security.

]]>
My name is Trevon Greenwood, and I am a Junior Security Analyst at Redpoint Security. This post outlines my experience as a beginner in the field and what a day at work looks like for me. I have been with Redpoint for just over a year now, so I think I’ve accrued enough experience as a beginner to teach others how to be the best beginner they can. Whether in the Application Security field or other career paths, I hope you can learn something from me.

My background in web applications began with front-end web development and UX/UI design. I took a few boot camps/certification courses in that field before I knew much about offensive security. Around this time, Justin, a security consultant at Redpoint, suggested I consider penetration testing and pointed me toward two helpful, beginner-friendly resources for learning about offensive security: OffSec and Burp Suite’s certification courses. If you’re unfamiliar with OffSec, it is a massive learning library for many facets of offensive security. Offering hundreds of learning modules and classes, OffSec covers all kinds of training at any experience level. 

Trevon recommends Portswigger’s Web Security Academy as a training resource

Burp Suite’s certification course is another learning resource that has been an enormous help. If you aren’t familiar with it, Burp Suite is an intercepting proxy: an application that sits between the client and the web server, allowing you to view and modify requests made to the server. There are several other popular intercepting proxies, such as OWASP ZAP or Fiddler, but Burp is the one I prefer. Burp Suite’s course teaches you to find all kinds of vulnerabilities while familiarizing yourself with Burp’s interface and workflow.

After gaining familiarity with these courses through lots and lots of practice, I was allowed to do some contract work for Redpoint Security. This contract work was an “interview” process to gauge my abilities in a real-world setting. After a couple of months, they decided to bring me on as a full-time employee!

That was just over a year ago. Since then, my work days have been filled with nonstop learning. Learning the testing process, from gaining new clientele to delivering complete reports, has been enjoyable. There is most often at least one project to work on every day. My primary role for return clients is to confirm whether or not they have made fixes to vulnerabilities found in previous tests done by Redpoint.

On top of that, a lot of the fun comes when we deal with new clients, as their apps haven’t been tested by us previously. This means I get to test my skills and find as many vulnerabilities as possible before the senior consultants get their hands on the app. This also can serve as a good benchmark for my performance to see if I am improving. 

The last thing I want to cover is challenges and how to deal with them. My biggest challenge is one I’m sure you are familiar with: impostor syndrome. It gets just about everyone in a workplace like this, and it can feel overwhelming. Being the least experienced person in the room is a recipe for impostor syndrome. The complex systems vary so much from client to client that it can be easy to feel like I know absolutely nothing. But that’s precisely the point. As a beginner, you are expected to know significantly less than your seniors. Acknowledging that you know very little compared to the experts in your field is a great way to ground yourself in reality and realize that you are simply at one step in a long learning journey. In a field that changes as much as this one, you are not the only person who feels behind. Asking questions will also help you build relationships in the workplace, giving you a consistent support system you can trust. 

So that is my experience as a beginner pentester! I still have much to learn, but I already know so much more than last year because I have strived to learn as much as possible while I have time to do so. As with any other growth you may seek, it takes time and patience, both with yourself and the process. With that in mind, if you are a beginner, I hope this has helped boost your confidence in your experience. Or, at the very least, it helped give you insight into what a beginner is going through in this field.

The post The experience of a beginner in the field of Appsec. appeared first on Redpoint Security.

]]>
AppSec Travels 3: Account Takeover  https://redpointsecurity.com/appsec-travels-3-account-takeover/?utm_source=rss&utm_medium=rss&utm_campaign=appsec-travels-3-account-takeover Tue, 16 Jul 2024 22:02:49 +0000 https://redpointsdev.wpenginepowered.com/?p=1463 During a recent assessment, our team came upon a vulnerability that felt like finding a hidden door in a seemingly secure fortress. The discovery involved the password-reset mechanism of an application, allowing us to reset any user’s password with just their email address. This flaw circumvents authentication, giving unauthorized access to user accounts. Here’s how […]

The post AppSec Travels 3: Account Takeover  appeared first on Redpoint Security.

]]>
During a recent assessment, our team came upon a vulnerability that felt like finding a hidden door in a seemingly secure fortress. The discovery involved the password-reset mechanism of an application, allowing us to reset any user’s password with just their email address. This flaw circumvents authentication, giving unauthorized access to user accounts. Here’s how we uncovered it and what it means for application security.

When tokens lead to Account Takeover, it’s like the magical key Link uses in the Zelda game.
An overpowered sensitive token or one that fails to correspond exclusively to a defined user provides unintended access

Close but not quite

Recently we were tasked with testing an API: a simple, straightforward assessment with only a handful of endpoints. We were supplied the source code in order to validate findings and offer better recommendations on fixes. As part of our approach, we focus primarily on authentication and authorization issues. In our view these are some of the most prevalent failings of modern applications and can lead to the greatest amount of disruption. During the assessment, we were validating the forgot-password mechanism. Generally the issue we find with this function is username enumeration, which was present here. However, as we looked closer we found the forgot-password flaw could also be leveraged into an account takeover, which is what we want to discuss in this post.

Forgot Password

The forgot-password process is straightforward: a user gives the application their email address, and the server sends a link to that address with a way to reset their password. The link contains a token, or some value associated with resetting the password. The image below shows the link from the email, which contains the password-reset token and the user’s email address.

Email received after a password reset request for [email protected]

When the user clicks the link, it takes them to a page where they enter a new password and submit the form. Behind the scenes, the PUT request for resetting the password takes the forgot-password token, the new password, and the user’s email address from the link.

Password reset request

Here is where the vulnerability enters the process. If you look at the above screenshot, the email address has changed from [email protected] to [email protected], but the password-reset token is the same. The request was accepted by the server, which returned a 204 response. After submitting the password reset for the admin user, we validated that it worked, and the screenshot below shows that it indeed did.

So what happened here? The code that creates the password-reset token was properly implemented: it generated a new token, associated it with the email of the user requesting the reset, and set it in a Redis cache with a 24-hour time to live.
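For illustration, the creation side might look something like this. This is a hedged Python sketch of what we just described, not the client’s actual code: a plain dict stands in for the Redis cache, and all names are ours.

```python
import secrets
import time

reset_tokens = {}  # stand-in for the Redis cache: token -> (email, expiry)
TOKEN_TTL_SECONDS = 24 * 60 * 60  # 24-hour time to live, as in the assessed app

def create_reset_token(email: str) -> str:
    """Generate an unguessable token tied to the requesting user's email."""
    token = secrets.token_urlsafe(32)
    reset_tokens[token] = (email, time.time() + TOKEN_TTL_SECONDS)
    return token
```

Note that `secrets.token_urlsafe` gives a cryptographically unguessable value, so the token itself was not the weak link here.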

The problem: the validation check misses a detail

The problem occurred when validating the password-reset token. The code only validated that the token existed in the Redis cache; it did not validate that the supplied email address was correlated to the token. The code below shows only the password-reset token being validated.
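To make the flaw concrete, here is a hedged Python sketch of that kind of validation routine—our reconstruction for illustration, not the application’s source, with a dict standing in for Redis and a toy user store so the example runs end to end:

```python
import secrets
import time

# Stand-in for the Redis cache: token -> (email the reset was requested for, expiry)
reset_tokens = {}

# Toy user store so the sketch is runnable
users = {"attacker@example.com": "old-pw", "admin@example.com": "old-pw"}

def create_reset_token(email: str) -> str:
    token = secrets.token_urlsafe(32)
    reset_tokens[token] = (email, time.time() + 24 * 60 * 60)
    return token

def set_user_password(email: str, new_password: str) -> None:
    users[email] = new_password

def reset_password_vulnerable(token: str, email: str, new_password: str) -> bool:
    """VULNERABLE: only checks that the token exists and is unexpired.
    The email from the request body is never compared to the email the
    token was issued for."""
    entry = reset_tokens.get(token)
    if entry is None or entry[1] < time.time():
        return False
    set_user_password(email, new_password)  # attacker controls `email`
    return True

# Attacker requests a reset for their OWN address...
token = create_reset_token("attacker@example.com")
# ...then replays the same token with the admin's address.
reset_password_vulnerable(token, "admin@example.com", "pwned")
print(users["admin@example.com"])  # -> pwned
```

The last three lines are exactly the replay we performed against the real application: a legitimately issued token, paired with someone else’s email.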

A missing check that the password-reset token corresponded to a specific user allowed an attacker to reset any user’s password. There were also multiple instances of username enumeration throughout the application, so the likelihood of a successful account takeover increased greatly. We would love to say this is the only time we have seen this issue, but unfortunately it is not.

And we have seen it worse.

In one instance during a test, we noticed that we were given a password-reset token that both didn’t expire and could be used an unlimited number of times. To reset a password you had to know the user’s 8-digit ID, but that was trivially easy to iterate through, letting us reset ALL users’ passwords in just a few minutes. So the ATO we discuss above actually could have been even more damaging. The fix we suggested was to validate that the email supplied in the forgot-password request was the same email whose password was being reset. Our client had this fixed in no time.
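That fix amounts to one extra comparison. Here is a hedged Python sketch of the corrected check (again our own illustration, with a dict in place of Redis), which also consumes the token on every attempt so it can only be used once:

```python
import secrets
import time

reset_tokens = {}  # token -> (email the token was issued for, expiry)

def create_reset_token(email: str) -> str:
    token = secrets.token_urlsafe(32)
    reset_tokens[token] = (email, time.time() + 24 * 60 * 60)
    return token

def reset_password_fixed(token: str, email: str, new_password: str) -> bool:
    entry = reset_tokens.pop(token, None)  # single-use: consume on any attempt
    if entry is None or entry[1] < time.time():
        return False
    issued_email, _ = entry
    if issued_email != email:
        return False  # token was not issued for this account
    # ...hash and store new_password for issued_email here...
    return True

# The replay from before now fails:
token = create_reset_token("attacker@example.com")
assert reset_password_fixed(token, "admin@example.com", "pwned") is False
```

Consuming the token on every attempt also closes off the unlimited-reuse variant described above, where a single token could be replayed across every 8-digit user ID.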

If you’re curious about the way your company is implementing its password-reset token in your applications, read up more on our assessment offerings and consider reaching out to us here and ask us to come take a look.

The post AppSec Travels 3: Account Takeover  appeared first on Redpoint Security.

]]>