The Auth API (https://theauthapi.com/)
The Auth API eliminates hundreds of hours in development time trying to solve key distribution, key lifecycle, event logging, rate-limiting, hook events, cache control and anomaly detection.

API Analytics That Matter: Key Metrics for Measuring Developer Success
https://theauthapi.com/articles/api-analytics-that-matter-key-metrics-for-measuring-developer-success/
Wed, 20 Aug 2025 13:00:04 +0000



When was the last time you looked at your API metrics? If you’re like most companies, you probably checked your uptime, response times, and maybe your daily active users. But here’s the thing – those numbers, while important, don’t tell you if your developers are actually successful.

Let’s dive into the metrics that really matter for your API’s success, and why they might be the difference between growth and stagnation.

The Most Important Metric You’re Not Tracking

Meet TTHW (Time to Hello World) – the golden metric of API usability. It measures how long it takes a new developer to go from signup to their first successful API call. Think of it as the “first impression” metric.

We recently worked with a client who thought their API was doing great based on traditional metrics. But when we started tracking TTHW, we discovered it was taking developers an average of 2 hours to make their first successful call – for what should have been a 15-minute process. After some investigation, we found the culprit: a complicated authentication process buried in unclear documentation.
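In practice, TTHW falls out of two timestamps most teams already log: signup time and the time of the first successful call. A minimal sketch (the event log and developer IDs here are hypothetical):

```python
from datetime import datetime
from statistics import median

# Hypothetical event log: developer_id -> (signup time, first successful call time)
events = {
    "dev_1": (datetime(2025, 8, 1, 9, 0), datetime(2025, 8, 1, 9, 20)),
    "dev_2": (datetime(2025, 8, 1, 10, 0), datetime(2025, 8, 1, 12, 5)),
    "dev_3": (datetime(2025, 8, 2, 14, 0), None),  # never made a successful call
}

def median_tthw_minutes(events):
    """Median minutes from signup to first successful API call,
    ignoring developers who never got there."""
    durations = [
        (first - signup).total_seconds() / 60
        for signup, first in events.values()
        if first is not None
    ]
    return median(durations) if durations else None

print(median_tthw_minutes(events))  # 72.5 for this sample
```

Note that developers who never succeed at all (like `dev_3` above) deserve their own funnel metric; dropping them from the median hides the worst failures.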

Beyond the First Call

Getting that first API call right is crucial, but what happens next? This is where Adoption Velocity comes in. It’s not enough that developers can use your API – they need to be consistently increasing their usage and exploring more features.

Good adoption velocity looks like:

  • A steady increase in API calls over the first 30 days
  • Exploration of different endpoints
  • Transition from test to production environment

Poor velocity often means developers are stuck or, worse, just using your API for a single basic feature.
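One rough way to quantify adoption velocity from raw call logs is to compare call volume across the first and second halves of the 30-day window and count distinct endpoints. The records and endpoint names below are invented; a real dashboard would use finer-grained signals:

```python
# Hypothetical call log for one developer: (days since signup, endpoint)
calls = [
    (1, "/search"), (2, "/search"), (5, "/search"), (9, "/search"), (9, "/images"),
    (14, "/search"), (14, "/images"), (20, "/images"), (21, "/account"), (28, "/account"),
]

def adoption_signals(calls, window=30):
    """Two crude velocity signals over the first `window` days:
    call growth (second half vs first half) and endpoint breadth."""
    half = window // 2
    first_half = sum(1 for day, _ in calls if day <= half)
    second_half = sum(1 for day, _ in calls if half < day <= window)
    endpoints = {ep for _, ep in calls}
    return {
        "growing": second_half > first_half,
        "distinct_endpoints": len(endpoints),
    }

print(adoption_signals(calls))  # {'growing': False, 'distinct_endpoints': 3}
```

Here the developer is exploring endpoints but their volume is tailing off, which is exactly the "stuck" pattern worth a proactive check-in.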

The Error Patterns That Matter

Raw error rates can be misleading. Instead, look at error patterns. Here’s what we’ve learned matters most:

A spike in 401 errors from new users? Your authentication docs probably need work. Seeing lots of 429s from established users? Time to have a conversation about rate limits and pricing tiers.

The most telling pattern is often the “cascade” – when a developer hits one error, then rapidly hits several others in succession. This usually means they’re getting frustrated and trying different approaches randomly. It’s a key indicator that your error messages aren’t helpful enough.
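A cascade is easy to spot once you log errors per developer. A minimal sketch, with an invented log and thresholds (four errors inside two minutes):

```python
from datetime import datetime, timedelta

# Hypothetical error log for one developer: (timestamp, HTTP status)
errors = [
    (datetime(2025, 8, 1, 10, 0, 0), 401),
    (datetime(2025, 8, 1, 10, 0, 20), 403),
    (datetime(2025, 8, 1, 10, 0, 45), 400),
    (datetime(2025, 8, 1, 10, 1, 5), 422),
    (datetime(2025, 8, 1, 14, 30, 0), 429),
]

def has_cascade(errors, min_errors=4, window=timedelta(minutes=2)):
    """True if `min_errors` errors land inside `window`: a rough proxy
    for a developer flailing between different approaches."""
    times = sorted(t for t, _ in errors)
    for i in range(len(times) - min_errors + 1):
        if times[i + min_errors - 1] - times[i] <= window:
            return True
    return False

print(has_cascade(errors))  # True: four errors within ~65 seconds
```

Surfacing cascades per developer, rather than aggregate error rates, points you at the exact session where your error messages stopped helping.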

Support Tickets Tell a Story

Your support tickets are a goldmine of analytical insights. One of our clients was seeing steady ticket volume growth and assumed they needed to hire more support staff. When we analyzed the data, we found that 40% of tickets were about the same three issues. The solution wasn’t more support staff – it was better documentation and clearer error messages.

Retention: The Ultimate Success Metric

Here’s an uncomfortable truth: most APIs lose about 70% of new developers within the first month. But that’s an average, and you don’t have to be average.

Track these retention checkpoints:

  • Day 7: Are they still making calls?
  • Day 30: Have they moved to production?
  • Day 90: Are they expanding usage?

If you’re losing developers at any of these points, that’s your signal to investigate why.
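The three checkpoints above reduce to simple predicates over per-developer activity data. A sketch, assuming a hypothetical activity record (days-since-signup on which calls were made, plus the day of first production use):

```python
# Hypothetical per-developer activity record
developer = {
    "active_days": [0, 1, 2, 6, 8, 15, 31, 40],  # days since signup with API calls
    "first_production_day": 28,                   # None if never moved to production
}

def retention_checkpoints(dev):
    """Evaluate the day-7 / day-30 / day-90 retention checkpoints."""
    days = dev["active_days"]
    prod = dev["first_production_day"]
    return {
        "day_7_still_calling": any(d >= 7 for d in days),
        "day_30_in_production": prod is not None and prod <= 30,
        "day_90_expanding": any(d >= 90 for d in days),
    }

print(retention_checkpoints(developer))  # day-90 expansion hasn't happened yet
```

Running this over your whole developer base per cohort gives you the retention curve, and each False answer names the checkpoint to investigate.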

Making These Metrics Actionable

Having data is one thing. Using it effectively is another. Here’s a simple framework:

  1. Track TTHW religiously – it’s your canary in the coal mine
  2. Monitor error patterns weekly
  3. Review support tickets monthly for systemic issues
  4. Check retention quarterly to spot trends

What Good Looks Like

Based on our work with successful API companies, here are some benchmarks:

  • TTHW: Under 30 minutes for basic APIs
  • 30-day retention: Above 40% for new developers
  • Support ticket ratio: Less than 1 ticket per 100 active developers
  • Error rates: Below 1% for authenticated calls

Taking Action

Ready to improve your API metrics? Start here:

  1. Set up proper tracking for TTHW and error patterns
  2. Create a simple dashboard for your key metrics
  3. Establish regular review processes
  4. Set clear improvement targets

The Path Forward

Great API analytics aren’t about collecting every possible metric – they’re about measuring what matters for developer success. At ThatAPICompany, we help organizations identify and track the metrics that drive real improvement in their API programs.

Want to know how your API metrics stack up against industry standards? We offer free API analytics audits to help you identify your biggest opportunities for improvement. Let’s talk about making your API metrics work for you.

Originally posted on That API Company blog.

Photo by Alina Grubnyak on Unsplash

Protecting Your API Business Logic: The Hidden Threat Costing Companies Millions
https://theauthapi.com/articles/protecting-your-api-business-logic-the-hidden-threat-costing-companies-millions/
Tue, 29 Jul 2025 17:26:57 +0000


Last month, Anthropic had to introduce weekly rate limits for Claude subscribers. The reason? Users were sharing accounts, reselling access, and running Claude 24/7 in the background. These weren’t hackers breaking in through vulnerabilities—they were legitimate, authenticated users exploiting the business logic of Anthropic’s API in ways that cost the company significant resources and degraded service for other customers.


This is the new reality of API security in 2025, and it’s costing companies far more than traditional security breaches ever did.

The Authentication Illusion

For years, we’ve operated under the assumption that authentication solves our API security problems. Get the user logged in properly, validate their token, and you’re protected, right?

The data tells a different story. 78% of API attacks now come from authenticated users—people who have legitimate credentials but are exploiting your business logic in ways you never intended. Meanwhile, 27% of all API attacks target business logic vulnerabilities, representing a 10% increase from the previous year.

Think about what this means: your biggest threat isn’t someone breaking down your front door. It’s someone with a valid key using your house in ways that violate your rules but don’t technically break your security.

The Business Logic Vulnerability Crisis

Business logic vulnerabilities are different from traditional security flaws. They don’t show up in vulnerability scanners. They pass authentication checks. They often look like legitimate traffic. Yet they can be far more damaging to your bottom line than a data breach.

Here’s what business logic abuse looks like in practice:

Account Sharing and Reselling: Users share premium API access with unauthorized parties or resell access at discounted rates, undermining your revenue model.

Resource Exploitation: Authenticated users consume far more resources than intended, like running automated processes 24/7 or parallel sessions that exceed fair use policies.

Data Harvesting: Legitimate users systematically extract data beyond their intended access level by manipulating API parameters or exploiting rate limit gaps.

Privilege Escalation: Users modify API calls to access data or functionality they shouldn’t have, often by changing identifiers or parameters in their requests.

The Anthropic example perfectly illustrates this challenge. Users weren’t hacking the system—they were using it in ways that violated the intended business model, forcing the company to implement new controls that affect the user experience for everyone.

Why Traditional Security Fails Against Business Logic Attacks

Traditional security tools, like a Web Application Firewall (WAF), struggle to detect and mitigate this form of abuse, as API attacks adeptly masquerade as regular traffic.

Your existing security stack was designed to stop malicious actors, not to enforce business rules on legitimate users. Here’s why conventional approaches fall short:

Authentication-Only Thinking: Once someone is authenticated, most systems assume all their actions are legitimate. But authentication only answers “who are you?”—not “should you be doing this?”

Static Rate Limiting: Traditional rate limits apply the same rules to all users. Sophisticated abusers simply stay under these limits while still extracting disproportionate value.

Signature-Based Detection: Security tools look for known attack patterns. Business logic abuse often uses perfectly valid API calls in unintended ways.

Lack of Context: Most security systems don’t understand your business model well enough to distinguish between legitimate heavy usage and exploitative behavior.

The Real Cost of Business Logic Vulnerabilities

API-related security issues now cost organizations up to $87 billion annually, and business logic attacks contribute significantly to this figure. But the impact goes beyond direct financial loss:

Revenue Leakage: When users circumvent your pricing model, you lose revenue without immediately realizing it. Unlike a data breach, which is obvious, business logic abuse can drain resources for months before being detected.

Infrastructure Costs: Abusive usage patterns can dramatically increase your infrastructure spending, especially in cloud environments where you pay for compute and bandwidth.

Service Degradation: Heavy abuse can slow down your API for legitimate users, leading to customer complaints and potential churn.

Competitive Disadvantage: If competitors gain unauthorized access to your data or services through business logic exploitation, they can use your own APIs against you.

Development Slowdown: 59% of organizations have had to slow the rollout of new applications because of API security concerns, often due to business logic vulnerabilities discovered late in development.

A Framework for Business Logic Protection

Protecting against business logic attacks requires a different approach than traditional security. You need to think like a business analyst, not just a security engineer.

1. Map Your Business Logic

Start by documenting what normal usage looks like for different user types:

  • What’s a reasonable number of API calls per user per day?
  • Which data combinations should never be accessed together?
  • What usage patterns indicate potential reselling or sharing?
  • Which API endpoints are most valuable to abuse?

2. Implement Adaptive Controls

Unlike static rate limits, adaptive controls adjust based on user behavior and context:

User-Specific Limits: Set different thresholds based on subscription tier, historical usage, and risk profile.

Behavioral Analysis: Monitor for patterns that indicate abuse, like perfectly regular intervals, unusual geographic distribution, or simultaneous sessions.

Dynamic Throttling: Gradually degrade service quality for suspicious usage rather than hard-blocking it. This discourages abuse while maintaining service for legitimate edge cases.
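As one concrete illustration of behavioral analysis: traffic arriving at perfectly regular intervals is a classic signature of an unattended automated process. A minimal sketch with an invented jitter threshold; real systems would combine many such signals:

```python
from statistics import pstdev

def looks_automated(timestamps, max_jitter_seconds=1.0):
    """Flag request streams whose inter-request gaps are suspiciously uniform.
    Human-driven traffic is bursty; near-zero spread in the gaps suggests a
    scheduled job hitting the API at a fixed interval."""
    if len(timestamps) < 3:
        return False  # too few requests to judge
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(gaps) < max_jitter_seconds

# Perfectly regular: one call every 60 seconds
bot_like = [i * 60.0 for i in range(10)]
# Bursty, human-like spacing (seconds since session start)
human_like = [0, 4, 9, 200, 203, 1200, 1260, 3600, 3610, 3900]

print(looks_automated(bot_like), looks_automated(human_like))  # True False
```

A flag like this should trigger review or throttling, not an outright block, since some legitimate integrations (cron jobs, health checks) are regular by design.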

3. Design Business-Aware Authentication

Move beyond simple “authenticated/not authenticated” to contextual authorization:

Session Intelligence: Track not just who is accessing your API, but how, when, and from where.

Usage Analytics: Build real-time dashboards showing resource consumption by user, helping you spot abuse quickly.

Granular Permissions: Implement fine-grained access controls that align with your business model, not just your data model.

How Auth API Addresses Business Logic Threats

At Auth API, we’ve seen these challenges firsthand, which is why we’ve built our platform around business-aware security rather than just technical authentication.

Adaptive Rate Limiting: Our system learns normal usage patterns for each customer and adapts limits accordingly. Unlike static rate limits that treat all users the same, we provide tailored protection that grows with legitimate usage while flagging abuse.

Granular Access Management: We help you define and enforce security policies that align with each customer’s subscription level and intended usage patterns. This means you can prevent business logic abuse while maintaining a great experience for legitimate users.

Real-Time Usage Intelligence: Our monitoring gives you visibility into how customers are actually using your API, making it easy to spot patterns that indicate sharing, reselling, or other forms of business logic abuse.

Customer-Specific Security Controls: Rather than one-size-fits-all rules, you can implement different security postures for different customer segments, ensuring enterprise customers get the flexibility they need while preventing abuse from free or trial users.

The Path Forward

As APIs become more central to business operations, protecting business logic becomes as important as protecting data. The companies that recognize this shift early will have a significant advantage over those still thinking in terms of traditional perimeter security.

Here’s what you can do today:

    1. Audit your current API usage patterns to understand what normal looks like for your business

    2. Identify your most valuable API endpoints and the business logic that governs their use

    3. Implement monitoring that goes beyond technical metrics to include business context

    4. Review your authentication architecture to ensure it can support business-aware authorization decisions

The future of API security isn’t just about keeping bad actors out—it’s about ensuring legitimate users respect the business rules that make your API sustainable and profitable.

References

  1. Salt Security – “API Security Trends – API Attacks & Breaches Report” (2025)

    • 78% of API attacks come from authenticated users
    • Available at: https://salt.security/api-security-trends
  2. Imperva – “New Research Reveals API Security is a Business Risk” (March 2024)

    • 27% of all API attacks targeted business logic (10% increase from previous year)
    • Available at: https://www.imperva.com/blog/state-of-api-security-in-2024/
  3. Thales Group – “Application and API Security in 2025 | Trends in Protecting Digital Innovation” (December 2024)

    • API-related security issues cost organizations up to $87 billion annually
    • Available at: https://cpl.thalesgroup.com/blog/application-security/application-api-security-2025
  4. Salt Security – “Major API Security Breaches and API Attacks from 2024” (May 2025)

    • 95% of respondents experienced security problems in production APIs
    • 23% experienced a breach
    • Available at: https://salt.security/blog/its-2024-and-the-api-breaches-keep-coming
  5. Security Boulevard – “API Attacks Rise 400% in Last Six Months” (March 2023)

    • 59% of organizations slowed application rollout due to API security concerns
    • Available at: https://securityboulevard.com/2023/03/api-attacks-rise-400-in-last-six-months/
  6. Infosecurity Magazine – “Business Logic Abuse Dominates as API Attacks Surge” (February 2024)

    • Traditional security tools struggle to detect business logic attacks
    • Available at: https://www.infosecurity-magazine.com/news/business-logic-abuse-api-attacks/
  7. Check Point Research – “The Escalation of Web API Cyber Attacks in 2024” (March 2024)

    • 20% increase in API attacks from January 2023 to January 2024
    • Available at: https://blog.checkpoint.com/research/a-shadowed-menace-the-escalation-of-web-api-cyber-attacks-in-2024/
  8. Traceable AI – “2025 Global State of API Security Report” (November 2024)

    • 57% of organizations suffered API-related breaches in past two years
    • Available at: https://www.cybersecurity-insiders.com/2025-global-state-of-api-security-report-new-data-shows-api-breaches-continue-to-rise-due-to-fraud-bot-attacks-and-genai-risks/

Note: All statistics and data points referenced in this article are sourced from the above industry reports and research studies conducted by leading cybersecurity organizations and API security specialists.



Want to see how Auth API can help protect your business logic while maintaining a great user experience? Book a demo to discuss your specific API security challenges. 

Photo by Maxime VALCARCE.

🚀 Introducing Usage-Based API Access: A New Layer Between Stripe and Your API Gateway
https://theauthapi.com/articles/%f0%9f%9a%80-introducing-usage-based-api-access-a-new-layer-between-stripe-and-your-api-gateway/
Mon, 21 Jul 2025 17:21:50 +0000

By the founders of The Auth API

We didn’t start by trying to reinvent authentication. We started by trying to protect and commercialize our own APIs. What began as a side project to issue secure API keys quickly turned into a much bigger realization: there’s a missing layer between Stripe and your API Gateway—and nobody’s owning it.

Today, we’re introducing that layer. We call it The Auth API.

🧱 Stripe handles the money. Kong handles the traffic. But who handles the business logic?

If you run an API product, you’ve likely duct-taped together some version of:

  • Stripe for subscriptions
  • A rate-limiter at the edge
  • Basic API key auth
  • Logging in your backend
  • Maybe some manual usage tracking

But here’s the problem: none of these systems talk to each other. There’s no canonical source of truth about who’s calling your API, how often, whether they’ve paid, or what plan they’re on. You’re left guessing—or building a brittle internal dashboard just to keep track.

That’s where The Auth API comes in.

We sit between your users and your API gateway, offering:

🔐 The Auth API is the missing business layer for your API

  • 🔑 Per-customer API key management
  • 📊 Real-time usage tracking
  • 📉 Built-in rate limits per key
  • 💸 Usage-based billing (Stripe integration launching soon—get early access)

We’re not a replacement for Kong, Unkey, or Zuplo—we’re a new kind of layer.

Call it “API Monetization as a Service,” if you like.

🆚 How we compare

| Platform | What it solves | What it misses |
| --- | --- | --- |
| Stripe | Payments, billing, invoices | No native support for per-key usage metering |
| Kong / Tyk / API Gateways | Traffic routing, rate limiting | No monetization layer, no customer-level tracking |
| Unkey | Developer-first API key issuance | No billing or advanced usage insights |
| Zuplo | Edge auth and rate limiting | Excellent edge control, but monetization is DIY |
| The Auth API | Usage-based API monetization + observability | No OAuth/user login (by design) |

We’re not trying to be everything. We’re trying to be the best way to sell, secure, and scale your API access.

👋 Our backstory

We’re API builders ourselves. We created TheDogAPI.com and a few internal tools for dev teams. At some point we realized we were spending way too much time building infrastructure just to know who’s using our APIs, how often, and whether they should be.

That pain led to a prototype. That prototype turned into a product.

Today, The Auth API is used to protect and monetize thousands of API calls per day—and we’re just getting started.

⚙ What’s live today

  • ✅ Instant API key issuance
  • ✅ Per-key usage logs & analytics
  • ✅ Rate limiting
  • ✅ Organization/team support
  • ✅ Admin dashboard
  • ✅ Webhooks, secrets, and metadata support
  • ✅ Open-source client libraries – Go, TypeScript, PHP

🛣 What’s next (we’re building in public)

  • 🚧 Stripe metered billing integration (early access now—launching soon)
  • 🚧 SDKs for popular frameworks
  • 🚧 Partner API and embedded analytics
  • 🚧 Self-hosted / BYOK edition

📢 Want to help shape this?

We’re opening up early access to the usage-based billing layer right now. If you’re running a public API, building a dev tool, or want to stop duct-taping your monetization layer—we’d love to hear from you.

Securing Public APIs Without Breaking Them: Lessons from the Dotpe Case
https://theauthapi.com/articles/securing-public-api-without-breaking-changes/
Mon, 23 Sep 2024 09:17:15 +0000

Public APIs offer immense value, but without proper security, they expose businesses to significant risks. Learn from Dotpe’s API vulnerability, and discover quick, tactical solutions to secure your API—without making breaking changes.

Updated – 23/09/2024

APIs have become the backbone of modern digital infrastructure. They enable businesses to interact with their customers in real-time, automate workflows, and scale operations. However, their immense benefits come with significant risks—especially when left unsecured. The case of Dotpe, as revealed in a recent deep dive into their API vulnerabilities, serves as a cautionary tale for companies offering public APIs.

The Dotpe Case Study

No business with a public API wants to wake up to see its vulnerabilities exposed and trending at the top of Hacker News—yet that’s precisely what Dotpe faced after these flaws came to light. The reputational damage can be immense and long-lasting, with thousands of developers and security professionals scrutinizing the issue.

Screenshot of PeaBee’s¹ article trending at the top of Hacker News

Dotpe provides a widely used QR code menu service for restaurants across India, powering the digital experience for customers by offering contactless ordering. However, its API design and implementation exposed glaring security flaws. A curious customer could access sensitive restaurant data, such as ongoing orders, purchase history, and even personal details of other patrons. By tweaking basic API parameters, they could view restaurant revenue figures and place fraudulent orders.

While this situation is shocking, it is unfortunately common. APIs that are not adequately secured expose data that should remain confidential and create a huge potential for misuse.

Critical Failures in API Security

  1. Lack of Authentication: Dotpe left many API endpoints unauthenticated, allowing anyone to access sensitive data. Strong authentication is essential for every API.
  2. Improper Use of Parameters: Simple changes to table IDs and phone numbers let unauthorized users access data. APIs need to validate and sanitize all inputs.
  3. Rate Limiting: Dotpe’s API had no rate limiting, allowing users to quickly scrape data from thousands of restaurants. Rate limiting is essential to prevent abuse, especially for public APIs.
  4. Monitoring and Alerts: If Dotpe had been monitoring its API traffic for unusual activity, it could have detected and resolved the issue earlier. Proper monitoring is crucial to track unusual usage patterns.

Tactical Fixes for Securing Public APIs

If your API is already exposed to potential misuse, how can you quickly add security without making breaking changes to the API? Several methods exist to strengthen your API’s defences without altering the public-facing interface.

  1. Implement Strong Authentication and Authorization
    Ensure every endpoint requires user authentication. This can include API keys, OAuth tokens, or session-based tokens. Multi-factor authentication (MFA) should be considered for sensitive data.
  2. Input Validation and Sanitization
    Validate all incoming data to ensure it fits the expected parameters. Ensure that only authorized users can perform actions and access data.
  3. Rate Limiting and Throttling
    Rate limiting can prevent API abuse. Set limits on how much data users can request within a certain time frame to avoid scraping or overloading the system.
  4. Geo-Based Whitelisting
    Geo-fencing can reduce exposure if your API primarily serves users from a specific location. Restricting API access to specific cities or countries can help block foreign attacks. However, ensure this doesn’t block legitimate users using VPNs or mobile networks with different locations. Consider fallback mechanisms, like additional authentication for out-of-region access.
  5. Comprehensive Logging and Monitoring
    Implement logging for all API interactions. Monitor in real-time to detect suspicious behaviour, such as mass data extraction or failed login attempts.
  6. Security by Design
    Security shouldn’t be an afterthought. Ensure security is part of the development process, even if you need to update your API after launch. Building security into the core of the API is cheaper and more effective than patching it later.
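The rate limiting from step 3 can sit in front of existing endpoints without touching the public interface. A minimal fixed-window sketch, with illustrative limits and key names; production systems often prefer sliding windows or token buckets to avoid bursts at window boundaries:

```python
import time
from collections import defaultdict

class FixedWindowLimiter:
    """Allow at most `limit` requests per `window` seconds per API key."""

    def __init__(self, limit=100, window=60):
        self.limit = limit
        self.window = window
        self.counts = defaultdict(int)  # (key, window index) -> request count

    def allow(self, api_key, now=None):
        """Record one request and return whether it is within the limit."""
        now = time.time() if now is None else now
        bucket = (api_key, int(now // self.window))
        self.counts[bucket] += 1
        return self.counts[bucket] <= self.limit

limiter = FixedWindowLimiter(limit=3, window=60)
results = [limiter.allow("key_abc", now=10.0 + i) for i in range(5)]
print(results)  # [True, True, True, False, False]
```

Because the check keys off the API key rather than the endpoint, existing request and response formats are untouched; over-limit callers simply start receiving 429 responses.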

Introducing TheAuthAPI

For businesses struggling with securing their public APIs, solutions like TheAuthAPI can provide an immediate path forward. TheAuthAPI offers an integrated suite of tools that help businesses secure, track, and even monetize their APIs. Here’s how:

  • Secure: TheAuthAPI provides powerful authentication mechanisms that help ensure only authorized users can access your APIs, shielding sensitive customer and business data from prying eyes.
  • Track: Real-time monitoring and alerting ensure that every API interaction is logged and analyzed for suspicious behaviour, allowing companies to act quickly before data is compromised.
  • Monetize: The platform also offers tools to monetize your API, ensuring that every customer interaction enhances security and contributes to revenue growth.

Conclusion

The Dotpe case underscores the need to secure APIs from day one. No business with a public API wants to see its flaws on Hacker News. While public APIs are essential for scaling businesses, they must be protected from potential misuse. By applying tactical fixes and investing in solutions like TheAuthAPI, businesses can secure their APIs without breaking them and ensure they are safe and profitable.

  1. PeaBee’s original article was taken down on 23/09/2024 due to concerns about breaking hacking laws in India and the US.

The average cost to a business for an API breach is 3.9 million dollars…
https://theauthapi.com/articles/the-average-cost-to-a-business-for-an-api-breach-is-3-9-million-dollars/
Mon, 23 Jan 2023 13:49:25 +0000


As a leader, we know you understand the importance of security and data protection in your business operations.

Based on hours of customer research and our specialized team’s in-depth knowledge, here are the five most common challenges to all businesses offering external access to their API:

  1. Authentication-based access control is challenging to set up correctly and maintain.
  2. It requires an advanced understanding of the underlying technology and architecture to secure and scale access.
  3. It requires proactive monitoring and management of the programs accessing your API.
  4. Access control systems must be regularly updated to meet the latest security standards and regulations.
  5. Access control systems can be vulnerable to attacks from malicious actors, such as hackers and malware, if they are not adequately defended.

So what safeguards have you implemented to protect your API from malicious actors?

We care about API access security so much that we’re offering a free 1hr consultation to help you on your journey. Please send us a message to set up a call.

The Auth API platform offers a comprehensive authentication-based access control solution that can help protect your API’s resources and data.

* from IBM’s Cost of a Data Breach Report 2020

Right Ways of API Rate Limiting
https://theauthapi.com/articles/api-rate-limiting/
Fri, 29 Jul 2022 07:56:15 +0000

API rate limiting is essential for every API. Here are the best practices for implementing rate limits.

What is API Rate Limiting?

Rate limiting, also called throttling, is the process of restricting the number of requests that an API can process. In other words, if an API receives 1,000 requests per second from a single user, a rate limit could restrict that to a manageable number such as 10 or 20. Rate limits can apply to individual keys or overall traffic to an API.
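One common way to implement that restriction is a token bucket: each key holds tokens that refill at a steady rate, and each request spends one, so brief bursts are tolerated while sustained floods are throttled. A minimal sketch with illustrative numbers:

```python
class TokenBucket:
    """Token-bucket rate limiter: requests spend a token; tokens refill at a
    steady rate up to a burst capacity."""

    def __init__(self, rate_per_sec, capacity):
        self.rate = rate_per_sec
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = 0.0

    def allow(self, now):
        # Refill based on elapsed time, capped at bucket capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate_per_sec=10, capacity=20)
# A burst of 1,000 requests in one second: the initial burst allowance plus
# the steady refill get through; the rest are rejected
allowed = sum(bucket.allow(now=i / 1000) for i in range(1000))
print(allowed)  # far fewer than 1,000
```

In practice you would keep one bucket per API key (and often a second, larger bucket for overall traffic), which is exactly the per-key and global limiting described above.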

Just like a fire marshal restricts the number of occupants allowed in a building, a rate limiter restricts the number of requests allowed to an API. 

Sometimes, too much of a good thing can be a bad thing.

In 2020, the world quickly had to adapt to new restrictions. Seeing friends and family may be good, but too much of it could be dangerous. Going to work may be good, but being in the office could be hazardous. People had to adapt rapidly to new limitations.

Whether you’re talking about social distancing or API requests, the same principle applies: healthy performance requires healthy limitations. Too many customers and a store becomes crowded. Too many API requests and your server becomes overloaded. That is why API rate limiting is an essential practice for APIs, but like everything, there’s a right and a wrong way to do it.

Done right, rate limiting protects your service in several ways:

Consistent Experience. Everybody has had the experience of being in a convenience store, waiting in a long line of people while the cashier is busy with one overly-complicated transaction for what feels like a lifetime. Because one user is asking more than usual from the service, the entire line is backed up. Similarly, one user submitting an excessive number of API requests affects the benefit of every other user trying to use the same API. This could either be due to a malicious attack, poor design, or simply the legitimate needs of users, but putting a limit on the number of requests that can be made safeguards the experience of everybody using your API.

Cost Management. Resources cost money, and API requests use resources. This can directly affect the bottom line in many ways, ranging from simple memory usage to lost customers who have gotten frustrated with an inaccessible API. Excessive API requests harm the experiences of other users, including your most valued customers. Even APIs that serve static content can be affected by this issue, as unlimited requests for this static content can drastically impact your bottom line. Therefore, API rate limits are essential for every API, regardless of the type of resource being provided.

Protected Services. While rate limiting is necessary to regulate the day-to-day legitimate users of your API, it also protects against another critical category: malicious attacks. Bad actors can abuse your API by submitting unnecessary requests that clog the communication channels for your legitimate users.

The Consequences of Ignoring Rate Limits

Rate limits are essential for your API, but what are the dangers of neglecting them? Whether malicious or not, the consequences of ignoring rate limiting can be extreme.

DoS and DDoS Attacks

Denial of Service (DoS) attacks occur when a bad actor floods an API with requests. These attacks stop legitimate users from being able to access resources. Similarly, Distributed Denial of Service (DDoS) attacks have the same goal of flooding an API with requests but use “distributed” users (users of separate machines) to make these requests from more than one source. That makes DDoS attacks harder to prevent and the culprits more challenging to identify.

Rate limits are essential for preventing DoS and DDoS attacks. By restricting the number of requests each user can make within a time period, DoS attacks are made less effective, and by limiting the total number of requests that can be made over a longer time period, DDoS attacks are weakened.

Neither of these attacks is completely avoided through rate limiting, but the harmful impacts can be mitigated by preventing the damage directly from the source.

Cascading failure

Cascading failure refers to errors that propagate and multiply. This escalation of errors can be caused by an overload of API requests—either through a malicious attack or a surplus of legitimate users.

Cascading failure occurs when a portion of a system is overloaded, driving increased traffic to other areas of a system, increasing the strain and causing them, in turn, to be overloaded. The most effective way to prevent cascading failures is to prevent server overload in the first place by using rate limiting.

Resource Starvation

Resource Starvation is the result of inaccessible resources. For example, suppose your API is embedded in another website, but your servers are overloaded with requests for resources. In that case, the website trying to access your API won’t be able to access the resources, resulting in Resource Starvation.

Related to Resource Starvation is Resource Exhaustion, which describes a type of DoS attack that uses specific vulnerabilities in the design of an API to create more resource-taxing requests, as opposed to a sheer volume of requests. Resource Exhaustion Attacks highlight the importance of tiered rate limiting so that different kinds of risks can be mitigated at once.

Tiered Rate Limiting

Because of the variety of vulnerabilities rate limiting must attempt to account for, APIs should use tiered rate limits.

As the name suggests, tiered rate limiting structures API requests into time-based tiers that build on each other. For example, an API may limit the number of requests that can be made every second, every three seconds, and every ten seconds. If the limit is 6 requests per second but 10 requests every three seconds, high levels of traffic are allowed in short bursts, but sustained levels of high usage are limited.

Diagram: tiered API rate limiting request flow

The tiers of a rate-limiting server will depend on the resources being used. Some APIs may allow hundreds of requests per second, while others may limit their usage to a few requests per minute or even per hour. The complexity and resources used for each API request will dictate what your tiers should be.

Since tiers can be complex and intersecting, APIs can structure their tiers in sophisticated ways, such as limiting overall activity in addition to single-user activity, changing time frames based on volume, or creating delayed requests as a middle option between allowing and rejecting a request.
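As an illustration, the stacked-windows idea above can be sketched with a sliding-window limiter. The tier values (6 requests per second, 10 requests per 3 seconds) mirror the hypothetical example in the text and are not recommendations:

```python
import time
from collections import deque

class TieredRateLimiter:
    """Sliding-window limiter with multiple stacked tiers."""

    def __init__(self, tiers):
        # tiers: list of (window_seconds, max_requests)
        self.tiers = tiers
        self.history = deque()  # timestamps of accepted requests

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop timestamps older than the longest window.
        longest = max(w for w, _ in self.tiers)
        while self.history and now - self.history[0] >= longest:
            self.history.popleft()
        # Every tier must have headroom for the request to pass.
        for window, limit in self.tiers:
            recent = sum(1 for t in self.history if now - t < window)
            if recent >= limit:
                return False
        self.history.append(now)
        return True

limiter = TieredRateLimiter([(1.0, 6), (3.0, 10)])
# A burst of 6 requests in the same second is allowed; the 7th is not.
results = [limiter.allow(now=100.0) for _ in range(7)]
print(results)  # [True, True, True, True, True, True, False]
```

A production limiter would typically live in shared storage (e.g. Redis) rather than in-process memory, so limits hold across server instances.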

Implementation

There are multiple ways to implement rate limits based on the individual needs of your API. Here are three of the most useful ways to implement rate tiers:

1. Hard Stop

In the hard stop practice, once a rate limit is reached, the API will reject all requests that exceed the limit. In 2020 and 2021, many people experienced newly restricted occupancy limits in grocery stores, which often required attendants to count customers coming in and only allow new customers to enter the building as other customers exited. This is the hard stop implementation in analog. Once the limit has been reached, the doors are closed, and only when the requests fall back under the threshold will any new requests be allowed.

The most typical indication that this limit has been reached is an HTTP 429 error “Too Many Requests.” Optionally, developers can include information about when the request can be retried.

While this hard limit can be frustrating for users, especially when they don’t understand why they are receiving this error, it’s also the simplest to implement and regulate, making it a popular choice for developers. To prevent unnecessary frustration, make sure customers understand that they should retry their request after waiting for some time.
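A minimal sketch of the hard-stop pattern. The `FixedWindowLimiter` class and its limit of 2 are invented purely for illustration; the 429 status and Retry-After header are the standard HTTP conventions described above:

```python
class FixedWindowLimiter:
    """Toy limiter: allows at most `limit` requests, then rejects."""

    def __init__(self, limit):
        self.limit = limit
        self.count = 0

    def allow(self):
        self.count += 1
        return self.count <= self.limit


def handle_request(limiter):
    if limiter.allow():
        return 200, {}, "ok"
    # Hard stop: HTTP 429 "Too Many Requests", plus a retry hint.
    return 429, {"Retry-After": "1"}, "Too Many Requests"


limiter = FixedWindowLimiter(limit=2)
statuses = [handle_request(limiter)[0] for _ in range(3)]
print(statuses)  # [200, 200, 429]
```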

Users of the popular Dall-E Mini AI art generation algorithm are no strangers to rate limits, as the “too much traffic, please try again” popup became nearly as famous as the AI itself. This kind of popup is just one example of how rate limits can be communicated to the users of an API.

Image: a website showing a “too much traffic” message

2. Throttled Stop

If a hard stop is like shutting the doors, a throttled stop is like tapping the brakes. Throttled stops delay the response to a request rather than rejecting it outright, so they serve as a middle ground between accepting and rejecting API requests. Throttling can be built into your rate tiers in concert with hard limits.

For example, you can set hard limits of 10 requests per second and 100 requests per minute. That is straightforward: users can make up to 10 requests per second, but they can’t make 10 requests EVERY second. After 100 requests within a minute, any additional requests are denied until the minute has elapsed. However, this means that a user submitting 10 requests per second would have access to the resources they need for 10 seconds, then face a 50-second wait before they can access any further resources.

That kind of inconsistency can be frustrating for users, but it can be mitigated by allowing a certain number of delayed requests in addition to the hard limits. Rather than rate limits being an all-or-nothing toggle, throttling API requests can slow down additional requests by creating an artificial delay.

The tiers of a throttled rate limit can be structured the same way as rate limiters with hard stops, but with an additional color in their palette. The diagram below illustrates this in action: the user is submitting requests at around 8 requests per second (r/s). The API has a tier that accepts 8 requests within a “burst,” as defined by their time frame, and additional requests are only accepted at a rate of 5 requests per second. That means that although the user is submitting 8 requests per second, once the limit has been met, only 5 of those requests are allowed, and the additional requests are denied.

Diagram: some APIs use delay to create a middle ground between allowed and rejected requests.

This can be used in combination with hard limits or in place of them. For example, an API could use a tier that allows 10 requests with no delay per 10 seconds, followed by 10 requests delayed to 5 r/s, for a total of 20 requests allowed in 10 seconds: 10 without the delay and 10 with it, with a hard limit after the 20th request. It all depends on the needs of the API and the nature of the resources being accessed.
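One possible way to combine immediate, delayed, and rejected bands. All numbers (a burst of 10, 10 delayed requests, a 5 r/s delay rate) are taken from the hypothetical example above, not from any particular API:

```python
def throttle_decision(requests_in_window, burst=10, delayed=10, delay_rate=5.0):
    """Throttled-stop sketch: classify a request by its position in the window.

    The first `burst` requests pass immediately; the next `delayed` are
    served at `delay_rate` requests/second; anything beyond is rejected.
    """
    if requests_in_window < burst:
        return ("accept", 0.0)
    if requests_in_window < burst + delayed:
        # Position within the delayed band determines the artificial wait.
        position = requests_in_window - burst + 1
        return ("delay", position / delay_rate)
    return ("reject", None)

print(throttle_decision(3))    # ('accept', 0.0)
print(throttle_decision(12))   # ('delay', 0.6)
print(throttle_decision(25))   # ('reject', None)
```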

Throttled stops are more difficult to implement, but they improve the user experience by reducing frustrating hard stops.

3. Billable Stop

One of the reasons rate limits are important in the first place is to reduce unnecessary costs. In service of this goal, billable stops give the users the ability to access requests over their limit—for a price.

Billable stops are a good way to recoup the cost of excess API requests while also giving users the option to have more access to your API than they would otherwise.

The obvious downside to billable stops is that many users are reluctant to spend more money on your API, but the upside is that it can be beneficial to both the API and the user.

Best Practices

The key to a successful API is to make users enjoy their experience. Whether the user is the CEO of a major business, the developer of another API, or the end-user who just found you on Google, your API should be built to meet people’s needs and—at a minimum—annoy them as little as possible along the way.

In service of this goal, here are some best practices for introducing rate limiting to your API:

1. Don’t Be Greedy.

You are in business to make a profit. There is no shame in that. But turning your rate limit into a profit stream may be killing the golden goose.

Allow your users to use your API to solve their pain point rather than using your rate limit to create a new pain point for them. If you are flexible with your rate limit and you make your users happy, they will be happy to support you in return.

Having no restrictions leaves you vulnerable to DoS and DDoS attacks, compromises your user experience, and hurts your bottom line. But overly stringent restrictions may sow bad faith and turn users away from your API altogether. It’s important to hold both of those concerns in balance as you structure your rate limits.

2. Be Transparent.

Make sure your users know what limits they have agreed to in their contract. The users of your API aren’t just the end-users of the resource, they are also the developers of other APIs who are implementing your server into their interface. It’s important they fully understand how your rate limiting is structured, and why you structured it the way you did.

It’s especially important to document your entire process so users have an objective source to look to for answers.

For example, suppose your API offers weather report information to its users. If one of your users is a smart thermostat that accesses your API at regular intervals to update the weather, a design error could cause these calls to go into an infinite loop, requesting weather data several times per second. By documenting the way your tiers are structured, your users better understand what kinds of errors they could run into, what protections you have in place, and what the consequences of exceeding their limit could be.

3. Add a Counter

In addition to being transparent about the reasons for your limits, it’s also important to be transparent as your API is being used. Including counters in response headers ensures that users know where they are in relation to your limits.

This is important because it allows your users to make informed decisions. The number of requests you allow can vary greatly depending on your API, but a best practice for rate limit implementation is to make relevant information accessible to your users, both as they’re implementing your API and as they’re using it.

The throughline of these best practices is that you should equip your users to make informed decisions. Rather than exploiting their uninformed decisions for a quick payout, it’s better for you and for your users to treat people fairly and transparently.

A great way to do this is to set these headers on the response:

  • “RateLimit-Limit” – How many requests or “points” the client is allowed to make
  • “RateLimit-Remaining” – How many they have remaining until the “Reset”
  • “RateLimit-Consumed” – How many they have already consumed
  • “RateLimit-Reset” – ISO Date of when the RateLimit will be reset for the client
  • “Retry-After-Seconds” – How many seconds the client should wait before the rate limit is reset
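A sketch of building the headers listed above. The header names follow the article’s list; note that the IETF draft spec for standardized rate-limit headers uses similar but not identical names, so treat these as illustrative:

```python
from datetime import datetime, timedelta, timezone

def rate_limit_headers(limit, consumed, reset_at):
    """Build rate-limit response headers from the client's current usage."""
    remaining = max(limit - consumed, 0)
    retry_after = max(int((reset_at - datetime.now(timezone.utc)).total_seconds()), 0)
    return {
        "RateLimit-Limit": str(limit),
        "RateLimit-Remaining": str(remaining),
        "RateLimit-Consumed": str(consumed),
        "RateLimit-Reset": reset_at.isoformat(),
        "Retry-After-Seconds": str(retry_after),
    }

headers = rate_limit_headers(
    limit=100,
    consumed=97,
    reset_at=datetime.now(timezone.utc) + timedelta(seconds=30),
)
print(headers["RateLimit-Remaining"])  # 3
```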

The Auth API is Here to Help

Managing an API can be a headache, but The Auth API has a team of qualified experts to take the most technical aspects of API management off your hands.

The Auth API specializes in API key management and analytics, helping you leverage your API to maximize your ROI.

Don’t reinvent the wheel—see how The Auth API can help you achieve your goals by signing up for a free trial today.

Letting Other APIs Access Your API on Behalf of Your User (OAuth) https://theauthapi.com/articles/letting-other-apis-access-your-api-on-behalf-of-your-user-oauth/ Tue, 12 Jul 2022 16:23:21 +0000 https://theauthapi.com/?p=489 Here’s how you can use OAuth tokens to delegate access between your API and other APIs.

What is OAuth? 

OAuth is an open standard authorization framework which uses tokens to delegate access.

There are two “auths” commonly used in digital security: authorization and authentication.

Simply put, authentication is the process of verifying users are who they claim to be, while authorization is the process of giving authenticated users permission to access resources. Both authentication and authorization are essential concepts in data security, but unfortunately, the name “OAuth” leaves that distinction up to the imagination.

OAuth is short for Open Authorization, so OAuth tokens are used to grant users permission to access resources. On the other side of the “auth” coin, OpenID is a protocol which allows APIs to grant access to users through another API. In other words, OAuth is useful for authorization, while OpenID is useful for authentication.

Similarly, SAML (Security Assertion Markup Language) is an authentication protocol allowing users to be authenticated to multiple client servers through a single identity provider. SAML tends to be specific to users, while OAuth is specific to applications.

OAuth Security

If your bank let strangers roam freely around their vault and flip through their documentation, would you continue to bank there?

Security is a top priority for financial institutions, but it isn’t enough to simply be secure. If security were the only consideration, banks would simply be sealed in concrete and steel capsules. But in addition to being secure, banks also have to be accessible.

Financial institutions must be accessible so their customers can talk to tellers in the lobby, access their funds, and receive legal documentation. They should also be secure so interlopers cannot access funds or information which belongs to legitimate users.

Regarding API access, this give-and-take between security and accessibility presents a whole new realm of challenges. But despite the difficulty of pursuing this balance, it must be a top priority for organizations.

This is why letting other APIs access your API through OAuth tokens is an important process so resource owners can access their resources through your API.

Why OAuth?

Consider the example of a hotel: guests need access to their hotel rooms, but not to each other’s. To coordinate this complex matrix of permissions, there has to be a uniform format allowing permissions to be granted or revoked for specific users and specific rooms over specific time periods.

Most modern hotels have moved away from physical keys in favour of digital keycards because they give managers more granular control over granting and revoking permissions, reducing the security risks of lost or stolen keys.

In API access, OAuth fills the role of a hotel’s magnetized key card: a unified standard to delegate access to other APIs (on behalf of users) to facilitate access and protect security for both the client and the user.

OAuth Versions and Their Differences

OAuth 2.0 was a ground-up rebuild of OAuth 1.0: it shared the same goals and functionality but introduced improvements like simplified signatures and short-lived tokens.

While OAuth 2.0 was a significant improvement over 1.0, it wasn’t perfect. In the decade since 2.0, several improvements have been made, culminating in the in-progress 2.1 protocol. 

OAuth 2.1 consolidates post-2.0 features by integrating many RFCs that have added or removed functions from the original 2.0 protocol. There were only two years between OAuth 1.0 (2010) and OAuth 2.0 (2012), and in the intervening decade, new best practices and exploits have been unearthed, so OAuth 2.1 is an attempt to codify these best practices in an up-to-date protocol.

PKCE in OAuth 2.1

PKCE (Proof Key for Code Exchange) is an extension to the authorization code flow that makes it more secure than the traditional implicit flow.

PKCE is an attempt to mitigate the security risks associated with stolen client secrets. It does this with an improved authorization flow that uses code challenges and code verifiers, rather than static client secrets, to verify the client. This is explained in greater detail in the section on OAuth code flow.

Client secrets are vulnerable to being intercepted, and many developers correctly realized the assumption that client secrets could be kept secret indefinitely was short-sighted and doomed to fail given enough time. This is why PKCE (pronounced “pixie”) is an essential component of OAuth 2.1.
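A client-side PKCE sketch: generating a random code verifier and deriving its S256 code challenge (the base64url-encoded SHA-256 hash of the verifier, per RFC 7636):

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    """Generate a PKCE code verifier and its S256 code challenge (RFC 7636)."""
    # 32 random bytes base64url-encode to a 43-character verifier.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
print(len(verifier), len(challenge))  # 43 43
```

The challenge travels with the authorization request; the verifier stays secret on the client until token exchange.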

4 Components of OAuth

To understand the OAuth flow, it’s vital to understand the parts of API-to-API access.

1. Tokens

The basic building blocks of OAuth are the tokens exchanged between servers. Tokens can have many different formats and can be generated in many different ways, but there are three types of tokens used in the OAuth flow.

Authorization Tokens

Authorization tokens are given by an authorization server in response to an authorization request by the client server. An authorization token doesn’t give the client access to the desired data; instead, the client exchanges it, together with verification, for an access token. Authorization and access are split between two separate tokens because the tokens are used with different servers, which offers an additional level of security. The authorization token is used to create the verification that produces an access token.

Access Tokens

Once the client server receives the authorization token, it uses it to request an access token. The client server sends a verification (generated using a client secret, or a code verifier with PKCE) to establish its validity. Once the authorization server receives the authorization token and verification, it sends back an access token, which the client server then uses directly with the resource server to access the resources.

Refresh Tokens

Unlike authorization and access tokens, refresh tokens aren’t created on their own, but are issued along with access tokens. Since access tokens in OAuth 2.0 are designed to expire (one of the major problems with OAuth 1.0 was the persistence of tokens), refresh tokens are designed to renew the access token when it expires so sessions can continue uninterrupted, but new sessions would require new authentication.

Thinking back to the example of a hotel room, the difference between an authorization and access token can be compared to the magnetized key card and the code to a safe. The authorization token gets the client the opportunity to authenticate themselves – it gets them into the room – but the access token is required to access the actual resources: the valuables in the safe.

2. Servers

Understanding the roles of authorization, access, and refresh tokens is difficult without also understanding the various servers they are passed between. Here, the nomenclature can be confusing because language such as “client” and “resource” is used differently in separate contexts. But in the world of OAuth, the terms “client” and “resource” are defined in the context of the authorization request, rather than in the context of a business relationship, in the same way words like “right” and “left” are defined in the context of what direction a person is facing.

Client Server

In the API access context, the client server is the server requesting access on behalf of the user. For example, when you visit a website that requests a Facebook login to create an account, the website API is the client server, while Facebook is the resource server.

Authorization Server

The authorization server plays a crucial role in the OAuth flow by serving as the “middleman” between the client and resource server. The authorization server receives the authorization request from the client-server, issues an authorization token, and when it receives authentication, it issues an access token the client then uses to access the resource server. Authorization servers are tied to the resource server but don’t have direct access to the resources themselves.

Resource Server

The resource server is the server that contains the resources the API would like to access. For example, if your API wants to include an integration from Google Maps, Google Maps is the Resource Server while your API is the client-server. The resource server receives the access token from the client-server and responds with the requested data. 

3. Scope

Just like a bank wants customers in their lobby but not in their vault, APIs want other APIs to have access to authorized data but not free rein to access and edit unauthorized data. This is where API keys and OAuth tokens are especially useful – API keys grant API access to another API, but OAuth tokens grant specific resource owners access to their data in an API. 

The scope of API-to-API access refers to what kinds of information the client-server can access. For example, a social network API may allow users to upload photos from a resource server such as Dropbox or Google Drive. The client server (the social network) needs access to the data on the resource server (Dropbox) but doesn’t need access to upload new photos, delete photos, or reset the user’s password.

Because of that, the scope of the API access is delegated by the client-server on behalf of the user. For example, an API with a “share” button may request access to a user’s contacts but wouldn’t need access to a user’s camera or location.

4. Timeline

One of the key improvements in OAuth 2.0 and 2.1 is short-lived tokens. In OAuth 1.0, tokens were persistent until revoked. For example, Twitter OAuth 1.0 tokens did not expire at all unless they were specifically revoked by the user. 

There are two kinds of authentication: stateless and stateful.

  • Stateful authentication tokens can be revoked at any time but are difficult to use with API to API access. Stateful authentication stores credentials on the backend side which gives the users the ability to revoke access at any time.
  • Stateless authentication stores credentials on the client side (browser) so the token can’t be revoked until it expires but is easier to integrate with third-party APIs because the authentication is already stored on the user’s side.
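A toy illustration of the stateless case: validity is decided purely from the expiry carried in the token itself, with no server-side lookup (and hence no early revocation). The dict stands in for a decoded JWT payload; a real implementation would also verify the token’s signature:

```python
import time

def stateless_token_valid(token, now=None):
    """Stateless check: trust the expiry baked into the token itself."""
    now = time.time() if now is None else now
    return token.get("exp", 0) > now

# Hypothetical decoded payload with an expiry timestamp.
token = {"sub": "user-123", "exp": 1_000_000}
print(stateless_token_valid(token, now=999_999))    # True
print(stateless_token_valid(token, now=1_000_001))  # False
```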

Revoking API keys is a different process from revoking OAuth tokens and can be assisted with the help of API management like The Auth API. API key management is an important resource for APIs of every size. Whether your API has dozens of users or millions, keeping track of individual API keys is an impossible task without solutions like The Auth API.

Understanding OAuth Flow

With the addition of OAuth 2.1 and PKCE, the flow of code is different from previous iterations of OAuth.

First, a request is generated by the client server. This authorization request is sent to the authorization server, which responds with an authorization token. A unique authorization token is generated with each request, and the authorization server can create tokens using any method or format that suits its needs. Many tokens are issued in the JWT (JSON Web Token, pronounced “jot”) format. Once the client server receives the authorization token, it exchanges the authorization token and verification for an access token.

This is where PKCE diverges from the authorization code flow. In the traditional authorization code flow, the client would have its own client secret that it would send along with the authorization token to the authorization server. The weakness of this approach is that the client’s secret could be intercepted, making it unnecessarily vulnerable to security attacks.

Using PKCE, the client instead generates a random code verifier and derives a code challenge from it (typically a base64url-encoded SHA-256 hash). The code challenge is sent to the authorization server with the initial authorization request. Later, when exchanging the authorization token for an access token, the client presents the original code verifier. The authorization server hashes the presented verifier and compares the result to the code challenge it stored earlier; if they match, it provides the client with an access token.

Once the access token is received by the client-server, it is exchanged for the requested resources from the resource server.

PKCE is more secure than the client secret approach because a fresh code verifier is generated for each authorization request, so an intercepted value cannot be reused.
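The authorization server’s side of the check can be sketched the same way: re-derive the challenge from the verifier presented at token exchange and compare it to the stored value (the verifier strings below are invented for illustration):

```python
import base64
import hashlib

def derive_challenge(verifier):
    """base64url(SHA-256(verifier)) without padding, per RFC 7636 S256."""
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

def server_accepts(stored_challenge, presented_verifier):
    # The authorization server re-derives the challenge from the verifier
    # presented at token exchange and compares it to the stored value.
    return derive_challenge(presented_verifier) == stored_challenge

stored = derive_challenge("correct-horse-battery-staple")
print(server_accepts(stored, "correct-horse-battery-staple"))  # True
print(server_accepts(stored, "stolen-or-guessed-verifier"))    # False
```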

Until the access token has been revoked (either actively through stateful authentication or passively through stateless authentication), the loop of requests and resources continues for the duration of the session.

The OAuth flow passes an authorization token and access token between the three servers and the user to access resources.

API Keys vs OAuth Tokens

API keys and OAuth tokens accomplish similar results in slightly different contexts. API keys are used to identify the APIs that are requesting access, while OAuth tokens identify the users requesting resources. 

API keys typically give an API full access to the permissions of another API but don’t give the API-specific data that belongs to users. For instance, a Google Maps API would use an API key to give access to another API to integrate an embedded Google Map onto their interface, but a specific user would use an OAuth token to access their saved locations within Google Maps.

OAuth tokens are used by resource owners who are using the client-server to access data on the resource server. API keys are used by one API to communicate with another API. Because of that relationship, OAuth tokens and API keys aren’t mutually exclusive options but rather are parts of the same process.

Manage API Keys with The Auth API

As integration and communication between APIs become more advanced, the authorization needs of your API are likely to continue growing. Managing API keys and access tokens can become overwhelming without the help of The Auth API, experts in API access and analytics.

Receive regular reports on your API users, revoke access at any time, and create new keys with the click of a button using The Auth API’s developer-friendly API. Start a free trial today to see what The Auth API can do for you.

Securing JavaScript Client-to-Server Communications https://theauthapi.com/articles/securing-javascript-client-to-server-communications/ Fri, 24 Jun 2022 02:01:17 +0000 https://theauthapi.com/?p=478 In this guide, we cover the fundamentals of client-to-server communication and why securing JavaScript is the best practice for maintaining a secure network.

Anytime a user requests something from the internet, client-to-server communication is used. The primary function of the internet is to provide the infrastructure for one computer to communicate with another computer. This exchange allows for each computer to share and exchange data. 

One of those computers owns and serves the resources, which is referred to as the server-side. The other computer that requests access to those resources is referred to as the client-side. Even if a computer owns some resources, it can still be referred to as the client-side if it sends requests. Together, this method of network communication between each side is colloquially referred to as client-server architecture. 

On the client-side, processes are facilitated by a computer programming language called JavaScript. Securing JavaScript is integral to deterring bad actors from intercepting sensitive information and promoting a positive user experience. Ensuring secure communication between your network and the network of client-side users should be at the top of your cybersecurity priorities list. In this guide, we will explore why securing JavaScript client-to-server communication is so important and how to do so. 

Examples of Client-to-Server Communication

To best understand client-server architecture, let’s review some examples. A client is defined by the request method used to interact with the internet. The web browser you are using to access this article is a client. The web browser on your phone is a different client. Applications on your phone, computer, or gaming console are also clients. Clients are always local to the user that is requesting access, referring to the end user’s computer, phone, or other personal hardware. 

A server is defined by the access method used to process requests and “serve” information to the client. Examples of server-side processes include saving and accessing data, redirection to other webpages, or user validation. Server-side processes take place at a remote location, where web application servers process web requests from the client. Those servers are centralized centers where data is stored and disseminated. 

Consider anytime you go to an ATM to withdraw funds. When a customer enters their card, how does the ATM (client) interpret the digital data associated with the bank card? After the card is entered, a request for information is sent to the bank’s central software (server), which enables the ATM to display the proper data to a customer. 

HTTP/HTTPS Requests 

The foundation of data exchange on the internet is a client-server protocol called Hypertext Transfer Protocol (HTTP). Under this protocol, clients and web servers communicate using a request-response method: they exchange individual messages, where messages sent by the client are called requests and messages sent by the server are called responses. 

Hypertext Transfer Protocol Secure (HTTPS) is a secure version of HTTP. The primary difference between them is HTTPS uses encryption to secure communication between clients and servers. HTTPS utilizes Transport Layer Security (TLS) protocol or Secure Sockets Layer (SSL) for encryption.
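To make the request-response exchange above concrete, here is an illustrative sketch of the raw text a client sends in a simple HTTP/1.1 request. The function and header set are our own, purely for illustration:

```javascript
// Builds the raw text of a minimal HTTP/1.1 request message, to show the
// shape of what a client sends. The header set here is illustrative only.
function buildHttpRequest(method, path, host) {
  return [
    `${method} ${path} HTTP/1.1`, // request line: method, target, version
    `Host: ${host}`,              // required header in HTTP/1.1
    "Accept: text/html",
    "",                           // blank line ends the header section
    "",
  ].join("\r\n");
}

console.log(buildHttpRequest("GET", "/articles", "theauthapi.com"));
```

Over HTTPS, this same message would be encrypted by TLS before it ever reaches the network.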

Client-Side Requests 

Let’s start with the initiating components of HTTP/HTTPS. The web browser, mobile browser, or user application is always the entity initiating the request. To generate a webpage, the client-side browser or API sends a request for the HTML file representing the webpage. Next, this file is analyzed, and additional requests are made for any adjacent scripts, visual sub-resources (JPG and MP4 files), and layout data (CSS). The web browser or API combines the resources from those requests and presents the complete webpage to the end-user. 

A web page is a hypertext document, meaning some parts of the page are interactive and must be triggered by an end-user action, such as clicking with a cursor or finger or entering information to generate a new webpage. The browser or API translates these actions into HTTP requests and interprets the HTTP responses to present the results to the end user. 

Server-Side Responses

On the other end of the communication chain is the server, which serves the resources requested by the client. A server may appear to be a single machine, but it is likely a group of machines sharing the load, or complex software that queries other computers, completely or partially, to generate the requested resources on demand.

Proxies

Between the client-side browser or API and the server, many computers and machines relay HTTP messages. Due to the layered design of web software, most of these operate at the physical, transport, or network level; they are transparent at the HTTP layer yet can still significantly impact performance. The machines that operate at the application layer are called proxies. Proxies can perform many functions, such as:

  • Filtering – determining which requests can and can’t come through via parental controls or antivirus scans
  • Authentication – controlling access to data 
  • Caching – storing data for later use
  • Load balancing – allowing different servers to handle different requests

Client-Side JavaScript

Client-side JavaScript (CSJS) is JavaScript that runs in the user’s browser, and JavaScript itself is among the most widely used programming languages. It allows for the implementation of complex features on a web page. For the browser to interpret the code and display graphics, video, link navigation, button clicks, or other interactive behavior, the script needs to be referenced or included in an HTML document. CSJS can also gate form submissions: only when a user submits a form with complete, valid entries is the request sent to the web server. 

Benefits of CSJS 

Running logic on the client side with JavaScript offers numerous advantages over handling everything on the server. For instance, you can use JavaScript to confirm that the information a user has entered in a form field is valid before it is ever sent. Other benefits of using CSJS include: 

  • Immediate feedback – Users don’t have to wait for a page to reload to know if they’ve forgotten to input information in a form field.
  • Dynamic interfaces – Web pages don’t have to remain static. For example, sliders can be implemented to showcase a revolving carousel of products.
  • Lessened server load – CSJS can verify user input before sending the request to the server. In doing so, less server traffic is created.
  • Amplified interactivity – CSJS allows for the creation of interfaces that react when users place their cursors over them or scroll down a webpage.
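The lessened-server-load point can be sketched in a few lines of client-side validation. This is a hypothetical example; the field names and rules are illustrative only:

```javascript
// Minimal sketch: validate form input on the client before any request is
// sent to the server. Field names and rules are illustrative.
function validateSignupForm(fields) {
  const errors = [];
  // A deliberately simple email shape check, not a full RFC 5322 validator.
  if (!fields.email || !/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(fields.email)) {
    errors.push("Please enter a valid email address.");
  }
  if (!fields.name || fields.name.trim().length === 0) {
    errors.push("Name is required.");
  }
  return errors; // an empty array means the form may be submitted
}
```

In a browser, this would run in the form’s submit handler, and the request would only reach the web server when the returned array is empty.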

CSJS Security 

JavaScript is one of the core technologies of the internet. Due to its prominence, bad actors attempt to infiltrate and distort its properties to cause harm to users on the client side. Since its release in 1995, JavaScript has had issues that have attracted the attention of the wider cybersecurity community. Most prominently, the way JavaScript interacts with the Document Object Model (DOM) presents a risk for end-users on the client side by allowing bad actors to send malicious scripts over the web to infiltrate client devices. 

Two strategies can quell this JavaScript security risk:

  • Sandboxing – Running scripts separately, which restricts access to resources and tasks
  • Applying the same-origin policy – Stops scripts from one website from accessing data from scripts on other websites

CSJS Risks 

Cross-Site Scripting (XSS) is one of the most common JavaScript security risks. In this method, hackers attempt to exploit websites to return malicious scripts to client-side users. The hacker determines what these scripts can do, including spreading malware, remotely controlling an end user’s browser, stealing sensitive data, and account tampering. If the author of a browser or application neglects to implement the same-origin policy, they have created an environment where XSS tampering can exist. 
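One common defense against XSS is escaping untrusted input before inserting it into the page. A minimal sketch, using a helper of our own rather than any library function:

```javascript
// Escapes the five characters that are significant in HTML, so untrusted
// input renders as text instead of executing as markup or script.
function escapeHtml(untrusted) {
  return String(untrusted)
    .replace(/&/g, "&amp;")   // must run first so later entities survive
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}
```

Escaping at output time complements, rather than replaces, sandboxing and the same-origin policy described above.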

Cross-Site Request Forgery (CSRF) is another common risk with CSJS. CSRF vulnerabilities allow bad actors to deceive clients’ browsers into engaging in unintentional behaviors on other sites. If a target site authenticates requests using only cookies, then hackers can send requests carrying the end users’ cookies. XSS and CSRF risks live in the application layer, requiring authors to utilize the correct development procedures.

Many common JavaScript security issues can escalate risks for end-users, such as vulnerabilities in the browser and plug-in code, improper execution of sandboxing or same-origin policy, and faulty client-server trust relationships. The only way for authors to elude these security risks is to develop applications and browsers devoid of JavaScript security vulnerabilities from inception. 

Cross-Origin Resource Sharing 

As mentioned above, the same-origin policy is a critical mechanism for controlling the permissions of a script from one web page to another. This method of securing JavaScript facilitates secure client-to-server communication but is quite restrictive. What happens if client scripts need to access resources on another domain without the necessary access rights? 

Cross-Origin Resource Sharing (CORS) is the solution. CORS is a security strategy that employs additional HTTP headers, allowing servers to let browsers at one origin access resources from a different origin. CORS is essential in preventing spoofing attempts. Web applications typically consist of static frontend code made up of HTML, JavaScript, and CSS, plus a backend API. 

Bad actors can copy that static code and host it under a different domain (a fake website) while using the same backend API. They can then lure users to that domain and employ malicious scripts whose requests return page content and session cookies, allowing them to steal login credentials and other sensitive information. Properly implemented CORS prevents such a scenario from happening. 

Domain Validation 

Another strategy for securing JavaScript in client-to-server communication is to utilize a method known as domain validation. Domain Control Validation (DCV) is a strategy used by Certificate Authorities to verify that the person making a request is authorized to access the domain related to the request before providing an SSL certificate. Domain validation prevents bad actors from sending requests from fake sites and phishing attempts. 

Best Practices for Client-Side Data Storage

There are many strategies for protecting sensitive data; we have covered several of them in previous articles on securing API keys and authentication. Client-side storage, however, enables an application to save different types of data on the client, with the user’s permission, and retrieve them later. This allows users to save web pages or documents for offline use, maintain custom settings for a website, and save data for the long term. 

LocalStorage

LocalStorage allows a website to store data on the client with no expiration date; it persists until explicitly cleared. It is not accessible to service workers or web workers, can only contain strings, and has a limit of 5MB. It is useful for storing small amounts of site-specific data, such as user preferences. 
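Because LocalStorage only stores strings, structured data must be serialized. A minimal sketch, with a tiny in-memory stand-in so the snippet also runs outside a browser; the `prefs` key is our own example:

```javascript
// LocalStorage holds strings only, so objects go through JSON. The stand-in
// below mimics getItem/setItem when no browser localStorage exists.
const storage = (typeof localStorage !== "undefined") ? localStorage : {
  _data: {},
  setItem(key, value) { this._data[key] = String(value); },
  getItem(key) { return key in this._data ? this._data[key] : null; },
};

function savePrefs(prefs) {
  storage.setItem("prefs", JSON.stringify(prefs)); // object -> string
}

function loadPrefs() {
  const raw = storage.getItem("prefs");
  return raw === null ? {} : JSON.parse(raw);      // string -> object
}
```

Anything that cannot round-trip through JSON (functions, class instances) will not survive this serialization, which is another reason LocalStorage suits only small, simple data.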

SessionStorage

SessionStorage stores information temporarily; it is cleared at the end of the webpage session and is specific to a single browser tab. Like LocalStorage, it is not accessible to service workers or web workers, has a limit of 5MB, and can only contain strings. 

IndexedDB 

IndexedDB is an event-based API for client-side storage of large amounts of data. It indexes data by keys, enabling efficient lookups. In many ways, IndexedDB is a bolstered version of LocalStorage: authors can create applications that read and write substantial datasets regardless of network connectivity, keeping the application usable with or without a connection.  

Cookies

Cookies are sent with each new HTTP request, so storing data in them bloats the size of web requests. They are synchronous and not accessible from web workers. Just like LocalStorage and SessionStorage, cookies can only contain strings. 

Allow The Auth API to Secure Your Network 

The Auth API takes the risk out of building a robust key store with lifecycle management and bad-actor detection. Our platform has been audited by third-party industry-leading security firms to ensure secure client-to-server communication by utilizing best practices for client-side data storage, API key security, and authentication protocols.

Secure your JavaScript client-to-server communication with The Auth API. We are a one-stop shop for your communication-secured network. Start your free trial today to learn more about our product. 

The Best Ways of Securing Server-to-Server Communications (API to API) https://theauthapi.com/articles/the-best-ways-of-securing-server-to-server-communications-api-to-api/ Fri, 10 Jun 2022 03:55:00 +0000 https://theauthapi.com/?p=466 Keep your Server-To-Server Communications Secure Using API Keys

In the blockbuster sequel Pirates of the Caribbean: Dead Man’s Chest, Jack Sparrow and company embark on a convoluted quest to find both a key and the chest it opens. Compasses, chests, keys, and jars of dirt change hands throughout the film, but the right combination of keys, chests, and hands doesn’t culminate until the next installment. 

As far as security goes, the buried treasure may be good enough for most pirates, but is it good enough for your users?

Server-to-server communication is the backbone of many internet systems, but advancements in security bring along with them advancements in security circumvention. These new challenges in securing server-to-server communication require software companies and developers to be familiar with the latest and best tools for ensuring their API security.

What Are API Keys?

API keys provide a great solution when APIs need to communicate. Since API keys are static and easily generated and revoked, they are an effective and straightforward way to secure your communication.

But what are API keys? An API key is a unique identifier used to authenticate a user, developer, or program in an Application Programming Interface. API keys can be assigned specific access (such as read or write permissions) and when generated with The Auth API, they can be implemented in flexible and versatile ways.

API keys are short and static, meaning they’re efficient to generate and can continue to be used until their access is revoked—one of the many functionalities that The Auth API provides.
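As an illustration only (the object shape and scope names are our own, not The Auth API’s schema), an API key with assigned permissions might be modeled like this:

```javascript
// Hypothetical shape for an API key with scoped access. Field names are
// illustrative; consult your provider's actual schema.
const apiKey = {
  key: "live_example_key", // the static identifier sent with each request
  scopes: ["read"],        // read-only access in this example
  active: true,            // revoking the key flips this to false
};

function canWrite(k) {
  return k.active && k.scopes.includes("write");
}
```

Revocation is then a matter of setting `active` to false, after which every permission check on that key fails.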

Alternatives to Server-to-Server Communication

There are advantages and disadvantages to every structure, but the alternatives to server-to-server communication have significant downsides to keep in mind. Here are some alternatives to server-to-server communication, as well as the reasons they may not be the best fit for your API:

1. User-to-Server Communication

User-to-server (or client-server) communication is efficient and requires minimal maintenance due to centralized data. While those benefits are worthwhile for many applications, there are serious pitfalls to be aware of.

The biggest downside of user-to-server communication is that it breaks the security paradigm. Building increasingly secure firewalls is a short-term solution to an escalating long-term problem as more and more critical data is spread between server and client locations.

2. Security Through Obscurity

Security Through Obscurity (STO) relies on hiding weaknesses from bad actors by withholding critical information about a system’s loopholes and vulnerabilities. By restricting information about a system to only a few key stakeholders, the weak points in the castle walls become harder for hackers to identify.

While STO is a good practice in limited applications, its greatest flaw is that once Pandora’s box opens, it’s hard to close. Security through obscurity can make it harder for bad actors to get in, but STO alone isn’t enough to keep your data secure once they have access.

Challenges of API Key Implementation

While API keys are immensely useful, they aren’t without their pitfalls. Here are some of the challenges to keep in mind when implementing API keys:

  • Don’t Over-Provision User Roles. It may seem easier to give broad access upfront and restrict access down the line, but often, the restriction never happens. This over-provisioning of access can cause serious IT risks that far outweigh the productivity benefits of broad access. 68% of organizations don’t pay attention to elevated IT access before adding new permissions, leaving them unnecessarily vulnerable and putting them at risk for cyber attacks due to unmonitored privileges.
  • Back up API Keys with SSL/HTTPS. An API Key alone is generally not considered secure. Because your client has access to the key, that leaves it vulnerable to Man-In-The-Middle (MITM) attacks and other forms of cybersecurity threats. Due to that vulnerability, API keys must be backed up with SSL/HTTPS. Embedding API keys directly in code is not recommended—it leaves you and your clients vulnerable to having keys accidentally exposed or stolen. Rather, use HTTPS referrers or encrypt the links between servers through SSL authentication.
  • Use Access Control for Authentication and Authorization. Access control is the practice of restricting authentication and authorization. Authentication is like the bouncer at the door—it determines who has the right identification to be allowed in. On the other hand, authorization limits what users are allowed to do once they’re in the door. Limiting who has access to your API and what changes they’re allowed to make, limits the damage a bad actor can cause.
  • Use Best Practices for Password Protection. Set up guidelines for what constitutes a strong password to protect your users from brute force attacks. Don’t be fooled into thinking modern users are tech-savvy enough to create strong passwords of their own volition: in 2021, the password “123456” was used over 100 million times and takes less than one second to crack. Encourage your users to use passwords at least 12 characters long using alphanumeric and special characters. Limiting the number of password attempts and encrypting password data are also effective ways to preempt brute force attacks.
  • Log All API Key Usage. The perseverance of API keys is both a strength and a weakness. Strong because the keys don’t expire, but weak because once stolen, they can be difficult to track down and revoke unless you keep appropriate logs of what servers use API keys. Having a clear audit trail is essential to identifying and fixing breaches if and when they occur, and an API platform like The Auth API is the easiest way to log your API key usage.
  • Keep The Implementation Simple. Avoid over-building. API keys should be simple enough for any dev to implement well. That’s where The Auth API comes into the picture. They take the guesswork out of API key implementation, so you don’t have to reinvent the wheel and your devs can breathe easy. The Auth API gives you the tools to implement API keys into your server-to-server communications painlessly and efficiently with robust controls and versatile features.

API Key Best Practices

Now that you know why API keys are the best way to secure server-to-server communications and you’re aware of what challenges to be cautious of, here are some time-tested best practices to set you on the path toward success. Of course, your software needs are as diverse and individual as you are, but these best practices take some of the guesswork out of tailoring your communication security to your specific needs.

  • User Roles Should Be Limited. Defining the right user roles is an important part of keeping your data secure. Limiting the number of users with read and write access to your software is vitally important, and it’s always best to be conservative with your permissions and loosen them as needed, rather than to be generous with permissions and try to get the cat back in the bag down the road. Limiting user roles is one safeguard against privilege escalation, a type of cyber attack where a user exploits a flaw in your system to gain more permissions than originally intended. Keeping your access well-controlled limits your exposure to risks such as privilege escalation.
  • Keep a Record of All Servers That Use an API Key. This goes hand in hand with keeping logs of API key usage. Keeping a record of every server that tries to use an API key allows you to identify anomalous patterns. One of the benefits of API keys is that they allow you to keep track of the companies with API key access and which individuals at those companies control the keys.
  • Identify Optimization Areas. Having a strong idea of how your organization is structured and what your data needs are is the key to using API keys effectively. The best practice is to use only one API key for each system needing data access. By identifying your systems and data needs, you can identify the best way to segment your API key usage to eliminate overlap and redundancy.
  • Assign Roles to Key Holders. API key holders aren’t necessarily individual users. Create stratified permission categories by identifying who needs what permissions and to what systems. Remember that the key concept is to eliminate overlap as much as possible, but portioning out the appropriate roles to key holders requires a total view of what categories key holders fall into.
  • Generate Strong API Keys. The key to API security is to generate strong API keys. API keys should include alphanumeric and special characters, and a strong API key is randomly generated, unique, and non-guessable. For example, The Auth API uses a secure random generator designed for cryptography and recommends Base64 encoding over hex.
  • Store API Keys Securely. Fort Knox’s locks are no good if the keys fall into the wrong hands. Once API keys are generated securely, they must also be stored securely. Your keys need to remain verifiable, but to avoid keeping all your eggs in one basket, you should store only a hash value of each key. When a user sends an API request, their API key is hashed and compared against the stored hash value for an additional level of security. This ensures that even if your infrastructure is compromised, a bad actor can’t recover your users’ API keys.

Advantages of Securing Server-to-Server Communication

Protecting your most vital information is no small task, and you may be wondering if all this fuss about API keys is worth your time.

Your security is your greatest asset: without it, your most precious data can be stolen, tampered with, or abused. Keeping communication between servers secure is a top priority not only for your safety but also for your users’ safety. API calls now make up 83% of all web traffic, meaning your API-to-API security is more important today than it ever has been. There are many advantages to maintaining secure server-to-server communication:

1. Anomaly Detection

Having secure server-to-server communication lets you keep a close eye on your API key usage. By leveraging the tools at your disposal to analyze trends and keep track of the programs, users, and companies that use your API, you can ensure that your greatest asset is in only the best hands. 

2. Identity Detection

With an expanding repertoire of security circumvention tools at their disposal, bad actors are adept at gaining access to your infrastructure. API keys allow you to identify which users are compromised so that you can revoke permissions and make the appropriate changes. Technology is still a fundamentally human discipline, and the humans you work with are still vulnerable to deception and carelessness. Almost all cyber-attacks involve social engineering and using keys to maintain API security ameliorates the human error factor in keeping your data secure. Using API keys to maintain secure server-to-server communication allows you to stay in the driver’s seat of your security.

3. Maintain Security, Maintain Trust

Stewardship of your data and your client’s data is a key to building trust. A shocking 41% of companies have over 1,000 files of sensitive data such as credit card numbers and health records that are left completely unprotected. With the unprecedented rise of cybercrime post-Covid, developers need to be more aware now than ever that their security is a top priority.

Putting API Keys into Practice with The Auth API

It’s one thing to understand the technology and another to know how to act on it. API security may sound ethereal, but when an average data breach costs over $8 million, the impact of secure server-to-server communication suddenly becomes very tangible.

Now that you know all about API keys, best practices, and why server-to-server security is so important, what are the tangible steps you can take to implement these ideas with The Auth API?

  • Identify Your Needs. No two organizations are exactly alike. Identify what users and permission levels make the best use of your resources.
  • Choose The Plan That Fits Your Organization. The Auth API offers plans for single-access API platforms all the way up to large teams. Once you’ve identified your API security needs, select the best plan for your unique organization.
  • Generate and Store API Keys. Use The Auth API’s cryptographic key generation to create and store secure API keys for your users.
  • Record and Manage Your Users. With The Auth API, there’s no need to spend your days poring over records: their third-party-audited platform detects bad actors and sends you an email every month detailing the regular usage of your API keys, alerting you immediately when anomalies are detected.
  • Generate, Refresh, and Destroy Keys. Keys can become compromised for any number of reasons. Whether through a malicious MITM attack, simple negligence, or staff turnover, you may need to generate new keys, refresh old ones, or delete users entirely. The Auth API makes this process painless by enabling you to perform these tasks with the flip of a switch.
  • Protect Yourself from Malicious Attacks. From social engineering to permission escalation to man-in-the-middle attacks, you already know how using API keys can help keep your API secure, but did you know that The Auth API also monitors for and prevents Distributed Denial of Service (DDoS) attacks too? By allowing you to set up granular rate limiting, you can prevent costly and disruptive attacks from succeeding.
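Granular rate limiting like the kind mentioned above can be sketched with a simple fixed-window counter per API key. This is a hedged illustration; the window size and limit are arbitrary, and production systems typically use sliding windows or token buckets instead:

```javascript
// Fixed-window rate limiter keyed by API key. `limit` requests are allowed
// per `windowMs` milliseconds; further requests in the window are rejected.
function makeRateLimiter(limit, windowMs) {
  const windows = new Map(); // apiKey -> { start, count }
  return function allow(apiKey, now = Date.now()) {
    const w = windows.get(apiKey);
    if (!w || now - w.start >= windowMs) {
      windows.set(apiKey, { start: now, count: 1 }); // start a fresh window
      return true;
    }
    w.count += 1;
    return w.count <= limit;
  };
}
```

A caller that exceeds the limit would typically receive an HTTP 429 response until its window resets.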

All of this starts with The Auth API. Take the guesswork out of your API security by using their state-of-the-art platform to manage your API keys and take charge of your most precious asset: your security. Take control today by starting a free trial or booking a demonstration to see how their API platform can revolutionize your API security.

How Do You Know Who’s Using Your API Keys? https://theauthapi.com/articles/how-do-you-know-whos-using-your-keys/ Thu, 26 May 2022 21:27:05 +0000 https://theauthapi.com/?p=456 Managing and monitoring your API key usage is no simple task. However, The Auth API offers a platform that records API metrics, performance and more.

The second article in this series covered everything you need to know about access control. Once your security protocols and commands are in place, you’ll need to understand how to monitor your API keys’ usage to ensure bad actors are not taking advantage. 

However, there are challenges associated with identifying who is using your keys, understanding API value for authorized users, and targeting the correct metrics to monitor. You can solve these concerns with the proper analytics, but with so much information to digest, you may find yourself overwhelmed with how to interpret it. 

This guide will answer these complex questions and give you a holistic understanding of monitoring your API. After all, survey data indicates that 35% of organizations focus on API performance. 

What Is API Monitoring? 

API monitoring is a method of observing API keys in applications to garner insight into their performance, availability, and responsiveness to evaluate your API’s functionality. It is a cybersecurity best practice that helps organizations locate any outages or subpar performing API calls, leading to application, website, or adjacent services failure, which negatively impacts the user experience. 

API monitoring identifies issues directly from your API testing and monitoring solution by analyzing and providing visibility into API problems. API key monitors facilitate the quick identification, prevention, and resolution of issues through alerts and anomaly detection before they snowball into larger ones. 

An anomaly is an event that deviates from normal behavior. For example, if you’re using a banking application and consistently withdraw $100 per month from your savings, a sudden withdrawal of $10,000 would be considered an anomaly. 

API monitoring tools utilize machine learning (ML), a subset of artificial intelligence, to detect such anomalies. ML applies time-series analysis, meaning it examines observations recorded sequentially over time and correlated in time. This is how API monitoring determines whether a request was triggered by a mechanism different from normal, which may signal bot or bad-actor activity. 

Afterwards, the software will alert authorized users. Alerting is a reactive solution by a monitoring system that triggers when an API check fails. 

Why You Should Monitor Your API Key Usage

API monitoring allows you to determine API availability, behavior, and functional correctness. API keys are the white blood cells of your application or website: they determine who is allowed to have access to an API. Their failure can therefore trigger a complete system failure if not properly managed. 

If you don’t have a clear understanding of what is going on behind the scenes, you’ll inadvertently create blind spots in your organization and endanger key components of running your business. If your business utilizes APIs in any capacity or provides them as a service, it is essential to ensure they are available, responsive, and secure. 

The API Monitoring Process

API monitoring software triggers the API based on predetermined parameters or from multiple locations and records performance timings and response details. The API monitoring process consists of the following steps:

  • Configure – This step is all about defining the parameters for the API, such as HTTP method, URL, request details, location, and expected values. This sets the framework for the monitored API. 
  • Run – Next, the software periodically triggers the API from the specified location with the preconfigured parameters. Afterwards, the system records those results. 
  • Alert – The app cross-checks the results against the expected values from the configuration step. The system will send alerts to the user if the results do not match the expected value. 
  • Report – The final step is for the system to produce reports on response times and availability over a certain period for long-term analysis. 
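The configure, run, and alert steps above can be sketched as follows, assuming Node 18+ where fetch is a global. The endpoint, threshold, and alert mechanism are placeholders, not a real monitoring product’s API:

```javascript
// One monitoring cycle: run a configured check, record latency, and flag a
// failure for alerting. URL and limits are placeholder configuration.
const check = {
  url: "https://api.example.com/health", // hypothetical endpoint
  expectedStatus: 200,
  maxLatencyMs: 1000,
};

async function runCheck(cfg, fetchFn = fetch) {
  const start = Date.now();
  try {
    const res = await fetchFn(cfg.url);
    const latencyMs = Date.now() - start;
    const ok = res.status === cfg.expectedStatus && latencyMs <= cfg.maxLatencyMs;
    return { ok, status: res.status, latencyMs }; // feed into alerts/reports
  } catch (err) {
    return { ok: false, error: String(err) };     // network failure: alert
  }
}
```

A scheduler would run this periodically from each configured location, alert when `ok` is false, and aggregate the results into long-term reports.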

Metrics to Consider for API Key Usage 

You should pay attention to certain metrics to determine if your API delivers value to its users and avoid focusing on vanity metrics. 

Requests Per Minute (RPM) 

As an API metric, RPM is straightforward. It refers to measuring the number of requests your API handles per minute. This number will change depending on the date and time, so its primary use is to establish a baseline of request numbers. 

Rate of Failure 

It’s critical to understand how many times failure will occur. API technology is not fool-proof, and like any other software, it can and will fail. If you know that your API can fail, deciding on a course of action is easier. You’ll be able to determine if an alternative scenario is best or if utilizing a different API service is the proper choice.  

Latency 

Network latency, measured in milliseconds, is the time needed for a request or data to travel from its source to its destination. The goal is for your latency to be as close to zero as possible; the higher your latency, the worse the user experience.  

API Uptime 

API uptime is calculated based on the window of time a server is available during a select period. This metric lets you check if a request can be successfully sent to the API endpoint and garner a response with the expected HTTP status code. 
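As a worked example (the figures are illustrative), uptime is simply the fraction of a measurement window during which the API was available:

```javascript
// Uptime as a percentage of a measurement window. Example figures only.
function uptimePercent(totalMinutes, downMinutes) {
  return ((totalMinutes - downMinutes) / totalMinutes) * 100;
}

// A 30-day month has 43,200 minutes; 43.2 minutes of downtime is 99.9% uptime.
```

Framing downtime in minutes per month makes service-level targets like "three nines" concrete for both business and engineering stakeholders.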

CPU Usage and Memory

Measure the impact your APIs have on your servers. Two important metrics for determining this impact are CPU and memory usage. High CPU usage can mean the server is overloaded, creating bottlenecks that negatively affect the user experience. 

Memory usage shows how much of your resources you are consuming. Together, these metrics help you decide whether you can safely downsize a machine or need to upgrade it to relieve the load and avoid bottlenecks.
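On a Unix-like server, both figures can be sampled with the standard library alone. A sketch; note that `os.getloadavg` and the `resource` module are Unix-only, and `ru_maxrss` is reported in kibibytes on Linux but bytes on macOS:

```python
import os
import resource

def resource_snapshot():
    """Sample system load averages and this process's peak resident memory."""
    load_1m, load_5m, load_15m = os.getloadavg()   # system CPU load (Unix only)
    usage = resource.getrusage(resource.RUSAGE_SELF)
    return {"load_1m": load_1m,
            "load_5m": load_5m,
            "load_15m": load_15m,
            "peak_rss": usage.ru_maxrss}           # KiB on Linux, bytes on macOS
```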

Time to First Hello World (TTFHW)

Developers will recognize the name: “hello world” is traditionally the first program written in a new language. TTFHW measures the time a user needs to make their first successful API transaction after landing on your web page.
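Given signup and first-call timestamps from your analytics, TTFHW per developer is a simple difference. A Python sketch; the dictionary shapes keyed by developer ID are assumptions:

```python
from statistics import median

def ttfhw_seconds(signups, first_calls):
    """Seconds from signup to first successful API call, per developer."""
    return {dev: first_calls[dev] - signups[dev]
            for dev in signups if dev in first_calls}

def median_ttfhw(signups, first_calls):
    """Median TTFHW across all developers who made a first call."""
    return median(ttfhw_seconds(signups, first_calls).values())
```

The median is usually more informative than the mean here, since a few developers who stall for days would otherwise dominate the average.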

Ensuring Your API Is Valuable 

In the past, APIs were middleware responsible for integrating and exchanging data across multiple systems. API efforts were a product of the IT division, rendering the taxonomy (hierarchical framework) used to categorize them as non-intuitive and technical. This prohibited business stakeholders from engaging in API prioritization and design.

In today’s landscape, industry-leading businesses are defining their API taxonomy in a common language that business and IT units understand. The key is to discern between APIs that directly serve the business against those that enable technical functionality. Streamlined taxonomy can significantly reduce API analysis time and increase adoption and value realization.

Proper taxonomy allows business and IT personnel to discuss which APIs directly steer customer experiences (business) and which are part of the infrastructure that delivers those experiences (technical). In other words, more efficient categorization of APIs makes it far easier to see whether an API adds value to the customer experience. 

Now that we’ve covered API monitoring, let’s dive into some of the challenges associated with determining who is using your keys and best practices for preventing bad actors from accessing your API based on the metrics above. 

Challenges of API Security 

APIs may use similar frameworks and models, but their data protocols still differ. Effective API security therefore has to interpret the various data formats and languages that different protocols use, as well as the intention behind each request. That is easier said than done, for the reasons below.  

APIs Are Unique 

API protection is about understanding the design principles behind the API. This requires sorting and understanding permutations and code layers formed by human variation and technological intricacy. 

Common foundational API technologies include XML-based protocols and gRPC. However, there is no way to predict how a developer designed any given app, so the types of markup, data, and the application logic itself will vary.

Layers of code in an API are often deep, and parsers require a great deal of data to decode an app’s design. The challenge is that one layer may contain JSON while another contains a different encoding entirely. 

Service API Management 

Many software-as-a-service (SaaS) platforms are only available via API. These service APIs add security obstacles because of their varied security and authentication models and high data volume. With a service API, the two ends of the connection belong to two different businesses, so separate security and authentication models are necessary to protect each entity.

Poor Communication 

To create rules for API protection, security teams must identify what a specific API endpoint should do and how. This information should come from the development team but often disappears in cross-functional communication.

To secure API keys, you must understand how the API should function, which requires careful documentation. However, developers don’t always properly prepare documentation for APIs. If documentation isn’t reliable, aligning security and business objectives for the API is difficult. If these objectives aren’t aligned, security can block or allow the wrong things. 

Protecting Internal APIs  

As technology evolves, APIs are going internal, which means your security needs to protect both internal and front-facing APIs. By adding users, keys, or additional access to a front-facing API, you risk making internal APIs vulnerable. The management of internal APIs, their security, and how they behave with other APIs must be given the same consideration as the external ones.

API Evolution 

Developers must change the way they think about API security in an evolving landscape. Say a developer wants to guard a login API against credential-based attacks while blocking all bots. If all of the API’s clients are automated tools, however, those clients technically operate as bots. Mobile API usage likewise sees heavy traffic from technical bots, which makes discerning “good” bots from “bad” ones even harder. Customer automation is essentially legitimate bot traffic, yet from a traditional security perspective all bots look and behave the same. Bot protection for APIs is not as simple as it seems.

The challenges in API key security arise because APIs are no longer simple, front-facing interfaces. They are nuanced, shaped by human variation, and growing more layered every day. Developers and IT departments should keep these obstacles in mind, since they make identifying who uses your API keys more challenging. 

How Bad Actors Can Compromise Your API Keys 

Being aware of the challenges of monitoring API key security is one thing; understanding how bad actors compromise your keys is another. Let’s cover some of the most common attacks. By some estimates, 91% of businesses have experienced some form of cyberattack.  

DDoS Attacks 

A distributed denial of service (DDoS) attack is when hackers attempt to overwhelm an API by flooding the target system’s bandwidth or by sending a large amount of information in each request. You can identify this type of attack by knowing your API’s normal usage and watching for alerts from your API monitoring tools that signify an anomaly (a larger-than-normal number of requests). 
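One simple way to encode “larger than normal” is a threshold a few standard deviations above the historical per-minute baseline. A hedged sketch; the window size and the `k = 3` threshold are assumptions to tune:

```python
from statistics import mean, stdev

def is_anomalous(history, current, k=3.0):
    """Flag the current minute's request count when it sits more than
    k standard deviations above the historical per-minute baseline."""
    return current > mean(history) + k * stdev(history)
```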

MITM Attacks 

A man-in-the-middle (MITM) attack discreetly intercepts, relays, and alters requests and messages between two parties to steal sensitive information. Bad actors insert themselves as the middleman by intercepting the session token an API issues to a user in an HTTP header. Once the hacker has that token, they can access the user’s account and steal sensitive information. API monitoring tools can detect such activity and alert authorized users. 

Injection Attacks 

Injection attacks typically target applications built on poorly developed code. To gain access, the hacker injects malicious code into the software (e.g., SQL injection and cross-site scripting). Two-factor authentication (requiring a second credential to gain access) and encryption can deter bad actors from injecting malicious code. 
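Parameterized queries are the standard defense against SQL injection: the driver treats user input as data, never as SQL. A minimal sketch using Python’s built-in sqlite3 module (the table and data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user(name):
    # The ? placeholder makes the driver treat the input as a literal
    # value, so a payload like "' OR '1'='1" cannot rewrite the query.
    cur = conn.execute("SELECT name FROM users WHERE name = ?", (name,))
    return cur.fetchall()
```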

The Power of Notifications 

Alerts are only useful if you actually receive them. Your company should use an API monitoring platform that can ping you in a third-party channel such as email or Slack when new keys are created, or provide updates on who has been using your API keys. 
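For example, a Slack incoming webhook accepts a small JSON body via HTTP POST. A hedged sketch that only builds the payload; the message format is an assumption, and posting it to your webhook URL is a separate HTTP call:

```python
import json

def key_created_payload(key_name, created_by):
    """Build the JSON body for a 'new API key created' webhook alert."""
    return json.dumps({
        "text": f"New API key '{key_name}' created by {created_by}"
    })
```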

Our hacker detection feature will alert you when anomalies arise, allowing you to take the proper action to regulate your API key usage. 

Take control of your API performance by signing up for a free trial today!  

The post How Do You Know Who’s Using Your API Keys? appeared first on The Auth API solves API key distribution, API key lifecycle, event logging, rate-limiting, hook events, cache control and anomaly detection..
