KeyRunner - API Client - Visual Studio Marketplace
Extension for Visual Studio Code - The Zero Trust API Client
Have you tried tools that generate tests or docs automatically from your schema?
Apidog, Scalar, and Postman AI are a few examples, but I'm wondering what's working best for people building modern APIs.
Curious what everyone else is using in 2025!
What are the best tools/libraries at the moment to implement a JSON:API server based on a schema?
Merc.
Hey everyone,
I’ve been using Postman for about 3 years now for API testing and documentation in our small dev team. Recently though, I’m finding it increasingly bloated and resource-heavy - my laptop fan goes crazy whenever I have it open for more than 20 minutes. Plus, since they’ve moved more features behind the paid plans, I’m wondering if it’s time to look elsewhere.
We mainly use it for REST API testing, environment management, and occasionally for documenting our APIs for the frontend team.
Has anyone switched from Postman to something else recently? What are you using, and how does it compare?
Thanks in advance!
You’re not alone – Postman’s increasing bloat and the push towards paid features are common complaints I’ve heard (and experienced myself!). It sounds like you’re at the point where the trade-offs are no longer worth it.
I’ve explored a few alternatives lately, and here are two that I think address your specific needs for REST API testing, environment management, documentation, and a lighter resource footprint:
Apidog: For me, Apidog is the clear winner because it offers so much in one tool. It's also secure, since your data can be stored on the local machine. If you want a modern, efficient, and collaborative API tool, it's definitely worth exploring.
Hoppscotch: This is a solid, open-source option that’s designed to be lightweight and fast. It’s a web-based app (but runs nicely as a PWA), so it doesn’t consume as many system resources as a full desktop application. It handles REST APIs well, and has decent environment management features.
Key Differences:
Ultimately, the best way to decide is to try them out with your specific workflows and APIs. But based on your description, I think Apidog or Hoppscotch will be a good fit for you.
I have been working with JSON:API for a caching-heavy application and wondered if there's any standard or discussion around including field-level expiry metadata in JSON:API responses.
Right now, Cache-Control and ETags can manage whole-resource expiration, but in many real-world APIs certain attributes (like price, stock, or location) update more frequently than others.
Embedding expiry metadata at the attribute level could help clients make smarter refresh / polling decisions.
I am aware this might go beyond the core spec, but I'm curious if anyone has tackled this use case. For example, imagine a product API where the description rarely changes but the price changes every 60 seconds; sending a meta.expires_at per attribute could reduce unnecessary data fetches.
I have not seen this discussed widely and wonder if it conflicts with the spec's emphasis on simplicity and consistency.
I checked JSON:API — Extensions and Profiles for reference.
Is this something already explored through meta / extension usage in practice? Would this kind of fine-grained expiry violate the spirit of JSON:API, or could it be proposed as an optional extension?
I would love to hear your thoughts or see if others have solved this problem differently.
Thank you!
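To make the idea concrete, here is a sketch of how a client might consume such attribute-level expiry metadata. The meta.expires shape, the products resource, and the stale_attributes helper are all hypothetical, invented for illustration; nothing here is defined by the JSON:API spec.

```python
import json
from datetime import datetime, timezone

# Hypothetical response shape: per-attribute expiry carried in the
# resource-level "meta" member (not part of the core JSON:API spec).
doc = json.loads("""
{
  "data": {
    "type": "products",
    "id": "1",
    "attributes": {"description": "A widget", "price": 9.99},
    "meta": {
      "expires": {
        "price": "2025-01-01T00:00:30Z",
        "description": "2025-06-01T00:00:00Z"
      }
    }
  }
}
""")

def stale_attributes(resource, now):
    """Return the attribute names whose expiry timestamp has passed."""
    expires = resource.get("meta", {}).get("expires", {})
    stale = []
    for name, ts in expires.items():
        if datetime.fromisoformat(ts.replace("Z", "+00:00")) <= now:
            stale.append(name)
    return stale

now = datetime(2025, 3, 1, tzinfo=timezone.utc)
print(stale_attributes(doc["data"], now))  # → ['price']
```

A client could then re-fetch (or sparse-fieldset-fetch) only the stale attributes instead of the whole resource.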
§ 5.4.1 Rules for Extensions
An extension MAY (…) define new object members as described below.
The specification mentions extension members explicitly in some places where further processing rules apply based on the existence or absence of extension members.
Let’s look at an example:
§ 7.1 Top Level
A document MUST contain at least one of the following top-level members:
- data: the document's "primary data".
- errors: an array of error objects.
- meta: a meta object that contains non-standard meta-information.
- a member defined by an applied extension.
If extension members weren't explicitly listed in this enumeration, documents containing only members declared by extensions would be invalid.
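The rule above can be sketched as a tiny validation check. The has_valid_top_level helper is made up; "atomic:results" is the result member defined by the atomic operations extension, used here as the example extension member.

```python
def has_valid_top_level(doc, extension_members=()):
    """A document MUST contain at least one of data, errors, meta,
    or a member defined by an applied extension (spec § 7.1)."""
    allowed = {"data", "errors", "meta", *extension_members}
    return any(member in doc for member in allowed)

# Without recognizing the extension member, this document would be invalid:
doc = {"atomic:results": []}
print(has_valid_top_level(doc))                      # → False
print(has_valid_top_level(doc, {"atomic:results"}))  # → True
```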
Organization Schema Markup | Google Search Central | Documentation | Google for Developers
"contactPoint": {
  "@type": "ContactPoint",
  "telephone": "+9-999-999-9999",
  "email": "[email protected]"
}
It's OK to have a type that isn't the same name as the entity in entities that have a composite relationship (if I understand it correctly). For example:
"address": [{
  "@type": "PostalAddress",
  "streetAddress": "999 W Example St Suite 99 Unit 9",
  "addressLocality": "New York",
  "addressRegion": "NY",
  "postalCode": "10019",
  "addressCountry": "US"
},{
  "streetAddress": "999 Rue due exemple",
  "addressLocality": "Paris",
  "postalCode": "75001",
  "addressCountry": "FR"
}]
Extension for Visual Studio Code - The Zero Trust API Client
a security-first API client with built-in redaction capabilities, compliant with HIPAA, SOC2, GDPR
The filter query parameter family should be used for this. How filtering is implemented is up to each implementation. It may be specified by a profile.
Filtering a collection based on attributes of related resources (or their existence) is a common use case. Reusing the concept of relationship paths defined by the specification for the include query parameter is a good choice in my opinion. Doing so ensures that the filtering semantics are well aligned with the rest of the specification.
I think someone should formalize that strategy as a profile at some point in time.
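As a sketch of that idea (reusing relationship paths as filter keys), a client might build URLs like this; the filter-key convention shown is an assumption of such a profile, not something the base spec defines.

```python
from urllib.parse import urlencode

def build_filter_url(base, filters):
    """Build a URL using the filter query parameter family, with
    relationship paths (as used by the include parameter) as keys.
    This key convention is a hypothetical profile choice."""
    params = {f"filter[{path}]": value for path, value in filters.items()}
    return base + "?" + urlencode(params)

url = build_filter_url("/articles", {"author.name": "Dan",
                                     "comments.approved": "true"})
print(url)
# → /articles?filter%5Bauthor.name%5D=Dan&filter%5Bcomments.approved%5D=true
```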
I’ve been working on a project that uses JSON:API to serve structured data from our backend, and it’s been great for keeping things standardized and clean. Now, my team wants to use that same data for reporting and visualization, and we’re exploring different BI tools.
One question I have is: what’s the best way to connect JSON:API-based data to platforms like Power BI? Has anyone here set up something similar? I know Power BI supports REST APIs, but I’m wondering how well it plays with JSON:API specifically.
While researching this, I also came across an article explaining Power BI architecture, and it gave me a better understanding of how data flows from source to dashboard. But I’d still love real-world input from developers who’ve connected APIs like ours to BI tools.
Are there middleware tools or patterns that help bridge this gap? I’d appreciate any guidance or examples!
Thanks in advance,
J Mathew
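One common bridge pattern here is to flatten JSON:API documents into tabular rows before handing them to a BI tool, since tools like Power BI work best with flat tables. A minimal sketch, with a hypothetical to_rows helper and made-up orders data:

```python
def to_rows(document):
    """Flatten JSON:API primary data into flat dicts suitable for a
    BI tool's tabular import; to-one relationships are reduced to
    foreign-key columns."""
    data = document["data"]
    resources = data if isinstance(data, list) else [data]
    rows = []
    for res in resources:
        row = {"id": res["id"], "type": res["type"]}
        row.update(res.get("attributes", {}))
        for name, rel in res.get("relationships", {}).items():
            linkage = rel.get("data")
            if isinstance(linkage, dict):  # to-one relationship
                row[f"{name}_id"] = linkage["id"]
        rows.append(row)
    return rows

doc = {"data": [{"type": "orders", "id": "7",
                 "attributes": {"total": 42},
                 "relationships": {
                     "customer": {"data": {"type": "customers", "id": "3"}}}}]}
print(to_rows(doc))  # → [{'id': '7', 'type': 'orders', 'total': 42, 'customer_id': '3'}]
```

The resulting rows can be served from a small proxy endpoint, or produced in Power Query after fetching the raw JSON.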
Appreciate any advice!
I’m trying to implement pagination in my JSON:API response, but I’m having trouble with the “next” and “previous” links. They aren’t showing up as expected, even though I’m following the pagination guidelines. I’m using version 1.0.0 of the specification. Has anyone run into this issue or have any tips on how to fix it?
Thanks in advance for any advice!
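For what it's worth, next/prev links are typically omitted entirely on the first and last page, so their absence can be correct behavior rather than a bug. Here is a sketch of how a server might build a page-based links object; pagination_links is a hypothetical helper, and the page[number]/page[size] names follow the spec's example strategy.

```python
from urllib.parse import urlencode

def pagination_links(base, number, size, total):
    """Build a JSON:API top-level 'links' object for page-based
    pagination. 'next' and 'prev' are omitted at the boundaries,
    which is why they may legitimately be missing from a response."""
    last = max(1, -(-total // size))  # ceiling division
    def link(n):
        return base + "?" + urlencode({"page[number]": n, "page[size]": size})
    links = {"self": link(number), "first": link(1), "last": link(last)}
    if number > 1:
        links["prev"] = link(number - 1)
    if number < last:
        links["next"] = link(number + 1)
    return links

print(pagination_links("/articles", number=1, size=10, total=25))
```

If your links are missing on a middle page, check that the server actually knows the total count (or at least whether another page exists) when rendering the document.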
Key Questions for Discussion:
What are the most common challenges you face when validating JSON data?
How do you approach debugging validation errors in complex schemas?
Are there specific features or tools you wish existed to make JSON validation easier?
Why This Matters:
JSON validation is critical for ensuring data integrity, especially in large-scale systems where data exchange happens across multiple platforms. By sharing best practices and discussing innovative tools, we can collectively improve the development experience and reduce errors.
A New Resource in the Community:
This new website offers a streamlined approach to validating JSON data against schemas. It seems like a promising addition to the tools available for developers working with JSON. Have any of you tried it yet? If so, what are your thoughts on its features or usability?
Feel free to share your experiences, ideas, or even examples of tools or techniques that have worked well for you. Let’s collaborate to make JSON validation simpler and more effective for everyone!
Do try this out … Have been using it for a while now!
Apidog offers a comprehensive solution that covers API design, debugging, testing, and documentation in a more resource-efficient package. You could give it a try!
Every resource has an ID. The ID identifies the resource together with the resource’s type.
The ID is immutable. It’s set by the client or server when creating the resource. IDs generated by the client are discussed in 9.1.1 Client-Generated IDs.
The local identifier (lid) allows identifying resources within a JSON:API document that don’t have an ID yet. This is the case when creating resources with server-generated IDs.
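A minimal sketch of a client-generated ID (spec section 9.1.1): the client supplies a UUID as id up front, so no lid is needed at all. The photos type and title attribute are illustrative.

```python
import uuid

# The client mints the ID itself and sends it in the creation request;
# the server can then use this same ID for all subsequent requests.
new_id = str(uuid.uuid4())
request = {
    "data": {
        "type": "photos",
        "id": new_id,
        "attributes": {"title": "Ember Hamster"},
    }
}
print(request["data"]["id"] == new_id)  # → True
```

Note that a server is allowed to reject client-generated IDs with 403 Forbidden, so this only works if your server opts in.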
Or can it be the ID on the client?
What I’d really like to do is allow the client to send its own identifier for the resource in all calls and have the server use that (plus a client identifier) to uniquely identify the resource on the server.
Is that totally going against the spec? I see it is possible in a POST by using lid, but for a GET or PATCH the client must provide the id?
Thanks
Dave
One thing to take note of is that the JSON:API specification is open to most kinds of pagination.
When working with large datasets, implementing efficient pagination is crucial for performance and usability. JSON:API provides guidelines for pagination, but there are multiple approaches, such as page-based, offset-based, and cursor-based pagination. What are the best practices for implementing pagination while maintaining compliance with JSON:API standards?
One challenge developers face is balancing performance with usability. For example, cursor-based pagination is more efficient for large datasets but can be complex to implement. On the other hand, offset-based pagination is simpler but may become slow with large data tables. How do you determine the best approach for your specific use case?
I have checked https://jsonlint.com/mastering-json-format- and the Salesforce Developer documentation for reference.
I’d love to hear insights from those who have implemented pagination in JSON:API. What challenges did you encounter, and how did you resolve them? Are there recommended libraries or techniques that simplify pagination while ensuring consistency with JSON:API specifications?
Thank you!
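As a rough illustration of the trade-off, here is a minimal in-memory cursor-pagination sketch: the cursor encodes the last item's ID so the next page can seek past it instead of scanning an offset. All names are made up; JSON:API deliberately leaves the page[...] strategy up to the server.

```python
import base64
import json

def encode_cursor(last_id):
    """Opaque cursor encoding the sort key of the last item on a page."""
    return base64.urlsafe_b64encode(json.dumps({"after": last_id}).encode()).decode()

def decode_cursor(cursor):
    return json.loads(base64.urlsafe_b64decode(cursor))["after"]

def page(items, size, cursor=None):
    """Return one page plus the cursor for the next page (None at the end).
    With a database, the seek would be a WHERE id > :after clause."""
    after = decode_cursor(cursor) if cursor else None
    start = 0
    if after is not None:
        start = next(i + 1 for i, it in enumerate(items) if it["id"] == after)
    chunk = items[start:start + size]
    next_cursor = encode_cursor(chunk[-1]["id"]) if start + size < len(items) else None
    return chunk, next_cursor

items = [{"id": n} for n in range(1, 8)]
first, cur = page(items, size=3)
second, _ = page(items, size=3, cursor=cur)
print([i["id"] for i in first], [i["id"] for i in second])  # → [1, 2, 3] [4, 5, 6]
```

Offset pagination replaces the seek with a slice at a fixed position, which is simpler but degrades as the offset grows and can skip or repeat items when rows are inserted mid-scan.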
Regarding the included top-level member: would you agree the following is valid?
{
  "data": {
    "type": "carts",
    "attributes": {
      "note": "Some note about cart"
    },
    "relationships": {
      "products": {
        "data": [
          {
            "type": "products",
            "lid": "some-uuid"
          }
        ]
      }
    }
  },
  "included": [
    {
      "type": "products",
      "lid": "some-uuid",
      "attributes": {
        "quantity": 5
      }
    }
  ]
}
endpoint name: /v2/schoolUnits/{school-unit-code}
response:
{
  "data": {
    "type": "schoolUnits",
    "id": "1",
    "attributes": {
      // … this schoolunit's attributes
    },
    "relationships": {
      // … this schoolunit's relationships
    }
  }
}
Other examples of type: address becomes addresses, email becomes emails, phoneNumber becomes phoneNumbers, enrolment becomes enrolments, and person becomes persons (or people?).
Under 6.3 Server Responsibilities it says:
If all instances of that media type are modified with a media type parameter other than ext or profile, servers MUST respond with a 406 Not Acceptable status code.
In other words, including anything other than ext= or profile= in the content type is prohibited.
So while I fully agree versioning should not be part of the specification, I don't think the spec should prohibit versioning methods either; yet it does actually prevent versioning via content negotiation.
I’m trying to figure this out myself currently as well and I’m leaning towards an X-API-Version header because I don’t like there being more than one URI that identifies the same resource.
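A sketch of the rule quoted above, as a server-side check. The parsing here is deliberately naive (split on commas and semicolons); a real server needs full Accept-header parsing per RFC 9110, including quoted strings and q-values.

```python
def acceptable(accept_header):
    """Return True unless every instance of the JSON:API media type in
    the Accept header carries a parameter other than ext or profile,
    in which case the server must answer 406 Not Acceptable (§6.3)."""
    instances = [m.strip() for m in accept_header.split(",")
                 if m.strip().startswith("application/vnd.api+json")]
    if not instances:
        return True  # other media types are out of scope for this rule
    for instance in instances:
        params = [p.strip() for p in instance.split(";")[1:]]
        if all(p.split("=")[0] in ("ext", "profile") for p in params):
            return True  # at least one acceptable instance exists
    return False

print(acceptable("application/vnd.api+json"))             # → True
print(acceptable("application/vnd.api+json; version=2"))  # → False (406)
```

This is exactly why a version=2 media type parameter cannot be used for versioning against a conforming server.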
/shopping-carts/{country}/{session-id}
/shopping-carts/{country}/{session-id}/items/{item-id}
/api-specifications/{docker-image-id}/apis/{path}/{file-name}
/api-specifications/{repository-name}/{artifact-name}:{tag}
/article-size-advices/{sku}/{sales-channel}
https://opensource.zalando.com/restful-api-guidelines/
The standard best practice for REST APIs is to use a hyphen, not camelCase or underscores. This comes from Mark Masse's "REST API Design Rulebook" from O'Reilly.
It is recommended to use spinal-case (which is highlighted by RFC 3986); this case is used by Google, PayPal, and other big companies.
Microsoft says: DO use kebab-casing (preferred) or camel-casing for URL path segments. If the segment refers to a JSON field, use camel casing.
api-guidelines/azure/Guidelines.md at vNext · microsoft/api-guidelines · GitHub
It seems to me that Microsoft recommends kebab-casing in this case (my case), since I'm referring to a value in a field/attribute (type) and not to a JSON field?
Hyphens (and camelCase) as the value in the type field (attribute) feel weird, but I guess it has to be the same as the URI to be consistent.
Or use best of both worlds (combination).
If I understand it correctly, most (almost all) would recommend this solution:
endpoint name: /v2/school-units/{school-unit-code}
response:
{
  "data": {
    "type": "school-units",
    "id": "1",
    "attributes": {
      // … this schoolunit's attributes
    },
    "relationships": {
      // … this schoolunit's relationships
    }
  }
}
json:api recommends:
endpoint name: /v2/schoolUnit/{school-unit-code}
response:
{
  "data": {
    "type": "schoolUnit",
    "id": "1",
    "attributes": {
      // … this schoolunit's attributes
    },
    "relationships": {
      // … this schoolunit's relationships
    }
  }
}
best of both worlds:
endpoint name: GET /articles/1 HTTP/1.1
{
  "data": {
    "type": "article",
    "id": "1",
    "attributes": {
      // … this article's attributes
    },
    "relationships": {
      // … this article's relationships
    }
  }
}
endpoint name: /v2/school-units/{school-unit-code}
response:
{
  "data": {
    "type": "schoolUnit",
    "id": "1",
    "attributes": {
      // … this schoolunit's attributes
    },
    "relationships": {
      // … this schoolunit's relationships
    }
  }
}
I will have to think about whether we should follow JSON:API and use camelCase in the URI and in the value of the type attribute, or make an exception in this case, since most standards recommend hyphens (URLs are case sensitive, hyphens are good for SEO, and so on).
It is recommended that the URL for a collection of resources be formed from the resource type.
It is recommended that a relationship URL be formed by appending /relationships/ and the name of the relationship to the resource's URL.
In neither case does it mention transforming the resource type or the name of the relationship from camelCase to kebab-case or snake_case.
Microsoft Graph REST API is one example among many which use camelCase in URLs for REST APIs.
Thank you for your answer.
So the solution for our endpoint name is: /v2/schoolUnits/{school-unit-code}?
Are you sure the rule applies to URLs for resource collections? I know camelCase is standard for attributes, but I have never seen an endpoint with camelCase; not in any blog (best-practice post), forum, or source (like Fielding).
Some examples I could find: "Use plural nouns for collections"; "Don't: avoid camelCase or underscores."
Use hyphens (-) to improve the readability of URIs.
To make your URIs easy for people to scan and interpret, use the hyphen (-) character to improve the readability of names in long-path segments
http://api.example.com/devicemanagement/manageddevices/
http://api.example.com/device-management/managed-devices (this is the much better version)
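If you settle on camelCase resource types with kebab-case URL segments, the mapping between the two is mechanical. A small sketch; the helper name is made up, and the convention itself is a design choice, not a spec rule.

```python
import re

def to_kebab(name):
    """Convert a camelCase resource type to a kebab-case URL segment
    by inserting a hyphen before each interior uppercase letter."""
    return re.sub(r"(?<=[a-z0-9])(?=[A-Z])", "-", name).lower()

print(to_kebab("schoolUnits"))     # → school-units
print(to_kebab("managedDevices"))  # → managed-devices
```

Keeping one canonical form (the type) and deriving the other avoids the two drifting apart.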
What value should I use in the type attribute in the response if I use the above example?
I'm still not convinced that the JSON:API standard recommends camelCase.
I want to apply the JSON:API standard, but I can't find any good source with examples using plural nouns for a resource/type made up of two words.
Do you have any other source that recommends using camelCase in endpoints (and type)?
TIA!
Best regards
schoolUnits
Please find details about naming recommendation here: JSON:API — Recommendations
What is a good naming convention for endpoints and type for words (nouns) that consist of two words, for example SchoolUnit?
endpoint name: /v2/school-units/{school-unit-code}
response:
{
  "data": {
    "type": "schoolunit",
    "id": "1",
    "attributes": {
      // … this article's attributes
    },
    "relationships": {
      // … this article's relationships
    }
  }
}
Thank you!
I was wondering about the definition of “MAY” in this case, meaning can we actually require the attributes member to be present?
Yes. You can enforce the attributes member to be present implicitly by requiring a specific attribute to be present. I think in practice that’s the most common use case by far. Allowing a resource to be created without enforcing any attribute to be set is an edge case.
Having the attributes member as optional allows clients and servers to avoid unnecessary characters. Instead of having an empty object as value of the attributes member, they can skip the attributes member entirely.
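A sketch of that approach: the server never requires the attributes member itself, only specific attributes, which makes an absent attributes object equivalent to an empty one. The title field and the error-object details are illustrative.

```python
# Hypothetical server-side validation: required *attributes*, not a
# required *attributes member*. Field names are illustrative.
REQUIRED = {"title"}

def validation_errors(resource):
    """Return JSON:API error objects for a resource object in a
    creation request; an absent attributes member is treated the
    same as an empty one."""
    errors = []
    if "type" not in resource:
        errors.append({"status": "400", "detail": "type is required"})
    attributes = resource.get("attributes", {})  # attributes MAY be absent
    for field in REQUIRED - attributes.keys():
        errors.append({"status": "422",
                       "detail": f"attribute '{field}' is required",
                       "source": {"pointer": f"/data/attributes/{field}"}})
    return errors

print(validation_errors({"type": "articles"}))  # one error: missing 'title'
print(validation_errors({"type": "articles",
                         "attributes": {"title": "Hi"}}))  # → []
```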
This seems to imply that one should support the creation of resources (via a POST) where the ‘attributes’ member is not supplied at all, like:
...
"data": {
  "type": "mytype"
  // no "attributes": { ... }
}
For creating a resource (POST) the spec (9.1) says:
The request MUST include a single resource object as primary data. The resource object MUST contain at least a type member.
Then in 7.2:
In addition, a resource object MAY contain any of these top-level members:
attributes: an attributes object representing some of the resource’s data.
So my question is quite simply: should we handle scenarios where a POST request does not include attributes (or meta) in the resource object? Essentially this would be a POST to create a resource where no resource attribute fields were given (only type), and the created resource would use whatever defaults are defined. This might not make sense for specific resources.
Or, can we interpret the MAY statement to be that we may require attributes to be provided?
I think this is even more inconvenient if you consider the constraint about included data, which says to include a resource only once, even if there are multiple resource identifiers. So I have an algorithm which checks the data to ensure each resource appears in full only once. Now even this must be optional.
Anyway thanks for your answers and ideas, I’m gonna try push further with my solution and come back with some production feedback.
P.S. And don't get me wrong, I'm not here to gripe. I'm very grateful for all your work and effort. My team and I love the idea of batch operations, which atomic operations try to bring. That's why I'm so careful about it. I just want it to work.
“…lid) in a prior operation object.” What should the response be in that case?
The atomic operations extension defines the data member of a result object as “the “primary data” resulting from the operation.” So it should be the state of the resource after that specific operation. Not the state of the resource after performing all operations. At least in my reading.
I see that this may complicate client implementations. Those may need to calculate the state of the object after performing all operations.
atomic:results twice or more times?
- Is it right that this extension forbids some parts of the base specification, like data?
Yes. The data and included members are forbidden if the atomic operations extension is applied.
Great question!
A server cannot support the include query parameter if the atomic operations extension is applied. If the include query parameter is applied, the included member must be present in the response document. But that’s forbidden by the extension. Supporting the include query parameter would lead to conflicting processing rules.
The same applies to the sort query parameter. The base specification requires that primary data in the data member be sorted as requested. But that conflicts with the processing rules of the extension, which forbid that member.
For sorting, pagination and filtering it does not seem that clearly defined. But I think the intention of the base specification is clearly that those apply to the primary data encoded in data. A server should not support those query parameters if the atomic operations extension is applied.
I think this should be clarified in the extension.
I'm not sure about sparse fieldsets, to be honest. Applying them to the data within an atomic response object seems to make sense. And I don't see a processing rule within the base spec conflicting with that.
That's a very good question as well. I fear that case hasn't been considered when defining the extension. My first inclination would be that a server should reject a request if it targets the same resource twice. But that may not prevent the issue entirely. A request could still update a resource and that resource's relationships in the same request. And there might even be valid use cases for it, e.g. updating a resource's attribute and patching a to-many relationship of the same resource in a single request. @dgeb what are your thoughts on that case?
Assuming that the API follows the recommended URL naming schema: yes.
I'm currently working on integration of the atomic operations extension and I'm missing some guidelines in the specification.
Given the path /authors/123/relationships/comments, which means there is resource type authors, id 123, and relationship comments, is it the same as a ref with type: authors, id: 123, and relationship: comments? Thanks for even partial answers. This has really been bothering me lately.
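Assuming the API follows the recommended URL layout /{type}/{id}/relationships/{relationship}, the mapping the question describes can be sketched mechanically (href_to_ref is a made-up helper; the equivalence only holds under that URL convention):

```python
def href_to_ref(href):
    """Map a relationship URL back to an atomic operations 'ref',
    assuming the recommended layout /{type}/{id}/relationships/{name}."""
    parts = href.strip("/").split("/")
    if len(parts) == 4 and parts[2] == "relationships":
        return {"type": parts[0], "id": parts[1], "relationship": parts[3]}
    raise ValueError("not a relationship URL in the recommended layout")

print(href_to_ref("/authors/123/relationships/comments"))
# → {'type': 'authors', 'id': '123', 'relationship': 'comments'}
```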
Please find more information on it in this topic in the meantime: How are local IDs supposed to be used?
Is there an example of lid usage somewhere? I cannot imagine a use case where lid can somehow appear in a document and be useful. The only scenario which comes to my mind is when two different resource types are created, like creating a new order and a new order-item; then I could use lid as intended. But that is not possible in JSON:API: creating resources is possible only one by one. So even if a client wants to create multiple resources, it must call the server for each individually, so lid isn't necessary there (for pairing purposes).
The second thing about lid is, if I understand it correctly, that I have to return the client-provided lid with the newly created resource? Like I have to remember that string through the whole creation process and then add it to the resource? Isn't that a little bit heavy? Especially when, on the server, lid does not carry any useful linking information, as described before.
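For the record, the pairing scenario described (creating an order and an order-item in one request) is possible with the atomic operations extension, and that is exactly where lid earns its keep: the second operation references the first resource before it has a server-generated ID. A sketch with made-up types:

```python
# Sketch of an atomic operations request document where a later
# operation references a resource created earlier via its lid.
# Resource types and attributes are illustrative.
request = {
    "atomic:operations": [
        {"op": "add",
         "data": {"type": "orders", "lid": "new-order",
                  "attributes": {"note": "first order"}}},
        {"op": "add",
         "data": {"type": "orderItems",
                  "attributes": {"quantity": 2},
                  "relationships": {
                      "order": {"data": {"type": "orders",
                                         "lid": "new-order"}}}}},
    ]
}
print(request["atomic:operations"][1]["data"]["relationships"]
      ["order"]["data"]["lid"])  # → new-order
```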
Or if you want to see the original one, which somehow disappeared, here it is:
FAQ: JSON:API — Frequently Asked Questions
Link: https://jsonapi.org/schema
I searched around for a while and came up empty. I'm hoping someone knows where I can find a copy.
Having this document would be very helpful in implementing a JSON:API server. Thanks for your time.