Kreya Blog https://kreya.app/blog/ Kreya Blog Mon, 30 Mar 2026 00:00:00 GMT https://validator.w3.org/feed/docs/rss2.html https://github.com/jpmonette/feed en <![CDATA[Getting Started with GraphQL in Kreya]]> https://kreya.app/blog/getting-started-with-graphql/ https://kreya.app/blog/getting-started-with-graphql/ Mon, 30 Mar 2026 00:00:00 GMT With the release of Kreya v1.19.0, support for GraphQL was added. This guide will walk you through setting up your first GraphQL operation, from schema import to automated testing.

What is GraphQL?

GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, and makes it easier to evolve APIs over time.

More information and detailed examples about GraphQL can be found on the official website.

Importing a GraphQL schema

Using an importer is optional when calling GraphQL APIs. However, a schema importer has a lot of advantages: it provides autocompletion for your requests and scripting types, and it validates your queries.

You have three possibilities to import a GraphQL schema in Kreya:

  • Local File: Import existing .graphql or .gql files directly.
  • Schema URL: Link to a hosted static schema definition.
  • Introspection: Query a running server to dynamically fetch its current schema. This is ideal for rapidly evolving APIs.

Select your preferred option in the Importers tab.

If you don't currently have a schema, you can use schema introspection against our Kreya example GraphQL server: https://example-api.kreya.app/graphql

Creating and sending a GraphQL operation

To create a new GraphQL operation, click the icon in the operations list and select GraphQL. Next, choose a descriptive name for your operation and hit enter.

If you defined a schema importer in the previous step, you can select it from the header. If you only have one importer, it will be selected automatically.

After that, you need to define the endpoint in the Settings tab.

The Kreya example GraphQL endpoint can be used here as well: https://example-api.kreya.app/graphql

Finally, define your query in the Request tab.

query {
  books {
    id
    name
  }
}

This is a classic GraphQL query. It requests a specific list of resources (books) and only the pieces of data (id and name) needed for the UI. This is one of the major advantages of GraphQL: with a REST API, you might hit an endpoint such as /api/books and receive 50 fields (author, publication date, ISBN, etc.), even if you only need the names.

Define variables

Using variables allows you to separate your query logic from your test data. You can specify a variable in the Variables tab at the bottom of the request editor.

{
  "bookId": 3
}

This variable can then be referenced in the query itself using the $ prefix, e.g. $bookId.

query getBook($bookId: Int = 1) {
  book(id: $bookId) {
    id
    name
  }
}

Testing your GraphQL API Pro / Enterprise

Similar to gRPC, REST, and WebSocket operations, the Script tab allows you to perform functional testing on your GraphQL API. Some basic example tests include verifying status codes, response shapes, and specific data values.

import { expect } from 'chai';

kreya.graphql.onQueryCompleted(call => {
  kreya.trace('The GraphQL query completed.');

  kreya.test('Status code', () => expect(call.status.code).to.equal(200));
  kreya.test('status is success', () => expect(call.status.isSuccess).to.be.true);

  kreya.test('Book name', () => expect(call.response.content.data.book.name).to.eq("Harry Potter and the Philosopher's Stone"));
});

More scripting hooks such as kreya.graphql.onMutationCompleted and kreya.graphql.onSubscriptionCompleted can be found in the documentation.
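As a rough illustration, a mutation test can follow the same pattern as the query script above. Note that the exact call object shape for mutations and the addBook field are assumptions here, not taken from the Kreya documentation:

import { expect } from 'chai';

kreya.graphql.onMutationCompleted(call => {
  kreya.test('Mutation succeeded', () => expect(call.status.isSuccess).to.be.true);

  // Hypothetical mutation field: replace "addBook" with whatever your mutation returns
  kreya.test('Created book has an id', () => expect(call.response.content.data.addBook.id).to.be.a('number'));
});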

Another way to test your API is to use snapshot tests. You can enable snapshot testing via the Settings tab.

A detailed description of how to use Snapshot Testing can also be found in the documentation.

Conclusion

That's it! You just sent your first GraphQL operation with Kreya!

Have fun exploring the world of GraphQL! If you have any questions or suggestions on how we can improve its use in Kreya, please contact us or report an issue.

]]>
<![CDATA[gRPC in the browser: gRPC-Web under the hood]]> https://kreya.app/blog/grpc-web-deep-dive/ https://kreya.app/blog/grpc-web-deep-dive/ Mon, 23 Mar 2026 00:00:00 GMT In my previous post we explored the full gRPC protocol stack, from the .proto contract down to the HTTP/2 frame on the wire. We ended with a cliffhanger: browsers cannot speak native gRPC. I promised to cover that gap. This is that post.

We will look at why browsers are fundamentally incompatible with the gRPC HTTP/2 protocol and how gRPC-Web works around those limitations at the byte level. Along the way, we will dig into the Fetch API's streaming internals and close with what is (hopefully) coming in the near future.

Why browsers can't speak native gRPC

To understand the problem, we need to revisit what gRPC actually requires from its transport layer. Two HTTP/2 features are critical, and both are inaccessible to browser JavaScript.

The trailer problem

As we covered in the previous post, gRPC sends its final status not in the HTTP status code but in HTTP/2 trailers, a HEADERS frame sent after all DATA frames. This is what allows a server to stream 1000 records and only then report success or failure.

The Fetch API exposes response.headers, but not trailers. The response.trailers property has been in the WHATWG Fetch specification as a Promise<Headers> for years, and it is still not implemented in any major browser. The reason is not laziness; it is a genuinely hard platform problem. HTTP/1.1 doesn't have trailers at all (at least not in practice), and exposing HTTP/2 trailer semantics through a clean cross-version API turns out to be non-trivial.
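To make the gap concrete, here is roughly what consuming that proposed API would look like if it ever ships. The endpoint path reuses the fruit example from later in this post, and the cast is needed because the property exists in neither today's type definitions nor any browser:

const response = await fetch('/fruit.v1.FruitService/GetFruit', { method: 'POST' });

// Proposed, not implemented in any major browser today: response.trailers is
// specified as a Promise<Headers> that resolves once the trailing HEADERS
// frame has been received.
const trailers = await (response as any).trailers;
console.log(trailers.get('grpc-status'), trailers.get('grpc-message'));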

The framing problem

Even if trailers were available, JavaScript has no mechanism to control raw HTTP/2 framing. The browser's networking layer decides when to open and close HTTP/2 streams, handles flow control, and manages connection multiplexing. Your JavaScript code never touches a single HTTP/2 DATA frame directly.

These two constraints together mean that native gRPC, in its current form, cannot run in a browser without modification.

gRPC-Web: trailers in disguise

The official solution from the gRPC team is gRPC-Web, a protocol adaptation that works around both constraints by moving trailers into the HTTP body. Instead of relying on HEADERS frames that JavaScript can't read, trailers are encoded as a special final message inside the DATA frames that the browser can read perfectly well.

The flag byte upgrade

gRPC-Web reuses the same 5-byte length-prefixed framing as gRPC, but gives a new meaning to the first byte. Where gRPC only uses bit 0 (the compression flag), gRPC-Web reserves bit 7 as a trailer flag:

| Byte | Bits | Purpose |
| --- | --- | --- |
| 0 | 0 | Compression flag: 1 = the payload is compressed |
| 0 | 7 | Trailer flag: 1 = this frame contains trailers, not application data |
| 1-4 | all | Message length: 4-byte big-endian unsigned integer |

So a gRPC-Web response stream consists of:

  • Zero or more data frames (flag byte 0x00 or 0x01 if compressed)
  • Exactly one trailer frame at the end (flag byte 0x80)

The trailer frame on the wire

The trailer frame's payload is a plain block of HTTP header-style key-value pairs, separated by \r\n (CRLF), just like HTTP/1.1 headers.

For a successful completion, the trailer payload looks like this:

grpc-status: 0\r\n
grpc-message: OK\r\n

That is 34 bytes. The complete 5-byte header then becomes 80 00 00 00 22.

Let's visualize a complete successful server-streaming response in gRPC-Web. First a data frame carrying our "Apple" fruit message, then a trailer frame:

--- data frame ---
00 00 00 00 0a 08 96 01 12 05 41 70 70 6c 65
  │           └─ Protobuf payload (10 bytes)
  └───────────── Message length (0xA = 10)
└──────────────── Flag byte: 0x00 (data, uncompressed)
--- trailer frame ---
80 00 00 00 22 67 72 70 63 2d 73 74 61 74 75 73 3a 20 30 0d 0a
  │           67 72 70 63 2d 6d 65 73 73 61 67 65 3a 20 4f 4b 0d 0a
  │           └─ "grpc-status: 0\r\ngrpc-message: OK\r\n" (34 bytes)
  └───────────── Trailer block length (0x22 = 34)
└──────────────── Flag byte: 0x80 (trailer frame)

The JavaScript code reads the response body as a stream of bytes, parses the 5-byte header to determine the frame type and length, then either parses the protobuf payload or parses the trailer key-value pairs.
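As a rough sketch of that last step, a parseTrailers helper (the same name is used in the Fetch example later in this post) only needs to split the CRLF-separated key-value lines. This assumes the trailer block is plain ASCII/UTF-8 text, as described above:

// Minimal trailer-block parser: "key: value" pairs separated by \r\n.
function parseTrailers(payload: Uint8Array): Map<string, string> {
  const text = new TextDecoder().decode(payload);
  const trailers = new Map<string, string>();
  for (const line of text.split('\r\n')) {
    if (!line) continue; // skip the trailing empty segment
    const separator = line.indexOf(':');
    if (separator === -1) continue;
    trailers.set(line.slice(0, separator).trim().toLowerCase(), line.slice(separator + 1).trim());
  }
  return trailers;
}

// parseTrailers(trailerPayload).get('grpc-status') === '0' indicates success.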

Content types

gRPC-Web defines four content types:

| Content-Type | Encoding | Suitable for |
| --- | --- | --- |
| application/grpc-web | Binary | Fetch API |
| application/grpc-web+proto | Binary + protobuf hint | Fetch API |
| application/grpc-web-text | Base64 | XHR (legacy) |
| application/grpc-web-text+proto | Base64 + protobuf hint | XHR (legacy) |

The -text variants exist for compatibility with the older XMLHttpRequest API, which is text-oriented and cannot handle arbitrary binary chunks from a streaming response. With -text, the entire response body is base64-encoded, so XHR can accumulate it as a string and the client decodes it before envelope parsing.

With the modern Fetch API you always want the binary variants.

Streaming limitations

gRPC-Web inherits a hard constraint from HTTP/1.x semantics: the request body must be complete before the server can start responding. This rules out two of the four gRPC streaming models:

| Streaming model | Works in gRPC-Web? | Reason |
| --- | --- | --- |
| Unary | ✅ | Single request, single response |
| Server streaming | ✅ | Single request, streaming response body |
| Client streaming | ❌ | Requires streaming request body |
| Bidirectional streaming | ❌ | Requires simultaneous request/response streams |

Client and bidirectional streaming from a browser are simply not possible with gRPC-Web.

The translation layer

Because a standard gRPC server speaks HTTP/2 with native trailers, something must translate the gRPC-Web envelope into native gRPC and back. There are two ways to provide this:

A proxy: The classic approach is Envoy with the grpc_web filter sitting in front of your gRPC server. The proxy unwraps the gRPC-Web envelope, forwards the call upstream as native gRPC, and re-wraps the response. This works language-agnostically but adds an operational hop.

In-process middleware: Most modern server frameworks handle the translation themselves, with no separate proxy needed:

  • .NET: Grpc.AspNetCore.Web adds a single app.UseGrpcWeb() middleware call. ASP.NET Core detects the application/grpc-web content type and translates on the fly.
  • Go: improbable-eng/grpc-web provides an http.Handler wrapper around any gRPC server. One line of setup, no infrastructure changes.
  • Node.js / Deno: The @grpc/grpc-js package combined with a small grpc-web wrapper handles the translation in-process with no separate infrastructure.

The in-process approach is generally preferred for new services: it keeps the stack simple, eliminates an extra network hop, and co-locates the protocol logic with the service code. The proxy approach remains useful when you don't control the server implementation or need to front many heterogeneous backends.

Troubleshooting gRPC-Web in the browser

gRPC-Web is notoriously difficult to debug with standard browser tooling. The Chrome DevTools Network tab will show you that a gRPC-Web request was made and how many bytes were transferred, but it has no idea what those bytes mean. The response body is raw binary, and there is no built-in way to inspect the envelope framing or decode the protobuf payload.

Several things compound the problem:

  • The trailer is hidden. The trailer frame is just another chunk of response body bytes. DevTools has no hook to surface it as headers, so it appears nowhere in the familiar "Headers" or "Trailers" panel. You might spend a long time wondering why grpc-status is nowhere to be found.
  • The 5-byte envelope is opaque. Even if you export the raw bytes, you must manually strip the 5-byte length header before you can feed the payload into a protobuf decoder. For the -text variants, there is an extra base64 decode step before you even get to the envelope.
  • Error messages are buried. A failed gRPC-Web call typically returns HTTP 200 (the actual status is in the trailer frame). DevTools will show a green 200 response with no obvious sign that anything went wrong. The real error is encoded in the trailer frame payload, which is just more opaque bytes at the tail of the response body.
  • The gRPC-Web proxy adds a hop. If Envoy or in-process middleware is misbehaving, stripping headers, changing content types, or corrupting the envelope, the only way to see this is to capture traffic at both the browser and the backend simultaneously or to debug the middleware itself.

The most practical debugging workflow is to export a HAR file from the browser's Network tab and open it in a tool that understands gRPC-Web. Kreya's HAR import does exactly this: if you have already imported your .proto definitions, Kreya will automatically decode the envelope framing and render the protobuf payloads as human-readable fields, no manual byte stripping required.

The Fetch API and streaming internals

Before closing the gRPC-Web chapter, it is worth understanding what the browser's Fetch API can and cannot do with streaming today, because any gRPC-compatible browser client has to work within these constraints.

Reading a streaming response

The Fetch API exposes the response body as a ReadableStream<Uint8Array>. Reading it incrementally looks like this:

const response = await fetch('/fruit.v1.FruitService/ListFruits', {
  method: 'POST',
  headers: { 'Content-Type': 'application/grpc-web+proto' },
  body: requestBytes,
});

const reader = response.body.getReader();
let buffer = new Uint8Array(0);

while (true) {
  const { done, value } = await reader.read();
  if (done) break;

  // Append new chunk to buffer
  buffer = concat(buffer, value);

  // Parse complete envelopes out of the buffer
  while (buffer.length >= 5) {
    const flags = buffer[0];
    const length = new DataView(buffer.buffer).getUint32(1, false); // big-endian
    if (buffer.length < 5 + length) break; // wait for more data

    const payload = buffer.slice(5, 5 + length);
    buffer = buffer.slice(5 + length);

    if (flags & 0x80) {
      parseTrailers(payload); // trailer frame
    } else {
      dispatchMessage(payload); // data frame
    }
  }
}

The key insight is that chunks from reader.read() do not align with gRPC-Web frames. A single DATA frame from the server may arrive split across multiple read() calls, or several frames may arrive in one chunk. Your buffer management must handle both cases.
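The concat helper used above is not part of any browser API; a minimal version that merges the leftover buffer with the newly received chunk could look like this:

// Merge two byte arrays into one contiguous Uint8Array.
function concat(a: Uint8Array, b: Uint8Array): Uint8Array {
  const result = new Uint8Array(a.length + b.length);
  result.set(a, 0);
  result.set(b, a.length);
  return result;
}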

Streaming request bodies

Reading the response is straightforward, but what about writing? Can we stream a request body from browser JavaScript?

The answer is: yes, but with caveats.

Since Chrome 105, you can pass a ReadableStream as the body of a fetch() request:

const stream = new ReadableStream({
  start(controller) {
    controller.enqueue(encodeMessage(request1));
    controller.enqueue(encodeMessage(request2));
    controller.close();
  },
});

await fetch('/service/Method', {
  method: 'POST',
  headers: { 'Content-Type': 'application/grpc-web+proto' },
  body: stream,
  duplex: 'half', // required!
});

The duplex: 'half' option is the critical detail. It tells the browser that this is a one-way upload: the client is free to stream the body, but the server's response body will not arrive until the client's request body is closed. This maps to HTTP/1.1 semantics, where you finish sending before you start receiving.

What about duplex: 'full'? True bidirectional streaming would require duplex: 'full', where the server can start sending responses while the client is still uploading. This requires the browser to open an HTTP/2 stream and read from it while simultaneously writing to it, exactly what gRPC does natively. The option exists as a proposal tracked in the WHATWG Fetch repository and is experimentally available behind a flag in some Chromium builds, but as of early 2026 it is not shipped in any stable browser. Safari and Firefox do not support it at all yet.

Connect-RPC: designed for the web from the start

gRPC-Web retrofits gRPC onto browsers while preserving full gRPC server compatibility. Connect-RPC (developed by Buf) takes a different philosophy: design a protocol that is idiomatic HTTP from the ground up, works natively in browsers without any proxy, and also interoperates with gRPC.

A Connect-RPC server speaks three protocols simultaneously: the Connect protocol, native gRPC, and gRPC-Web. The client declares which one it wants through the Content-Type header.

Unary calls: just HTTP

The most striking difference in Connect-RPC is how unary calls work. There is no length-prefixed framing for unary requests or responses. The serialized protobuf message is simply the entire HTTP body, nothing more.

A GetFruit request looks like this on the wire:

POST /fruit.v1.FruitService/GetFruit HTTP/1.1
Content-Type: application/connect+proto
Content-Length: 7

<7 bytes of raw protobuf>

And a successful response:

HTTP/1.1 200 OK
Content-Type: application/connect+proto
Content-Length: 10

<10 bytes of raw protobuf>

This is indistinguishable from a well-behaved JSON REST API, just with a different content type. Any HTTP proxy, CDN, or debugging tool understands it immediately. No special handling, no envelope parsing, no proxy required.
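As a minimal sketch (not an official Connect client), calling such an endpoint from the browser needs nothing but fetch. The host, the request fields, and the use of the JSON content type (covered later in this post) are assumptions for illustration:

// Connect unary call with plain fetch; no envelope, no protobuf library needed
// when using the JSON content type. Endpoint and request fields are placeholders.
const response = await fetch('https://localhost:5001/fruit.v1.FruitService/GetFruit', {
  method: 'POST',
  headers: { 'Content-Type': 'application/connect+json' },
  body: JSON.stringify({ name: 'Apple' }), // request message as canonical protobuf JSON
});

if (response.ok) {
  const fruit = await response.json(); // response message, also plain JSON
  console.log(fruit);
}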

Connect unary error handling

Because Connect uses real HTTP status codes for unary errors, the error response is equally transparent:

HTTP/1.1 404 Not Found
Content-Type: application/json

{
  "code": "not_found",
  "message": "fruit Apple not found"
}

Notice the Content-Type: application/json. This is deliberate and unconditional. Even when the client negotiated binary protobuf for the happy path (Content-Type: application/connect+proto), errors are always returned as JSON. This means you can read any Connect error with nothing more than curl or a browser's network inspector, without having to decode binary protobuf just to find out what went wrong. The full set of Connect error codes and their HTTP status mappings is well-defined in the Connect protocol specification. For richer errors, Connect also supports the same details array as gRPC's google.rpc.Status:

{
  "code": "invalid_argument",
  "message": "request validation failed",
  "details": [
    {
      "type": "google.rpc.BadRequest",
      "value": "CiQKBG5hbWUSHG11c3QgYmUgYXQgbGVhc3QgMSBjaGFyYWN0ZXI="
    }
  ]
}
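Continuing the unary fetch sketch from the previous section, handling these errors client-side is plain HTTP code; the field names follow the JSON shapes shown above:

// A non-2xx status means the body is the JSON error object described here.
if (!response.ok) {
  const error = await response.json();
  console.error(`${error.code}: ${error.message}`);
  for (const detail of error.details ?? []) {
    // detail.value is base64-encoded protobuf; decoding it requires the
    // matching message type (e.g. google.rpc.BadRequest).
    console.error('error detail of type', detail.type);
  }
}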

Streaming calls: the end-of-stream message

For streaming calls, Connect uses the same 5-byte envelope framing as gRPC. The compression flag (bit 0) works identically. But instead of a trailer flag (bit 7 like gRPC-Web), Connect uses bit 1 (value 0x02) as an end-of-stream flag:

| Byte | Bits | Purpose |
| --- | --- | --- |
| 0 | 1 | End-of-stream flag: 1 = this is the final message with trailers |
| 0 | 0 | Compression flag: 1 = the payload is compressed |
| 1-4 | all | Message length: 4-byte big-endian unsigned integer |

The end-of-stream message has its payload encoded as JSON, regardless of whether the rest of the stream uses protobuf or JSON encoding. This is intentional: it makes the terminal status easy to inspect with any tool.

For a successful stream, it looks like this:

--- data frame (same as gRPC) ---
00 00 00 00 0a 08 96 01 12 05 41 70 70 6c 65
  │           └─ Protobuf payload (10 bytes)
  └───────────── Message length (0xA = 10)
└──────────────── Flag byte: 0x00 (data, uncompressed)
--- end-of-stream message ---
02 00 00 00 0f 7b 22 6d 65 74 61 64 61 74 61 22 3a 7b 7d 7d
  │           └─ JSON payload: {"metadata":{}} (15 bytes)
  └───────────── End-of-stream message length (0xF = 15)
└──────────────── Flag byte: 0x02 (end-of-stream)

When the stream errors mid-flight, the end-of-stream message carries the error instead of metadata:

{
  "error": {
    "code": "internal",
    "message": "upstream database unavailable"
  }
}

This design has a subtle but important benefit over gRPC-Web's CRLF-delimited trailer block: the end-of-stream message is regular JSON, parseable by any standard JSON library, with a well-typed schema you can evolve over time.
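In client code, this only changes the flag check in the envelope loop from the Fetch section earlier; flags, payload, and dispatchMessage below are the same variables as in that sketch:

// Inside the same envelope-parsing loop as the gRPC-Web example,
// only the flag check changes for Connect streaming.
if (flags & 0x02) {
  // End-of-stream message: always JSON, independent of the stream's encoding.
  const endOfStream = JSON.parse(new TextDecoder().decode(payload));
  if (endOfStream.error) {
    console.error(`stream failed: ${endOfStream.error.code}: ${endOfStream.error.message}`);
  } else {
    console.log('stream completed', endOfStream.metadata ?? {});
  }
} else {
  dispatchMessage(payload); // regular data frame
}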

Protocol comparison

| Feature | Native gRPC | gRPC-Web | Connect (unary) | Connect (streaming) |
| --- | --- | --- | --- | --- |
| Transport | HTTP/2 | HTTP/1.1+ | HTTP/1.1+ | HTTP/1.1+ |
| HTTP/3 (QUIC) support | ✅ (Draft spec) | ✅ | ✅ | ✅ |
| Request framing | 5-byte envelope | 5-byte envelope | No framing | 5-byte envelope |
| Response framing | 5-byte envelope | 5-byte envelope | No framing | 5-byte envelope |
| Trailers | HTTP/2 HEADERS | Body (flag 0x80) | HTTP headers | Body (flag 0x02) |
| Error encoding | grpc-status trailer | Trailer frame | HTTP status + JSON | End-of-stream JSON |
| Browser compatible | ❌ | ✅ | ✅ | ✅ |
| Proxy component required | N/A | ✅ (or in-process middleware) | ❌ | ❌ |
| Client streaming in browser | ❌ | ❌ | N/A (unary) | ✅ (with duplex: 'half') |
| Bidirectional in browser | ❌ | ❌ | N/A | ❌ (pending duplex: 'full') |
| Human-readable errors | ❌ | ❌ | ✅ | ✅ |
| Works with curl | ❌ | ❌ | ✅ | Partially |

Connect's application/connect+json content type brings another benefit: it uses Protobuf's canonical JSON mapping, so every call is both human-readable and testable with curl, without any special client library.

What is coming

The browser networking stack is actively evolving. Several changes in flight will reshape how gRPC-style protocols work in browsers.

WebTransport

WebTransport is a browser API built on QUIC (HTTP/3). It provides multiplexed bidirectional streams and unreliable datagrams, without the head-of-line blocking that plagues TCP.

From the browser's perspective, WebTransport is a lower-level primitive than Fetch. You open streams explicitly, you control when each stream starts and ends, and you can send multiple streams over a single QUIC connection. This maps much more naturally to gRPC's stream model than Fetch does.

The gRPC team has done exploratory work on running gRPC over WebTransport, and experimental implementations exist in some gRPC libraries. No formal specification has been published yet (the closest published doc is G2, gRPC over HTTP/3, which covers straight QUIC transport rather than the WebTransport browser API). The envisioned protocol uses standard gRPC length-prefixed framing inside QUIC streams, with the call path (/package.Service/Method) as the stream header.

Browser availability: Chrome has supported WebTransport since version 97 (late 2021). Firefox support is under active development. Safari support is not yet announced.

Actual production deployment of gRPC-over-WebTransport is still rare as of early 2026, but the foundation is solid and adoption will follow browser support.

Fetch API duplex: 'full'

As mentioned earlier, duplex: 'half' enables streaming request bodies but does not allow the server to respond while the client is still uploading. duplex: 'full' would remove that restriction, enabling true bidirectional streaming over a standard HTTP/2 Fetch request.

The Chrome team has been working on this. There is an explainer in the WHATWG Fetch repository and an active Origin Trial. Once stable, it would allow native bidirectional Connect-RPC streaming over Fetch, without any need for WebTransport.

HTTP/3 and QUIC adoption

HTTP/3 is already widely supported in browsers and increasingly deployed on the server side. While native gRPC was originally defined strictly over HTTP/2, support for gRPC over HTTP/3 has started landing in implementations. For example, the .NET gRPC implementation supports HTTP/3 out of the box in modern versions, allowing native gRPC over QUIC with all the associated benefits.

Because gRPC-Web and Connect-RPC do not have strong ties to HTTP/2-specific framing, they can automatically work with HTTP/3. Their envelope formats are unchanged, they are just bytes in a POST body, traveling equally well over TCP or QUIC. For gRPC-Web apps currently running behind a proxy or CDN, you can enable HTTP/3 for the proxy-to-browser leg immediately without changing a line of client or server code.

The Fetch trailers proposal

The WHATWG Fetch specification has an open proposal for response.trailers returning a Promise<Headers> that resolves when all trailers have been received. If this lands, browsers could eventually consume native gRPC responses without adapting the wire format at all.

This has been blocked mainly because the semantics interact subtly with HTTP/1.1 chunked-encoded trailers (technically defined but almost never used in practice), and because it requires coordinating with the HTTP/2 and HTTP/3 specifications. It remains an open issue and is unlikely to ship in the next one to two years.

Closing

Getting gRPC to work in a browser turns out to require far more engineering than it first appears. Native gRPC is off the table: browsers cannot read HTTP/2 trailers or control raw frames. gRPC-Web solves the trailer problem elegantly by encoding trailers as a special flagged message in the body, but it requires a proxy and locks you out of client and bidirectional streaming. Connect-RPC solves the proxy problem by treating unary calls as plain HTTP and embedding end-of-stream metadata as a JSON message, while remaining fully compatible with gRPC and gRPC-Web on the server side.

WebTransport is the long-term answer for true bidirectional streaming in browsers, and duplex: 'full' for Fetch may bridge the gap before WebTransport reaches universal support.

Tools like Kreya support native gRPC and gRPC-Web out of the box, so you can inspect gRPC-Web trailer frames without writing a single line of parsing code. But now you know exactly what is happening under the hood.


]]>
<![CDATA[Postman vs Kreya]]> https://kreya.app/blog/postman-vs-kreya/ https://kreya.app/blog/postman-vs-kreya/ Mon, 16 Mar 2026 00:00:00 GMT The API development lifecycle has changed. In 2026, the choice of API testing tools is no longer just about sending requests. It's about data ownership, seamless team collaboration, ensuring reliable deployments and deep protocol support.

For years, Postman has been the industry standard, but its shift toward a "cloud-mandatory" ecosystem has caused discontent for developers who value privacy and local workflows.

On the other side of the spectrum sits Kreya, a "privacy-first" desktop client designed to run where your code lives: in your file system and your version control.

In this post, we'll dive into a side-by-side comparison of Postman and Kreya.

Comparison overview

| Feature / topic | Postman | Kreya |
| --- | --- | --- |
| Philosophy | Cloud-first, workspace-centric | Local-first, file-based |
| Account | Mandatory for core features | Optional (license sync only) |
| Data Storage | Proprietary cloud | Local JSON (Git-friendly) |
| Authentication | Strict inheritance (folder-based) | Reusable resource (apply anywhere) |
| Importers | Static / one-time import | Continuous / auto-syncing import |
| Testing | Manual JS assertions (pm.test) | Manual + snapshot testing |
| Offline Mode | Limited / "Scratchpad" only | 100% native & offline |
| Pricing (Solo) | $9 user/month | $5 user/month |
| Pricing (Enterprise) | $49 user/month | $10 user/month |

Feature bloat

Postman supports a lot of features. Really, lots of features. In fact, probably the main criticism of Postman is that it is very bloated, slow and "enterprisey".

Kreya tries to maintain its simplicity by limiting its feature set and only releasing fully thought-out features.

Required account

Postman has shifted toward a cloud-first model where an account is required for core features like collection management. While a "lightweight Scratchpad" version exists, most modern features (e.g. adding requests to a collection, basic collection management) require a Postman account.

Postman Screenshot of limited functionality without Account

You are often pressured or forced to sign in, which can be a hurdle for developers who just want to test an endpoint quickly.

Postman screenshot of sign in prompt after trying to save

In contrast, Kreya follows a "no-account-required" philosophy. It remains fully functional out-of-the-box, requiring a login only for license synchronization, ensuring work can begin the moment the app is opened.

Kreya screenshot of sending a POST request

Data storage and sharing

Postman stores data primarily in its own cloud. While this makes syncing easy, it creates vendor lock-in. Sharing usually happens through Postman Workspaces, which have required a paid tier since March 2026 and often require moving your sensitive API data onto Postman's servers (e.g. API keys stored in environment variables can end up in their cloud by accident). The heavy cloud dependency may conflict with strict corporate security policies.

While it is possible to export Postman collections to your disk and then share them via Git, this process is not inherently simple, and additional context data, such as environment variables, is missing. Furthermore, not all collections can be exported this way (e.g. gRPC collection export is not supported yet).

Kreya uses a local-first, file-based storage system. Your projects are just a folder of JSON and configuration files on your disk. This means you can share your Kreya project the same way you share code: via Version Control Systems like Git. No proprietary cloud is required, your existing CI/CD and version control systems are your sharing platform.

Kreya project file system screenshot

Offline work

Because of its heavy reliance on cloud synchronization, Postman's offline experience can be clunky. If you lose your connection, you may lose access to certain workspace features or you may find that the current values for variables don't sync as expected when you get back online.

Kreya is designed as a native desktop application with local storage, so it is 100% offline-capable. Your data never leaves your local machine unless you explicitly push it to your own repository.

Authentication

Postman uses an inheritance model to apply authentication. Authentication is tied to a specific request or folder, often forcing you into rigid hierarchies just to share a single token.

Postman screenshot of applying an auth configuration

Kreya treats authentication as a reusable resource. You create an Auth configuration once and apply it to any request or directory, regardless of where it lives in your project.

Kreya screenshot of applying an auth configuration

Importers

Postman primarily supports static imports. You upload a file or paste a link, and Postman creates a copy of that data in its cloud. If your OpenAPI definition or gRPC proto file changes, you have to re-import to keep your collection in sync.

Postman screenshot of importing a file

Kreya uses continuous importers. Instead of just copying data, you link your project directly to a source, like a local folder of .proto files or a remote OpenAPI URL. Kreya monitors these sources and automatically updates your requests whenever the underlying schema changes, ensuring your project is never out of date.

Kreya screenshot of a configured continuous importer

Scripting and snapshot assertions

Testing in Postman usually requires writing manual pm.test assertions for every single field you want to check. If your API response has multiple fields, you're writing a lot of repetitive code.

Postman screenshot of testing a response with scripting assert

Kreya fully supports traditional scripting assertions for those who need them:

Kreya screenshot of testing a response with scripting assert

But Kreya also introduces an alternative approach: Snapshot testing ("Golden Master" testing). Instead of writing multiple lines of code, you simply "save" a known-good response as a snapshot. Kreya detects and highlights regressions in future runs, eliminating the need to write manual test code for every individual field.

Kreya screenshot of testing with snapshot assertions

Modern protocol support

Postman remains a REST-first powerhouse, though its support for modern standards often trails the industry. For instance, it only added HTTP/2 support in late 2024, nearly a decade after the protocol's release. This "retrofitted" approach can make gRPC and GraphQL feel like secondary layers within a heavy, cloud-locked UI.

In contrast, Kreya is built for the modern edge. It is frequently one step ahead of Postman and offers deep protocol support, including native HTTP/3 and seamless streaming (e.g. gRPC bidirectional streaming, Server-Sent Events, streamed responses). If you are working on the absolute bleeding edge of modern protocols, Kreya is usually the more specialized tool.

Pricing

Postman's pricing has become increasingly complex, with "Professional" and "Enterprise" tiers that can be quite expensive for small teams. Following changes in March 2026, free team plans are no longer supported, meaning collaboration now requires a paid subscription. Additionally, many features are now metered via a consumption model, where users may face overage charges for AI credits, monitoring, or mock server usage.

Kreya maintains a simpler, more predictable structure with a functional free tier and transparent pricing for teams needing advanced features like snapshot testing or scripting.

Conclusion

The shift we're seeing in 2026 isn't just about features; it's about the philosophy of development. Postman is evolving into an all-in-one API Governance platform, which brings power but also significant overhead and "vendor lock-in".

Kreya succeeds by doing the opposite: it stays out of your way. By leveraging file-based storage and native protocol support, it turns your API collections into first-class citizens of your codebase. If your team prioritizes CI/CD integration and data privacy over a proprietary, login-protected cloud platform, Kreya is the better tool for the job.

]]>
<![CDATA[Virtual Scrolling: Rendering millions of messages without lag]]> https://kreya.app/blog/using-virtual-scrolling/ https://kreya.app/blog/using-virtual-scrolling/ Wed, 11 Mar 2026 00:00:00 GMT Displaying a large number of messages in a list can destroy the performance of an application. The naive approach would be to render each message as a DOM element (in a webpage context). But this quickly degrades performance, since the browser spends all its time laying out these elements and repainting them during scrolling, while holding the huge list of DOM nodes in memory.

Chat apps, log message visualizers and various other applications all face this problem. Today, we are going to take a look at how we solved this in Kreya.

The problem

Let's visualize the problem. Imagine we have a browser viewport with a height of 800px and the user scrolled down a bit. Each message takes up around 50px in height, so we display around 16 messages in the viewport:

[ Message container (height: 5,000,000px) ]
+--------------------------------------------------------------------+
| <div id="msg-1"> ... </div> |
| <div id="msg-2"> ... </div> |
| ... |
| <div id="msg-9"> ... </div> (Off-screen) |
| |
| [ Browser viewport (height: 800px) = visible area ] |
| +--------------------------------------------------------------+ |
| | <div id="msg-10"> ... </div> | |
| | <div id="msg-11"> ... </div> | |
| | <div id="msg-12"> ... </div> | |
| | ... | |
| | <div id="msg-25"> ... </div> | |
| +--------------------------------------------------------------+ |
| |
| <div id="msg-26"> ... </div> (Off-screen) |
| <div id="msg-27"> ... </div> |
| ... [ 99,970 more DOM nodes taking up memory ] ... |
| <div id="msg-100000"> </div> |
+--------------------------------------------------------------------+

This is VERY slow. Each message creates one node in the DOM, which takes up memory and slows down the performance drastically. When scrolling, all the DOM nodes have to be recalculated, which creates UI freezes with that many elements.

So how do we solve this?

Solution 1: Do not display that many messages

One approach would be to simply not display thousands of messages. Realistically, who is going to view them all? In my opinion, this should be the first course of action to check. Maybe replace the thousands of items with a search and only display the first 100 results? This is actually the approach we usually take in Kreya, for example in our API endpoint selection:

Here, we do not display all endpoints, as it can get pretty slow if a project contains hundreds or thousands of them. And a user is not going to scroll through them all to select one. Instead, we provide a search to narrow down the selection.

Solution 2: Virtual scrolling

But in other cases, like our response view, we really want to have a list with all available responses. Often, there is only a single response returned by the server. But with gRPC, WebSockets or Server-Sent Events, there is a possibility that many thousands of responses will be received, and Kreya must handle that use case perfectly.

We achieved this by using virtual scrolling:

[ Message container (height: 5,000,000px) ]
+--------------------------------------------------------------------+
| ↕ [ Empty scroll space (height: 7 * 50px = 350px) ] |
| <div id="msg-8"> ... </div> (Off-screen) |
| <div id="msg-9"> ... </div> (Off-screen) |
| |
| [ Browser viewport (height: 800px) = visible area ] |
| +--------------------------------------------------------------+ |
| | <div id="msg-10"> ... </div> | |
| | <div id="msg-11"> ... </div> | |
| | <div id="msg-12"> ... </div> | |
| | ... | |
| | <div id="msg-25"> ... </div> | |
| +--------------------------------------------------------------+ |
| |
| <div id="msg-26"> ... </div> (Off-screen) |
| <div id="msg-27"> ... </div> (Off-screen) |
| ↕ [ Empty scroll space (height: 99,973 * 50px = 4,998,650px) ] |
+--------------------------------------------------------------------+

With virtual scrolling, we only create DOM elements for items currently visible in the viewport (plus some more as a small buffer for smooth transitions). This ensures that our resource usage stays low. To make the native browser scrollbar work, the actual height of the container has to stay the same as if it contained all items (5,000,000px in our example). Then, we use offsets or empty divs to place the actual content at the correct position.

And voilà:

An animation showing the virtual scrolling with Kreya gRPC responses.

Deep dive into virtual scrolling

Virtual scrolling as a concept is pretty simple: Given the scroll offset from the top, calculate the items to display. But as always, the devil's in the details and there are a lot of gotchas to consider.

The most important point is that items inside a virtual scroll list need a known height in pixels. You cannot render dynamically sized content, for example a paragraph that scales its height with text content, inside a virtual scroll list. We need to know each item's size to be able to calculate the item offsets and placements. Virtual scrolling is easiest when each item has the same static height.

The math

Let's take a deeper look at the math involved. To make things simpler, we assume a universal, static item height of ITEM_HEIGHT_IN_PX = 50.

As described above, the scroll container height should be the total height of all items:

setScrollContainerHeight(totalItemCount: number, scrollContainer: HTMLElement): void {
  const height = ITEM_HEIGHT_IN_PX * totalItemCount;
  scrollContainer.style.height = height + 'px';
}

Virtual scrolling works by listening to scroll events of the container and rendering items based on the current scroll offset. We need to find the index of the message that the user scrolled to:

findIndexAtScrollOffset(scrollOffset: number): number {
  // This will return
  // 0px => 0
  // 49px => 0
  // 50px => 1
  // 20038px => 400
  return Math.floor(scrollOffset / ITEM_HEIGHT_IN_PX);
}

// The reverse calculation, needed later on
calculateScrollOffsetForIndex(index: number): number {
  return index * ITEM_HEIGHT_IN_PX;
}

Of course we do not want to render only a single item, but all visible items in the viewport. Additionally, we add some more items above and below as a buffer so that fast scrolling does not reveal empty space.

const BUFFER_ITEM_COUNT = 20;

calculateItemRangeForIndex(index: number, totalItemCount: number, viewportHeight: number): { indexStart: number, indexEnd: number } {
  // This is not entirely accurate, but rendering one item too many does not hurt much
  const additionalItemsAboveOrBelow = Math.ceil(viewportHeight / ITEM_HEIGHT_IN_PX / 2) + BUFFER_ITEM_COUNT;
  const indexStart = Math.max(0, index - additionalItemsAboveOrBelow);
  const indexEnd = Math.min(totalItemCount - 1, index + additionalItemsAboveOrBelow);
  return { indexStart, indexEnd };
}

Putting it together, a naive implementation would look something like this:

updateVirtualScrollContent(scrollOffset: number, totalItemCount: number, viewportHeight: number): void {
  const itemIndex = findIndexAtScrollOffset(scrollOffset);
  const { indexStart, indexEnd } = calculateItemRangeForIndex(itemIndex, totalItemCount, viewportHeight);

  // Left as an exercise for the reader (depends on the used framework etc.)
  renderItems(indexStart, indexEnd);

  // Calculate the offset of the first DOM node (everything above is empty space to render the scrollbar correctly)
  const contentOffset = calculateScrollOffsetForIndex(indexStart);

  // Left as an exercise for the reader (depends on the used framework etc.)
  setContentOffset(contentOffset);
}

This is a pretty basic implementation and has room for a lot of optimizations:

  • Throttle the method call when calling it from a scroll event listener. Otherwise you are constantly re-rendering the items even though the user only scrolled a few pixels. A good option is to use requestAnimationFrame (see the sketch after this list).
  • Remember the last rendered range. Inside the update method, check whether the buffer still has "enough" items for the current offset. This allows you to skip doing any work, for example when the user only scrolled one item down and you still have 19 more items rendered as your buffer.
  • If the number of items in the list can change, remember to update the total size of the scroll container.
  • Implement "jump to index" if your application needs it.
  • If it is possible to remove items, you need extra safeguards. For example, when the user has scrolled to the end of the list and removes items, you need to jump back to valid item indexes.
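As a minimal sketch of the requestAnimationFrame throttling mentioned in the first point, assuming scrollContainer, totalItemCount, and viewportHeight are available in scope:

// Coalesce scroll events so the virtual list is re-rendered at most once per frame.
let updateScheduled = false;

scrollContainer.addEventListener('scroll', () => {
  if (updateScheduled) return;
  updateScheduled = true;

  requestAnimationFrame(() => {
    updateScheduled = false;
    updateVirtualScrollContent(scrollContainer.scrollTop, totalItemCount, viewportHeight);
  });
});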

The bigger problem: Virtual scrolling with accordions

In Kreya, we have a bigger challenge. Not only do we need virtual scrolling, we also want to render accordions as our items. Accordions have a collapsed and expanded state, with each state taking up a different height:

  • Collapsed state: Small height (48px) showing just the header
  • Expanded state: Large height (304px) showing the header and the content

The states have a difference of 256px, which we need to account for in every calculation. To be able to do that, we need to track which indexes contain an expanded accordion:

const expandedIndexes = new Set<number>();

countExpandedIndexesInRange(endIndexInclusive: number): number {
  let count = 0;
  for (const index of expandedIndexes) {
    if (index <= endIndexInclusive) {
      count++;
    }
  }

  return count;
}

And our findIndexAtScrollOffset method needs to be modified as well:

findIndexAtScrollOffset(scrollOffset: number): number {
  // If all items are collapsed, this would be the index at the offset
  // However, if some items above are expanded, the index is smaller than this one
  let index = Math.floor(scrollOffset / COLLAPSED_ITEM_HEIGHT_IN_PX);
  const expandedCount = countExpandedIndexesInRange(index);
  if (expandedCount === 0) {
    return index;
  }

  // Calculate the offset that the index (including its own content) would have
  let sizeForIndex = calculateSizeForIndex(index);

  while (true) {
    const itemSize = expandedIndexes.has(index) ? EXPANDED_ITEM_HEIGHT_IN_PX : COLLAPSED_ITEM_HEIGHT_IN_PX;
    const sizeForPreviousIndex = sizeForIndex - itemSize;

    // The previous item wouldn't be visible, so the current item is the one we are looking for
    if (sizeForPreviousIndex <= scrollOffset) {
      return index;
    }

    // The previous item would be visible, continue with that
    index--;
    sizeForIndex -= itemSize;
  }
}

// Calculates the size the whole list would take up including the specified index
calculateSizeForIndex(index: number): number {
  const itemCount = index + 1;
  const expandedCount = countExpandedIndexesInRange(index);
  return (itemCount - expandedCount) * COLLAPSED_ITEM_HEIGHT_IN_PX + expandedCount * EXPANDED_ITEM_HEIGHT_IN_PX;
}

// Calculates the size the whole list would take up excluding the size of the specified item at the index
calculateOffsetForIndex(index: number): number {
  return index === 0 ? 0 : calculateSizeForIndex(index - 1);
}

As you can see, things get much more complicated. We cannot calculate the index from the scroll offset directly anymore, since we do not know how many expanded accordions are present in that range.

Here is how it looks in Kreya:

An animation showing the virtual scrolling with accordions.

Conclusion

Virtual scrolling is essential when displaying thousands or millions of items. But this only works when the size of the items is known beforehand. Using content that changes in size, such as accordions, is possible, but harder.

]]>
<![CDATA[Kreya 1.19 and 1.19.1 - What's New]]> https://kreya.app/blog/kreya-1.19-whats-new/ https://kreya.app/blog/kreya-1.19-whats-new/ Wed, 25 Feb 2026 00:00:00 GMT Kreya 1.19 is here, and in the meantime we have already released 1.19.1. Together they bring GraphQL, OAuth 2.0 Device Flow, and a set of polish improvements across the CLI and UI.

GraphQL support

GraphQL operations allow you to execute queries, mutations, and subscriptions against a GraphQL API. Add a new operation by clicking the icon. Next, choose a descriptive name for your operation and hit enter.

An animation showcasing creating a GraphQL operation

You can also enter variables for your query in the "Variables" tab in JSON format. These variables can be used in your query with the $ prefix (e.g. $bookId).

If you don't have a GraphQL API yet, you can experiment with some GraphQL operations in our example project, which you can pull from GitHub.

OAuth 2.0 Device Flow

We have added support for the OAuth 2.0 Device Authorization Grant (formerly known as the Device Flow) that enables devices with no browser or limited input capability to obtain an access token. To use this, create a new authentication in Project > Authentication and select the Grant type Device code.

You can then choose this new authentication for an operation. Pressing the Update button will start the Device Flow.

An animation showcasing using the new device flow auth

UI improvements

Recent projects are searchable

You can now search for recent projects in the launch window. Start typing the name of the project you are looking for, and a list of matching projects will appear.

An animation showcasing searching for recent projects

Create a new project

You can now create a new project directly from the application menu. Select Kreya > New project... and enter the necessary project information.

An animation showcasing creating a new project

Importer type selection

The process of selecting an importer type has been simplified. Choose your type at the top of the page and enter the required importer information.

An animation showcasing selecting the importer type

Operation actions

We have moved all operation-related actions into the operation header. Actions such as 'Change gRPC method', 'Reset operation' and 'History' can now be found there.

An animation showcasing using the new operation actions

Inline create operation

We have optimized the process of creating an operation in Kreya. First, you enter the name, and then you can select the gRPC method in the operation header.

An animation showcasing creating a gRPC operation inline

The same applies to REST operations if you have imported API definitions. You can also select a template in the operation header.

An animation showcasing creating a REST operation inline

Insomnia collection v5 import support

We have added support for importing Insomnia collection v5 files. In the application menu, select Kreya > Import, then select Insomnia collection (v5) and your file.

An animation showcasing importing an Insomnia collection v5

And more

There are other notable improvements:

  • gRPC v1 reflection support in addition to v1alpha
  • Session cookies support
  • Simplified test scripting

Kreya 1.19.1

Version 1.19.1 focuses on polish and quality-of-life improvements across the CLI, UI, and platform integrations.

CLI

  • Added the --relative-to option for CLI path resolution relative to the project or current working directory. See relative path resolution.
  • Added the filename to the JUnit reporter output
  • Automatically detect Kreya projects in parent directories
  • Show correct default values in the CLI help output

Kreya UI

  • gRPC: Improved fallback to the v1alpha server reflection importer
  • Added a Copy as kreyac option to the operation context menu to copy the operation as a kreyac command.
  • Added quick actions to open settings tabs, clear all cookies and clear all user variables
  • Added a word wrap option to all editors with horizontal scrolling

Release notes and feedback

For a full list of new features and bugfixes, see the release notes.

If you have feedback or questions, please contact us or report an issue.

Stay tuned! 🚀

]]>
<![CDATA[Calling Auth0 Secured APIs with Kreya]]> https://kreya.app/blog/calling-auth0-secured-apis/ https://kreya.app/blog/calling-auth0-secured-apis/ Mon, 16 Feb 2026 00:00:00 GMT Securing your APIs with Auth0 is a smart move for production, but it often adds extra steps to the development and testing cycle. No one wants to spend their afternoon manually swapping expired tokens.

In this post, we'll show you how to use Kreya to automate the entire Auth0 handshake, allowing you to call secured endpoints effortlessly.

Setting up authentication in Kreya

One of the major benefits of Kreya is that authentication is built into the core of the app. Instead of manually adding headers to every request, you define an authentication configuration once and reuse it throughout your project.

Navigate in Kreya to the Project menu and select Authentications (or use the shortcut Ctrl++a).

The configuration in Kreya depends on the type of Auth0 Application you are using.

Get your Auth0 credentials

To configure the authentication in Kreya, you need to gather specific values from the Auth0 dashboard. You need values from your Application settings (who is asking for the token) and your API settings (what the token is for).

For the Application values, navigate in the Auth0 dashboard to Applications > Applications in the left-hand menu and select the application. This applies to all applications regardless of the type.

Auth0 M2M Application Settings

To get the audience value for your auth configuration, navigate in the Auth0 dashboard to Applications > APIs and select your API.

Auth0 API Settings

Machine to machine application (M2M)

This is used if your API is called by another backend service or a background worker. No user login is required with this type of application.

| Property | Value |
| --- | --- |
| Type | OAuth2 / OpenID-Connect |
| Grant type | Client credentials |
| Issuer | #AUTH0_APPLICATION_DOMAIN (can be found in the Auth0 application settings) |
| Client Authorize Method | Basic |
| Client-ID | #AUTH0_APPLICATION_CLIENT_ID (can be found in the Auth0 application settings) |
| Client-Secret | #AUTH0_APPLICATION_CLIENT_SECRET (can be found in the Auth0 application settings) |
| Token-Type to authorize on APIs | Access-Token |
| Additional token parameters | Key = audience, Value = #AUTH0_API_IDENTIFIER (can be found in the Auth0 API settings) |

It should look similar to this:

Kreya Auth0 Backend API Config
note

You may have noticed that we use the environment variable {{ env.backend.clientSecret }} instead of inserting a raw value for the client secret.

This is a security best practice that prevents sensitive credentials from being accidentally committed to version control or shared with team members who shouldn't have access to it.

Single-page application

If you are developing a web frontend built with Angular or React, you likely have a single-page application (SPA) set up in Auth0. SPAs are considered public clients because they cannot securely store a client secret.

By using the Authorization Code flow, Kreya allows you to simulate a user's login experience exactly as it would happen in your web app.

| Property | Value |
| --- | --- |
| Type | OAuth2 / OpenID-Connect |
| Grant type | Authorization code |
| Issuer | #AUTH0_APPLICATION_DOMAIN (can be found in the Auth0 application settings) |
| Client Authorize Method | None |
| Client-ID | #AUTH0_APPLICATION_CLIENT_ID (can be found in the Auth0 application settings) |
| Use native browser | Check (optional) |
| Scope | Example: openid profile email |
| Token-Type to authorize on APIs | Access-Token |
| Additional login parameters | Key = audience, Value = #AUTH0_API_IDENTIFIER (can be found in the Auth0 API settings) |

Kreya Auth0 SPA Config
note

We use the native browser to handle the login redirect. When Kreya attempts to authenticate, it opens your native browser (where you have access to locally stored credentials).

info

On redirect flows, ensure that the Kreya Redirect-URI is added in the Allowed Callback URLs in the corresponding Auth0 application.

Native application

If you are building a mobile or desktop application, you likely have a native application set up in Auth0 which uses the Authorization Code with PKCE flow.

In Kreya, the configuration for a native application is identical to the SPA setup. Kreya automatically manages the PKCE handshake under the hood.

| Property | Value |
| --- | --- |
| Type | OAuth2 / OpenID-Connect |
| Grant type | Authorization code |
| Issuer | #AUTH0_APPLICATION_DOMAIN (can be found in the Auth0 application settings) |
| Client Authorize Method | None |
| Client-ID | #AUTH0_APPLICATION_CLIENT_ID (can be found in the Auth0 application settings) |
| Use native browser | Check (optional) |
| Scope | Example: openid profile email |
| Token-Type to authorize on APIs | Access-Token |
| Additional login parameters | Key = audience, Value = #AUTH0_API_IDENTIFIER (can be found in the Auth0 API settings) |

Regular web application

Unlike SPAs or native apps, a regular web application is considered a confidential client because it runs on a server you control, allowing it to securely store a client secret.

In Kreya, this setup uses the Authorization Code flow. While it looks similar to the SPA setup, we need to provide the client secret and change the Client Authorize Method. This ensures that Kreya authenticates itself to Auth0 using your application's credentials during the token exchange.

| Property | Value |
| --- | --- |
| Type | OAuth2 / OpenID-Connect |
| Grant type | Authorization code |
| Issuer | #AUTH0_APPLICATION_DOMAIN (can be found in the Auth0 application settings) |
| Client Authorize Method | Basic |
| Client-ID | #AUTH0_APPLICATION_CLIENT_ID (can be found in the Auth0 application settings) |
| Client-Secret | #AUTH0_APPLICATION_CLIENT_SECRET (can be found in the Auth0 application settings) |
| Use native browser | Check (optional) |
| Scope | Example: openid profile email |
| Token-Type to authorize on APIs | Access-Token |
| Additional login parameters | Key = audience, Value = #AUTH0_API_IDENTIFIER (can be found in the Auth0 API settings) |

Kreya Auth0 Regular Web App Config

Invoking the request

To use an authentication configuration in Kreya, go to the Auth tab and select the configuration.

Kreya Auth Configuration Selection

To explicitly fetch a new access token, click the Update button. If no token is present, Kreya fetches one when the request is sent.

Kreya Auth Configuration Refresh

Depending on your authentication configuration, it will directly fetch the token or open your browser, where you can login with an authorized user.

If the retrieval is successful, Kreya displays the JWT and its expiry date directly in the UI. You don't need to re-authenticate for every request: Kreya caches the token and automatically includes it in each request.

You can click the icon to see the current JWT claims:

Kreya JWT Claims

In this example, we are calling a local API, which reads all claims of the authenticated user and returns them.

Kreya Dotnet API Response

Pro tip: Set the auth per directory settings

Instead of manually assigning the authentication configuration to every single request, you can use Directory settings.

By setting the authentication at the directory level, all requests within that directory and its subdirectories will automatically inherit those credentials.

Kreya Auth Directory Settings

Conclusion

Testing secured APIs doesn't have to be a manual burden. By integrating Auth0 with Kreya, you can automate the complex OAuth2 handshakes and focus on what actually matters.

Whether you are working with machine-to-machine tasks or simulating complex single-page application user flows, Kreya's inheritance-based settings and automated token management make your development workflow faster and more secure.

Ready to try it out? Download Kreya today and start testing your secured endpoints.

]]>
<![CDATA[gRPC deep dive: from service definition to wire format]]> https://kreya.app/blog/grpc-deep-dive/ https://kreya.app/blog/grpc-deep-dive/ Mon, 09 Feb 2026 00:00:00 GMT In our previous posts (part 1 and part 2), we demystified Protocol Buffers and learned how data is encoded into compact binary.

But Protobuf is just the payload. To send this data between microservices, we need a transport protocol. Enter gRPC.

While many developers use gRPC daily, few look under the hood to see how it actually works. In this post, we’ll go beyond the basics and explore the full gRPC protocol stack: from the high-level service architecture and streaming models down to the low-level HTTP/2 framing and byte-level wire format.

The contract-first philosophy

At the heart of gRPC lies the contract-first approach. Unlike REST, where API documentation (like OpenAPI) is often an afterthought, gRPC enforces the structure upfront using Protocol Buffers (.proto files).

This contract defines not just the data structures (Messages), but the service capabilities (RPCs):

package fruit.v1;

service FruitService {
// Unary: simple request -> response
rpc GetFruit(GetFruitRequest) returns (Fruit);

// Server streaming: one request -> many responses
rpc ListFruits(ListFruitsRequest) returns (stream Fruit);

// Client streaming: many requests -> one response
rpc Upload(stream Fruit) returns (UploadSummary);

// Bidirectional streaming: many requests <-> many responses
rpc Chat(stream ChatMessage) returns (stream ChatMessage);
}

This definition is the source of truth. From this single file, the protobuf compiler (protoc) generates client stubs and server boilerplate in almost any language (Go, Java, C#, Python, etc.), ensuring that the client and server always agree on the API shape.

Streaming models

One of the biggest differentiators of gRPC is its native support for streaming. This isn't just "chunked transfer encoding"; it's first-class API semantics.

  • Unary: Looks like a standard function call or REST request. The client sends one message, the server sends one back.
  • Server streaming: Perfect for subscriptions or large datasets. The client sends a query, and the server returns multiple results over time.
  • Client streaming: Useful for sending a stream of data (like telemetry from an IoT device) where the server processes messages as they arrive.
  • Bidirectional streaming: True real-time communication. Both sides can send messages independently. This is often used for chat apps or multiplayer games.

Metadata

In addition to the actual data, gRPC allows sending metadata. Metadata is a list of key-value pairs (like HTTP headers) that provide information about the call. Keys are strings, and values are typically strings, but can also be binary data. The key names are case-insensitive and must not start with grpc- (reserved for gRPC internals). The keys of binary values must end with -bin.

Metadata is essential for cross-cutting concerns that shouldn't be part of the business logic payload:

  • Authentication: Usage of Bearer tokens (e.g., Authorization: Bearer <token>).
  • Tracing: Passing trace IDs (e.g., transport-id: 12345) to track requests across microservices.
  • Infrastructure: Hints for load balancers or proxies.

Metadata can be sent by both the client (at the start of the call) and the server (at the start as headers, or at the end as trailers).
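As a small illustration, here is how metadata might be attached from a Node client (a sketch assuming the @grpc/grpc-js library; the key names and values are made up):

import { Metadata } from '@grpc/grpc-js';

const metadata = new Metadata();

// String values are sent as regular HTTP/2 headers.
metadata.set('authorization', 'Bearer eyJhbGciOi...');

// Binary values need a key ending in "-bin"; the library base64-encodes them on the wire.
metadata.set('trace-context-bin', Buffer.from([0x01, 0x02, 0x03]));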

Under the hood: the transport layer

So how does this contract map to the network? gRPC is built on top of HTTP/2, leveraging its advanced features to make these streaming models possible.

The most important concept is streams. Every gRPC call, whether it's a simple unary request or a long-lived bidirectional stream, is mapped to a single HTTP/2 stream. This allows multiplexing: you can have thousands of active gRPC calls on a single TCP connection, with their frames interleaved. This prevents opening thousands of connections that would be needed with HTTP/1.1. While it solves the HTTP/1.1 "head-of-line blocking" issue, TCP-level blocking remains a concern if packets are lost.

Constructing the URL

Before we send any bytes, we need to address the resource. In gRPC, the URL is generated automatically from the .proto definition: /{Package}.{Service}/{Method}.

For GetFruit, the path becomes: /fruit.v1.FruitService/GetFruit

This standardization means clients and servers never argue about URL paths.

The HTTP/2 frames

A gRPC call typically consists of three stages, each mapping to HTTP/2 frames:

  1. Request headers and metadata (HEADERS frame): contains metadata like :path, :method (POST), and content-type (application/grpc).
  2. Data messages (DATA frames): the actual application data.
  3. Response trailers (HEADERS frame): the final status of the call.

Metadata on the wire

Since gRPC is built on HTTP/2, metadata is simply mapped to HTTP/2 headers. String values are sent as-is (e.g. user-agent: grpc-kreya/1.18.0).

Binary values are base64-encoded and the key must end with -bin. Libraries usually handle this encoding/decoding transparently.

The length-prefixed message

Inside the HTTP/2 DATA frame, gRPC wraps your protobuf message with a mechanism called length-prefixed framing. Even in a streaming call, every single message is independent and prefixed with a 5-byte header:

Byte | Purpose | Description
0 | Compression Flag | 0 = Uncompressed, 1 = Compressed
1-4 | Message Length | 4-byte big-endian integer indicating the size of the payload

Visualizing the bytes

Let's reuse the fruit message from our previous post

weight: 150
name: 'Apple'

which encodes to 10 bytes of protobuf data: 08 96 01 12 05 41 70 70 6c 65.

When sending this over gRPC, we prepend the header:

  • Compression: 0 (no compression)
  • Length: 10 (0x0A)

The final 15-byte gRPC message looks like this:

00 00 00 00 0a 08 96 01 12 05 41 70 70 6c 65
│  │           └─ The protobuf payload (10 bytes)
│  └───────────── Payload message length (0xA = 10 bytes)
└──────────────── Compression flag (0 = false)

This simple framing allows the receiver to read exactly the right number of bytes for the next message, decode it, and repeat, enabling fluid streaming.
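To make the framing tangible, here is a small sketch in TypeScript that frames and reads length-prefixed messages (using Node's Buffer API; in practice the gRPC library does this for you):

function frameMessage(payload: Buffer, compressed = false): Buffer {
  const header = Buffer.alloc(5);
  header.writeUInt8(compressed ? 1 : 0, 0); // byte 0: compression flag
  header.writeUInt32BE(payload.length, 1);  // bytes 1-4: big-endian payload length
  return Buffer.concat([header, payload]);
}

function* readMessages(data: Buffer) {
  let offset = 0;
  while (offset + 5 <= data.length) {
    const compressed = data.readUInt8(offset) === 1;
    const length = data.readUInt32BE(offset + 1);
    yield { compressed, payload: data.subarray(offset + 5, offset + 5 + length) };
    offset += 5 + length;
  }
}

// The 10-byte "Apple" payload from above becomes the 15-byte gRPC message shown earlier.
const apple = Buffer.from('08960112054170706c65', 'hex');
console.log(frameMessage(apple).toString('hex')); // 000000000a08960112054170706c65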

Status and trailers

In REST, you check the HTTP status code (200, 404, 500). In gRPC, the HTTP status is almost always 200 OK, even if the logic failed!

The actual application status is sent in the trailers (the very last HTTP/2 header frame). This separation is crucial: it allows a server to successfully stream 100 items and then report an error on the 101st processing step.

A typical trailer block looks like this:

grpc-status: 0
grpc-message: OK

(Status 0 is OK. Non-zero values represent errors like NOT_FOUND, UNAVAILABLE, etc.)

Rich errors

Sometimes, a simple status code and a string message aren't enough. You might want to return validation errors for specific fields or other error details. The rich error model (specifically google.rpc.Status) solves this.

Instead of just grpc-status and grpc-message, the server returns a detailed protobuf message serialized as base64 into the grpc-status-details-bin trailer. This standard message contains:

  1. Code: The gRPC status code.
  2. Message: The developer-facing error message.
  3. Details: A list of google.protobuf.Any messages containing arbitrary error details (e.g., BadRequest, PreconditionFailure, DebugInfo).

message Status {
// The gRPC status code (3=INVALID_ARGUMENT, 5=NOT_FOUND, etc.)
int32 code = 1;

// The error message
string message = 2;

// A list of extra error details (any custom protobuf message, e.g. validation error details)
repeated google.protobuf.Any details = 3;
}

Clients can decode this trailer to get structured, actionable error information.
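In a Node client this could look roughly like the following (a sketch assuming @grpc/grpc-js; how you decode google.rpc.Status depends on your protobuf toolchain, so the decoder in the commented-out part is hypothetical):

import type { ServiceError } from '@grpc/grpc-js';

function extractStatusDetails(error: ServiceError): Buffer | undefined {
  // grpc-js already exposes "-bin" trailer values as Buffers (decoded from base64).
  const values = error.metadata.get('grpc-status-details-bin');
  return values.length > 0 ? (values[0] as Buffer) : undefined;
}

// const bytes = extractStatusDetails(err);
// if (bytes) {
//   const status = GoogleRpcStatus.decode(bytes); // hypothetical generated decoder for google.rpc.Status
//   console.log(status.code, status.message, status.details);
// }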

Compression

Depending on the environment, bandwidth can be precious, especially on mobile networks. gRPC has built-in support for compression to reduce the payload size.

How it works

  1. Negotiation: The client sends a grpc-accept-encoding header (e.g., br, gzip, identity) to tell the server which algorithms it supports.
  2. Encoding: If the server decides to compress the response, it sets the grpc-encoding header (e.g., br).
  3. Flagging: For each message, the compression flag (byte 0 of the 5-byte header) is set to 1.
  4. Payload: The message payload is compressed using the selected algorithm.

Let's look at how the wire format changes when compression is enabled. Note that compressing our tiny "Apple" message with brotli results in a larger size due to overhead, but the structure remains the same:

01 00 00 00 0e 8f 04 80 08 96 01 12 05 41 70 70 6c 65 03
│  │           └─ The compressed payload
│  └───────────── Length of compressed message (0xE = 14 bytes)
└──────────────── Compression flag (1 = true)

This happens per-message. It is even possible to have different compression settings for the request and the response (asymmetric compression).
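A sketch of what the sender does per message once compression was negotiated (using gzip from Node's zlib for brevity instead of brotli; again, gRPC libraries handle this automatically):

import { gzipSync } from 'node:zlib';

function frameCompressedMessage(payload: Buffer): Buffer {
  const compressed = gzipSync(payload); // algorithm chosen via grpc-encoding negotiation
  const header = Buffer.alloc(5);
  header.writeUInt8(1, 0);                    // compression flag = 1
  header.writeUInt32BE(compressed.length, 1); // length of the *compressed* payload
  return Buffer.concat([header, compressed]);
}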

Alternative transports

While gRPC usually runs over TCP/IP with HTTP/2, the protocol is agnostic enough to run elsewhere.

  • Unix Domain Sockets: Perfect for local IPC. It bypasses the TCP network stack for maximum efficiency.
  • Named Pipes: The equivalent on Windows.

This flexibility allows gRPC to be the universal glue between components, whether they are on different continents or on the same chip.

The browser gap (gRPC-Web)

There is one place gRPC struggles: the browser. Web browsers do not expose the low-level HTTP/2 framing controls required for gRPC (specifically, reading trailers and granular stream control).

This challenge is addressed by gRPC-Web, a protocol adaptation that:

  1. Encodes trailers inside the data stream body (so the browser doesn't need to read HTTP trailers).
  2. Supports text-based application-layer encoding (base64) to bypass binary constraints.

We will cover more on how exactly gRPC-Web works in a future post.

Closing

gRPC is more than just a serialization format; it's a complete ecosystem that standardizes how we define, generate, and consume APIs. By understanding the layers, from the .proto contract to the 5-byte header on the wire, you can debug issues more effectively and design better systems.

Tools like Kreya abstract this complexity away for daily testing, but knowing what happens under the hood puts you in control when things get tricky.


]]>
<![CDATA[Looking back on 2025]]> https://kreya.app/blog/looking-back-on-2025/ https://kreya.app/blog/looking-back-on-2025/ Mon, 02 Feb 2026 00:00:00 GMT 4.75 million invoked operations, over 5,000 monthly active users and 3 releases. These are some key facts about the past year.

What happened in 2025?

Kreya's UI was reworked in the first release of 2025, 1.16.0, with a fresh new look given to the project settings. We also introduced cookie management and added WebSocket support.

Another release, 1.17.0, followed in April and added the new Kreya Script, which allows you to control operation invocation with JavaScript. Support was also added for importing HAR files and the JWT authentication provider, as well as for exporting operations as cURL and gRPC commands.

The last release of 2025, 1.18.0, introduced powerful new features for visualising and validating your API workflows: Data Previews and Snapshot Tests.

Monthly active users

At the end of 2025, we had more than 5,000 monthly active users, which was a +7.2% increase compared to the previous year. As we haven't done much to attract new users, we're pleased that we've managed to increase our monthly active users again this year, even if only slightly.

info

It is important to note that we only collect minimal anonymous telemetry data and this is completely transparent in the documentation. We appreciate every user who does not deactivate telemetry.

Year | Monthly active users
2025 | 5,067 (+7.2%)*
2024 | 4,727 (+21.1%)*
2023 | 3,902 (+35.9%)*
2022 | 2,871 (+89.3%)*
2021 | 1,517

*The percentage numbers refer to the change compared to the previous year.

Invoked operations

Even greater growth was achieved this year in the category of invoked operations. With more than 4.75 million operations invoked, there was an increase of +18.5% compared to 2024. This shows that our users are using Kreya more frequently than in the previous year.

Year | Invoked operations
2025 | 4.75M (+18.5%)*
2024 | 4.01M (+39.2%)*
2023 | 2.88M (+44.7%)*
2022 | 1.99M

*The percentage numbers refer to the change compared to the previous year.

REST is still increasing

As with last year, there is increased interest in REST operations compared to gRPC. It's "only" about 10 percent, but we want to improve Kreya for REST operations even more and focus on them. Our goal is to develop a comprehensive API client, not just a gRPC one. This will keep us busy in the coming years.

Year | Percentage of invoked REST operations
2025 | 9.97%
2024 | 6.93%
2023 | 3.36%
2022 | 0.77%

What's next?

As mentioned above, we also want to focus on operation types other than gRPC. We implemented REST years ago and WebSockets this year. The next release will be available in a few weeks and will support GraphQL. If you can't wait to try it out, you can use it now in the beta release channel. In the Kreya application menu, go to Kreya > About and choose the Beta release channel. Then, hit check for updates.

Finally, we would like to take this opportunity to thank all users for using Kreya, giving us feedback and helping us to keep developing an awesome product. As always, do not hesitate to contact us at [email protected] with any requests, feedback or just to say hello.

Good luck in the new year and stay tuned. 👋

]]>
<![CDATA[Transfering files with gRPC]]> https://kreya.app/blog/transfering-files-with-grpc/ https://kreya.app/blog/transfering-files-with-grpc/ Mon, 26 Jan 2026 00:00:00 GMT Is transferring files with gRPC a good idea? Or should that be handled by a separate REST API endpoint? In this post, we will implement a file transfer service in both, use Kreya to test those APIs, and finally compare the performance to see which one is better.

Challenges when doing file transfers

When handling large files, it is important to stream the file from one place to another. This might sound obvious, but many developers (accidentally) buffer the whole file in memory, potentially leading to out-of-memory errors. For a web server that provides files for download, a correct implementation would stream the files directly from the file system into the HTTP response.
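As a minimal illustration of the streaming idea in Node/TypeScript (the path and port are made up; the actual server examples in this post use ASP.NET Core):

import { createServer } from 'node:http';
import { createReadStream } from 'node:fs';

createServer((req, res) => {
  if (req.url === '/api/files/pdf') {
    res.writeHead(200, { 'Content-Type': 'application/pdf' });
    // Streams the file in small chunks instead of buffering it in memory.
    createReadStream('/files/test-file.pdf').pipe(res);
  } else {
    res.writeHead(404).end();
  }
}).listen(8080);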

Another problem with very large files is network failures. Imagine you are downloading a 10 GB file on a slow connection, but your connection is interrupted for a second after downloading 90% of it. With REST, this could be solved by sending an HTTP Range header, requesting the rest of the file content without downloading the first 9 GB again. To keep this blog post simple, and since something similar is possible with gRPC, we are going to ignore this problem.

Transferring files with REST

Handling file transfers with REST (more correctly, plain HTTP) is pretty straightforward in most languages and frameworks. In C#, or rather ASP.NET Core, an example endpoint offering a file for download could look like this:

[HttpGet("api/files/pdf")]
public PhysicalFileResult GetFile()
{
return new PhysicalFileResult("/files/test-file.pdf", "application/pdf");
}

We are effectively telling the framework to stream the file /files/test-file.pdf as the response. Internally, the framework repeatedly reads a small chunk (usually a few KB) from the file and writes it to the response.

Kreya screenshot of calling a GET /api/files/pdf endpoint

The whole response body consists of the file content, and Kreya, our API client, automatically renders it as a PDF. Other information about the file, such as content type or file name, has to be sent via HTTP headers.

This is important. If you have a JSON REST API and try to send additional information in the response body like this:

{
"name": "my-file.pdf",
"created": "2026-01-28",
"content": "a3JleWE..."
}

This is bad! The whole file content will be Base64-encoded, taking up roughly 33% more space than the file itself. In most languages/frameworks (without additional workarounds), this would also buffer the whole file in memory, since the Base64-encoding process is usually not streamed while creating the JSON response. If the file itself is also buffered in memory, you could see memory usage over twice the size of the file. This may be fine if your files are only a few KB in size. But even then, if multiple requests are concurrently hitting this endpoint, you may notice quite a lot of memory usage.

Transferring files with gRPC

While the REST implementation was straightforward, this is not the case with gRPC. The design of gRPC is based on protobuf messages. There is no concept of "streaming" the content of a message. Instead, gRPC is designed to buffer a message fully in memory. This is the reason why individual gRPC messages should be kept small. The default maximum size is set at 4 MB. So how do we send large files bigger than that?

While gRPC cannot stream the content of a message, it allows streaming multiple messages. The solution is to break up the file into small chunks (usually around 32 KB) and then send these chunks until the file is transferred completely. The protobuf definition for a file download service could look like this:

edition = "2023";

package filetransfer;

import "google/protobuf/empty.proto";

service FileService {
rpc DownloadFile(google.protobuf.Empty) returns (stream FileDownloadResponse);
}

message FileDownloadResponse {
oneof data {
FileMetadata metadata = 1;
bytes chunk = 2;
}
}

message FileMetadata {
string file_name = 1;
string content_type = 2;
int64 size = 3;
}

This defines the FileService.DownloadFile server-streaming method, which means that the method accepts a single (empty) request and returns multiple responses. While we could send the file metadata via gRPC metadata (=HTTP headers or trailers), I think it's nicer to define it explicitly via a message. The server should send the metadata first, as it contains important information, such as the file size.

A naive server implementation in C# could look like this:

private const int ChunkSize = 32 * 1024;

public override async Task DownloadFile(Empty request, IServerStreamWriter<FileDownloadResponse> responseStream, ServerCallContext context)
{
await using var fileStream = File.OpenRead("/files/test-file.pdf");

// Send the metadata first
await responseStream.WriteAsync(new FileDownloadResponse
{
Metadata = new FileMetadata
{
ContentType = "application/pdf",
FileName = "test-file.pdf",
Size = fileStream.Length,
}
});

// Then, chunk the file and send each chunk until everything has been sent
var buffer = new byte[ChunkSize];
int read;
while ((read = await fileStream.ReadAsync(buffer)) > 0)
{
await responseStream.WriteAsync(new FileDownloadResponse
{
Chunk = ByteString.CopyFrom(buffer, 0, read),
});
}
}

This works, but has some issues:

  • A new byte array buffer is created for each request
  • The buffer is copied each time to create a ByteString

The first point is easily solved by using a buffer from the shared pool, potentially re-using the same buffer for subsequent requests.

The second point happens due to the gRPC implementation in practically all languages. Since the implementation wants to guarantee that the bytes are not modified while sending them, it performs a copy first. This is a design decision which favors stability over performance. Luckily, there is a workaround by using an "unsafe" method, which is perfectly safe in our scenario and improves performance:

private const int ChunkSize = 32 * 1024;

public override async Task DownloadFile(Empty request, IServerStreamWriter<FileDownloadResponse> responseStream, ServerCallContext context)
{
await using var fileStream = File.OpenRead("/files/test-file.pdf");

// Send the metadata first
await responseStream.WriteAsync(new FileDownloadResponse
{
Metadata = new FileMetadata
{
ContentType = "application/pdf",
FileName = "test-file.pdf",
Size = fileStream.Length,
}
});

// Then, chunk the file and send each chunk until everything has been sent
using var rentedBuffer = MemoryPool<byte>.Shared.Rent(ChunkSize);
var buffer = rentedBuffer.Memory;

int read;
while ((read = await fileStream.ReadAsync(buffer)) > 0)
{
await responseStream.WriteAsync(new FileDownloadResponse
{
Chunk = UnsafeByteOperations.UnsafeWrap(buffer[..read]),
});
}
}

Let's try this out. After importing the protobuf definition, we call the gRPC method with Kreya:

Kreya screenshot of calling the gRPC file download method

This works, but where is our PDF? Since we are sending individual chunks, we need to put them back together manually.

To achieve this, we simply need to append each chunk to a file. In Kreya, this is done via Scripting:

import { writeFile, appendFile } from 'fs/promises';

const path = './preview-pdf.pdf';

// Initialize an empty file
await writeFile(path, '');

// Hook to handle each individual gRPC message
kreya.grpc.onResponse(async msg => {
if (msg.content.metadata) {
// Ignore the metadata for now
return;
}

// Note: The data is only in Base64 here because Kreya encodes them as such for Scripting purposes.
// On the network, gRPC transfers the chunk as length-delimited bytes
await appendFile(path, msg.content.chunk, 'base64');
});

// When we received everything, show the PDF
kreya.grpc.onCallCompleted(async () => await kreya.preview.file(path, 'PDF'));

This allows us to view the PDF:

Kreya screenshot of calling the gRPC file download method

Comparison

Great! So transferring files with gRPC is definitely possible. But how do these two technologies compare against each other? Which one is faster and has less overhead?

Total bytes transferred

The total amount of bytes transferred on the wire is actually a pretty difficult topic. It depends on a lot of factors, such as the HTTP protocol (HTTP/1.1, HTTP/2 or HTTP/3), the packet size of TCP/IP, whether TLS is being used, etc. We are going to take a look at how this applies to REST and gRPC.

Streaming files over a gRPC connection generates overhead, although not much. Since gRPC uses HTTP/2 under the hood, each individual chunk message has a few bytes of overhead due to the HTTP/2 DATA frame information needed. Additionally, each chunk needs a few bytes to describe the content of the gRPC message. You are looking at roughly 15 bytes per message chunk if it fits into one HTTP/2 DATA frame. Transferring 4 GB of data with a chunk size of 16 KB would need around 250,000 messages to transfer the file completely, incurring an overhead of ~3.7 MB. This may or may not be negligible depending on the use case.

Up- or downloading huge files with REST over HTTP/1.1 has less overhead. Since the bytes of the file are sent as the response/request body, there is not much else that takes up space. In case of uploads to a server, HTTP multipart requests incur a small overhead cost to define the multipart boundary. Additionally, HTTP headers and everything else that is needed to send the request over the wire take up space, but this is the case for all HTTP-based protocols. Downloading files, whether small or large, has roughly the same byte overhead with HTTP/1.1. Depending on the count and size of HTTP headers, this is around a few hundred bytes.

Funnily enough, transferring files with REST over HTTP/2 incurs a bigger overhead. HTTP/2 splits the payload into individual DATA frames, very similar to our custom gRPC solution. Each frame, often with a maximum size of 16 KB, has an overhead of 9 bytes. For a file 4 GB in size, this amounts to ~2.2 MB of overhead. While HTTP/2 has many performance advantages, transferring a single, large file over HTTP/1.1 has less overhead.
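As a quick back-of-the-envelope calculation of the two framing overheads mentioned above (a TypeScript sketch using the same chunk and frame sizes):

const fileSize = 4 * 1024 ** 3;                 // 4 GB
const chunkSize = 16 * 1024;                    // 16 KB chunks / DATA frames
const chunks = Math.ceil(fileSize / chunkSize); // 262,144

const grpcOverheadBytes = chunks * 15;  // ~15 bytes per gRPC message
const http2OverheadBytes = chunks * 9;  // 9 bytes per HTTP/2 DATA frame

console.log((grpcOverheadBytes / 1024 ** 2).toFixed(2) + ' MB');  // 3.75 MB
console.log((http2OverheadBytes / 1024 ** 2).toFixed(2) + ' MB'); // 2.25 MB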

In these examples, we omitted the overhead created by TCP/IP, TLS and the lower network layers, which both HTTP/1.1 and HTTP/2 share. Comparing it with HTTP/3, which uses UDP, would make everything even more complicated, so we leave this as an exercise for the reader :)

Performance and memory usage

I spun up the local server plus client and took a look at the CPU and memory usage. Please note that this is not an accurate benchmark, which would require a more complex setup. Nevertheless, it provides some insight into the differences between the approaches. The example file used was 4 GB in size.

 | gRPC (naive) | gRPC (optimized) | REST (HTTP/1.1) | REST (HTTP/2)
Duration | 24s | 22s | 20s | 28s
Max memory usage | 36 MB | 35 MB | 32 MB | 35 MB
Total memory allocation | 4465 MB | 165 MB | 38 MB | 137 MB

If done right, memory usage is no problem for streaming very large files. And as expected, the naive gRPC implementation, which copies a lot of ByteStrings around, allocates a lot of memory! It needs constant garbage collections to clean up the mess. The maximum memory usage, however, stays low for all approaches.

What really surprised me was the bad performance of HTTP/2 in comparison to HTTP/1.1. It was even slower than gRPC, which builds on top of it! I cannot really explain this huge difference, especially since the code is exactly the same, both on the client and the server. I was running these tests on .NET 10 on Windows 11.

The optimized gRPC version performs pretty well, but is still slower than REST via HTTP/1.1. Since it has to do more work, it takes longer and uses more memory (and CPU). Optimizing the gRPC code was very important, as the naive implementation allocates so much memory!

I also tested HTTP/1.1 with TLS disabled, but it did not really make a difference.

Conclusion

A REST endpoint for up- and downloading files is still the best option. If you are forced to use gRPC or simply too lazy to add REST endpoints in addition to your gRPC services, transferring files via gRPC is not too bad!

If you use some kind of S3-compatible storage backend, the best option is to generate a presigned URL. Then, download your files directly from the S3 storage instead of piping them through your backend.

There are lots of points to consider when implementing file transfer APIs. For example, if your users may have slow networks, it may be useful to compress the data before sending it over the wire.

Should you need resumable up- or downloads, instead of rolling your own, you could use https://tus.io/. This open-source protocol has implementations in various languages.

]]>
<![CDATA[Catching API regressions with snapshot testing]]> https://kreya.app/blog/api-snapshot-testing/ https://kreya.app/blog/api-snapshot-testing/ Thu, 15 Jan 2026 00:00:00 GMT API development is a high-wire act. You’re constantly balancing new feature delivery against the terrifying possibility of breaking existing functionality. Testing (for example, manually writing assertions for every field you expect back) is your safety harness. But what happens when the response payload is 500 lines of complex nested JSON? Your harness becomes a tangled mess of brittle code that takes longer to maintain than the feature itself. This is where snapshot testing shines, but it also has drawbacks of its own. Let's take a look.

What is snapshot testing?

Snapshot testing (sometimes called "Golden Master" testing) is straightforward:

  1. You take a system in a known, correct state.
  2. You capture its output (the "snapshot") and save it to a file. This is your baseline.
  3. In future tests, you run the system again and compare the new output against the saved baseline.
  4. If they match exactly, the test passes. If they differ by even a single character, the test fails, and you are presented with a "diff" highlighting the variance.

Unlike traditional unit tests that ask, "Is X equal to Y?", snapshot tests ask, "Has anything changed since last time?"
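Stripped down to its essence, the mechanism is just "compare against a stored baseline". A minimal sketch in TypeScript (file locations and naming are arbitrary; real tools add diffing, scrubbing and an accept workflow):

import { mkdir, readFile, writeFile } from 'node:fs/promises';
import { existsSync } from 'node:fs';

async function verifySnapshot(name: string, actual: string): Promise<void> {
  const dir = './__snapshots__';
  const path = `${dir}/${name}.snap`;

  if (!existsSync(path)) {
    await mkdir(dir, { recursive: true });
    await writeFile(path, actual); // first run: store the baseline
    return;
  }

  const baseline = await readFile(path, 'utf8');
  if (baseline !== actual) {
    throw new Error(`Snapshot "${name}" changed:\n--- baseline\n${baseline}\n+++ actual\n${actual}`);
  }
}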

Why snapshot test HTTP APIs?

While snapshot testing is popular in UI frameworks (like React), it also works pretty well in the world of APIs. Be it HTTP, REST, gRPC or GraphQL, all types of APIs can be made to work with snapshot testing.

API responses, particularly large JSON or XML payloads, are notoriously difficult to test comprehensively using traditional assertions. Imagine an e-commerce API returning a product details object. It might contain nested categories, arrays of variant images, pricing structures, and localized metadata. Writing individual assertions for every one of those fields is tedious, error-prone, and results in massive test files that developers hate updating.

Often, developers end up just asserting the HTTP 200 OK status and maybe checking some fields of the top-level object, leaving 90% of the payload untested. This may result in embarrassing bugs later on, even though the core business logic may be thoroughly tested.

A bug I have often encountered is data being exposed via an API that should not be exposed, with nobody noticing during development or review. Imagine a user object with a password field (hashed, of course), which obviously should not be exposed. A bug is introduced, exposing this field publicly via the API. Since no test exists asserting that this field should NOT be exposed, the change passes the PR review and is merged.

Uh-oh, someone made a mistake
{
"id": 1,
"username": "johndoe",
+ "password": "$2a$12$QuJLR1Ot8AB0uWtKOMC9hOpU1g9bLkYant5g5I6CdC4HsQCvyN9zG",
"email": "[email protected]",
"profile": {
"department": "Engineering",
"theme": "dark"
}
}

Snapshot testing solves this by treating the entire response body (and sometimes including headers and status) as the unit of verification. It provides immediate, comprehensive coverage against unintended side effects. If a backend developer accidentally changes a float to a string in a nested object three levels down, a snapshot test will catch it instantly.

Benefits: Speed and confidence

Adopting snapshot testing for your APIs offers significant advantages.

Fast test creation

Snapshot tests are easy and fast to create. This often leads to more tests being created than with traditional tests, as less time needs to be spent on each test.

Catching everything

Traditional tests only check what you think might break. Snapshot tests catch everything that does break. It protects you from side effects in areas of the response you might have forgotten existed. Accidentally removing a field provides instant feedback instead of only catching the bug in production.

Accidentally serialized the enum as number instead of string
{
"id": 1,
"username": "johndoe",
"email": "[email protected]",
"profile": {
"department": "Engineering",
- "theme": "dark"
+ "theme": 1
}
}

Quick updating of tests

Changing or introducing a field means you have to update all your tests (or snapshots, in our case). While this may seem like a drawback at first, this process is usually pretty fast with the correct tool. Simply run all tests, which should fail since something has changed. Then, review the diffs and accept them all. With traditional testing, if you wanted to keep the same coverage, you would have to change or add an assertion in each affected test case. This often proves difficult in practice, as developers are either too lazy to do it or spend a long time doing so.

Simplified code reviews

When a snapshot test fails due to an intentional change, the developer updates the snapshot file. In the pull request, the reviewer sees a clear, readable diff of exactly how the API contract is changing.

Pitfalls: Dynamic data and missing discipline

Snapshot testing is powerful, but it has sharp edges. If misused, it can lead to a test suite that developers ignore.

The nondeterminism problem

This is the number one enemy of snapshot testing. APIs often return data that changes on every request:

  • Timestamps ("createdAt": "2023-10-27T10:00:00Z")
  • Generated UUIDs or IDs
  • Randomized ordering of arrays

If you include these in your raw snapshot, your test will fail on every run. Dynamic data must be removed or replaced by the snapshotting tool before comparing snapshots. Luckily, most snapshotting tools can be configured to remove things like dates and UUIDs altogether or replace them with deterministic placeholders. Other configuration options allow you to ignore specific properties or match content with regexes.

A snapshot with scrubbed timestamps, meaning they got replaced with placeholders
{
"id": 1,
"username": "johndoe",
"email": "[email protected]",
"creationDate": "{timestamp_1}",
"profile": {
"department": "Engineering",
"theme": "dark"
},
"lastLogin": "{timestamp_2}"
}
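Conceptually, such a scrubber boils down to a couple of replacements before comparing. A simplified sketch in TypeScript (the regexes only cover ISO timestamps and UUIDs and are illustrative, not exhaustive):

function scrub(snapshot: string): string {
  const seen = new Map<string, string>();
  let counter = 0;

  return snapshot
    // Replace ISO timestamps with stable, numbered placeholders.
    .replace(/\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(\.\d+)?Z/g, match => {
      if (!seen.has(match)) seen.set(match, `{timestamp_${++counter}}`);
      return seen.get(match)!;
    })
    // Replace UUIDs with a generic placeholder.
    .replace(/[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}/gi, '{uuid}');
}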

APIs returning randomly ordered arrays is often actually a bug that developers have not considered, probably due to returning data directly from the database without an explicit ORDER BY clause. So it may actually be a good thing that randomly ordered arrays do not work with snapshot testing! Otherwise, API consumers are forced to order the data themselves before presenting it to users (since the data would randomly re-order on each reload).

Snapshot fatigue

When snapshots fail frequently (perhaps due to the nondeterminism mentioned above), developers can get fatigued. They stop analyzing the diff and just blindly press the "Update Snapshot" button to get the build green. At this point, the tests are useless. Discipline is required to ensure failures are investigated.

Snapshot tests only catch that something has changed. It is on the developers and reviewers to determine whether the change was intended.

A similar thing can happen when reviewing a PR with many snapshot updates all over the place. It may be difficult for the reviewer to grasp whether all snapshot changes were truly intended. Keep your PRs small and focused on one change at a time. Reviewing snapshots where the same field was added in each one makes things much easier.

Practical snapshot testing example

Let's walk through a practical example of snapshot testing. Since this is our blog, we use Kreya as the snapshotting tool. Kreya allows you to call REST, gRPC and WebSocket APIs (with GraphQL coming soon in version 1.19). Snapshot testing works for all of them.

Step 1: The setup

Imagine you have a REST endpoint GET /api/users/{id}. You create a request for this endpoint or let Kreya generate one automatically by importing an OpenAPI definition.

Kreya screenshot of calling a GET /api/users/1 endpoint

Step 2: Enable snapshot testing

The request works. Now, you need to enable snapshot testing in the Settings tab. Then, call the endpoint once to generate the baseline. Accepting the snapshot stores the baseline as a text file on disk.

Kreya screenshot when the baseline snapshot was accepted

Did you spot that Kreya automatically scrubbed the timestamps of the response? This ensures that the snapshot does not store any dynamically generated data.

Step 3: Check if everything works

The snapshot is stored on disk and will be checked against the new response when we call the endpoint again. Let's do this once to see if everything works correctly.

Kreya screenshot where a successful test shows that the snapshot matches

As we can see, the snapshot matches exactly. The snapshot test turns green.

Step 4: Testing a regression and the diff

A week later, someone on the backend team accidentally renames the profile.department field to just profile.dept. You run your Kreya tests. The test fails. Kreya presents a clear diff view:

Kreya screenshot where a failed test shows that the snapshot did not match

You instantly see the breaking change. If this was intentional, you press the "Accept" button in Kreya, and the new structure becomes the baseline. If it was accidental, you have caught a bug before it reached production.

Other protocols

Snapshot testing also works for other protocols, for example gRPC:

An animation showcasing snapshot testing.
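If you want more control over what gets snapshotted for streaming calls, this can also be scripted. A sketch using the kreya.snapshot API introduced in Kreya 1.18 (assuming the decoded message is available as msg.content, as in Kreya's gRPC scripting hooks):

// Snapshot each received gRPC message instead of the raw call output.
kreya.grpc.onResponse(async msg => await kreya.snapshot.verifyObjectAsJson('response', msg.content));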

Conclusion

Snapshot testing is not a silver bullet; it doesn't replace all unit or integration tests. However, for HTTP APIs with complex payloads, it is perhaps the highest-ROI testing strategy available. It trades the tedium of writing endless assertions for a system of rapid baseline comparisons.

Tools like Kreya make this process manageable by integrating snapshotting directly into the development workflow and providing robust mechanisms to tame dynamic data. By incorporating snapshot tests, you gain a safety net that allows you to refactor and add features to your API with significantly higher confidence.

]]>
<![CDATA[5 Best API Testing Tools]]> https://kreya.app/blog/best-api-testing-tools/ https://kreya.app/blog/best-api-testing-tools/ Thu, 06 Nov 2025 00:00:00 GMT API testing is a critical aspect of modern development. The right tools enable faster, automated and more reliable API testing.

In this comparison, we look into the following 5 API testing tools: Kreya, Postman, Bruno, Insomnia, and HTTPie.

We'll analyze their architectures and protocol capabilities to help you choose the right tool for your modern development and security needs.

Kreya

Kreya screenshot

Kreya is a privacy-first desktop client designed to explore, test and automate API workflows.

It operates locally, ensuring that project data remains secure on the user's machine and is structured for simple management alongside source code via Git.

Key Features:

  • Privacy-first and local file-based storage, designed for Git diffs and code reviews.
  • Supports gRPC (Protobuf), REST, WebSocket, HTTP/2, and HTTP/3.
  • Environment and authentication management.
  • Automated snapshot assertions and scripting for automated testing.
  • CI/CD workflows per CLI.

Pros:

  • Designed for collaboration via Git.
  • Very extensive support for modern protocols (gRPC, HTTP/3).
  • Seamlessly handles API changes (e.g. new endpoints from an OpenAPI spec).
  • Ideal for developers testing complex API architectures.

Cons:

  • No mock servers and no documentation publishing.
  • Some features require a paid license.

Postman

Postman screenshot

Postman was one of the first REST GUI clients to emerge and remains one of the most popular API development tools for testing REST APIs.

It offers the most comprehensive set of features and robust collaboration for large teams.

Key Features:

  • Collections, environments, and workspaces.
  • Mock servers to simulate API behavior.
  • Monitoring, CI/CD integration, and API design tools.

Pros:

  • User-friendly interface that makes it easy for beginners and experienced users to test APIs.
  • Broad feature set with workspaces, mocks, monitoring and automation.
  • Strong collaboration and synchronization via the cloud.

Cons:

  • Can feel feature-heavy ("bloated") for simple tasks.
  • Mandatory user account and heavy cloud reliance (vendor lock-in).
  • Heavy cloud dependency that may conflict with strict corporate security policies.
  • Data storage and sharing via Git is not natively straightforward and often requires workarounds.

Bruno

Bruno screenshot

Bruno is an open-source API client built on a strictly offline-first philosophy.

Its defining feature is storing all collections and environments as plain-text .bru files, which makes them inherently Git-friendly.

Key Features:

  • Offline-first and local file-based storage (.bru files).
  • Native integration with Git for version control.
  • Supports functional testing and scripting.
  • Supports REST, gRPC, and GraphQL.

Pros:

  • Git integration eliminates cloud lock-in.
  • Open-source core.
  • Minimalist design and strong focus on speed.

Cons:

  • Not focused on the full API lifecycle (e.g. monitoring, mock servers).
  • Some features are reserved for paid add-ons.
  • Future uncertain, since Bruno broke their promises a few times already (e.g. the pricing model change from the "Golden Edition" to a subscription-based one).
  • History of buggy releases.

Insomnia

Insomnia screenshot

Insomnia is a popular, open-source desktop client favored for its clean, streamlined interface and protocol flexibility across REST, GraphQL, and gRPC.

While historically known as a local-first option, it now offers flexible storage—users can operate strictly locally or utilize cloud synchronization (under Kong's ownership) for team collaboration.

Key Features:

  • Elegant, user-friendly, and customizable interface.
  • Supports GraphQL, gRPC, and REST.
  • Flexible data storage options (local, Git sync, or cloud sync).

Pros:

  • High-quality UI/UX preferred by many developers.
  • Open-source core.
  • Excellent choice for API automation via its CLI.

Cons:

  • Shift toward mandatory cloud features has been controversial for some users.
  • Advanced collaboration and synchronization features require paid plans.
  • Plugin ecosystem is robust but smaller than Postman's.

HTTPie

HTTPie screenshot

HTTPie revolutionized command-line testing with its simple, human-readable syntax, serving as a modern, friendly replacement for curl.

HTTPie is a tool for developers who prioritize the efficiency and clarity of text-based API interactions across both their terminal and a persistent workspace.

Key Features:

  • Human-readable CLI syntax for rapid requests.
  • Seamless sync between the CLI, Desktop, and Web clients.
  • Support for environments and basic authentication.

Pros:

  • High speed and clarity for ad-hoc, command-line testing.
  • Minimalist design focused purely on request creation and execution.

Cons:

  • Lacks advanced testing features like complex assertion scripting.
  • Primarily focused on REST/HTTP, with limited support for other protocols.

Conclusion

API Test Tool | Description
Kreya | Privacy-first and local file-based storage, designed for Git diffs and code reviews with advanced protocol support (e.g. gRPC, HTTP/3).
Postman | Most comprehensive set of features, heavy cloud reliance.
Bruno | Open-source core, offline-first API client, uses plain-text files for native Git version control and collaboration. Future uncertain, since Bruno broke their promises a few times already.
Insomnia | Open-source core, simple UI, flexible data storage options. Advanced features require an account.
HTTPie | CLI client that modernizes API interaction with human-readable syntax. Lacks advanced testing features like complex assertion scripting.

For years, the market favored cloud-first platforms like Postman, which excelled at enterprise collaboration and feature breadth but often resulted in vendor lock-in and security conflicts.

The rise of local-first and Git-friendly tools proves that many developers prioritize data privacy, transparent version control, and seamless integration into existing CI/CD workflows.

Ultimately, your decision depends on your team's priorities: Do you need the broad feature set and synchronization of a cloud monolith like Postman, or the control, speed, and specialized protocol support of a Git-friendly client like Kreya?

The best tool is the one whose architecture and technical capabilities align perfectly with your unique development principles.

]]>
<![CDATA[Kreya 1.18 - What's New]]> https://kreya.app/blog/kreya-1.18-whats-new/ https://kreya.app/blog/kreya-1.18-whats-new/ Fri, 26 Sep 2025 00:00:00 GMT Kreya 1.18 brings powerful new ways to visualize and validate your API workflows: Data Previews and Snapshot Tests. These features make it easier than ever to understand your API responses and ensure your integrations remain stable over time.

Snapshot tests Pro / Enterprise

Kreya 1.18 introduces Snapshot Tests, a fast, code-free way to ensure your APIs behave consistently over time. Inspired by Jest, snapshot tests automatically capture and compare API responses, alerting you to any unexpected changes.

How snapshot tests work:

  • Enable snapshot testing in Kreya settings
  • Kreya automatically saves a snapshot of each API response
  • On future runs, responses are compared to the stored snapshot
  • If the response changes, you’ll see a diff and can accept or reject the update

Advanced usage: For more control, use the kreya.snapshot API in your scripts to snapshot custom values:

kreya.grpc.onResponse(async msg => await kreya.snapshot.verifyObjectAsJson('response', { length: msg.value.length }));

Best practices:

  • Keep your tests deterministic, avoid random or time-dependent data in snapshots
  • Treat snapshots like code: review changes, commit them to version control, and keep them up to date
An animation showcasing snapshot testing.

Test results are shown in the Tests tab, with visual snapshot diffs.
See the snapshot documentation for more details.

Data Previews Pro / Enterprise

Kreya now lets you create rich, interactive data visualizations directly in the app. With the new kreya.preview API, you can display PDFs, charts, HTML, and more, right from your scripts. This makes it easy to turn raw API responses into meaningful insights.

Why use previews?

  • Visualize complex data for easier interpretation
  • Share and review results with your team
  • Debug and explore responses interactively

Sample: Visualize API data as a chart

You can render charts using libraries like Apache ECharts. For example, to visualize a REST API response as a bar chart:

kreya.rest.onCallCompleted(
async resp =>
await kreya.preview.html(
`
<html>
<body>
<div id="chart" style="width: 100%; height: 100%;"></div>
<script src="proxy.php?url=https%3A%2F%2Fcdn.jsdelivr.net%2Fnpm%2Fecharts%405.6.0%2Fdist%2Fecharts.min.js"></script>
<script>
const chart = echarts.init(document.getElementById('chart'));
chart.setOption({
title: { text: 'Values' },
animation: false,
xAxis: { type: 'category', data: ${JSON.stringify(resp.response.content.map(d => d.label))} },
yAxis: { type: 'value' },
series: [{ type: 'bar', data: ${JSON.stringify(resp.response.content.map(d => d.value))} }]
});
window.addEventListener('resize', () => chart.resize());
</script>
</body>
</html>
`,
'Sales',
),
);
An animation showcasing visualizing data as a chart.

See the kreya.preview API documentation and samples for more details and inspiration.

Comments in environments

A small change, but sometimes really useful. It's now possible to store comments in environments.

An animation showcasing add a comment in environments.

Improved trace and test tab

The Trace and Test tab has been redesigned. Messages are now grouped by operation name, making it quick and easy to see which message belongs to which operation. It is also now possible to search the Trace tab and navigate through the search results.

An animation showcasing the improved test and trace tab.

Multiple organizations and seats manager in customer portal Pro / Enterprise

There is now a dropdown menu in the header of the customer portal for managing multiple organizations, whether personal or business. Additionally, the customer portal now features a new role called Seats Manager, which distinguishes between the billing administrator and the person responsible for managing user seats.

Kreya customer portal screenshot

GTK4 for Linux

On Linux, we updated our dependencies to GTK4. If you do not use Kreya via snap, make sure that you have libgtk-4-1 and libwebkitgtk-6.0-4 installed.

Improved proxy support Enterprise

It is now possible to configure proxy settings under Project > Proxy. Here, you can add a proxy address and exclusions. To activate a proxy, select it in the footer bar.

An animation showcasing using the proxy settings.

And more

Kreya 1.18 also includes various improvements and bug fixes. For a full list, see the release notes.

If you have feedback or questions, please contact us or report an issue.

Stay tuned! 🚀

]]>
<![CDATA[Comparing the privacy of popular API clients]]> https://kreya.app/blog/comparing-privacy-of-popular-api-clients/ https://kreya.app/blog/comparing-privacy-of-popular-api-clients/ Fri, 13 Jun 2025 00:00:00 GMT API clients hold sensitive data, from auth tokens to proprietary endpoints. But how much control do users retain over their data? And is their privacy respected?

When choosing an API client, developers often focus on features and performance. However, with growing concerns over data privacy, understanding how these tools handle your information is more critical than ever. This comparison examines the privacy postures of four popular API clients: Postman, Kreya, Insomnia, and Bruno.

All API clients were tested with their newest version as of 13 June 2025. Only free editions without login were used. A fresh "workspace" was initialized before any telemetry data was collected.

Postman

Postman is one of the oldest API clients and certainly the most popular. Since its inception, Postman has shifted its approach from a simple GUI client to a cloud platform. And this cloud-centric approach is clear to see: almost all data is stored in the Postman cloud, and this cannot be changed. The only exception is using Postman without an account through the lightweight API client. This stores the data locally, but the lightweight API client has a heavily restricted feature set. Postman heavily encourages creating an account, since otherwise syncing your data is not possible. Other features, such as environments, workspaces and collections, are also locked behind an account.

To get back control over their data, Postman users often export their collections as a big JSON file and sync that via git. However, Postman seems to restrict this export feature. For example, gRPC and WebSocket requests cannot be exported and remain locked into the Postman world.

Not only do users barely have any control over their data, it is also uploaded to Postman's servers. There are possibilities to keep data (such as "current values" of variables) local only, but one small misstep and a secret is leaked to Postman. Postman knows everything about your APIs. Confidential URLs, hidden features, secret upcoming changes deployed to your staging environment are just some of the examples that are exposed to Postman's servers.

Let's take a look at the collected telemetry. As with all tested clients, telemetry is automatically collected. Postman does not offer a way to disable telemetry. Interestingly, it also does not list in its privacy policy which third-party services it uses to collect telemetry data.

To see how much Postman phones home, opening the lightweight API client (version 11.49.4) and sending a request while intercepting the traffic resulted in 10 requests being sent to external servers:

  • Some of them were simple update checks, for example to https://dl.pstmn.io.
  • Two requests went to LaunchDarkly, which can be used for telemetry as well as for feature flags. One call returned a staggering 112 KB message of feature flags.
  • https://bifrost-https-v10.gw.postman.com/ws/proxy was contacted with information about the installation.
  • https://events.getpostman.com/events was called with the following telemetry data:
    View JSON data
    {
    "type": "events-general",
    "indexType": "client-events",
    "env": "production",
    "propertyId": "{redacted uuid}",
    "userId": "0",
    "teamId": "",
    "propertyVersion": "11.49.4",
    "property": "windows_app",
    "timestamp": "2025-06-13T08:26:22.573Z",
    "category": "offline_api_client",
    "action": "ad_viewed",
    "label": "collections",
    "value": 1
    }
  • https://api2.amplitude.com/2/httpapi is also used for detailed telemetry data:
    View JSON data
    {
    "api_key": "56d4a7f42486e1c4ec95a892fd96c402",
    "events": [
    {
    "user_id": null,
    "device_id": "{redacted uuid}",
    "session_id": 1749803181864,
    "time": 1749803181873,
    "platform": "Web",
    "language": "de",
    "ip": "$remote",
    "insert_id": "9f584529-db9c-45c9-bbe4-9aa675a91358",
    "event_type": "$identify",
    "user_properties": {},
    "event_id": 0,
    "library": "amplitude-ts/2.7.2",
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Postman/11.49.4 Electron/32.3.3 Safari/537.36",
    "event_properties": {}
    },
    {
    "user_id": null,
    "device_id": "{redacted uuid}",
    "session_id": 1749803181864,
    "time": 1749803181864,
    "platform": "Web",
    "language": "de",
    "ip": "$remote",
    "insert_id": "d232976f-6ba7-4151-8d60-6d347c4be793",
    "event_type": "session_start",
    "event_id": 1,
    "library": "amplitude-ts/2.7.2",
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Postman/11.49.4 Electron/32.3.3 Safari/537.36",
    "event_properties": {},
    "user_properties": {}
    },
    {
    "user_id": null,
    "device_id": "{redacted uuid}",
    "session_id": 1749803181864,
    "time": 1749803181953,
    "platform": "Web",
    "language": "de",
    "ip": "$remote",
    "insert_id": "d0230992-17d9-4952-a3db-514a18e6bd1d",
    "event_type": "[Amplitude] Page Viewed",
    "event_properties": {
    "[Amplitude] Page Domain": "",
    "[Amplitude] Page Location": "{redacted path}/Postman/app-11.49.4/resources/app.asar/html/scratchpad.html?browserWindowId=1&logPath={redacted path}\\Postman\\logs&sessionId=6712&startTime=1749802966409&preloadFile={redacted path}\\Postman\\app-11.49.4\\resources\\app.asar\\preload_desktop.js&scratchpadPartitionId=e60a4656-25bf-4ff4-b8aa-7576cc4eb535&isFirstRequester=true",
    "[Amplitude] Page Path": "/{redacted path}/Postman/app-11.49.4/resources/app.asar/html/scratchpad.html",
    "[Amplitude] Page Title": "Postman",
    "[Amplitude] Page URL": "{redacted path}/Postman/app-11.49.4/resources/app.asar/html/scratchpad.html",
    "release_channel": "",
    "platform": "desktop",
    "current_url": "/{redacted path}/Postman/app-11.49.4/resources/app.asar/html/scratchpad.html",
    "event_source": "client_app",
    "team_user_id": null,
    "company_size": 0,
    "workspace_visibility": null,
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Postman/11.49.4 Electron/32.3.3 Safari/537.36",
    "[Amplitude] Page Counter": 1
    },
    "event_id": 2,
    "library": "amplitude-ts/2.7.2",
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Postman/11.49.4 Electron/32.3.3 Safari/537.36",
    "user_properties": {}
    },
    {
    "user_id": null,
    "device_id": "{redacted uuid}",
    "session_id": 1749803181864,
    "time": 1749803181966,
    "platform": "Web",
    "language": "de",
    "ip": "$remote",
    "insert_id": "845fff32-2de2-40f9-974f-621a4a6ec6e2",
    "event_type": "$identify",
    "user_properties": {},
    "event_id": 3,
    "library": "amplitude-ts/2.7.2",
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Postman/11.49.4 Electron/32.3.3 Safari/537.36",
    "event_properties": {}
    },
    {
    "user_id": null,
    "device_id": "{redacted uuid}",
    "session_id": 1749803181864,
    "time": 1749803182616,
    "platform": "Web",
    "language": "de",
    "ip": "$remote",
    "insert_id": "fd0f1c4d-8f88-4616-9d4a-32c39cee7456",
    "event_type": "LAC - Micro Ads - Ad - Viewed",
    "event_properties": {
    "ad_id": "collections",
    "release_channel": "",
    "platform": "desktop",
    "current_url": "/{redacted path}/Postman/app-11.49.4/resources/app.asar/html/scratchpad.html",
    "event_source": "client_app",
    "team_user_id": null,
    "company_size": 0,
    "workspace_visibility": null,
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Postman/11.49.4 Electron/32.3.3 Safari/537.36"
    },
    "event_id": 4,
    "library": "amplitude-ts/2.7.2",
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Postman/11.49.4 Electron/32.3.3 Safari/537.36",
    "user_properties": {}
    }
    ],
    "options": {},
    "client_upload_time": "2025-06-13T08:26:26.885Z"
    }
  • Postman also loads some SVGs from the internet, such as https://postman.com/_aether-assets/illustrations/dark/illustration-hit-send.svg.

All of this was just during a very short session. Using Postman for longer generates even more telemetry data.

I couldn't reproduce reports that Postman logs all endpoints including query parameters; maybe this has been fixed or it only happens when using the full Postman version.

Regarding privacy, Postman has also been involved in some controversies in the past. One of the most recent questionable changes to Postman was implemented in 2023, when the Postman Scratchpad was removed. If users were not careful during this period and migrated their local Scratchpad data to the Postman cloud, they lost access to it with the following Postman updates. The Scratchpad was eventually replaced by the lightweight API client with reduced functionality. Most users needed to create an account and upload their data to continue using Postman as they were accustomed to.

All in all, Postman scores pretty badly in regard to privacy and data ownership. It is still the most popular application in this space since it has been around for longer and probably has the most features.

Kreya

New API clients appear as quickly as others disappear. With four years under its belt, Kreya is already an established tool. Created with a strong focus on privacy, it should score better than most alternatives.

One interesting approach that has gained popularity (again) in recent years is storing data locally. In contrast to most competitors, Kreya does not store its data as a proprietary blob or on some external server. Instead, Kreya project data is stored in JSON files in a location of the user's choosing. The data format is even optimized for syncing via git: the JSON is formatted and stringified JSON fields are avoided, which helps with reviewing changes and avoids unnecessary merge conflicts. But syncing the data via git is not the only option. Users may use any software they wish to transfer the data to other users and collaborate on Kreya projects.

This approach leaves data ownership and complete control with the users. They can also work completely offline, since all data is stored on their computer.

Since Kreya does not sync anything to external servers, there is no need for an account. All free features are usable without one.

An account (basically just an email address) is only required to validate the license of a paid subscription and has no other impact. Enterprise customers may even request an "offline license key", which requires neither an account nor any communication with the Kreya license server.

As for telemetry, let's open Kreya (version 1.17.0) and perform some actions. After closing the app, three requests were sent to external servers:

  • One request to https://kreya.app/user-messages/user-messages.json, which is used to display (urgent) messages to users.
  • One request to https://stable-downloads.kreya.app/appcast.json to check for new versions.
  • One request to https://api.mixpanel.com for anonymous telemetry with the following content:
    View JSON data
    [
    {
    "event": "AppStarted",
    "properties": {
    "$os": "win-x64",
    "token": "{redacted}",
    "distinct_id": "{redacted uuid}",
    "version": "1.17.0",
    "launchInfo": "NoUpdate",
    "arch": "X64",
    "osArch": "X64",
    "subscriptionPlan": "Free",
    "authenticated": false,
    "appKind": "Ui",
    "packageManager": "None",
    "time": 1749805404
    }
    },
    {
    "event": "OperationInvoked",
    "properties": {
    "token": "{redacted}",
    "distinct_id": "{redacted uuid}",
    "invokerName": "rest",
    "sendMode": "all",
    "operationType": "unary",
    "hasScript": false,
    "httpMethod": "GET",
    "time": 1749805421
    }
    },
    {
    "event": "AppClosed",
    "properties": {
    "token": "{redacted}",
    "distinct_id": "{redacted uuid}",
    "openDurationSeconds": 22,
    "time": 1749805427
    }
    }
    ]

Telemetry and update checks can be disabled via configuration. Disabling those two features cuts the requests down to a single call to https://kreya.app/user-messages/user-messages.json. Kreya is also the only one of these tools to explicitly declare what telemetry data it collects.

As we can see, Kreya is much more focused on privacy and data ownership than Postman. It is also the only API client of the four where telemetry can be disabled completely.

Insomnia

Insomnia was created by Gregory Schier as a slick alternative to Postman. It has since been sold to Kong Inc.

While Insomnia was quite focused on privacy in the past, the acquisition by a big company shows. For example, Insomnia 8.0 introduced an account requirement for most of the existing features. If users did create an account, all their data was uploaded to Insomnia's servers. In response, users created the fork Insomnium, which was later abandoned.

Insomnia has three storage options, which give the user some control over their data:

  • Local vault: Stores data locally in a proprietary format
  • Secure cloud: Syncs all data to Insomnia's servers, end-to-end encrypted on paid plans. If the user does not provide a passphrase, Insomnia generates one and stores it on their server.
  • Git sync: Insomnia automatically syncs changes to a git repository. For paid plans only.

As for telemetry, these are the requests Insomnia 11.2.0 performed during a short session:

  • https://api.github.com/repos/Kong/insomnia, maybe to get information about GitHub stars?
  • Two calls to https://updates.insomnia.rest/ with installation information
  • One call to https://github.com/Kong/insomnia/releases/download/[email protected]/RELEASES
  • One call to https://api.segment.io/v1/batch with the following content:
    View JSON data
    {
    "batch": [{
    "timestamp": "2025-06-13T08:47:55.077Z",
    "integrations": {},
    "event": "App Started",
    "type": "track",
    "properties": {
    "localProjects": 0,
    "remoteProjects": 0,
    "createdRequests": 1,
    "deletedRequests": 0,
    "executedRequests": 1
    },
    "context": {
    "app": {
    "name": "Insomnia",
    "version": "11.2.0"
    },
    "os": {
    "name": "windows",
    "version": "10.0.26100"
    },
    "library": {
    "name": "@segment/analytics-node",
    "version": "2.2.1"
    }
    },
    "userId": "",
    "anonymousId": "{redacted uuid}",
    "messageId": "node-next-1749804475077-a0e58dd8-978c-4181-b720-d1393391fc2d",
    "_metadata": {
    "nodeVersion": "v22.14.0",
    "jsRuntime": "node"
    }
    }, {
    "timestamp": "2025-06-13T08:47:56.396Z",
    "integrations": {},
    "type": "page",
    "properties": {},
    "name": "/organization/org_scratchpad/project/proj_scratchpad/workspace/wrk_scratchpad/debug/request/req_id",
    "context": {
    "app": {
    "name": "Insomnia",
    "version": "11.2.0"
    },
    "os": {
    "name": "windows",
    "version": "10.0.26100"
    },
    "library": {
    "name": "@segment/analytics-node",
    "version": "2.2.1"
    }
    },
    "anonymousId": "{redacted uuid}",
    "userId": "",
    "messageId": "node-next-1749804476396-d8978cb1-81f7-40d1-b933-91fc2d0bcb28",
    "_metadata": {
    "nodeVersion": "v22.14.0",
    "jsRuntime": "node"
    }
    }],
    "writeKey": "4l7QUfACrIcqvC913hiIwAA2BDYP2OJ1",
    "sentAt": "2025-06-13T08:48:05.089Z"
    }
  • One call to https://redacted-subdomain.ingest.sentry.io/api/ with
    View JSON data
    {
    "sent_at": "2025-06-13T08:49:37.343Z",
    "sdk": {
    "name": "sentry.javascript.electron",
    "version": "6.5.0"
    }
    }
    {
    "type": "session"
    }
    {
    "sid": "172e49db6845452b84f79c0d081b82eb",
    "init": true,
    "started": "2025-06-13T08:47:53.975Z",
    "timestamp": "2025-06-13T08:49:37.343Z",
    "status": "exited",
    "errors": 0,
    "duration": 103.36775422096252,
    "attrs": {
    "release": "11.2.0",
    "environment": "production",
    "user_agent": "Node.js/22"
    }
    }

Interestingly, Insomnia tracks every click in the UI with Sentry (as seen in the DevTools) but does not send this information to its servers. Maybe this data is sampled and only a subset is sent, or maybe it is only sent when an error occurs.

What's even more interesting is that users without an account are able to disable telemetry, but users logged into Insomnia do not have this option! The telemetry for logged-in users is also not really anonymous like in the other tools, as the hashed (SHA-256) user id is sent as part of the message.

Bruno

The first version of Bruno was released at around the same time as Kreya, with a similar idea of local data storage. The main difference is that Bruno stores its data in the custom Bru markup language, whereas Kreya stores it as JSON.

There are a lot of other similarities between Bruno and Kreya. Neither requires an account for the free version, and both need only an email address to verify the license for paid plans.

As for telemetry and other requests, opening Bruno 2.5.0 yielded the following requests:

  • Two requests to https://github.com/usebruno/bruno, maybe to get star and release information?
  • One request to https://objects.githubusercontent.com
  • One telemetry request to https://us.i.posthog.com/batch/:
    View JSON data
    {
    "api_key": "{redacted}",
    "batch": [{
    "distinct_id": "RVIvKzCa5iF0kThDLRAh8",
    "event": "start",
    "properties": {
    "os": "Windows",
    "version": "2.5.0",
    "$lib": "posthog-node",
    "$lib_version": "4.2.1",
    "$geoip_disable": true
    },
    "type": "capture",
    "library": "posthog-node",
    "library_version": "4.2.1",
    "timestamp": "2025-06-13T09:09:18.213Z",
    "uuid": "0197688b-f706-7f98-2b0b-ae2c1068eab1"
    }],
    "sent_at": "2025-06-13T09:09:28.215Z"
    }

The telemetry data is pretty slim, but cannot be disabled.

All in all, there is not much to complain about regarding Bruno's privacy approach, except that telemetry cannot be disabled.

Conclusion

When it comes to privacy and data control, there are clear differences among these popular API clients.

Tool     | Data storage     | Account requirement | Telemetry                          | Key takeaway
Postman  | Cloud-first      | Heavily encouraged  | Cannot be disabled                 | Heavy cloud integration with little data control.
Kreya    | Local-first      | No                  | Can be disabled                    | Excellent privacy, full data control.
Insomnia | Cloud-encouraged | Heavily encouraged  | Cannot be disabled while logged in | Lost its way after acquisition, forcing cloud-centric features.
Bruno    | Local-first      | No                  | Cannot be disabled                 | Strong privacy, full data control.

Which API client should you choose? As one of the creators of Kreya, my choice is clear :)
But decide for yourself. Apart from privacy and data ownership, there may also be features that one API client handles better than the others.

Do you have questions or feedback? Do not hesitate to reach out to us at [email protected]!

]]>
<![CDATA[Demystifying the protobuf wire format - Part 2]]> https://kreya.app/blog/protocolbuffers-wire-format-part-2/ https://kreya.app/blog/protocolbuffers-wire-format-part-2/ Thu, 15 May 2025 00:00:00 GMT In our previous post, we explored the basics of the protocol buffers (protobuf) wire format. Now, let's take a closer look at some advanced features: packed repeated fields, maps and negative numbers.

Repeated fields

Repeated fields allow you to store multiple values of the same type in a single field. For example:

message FruitBasket {
  repeated string fruits = 1;
}

By default, each value in a repeated field is encoded as a separate tag-value pair in the wire format. For example, encoding fruits: ["Apple", "Banana"] results in two tag-value pairs, each with the same field number but different values:

0a 05 41 70 70 6c 65 0a 06 42 61 6e 61 6e 61
  │  │              │  │  └─ UTF-8 encoded string payload (Banana)
  │  │              │  └──── Length of the string (6)
  │  │              └─────── Tag of the field "fruits" (field number 1, wire type 2)
  │  │
  │  └────────────────────── UTF-8 encoded string payload (Apple)
  └───────────────────────── Length of the string (5)
└──────────────────────────── Tag of the field "fruits" (field number 1, wire type 2)
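For a concrete picture, here is a tiny JavaScript sketch (a minimal illustration, not part of protoc or Kreya, and assuming tags and lengths that fit into a single byte) that produces the same bytes:

function encodeStringField(fieldNumber, text) {
  const tag = (fieldNumber << 3) | 2;          // wire type 2, length-delimited
  const payload = [...new TextEncoder().encode(text)];
  return [tag, payload.length, ...payload];    // assumes tag and length fit into a single byte
}

// An unpacked repeated field is simply one tag-value pair per element,
// all with the same field number.
const bytes = ['Apple', 'Banana'].flatMap(fruit => encodeStringField(1, fruit));
console.log(bytes.map(b => b.toString(16).padStart(2, '0')).join(' '));
// 0a 05 41 70 70 6c 65 0a 06 42 61 6e 61 6e 61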

This is the foundation for understanding how packed repeated fields and maps are encoded, as both build on the repeated field concept.

Packed repeated fields

By default, repeated fields are encoded as multiple tag-value pairs. However, for numeric types, you can use the packed option to store all values in a single length-delimited field:

message FruitCounts {
  repeated int32 values = 1 [packed = true];
}

A packed repeated field is encoded as:

  • The tag (field number + wire type 2 for length-delimited)
  • A varint indicating the total byte length of the packed data
  • The concatenated varint-encoded values

For example, encoding [3, 270, 86942] as packed results in:

0a 06 03 8e 02 9e a7 05
  │  │  │     └──── The varint encoded value of 86942 → 0x9e 0xa7 0x05
  │  │  └────────── The varint encoded value of 270 → 0x8e 0x02
  │  └───────────── The varint encoded value of 3 → 0x03
  │
  └──────────────── Byte length of the packed data
└─────────────────── Tag of the field "values" (field number 1, wire type 2 length delimited)

If the field were not packed, the same values would look like this:

08 03 08 8e 02 08 9e a7 05 
  │  │  │     │  └──── The varint encoded value of 86942 → 0x9e 0xa7 0x05
  │  │  │     └─────── Tag of the field "values" (field number 1 and wire type 0 varint)
  │  │  │
  │  │  └───────────── The varint encoded value of 270 → 0x8e 0x02
  │  └──────────────── Tag of the field "values" (field number 1 and wire type 0 varint)
  │
  └─────────────────── The varint encoded value of 3 → 0x03
└────────────────────── Tag of the field "values" (field number 1 and wire type 0 varint)
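For reference, here is a small JavaScript sketch (a minimal illustration, not part of protoc or Kreya, and assuming non-negative values that fit into 32 bits) that reproduces the packed bytes shown above:

function encodeVarint(value) {
  const bytes = [];
  do {
    let byte = value & 0x7f;        // take the lowest 7 bits
    value >>>= 7;                   // drop them
    if (value !== 0) byte |= 0x80;  // set the continuation bit if more bytes follow
    bytes.push(byte);
  } while (value !== 0);
  return bytes;
}

// A packed repeated varint field: tag, total payload length, then the concatenated values.
function encodePackedVarints(fieldNumber, values) {
  const payload = values.flatMap(encodeVarint);
  const tag = (fieldNumber << 3) | 2; // wire type 2, length-delimited
  return [...encodeVarint(tag), ...encodeVarint(payload.length), ...payload];
}

const hex = encodePackedVarints(1, [3, 270, 86942])
  .map(b => b.toString(16).padStart(2, '0'))
  .join(' ');
console.log(hex); // 0a 06 03 8e 02 9e a7 05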

Maps

Maps in protobuf are syntactic sugar for repeated key-value message pairs. For example:

message FruitBasket {
  map<string, int32> fruit_counts = 1;
}

This is internally represented as:

message FruitBasket {
  repeated FruitCountsEntry fruit_counts = 1;
}

message FruitCountsEntry {
  string key = 1;
  int32 value = 2;
}

Each map entry is encoded as a length-delimited embedded message. For example, encoding { "Apple": 3, "Banana": 5 } results in two length-delimited fields, each containing the encoded key and value.

0a 09 0a 05 41 70 70 6C 65 10 03 0a 0a 0a 06 42 61 6E 61 6E 61 10 05
  │  │                          │  │  └ The second map entry Banana: 5
  │  │                          │  └─── The length of the second map entry: 10 bytes
  │  │                          └────── Tag of the field "fruit_counts" (field number 1, wire type 2 length delimited)
  │  │
  │  └───────────────────────────────── The first map entry Apple: 3
  └──────────────────────────────────── The length of the first map entry: 9 bytes
└─────────────────────────────────────── Tag of the field "fruit_counts" (field number 1, wire type 2 length delimited)

Negative numbers: ZigZag encoding

Protobuf uses ZigZag encoding for signed integers (sint32, sint64) to efficiently encode negative numbers. Regular int32 and int64 use standard varint encoding, which is inefficient for negative values. ZigZag encoding maps signed integers to unsigned integers so that numbers with a small absolute value (including negative ones) produce a small varint. The formula for ZigZag encoding a 32-bit value is (sint64 uses n >> 63 instead):

(n << 1) ^ (n >> 31)

For example:

 0 → 0
-1 → 1
1 → 2
-2 → 3

This makes negative numbers compact in the wire format.
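As a quick illustration, here is the 32-bit ZigZag mapping in plain JavaScript (a minimal sketch; JavaScript's >> operator is an arithmetic shift, which matches the sign-propagating shift the formula relies on):

// ZigZag encoding for 32-bit signed integers (sint32).
function zigzagEncode32(n) {
  return ((n << 1) ^ (n >> 31)) >>> 0; // >>> 0 interprets the result as unsigned
}

// And the inverse mapping used by decoders.
function zigzagDecode32(z) {
  return (z >>> 1) ^ -(z & 1);
}

console.log(zigzagEncode32(0));  // 0
console.log(zigzagEncode32(-1)); // 1
console.log(zigzagEncode32(1));  // 2
console.log(zigzagEncode32(-2)); // 3
console.log(zigzagDecode32(3));  // -2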

Encoding temperature = -2 with

message Weather {
  sint32 temperature = 1;
}

results in

ZigZag(-2) = (-2 << 1) ^ (-2 >> 31)
= 0b11111100 ^ 0b11111111
= 0b00000011
= 3
08 03
  └─ ZigZag-encoded value of -2 is 3, encoded as the varint 03
└──── Tag of the field "temperature" (field number 1, wire type 0 for varint)

If the field were an int32 instead of a sint32, the encoding would look very different. When using int32, negative numbers are encoded using standard varint encoding, which is optimized for small positive numbers. Negative values are represented in two's complement form and, because official protobuf implementations sign-extend them to 64 bits, every negative int32 ends up as a 10-byte varint. This is much less efficient than ZigZag encoding for negative numbers. To keep the bit patterns readable, the example below is simplified to 32 bits.

00000010 # 2 in binary, displayed as 8 bits
11111101 # one's complement
11111110 # add 1 → -2 in two's complement

Varint encoding of the 32-bit value 11111111 11111111 11111111 11111110:

         11111111 11111111 11111111 11111110 # original value -2 in two's complement
1111 1111111 1111111 1111111 1111110 # split into 7-bit chunks
1111110 1111111 1111111 1111111 1111 # change to little endian
11111110 11111111 11111111 11111111 00001111 # add continuation bits

As you can see, the number -2 as int32 in varint is 11111110 11111111 11111111 11111111 00001111 in binary or FE FF FF FF 0F in hexadecimal.

In the message:

08 FE FF FF FF 0F
  └─ The varint encoded value of -2 as int32 (two's complement of 2)
└──── Tag of the field "temperature" (field number 1, wire type 0 for varint)

In summary, using sint32 (with ZigZag encoding) is much more space-efficient for negative numbers than using int32, which is why protobuf recommends sint32 for fields that may contain negative values.
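To see the size difference for yourself, the following JavaScript sketch (an illustration only, assuming the 64-bit sign extension that official protobuf implementations apply to negative int32 values) compares the varint length of -2 encoded as int32 versus sint32:

// Varint encoder over 64-bit values, using BigInt to handle the sign extension.
function encodeVarint64(value) {
  const bytes = [];
  let v = BigInt.asUintN(64, value); // interpret the value as unsigned 64-bit
  do {
    let byte = Number(v & 0x7fn);    // take the lowest 7 bits
    v >>= 7n;                        // drop them
    if (v !== 0n) byte |= 0x80;      // continuation bit if more bytes follow
    bytes.push(byte);
  } while (v !== 0n);
  return bytes;
}

const asInt32 = encodeVarint64(-2n); // two's complement, sign-extended to 64 bits
const asSint32 = encodeVarint64(3n); // ZigZag(-2) = 3
console.log(asInt32.length);  // 10 bytes
console.log(asSint32.length); // 1 byte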

Closing

Understanding these advanced protobuf wire format features helps you debug and optimize your data interchange. For more details, see the official protobuf encoding guide.

Curious how these messages travel between services? Read our gRPC deep dive next.

]]>
<![CDATA[Import HAR Files in Kreya: Debug APIs & gRPC-Web]]> https://kreya.app/blog/har-import/ https://kreya.app/blog/har-import/ Wed, 14 May 2025 00:00:00 GMT Kreya has introduced a powerful new feature: HAR file import! This update makes analysing, replaying and debugging HTTP and gRPC-Web requests directly in Kreya easier than ever. In this post, we’ll show you how to export HAR files from browsers and import them into Kreya. We will also explain why this is a secure and essential workflow for developers, particularly those working with gRPC-Web APIs.

What is a HAR file?

A HAR (HTTP Archive) file is a standardised format for capturing all the network requests and responses made by your browser. Because it includes headers, payloads, cookies and more, it is invaluable for debugging APIs and web applications.

How to export a HAR File from browsers

Note: HAR files may contain sensitive information, such as authentication tokens, cookies and request/response bodies. Always handle them securely. Kreya helps with this by storing all imported HAR data locally on your machine, so no data ever leaves your device unintentionally and you can safely analyse even sensitive HAR files.

Exporting a HAR file in Chrome

  1. Open Chrome DevTools (right-click → Inspect, or press F12).
  2. Go to the Network tab.
  3. (optional) Enable Preserve log and Disable cache to capture all requests, including those during page reloads or redirects.
  4. (optional) To ensure all sensitive data (including request/response bodies) is included, you can run the following command in the DevTools command menu before exporting:
    4.1. Open the command menu in the DevTools by pressing Ctrl+Shift+P.
    4.2. Run the command Allow to generate HAR with sensitive data.
  5. Reproduce the workflow you want to capture.
  6. Click the downward pointing arrow and save the HAR file to your computer.
    • Alternatively, you can right-click on any network request and select Copy → Copy all as HAR.

Exporting a HAR file in Firefox

  1. Open Firefox Developer Tools (Right-click → Inspect, or press F12).
  2. Go to the Network tab.
  3. Enable Disable cache to capture all requests.
  4. Reproduce the workflow you want to capture.
  5. Right-click anywhere in the list of network requests and select Save All As HAR and save the file to your computer.

Exporting a HAR file in Safari

  1. Enable the Develop menu (Safari → Settings → Advanced → Show features for web developers).
  2. Open the Web Inspector (Develop → Show Web Inspector or F12).
  3. Go to the Network tab.
  4. Enable Disable cache to capture all requests.
  5. Perform the actions you want to capture.
  6. Click Export and save the HAR file to your computer.

How to import a HAR file in Kreya

If you have copied the HAR contents to your clipboard, simply open Kreya and it will automatically detect the HAR content and prompt you to import it. Just click Import. If you saved the HAR to a file, follow these steps to import it:

  1. Open Kreya.
  2. Open the command palette with Ctrl+K.
  3. Select the import action.
  4. Select the HAR import type.
  5. Choose your .har file.
  6. Click Import.
  7. Kreya will automatically parse the file and create requests for each entry, including all headers, payloads, and metadata.
  8. You can now replay, modify, or analyze these requests directly in Kreya.

gRPC-Web: decode and debug with ease

HAR files can also capture gRPC-Web requests, which are commonly used in modern web applications. Kreya’s HAR import feature is especially useful here:

  • If you import your proto definitions into Kreya via an importer, it will decode gRPC-Web payloads automatically.
  • This means you can inspect, replay, and debug gRPC-Web requests just like regular HTTP requests.
  • No more struggling with base64 or binary payloads. Kreya makes them human-readable.
An animation showcasing importing requests with a HAR file.

Privacy & security: your data stays local

Kreya is designed with privacy in mind. All imported HAR data is stored locally on your machine. No data is sent to external servers or leaves your device unintentionally. This is especially important when working with HAR files that may contain sensitive information. Kreya ensures that your data remains secure and private.

Why use Kreya for HAR file analysis?

  • Comprehensive HAR import: Bring all your HTTP and gRPC-Web requests into one place.
  • Local-first privacy: Sensitive data never leaves your machine.
  • Advanced gRPC-Web support: Decode and analyze gRPC-Web payloads with imported protos.
  • Replay and modify requests: Easily debug and test APIs using real captured traffic.

Conclusion

Kreya's HAR import feature streamlines your API debugging workflow. It keeps your data secure and offers unmatched support for gRPC-Web. Experience a new level of productivity and security in API development — try it out today!

]]>
<![CDATA[Kreya 1.17 - What's New]]> https://kreya.app/blog/kreya-1.17-whats-new/ https://kreya.app/blog/kreya-1.17-whats-new/ Wed, 30 Apr 2025 00:00:00 GMT Kreya 1.17 comes with the new Kreya Script, which can control operation invocation with JavaScript. Support has been added for importing HAR files, the JWT auth provider, exporting operations as cURL and gRPCurl commands and many more features have been implemented.

New Kreya Script Pro / Enterprise

This release introduces a new way of scripting in Kreya. It's now possible to create a script outside the context of an operation. To do this, you can create a script file and call an operation with plain JavaScript.

An animation showcasing creating a Kreya Script and running it.

You are completely free to do anything in this script: call operations multiple times, wait for responses, and so on. Here is an example of what a script might look like.

// Invoke an operation that starts a long running job on the server
await kreya.invokeOperation('start-long-running-job');

let finished = false;
while (!finished) {
  const result = await kreya.invokeOperation('fetch-job-status');
  finished = result.success;

  if (!finished) {
    kreya.sleep(500);
  }
}

// Now invoke an operation that fetches the job result and performs tests on it
await kreya.invokeOperation('fetch-job-result');

See the documentation for more examples and information. This feature can only be used with a Pro or Enterprise plan. If you want to give it a try, there is a 10-day trial period.

Importing HAR files

A disadvantage of gRPC compared to REST is browser support. You cannot just look at a request in the browser's developer tools; you have to decode everything first. The new HAR file import largely fixes this. You can simply export your requests to a HAR file in the browser's developer tools and import it into Kreya. If the proto files for those requests are present in Kreya, you can read each request and response right in Kreya.

An animation showcasing importing requests with a HAR file.

JWT (signed with a static key) auth provider

A new auth provider is available with the new release: JWT (signed with a static key). To create such an auth config, open the authentications tab, create a config of this type and fill in the details.

An animation showcasing creating a JWT auth config.

Exporting operations as cURL and gRPCurl commands

Operations can be exported as cURL and gRPCurl commands. Simply open the context menu of an operation and copy it to the clipboard.

An animation showcasing exporting operations as cURL command.

User variables editor

User variables can now be viewed and edited in Project > User variables. You can set a user variable in scripts with kreya.variables.set("hello", "world");.

An animation showcasing viewing user variables.

Searching for operations, directories and collections

We have also implemented a quick search in the operation list to find operations, directories and collections. Just click on the new search button or press Ctrl+F (Cmd+F on macOS) and enter your search string.

An animation showcasing searching operations, directories and collections.

Support reading and writing files in scripting API Pro / Enterprise

It is now possible to read and write files from your operation script. This can be used for snapshot testing, for example.

import { expect } from 'chai';
import { readFile } from 'fs/promises';

const verified = await readFile('say_hello.verified.txt', 'utf8');

kreya.grpc.onResponse(msg => {
  kreya.test('Verify response', () => expect(msg.content.message).to.eql(verified));
});

See the documentation for more examples and information.

Connecting via unix-socket (e.g. to the Docker daemon)

You can now connect via unix-socket with a normal REST request. Just enter a valid unix-socket address in the endpoint field (e.g. for the Docker daemon unix:///var/run/docker.sock) and send your request.

An animation showcasing connecting via unix-socket.

See the documentation for more examples and information.

Use native browser in auth configs

Using the native browser opens the default browser of your system instead of Kreya's built-in web window. This approach allows you to leverage browser extensions, password managers, passkeys, and other browser-specific features. However, it requires a localhost redirect URI with an available port to complete the authentication flow.

An animation showcasing using native browser in auth configs.

Bug fixes

Many bugs have been fixed. More details can be found on our release notes page.

If you find a bug, please do not hesitate to report it. You can contact us at [email protected] for any further information or feedback.

Stay tuned! 🏄

]]>
<![CDATA[Kreya 1.16 - What's New]]> https://kreya.app/blog/kreya-1.16-whats-new/ https://kreya.app/blog/kreya-1.16-whats-new/ Thu, 16 Jan 2025 00:00:00 GMT Kreya 1.16 comes with support for WebSocket calls and an updated look and feel for the project settings. Faker is added to the scripting API, cookie management is added and many more features are implemented.

WebSocket support

Create a new operation in the operations list, select the type WebSocket and start using WebSockets in Kreya!

An animation showcasing creating a WebSocket operation and sending it.

You can find all the details in our documentation.

Auth as custom header or query param

It is now possible to define a custom header for the auth or send it as a query parameter. This can be configured for each auth configuration under advanced options.

An animation showcasing sending auth as a custom header.

Added faker to scripting API Pro / Enterprise

The Bogus faker is now directly accessible in the scripting tab with kreya.faker. No additional imports are required.

An animation showcasing using faker in the scripting tab.

Updated look and feel of Kreya

Kreya's UI has been reworked. The project settings in particular have been given a fresh new look. They now open as a tab, allowing the user to quickly switch between an operation and the project settings. To open the project settings, go to the application menu and click Project > Environments or Authentications or ....

An animation showcasing opening project settings.

Tip: For even faster access, the project settings can be opened using keyboard shortcuts (e.g. Ctrl++E for environments on Windows or ++E on MacOS).

We've also streamlined the workflow for creating operations: you no longer have to select the type every time you create an operation, just choose the right type of operation from the menu at the start.

An animation showcasing creating operations.

Collection state filter Pro / Enterprise

In the header of a collection it is now possible to filter the operations by their states.

An animation showcasing filtering states of operations in a collection.

File exclusion list for gRPC proto file importers

In gRPC proto file importers, files can be excluded from import. This is particularly useful if you're importing entire folders of protos and don't want to import certain subfolders.

An animation showcasing excluding files in a gRPC proto file importer.

Importer custom headers Pro / Enterprise

To add custom headers to the gRPC server reflection and REST OpenAPI URL importers, open the advanced options in these importers.

An animation showcasing excluding files in a gRPC proto file importer.

Cookie management

Cookies can now be managed in Kreya. A response tab is visible when a cookie is set and all cookies can be managed in a separate tab under Project > Cookies. It is important to note that all cookies are managed separately per environment.

An animation showcasing managing cookies.

AWS Signature v4 authentication

An additional auth type has been added. An AWS Signature v4 auth type can now be created in the auth settings.

An animation showcasing creating an AWS authentication.

Bug fixes

Many bugs have been fixed. More details can be found on our release notes page.

If you find a bug, please do not hesitate to report it. You can contact us at [email protected] for any further information or feedback.

Have a nice day! 👋

]]>
<![CDATA[Looking back on 2024]]> https://kreya.app/blog/looking-back-on-2024/ https://kreya.app/blog/looking-back-on-2024/ Thu, 09 Jan 2025 00:00:00 GMT 4 million invoked operations, over 4,700 monthly active users and 4 releases. These are some key facts about the past year.

At the end of January 2024, we shipped our first release of the year, 1.13.0, with the new collections and the Postman importer. This was followed by a small bugfix release, 1.13.1, two weeks later in February. Release 1.14.0 followed in April, focusing on improving collections, scripting and testing. The last release of 2024, 1.15.0, came out in June with support for protobuf editions.

Monthly active users

We reached over 4,700 monthly active users at the end of 2024, which is an increase of +21.1% compared to the previous year.

Year | Monthly active users
2024 | 4,727 (+21.1%)*
2023 | 3,902 (+35.9%)*
2022 | 2,871 (+89.3%)*
2021 | 1,517

*The percentage numbers refer to the change compared to the previous year.

Invoked operations

In 2024, more than 4 million operations were invoked, an increase of +39.2% compared to 2023.

Year | Invoked operations
2024 | 4.01M (+39.2%)*
2023 | 2.88M (+44.7%)*
2022 | 1.99M

*The percentage numbers refer to the change compared to the previous year.

REST is increasing

Since we started as a gRPC client and only later added REST support, interest in REST is still growing.

Year | Percentage of invoked REST operations
2024 | 6.93%
2023 | 3.36%
2022 | 0.77%

Why telemetry is helpful

Telemetry gives us a lot of useful insights into how our users are using Kreya. For example, we are currently seeing that collections are being used more and more. Additionally, we see which features or areas users find difficult to use, so we have focused on simplifying things in the upcoming release 1.17.

It is important to note that we only collect minimal anonymous telemetry data and this is completely transparent in the documentation. We appreciate every user who does not deactivate telemetry.

As of 2024, 1,856 users had disabled telemetry. So maybe we have a lot more users than we see, since the deactivation event is the last thing we heard from them. 😄

Finally, we would like to take this opportunity to thank all users for using Kreya, giving us feedback and helping us to keep developing an awesome product. As always, do not hesitate to contact us at [email protected] with any requests, feedback or just to say hello.

Good luck in the new year and stay tuned. 👋

]]>
<![CDATA[Kreya 1.15 - What's New]]> https://kreya.app/blog/kreya-1.15-whats-new/ https://kreya.app/blog/kreya-1.15-whats-new/ Wed, 12 Jun 2024 00:00:00 GMT Kreya 1.15 comes with support for protobuf editions. Operation, directory and collection paths can be viewed and copied from the tab's context menu. Comments imported with a proto file are now visible in the declaration tab.

Protobuf editions

Kreya supports the new protobuf editions.

A detailed explanation of protobuf editions can be found in our blogpost.

Show operation, directory and collection paths

The new version displays the operation path when you hover over a tab, and you can select "Copy path" from the tab's context menu to copy it.

An animation showcasing copying an operation path

Show comments in declaration tab

Comments imported with a proto file are now visible in the declaration tab.

Bug fixes

Many bugs have been fixed. More details can be found on our release notes page.

If you find a bug, please do not hesitate to report it. You can contact us at [email protected] for any further information or feedback.

Have a nice day ☀️

]]>
<![CDATA[Demystifying the protobuf wire format]]> https://kreya.app/blog/protocolbuffers-wire-format/ https://kreya.app/blog/protocolbuffers-wire-format/ Fri, 03 May 2024 00:00:00 GMT Protocol buffers transform data into a compact binary stream for storage or transmission. In this blog post, we will use a proto definition of a sample message and serialize it to binary data.

The sample message

For our sample we use the following .proto file:

syntax = "proto3";

message Fruit {
  int32 weight = 1;
  string name = 2;
}

This defines a Fruit message with two fields, weight and name. Each of these fields has a type, a name and a field number.

We will serialize a simple sample message with the following values:

weight: 150
name: 'Apple'

Each of the field value pairs is encoded as a combination of the field number, the wire type and a payload. The binary stream always starts with the tag of the first field. The tag is a varint-encoded value consisting of the field number and the wire type.

The varint

Varint is a method of serializing integers using one or more bytes. Smaller numbers take a smaller number of bytes. This encoding is used for the tag of each field as well as for several types in protobuf (int32, enum, bool, and others). Varint uses a group of seven bits to represent the value of the number and an eighth bit as a continuation bit to indicate whether more bytes are needed. Here are the steps involved in encoding integers into varints:

  1. Grouping: The integer is broken into 7-bit groups from the least significant to the most significant bits.
  2. Continuation Bit: Each 7-bit group is prefixed with a continuation bit. This bit is set to 1 for all byte groups except the last, which is set to 0. This bit tells the decoder whether to expect another byte.
  3. Combination: These groups are then combined in a little-endian format, where the least significant group (the rightmost 7 bits) is stored first.

Let's look at encoding the number 150 as a varint:

         10010110 # decimal 150 in binary
1 0010110 # split into 7-bit groups
0010110 1 # change to little endian
10010110 00000001 # add continuation bits

As you can see, the number 150 in varint is 10010110 00000001 in binary or 96 01 in hexadecimal.

The main benefit of varint encoding is its space efficiency for small numbers. Numbers smaller than 128 are stored in just one byte. As numbers get larger, additional bytes are used. This is very efficient for data that is frequently small but can occasionally be large (which in practice is the case for many numbers).

For fields that almost always contain large numbers, varint encoding is inefficient due to the additional continuation bits. In this case, fixed-size types such as fixed32 should be preferred.

The wire types

Protobuf knows five different wire types. A wire type describes the encoding format of a payload.

Value | Name   | Proto types
0     | varint | int32, int64, uint32, uint64, sint32, sint64, bool, enum
1     | i64    | fixed64, sfixed64, double
2     | len    | string, bytes, embedded messages, packed repeated fields
3     | SGROUP | group start (deprecated)
4     | EGROUP | group end (deprecated)
5     | i32    | fixed32, sfixed32, float

The tag

The tag is a varint-encoded value consisting of the field number and the wire type. The field number of our first field is 1 and since it is an int32 which gets encoded as varint, the wire type is 0. The low three bits represent the wire type, the other bits represent the field number. This can be expressed as

wire_type | (field_number << 3)

As our field number fits into the four bits available for it in a single-byte tag, we do not need an additional byte for the field number. For the wire type 0 and the field number 1 this results in 08 with the following binary representation:

0000 1000
  │   └─── Wire type (0)
  └─────── Field number (1)
└────────── Varint continuation bit

The value

Immediately after the tag the value of the field gets encoded according to the wire type. For the weight field we want to encode 150 as int32 with a wire type of varint. As the sample in the varint paragraph shows, this results in 96 01.

So far we have the following data:

08 96 01
    └──── Payload of the field "weight", varint encoded
└───────── Tag of the field "weight" (field number and wire type)

Length delimited field

On to the field name. According to our wire type table, a string is a length delimited field and therefore encoded with wire type 2. Together with the field number 2 this results in the tag 12:

0001 0010
  │   └─── Wire type (2)
  └─────── Field number (2)
└────────── Varint continuation bit

The tag of a length delimited field is followed by a varint which specifies the length of the payload. The UTF-8 representation of Apple is 41 70 70 6c 65. These are 5 bytes, therefore the length is 5.

12 05 41 70 70 6c 65
  │        └──────── UTF-8 encoded string payload (Apple)
  └───────────────── Count of UTF-8 bytes of the payload (5)
└──────────────────── Tag of the field "name" (field number and wire type)

The encoded sample message

Concatenating our two encoded fields leads to the following bytes:

08 96 01 12 05 41 70 70 6c 65
        └────────── Length delimited field "name" with field number 2
└─────────────────── Varint encoded field "weight" with field number 1
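If you prefer code over counting bytes by hand, here is a small JavaScript sketch (a minimal illustration, not an official protobuf library, and only handling small non-negative varints) that reproduces exactly these bytes from the field values:

function encodeVarint(value) {
  const bytes = [];
  do {
    let byte = value & 0x7f;        // take the lowest 7 bits
    value >>>= 7;                   // drop them
    if (value !== 0) byte |= 0x80;  // set the continuation bit if more bytes follow
    bytes.push(byte);
  } while (value !== 0);
  return bytes;
}

// The tag is the varint of (field_number << 3) | wire_type.
function encodeTag(fieldNumber, wireType) {
  return encodeVarint((fieldNumber << 3) | wireType);
}

const weight = [...encodeTag(1, 0), ...encodeVarint(150)];                           // varint field
const nameBytes = [...new TextEncoder().encode('Apple')];                            // UTF-8 payload
const name = [...encodeTag(2, 2), ...encodeVarint(nameBytes.length), ...nameBytes];  // length-delimited field

const message = [...weight, ...name];
console.log(message.map(b => b.toString(16).padStart(2, '0')).join(' '));
// 08 96 01 12 05 41 70 70 6c 65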

We can verify our encoding using protoc:

echo '08960112054170706c65' | xxd -r -p | protoc --decode=Fruit ./fruit.proto
weight: 150
name: "Apple"

which exactly results in our sample values as expected 🥳🎉

The command works like this:

  1. Echo our hex encoded bytes
  2. Pass them through xxd to transform the hex into binary
  3. Pass the binary stream to protoc with the decode flag
    • For protoc to access our sample proto we stored it in the working directory in a file named fruit.proto

Even if we do not have access to the proto file, we can extract some information from the encoded protobuf by using the decode_raw flag:

echo '08960112054170706c65' | xxd -r -p | protoc --decode_raw
1: 150
2: "Apple"

This tells us there are two fields, one with the field number one that decodes to the value of 150, and one with the field number of two which decodes to the string "Apple".

Other wire types

  • Bytes and nested messages are encoded exactly the same way as strings with a length delimited encoding.
  • Boolean values are encoded as varints resulting in 01 for true and 00 for false.
  • Enums are also encoded as varints.
  • Repeated fields (as long as they are not packed) end up as multiple tag value pairs in the byte stream with the same tag being present multiple times.
  • Packed repeated fields are encoded as length delimited fields.

Closing

In this blog post we have encoded a sample protobuf message and validated the encoded bytes with protoc. For more details, see the official protobuf encoding guide or continue with Part 2 of this series for a deep dive into maps, negative numbers, and packed repeated fields.

To learn how these encoded messages are transported over the network, check out our gRPC deep dive.

]]>