A high-performance load testing engine designed for standalone (CLI) or Kubernetes cluster-internal use.
Supports load generation through a pluggable architecture with HTTP, gRPC, and Kafka engines.
Configuration is loaded from a YAML file. The file location is resolved in priority order:

1. `--config-file` CLI flag
2. `CONFIG_FILE` environment variable
3. Default path: `/etc/taiko/config.yaml`
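The resolution order above can be sketched as follows. This is an illustrative Python sketch of the documented priority, not taiko's actual implementation; the flag-parsing details (`--config-file path` and `--config-file=path` forms) are assumptions.

```python
import os

DEFAULT_CONFIG = "/etc/taiko/config.yaml"

def resolve_config_path(argv):
    """Resolve the config file path in the documented priority order."""
    # 1. --config-file CLI flag (highest priority)
    for i, arg in enumerate(argv):
        if arg == "--config-file" and i + 1 < len(argv):
            return argv[i + 1]
        if arg.startswith("--config-file="):
            return arg.split("=", 1)[1]
    # 2. CONFIG_FILE environment variable
    env_path = os.environ.get("CONFIG_FILE")
    if env_path:
        return env_path
    # 3. Fall back to the default path
    return DEFAULT_CONFIG
```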
| Parameter | Type | Description |
|---|---|---|
| `engine` | string | Engine type: `http`, `grpc`, or `kafka`. |
| `targets` | list | One or more target definitions (fields depend on engine type). |
| `load.duration` | duration | Total duration of the load test (e.g. `30s`, `5m`, `1h`). |
| `metrics.type` | string | Metrics output destination: `console`, `prometheus`, or `s3`. |
| `variables` | list | Dynamic variables for template substitution in payloads. |
The HTTP engine sends requests to one or more HTTP endpoints. Multiple targets are supported simultaneously, each with its own URL, method, headers, body, and target RPS. A worker pool is automatically scaled up or down every 2 seconds to match the configured RPS across all targets, with weighted target selection proportional to each target's RPS share.
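Weighted target selection proportional to RPS share can be sketched as follows. This is a minimal Python illustration of the selection rule described above, assuming targets are plain dicts with an `rps` field; it is not taiko's internal code.

```python
import random

def pick_target(targets):
    """Pick a target with probability proportional to its share
    of the total configured RPS across all targets."""
    total = sum(t["rps"] for t in targets)
    point = random.uniform(0, total)
    cumulative = 0
    for t in targets:
        cumulative += t["rps"]
        if point <= cumulative:
            return t
    return targets[-1]  # guard against floating-point edge cases
```

A target configured with `rps: 300` next to one with `rps: 100` would receive roughly three quarters of the generated requests.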
| Field | Type | Required | Description |
|---|---|---|---|
| `url` | string | yes | Target URL. Supports `{{variable}}` template substitution. |
| `method` | string | no | HTTP method (e.g. `GET`, `POST`). Defaults to `GET`. |
| `timeout` | duration | no | Per-request timeout (e.g. `5s`). |
| `body` | string | no | Request body. Supports `{{variable}}` template substitution. |
| `headers` | map | no | HTTP headers as key-value pairs. |
| `rps` | int | yes | Target requests per second for this endpoint. |
```yaml
engine: http
targets:
  - url: http://localhost:8081/api/endpoint?id={{id}}
    method: POST
    timeout: 5s
    body: '{}'
    headers:
      Content-Type: application/json
    rps: 500
load:
  duration: 1m
metrics:
  type: console
variables:
  - name: id
    type: uuid
```

The gRPC engine calls unary RPC methods on one or more gRPC services. The service schema is resolved either from `.proto` source files (compiled at startup) or via gRPC server reflection when no proto files are provided. Multiple targets are supported, each pointing to a different endpoint or method, with workers automatically scaled to hit the configured RPS.
| Field | Type | Required | Description |
|---|---|---|---|
| `endpoint` | string | yes | gRPC server address (e.g. `localhost:9090`). |
| `service` | string | yes | Fully-qualified service name (e.g. `mypackage.MyService`). |
| `method` | string | yes | RPC method name (e.g. `SayHello`). |
| `proto_files` | list | no | Paths to `.proto` files for schema resolution. Falls back to server reflection if omitted. |
| `payload` | string | no | JSON request payload. Supports `{{variable}}` template substitution. |
| `metadata` | map | no | gRPC metadata headers as key-value pairs. Values support `{{variable}}` substitution. |
| `timeout` | duration | no | Per-request timeout (default: `30s`). |
| `rps` | int | yes | Target requests per second. |
```yaml
engine: grpc
targets:
  - endpoint: localhost:9090
    service: taiko.TaikoService
    method: Taiko
    proto_files:
      - examples/taiko.proto
    payload: '{"message": "{{user_id}}"}'
    metadata:
      x-request-id: "{{uuid}}"
    rps: 10
    timeout: 5s
load:
  duration: 60s
variables:
  - name: user_id
    type: uuid
```

The Kafka engine produces messages to one or more Kafka topics. A single shared producer client is used across all targets, with broker addresses pooled from all target definitions. Multiple targets are supported, each with its own topic, key, value template, and RPS. Workers are automatically scaled to match the configured message throughput.
| Field | Type | Required | Description |
|---|---|---|---|
| `brokers` | list | yes | One or more Kafka broker addresses (e.g. `["localhost:9092"]`). |
| `topic` | string | yes | Kafka topic to produce messages to. |
| `key` | string | no | Message key template. Supports `{{variable}}` substitution. |
| `value` | string | no | Message value template. Supports `{{variable}}` substitution. |
| `headers` | map | no | Kafka record headers as key-value pairs. Values support `{{variable}}` substitution. |
| `rps` | int | yes | Target messages per second. |
The Kafka engine optionally supports Avro serialization with Confluent Schema Registry. When configured, key and/or value templates are treated as JSON, encoded to Avro binary using the specified schema, and (when fetched from a registry) prepended with the Confluent wire format header (0x00 + 4-byte schema ID).
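The framing step can be illustrated with a short Python sketch. The 5-byte header layout (magic byte `0x00` plus a big-endian 4-byte schema ID) is the standard Confluent wire format; the function name is illustrative, not part of taiko.

```python
import io
import struct

def confluent_frame(schema_id: int, avro_payload: bytes) -> bytes:
    """Prepend the Confluent wire-format header to an Avro-encoded body."""
    buf = io.BytesIO()
    buf.write(b"\x00")                       # magic byte: always 0x00
    buf.write(struct.pack(">I", schema_id))  # schema ID, 4 bytes big-endian
    buf.write(avro_payload)                  # Avro binary-encoded record
    return buf.getvalue()
```

Consumers using Confluent deserializers read these 5 bytes first to look up the writer schema in the registry, which is why schemas loaded from a local `.avsc` file (no registry ID) are written without the header.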
| Field | Type | Required | Description |
|---|---|---|---|
| `schema_registry` | object | no | Schema Registry connection settings (see below). |
| `key_schema` | object | no | Schema to use for encoding the message key as Avro. |
| `value_schema` | object | no | Schema to use for encoding the message value as Avro. |
`schema_registry` fields:
| Field | Type | Required | Description |
|---|---|---|---|
| `url` | string | yes | Schema Registry URL (e.g. `http://schema-registry:8081`). |
| `username` | string | no | Basic auth username. |
| `password` | string | no | Basic auth password. |
`key_schema` / `value_schema` fields (specify either `subject` or `file`, not both):
| Field | Type | Required | Description |
|---|---|---|---|
| `subject` | string | no | Schema Registry subject name (requires `schema_registry`). |
| `version` | int | no | Schema version to fetch (defaults to latest). |
| `file` | string | no | Path to a local `.avsc` file. No wire format header is added. |
```yaml
engine: kafka
targets:
  - brokers: ["localhost:9092"]
    topic: taiko
    key: "{{user_id}}"
    value: '{"event": "click", "user": "{{user_id}}", "timestamp": "{{timestamp}}"}'
    rps: 5
load:
  duration: 10s
variables:
  - name: user_id
    type: uuid
  - name: timestamp
    type: timestamp
    generator:
      format: rfc3339
```

```yaml
engine: kafka
targets:
  - brokers: ["localhost:9092"]
    topic: user-events
    key: "{{user_id}}"
    value: '{"id": "{{event_id}}", "user_id": {{user_id}}, "action": "click"}'
    rps: 500
    schema_registry:
      url: "http://schema-registry:8081"
    value_schema:
      subject: user-events-value
load:
  duration: 60s
variables:
  - name: user_id
    type: int_range
    generator: { mode: rnd, min: 1, max: 1000000 }
  - name: event_id
    type: uuid
```

Variables let you inject dynamic values into request payloads at runtime. They are defined under the top-level `variables` key and referenced anywhere in a payload using `{{variable_name}}` placeholders. Substitution is applied per-request and is supported across all engines:
- HTTP: `url` and `body` fields
- gRPC: `payload` and `metadata` values
- Kafka: `key`, `value`, and `headers` values
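Per-request substitution amounts to replacing each `{{name}}` placeholder with a freshly generated value. A minimal Python sketch, assuming each variable is represented as a zero-argument generator function (taiko's internal representation may differ):

```python
import re
import uuid

def substitute(template: str, variables: dict) -> str:
    """Replace every {{name}} placeholder in `template` with the
    value produced by the matching generator in `variables`."""
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(variables[m.group(1)]()),
        template,
    )

# Each request re-invokes the generators, so two requests built from
# the same template get different values:
variables = {"user_id": lambda: uuid.uuid4()}
body = substitute('{"message": "{{user_id}}"}', variables)
```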
Each variable has a `name`, a `type`, and an optional `generator` block with type-specific parameters.
| Type | Description | Generator fields |
|---|---|---|
| `uuid` | Random UUID v4 generated per request. | (none) |
| `timestamp` | Current time at request time. | `format`: `unix` (default) or `rfc3339` |
| `string_set` | Selects a value from a fixed list of strings. | `values` (list of strings), `mode`: `seq` or `rnd` |
| `int_range` | Integer within a min/max range (inclusive). | `min` (int), `max` (int), `mode`: `seq` or `rnd` |
| `int_set` | Selects an integer from a fixed list. | `values` (list of ints), `mode`: `seq` or `rnd` |
`mode: seq` cycles through values sequentially (thread-safe); `mode: rnd` picks a random value on each request.
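The two modes can be sketched for a `string_set` variable as follows. The class name and structure here are illustrative, not taiko's code; the point is that `seq` must serialize access to the cursor (concurrent workers share one generator), while `rnd` is stateless per call.

```python
import itertools
import random
import threading

class StringSetGenerator:
    """Illustrative string_set generator with seq and rnd modes."""

    def __init__(self, values, mode="seq"):
        self.values = values
        self.mode = mode
        self._cycle = itertools.cycle(values)  # endless sequential cursor
        self._lock = threading.Lock()

    def next(self):
        if self.mode == "rnd":
            # Stateless: uniform random pick on every request.
            return random.choice(self.values)
        # Thread-safe sequential cycling: each caller gets the next value.
        with self._lock:
            return next(self._cycle)
```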