---
title: Benchmark Suite
tags:
  - benchmarks
  - performance
  - testing
---
ABI Benchmark Suite

Codebase Status: Synced with repository as of 2026-02-04.


Comprehensive performance benchmarks for the ABI framework, measuring throughput, latency, and resource utilization across all major subsystems.

Quick Start

```sh
# Run all benchmark suites
zig build benchmarks

# Run all benchmark suites (including competitive)
zig build bench-all

# Run a specific suite
zig build benchmarks -- --suite=simd

# Quick mode (reduced iterations)
zig build benchmarks -- --quick

# Verbose output
zig build benchmarks -- --verbose
```

Directory Layout

| Path | Purpose |
|------|---------|
| `benchmarks/` | Suite entry points (`main.zig`, `run.zig`, `mod.zig`) |
| `benchmarks/core/` | Shared benchmark config and vector utilities |
| `benchmarks/domain/` | Domain suites (ai, database, gpu) |
| `benchmarks/infrastructure/` | Infrastructure suites (simd, memory, concurrency, crypto, network) |
| `benchmarks/system/` | System/integration suites (framework, CI, baselines, standards) |
| `benchmarks/competitive/` | Competitive comparisons (FAISS, vector DBs, LLMs) |
| `benchmarks/baselines/` | Baseline JSON storage (main/branches/releases) |

Available Suites

| Suite | Purpose | Key Metrics |
|-------|---------|-------------|
| simd | Vector operations | ops/sec, throughput (GB/s) |
| memory | Allocator patterns | allocs/sec, fragmentation % |
| concurrency | Lock-free structures | ops/sec, contention ratio |
| database | WDBX operations | insert/search latency (μs) |
| network | HTTP/JSON parsing | req/sec, parse time (ns) |
| crypto | Hash/encrypt ops | MB/sec, cycles/byte |
| ai | GEMM/attention | GFLOPS, memory bandwidth |
| gpu | GPU kernels | kernel time (ns), throughput |
| quick | Fast verification | CI-friendly subset |

Suite Details

SIMD Suite (infrastructure/simd.zig)

Tests vectorized operations using SIMD intrinsics:

  • Dot product (single/batch)
  • Matrix multiplication
  • L2 norm computation
  • Cosine similarity
  • Distance calculations (Euclidean, Manhattan)

```sh
zig build benchmarks -- --suite=simd
```
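For reference, the dot-product, L2-norm, and cosine-similarity kernels these benchmarks exercise reduce to the following scalar math. This is a plain Python sketch of the underlying formulas, not the SIMD implementation being timed:

```python
import math

def dot(a, b):
    # Scalar reference for the vectorized dot-product kernel.
    return sum(x * y for x, y in zip(a, b))

def l2_norm(a):
    return math.sqrt(dot(a, a))

def cosine_similarity(a, b):
    # dot(a, b) / (|a| * |b|); the SIMD suite times the vectorized form.
    return dot(a, b) / (l2_norm(a) * l2_norm(b))

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # identical vectors -> 1.0
```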

Memory Suite (infrastructure/memory.zig)

Measures allocator performance:

  • General purpose allocator throughput
  • Arena allocator patterns
  • Pool allocator efficiency
  • Fragmentation under stress
  • Memory pressure handling

```sh
zig build benchmarks -- --suite=memory
```

Concurrency Suite (infrastructure/concurrency.zig)

Tests lock-free data structures:

  • Lock-free queue throughput
  • Work-stealing deque performance
  • Atomic counter operations
  • MPMC queue contention
  • Thread pool scaling

```sh
zig build benchmarks -- --suite=concurrency
```

Database Suite (domain/database/)

WDBX vector database benchmarks:

  • Vector insertion (single/batch)
  • Linear search performance
  • HNSW approximate search
  • Concurrent search operations
  • Cache-aligned memory access
  • Memory prefetching effectiveness

```sh
zig build benchmarks -- --suite=database
```

Network Suite (infrastructure/network.zig)

Network protocol benchmarks:

  • HTTP header parsing
  • JSON encoding/decoding
  • WebSocket frame processing
  • Request routing overhead

```sh
zig build benchmarks -- --suite=network
```

Crypto Suite (infrastructure/crypto.zig)

Cryptographic operation benchmarks:

  • SHA-256/SHA-512 hashing
  • AES-256 encryption
  • HMAC computation
  • Key derivation (PBKDF2, Argon2)
  • Random number generation

```sh
zig build benchmarks -- --suite=crypto
```

AI Suite (domain/ai/)

Machine learning operation benchmarks:

  • GEMM (General Matrix Multiply)
  • Attention mechanism
  • Activation functions (ReLU, GELU, SiLU)
  • Softmax computation
  • Layer normalization

```sh
zig build benchmarks -- --suite=ai
```
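As a reference for what the softmax and activation benchmarks compute, here is the scalar math in Python (the tanh form of GELU shown is the common transformer approximation; the suite's exact variant may differ):

```python
import math

def softmax(xs):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def gelu(x):
    # tanh approximation of GELU, widely used in attention/MLP kernels.
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))

print(sum(softmax([1.0, 2.0, 3.0])))  # probabilities sum to 1.0
```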

GPU Suite (domain/gpu/)

GPU kernel benchmarks:

  • Matmul, vector ops, reductions
  • Backend comparisons
  • GPU vs CPU comparisons

```sh
zig build benchmarks -- --suite=gpu
```

Competitive Benchmarks

Compare ABI performance against industry-standard implementations:

```sh
# Run competitive benchmarks
zig build bench-competitive

# With custom dataset size
zig build bench-competitive -- --vectors=100000 --dims=768
```

Available Comparisons

| Comparison | Target | Metrics |
|------------|--------|---------|
| FAISS | Vector similarity search | QPS, recall@k |
| Vector DBs | Milvus, Pinecone | Insert/search latency |
| LLM Inference | llama.cpp | Tokens/sec, memory usage |

Results are output as JSON for easy integration with CI/CD pipelines.
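A CI gate over that JSON can be as simple as parsing the report and failing when a benchmark drops below a floor. The report shape used here (a `results` array with `name` and `ops_per_sec` fields) is a hypothetical example; adjust the keys to match the actual output:

```python
import json

# Hypothetical report shape; adjust keys to the suite's actual JSON schema.
report = json.loads("""
{"results": [
  {"name": "simd_dot_product", "ops_per_sec": 1.2e9},
  {"name": "db_insert", "ops_per_sec": 4.5e5}
]}
""")

# Fail the pipeline if any benchmark falls below its per-benchmark floor.
floors = {"simd_dot_product": 1.0e9, "db_insert": 4.0e5}
failures = [r["name"] for r in report["results"]
            if r["ops_per_sec"] < floors.get(r["name"], 0.0)]
assert not failures, f"regressions: {failures}"
print("all benchmarks above their floors")
```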


Running Benchmarks

Command Line Options

```text
zig build benchmarks -- [OPTIONS]

OPTIONS:
  --suite=<name>    Run a specific suite (simd, memory, concurrency, database, network, crypto, ai, gpu)
  --quick           Run with reduced iterations
  --verbose         Show detailed output
  --json            Output results as JSON to stdout
  --output=<file>   Write JSON report to a file
```

Examples

```sh
# All suites with verbose output
zig build benchmarks -- --verbose

# Database benchmarks only
zig build benchmarks -- --suite=database

# Quick verification run
zig build benchmarks -- --quick

# JSON output for CI integration
zig build benchmarks -- --output=benchmark_results.json
```

Understanding Results

Throughput Metrics

  • ops/sec: Operations per second (higher is better)
  • MB/sec or GB/sec: Data throughput (higher is better)
  • GFLOPS: Billion floating-point operations per second
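GFLOPS for a GEMM benchmark follows directly from the operation count: an M×N×K matrix multiply performs 2·M·N·K floating-point operations (one multiply and one add per inner-product term), so:

```python
def gemm_gflops(m, n, k, seconds):
    # 2*M*N*K flops, divided by elapsed time, scaled to billions.
    return (2.0 * m * n * k) / seconds / 1e9

# A 1024x1024x1024 GEMM finishing in 0.1 s sustains ~21.5 GFLOPS.
print(round(gemm_gflops(1024, 1024, 1024, 0.1), 1))  # -> 21.5
```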

Latency Metrics

  • μs (microseconds): 1/1,000,000 second
  • ns (nanoseconds): 1/1,000,000,000 second
  • p50/p99: Percentile latencies
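Percentile latencies are read off the sorted sample of per-iteration timings; a minimal nearest-rank sketch (the suite's interpolation method may differ):

```python
def percentile(samples, p):
    # Nearest-rank percentile over the sorted sample, p in [0, 100].
    ordered = sorted(samples)
    rank = max(0, min(len(ordered) - 1, int(round(p / 100.0 * (len(ordered) - 1)))))
    return ordered[rank]

# One slow outlier dominates p99 but barely moves p50.
latencies_us = [12, 11, 13, 11, 12, 95, 12, 11, 13, 12]
print(percentile(latencies_us, 50), percentile(latencies_us, 99))
```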

Memory Metrics

  • RSS: Resident Set Size (actual memory usage)
  • fragmentation %: Wasted memory due to allocation patterns
  • allocs/sec: Allocation rate
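One common way to define the fragmentation percentage is the share of resident memory that is not backing live allocations; the suite's exact formula may differ:

```python
def fragmentation_pct(requested_bytes, resident_bytes):
    # Share of resident memory not backing live allocations.
    # Hypothetical definition for illustration only.
    if resident_bytes == 0:
        return 0.0
    return 100.0 * (resident_bytes - requested_bytes) / resident_bytes

print(fragmentation_pct(768, 1024))  # -> 25.0
```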

Performance Baselines

Baseline reports are stored under benchmarks/baselines/ (see benchmarks/baselines/README.md). After significant changes, generate a fresh JSON report and store it under the appropriate branch or release directory:

```sh
# Generate a new baseline report
zig build benchmarks -- --output=benchmarks/baselines/branches/my_branch.json
```
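A regression check against a stored baseline then reduces to comparing the two JSON reports. The `results`/`ops_per_sec` shape below is a hypothetical schema for illustration; the real files live under `benchmarks/baselines/`:

```python
# Hypothetical report shape; match the keys to the suite's actual JSON.
baseline = {"results": [{"name": "db_search", "ops_per_sec": 5.0e5}]}
current  = {"results": [{"name": "db_search", "ops_per_sec": 4.4e5}]}

def regressions(baseline, current, tolerance=0.10):
    # Flag benchmarks that dropped more than `tolerance` below baseline.
    base = {r["name"]: r["ops_per_sec"] for r in baseline["results"]}
    return [r["name"] for r in current["results"]
            if r["name"] in base
            and r["ops_per_sec"] < base[r["name"]] * (1.0 - tolerance)]

print(regressions(baseline, current))  # db_search dropped 12% -> flagged
```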

Adding New Benchmarks

New benchmarks should follow this pattern:

```zig
const std = @import("std");
const BenchmarkSuite = @import("mod.zig").BenchmarkSuite;

pub fn run(allocator: std.mem.Allocator) !void {
    var suite = BenchmarkSuite.init(allocator, "My Suite");
    defer suite.deinit();

    // Register the operation under a descriptive name.
    suite.benchmark("operation_name", struct {
        fn bench() void {
            // Operation to benchmark
        }
    }.bench, .{});

    suite.report();
}
```

Troubleshooting

Inconsistent Results

  • Disable CPU frequency scaling: sudo cpupower frequency-set -g performance
  • Close background applications
  • Run multiple iterations and average
  • Use --quick for initial verification

High Variance

  • Increase iteration count with --iterations=N
  • Check for thermal throttling
  • Ensure consistent memory pressure

Build Failures

```sh
# Ensure all dependencies are available
zig build benchmarks -Denable-database=true -Denable-gpu=true
```

See Also