Fast compilation. Native performance.
Ownership-based memory.
Garbage collection only where it matters.
A next-generation Haxe compiler with a 5-tier JIT, ownership-based memory, and LLVM-powered native code generation.
```haxe
// Rayzor monomorphizes generics to specialized native code
@:generic
class Container<T> {
    var value: T;
    public function new(v: T) { this.value = v; }
    public function get(): T { return this.value; }
}

// Compiles to Container__i32, Container__String
var nums = new Container<Int>(42);
var text = new Container<String>("hello");
```
```haxe
// Lazy async futures — don't execute until awaited
import rayzor.concurrent.Future;

class Main {
    @:async static function compute(x: Int): Int {
        return x * 2;
    }

    static function main() {
        // Basic @:async + await
        var f = compute(21);
        trace(f.await()); // 42

        // Lazy — doesn't execute until await
        var f2 = compute(100);
        trace("not blocked");
        trace(f2.await()); // 200

        // Multiple concurrent
        var a = compute(10);
        var b = compute(20);
        trace(a.await() + b.await()); // 60

        trace("done");
    }
}
```
```haxe
// Ownership-based memory — no GC for static types
@:move
class UniqueBuffer {
    var data: haxe.io.Bytes;
    public function new(size: Int) {
        this.data = haxe.io.Bytes.alloc(size);
    }
}

// Ownership transfer — freed at last use, no GC
var buf = new UniqueBuffer(1024);
process(buf); // buf moves here
// buf is no longer valid — compile-time error if accessed

// Reference-counted when sharing is needed
@:arc
class SharedState {
    var value: Int;
}
```
```haxe
// Safe, fearless concurrency with Send types
@:derive([Send])
class Message {
    public var value: Int;
    public function new(v: Int) { this.value = v; }
}

class Main {
    static function main() {
        var msg = new Message(42);
        var handle = rayzor.concurrent.Thread.spawn(() -> {
            return msg.value;
        });
        var result = handle.join();
    }
}
```
```haxe
// Message passing with channels
import rayzor.concurrent.Thread;
import rayzor.concurrent.Channel;
import rayzor.concurrent.Arc;

static function main() {
    var channel = new Arc(new Channel(10));
    var threadChannel = channel.clone();

    var sender = Thread.spawn(() -> {
        threadChannel.get().send(42);
        threadChannel.get().send(43);
        threadChannel.get().send(44);
        return 3;
    });
    sender.join();

    var v1 = channel.get().tryReceive();
    var v2 = channel.get().tryReceive();
    var v3 = channel.get().tryReceive();
}
```
```haxe
// Rust-like mutex guards for safe shared state
import rayzor.concurrent.Mutex;

@:derive([Send])
class Counter {
    public var value: Int;
    public function new() { this.value = 0; }
}

static function main() {
    var counter = new Mutex(new Counter());
    var guard = counter.lock();
    var c = guard.get();
    trace(42);
}
```
```haxe
// Atomic shared data with Arc and Send+Sync
import rayzor.concurrent.Thread;
import rayzor.concurrent.Arc;

@:derive([Send, Sync])
class SharedData {
    public var value: Int;
    public function new(v: Int) { this.value = v; }
}

static function main() {
    var shared = new Arc(new SharedData(42));
    var shared_clone = shared.clone();
    var handle = Thread.spawn(() -> {
        return shared_clone.get().value;
    });
    var result = handle.join();
}
```
```haxe
// Zero-cost SIMD — maps to native SSE/NEON registers
import rayzor.SIMD4f;

class Main {
    static function main() {
        var a = SIMD4f.splat(1.0);
        var b = SIMD4f.splat(2.0);
        var c = a + b;
        trace(c);
    }
}
```
```haxe
// Embed C code directly — call native libraries
@:cInclude(["/opt/homebrew/include"])
class Main {
    static function main() {
        var result = untyped __c__('
            #include <raymath.h>

            long __entry__() {
                Vector2 v = { 3.0f, 4.0f };
                float len = Vector2Length(v);
                float c = Clamp(15.0f, 0.0f, 10.0f);
                float l = Lerp(0.0f, 100.0f, 0.25f);
                return (long)(len * 10000.0f + c * 100.0f + l);
            }
        ');
        trace(result); // 51025
    }
}
```
Why Rayzor?
5-Tier JIT Compilation
Adaptive optimization from the MIR interpreter through Cranelift to LLVM. Hot paths automatically tier up for maximum performance.
Interpreter → Cranelift → LLVM

Sub-Second Compilation
Cranelift JIT compiles entire programs in under 200ms. With BLADE cache, rebuilds drop to under 6ms. Instant feedback during development.
< 1ms cached startup

Native Performance
Tiered optimization through Cranelift and LLVM delivers up to 150x interpreter speed. Hot paths automatically tier up for maximum throughput.
Up to 150x interpreter

BLADE Incremental Cache
Per-module binary caching with source-hash validation. Only modules that changed are recompiled; everything else is skipped entirely and loads from a pre-validated binary cache in milliseconds.
~30x faster rebuilds

Faster Macro Expansion
Tiered execution inspired by morsel-parallelism. Cold macros run through a zero-overhead tree-walking interpreter. Hot macros automatically promote to bytecode — batch-compiled with all class dependencies into morsels, then executed by a stack-based VM.
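To make the cold-to-hot path concrete, here is a small reification-based macro in standard Haxe (the `Macros` class and `square` function are illustrative, not part of Rayzor's API; the promotion behavior described above happens inside the compiler, not in this code). Expanded once or twice, it runs on the tree-walking interpreter; expanded across many call sites in a large build, it would be promoted to the bytecode VM:

```haxe
import haxe.macro.Expr;

class Macros {
    // Expands `Macros.square(x)` to `x * x` at compile time.
    // Cold expansions run on the zero-overhead interpreter; frequent
    // expansion promotes this macro to batch-compiled bytecode.
    public static macro function square(e: Expr): Expr {
        return macro $e * $e;
    }
}
```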
Adaptive cold → hot promotion

Ownership-Based Memory
Rust-inspired ownership tracking with compile-time drop analysis. Values are freed deterministically at their last use — no garbage collector is needed for statically typed code. Reference counting activates only when explicit sharing is required.
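A minimal sketch of what the drop analysis implies, reusing the `@:move` annotation from the examples above (the `Scratch` class and `useOnce` helper are hypothetical names introduced for illustration):

```haxe
// Hypothetical sketch: deterministic frees via ownership tracking.
@:move
class Scratch {
    var data: haxe.io.Bytes;
    public function new() { this.data = haxe.io.Bytes.alloc(4096); }
}

static function work() {
    var tmp = new Scratch(); // allocation owned by `tmp`
    useOnce(tmp);            // last use: ownership moves into useOnce,
                             // so the buffer is freed there — no GC pass
    // any access to `tmp` past this point is a compile-time error
}
```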
Zero GC pauses

Modern SSA Architecture
Full SSA-based intermediate representation with optimization passes, monomorphization for generics, and SIMD vectorization infrastructure. Every function is lowered to a graph of basic blocks with phi nodes, enabling powerful dataflow analysis and dead code elimination.
```
bb0:
  %0 = const 42 : i32
  %1 = const 10 : i32
  %2 = add %0, %1 : i32
  br bb1(%2)

bb1(%3: i32):
  %4 = gt %3, const 0
  cbr %4, bb2, bb3

bb2:
  %5 = call @process(%3)
  ret %5
```
Tiered Runtime for High Throughput
Functions start in the MIR interpreter for instant execution. As they get called repeatedly, Rayzor automatically promotes them to Cranelift JIT (~3ms), then to LLVM -O3 for maximum native speed — all while your program runs.
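A hot function needs no annotations to opt in to tiering. In plain Haxe like the sketch below (the names and loop counts are illustrative, and the promotion thresholds are not documented values), repeated calls are what trigger promotion:

```haxe
class Hot {
    // Starts in the MIR interpreter; after enough calls it is promoted
    // to Cranelift JIT, then recompiled with LLVM -O3 in the background
    // while the program keeps running.
    static function sum(n: Int): Int {
        var acc = 0;
        for (i in 0...n) acc += i;
        return acc;
    }

    static function main() {
        for (k in 0...100000) sum(1000); // call counts drive tier-up
    }
}
```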
Getting Started
```sh
# Clone and build
git clone https://github.com/darmie/rayzor.git
cd rayzor
cargo build --release

# Run with tiered JIT
rayzor run hello.hx --preset application

# Compile to native
rayzor compile hello.hx --stage native

# Create single-file executable
rayzor bundle main.hx --output app.rzb
```
Up and running in seconds
Rayzor requires the Rust toolchain (1.70+). Clone the repo, build with Cargo, and start compiling Haxe files with native performance.
Rayzor vs. Official Haxe Compiler
| Feature | Haxe (official) | Rayzor |
|---|---|---|
| Language Support | Full Haxe 4.x | Haxe 4.x (in progress) |
| JS/Python/PHP | Excellent | Not a goal |
| C++ Target | Slow compile, fast runtime | Fast compile, fast runtime |
| Compile Speed | 2-5s (C++) | ~3ms (Cranelift Tier 1) |
| JIT Runtime | No | 5-tier (Interp→Cranelift→LLVM) |
| Hot Path Optimization | No | Profile-guided tier-up |
| Memory Model | Garbage collected | Ownership-based (compile-time) |
| Incremental Builds | Limited | BLADE cache (~30x speedup) |
| Type Checking | Production | Production |
| Macro System | Full (eval-based) | Interpreter, reification, @:build |
| Optimizations | Backend-specific | SSA-based (universal) |