# The Great Analysis Challenge: Multi-Language Chess Engine Project

Polyglot chess engine benchmark: same functional spec, multiple language implementations, Docker-first workflow, shared tests.

## Documentation

Start here: Documentation Hub

Core references:

## Available Implementations

All implementations target parity for the core features: `perft`, `fen`, `ai`, `castling`, `en_passant`, `promotion`.
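Of these, `perft` is the feature the shared test suite leans on most: it counts leaf nodes of the legal-move tree to a fixed depth and compares the counts against known reference values (from the standard start position, perft(1) = 20 and perft(2) = 400). A minimal sketch of the recursion, against an assumed `legal_moves`/`make`/`unmake` board interface and a toy board that is not part of this repo's spec:

```python
def perft(board, depth: int) -> int:
    """Count leaf nodes of the legal-move tree at the given depth."""
    if depth == 0:
        return 1
    nodes = 0
    for move in board.legal_moves():
        board.make(move)
        nodes += perft(board, depth - 1)
        board.unmake(move)
    return nodes

class ToyBoard:
    """Stand-in with a constant branching factor of 3 (not real chess)."""
    def legal_moves(self):
        return [0, 1, 2]
    def make(self, move):
        pass
    def unmake(self, move):
        pass

print(perft(ToyBoard(), 3))  # 3^3 = 27 leaf nodes
```

A real implementation swaps `ToyBoard` for its move generator; because every language produces the same counts, perft doubles as a cross-implementation correctness check.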

| Language | Complexity | LOC | `make build` | `make analyze` | `make test` | `make test-chess-engine` | Features |
|---|---:|---:|---|---|---|---|---|
| 📦 C | 7,151.5 | 2,406 | 889ms, - MB | 179ms, - MB | 892ms, - MB | 18.5s, - MB | 🟢 9/9 |
| 💠 Crystal | 8,040.5 | 3,308 | 1.5s, - MB | 212ms, - MB | 4.5s, - MB | 19.1s, - MB | 🟢 9/9 |
| 🎯 Dart | 15,055.25 | 5,006 | 508ms, - MB | 238ms, - MB | 531ms, - MB | 19.5s, - MB | 🟡 9/9 |
| 💧 Elixir | 5,312.75 | 2,084 | 577ms, - MB | 966ms, - MB | 584ms, - MB | 21.1s, - MB | 🟢 9/9 |
| 🌳 Elm | 5,109.75 | 1,811 | 199ms, - MB | 200ms, - MB | 186ms, - MB | 20.2s, - MB | 🟢 9/9 |
| ✨ Gleam | 28,044 | 4,211 | build error | - | - | - | 🔴 9/9 |
| 🐹 Go | 15,016.25 | 5,703 | 568ms, - MB | 986ms, - MB | 1.1s, - MB | 24.5s, - MB | 🟢 9/9 |
| 📐 Haskell | 8,143.25 | 2,312 | 282ms, - MB | 495ms, - MB | 157ms, - MB | 21.9s, - MB | 🟢 9/9 |
| 🪶 Imba | 6,167 | 1,708 | 451ms, - MB | 454ms, - MB | 157ms, - MB | 19.4s, - MB | 🟡 9/9 |
| 🟨 JavaScript | 4,396.5 | 1,602 | <1ms, - MB | 140ms, - MB | 162ms, - MB | 40s, - MB | 🟢 9/9 |
| 🔮 Julia | 5,997.25 | 2,083 | <1ms, - MB | 955ms, - MB | 3.5s, - MB | 22.6s, - MB | 🟡 9/9 |
| 🧡 Kotlin | 6,774 | 1,974 | 7.1s, - MB | 6.7s, - MB | 171ms, - MB | 21.1s, - MB | 🟡 9/9 |
| 🪐 Lua | 15,254.25 | 4,192 | <1ms, - MB | 156ms, - MB | 168ms, - MB | 32.2s, - MB | 🟢 9/9 |
| 🦊 Nim | 7,067 | 1,636 | 979ms, - MB | 912ms, - MB | 147ms, - MB | 18.5s, - MB | 🟢 9/9 |
| 🐘 PHP | 18,067.25 | 5,879 | <1ms, - MB | 394ms, - MB | 168ms, - MB | 24.2s, - MB | 🟢 9/9 |
| 🐍 Python | 12,581.25 | 4,978 | <1ms, - MB | 213ms, - MB | 1.8s, - MB | 42.6s, - MB | 🟡 9/9 |
| 🧠 ReScript | 6,827.75 | 2,381 | 381ms, - MB | 591ms, - MB | 223ms, - MB | 21.8s, - MB | 🟡 9/9 |
| ❤️ Ruby | 5,466 | 2,469 | <1ms, - MB | 2.1s, - MB | 283ms, - MB | 19.3s, - MB | 🟡 9/9 |
| 🦀 Rust | 9,721.75 | 2,834 | 201ms, - MB | 559ms, - MB | 776ms, - MB | 18.4s, - MB | 🟢 9/9 |
| 🐦 Swift | 5,497.5 | 1,506 | 1.6s, - MB | 7.7s, - MB | 10.2s, - MB | 18.2s, - MB | 🟢 9/9 |
| 📘 TypeScript | 7,773.5 | 2,586 | 2s, - MB | 4.2s, - MB | 2.2s, - MB | 24.2s, - MB | 🟡 9/9 |
| ⚡ Zig | 13,193 | 2,509 | 205ms, - MB | 156ms, - MB | 141ms, - MB | 33.6s, - MB | 🟢 9/9 |

Legend:

- **Features**: 🟢/🟡/🔴 status badge followed by the implemented feature count from metadata, e.g. 🟡 6/9.
- **Complexity**: weighted tokens-v3 semantic `complexity_score`, linked to the implementation entrypoint.
- **LOC**: source lines of code from Git-discovered source files, filtered by the metadata key `org.chess.source_exts`.
- **`make ...` columns**: `<duration>, <peak memory>`.
- **-**: metric missing or intentionally skipped.

## Semantic Token Metrics (tokens-v3)

The README matrix uses the weighted `complexity_score` from tokens-v3. Raw tokens-v2 counts and the full semantic breakdown remain available in the versioned report JSON files and through the shared Bun tooling. The metrics are powered by Shiki: tokens are classified into semantic categories, punctuation is down-weighted, and comments are excluded from scoring.
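As a rough illustration of the classification step (the real pipeline uses Shiki's grammar-based tokenization; the regexes below are a simplified stand-in, not the actual rules):

```python
import re

# Toy token classifier. First matching category wins, scanning left to
# right; whitespace is skipped and anything unmatched is ignored.
CATEGORIES = [
    ("comment",     re.compile(r"//[^\n]*")),
    ("keyword",     re.compile(r"\b(if|else|fn|return|let|class)\b")),
    ("literal",     re.compile(r"\d+|\"[^\"]*\"")),
    ("operator",    re.compile(r"==|&&|->|[+\-*/=<>]")),
    ("punctuation", re.compile(r"[{}();,]")),
    ("identifier",  re.compile(r"[A-Za-z_]\w*")),
]

def classify(source: str) -> dict:
    """Return {category: token count} for a source snippet."""
    counts = {}
    i = 0
    while i < len(source):
        if source[i].isspace():
            i += 1
            continue
        for name, pattern in CATEGORIES:
            m = pattern.match(source, i)
            if m:
                counts[name] = counts.get(name, 0) + 1
                i = m.end()
                break
        else:
            i += 1  # unclassified character: skip
    return counts

counts = classify('if (n == 0) { return 1; } // base case')
# e.g. counts["keyword"] == 2, counts["punctuation"] == 5, counts["comment"] == 1
```

The punctuation-heavy snippet above shows why down-weighting matters: braces, parens, and semicolons dominate the raw token count without adding semantic complexity.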

### Running semantic analysis

```sh
# Single implementation
bun run scripts/semantic-tokens/semantic_tokens.mjs implementations/rust --pretty

# All implementations
bun run scripts/semantic-tokens/semantic_tokens.mjs --all implementations/ --pretty

# Shared metrics pipeline (tokens-v2 + optional tokens-v3)
./workflow code-size-metrics --impl implementations/rust

# Refresh semantic metrics inside versioned reports
./workflow refresh-report-metrics
```

### Complexity score

| Category | Weight | Examples |
|---|---:|---|
| keyword | 1.0 | `if`, `fn`, `return`, `class` |
| identifier | 1.0 | variables, functions, methods |
| type | 1.0 | type annotations, generics |
| operator | 0.5 | `+`, `==`, `&&`, `->` |
| literal | 0.5 | numbers, strings |
| punctuation | 0.25 | `{}`, `()`, `;`, `,` |
| comment | 0.0 | excluded from scoring |
| unknown | 0.5 | fallback classification |

complexity_score = Σ(weight × count).
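The weighted sum is straightforward to reproduce. A sketch using the weights from the table above with made-up category counts (illustrative only, not taken from any report):

```python
# Weights from the tokens-v3 table above.
WEIGHTS = {
    "keyword": 1.0, "identifier": 1.0, "type": 1.0,
    "operator": 0.5, "literal": 0.5,
    "punctuation": 0.25, "comment": 0.0, "unknown": 0.5,
}

def complexity_score(counts: dict) -> float:
    """Σ(weight × count); unlisted categories fall back to 'unknown'."""
    return sum(WEIGHTS.get(cat, WEIGHTS["unknown"]) * n
               for cat, n in counts.items())

# Illustrative counts, not real report data:
counts = {"keyword": 120, "identifier": 400, "operator": 90,
          "punctuation": 300, "comment": 50}
print(complexity_score(counts))  # 120 + 400 + 45 + 75 + 0 = 640.0
```

Note how 300 punctuation tokens contribute only 75 points and comments contribute nothing, so the score tracks semantic density rather than raw token volume.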

## Quick Commands

```sh
make list-implementations
make image DIR=<language>
make build DIR=<language>
make analyze DIR=<language>
make test DIR=<language>
make test-unit-contract DIR=<language>
make test-chess-engine DIR=<language>
./workflow semantic-tokens implementations/<language> --pretty
./workflow refresh-report-metrics
```

All implementation build/test/analyze operations are Docker-only.

## About

A multi-language base application to test performance of static analysis
