# REChain Performance Benchmarks

Comprehensive Performance Testing Suite

**Version:** 4.1.10+1160
**Last Updated:** 2025-12-06
**Status:** Active Development
## Table of Contents

- Overview
- Benchmark Categories
- Quick Start
- Running Benchmarks
- Benchmark Scripts
- Results Analysis
- Performance Targets
- Continuous Integration
- Reporting
## Overview

This directory contains performance benchmarking scripts and results for the
REChain project. Benchmarks help ensure optimal performance across all
critical components.

Goals:

- Measure performance of critical components
- Track performance over time
- Identify regressions
- Optimize resource usage
- Validate performance targets

Principles:

- **Automated**: run automatically in CI/CD
- **Reproducible**: consistent results across runs
- **Comprehensive**: cover all critical paths
- **Actionable**: clear pass/fail criteria
## Benchmark Categories

### Application Benchmarks

| Benchmark | Description | Target |
|---|---|---|
| Startup Time | Time to interactive | < 3s |
| Memory Usage | Baseline memory footprint | < 150MB |
| UI Responsiveness | 60fps rendering | > 55fps |
| Battery Drain | Per-hour usage | < 5% |
| Cold Start | App launch time | < 5s |
### Matrix Protocol Benchmarks

| Benchmark | Description | Target |
|---|---|---|
| Sync Latency | Initial sync time | < 2s |
| Message Send | End-to-end send time | < 500ms |
| Room Join | Join room time | < 1s |
| Typing Indicator | Typing notification latency | < 100ms |
| Read Receipt | Read receipt delivery | < 200ms |
| Key Exchange | E2EE key negotiation | < 1s |
### Blockchain Benchmarks

| Benchmark | Description | Target |
|---|---|---|
| Wallet Connect | Connection time | < 2s |
| Balance Fetch | Get balance latency | < 500ms |
| Transaction Sign | Sign transaction | < 1s |
| Transaction Send | Broadcast transaction | < 5s |
| Gas Estimation | Estimate gas limit | < 200ms |
| Token Transfer | ERC-20 transfer | < 10s |
### IPFS Benchmarks

| Benchmark | Description | Target |
|---|---|---|
| Upload Small | < 1MB file upload | < 3s |
| Upload Large | > 100MB file upload | < 30s |
| Download Small | < 1MB file download | < 2s |
| Download Large | > 100MB file download | < 25s |
| Pin Operation | Pin content | < 1s |
| DHT Lookup | Find provider | < 2s |
### AI Services Benchmarks

| Benchmark | Description | Target |
|---|---|---|
| Text Moderation | Content analysis | < 200ms |
| Translation | Language translation | < 500ms |
| Sentiment Analysis | Emotion detection | < 300ms |
| Summarization | Text summarization | < 1s |
| Tokenization | Word segmentation | < 50ms |
### Security Benchmarks

| Benchmark | Description | Target |
|---|---|---|
| Encryption (Message) | E2EE message encryption | < 50ms |
| Encryption (File) | File encryption | < 100ms |
| Key Generation | New key pair | < 500ms |
| Signature Verification | Verify signature | < 100ms |
| Backup Encryption | Encrypted backup | < 2s |
## Quick Start

### Prerequisites

```bash
# Flutter for app benchmarks
flutter --version

# Python 3.8+ for scripts
python3 --version

# Additional tools
pip install matplotlib pandas numpy
```

### Run Benchmarks

```bash
cd benchmarks

# Run all benchmark suites
python3 run_benchmarks.py --all

# Run a specific category
python3 run_benchmarks.py --matrix
python3 run_benchmarks.py --blockchain
python3 run_benchmarks.py --ipfs
python3 run_benchmarks.py --ai

# Generate an HTML report
python3 generate_report.py --input results/ --output report.html
```
## Running Benchmarks

### Application Benchmarks

```bash
# Startup time benchmark
./scripts/benchmark_startup.sh

# Memory usage benchmark
./scripts/benchmark_memory.sh --duration 60

# UI performance benchmark
./scripts/benchmark_ui.sh --iterations 100

# Battery drain benchmark
./scripts/benchmark_battery.sh --duration 3600
```

### Matrix Protocol Benchmarks

```bash
# Sync performance
python3 matrix/benchmark_sync.py \
    --homeserver https://matrix.marinchik.ink \
    --duration 300

# Message throughput
python3 matrix/benchmark_messages.py \
    --room '!room:server' \
    --count 100

# E2EE performance
python3 matrix/benchmark_encryption.py \
    --iterations 50
```

### Blockchain Benchmarks

```bash
# Wallet connection
python3 blockchain/benchmark_wallet.py \
    --network ton \
    --operations 10

# Transaction benchmarks
python3 blockchain/benchmark_transactions.py \
    --network ethereum \
    --iterations 20
```

### IPFS Benchmarks

```bash
# Upload performance (100MB file)
python3 ipfs/benchmark_upload.py \
    --file-size 104857600 \
    --iterations 5

# Download performance
python3 ipfs/benchmark_download.py \
    --cid Qm... \
    --iterations 5
```

### AI Services Benchmarks

```bash
# AI response times
python3 ai/benchmark_services.py \
    --requests 100 \
    --concurrency 10
```
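All of the scripts above share the same measurement pattern: run an operation repeatedly, collect wall-clock samples, and summarize them. A minimal sketch of that harness (the `measure` helper is illustrative, not part of the actual scripts):

```python
import statistics
import time
from typing import Callable


def measure(operation: Callable[[], None], iterations: int = 10) -> dict:
    """Time an operation repeatedly and summarize latency in seconds."""
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        operation()
        samples.append(time.perf_counter() - start)
    samples.sort()
    # Approximate p95 as the sample at index int(0.95 * n), clamped to the last sample.
    p95_index = min(len(samples) - 1, int(0.95 * len(samples)))
    return {
        "iterations": iterations,
        "mean": statistics.mean(samples),
        "median": statistics.median(samples),
        "min": samples[0],
        "max": samples[-1],
        "p95": samples[p95_index],
    }


# Stand-in workload; a real benchmark would issue an actual request here.
stats = measure(lambda: time.sleep(0.001), iterations=5)
print(f"mean={stats['mean']:.4f}s p95={stats['p95']:.4f}s")
```

`time.perf_counter()` is used rather than `time.time()` because it is monotonic and has higher resolution, which matters for sub-millisecond operations.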
## Benchmark Scripts

```
benchmarks/
├── README.md                      # This file
├── run_benchmarks.py              # Main runner script
├── generate_report.py             # Report generator
├── requirements.txt               # Python dependencies
│
├── scripts/                       # Shell scripts
│   ├── benchmark_startup.sh       # App startup
│   ├── benchmark_memory.sh        # Memory usage
│   ├── benchmark_ui.sh            # UI performance
│   └── benchmark_battery.sh       # Battery drain
│
├── matrix/                        # Matrix benchmarks
│   ├── benchmark_sync.py          # Sync performance
│   ├── benchmark_messages.py      # Message throughput
│   ├── benchmark_encryption.py    # E2EE performance
│   ├── __init__.py
│   └── config.py
│
├── blockchain/                    # Blockchain benchmarks
│   ├── benchmark_wallet.py        # Wallet operations
│   ├── benchmark_transactions.py  # Transaction processing
│   ├── __init__.py
│   └── config.py
│
├── ipfs/                          # IPFS benchmarks
│   ├── benchmark_upload.py        # Upload performance
│   ├── benchmark_download.py      # Download performance
│   ├── __init__.py
│   └── config.py
│
├── ai/                            # AI services benchmarks
│   ├── benchmark_services.py      # Service response times
│   ├── __init__.py
│   └── config.py
│
├── results/                       # Results storage
│   └── .gitkeep
│
└── reports/                       # Generated reports
    └── .gitkeep
```
### run_benchmarks.py

```python
#!/usr/bin/env python3
"""
REChain Benchmark Runner

Version: 4.1.10+1160
"""
import argparse
import time
from pathlib import Path


class BenchmarkRunner:
    def __init__(self):
        self.results_dir = Path(__file__).parent / "results"
        self.results_dir.mkdir(exist_ok=True)

    def run_all(self):
        """Run all benchmark suites."""
        suites = [
            ("Application", self.run_application),
            ("Matrix", self.run_matrix),
            ("Blockchain", self.run_blockchain),
            ("IPFS", self.run_ipfs),
            ("AI", self.run_ai),
        ]
        for name, suite_func in suites:
            print(f"\n{'=' * 60}")
            print(f"Running {name} Benchmarks")
            print(f"{'=' * 60}")
            start = time.time()
            suite_func()
            elapsed = time.time() - start
            print(f"\n{name} benchmarks completed in {elapsed:.2f}s")

    def run_application(self):
        """Run application benchmarks."""
        from scripts.benchmark_startup import BenchmarkStartup
        from scripts.benchmark_memory import BenchmarkMemory

        BenchmarkStartup().run()
        BenchmarkMemory().run()

    def run_matrix(self):
        """Run Matrix protocol benchmarks."""
        from matrix.benchmark_sync import SyncBenchmark
        from matrix.benchmark_messages import MessageBenchmark

        SyncBenchmark().run()
        MessageBenchmark().run()

    def run_blockchain(self):
        """Run blockchain benchmarks."""
        from blockchain.benchmark_wallet import WalletBenchmark
        from blockchain.benchmark_transactions import TransactionBenchmark

        WalletBenchmark().run()
        TransactionBenchmark().run()

    def run_ipfs(self):
        """Run IPFS benchmarks."""
        from ipfs.benchmark_upload import UploadBenchmark
        from ipfs.benchmark_download import DownloadBenchmark

        UploadBenchmark().run()
        DownloadBenchmark().run()

    def run_ai(self):
        """Run AI services benchmarks."""
        from ai.benchmark_services import AIBenchmark

        AIBenchmark().run()


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="REChain Benchmark Runner")
    parser.add_argument("--all", action="store_true", help="Run all benchmarks")
    parser.add_argument("--application", action="store_true", help="Run application benchmarks")
    parser.add_argument("--matrix", action="store_true", help="Run Matrix benchmarks")
    parser.add_argument("--blockchain", action="store_true", help="Run blockchain benchmarks")
    parser.add_argument("--ipfs", action="store_true", help="Run IPFS benchmarks")
    parser.add_argument("--ai", action="store_true", help="Run AI benchmarks")
    args = parser.parse_args()

    runner = BenchmarkRunner()
    if args.all or not any([args.application, args.matrix,
                            args.blockchain, args.ipfs, args.ai]):
        runner.run_all()
    else:
        if args.application:
            runner.run_application()
        if args.matrix:
            runner.run_matrix()
        if args.blockchain:
            runner.run_blockchain()
        if args.ipfs:
            runner.run_ipfs()
        if args.ai:
            runner.run_ai()
```
## Results Analysis

### Result Format

Benchmarks generate JSON result files:

```json
{
  "benchmark": "matrix_sync",
  "timestamp": "2025-12-06T10:30:00Z",
  "duration": 300,
  "iterations": 10,
  "results": {
    "mean": 1.234,
    "median": 1.200,
    "std_dev": 0.123,
    "min": 1.050,
    "max": 1.567,
    "p95": 1.456,
    "p99": 1.534
  },
  "target": {
    "max": 2.0,
    "unit": "seconds"
  },
  "status": "PASS"
}
```
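A result file can be checked against its embedded target in a few lines (a sketch; the field names follow the example above):

```python
import json


def check_result(raw: str) -> str:
    """Compare the mean latency to the target ceiling and return PASS or FAIL."""
    data = json.loads(raw)
    mean = data["results"]["mean"]
    ceiling = data["target"]["max"]
    return "PASS" if mean <= ceiling else "FAIL"


sample = '{"results": {"mean": 1.234}, "target": {"max": 2.0, "unit": "seconds"}}'
print(check_result(sample))  # PASS
```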
### Analysis Tools

```bash
# Generate statistics
python3 analyze_results.py --input results/ --output analysis/

# Compare with baseline
python3 compare_results.py --baseline baseline/ --current results/

# Generate visualizations
python3 visualize_results.py --input results/ --format png
```
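`compare_results.py` is not reproduced here; the core of any baseline comparison is a relative delta, e.g.:

```python
def regression_pct(baseline_mean: float, current_mean: float) -> float:
    """Relative change vs. baseline: positive means slower, negative means faster."""
    return (current_mean - baseline_mean) / baseline_mean * 100.0


# Example: sync latency moving from 1.20s to 1.35s is a 12.5% regression.
print(f"{regression_pct(1.20, 1.35):+.1f}%")  # +12.5%
```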
## Performance Targets

| Category | Metric | Target | Warning | Critical |
|---|---|---|---|---|
| App | Startup Time | < 3s | < 4s | < 5s |
| App | Memory Usage | < 150MB | < 200MB | < 250MB |
| App | UI FPS | > 55 | > 50 | > 45 |
| Matrix | Sync Latency | < 2s | < 3s | < 5s |
| Matrix | Message Send | < 500ms | < 750ms | < 1s |
| Blockchain | Transaction | < 5s | < 10s | < 20s |
| IPFS | Upload 10MB | < 5s | < 8s | < 15s |
| AI | Moderation | < 200ms | < 300ms | < 500ms |

Status criteria:

- **PASS**: all metrics within target
- **WARN**: one or more metrics in the warning zone
- **FAIL**: any metric in the critical zone
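One plausible mapping from a measurement to these zones (a sketch; for "greater is better" metrics such as UI FPS the comparisons invert):

```python
def classify(value: float, target: float, warning: float) -> str:
    """Under target -> PASS, under warning -> WARN, otherwise FAIL."""
    if value < target:
        return "PASS"
    if value < warning:
        return "WARN"
    return "FAIL"


# App startup example from the table above (target < 3s, warning < 4s):
print(classify(2.5, target=3.0, warning=4.0))  # PASS
print(classify(3.5, target=3.0, warning=4.0))  # WARN
print(classify(4.5, target=3.0, warning=4.0))  # FAIL
```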
## Continuous Integration
```yaml
name: Performance Benchmarks

on:
  schedule:
    - cron: '0 0 * * 0'  # Weekly on Sunday
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'

      - name: Install dependencies
        run: |
          pip install -r benchmarks/requirements.txt

      - name: Run benchmarks
        run: |
          cd benchmarks
          python3 run_benchmarks.py --all

      - name: Upload results
        uses: actions/upload-artifact@v4
        with:
          name: benchmark-results
          path: benchmarks/results/

      - name: Comment on PR
        if: github.event_name == 'pull_request'
        run: |
          python3 .github/scripts/benchmark_comment.py
```
## Reporting

Generate an HTML report:

```bash
python3 generate_report.py \
    --input results/ \
    --output reports/benchmark_report.html \
    --format html
```

Reports include:

- **Executive Summary**: overall status (PASS/WARN/FAIL), key metrics comparison, trend analysis
- **Detailed Results**: per-category breakdown, statistical analysis, performance graphs
- **Recommendations**: optimization suggestions, regression alerts, capacity planning
## Files

| File | Description |
|---|---|
| `run_benchmarks.py` | Main benchmark runner |
| `generate_report.py` | Report generator |
| `requirements.txt` | Python dependencies |
| `scripts/*.sh` | Shell benchmark scripts |
| `matrix/*.py` | Matrix protocol benchmarks |
| `blockchain/*.py` | Blockchain benchmarks |
| `ipfs/*.py` | IPFS benchmarks |
| `ai/*.py` | AI services benchmarks |
## Version History

| Version | Date | Changes |
|---|---|---|
| 4.1.10+1160 | 2025-12-06 | Complete benchmark suite |
| 4.1.10+1152 | 2025-07-08 | Initial benchmark framework |
| 4.1.9+1147 | 2025-06-01 | Pre-release benchmarks |
---

*REChain: Building the Digital Infrastructure of Autonomous Organizations*