Pre-requisites

LLVM/Clang with llvm-mca (AArch64 target): see the Arm Learning Path linked below for installation instructions

LLVM Machine Code Analyzer – Arm Developer Page

https://learn.arm.com/learning-paths/cross-platform/mca-godbolt/

Python 3.11+ (Flask backend)

Phantom wallet (for Devnet rewards)

(Optional) Docker for containerized runs


🧠 Inspiration

Many computer science students learn C/C++ but never actually see what happens after compilation. We wanted to bridge the gap between high-level programming and low-level microarchitecture — helping students see how their code becomes assembly and how it executes on real Arm CPUs. LLVM-MCA gives quantitative insight into pipelines, and Gemini explains the results in human terms.

💡 What It Does

Translate your own C/C++ into AArch64 assembly.

One click to compile C → AArch64 assembly → machine code hex.

Run LLVM-MCA against different Arm CPU models (e.g., apple-m2, neoverse-v2) and visualize the performance results.

Get a Gemini explanation of throughput, stalls, IPC, and concrete optimizations.

Bonus: Daily quiz with a small Solana Devnet reward for correct answers.

In short: a compiler & CPU-pipeline tutor for interactive performance learning.

Judging Criteria Questions

🧠 What is the problem?

Today’s computer science education is dominated by Gen-AI prompting and high-level programming, while low-level performance understanding — assembly, CPU pipelines, and compiler behavior — is largely forgotten.

Most students can’t answer:

“What happens to my C++ code after I press compile?”

Tools like LLVM-MCA exist to analyze CPU microarchitecture performance, but they’re CLI-only, intimidating, and inaccessible to learners. As a result, few students ever see how instructions flow through a real Arm processor — or how code structure affects efficiency.

📖 What’s the story behind it?

While studying Arm Learning Paths, we found “Learn about LLVM Machine Code Analyzer” — a guide showing how to use clang and llvm-mca to understand throughput, IPC, and pipeline stalls.

We realized this could be a perfect educational bridge: from theory (how CPUs work) → practice (how code executes).

So we decided to build a student-friendly web app that:

brings compiler and architecture education online,

uses Gemini to explain reports in plain English,

and even rewards learning with Solana micro-rewards through a blockchain-based quiz system.

⚙️ How are we solving it?

We built a Flask-based web platform that integrates:

🧩 Clang compiler playground: compile C/C++ to AArch64 assembly in real time.

⚙️ LLVM-MCA analysis engine: measure instruction throughput, pipeline utilization, and stalls for different Arm CPUs (like Apple M2 or Neoverse-V2).

🤖 Gemini 2.0 Flash AI: translate complex performance reports into concise, human-readable lessons.

💰 Solana Devnet integration: users can take a daily quiz and earn 0.1 SOL for correct answers — reinforcing learning through gamification.

☁️ Vultr deployment: containerized and hosted on a cloud instance, enabling accessible, reproducible demos for hackathons and classrooms.

🧩 What / How is the code & system used to solve the problem?

🔧 Backend

Python Flask orchestrates the entire workflow.

subprocess.run() executes real clang and llvm-mca commands, producing live performance data.

google-genai SDK (Gemini 2.0 Flash) generates natural-language explanations from raw reports.

Solana’s solders SDK handles secure blockchain transfers (0.1 SOL per correct quiz).
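The compile-and-analyze part of this backend flow can be sketched as a small helper. This is a minimal sketch, not the project's actual code: the function names and file paths are our illustration, and the clang/llvm-mca flags are the ones shown in the sections below.

```python
import subprocess

# Hypothetical helpers mirroring the backend flow described above:
# compile C++ to AArch64 assembly, then feed the .s file to llvm-mca.
def build_commands(source_path: str, cpu: str = "apple-m2"):
    asm_path = source_path.rsplit(".", 1)[0] + ".s"
    compile_cmd = ["clang", "-O3", "--target=aarch64-apple-darwin",
                   "-S", source_path, "-o", asm_path]
    mca_cmd = ["llvm-mca", f"-mcpu={cpu}", asm_path]
    return compile_cmd, mca_cmd

def analyze(source_path: str, cpu: str = "apple-m2", timeout: int = 30) -> str:
    compile_cmd, mca_cmd = build_commands(source_path, cpu)
    subprocess.run(compile_cmd, check=True, timeout=timeout)
    report = subprocess.run(mca_cmd, check=True, timeout=timeout,
                            capture_output=True, text=True)
    return report.stdout  # raw llvm-mca text, later parsed and explained by Gemini
```

The `timeout` and `check=True` arguments matter here: user-submitted code runs in a subprocess, so the server should bound its runtime and surface compiler failures as exceptions.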

🌐 Frontend

Bootstrap 5 dashboard with sidebar navigation: Home | Translate | Analysis | Quiz

Interactive code editor where users input C++ and view:

translated assembly and hex dumps,

LLVM-MCA performance visualization,

and AI-powered explanations.

Phantom Wallet integration for Solana sign-in and reward claiming.

🧱 Infrastructure

Docker-containerized Flask app deployed on Vultr Arm instance.

Solana CLI configured for Devnet balance verification.

Environment variables stored via .env for secure deployment.

💡 Any other information for judges

Education-first: Brings compiler theory to life for CS students using real Arm tools.

AI-powered insight: Gemini transforms cryptic performance metrics into step-by-step tutoring.

Gamified learning: Solana rewards motivate consistent engagement and comprehension.

Fully open-source & Docker-ready: Easy for universities to deploy in compiler or architecture labs.

Built from scratch with no canned output: everything runs live (clang + llvm-mca + Gemini + Solana Devnet).

MLH Tracks (ARM, Vultr, Solana, Gemini):

⚙️ Use of Arm Learning Path

We built directly upon the Arm Learning Path “Learn about LLVM Machine Code Analyzer” 👉 https://learn.arm.com/learning-paths/cross-platform/mca-godbolt/

That guide was the foundation for our entire backend — teaching how to use clang and llvm-mca to compile and analyze C/C++ code for Arm CPUs. We took those command-line tools and turned them into an interactive web learning environment powered by AI and modern web tech.

🧩 How We Applied It

1️⃣ LLVM-MCA Performance Visualization

The Learning Path demonstrates how llvm-mca analyzes instruction throughput, latency, and pipeline behavior. We automated that workflow in our Flask backend:

```shell
clang -O3 --target=aarch64-apple-darwin -S code.cpp -o code.s
llvm-mca -mcpu=apple-m2 code.s
```

In our implementation:

- Flask runs llvm-mca in a sandboxed subprocess, capturing the entire report and timeline.
- The app extracts key metrics (IPC, μops, throughput, stalls) from the textual and structured output.
- Results are rendered in a browser-based dashboard, where users can see:
  - instructions per cycle (IPC),
  - bottleneck identification,
  - execution port utilization,
  - front-end vs back-end pipeline analysis.
- Gemini 2.0 Flash then summarizes and explains the report in plain English, giving students real insight into how their code executes on an Arm microarchitecture.
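The metric-extraction step can be sketched as a small parser over the summary block that llvm-mca prints at the top of every report. This is a sketch under our assumptions (the real app may parse more fields, including the timeline):

```python
import re

# Pull headline metrics (IPC, cycles, uOps, block throughput) out of the
# summary header of an llvm-mca report.
def parse_mca_summary(report: str) -> dict:
    patterns = {
        "instructions": r"Instructions:\s+(\d+)",
        "total_cycles": r"Total Cycles:\s+(\d+)",
        "total_uops": r"Total uOps:\s+(\d+)",
        "ipc": r"IPC:\s+([\d.]+)",
        "block_rthroughput": r"Block RThroughput:\s+([\d.]+)",
    }
    metrics = {}
    for key, pat in patterns.items():
        m = re.search(pat, report)
        if m:
            metrics[key] = float(m.group(1))
    return metrics
```

The resulting dict is what the dashboard charts and what gets embedded in the Gemini prompt.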

This turns what was once a command-line tool for experts into a data visualization and AI tutoring experience for learners.

2️⃣ Clang Playground: C++ → Assembly Translation

The Learning Path shows how to compile code to AArch64 assembly. We implemented this feature directly — users can write C/C++ in the browser and instantly view the resulting assembly and machine code.

```shell
clang -O3 --target=aarch64-apple-darwin -S code.cpp -o code.s   # assembly
clang -O3 --target=aarch64-apple-darwin -c code.cpp -o code.o   # object file
xxd -p code.o   # produce machine code hex dump
```

Our backend:

- Uses Python's subprocess.run() to invoke clang dynamically.
- Returns the generated assembly (.s) and the machine-code hex dump to the browser.
- Displays both side by side, turning it into a live compiler visualization tool.

This lets students see how every line of C++ translates into low-level instructions, helping them understand compiler optimization decisions in real time.
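The hex-dump half of this pipeline is simple enough to show inline. The project shells out to `xxd -p`, but the same output can be produced in Python (a sketch; the helper name is ours):

```python
# Reproduce `xxd -p`: a plain hex dump of the object file's bytes,
# wrapped at 60 hex characters (30 bytes) per line, as xxd does.
def hex_dump(data: bytes, width: int = 60) -> str:
    hex_str = data.hex()
    return "\n".join(hex_str[i:i + width] for i in range(0, len(hex_str), width))
```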

🧠 Educational Value

Today's computer science education often revolves around Gen AI and prompting, while assembly, performance tuning, and hardware efficiency are rarely emphasized. By building on Arm's Learning Path, we reintroduced these fundamental concepts in a modern, approachable way:

- LLVM-MCA acts as the professor, showing microarchitectural performance.
- Gemini acts as the teaching assistant, explaining those results.
- Our Flask + JS frontend acts as the classroom, bringing it all together interactively.

⚙️ Use of Solana

We integrated Solana as the backbone of our on-chain learning reward system — combining real wallet management, CLI interaction, and programmatic transfers.

🎯 Why: Most educational platforms stop at virtual points. We wanted to make learning about computer architecture literally rewarding — using Solana’s fast, low-cost Devnet to give small, real crypto incentives for daily quiz participation.

🧩 How it works:

Wallet Authentication:

The frontend connects to Phantom Wallet through the browser extension (window.solana).

Each user signs a Solana “Sign-In With Wallet” (SIWS) message.

The Flask backend verifies the signature with PyNaCl, issues a JWT session, and ties progress to the verified wallet address.

Reward Mechanism:

On correct quiz completion, Flask calls the solders SDK to build and sign a transfer transaction from a server-held treasury keypair to the student’s wallet.

Each wallet can only claim once per day, tracked in-memory via wallet + date pair.
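The once-per-day rule can be sketched as an in-memory guard keyed on a (wallet, date) pair. This is a minimal sketch under our assumptions: the names are ours, the reward constant mirrors the 0.1 SOL figure above, and a real deployment would persist claims rather than hold them in memory.

```python
import datetime

LAMPORTS_PER_SOL = 1_000_000_000
REWARD_LAMPORTS = int(0.1 * LAMPORTS_PER_SOL)  # 0.1 SOL per correct answer

_claims = set()  # {(wallet_address, iso_date)} -- in-memory, resets on restart

def try_claim(wallet, today=None):
    """Return True (and record the claim) if this wallet hasn't claimed today."""
    day = (today or datetime.date.today()).isoformat()
    key = (wallet, day)
    if key in _claims:
        return False
    _claims.add(key)
    return True
```

Only after `try_claim` returns True does the backend build and sign the solders transfer from the treasury keypair.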

Treasury Management via CLI:

We used the Solana CLI for setup and verification throughout development:

```shell
solana-keygen new --outfile ~/.config/solana/id.json
solana config set --url https://api.devnet.solana.com
solana airdrop 2
solana balance 4ZwHLRFwnNrdCQJ5tApkbQK4b443fwqfQfC22rs18s5r
```

The CLI allowed us to confirm rewards were actually distributed — verifying balance changes in real time.

Transparency:

Each reward transaction returns a Solscan link for verification on the public Devnet explorer.

Example: https://solscan.io/tx/?cluster=devnet

💡 Why it matters: This isn’t a fake blockchain “demo” — it’s a fully live Solana integration with real Devnet transfers, CLI-based monitoring, and wallet signature verification. We teach students not just about compiler performance, but also how on-chain identity and cryptographic proof work in practice.

💻 JavaScript Integration for Solana Rewards

Our reward flow relies on JavaScript running in the browser to communicate with Phantom Wallet and the Solana Devnet:

Wallet Connection (JS): The header script calls:

window.solana.connect()

to request wallet access and obtain the user’s public key (base58).

Sign-In With Wallet (SIWS): The frontend JS composes a sign-in message, calls:

provider.signMessage(encoded, "utf8")

and sends the signed payload to the Flask backend for verification using PyNaCl.

Reward Trigger: When the user submits a correct quiz answer, JS sends:

```javascript
fetch("/api/quiz/daily/submit", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ wallet: currentWalletAddress, answer })
})
```

The backend uses solders (Python) to perform the on-chain transfer of 0.1 SOL.

Result Display: The JavaScript layer parses the JSON response and shows the transaction signature + Solscan link for transparency.

⚙️ Use of Vultr

We deployed the entire project to Vultr Cloud Compute, transforming it from a local prototype into a reproducible, production-ready web service.

Public IP URL provided by Vultr: http://45.76.251.144/

(Screenshots: server information and Gunicorn response headers.)

🎯 Why: Hackathon demos often stay local — we wanted a real, public deployment that mirrors how educators or developers could host the tool in the cloud. Vultr provided fast provisioning, Docker-ready images, and reliable performance — ideal for quickly deploying a full compiler-and-AI stack.

🧩 How it works:

Cloud Instance Setup:

Created a Vultr Ubuntu 24.04 (Docker) instance in the New Jersey datacenter.

Specs: 1 vCPU, 2 GB RAM, 25 GB NVMe.

Verified SSH access with:

ssh root@xxxxxxxxxx

We confirmed system info and Docker installation:

```shell
lsb_release -a
docker --version
```

Containerized Deployment:

Built and ran the Flask app in an isolated Docker container:

```shell
docker build -t llvm-mca-app .
docker run -d -p 80:5000 --env-file .env llvm-mca-app
```

Environment variables (.env) securely stored API keys, Gemini credentials, and Solana treasury secrets.

Used a reverse proxy (Caddy/Nginx) to route HTTPS requests and handle static assets.
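A Dockerfile along these lines would reproduce the setup described above. This is a sketch of our configuration rather than the exact file: the base image matches the Vultr instance, but the package list, `requirements.txt`, and the `app:app` Gunicorn entry point are assumptions.

```dockerfile
FROM ubuntu:24.04

# Compiler + analyzer the app shells out to, plus Python for Flask
RUN apt-get update && apt-get install -y \
    clang llvm xxd python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY requirements.txt .
# flask, google-genai, solders, gunicorn (Ubuntu 24.04 needs the PEP 668 override)
RUN pip3 install --no-cache-dir --break-system-packages -r requirements.txt
COPY . .

EXPOSE 5000
CMD ["gunicorn", "-b", "0.0.0.0:5000", "app:app"]
```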

Testing & Verification:

Confirmed the container was active and reachable:

```shell
docker ps
curl http://localhost:5000
```

Checked resource usage via the Vultr dashboard — ensuring CPU and memory stayed under 20% during concurrent analysis runs.

Architecture Integration:

The Docker container bundles clang, llvm-mca, and google-genai SDK, enabling in-cloud compilation, AI analysis, and Solana reward transactions — all in one environment.

The setup is reproducible on x86 or Arm Vultr instances, ensuring parity with the Arm Learning Path focus of our project.

💡 Why it matters: By deploying on Vultr, we turned a local Flask prototype into a live, globally accessible microservice. Our containerized architecture ensures instructors or developers can replicate the setup instantly — with zero configuration drift and full portability. This approach embodies “infrastructure as education” — showing students how cloud, compilers, AI, and blockchain converge in one stack.

⚙️ Best Use of Gemini

We used Google Gemini 2.0 Flash as the AI teaching assistant that turns cryptic LLVM performance data into clear, human-readable insight.

🎯 Why Gemini

LLVM-MCA produces dense reports full of hardware jargon — IPC, μops, stalls, backend pressure, port utilization — that overwhelm students. We wanted an AI that could understand technical context and summarize performance like a professor explaining pipeline flow.

Gemini’s multimodal reasoning and long-context text handling made it perfect for this role.

Educational Summaries

Gemini contextualizes every report with a clear structure:

- Performance overview: IPC, latency, throughput
- Pipeline analysis: front-end vs back-end balance
- Actionable advice: "Try -O3 vs -Ofast to reduce stalls."

This transforms low-level data into structured lessons.
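The summarization step amounts to a prompt template fed to the model together with the parsed metrics. A sketch of that template follows; the exact wording and the `build_prompt` name are our illustration, not the project's actual prompt:

```python
# Build the tutoring prompt sent to Gemini 2.0 Flash from parsed MCA metrics.
def build_prompt(metrics: dict, report_excerpt: str) -> str:
    return (
        "You are a CPU microarchitecture tutor. Explain this llvm-mca report "
        "to a student in three parts: performance overview, pipeline analysis "
        "(front-end vs back-end), and one concrete optimization suggestion.\n\n"
        f"Key metrics: IPC={metrics.get('ipc')}, "
        f"cycles={metrics.get('total_cycles')}, "
        f"uops={metrics.get('total_uops')}\n\n"
        f"Report excerpt:\n{report_excerpt}"
    )
```

Keeping the headline metrics separate from the raw report excerpt lets the excerpt be truncated to fit token limits without losing the numbers the explanation hinges on.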

🌟 Impact

Students not only see their compiler’s performance — they understand it.

By integrating Gemini:

- We made LLVM-MCA approachable to beginners.
- We turned static reports into conversational explanations.
- We demonstrated how Gen AI can teach hardware-level concepts, not just generate code.

🛠️ Overall Tech Stacks & How We Built It

Frontend: Flask + Bootstrap (pages: Home, Translate, Analysis, Quiz)

Backend: Python Flask calls clang and llvm-mca; returns report + timeline JSON

AI: Gemini 2.0 Flash (google-genai SDK) summarizes MCA metrics & suggests optimizations

Wallet: Phantom (connect + signMessage) → server verifies and issues JWT

Chain: Solana Devnet payouts via solders (Message + latest blockhash + signed tx)

Targets: apple-m2, neoverse-v2 (easily extendable)

Containerization: Dockerfile + compose for consistent local/hosted runs

Deployment: Vultr cloud instance (Ubuntu + Docker), exposing a public URL for access

🚧 Challenges We Ran Into

Getting clang and llvm-mca set up to target AArch64 correctly on macOS (Apple M2).

Handling different SDK versions of Gemini (API arguments changed frequently!).

Managing long reports and token limits when sending analysis data to Gemini.

Designing a simple but educational UI that hides the complexity of assembly syntax errors.

🏆 Accomplishments That We’re Proud Of

Built a working real-time LLVM-MCA analyzer that accepts student code in the browser.

Successfully connected Gemini 2.0 Flash to interpret performance reports.

Integrated Arm learning paths and compiler optimizations into a single teaching tool.

Created an interactive, self-hostable web app that universities could deploy in compiler or architecture labs.

📚 What We Learned

Deep understanding of Arm microarchitecture (IPC, μops, latency, throughput).

How to use LLVM-MCA as an educational diagnostic tool.

Best practices for teaching low-level performance concepts using natural-language AI.

How to containerize AI + compiler pipelines on Arm-based machines.

We accidentally pushed our secrets and JWT keys, and GitGuardian kept emailing us; we had to recreate our repositories several times.

We deployed the project using Docker on Vultr. However, Phantom wallet management is not working in the Docker deployment, because the server-side environment lacks the Phantom wallet Chrome extension.

🚀 What’s Next

Deploy as a teaching web lab for compiler and computer-architecture courses.

Incorporate visual pipeline diagrams and interactive NEON/SVE vector examples.

Grow the user base and build a business model around advertisements, while fine-tuning the reward amount.

The reward is currently set at 0.1 SOL because 0.00000001 SOL would not display in Phantom wallet; this amount is for demo purposes and will be adjusted.
