RGirish/monorepo
MBBALCP2026

My Big Beautiful Ambitious Learning And Curiosity Plan For 2026

Goal: One new AI topic learned and one new piece of software built, every week in 2026.


Progress

| Week | Date | AI Topic | Build | Link |
|------|--------|----------|-------|---------|
| 01 | Jan 05 | Beads — coding agent memory system | Bloom filter | Week 01 |
| 02 | Jan 12 | Claude Code | TODO MCP server | Week 02 |
| 03 | Jan 19 | Strands Agents + Ollama | Jarvis chatbot | Week 03 |
| 04 | Jan 26 | AG-UI protocol | Jarvis TODO integration + two-phase commit | Week 04 |
| 05 | Feb 02 | A2A protocol | Jarvis A2A server | Week 05 |
| 06 | Feb 09 | Agent Client Protocol (ACP) | Symmetric encryption | Week 06 |
| 07 | Feb 16 | Embedding models + vector similarity | Vector database | Week 07 |
| 08 | Feb 23 | Ralph autonomous agent system | TCP three-way handshake | Week 08 |
| 09 | Mar 02 | OpenClaw — sandboxed AI coding | CRDT collaborative editor | Week 09 |
| 10 | Mar 09 | Language modeling — bigram model (makemore) | Bigram language model | Week 10 |
| 11 | Mar 16 | Language modeling — neural net framework (makemore) | Bigram model in PyTorch | Week 11 |
| 12 | Mar 23 | LLM Wiki (Karpathy) | LLM Music Producer | Week 12 |
| 13 | Mar 30 | Feature engineering + representation learning | (no build) | Week 13 |
| 14 | Apr 06 | (not started) | | |
| 15 | Apr 13 | (not started) | | |
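To give a flavor of the builds listed above, the week-01 Bloom filter fits in a few lines. This is a minimal sketch, not the repo's actual implementation; the hash scheme (salted SHA-256) and sizes are illustrative choices.

```python
import hashlib

class BloomFilter:
    """Probabilistic set membership: no false negatives, tunable false positives."""
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k          # m bits, k hash functions
        self.bits = bytearray(m // 8)

    def _positions(self, item):
        # Derive k bit positions by salting one hash function k ways.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

bf = BloomFilter()
bf.add("beads")
print("beads" in bf)  # members are always found; non-members rarely collide
```

With one item in 1024 bits, a false positive requires all three salted hashes of a non-member to land on set bits, which is vanishingly unlikely.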

Highlights (13 weeks in)

Best builds:

  • LLM Music Producer — two complete pipelines for LLM-driven audio composition: Base95 frame payloads and MIDI-as-text; the MIDI approach produces genuinely musical output
  • Jarvis — a personal AI assistant grown incrementally across 3 weeks, ending up with MCP tool access and an A2A server; a complete end-to-end agent system
  • CRDT Collaborative Editor — collision-free concurrent edits without coordination, elegant data structure design
  • Bigram Neural Net — the moment count-based statistics and neural networks converge to the same answer
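The last bullet, counts and a neural net converging to the same answer, can be demonstrated directly: normalizing a bigram count matrix and training a single softmax layer on the same pairs produce the same probability table. A minimal sketch in the spirit of makemore (toy corpus, hand-rolled gradient), not the repo's code:

```python
import numpy as np

# Tiny corpus; "." marks word start/end, as in makemore.
words = ["ab", "abb", "ba", "bab"]
chars = [".", "a", "b"]
stoi = {c: i for i, c in enumerate(chars)}
V = len(chars)

# Collect (current char, next char) training pairs.
pairs = []
for w in words:
    seq = ["."] + list(w) + ["."]
    pairs += [(stoi[a], stoi[b]) for a, b in zip(seq, seq[1:])]
xs = np.array([i for i, _ in pairs])
ys = np.array([j for _, j in pairs])

# Count-based model: normalize the bigram count matrix row-wise.
N = np.zeros((V, V))
np.add.at(N, (xs, ys), 1)
P_counts = N / N.sum(1, keepdims=True)

# Neural model: one linear layer (logits = W[x]) plus softmax, trained
# by gradient descent on cross-entropy over the same pairs.
W = np.zeros((V, V))
lr = 5.0
for _ in range(2000):
    logits = W[xs]
    probs = np.exp(logits)
    probs /= probs.sum(1, keepdims=True)
    grad = probs
    grad[np.arange(len(ys)), ys] -= 1.0     # d(cross-entropy)/d(logits)
    np.add.at(W, xs, -lr * grad / len(ys))  # accumulate row updates
P_net = np.exp(W) / np.exp(W).sum(1, keepdims=True)

# The two probability tables converge to the same answer.
print(np.abs(P_counts - P_net).max())  # shrinks toward zero with training
```

The neural net's maximum-likelihood solution is exactly the normalized count table, which is why the gap closes as training proceeds.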

Most interesting AI topics:

  • LLM Wiki (Karpathy) — a dense practitioner-oriented map of the entire LLM stack: architecture, training, inference, and emergent capabilities
  • Language Modeling fundamentals — building a language model from scratch reveals how the entire LLM stack is constructed
  • Agent Protocols — three layers (MCP, A2A, ACP/AG-UI) converging into a standard agent communication stack
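One concrete thread tying the protocol layer together: MCP rides on JSON-RPC 2.0, so a tool invocation is an ordinary JSON-RPC request. The sketch below shows the rough wire shape only; the tool name and arguments are hypothetical, and the MCP specification is the authority on the exact schema.

```python
import json

# Illustrative MCP-style tool call as a JSON-RPC 2.0 request.
# "add_todo" and its arguments are made up for this example.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "add_todo",                       # hypothetical tool
        "arguments": {"title": "write week 14"},  # tool-specific payload
    },
}
wire = json.dumps(request)
print(wire)
```

The server replies with a JSON-RPC response carrying the tool's result, which is what lets any MCP client drive any MCP server without bespoke glue.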

Running themes:

  • Agent infrastructure — 7 of 13 weeks touched agent frameworks, protocols, or tooling
  • Build-what-you-learn — several AI topics were immediately applied as hands-on builds (embeddings → vector DB, language modeling → bigram model, MCP → TODO server)
  • Incremental systems — Jarvis and the makemore series both show how complex systems grow from simple foundations
  • Representation matters — week 12 showed that MIDI (semantic) beats Base95 (statistical) for LLM audio generation; the choice of representation is the most important design decision
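The embeddings-to-vector-DB theme above reduces to one operation: nearest-neighbour search by cosine similarity over stored vectors. A toy sketch (brute-force linear scan over hand-made 3-d vectors, not the week-07 implementation; real systems use high-dimensional embeddings and ANN indexes):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "database": id -> embedding vector.
db = {
    "cat": [0.9, 0.1, 0.0],
    "dog": [0.8, 0.2, 0.1],
    "car": [0.0, 0.1, 0.9],
}

def search(query, k=2):
    """Return the k ids whose vectors are most similar to the query."""
    ranked = sorted(db, key=lambda doc: cosine(db[doc], query), reverse=True)
    return ranked[:k]

print(search([0.85, 0.15, 0.05]))  # nearest neighbours of the query
```

Everything else a vector database adds (indexing, quantization, filtering) exists to make this scan fast at scale.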

Explore

  • Wiki Index — full internal catalog: tools, builds, concepts, synthesis
  • Tools — one page per AI topic or tool learned
  • Builds — one page per thing built
  • Concepts — cross-cutting ideas that emerged across multiple weeks
  • Code — all builds organized by domain

Updated by an LLM on every ingest. See wiki/index.md for the full internal catalog.


The markdown-based knowledge base in this repo is an implementation of the "LLM Wiki" idea from Andrej Karpathy.
