I build local-first AI tools, prompt-heavy workflows, and developer systems that are meant to be used, inspected, and improved.
Most of what I work on sits somewhere around DSPy, LM Studio, OpenAI-compatible local runtimes, RAG, prompt tooling, and repo-aware automation. Some of it is public, a lot of it is not, but the pattern is usually the same: make the workflow more useful, more inspectable, and less dependent on black-box magic.
- DSPy experiments and teaching tools
- LM Studio tooling and plugins
- prompt libraries, prompt workflows, and prompt testing
- local-first agent and developer environments
- repo mapping, structure-aware tooling, and workflow orchestration
- small utilities that make AI-assisted development less annoying
Public repos include:
- DSPyTeach — turns source material into structured teaching briefs
- lms-llmsTxt — generates llms.txt-style artifacts with DSPy + LM Studio
- rag-v2 — an LM Studio RAG plugin
- dspy_workspace — experiments, utilities, and scratch space for DSPy work
- prompt-docs — reusable prompt assets and workflow patterns
There is also a larger body of private work behind this profile, covering:
- local-first AI development tooling
- prompt infrastructure and benchmarking
- code review and session analysis tools
- repo intelligence and planning tools
- dashboards, registries, and internal workflow systems
Most of it is in service of the same idea: AI tooling should be understandable, controllable, and useful in real workflows.
I usually prefer:
- local-first over SaaS-first
- tools you can inspect over tools you have to trust
- systems that help you move faster without hiding what they're doing
- practical workflows over polished hype demos
- Tokscale: https://tokscale.ai/u/AcidicSoil
- LM Studio: https://lmstudio.ai/dirty-data
- X: https://x.com/d1rt7d4t4