Independent researcher · Computational neuroscience · AI systems architecture
Poltava region, Ukraine · between forest and field
status: NFI live · 4 subsystems · γ = 1.043 (McGuirl 2020) · G6 validated
I build systems that observe themselves.
Not in a metaphorical sense — literally. Each system I build includes a formal proof of its own state, a contract it cannot violate, and a diagnostic signal that says whether it's healthy or not. If it can't prove what it is, I don't trust it.
My background is unusual: competitive boxing, oil rig engineering, 7+ years of self-directed neuroscience. No CS degree. Everything I know I built from first principles — reading Sapolsky and Dubynin, running simulations at 2am, breaking things and understanding why.
I use Adversarial Orchestration — a methodology where every output passes through Creator → Critic → Auditor → Verifier before I trust it. I run parallel LLM agents, synthesize their outputs manually, and act as the human integration layer. The only results that survive are the ones that pass all four stages.
I don't prototype. I build systems with formal contracts, 99%+ test coverage, and evidence bundles that reproduce exactly on any machine.
Prompt engineering → canonical prompts with formal gates and honesty labels
Multi-agent systems → adversarial orchestration pipelines
Biophysical simulation → reaction-diffusion, fractal, topological dynamics
Scientific validation → Cohen's d, bootstrap CI, permutation tests
AI architecture → fail-closed, evidence-first, self-calibrating
Trading systems → geometric signal analysis, Kuramoto synchronization
System integration → formal contracts, adapter protocols, closed loops
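The validation toolkit named in the list above (Cohen's d, bootstrap confidence intervals, permutation tests) fits in a few standard-library functions. A minimal sketch, not taken from any of the repos:

```python
import random
import statistics as stats

def cohens_d(a: list[float], b: list[float]) -> float:
    """Effect size: mean difference divided by pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * stats.variance(a) + (nb - 1) * stats.variance(b)) / (na + nb - 2)
    return (stats.mean(a) - stats.mean(b)) / pooled_var ** 0.5

def bootstrap_ci(x: list[float], n: int = 2000, alpha: float = 0.05, seed: int = 0):
    """Percentile bootstrap CI for the mean: resample with replacement, take quantiles."""
    rng = random.Random(seed)
    means = sorted(stats.mean(rng.choices(x, k=len(x))) for _ in range(n))
    return means[int(alpha / 2 * n)], means[int((1 - alpha / 2) * n) - 1]

def permutation_p(a: list[float], b: list[float], n: int = 2000, seed: int = 0) -> float:
    """Two-sided permutation test on the difference of means."""
    rng = random.Random(seed)
    observed = abs(stats.mean(a) - stats.mean(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n):
        rng.shuffle(pooled)
        if abs(stats.mean(pooled[:len(a)]) - stats.mean(pooled[len(a):])) >= observed:
            hits += 1
    return hits / n
```

All three are deterministic given a seed, which is the property an "evidence bundle that reproduces exactly on any machine" depends on.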
γ = 0.967
That's the coherence signal from a live system reading four separate subsystems simultaneously. It means everything is working. When it drops, something is wrong, and the system tells you exactly where.
That number took two years to make meaningful.
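γ itself is not defined on this page, but since Kuramoto synchronization appears in the skills list above, one plausible building block for such a coherence signal is the Kuramoto order parameter. A sketch under that assumption only; the actual NFI metric is presumably more involved (and, judging by the status line, not capped at 1):

```python
import cmath

def order_parameter(phases: list[float]) -> float:
    """Kuramoto order parameter r = |mean of e^(i*phase)|.

    r == 1.0 when every oscillator shares one phase; r -> 0 as phases scatter.
    """
    z = sum(cmath.exp(1j * p) for p in phases) / len(phases)
    return abs(z)
```

Four subsystems reporting identical phases give `r = 1.0`; two in antiphase give `r = 0.0`, the "something is wrong" end of the scale.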
▸ neuron7xLab/NFI unified cognitive orchestrator
▸ neuron7xLab/MFN-plus morphogenetic field network
▸ neuron7xLab/BN-Syn biophysical neural dynamics
▸ neuron7xLab/CA1-LAM hippocampal memory model
▸ neuron7xLab/ML-SDM adaptive LLM behaviour
Solo · AGPL-3.0 · Ukraine 🇺🇦 · 2024–2026
"Trust no one, not even yourself. Don't trust yourself."
— Elon Musk, Lex Fridman Podcast #400
