Alignment is solvable. The real problem? No one's really tried yet. We are, and we're focused where the leverage is highest: the neglected approaches that science forgot.
Why Alignment Matters
AI development is advancing at an exponential pace. Every leap forward brings both immense opportunities and existential risks.
Superficial safety tactics (RLHF, prompt engineering, output filtering) aren't enough. They're brittle guardrails masking deeper structural misalignment. Recent results have shown that even minimally fine-tuned models can produce profoundly harmful outputs, hide dangerous backdoors, and deceptively fake their own alignment.
At AE, our stance is clear and urgent: Alignment isn't solved. It's fundamentally a scientific R&D problem, and the stakes of getting this right literally couldn't be higher.
Research Agenda
Works
Explore our latest research papers, blog posts, and insights on AI alignment.
Our Journey
Follow our evolution from a tech consultancy to becoming pioneers in AI alignment research.
2016
AE Studio is born!
2021
Started our journey in BCI
2021
Continued Commitment to BCI and Human Agency
- Collaborations with top neurotech companies (Forest Neurotech, Blackrock Neurotech).
- Winner of the Neural Latent Benchmark Challenge, beating leading neuroscience labs globally.
- Developed widely-adopted open-source BCI tools (Neural Data Simulator, Neurotech Development Kit), privacy-preserving neural ML methods, and neuro metadata standards.
- Advocating increased government support and exploring BCI's future role in augmenting human intelligence alongside aligned AI.
2022
Our "Neglected Approaches" approach
2023
Eliezer Yudkowsky calls our work "not obviously stupid"
2024
We work with AI alignment clients
Future
Next steps
Our Team

Judd Rosenblatt
CEO of AE Studio, a mission-driven tech company advancing human agency by making sure AI doesn't kill us all.

Diogo de Lucena
Chief Scientist, leads cutting-edge AI safety research - building on a career spanning clinical AI and 20+ publications.

Mike Vaiana
R&D Director, conducts cutting-edge research on LLM reasoning in collaboration with top institutions.

Stijn Servaes
Senior Data Scientist, merges neuroscience and AI to advance alignment and consciousness research.

Keenan Pepper
Senior Data Scientist, blends physics, AI safety, and software engineering, bringing experience from LIGO to LLMs.

Cameron Berg
Research scientist probing AI consciousness and alignment, blending cognitive science and policy insight.

Florin Pop
Software engineer with a PhD who builds everything from encrypted apps to cloud microservices.

Lolo
AI Alignment researcher, focuses on physics-based methods for alignment, developing safe AI capabilities bounded by physical constraints.

Thomas
AI Alignment researcher, develops elegant solutions to problems in machine learning and natural language processing, with a focus on alignment.

Alex McKenzie
Alignment researcher working on introspection and self-modelling in language models, drawing on 8 years of software engineering experience and master's degrees in Maths & Philosophy (Oxford) and Computer Science (Georgia Tech).

Flavio Kicis
Software engineer with 10+ years of experience building quality software, formerly a tech lead at one of South America's largest SaaS companies.

Martin Leitgab
Research manager bringing 15+ years of experience leading high-stakes R&D in aerospace and medical devices to drive AI alignment and automation initiatives from strategy through execution.
We believe in interdisciplinary collaboration to solve the complex challenge of AI alignment.
Join Our Team →
We are actively collaborating with top minds:
Join Forces With Us
We're excited to form new alliances in our quest to secure a thriving AI future. Please feel free to reach out to us!
