90degrees

Enterprise Knowledge Systems

AI does not fail at the model. It fails at the knowledge.

I turn fragmented company knowledge into reliable AI systems. I show what works on real enterprise data, design the knowledge architecture that scales, and translate governance into operational technical measures.

Methodology

Three steps to turn company knowledge into reliable AI systems.

I work directly on your real data, identify what is technically and economically viable, and translate architecture and governance into operational reality. Each step delivers standalone value and creates the foundation for the next.

01

Prove

Start with evidence, not opinions.

Deliverables

  • Working proof on real company data
  • Assessment of data quality, retrieval potential, and feasibility boundaries
  • Technical decision memo with next-step recommendation
Dennis Dickmann

About

Builder | Architect | Strategist

I build enterprise AI systems where knowledge, retrieval, and real-world constraints actually matter.

My background spans deep technical engineering and CTO-level product and systems leadership — from custom retrieval and model infrastructure to enterprise copilots, regulated environments, and executive decision support.

I work best where systems are complex, requirements conflict, and decisions are expensive — and someone needs technical clarity without consulting theater.

  • Built and led an AI lab from zero as founder and CTO
  • Shipped enterprise AI copilots and retrieval systems on real company data
  • Trusted by enterprise and regulated organizations on architecture, risk, and governance
  • Recognized by HPE for technical innovation in enterprise AI
  • Open-source work in model serving, Triton kernels, and retrieval systems
  • ACM publication on LLM training and AMD MI300A system performance

Offers

Four ways to make enterprise AI work on real company knowledge.

Each engagement is tightly scoped, time-boxed, and built to produce a concrete result — not open-ended advisory.

01 / 3–5 days

Data Proof Sprint

Start by proving what your data can actually do.

The pain

Your knowledge is fragmented across documents, systems, and silos. Vendor demos look convincing, but nobody knows whether your own data is usable enough for a reliable AI system.

What you get

A working proof on your real data, clear feasibility boundaries, and a technical decision basis for what to build next — or what not to build at all.

Deliverables

Working proof on real company data
Assessment of data quality, retrieval potential, and feasibility boundaries
Technical decision memo with next-step recommendation
Architecture direction for a scalable next phase
02 / 5–7 days

Readiness Review

Then, understand what stands between a demo and a dependable system.

The pain

You have pilots, prototypes, or live systems, but nobody has a clear picture of where the real weaknesses are. Retrieval quality is shaky, costs are unclear, controls are weak, and the architecture is hard to judge.

What you get

A clear view of what is working, what is fragile, what is missing, and what should happen next, all translated into a prioritized 90-day action plan.

Deliverables

Assessment of architecture, data flows, retrieval quality, and system gaps
Evaluation of technical, operational, and economic constraints
Target-state architecture and prioritization logic
90-day roadmap with concrete actions, risks, and decisions
03 / 5–10 days

Operationalization Sprint

Then, turn requirements and governance into something your teams can actually run.

The pain

Legal, IT, and business all see different parts of the problem. Governance exists on paper, but not in operations. Nobody is turning requirements into technical controls, ownership, and day-to-day practice.

What you get

An operational setup your teams can work with: clear controls, responsibilities, documentation logic, and implementation priorities for scaling and governance.

Deliverables

Control and responsibility mapping for the relevant teams
Operational measures for governance, oversight, and documentation
Implementation priorities for scaling, compliance, and decision-making
AI Act and governance mapping aligned to your existing systems and processes
04 / Monthly

Technical Advisory Retainer

And if needed, keep the same technical depth on call as you move.

The pain

You do not need another large project. You need a senior technical counterpart who understands the system, sees the full picture, and can step in where needed — from architecture and governance to prototypes, vendor decisions, and strategic trade-offs.

What you get

An ongoing advisory setup that combines the substance of proof, review, and operationalization — with clear technical support, without forcing every new question into a separate sprint, and with the goal of making your team progressively independent.

Deliverables

Monthly review and decision session across architecture, governance, and priorities
Support for vendor, build, scaling, and operating-model decisions
Targeted technical deep dives, prototype reviews, or rapid proofs where needed
Priority access for high-stakes questions and critical moments

References

Built in production. Trusted where failure is expensive.

Selected work from product development, retrieval systems, open-source R&D, and regulated enterprise environments.

01 / Founder & CTO

AI Studio — Built and Led from Research to Production

Built enterprise systems for multimodal search and retrieval, shipped a production-grade AI copilot for highly specialized expert workflows, and owned R&D from model pruning to large-scale distributed training.

Context

Built an AI studio from the ground up — from research agenda and engineering team to infrastructure, architecture, product delivery, and commercial applications.

Impact

Delivered an enterprise AI copilot processing 50,000+ documents with traceable, permission-aware retrieval in production. The studio's technical output directly enabled multiple commercial products and proved that advanced retrieval and reasoning systems could move beyond demos into real operations.

02 / Researcher & Engineer

Open-Source R&D — Retrieval, Serving, and Performance-Critical AI Systems

Published open-source work in enterprise search, multimodal retrieval, efficient model serving, and performance-critical system components — and translated it into production-grade systems.

Context

Independent R&D focused on the hard technical layer behind production-grade enterprise AI: model efficiency, retrieval quality, serving custom encoders, multimodal indexing, and system-level performance under real scaling constraints.

Impact

This work turned advanced retrieval and model infrastructure into reusable systems: from custom vLLM serving plugins for retrieval-heavy models to a multimodal late-interaction index optimized for throughput, latency, memory efficiency, and reliability at scale. The work also includes an ACM publication on system-level performance insights for LLM training on AMD MI300A infrastructure.

03 / External Advisor

AI Architecture & Risk Review — Enterprise in a Regulated Environment

Reviewed the architecture, risks, and governance implications of multiple generative AI initiatives in a regulated enterprise setting and created the basis for executive decision-making.

Context

A large organization in a highly regulated environment was assessing a growing set of current and planned generative AI initiatives and needed a clearer view of architectural dependencies, risk exposure, governance gaps, and implementation priorities.

Impact

Created a more coherent basis for leadership decisions, helped unblock stalled initiatives, and established a prioritized path toward stronger controls and clearer operating responsibilities.

04 / Featured by Hewlett Packard Enterprise

HPE Digital Game Changers

Profiled by HPE for work on making enterprise AI more practical, efficient, and deployable in real business environments.

Context

HPE featured Seedbox.AI in its Digital Game Changers series around the use of supercomputing at HLRS to accelerate model training, reduce compute demand, and make AI deployment more viable for enterprise use.

Impact

The story reflects work at the intersection of AI and high-performance computing: reducing model training time, lowering compute requirements, and helping customers move from proof-of-concept toward real implementation.

Inside the Data Proof Sprint

Example: .pdf, .xlsx, .docx, .csv, .msg → structured_output.md

  # Maintenance Protocol
  ## 1. Schedule
  | Component | Interval | Due   |
  | Bearing   | 90 days  | 03/15 |
  | Hydraulic | 30 days  | 03/30 |
  ## 2. Failure Modes
  ### 2.1 Bearing Overheating
  - Root cause: Lubrication
  - Detection: >4.5 mm/s
  - Response: Shutdown
01 / 09

Data Structuring

PDFs, spreadsheets, scans, and legacy files are converted into a structured markdown representation so headings, tables, lists, and sections stay machine-readable.
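To make this step concrete, here is a deliberately minimal sketch of rule-based structuring. The detection rules (ALL-CAPS lines become headings, semicolon-separated rows become table rows, everything else becomes a list item) are illustrative assumptions, not the actual pipeline; production systems use layout-aware document parsers.

```python
# Minimal sketch: turn flat extracted text into markdown-like structure.
# Hypothetical rules for illustration only:
#   - ALL-CAPS lines        -> markdown headings
#   - semicolon-split lines -> markdown table rows
#   - anything else         -> list items

def structure_lines(lines):
    out = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        if line.isupper():                       # crude heading detection
            out.append(f"## {line.title()}")
        elif ";" in line:                        # crude table-row detection
            cells = [c.strip() for c in line.split(";")]
            out.append("| " + " | ".join(cells) + " |")
        else:
            out.append(f"- {line}")              # fallback: list item
    return "\n".join(out)

raw = [
    "MAINTENANCE SCHEDULE",
    "Bearing; 90 days; 03/15",
    "Hydraulic; 30 days; 03/30",
    "Response: shutdown on vibration above 4.5 mm/s",
]
print(structure_lines(raw))
```

The point is not the heuristics themselves but the output contract: once headings, tables, and lists survive extraction as explicit structure, retrieval can operate on sections and cells instead of an undifferentiated text blob.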

Fit Check

Not for everyone. Deliberately.

This work is highly specialized, direct, and execution-focused. The best engagements start with clarity on both sides.

This is for you if…

  • You have real company data, a concrete AI or retrieval challenge, and the willingness to work on reality rather than theory.
  • You want proof on your own data — not another vendor demo, innovation workshop, or slide deck.
  • You already have an AI initiative, prototype, or live system and need a clear technical view of what works, what breaks, and what should happen next.
  • You value senior technical judgment, direct communication, and scoped engagements with clear deliverables.
  • You are prepared to make decisions, commit resources, and act on an honest assessment — even when the answer is not what you hoped for.

This is not for you if…

  • You are looking for generic AI inspiration, trend talks, or strategy workshops without real technical work.
  • You want free brainstorming, unpaid scoping, or open-ended advisory without a defined engagement.
  • You need body leasing, staff augmentation, or someone to quietly disappear into a delivery team.
  • You are optimizing for the cheapest option rather than for clarity, technical quality, and risk reduction.
  • You want reassurance, not reality — or someone to tell you that AI will solve everything.

Contact

Let's make this concrete.

The best next step is a technical intro call. If you want to share context or materials first, send me a direct email.

Book a technical intro call

30 minutes. No pitch deck, no fluff. We look at your situation, your constraints, and whether a scoped engagement makes sense.

Email Me Directly

Feel free to send a short message about your current situation, challenges, or concrete goals.