B Capital (https://b.capital/)

Why We Invested in Lunar Energy
https://b.capital/why-we-invested/why-we-invested-in-lunar-energy/
Thu, 19 Mar 2026 13:00:12 +0000

By: Jeff Johnson, Karly Wentz, Nate Johnson and Eric Brook

 

The U.S. power grid is evolving rapidly as the economics and patterns of electricity production, consumption and pricing change. Demand is rising as homes electrify transportation, heating and appliances, while grid operations are growing more complex due to aging infrastructure, extreme weather and localized congestion. At the same time, retail electricity pricing is shifting toward time-based and dynamic structures that place more responsibility on end users to actively manage consumption.

For homeowners, this translates to higher bills, more frequent exposure to outages and limited ability to control when energy flows to or from the grid. For utilities, it brings rising peak loads, constrained capacity and growing reliance on flexible distributed resources to maintain reliability. Residential solar and storage can help address these challenges, but only when hardware, software and grid integration seamlessly work together.

Lunar Energy has built exactly that. The company’s modular battery system was designed not just for performance, but for real-world deployment with installers in mind – fewer components, faster commissioning, and flexible configurations for different home sizes. The result is a system that works for both homeowners and the installers who put it in.

B Capital is pleased to lead the company’s $100M Series C funding round. Founded in 2020 and headquartered in Mountain View, California, Lunar Energy’s integrated hardware and software platform was designed from the ground up to serve homeowners, installers, and grid operators.

 

Why Legacy Residential Storage Is No Longer Sufficient

The gap between what residential storage could deliver and what it delivers today comes down to system design. Many incumbent solutions were built for a regulatory environment defined by net metering and static electricity rates, where active optimization was optional rather than required. As pricing structures and grid requirements evolve, those assumptions no longer hold.

Hardware-first products often lack the software sophistication required to adapt to changing rates or grid signals. Software-only platforms, in turn, depend on third-party devices they don’t control. Most residential systems weren’t designed to participate meaningfully in distributed power plants, either because they lack the necessary controls or because their platforms weren’t built for large-scale coordination.

The result is a fragmented ecosystem where hardware is often undifferentiated, software is bolted on rather than deeply integrated and grid participation remains an afterthought. What’s needed to unlock the full value of residential storage is a platform built from the ground up, one that combines purpose-built hardware with intelligent software and native grid connectivity. That level of coordination depends on integration across hardware, software and grid interfaces at the system architecture level, rather than relying on legacy architectures that were not designed for coordinated dispatch.

 

Turning Homes into Grid Assets

Lunar Energy has built an integrated home battery system designed to make electrification simple, affordable and resilient. Residential batteries are no longer simply backup devices. They are increasingly becoming coordinated grid infrastructure. With 650 MW of distributed devices under management and new systems deployed daily, Lunar Energy is proving that the future of residential energy requires hardware and software working together from day one to optimize for homeowners, installers and grid operators.1

For homeowners, the Lunar Energy system delivers immediate, tangible value. The modular battery provides whole-home backup during outages, a critical feature as extreme weather events and grid strain become more frequent. In 2024, the average U.S. electricity customer lost power for 11 hours, nearly double the prior decade’s average, with major weather events accounting for 80% of those hours.2 Recent studies also show that the longest outage a typical customer experiences has grown from about 8 hours in 2022 to nearly 13 hours by mid-2025, exactly the kind of extended interruption a whole-home system is designed to cover.3

The product’s DC-coupled architecture, which connects solar panels directly to the battery and reduces unnecessary energy conversion, improves overall system efficiency. That higher efficiency translates directly into greater bill savings for homeowners, capturing more value from every kilowatt-hour of solar generation. Integrated smart circuit controls allow homeowners to prioritize which appliances stay powered during an outage via a simple app, giving them direct control over how stored energy is used.
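The efficiency argument can be made concrete with a little arithmetic. The sketch below uses hypothetical per-stage conversion efficiencies (96% per inversion, 98% for a DC-DC stage), which are round numbers of our own choosing, not Lunar Energy’s published figures:

```python
# Why DC coupling saves a conversion step when charging a battery from solar.
# Per-stage efficiencies below are assumed round numbers, not vendor specs.

INVERTER_EFF = 0.96   # one DC->AC or AC->DC conversion (assumed)
DC_DC_EFF = 0.98      # direct solar-to-battery DC-DC stage (assumed)

# AC-coupled: solar DC -> AC (inverter) -> DC (battery charger) -> AC (discharge)
ac_coupled = INVERTER_EFF * INVERTER_EFF * INVERTER_EFF

# DC-coupled: solar DC -> battery via DC-DC, then a single inversion on discharge
dc_coupled = DC_DC_EFF * INVERTER_EFF

print(f"AC-coupled solar->battery->home efficiency: {ac_coupled:.1%}")
print(f"DC-coupled solar->battery->home efficiency: {dc_coupled:.1%}")
```

Under these assumed numbers the DC-coupled path delivers roughly five to six percentage points more of every solar kilowatt-hour to the home, which is the mechanism behind the bill-savings claim.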

Beyond the system’s differentiated hardware, Lunar Energy’s software platform provides customers with even more value. Rather than relying on static schedules or manual configuration, the company operates a forecasting and optimization platform that continuously models household energy production, consumption and tariff structures. In turn, the system automatically determines how and when to charge or discharge the battery across three objectives: lowering electricity bills, preserving backup readiness and enabling grid participation. In practice, this means a homeowner can open the Lunar Energy app, see exactly how energy is flowing through their home, and adjust priorities in real time – keeping the refrigerator and home office running during a grid event while temporarily pausing the EV charger. The system can also act autonomously, shifting loads and dispatch timing based on rate signals the homeowner never has to monitor. For installers and third-party asset owners, the same platform provides fleet-level visibility and management tools, enabling them to monitor system health, track performance and manage warranty workflows across their entire installed base from a single interface.
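As a rough illustration of the charge/discharge logic described above, here is a deliberately simplified dispatch rule. The hourly prices, battery sizes, and the greedy above/below-average heuristic are all assumptions of ours; the company’s actual optimizer models forecasts, tariffs and grid signals far more richly:

```python
# Toy battery dispatch balancing two of the three objectives described above:
# lowering the bill (arbitrage against prices) while preserving backup
# readiness (never discharging below a reserve floor). Illustrative only.

def dispatch(prices, capacity_kwh, reserve_kwh, soc_kwh, rate_kw=5.0):
    """Return per-hour battery actions (+kWh to charge, -kWh to discharge)."""
    avg = sum(prices) / len(prices)
    plan = []
    for p in prices:
        if p > avg and soc_kwh - rate_kw >= reserve_kwh:
            action = -rate_kw          # expensive hour: discharge to cut the bill
        elif p < avg and soc_kwh + rate_kw <= capacity_kwh:
            action = rate_kw           # cheap hour: recharge, restore backup
        else:
            action = 0.0               # hold: protect the backup reserve
        soc_kwh += action
        plan.append(action)
    return plan

hourly_prices = [0.12, 0.10, 0.11, 0.35, 0.42, 0.38, 0.14, 0.12]  # $/kWh (assumed)
plan = dispatch(hourly_prices, capacity_kwh=20, reserve_kwh=8, soc_kwh=14)
print(plan)
```

Note how the rule refuses to discharge in hour six even though prices are high: the reserve floor takes priority, which is the "backup readiness" objective in miniature.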

For utilities and grid operators, Lunar Energy’s platform transforms residential batteries into dispatchable assets, meaning they can be coordinated and activated when the grid requires support. Lunar Energy’s software already manages one of the largest distributed power programs in the country, coordinating nearly 150,000 devices across key markets including California, Hawaii, New England and Puerto Rico.4 Each connected home becomes a node in a virtual power plant, an aggregated network of distributed batteries coordinated to operate as a single dispatchable resource to meet peak demand. By coordinating thousands of small assets in real-time, Lunar Energy helps reduce strain on the grid without the cost or emissions of traditional peaker plants, fast-ramping gas facilities used during peak demand periods. The results are meaningful: in 2025, Lunar Energy customers earned an average of $464 through grid participation and saved an additional $338 on electricity bills compared to traditional residential battery operation.5 6
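The scale of that aggregation is easy to sanity-check. A minimal back-of-envelope, combining the 650 MW figure above with the ~150,000-device program; the two fleets may not be identical, so treat this as order-of-magnitude only:

```python
# Rough virtual-power-plant arithmetic from the figures cited above.

fleet_mw = 650          # distributed capacity under management (as cited)
devices = 150_000       # devices in the distributed power program (as cited)

kw_per_home = fleet_mw * 1_000 / devices
print(f"Implied average dispatchable power per home: ~{kw_per_home:.1f} kW")

# Even partial participation in a coordinated event is peaker-plant scale:
event_mw = 0.5 * fleet_mw
print(f"Event with 50% of the fleet responding: ~{event_mw:.0f} MW")
```

A few kilowatts per home is individually trivial; hundreds of megawatts coordinated in real time is the scale at which a fleet can substitute for a fast-ramping gas plant.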

 

Market Position and Pathway to Scale

Lunar Energy has moved beyond development into its next phase of deployment at scale. The Lunar Energy system is fully commercialized and shipping today, with approximately 2,000 installations across key markets in the U.S. The company is scaling production to 20,000 units by the end of 2026 and 100,000 by 2028, a trajectory enabled by partnerships with contract manufacturers and a system architecture designed for scale.7

The company’s strategic go-to-market partnership and investment from Sunrun, the largest residential solar and storage provider in the U.S., gives Lunar Energy an immediate, high-volume deployment channel at national scale. Sunrun brings customer access, installation capability and deep utility relationships, while Lunar Energy provides the integrated platform purpose-built for complex regulatory and grid conditions. As both an installer and a third-party owner of residential energy assets, Sunrun benefits from Lunar Energy’s streamlined installation process and fleet management software across the full asset lifecycle. The partnership also enables Lunar Energy to expand its reach through additional channel partners to meet the needs of today’s market. Beyond Sunrun, Lunar Energy has established a broad network of regional and national installers, giving the company multiple paths to scale across geographies and customer segments.

 

Built by the Team That Scaled Modern Energy Storage

Lunar Energy’s founder and CEO, Kunal Girotra, previously led energy storage at Tesla during a formative period for the category. He was responsible for scaling battery products across residential, commercial and grid-scale applications, operating at the intersection of hardware engineering, manufacturing, software and grid integration.

That experience is directly reflected in Lunar Energy’s emphasis on system-level design and software-driven control. Success in residential storage requires navigating certification, supply chains, installation workflows, utility requirements and real-world failure modes. Kunal’s background reflects firsthand experience with these challenges at global scale, reducing execution risk in a sector where many companies underestimate operational complexity.

The broader Lunar Energy team brings complementary expertise across residential energy, grid software and large-scale system deployment. Collectively, they have built and operated platforms that manage energy assets across countless homes, providing a strong foundation for scaling the company’s integrated model.

 

Backing the Future of Residential Energy Infrastructure

At B Capital, we focus on companies we believe have proven, scalable and differentiated technology addressing constraints in critical energy infrastructure. Lunar Energy embodies this thesis. The company has moved beyond prototypes to full commercial deployment, with a product in-market and a clear path to scale. Its integrated hardware and software platform creates defensible differentiation in a market where most players compete on only one dimension. It also enables ongoing value capture through grid services and software coordination. Its ability to serve homeowners, installers and utilities positions Lunar Energy to capture value across the entire residential energy stack.

We see Lunar Energy as a potential platform company, one that could define how millions of American homes interact with the grid over the coming decades. The residential storage market is large and growing, with installed capacity projected to nearly double by 2030.8 But scale alone is not enough to capture the market. As distributed resources become central to grid reliability, leadership will increasingly accrue to companies positioned at the coordination layer. The winners will be companies that combine hardware excellence with software intelligence and deep grid integration. That is exactly what Lunar Energy has built, and we are thrilled to partner with Kunal and the team as they scale.

The investment was led by Jeff Johnson (General Partner, Head of Energy Tech at B Capital), alongside Karly Wentz (Partner, Energy Tech), with investment team members Nate Johnson and Eric Brook.

 

 


LEGAL DISCLAIMER
All information is as of 03.09.2026 and subject to change. This content is a high-level overview and for informational purposes only. The investment discussed herein is a portfolio company of B Capital; however, such investment does not represent all B Capital investments. Certain statements reflected herein reflect the subjective opinions and views of B Capital personnel. Such statements cannot be independently verified and are subject to change. There can be no assurance any such trends or correlations will continue in the future. Reference to third-party firms or businesses does not imply affiliation with or endorsement by such firms or businesses. It should not be assumed that any investments or companies identified and discussed herein were or will be profitable. Past performance is not indicative of future results. The information herein does not constitute or form part of an offer to issue or sell, or a solicitation of an offer to subscribe or buy, any securities or other financial instruments, nor does it constitute a financial promotion, investment advice or an inducement or incitement to participate in any product, offering or investment. Much of the relevant information is derived directly from various sources which B Capital believes to be reliable, but without independent verification. This information is provided for reference only and the companies described herein may not be representative of all relevant companies or B Capital investments. You should not rely upon this information to form the definitive basis for any decision, contract, commitment or action.

 

SOURCE

  1. Latitude Media, Home battery startup Lunar Energy aims to quadruple its manufacturing, February 4, 2026
  2. EIA, Hurricanes in 2024 led to the most hours without power in the United States in 10 years, December 1, 2025
  3. JD Power, Disasters Become a Fact of Life for Many U.S. Electric Utility Customers, October 2025
  4. Canary Media, Lunar Energy lands $232M to boost smart home batteries, February 4, 2026
  5. Bloomberg, Ex-Tesla Energy Chief Raises $230 Million for Battery Startup, February 4, 2026
  6. Lunar Energy, What is Lunar AI, https://www.lunarenergy.com/learn/learn-articles/what-is-lunar-ai
  7. Ibid.
  8. B Capital analysis

The AI Labor Crisis Isn’t Coming in 2028. The Investment Opportunity Is Here Now.
https://b.capital/insights/the-ai-labor-crisis-isnt-coming-in-2028-the-investment-opportunity-is-here-now/
Thu, 26 Feb 2026 15:40:46 +0000

By: Yan-David “Yanda” Erlich, General Partner, B Capital and Raj Ganguly, Co-Founder and Co-CEO, B Capital

 

Last weekend, the Citrini Research “2028 Global Intelligence Crisis” memo went viral, racking up roughly 16 million views after Michael Burry amplified it. IBM dropped 13%, and we saw broad weakness across software, payments, and delivery stocks.1 The market panicked.

The memo paints a vivid picture: AI replaces white-collar labor faster than the economy can absorb, consumer demand collapses, unemployment spikes past 10%, and the S&P draws down nearly 40%.2 They call it “Ghost GDP,” where output shows up in profits but doesn’t circulate because displaced workers have lost their income.

It’s a clean left-tail story, and it’s wrong as a base case, but it’s directionally correct on the structural shift underneath, which is exactly where the investment opportunity sits.

 

What the Doomers Get Right

Strip away the compressed timeline and the stacked worst-case assumptions, and several of Citrini’s structural observations hold up.

White-collar work is the near-term fault line. A large share of knowledge work is read, write, decide, and coordinate. That’s exactly where current models are strongest. McKinsey estimates 60-70% of employee time is automatable by AI.3 The pressure will show up first in back office, operations, finance, support, sales enablement, and parts of legal and compliance. These aren’t theoretical targets; these are the functions where we see enterprise buyers already pulling budget.

AI is moving from tool to coworker. The real shift isn’t chatbots getting smarter. It’s AI gaining persistent memory, learning on the job, and planning autonomously. We went from “help me write this” (early ChatGPT, basic copilots) to “do this for me” (Codex, Claude Code, support bots) and are now entering “own this with me.” That last phase changes org design, not just task execution.

The distributional tension is real. If gains accrue primarily to capital while labor lags, demand weakens and politics get volatile. Even without collapse, wage, tax, and benefit debates will intensify. That affects regulation and procurement behavior. Investors who ignore this are underpricing political risk.

Take rates will face pressure. Agents routing around software is overstated near term, but the direction is correct. As discovery, evaluation, and execution become automated, friction-based pricing power erodes. The winners will be workflow-embedded products and infrastructure providers.

 

Where It Falls Apart

The Citrini scenario only works if every aggressive assumption resolves in the same direction simultaneously within 24 months. That’s not analysis; it’s a horror story presented as a base case.

Enterprise adoption doesn’t move at speed. Capability may be exponential, but deployment is not. Data restructuring, compliance, procurement cycles, and retraining are multi-year arcs. Only 5% of AI pilots currently achieve measurable P&L impact, per MIT research.4 32% stall after pilot.5 METR’s own randomized controlled trial found experienced developers were 19% slower with AI tools, even as benchmarks showed superhuman coding performance, a stark reminder that lab capability and production value are different things.6 The bottleneck is not demand: 92% of Fortune 500 already use ChatGPT, 82% of executives plan AI agent integration within three years, and $252 billion went into corporate AI investment in 2024 alone.7, 8, 9 The bottleneck is the infrastructure to deploy and scale AI coworkers in production.

“Ghost GDP” confuses distribution with destruction. Labor savings don’t vanish; they reallocate through lower prices, capex, profits, dividends, and tax revenue. The issue is distribution and timing, not whether gains circulate. This is an important distinction for investors: the value gets created; the question is where it accrues.

Policy is treated as inert. Automatic stabilizers, monetary easing, fiscal transfers, mortgage forbearance, and credit restructurings historically interrupt demand spirals. The memo assumes none of these mechanisms activate. That’s not how economies function under stress.

Business execution is understated. Code is cheap, but trust, compliance, distribution, and operational execution are not. Agents may pressure take rates, but they don’t eliminate institutional infrastructure in two years.

 

The Narrative Shock Creates Real Opportunity

The market is conflating a long-term structural shift with a 24-month crisis scenario. That dislocation creates two types of opportunity: mispriced growth exposures in public markets and private-market picks-and-shovels that win regardless of macro path. The latter is where we’re focused.

 

1. AI Co-Worker Applications: Replacing Hours, Not Just Tasks

The highest-conviction opportunity is AI coworkers that land in the enterprise with measurable ROI inside 90 days. The investment filter is simple: can it replace measurable hours with compliance, auditability, and a clear feedback loop?

Software Engineering (~$370B addressable). Claude Code and Codex have commoditized code generation. The frontier has moved to enterprise context and verifiable domains where the AI can prove its work is correct: formal proofs, math/physics, security analysis, AI research itself. The moat here is organizational context, not raw coding ability.

Sales & GTM (~$245B addressable). The market is crowded with AI SDRs, and most of them are dead on arrival. The winners own the system of record, learn from outcomes, and close the loop on what converts. Data rights are the moat; if you can’t observe what works and improve autonomously, you’re a feature, not a company.

Finance & CFO Office (~$215B addressable). High-volume operational workflows with clear accountability: AR/AP, collections, procurement ops, FP&A, compliance reporting. These processes are rules-based but manual-intensive, making them ideal for AI coworkers. The companies we’re most excited about are the ones replacing FP&A analysts, not just augmenting them, where “better” is quantifiable and feedback is continuous.

When AI works in the enterprise, the returns are significant: $3.70 ROI per dollar on average, with top performers hitting $10.30.10 Gartner projects 33% of enterprise software will include agentic AI by 2028, with 15% of daily decisions made by agents.11

 

2. The Infrastructure Layer: Where Durable Value Accrues

The application layer gets the headlines. The infrastructure layer gets the margins. This is where we believe the market is most under-invested.

Agentic Memory & Context. Models have memory, they don’t have your memory. The gap is organizational context: docs, code, tickets, CRM data, team terminology, approval workflows. This is what separates a stateless chatbot from a colleague. The defensibility compounds because memory improves as context accumulates, and switching costs grow over time as the AI learns your organization. Think of it as the unlock from agent to colleague.

Orchestration & Multi-Agent Coordination. Agents don’t yet collaborate well, with each other or with humans. The missing layer includes coordination protocols, escalation paths, and seamless handoffs between AI and human coworkers. The network effects here are powerful: value increases as more agents and humans use the same coordination layer. This is Slack for the human-AI workforce.

Production Observability. When agents run 24/7, ops teams need real-time visibility into what’s working and what’s not. Most existing tools focus on dev and debug, but the real pain is keeping agents reliable at scale. The first company to nail production-first observability for agentic systems owns the category. This is the Datadog opportunity for the agentic era.

Agentic Security. OpenClaw made this viscerally obvious (more on that below), but the principle applies broadly: agents with persistent access to enterprise systems represent a fundamentally new attack surface. Least-privilege access, skill sandboxing, action-level anomaly detection, and identity management for non-human actors. This category barely existed a year ago, and it will be table stakes for enterprise deployment within two years.

 

3. Why Now: Four Curves Crossed Simultaneously

This thesis isn’t speculative. It’s grounded in four technical inflection points that converged in 2024-25, and the data keeps accelerating.

Reasoning quality is on an exponential curve, and it’s steepening. METR’s time horizon research, which measures the length of tasks AI agents can reliably complete autonomously, shows capability doubling every ~7 months over six years.12 Their updated TH1.1 methodology (January 2026) suggests recent progress is actually 20% faster than the historical trend, with post-2023 doubling at 131 days.13 Claude Opus 4.6 now clocks a 50%-time-horizon of roughly 14.5 hours, meaning it can autonomously complete tasks that would take a skilled human half a working day.14 If this trend continues for 2-4 more years, we’re looking at agents that can reliably execute week-long projects. MIT Technology Review called METR’s time horizon plot “the most misunderstood graph in AI.” The misunderstanding cuts both ways: doomers extrapolate it to imminent catastrophe, skeptics dismiss it as benchmark gaming. The investment-relevant reading is that autonomous task capability is compounding on a steep, consistent curve, and the gap between what agents can do on benchmarks and what they actually do in production is precisely the market we’re investing into.
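The compounding claim above is simple to make explicit. The sketch below extrapolates from the cited 14.5-hour 50% time horizon and 131-day doubling time; continued exponential growth is an assumption, not a forecast, and a 50%-reliability threshold is far looser than production-grade:

```python
# Back-of-envelope extrapolation of the METR time-horizon trend cited above.
# Both inputs come from the text; the extrapolation itself is an assumption.

DOUBLING_DAYS = 131      # post-2023 doubling time (TH1.1, as cited)
current_hours = 14.5     # Claude Opus 4.6 50% time horizon (as cited)

for years in (1, 2, 3, 4):
    doublings = years * 365 / DOUBLING_DAYS
    horizon = current_hours * 2 ** doublings
    print(f"+{years}y: ~{horizon:,.0f} h autonomous-task horizon "
          f"(~{horizon / 40:.0f} forty-hour weeks)")
```

Even if the true doubling time is meaningfully slower, the shape of the curve, not its exact slope, is what drives the thesis: week-long autonomous projects land within a few years on any plausible fit.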

That gap is real, by the way: METR’s own developer productivity RCT (July 2025) found that experienced open-source developers were 19% slower when using AI coding tools, despite believing they were 20% faster.15 Algorithmic benchmarks overstate real-world performance because they can’t capture code quality, context understanding, and integration complexity. This is the deployment gap, and this is the opportunity.

Tool use is standardizing, but not the way anyone expected. MCP (model context protocol) was supposed to be the universal connector between AI agents and external services, and it has real traction: 17,000+ servers on MCP.so, OAuth-native authentication, and adoption by OpenAI, Google, and Microsoft.16 But MCP isn’t the whole story anymore. Skills (reusable prompt-and-script bundles that encode domain knowledge into agent behavior) have exploded: 96,000+ on SkillsMP and 5,700+ on ClawHub – all built on the SKILL.md standard that emerged from coding agents like Claude Code and Codex CLI.17 Meanwhile, CLIs (command-line interfaces) are emerging as a surprisingly effective third pattern. Agents have been trained to be exceptionally good at using command-line tools, and CLIs handle authentication, structured output, and composability through patterns that have been battle-tested for decades. Karpathy called it publicly: build for agents by exposing functionality via CLI, publishing task-specific skills, and shipping MCP servers. The investment implication is that the “tool use” layer is not a single protocol but an ecosystem of complementary patterns. Skills encode knowledge, MCP provides authenticated access, and CLIs offer execution efficiency. The companies building the orchestration, discovery, and security layers across all three will own the integration tier of the agentic stack.

Context windows expanded to 2M+ tokens. Persistent memory across sessions is now possible. AI can remember what it learned yesterday.

Inference costs collapsed 50x. GPT-4 equivalent capability now costs $0.40 per million tokens versus $20 in 2022.18 DeepSeek pushed this even further, running 90% cheaper than Western providers.

These aren’t independent trends. They’re compounding. AI coworkers are now technically feasible, economically viable, and enterprise-ready. The question is no longer if but how fast enterprises can deploy them.

 

4. The OpenClaw Moment: What 157K Stars in 60 Days Tells Us About Where AI Is Headed

If you want a single case study that encapsulates the entire AI coworker opportunity and its risks, look at OpenClaw.

OpenClaw (formerly Moltbot, formerly Clawdbot) started as a weekend side project by developer Peter Steinberger: a personal AI assistant that runs locally on your machine and connects to your messaging apps, email, calendar, and file systems to act autonomously on your behalf. Not a chatbot, not a copilot, but an agent that does things for you across the tools you already use.

The reception was extraordinary. OpenClaw hit 100,000 GitHub stars faster than Linux, Kubernetes, or any project in GitHub history. It crossed 157,000 stars within 60 days.19 On January 30, 2026, alone, it gained 34,168 stars in 48 hours.20 The project spawned Moltbook, an AI-only social network where only agents could post, which hit 1.5 million registered agents in five days and drew coverage from Fortune, CNBC, and TechCrunch. Y Combinator’s podcast team showed up in lobster costumes.21 “Claw” became Silicon Valley slang for locally-hosted AI agents.

This is demand signal, not hype signal. People don’t want another chatbot; they want AI that manages their inbox, controls their schedule, organizes their files, and executes multi-step workflows while they do something else. OpenClaw’s value proposition was blunt: “AI that actually does things, not just talks.” That resonated so strongly that OpenAI took notice. On February 14, Steinberger announced he was joining OpenAI, a move widely interpreted as OpenAI’s play to acquire agentic AI talent after their $3B bid for Windsurf (Codeium) fell through.22 The project transitioned to an independent open-source foundation under MIT license, mirroring the governance model of Linux and Kubernetes.

The enthusiasm is the bullish signal. Now here’s the cautionary one.

Within weeks of going viral, SecurityScorecard found over 40,000 exposed OpenClaw instances on the public internet, 63% of them vulnerable to remote code execution.23 Researchers discovered 400+ malicious “skills” on ClawHub (OpenClaw’s marketplace) distributing infostealers, remote access trojans, and backdoors disguised as legitimate automation tools.24 A critical one-click RCE vulnerability (CVE-2026-25253) meant attackers could compromise systems through a single link without the user installing anything. Skills execute with full agent and system permissions, with no sandboxing and no least-privilege access. Users were following YouTube tutorials that never mentioned security, deploying agents on cloud servers with authentication set to “none.”

The most alarming development: Hudson Rock documented the first observed case of an infostealer harvesting an entire AI agent configuration, not just browser passwords but the complete identity, permissions, and API keys of a personal AI agent. That’s a new attack surface that didn’t exist twelve months ago, and infostealers are now targeting AI personas as high-value assets.

This matters for our thesis on three levels.

First, the demand is real and it’s massive. 157K stars in 60 days, OpenAI acquiring the creator, and “claw” entering the tech lexicon as a verb are not indicators of a fad. Consumers and developers are telling us, loudly, that they want persistent AI agents with real autonomy over their digital lives. The enterprise version of this same demand is the AI coworker.

Second, agentic security is not a feature request, it’s a category. When agents have persistent access to email, calendars, financial accounts, and code repositories, the blast radius of a single compromise is an order of magnitude larger than a stolen password. Enterprise buyers will not deploy AI coworkers at scale without permissions frameworks, action-level anomaly detection, skill sandboxing, and audit trails. Every CISO who reads the OpenClaw postmortems becomes a buyer for agentic security tooling.

Third, OpenClaw draws a bright line between consumer-grade agent experiments and enterprise-grade AI coworkers. The difference is infrastructure: identity, access control, policy enforcement, observability, and governance. OpenClaw shipped the agent but not the infrastructure underneath it. That infrastructure layer is exactly where we’re investing.

 

5. The PE and Credit Warning

The Citrini memo should be a warning sign for PE-backed SaaS with weak differentiation and friction-based economics. If your portfolio company’s pricing power depends on being embedded in a workflow that an AI agent can route around, your margins are on borrowed time.

The selective opportunity in PE and credit: buy-and-build modernization plays where AI compresses COGS and SG&A, and distressed recurring revenue assets deeply embedded in workflows that agents will need to run through, not around. Avoid aspirational ARR quality and unsecured exposure if growth stalls.

 

6. Services as a Wedge

Implementation friction is real, which makes transformation services investable: mapping processes, instrumenting data, integrating systems, training organizations, and building governance layers. These capabilities scale if paired with product and repeatable playbooks. Enterprises want outcomes, not tools, and the companies that combine software and delivery to implement coworkers and redesign processes will compound.

 

What We’re Looking For

The criteria we currently apply to investments across this thesis:

Team. AI technical depth plus domain expertise. Exceptional founders with strong founder-market fit over traction alone. In a market moving this fast, the team’s ability to navigate rapid shifts matters more than any current metric.

Data Moat Quality. As AI models commoditize, unique high-quality data with continuous feedback loops becomes the primary differentiator. If your data advantage can be replicated by a competitor with a bigger API budget, it’s not a moat.

Workflow Embedding Depth. Deep integration into daily workflows creates switching costs. Products that are indispensable to users’ daily work are defensible; products that sit on top of workflows are features waiting to be absorbed.

Progressive Defensibility. Technical moats alone don’t last in AI, and they may not even exist anymore. We’re looking for clear plans to layer in defenses over time: data accumulation, network effects, regulatory positioning, distribution lock-in.

Economic Value & Business Model Innovation. GTM and pricing that support AI economics for both the company and its customers. Sustainable unit economics are not optional.

We’re actively seeking Seed to Series C investments in AI coworker applications across engineering, sales, finance, and support, as well as enabling infrastructure in memory, orchestration, observability, and agentic security. Typical check sizes range from $5M to $50M.

 

Bottom Line

The Citrini piece is a useful stress test, not a prediction. The market appears to be treating a long-term structural shift as an imminent crisis, and that creates dislocation for investors willing to look past the noise.

The structural shift is real. METR’s data shows capability doubling every 4-7 months with no sign of deceleration. AI coworkers will change how enterprises operate, how work gets organized, and where value accrues. But it will follow enterprise timelines, not Twitter timelines.

If anything, the panic reinforces the thesis: AI coworkers and the enabling infrastructure are where durable value gets built. The 95% pilot failure rate isn’t evidence that AI doesn’t work — it’s the problem we’re investing to solve. OpenClaw showed us what happens when powerful agents ship without enterprise-grade infrastructure underneath them. The companies that bridge the gap from pilot to production, that give enterprises memory, orchestration, observability, security, and compliance, will capture outsized value in the decade ahead.

That’s where we’re putting capital, and that’s where we think you should be looking too.

Yan-David “Yanda” Erlich is a General Partner at B Capital, where he leads the firm’s AI co-worker and infrastructure investment thesis through Growth Fund IV and Ascent Fund III. Previously COO & CRO at Weights & Biases, GP at Coatue Ventures, and a 4x venture-backed founder. Reach him at [email protected].

Raj Ganguly is a Co-Founder and Co-CEO of B Capital. He is focused on connecting extraordinary entrepreneurs with the people, capital and support needed to drive exponential growth. In less than a decade and under Raj’s leadership, B Capital has grown into a global firm with 9 locations, 100+ employees and over $9B in assets under management.

 

 


LEGAL DISCLAIMER
All information is as of 2.25.2026 and subject to change. This content is a high-level overview and for informational purposes only. Certain statements reflected herein reflect the subjective opinions and views of B Capital personnel. Such statements cannot be independently verified and are subject to change. The companies discussed herein are not portfolio companies of B Capital. It should not be assumed that any companies identified and discussed herein were or will be profitable. Past performance is not indicative of future results. The information herein does not constitute or form part of an offer to issue or sell, or a solicitation of an offer to subscribe or buy, any securities or other financial instruments, nor does it constitute a financial promotion, investment advice or an inducement or incitement to participate in any product, offering or investment. Much of the relevant information is derived directly from various sources which B Capital believes to be reliable, but without independent verification. This information is provided for reference only and the companies described herein may not be representative of all relevant companies or B Capital investments. You should not rely upon this information to form the definitive basis for any decision, contract, commitment or action.

 

SOURCE

  1. Bloomberg, “Taleb, Citrini Fuel AI Scare Trade as IBM Drops Most in 25 Years,” February 23, 2026. https://www.bloomberg.com/news/articles/2026-02-23/software-payments-shares-tumble-after-citrini-post-on-ai-risks
  2. Citrini Research, “The 2028 Global Intelligence Crisis,” February 2026. https://www.citriniresearch.com/p/2028gic
  3. McKinsey Global Institute, “The Economic Potential of Generative AI: The Next Productivity Frontier,” June 2023. https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier
  4. MIT NANDA Initiative, “The GenAI Divide: State of AI in Business 2025,” August 2025. https://finance.yahoo.com/news/mit-report-95-generative-ai-105412686.html
  5. MIT NANDA Initiative, “The GenAI Divide: State of AI in Business 2025,” August 2025. https://finance.yahoo.com/news/mit-report-95-generative-ai-105412686.html
  6. METR, “Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity,” July 10, 2025. https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/
  7. VentureBeat, “OpenAI says ChatGPT now has 200M users,” August 2024. https://venturebeat.com/ai/openai-says-chatgpt-now-has-200m-users
  8. Microsoft, “2025 Work Trend Index: The Year the Frontier Firm Is Born,” April 2025. https://www.microsoft.com/en-us/worklab/work-trend-index/2025-the-year-the-frontier-firm-is-born
  9. Stanford Human-Centered AI Institute, AI Index Report 2025, April 2025. https://hai.stanford.edu/ai-index/2025-ai-index-report/economy
  10. IDC (via Microsoft), “Generative AI Delivering Substantial ROI,” January 2025. https://news.microsoft.com/en-xm/2025/01/14/generative-ai-delivering-substantial-roi-to-businesses-integrating-the-technology-across-operations-microsoft-sponsored-idc-report/
  11. Gartner, “Intelligent Agents in AI,” 2025. https://www.gartner.com/en/articles/intelligent-agent-in-ai
  12. METR, “Measuring AI Ability to Complete Long Tasks,” March 19, 2025. https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/
  13. METR, “Time Horizon 1.1,” January 29, 2026. https://metr.org/blog/2026-1-29-time-horizon-1-1/
  14. METR, “Time Horizons Live Dashboard,” February 2026. https://metr.org/time-horizons
  15. METR, “Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity,” July 10, 2025. https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/
  16. Anthropic, “Donating the Model Context Protocol to the Linux Foundation,” December 2025. https://www.anthropic.com/news/donating-the-model-context-protocol-and-establishing-of-the-agentic-ai-foundation
  17. SkillsMP, “Agent Skills Marketplace,” accessed February 2026. https://skillsmp.com; Dev.to (Haoyang Pang), “Every AI Agent Skills Platform You Need to Know in 2026,” February 2026. https://dev.to/haoyang_pang_a9f08cdb0b6c/every-ai-agent-skills-platform-you-need-to-know-in-2026-4alg
  18. Introl, “Inference Unit Economics: The True Cost Per Million Tokens,” December 2025. https://introl.com/blog/inference-unit-economics-true-cost-per-million-tokens-guide
  19. Immersive Labs, “OpenClaw Security Review: AI Agent or Malware Risk,” February 2026. https://www.immersivelabs.com/resources/c7-blog/openclaw-what-you-need-to-know-before-it-claws-its-way-into-your-organization
  20. Immersive Labs, “OpenClaw Security Review: AI Agent or Malware Risk,” February 2026. https://www.immersivelabs.com/resources/c7-blog/openclaw-what-you-need-to-know-before-it-claws-its-way-into-your-organization
  21. Axios, “Moltbook shows rapid demand for AI agents. The security world isn’t ready,” February 3, 2026. https://www.axios.com/2026/02/03/moltbook-openclaw-security-threats
  22. TechCrunch, “Windsurf’s CEO goes to Google; OpenAI’s acquisition falls apart,” July 11, 2025. https://techcrunch.com/2025/07/11/windsurfs-ceo-goes-to-google-openais-acquisition-falls-apart/
  23. SecurityScorecard, “How Exposed OpenClaw Deployments Turn Agentic AI Into an Attack Surface,” February 2026. https://securityscorecard.com/blog/how-exposed-openclaw-deployments-turn-agentic-ai-into-an-attack-surface/
  24. SecurityScorecard, “How Exposed OpenClaw Deployments Turn Agentic AI Into an Attack Surface,” February 2026. https://securityscorecard.com/blog/how-exposed-openclaw-deployments-turn-agentic-ai-into-an-attack-surface/

The post The AI Labor Crisis Isn’t Coming in 2028. The Investment Opportunity Is Here Now. appeared first on B Capital.

]]>
Translating Code When Failure is Not an Option: Why We Invested in Code Metal https://b.capital/why-we-invested/translating-code-when-failure-is-not-an-option-why-we-invested-in-code-metal/ Thu, 19 Feb 2026 20:20:46 +0000 https://b.capital/?p=7268 By: Ida Girma and Yan-David (Yanda) Erlich   B Capital is thrilled to partner with Code Metal as the company scales verifiable code translation for mission-critical systems.   Automated AI at a Higher Standard AI has transformed how we write software. Coding copilots and platforms now produce capable code and meaningfully accelerate developer workflows. Much...

The post Translating Code When Failure is Not an Option: Why We Invested in Code Metal appeared first on B Capital.

]]>
By: Ida Girma and Yan-David (Yanda) Erlich

 

B Capital is thrilled to partner with Code Metal as the company scales verifiable code translation for mission-critical systems.

 

Automated AI at a Higher Standard

AI has transformed how we write software. Coding copilots and platforms now produce capable code and meaningfully accelerate developer workflows. Much of the recent discourse across our industry and society centers on the massive productivity unlock, and manifold downstream effects, of advanced AI in coding. It is good and fast. For many traditional applications, this is enough.

But in domains like defense and aerospace, “good enough” is not good enough. In these environments, code runs satellites, jets, and edge devices deployed in contested or safety-critical settings. Regulation and compliance standards are high and uncompromising. A hallucinated function, unchecked edge case, or subtle memory bug can threaten national security, infrastructure, and, most gravely, human life.

 

The Defense Imperative

The Pentagon recently asserted that “AI-enabled capability development will re-define the character of military affairs over the next decade,” with defense budgets designed accordingly. Other governments similarly prioritize AI. The leading international summit on AI in the military domain acknowledges that AI “can and should contribute to international peace and security… help reduce the exposure of personnel to danger, improve the protection of civilians, and support more timely and better-informed decision-making.” As states intensify focus on AI, the need for verified, trustworthy code translation and edge computing has never been more urgent.

Yet policy ambition outpaces tooling. Code Metal has emerged as one of the fastest-growing defense technology companies precisely because it solves a severe and underappreciated problem: translating and optimizing massive codebases across hardware architectures and programming paradigms, with near-zero tolerance for error.

 

Code Metal’s Solution

Code Metal unites high-level reasoning with low-level verification to produce tested, optimized, and compliant code ready to deploy. This hybrid approach creates a translation and optimization engine reliable enough for defense and industrial customers, one that engineers trust in production.

Consider the challenge facing defense systems engineers today. When satellite communications protocols must be updated or ported to new hardware, or when legacy C++ defense systems must be modernized into memory-safe Rust to prevent cybersecurity vulnerabilities, the work is painstaking, manual, and slow. Existing AI coding tools offer speed, but they are not built for deployment in mission-critical environments.

Code Metal automates this translation with verification guarantees these systems demand—accelerating time-to-market, enabling portability across chips and devices, and powering rapid codebase modernization.

 

Growth at an Inflection Point

Code Metal has built remarkable commercial momentum in just two-and-a-half years since founding, winning customers including the U.S. Air Force, L3Harris, Toshiba, and RTX. The company is hitting hypergrowth, validating both the severity of the problem and the elegance of its solution.

While Code Metal’s initial work centered on defense applications, the platform also serves many enterprise domains. In telecommunications, companies are eager to translate high-level MATLAB prototypes into production-ready, edge-deployable code in days instead of months. Semiconductor manufacturers must rapidly translate open-source CUDA implementations into frameworks compatible with their chips. In automotive, industrial equipment, and other regulated spaces, the need recurs: moving prototypes to production, porting code between devices, and modernizing legacy systems into memory-safe languages—quickly, securely, and reliably. Code Metal enables all of this.

 

B Capital’s Verified Intelligence Thesis

Code Metal is B Capital’s latest investment in an AI thesis we’ve developed: as AI systems embed more deeply into critical infrastructure and decision-making, the standard must rise from plausible outputs to provable correctness. We need verified intelligence.

With our investments in Axiom (frontier mathematical reasoning, theorem proving, and discovery), Goodfire (interpretability research and systems design), and now Code Metal, we’ve put this conviction to work. Each portfolio company attacks a different surface of the problem. Together, they reflect our belief that trust in AI behaviors and outputs is essential in high-stakes domains.

Our verified intelligence thesis is deliberate and high conviction. It also forms just one part of a broader landscape we watch closely. We remain energized by frontier research directions that also engage with partially understood, emergent capabilities of large-scale AI. For example, we’re fascinated by forward-dynamics world models that enable agents to learn by imagining future scenarios, as well as continual learning architectures and hybrid systems that emulate the sophisticated hierarchies of human memory. The path to recursive self-improvement and superintelligence is neither straight nor settled, and we plan to invest across its most consequential turns. Still, we see an acute, underserved need for verified intelligence in massive markets, sometimes with human lives on the line. We’re excited to partner with Code Metal as they meet this need.

 

An Exceptional Team

Code Metal’s leaders are second-time founders with deep experience across aerospace, defense, and advanced AI systems. CEO Peter Morales previously developed AI reasoning systems for the F-35 and was a founding member of the AI Technology Group at MIT Lincoln Laboratory. CTO Alex Showalter-Bucher, also a Lincoln Lab alumnus, brings more than a decade of experience across defense agencies and technical leadership roles. Peter and Alex have felt the pain of translating and verifying mission-critical systems and know what it takes to earn trust in these markets.

The company has assembled an uncommonly strong bench of researchers and engineers from organizations including Intel, NASA, MathWorks, Lightmatter, and OpenAI. The team’s expertise bridges compilers, formal methods, AI systems, and high-stakes deployment environments. They’ve also proven they can sell and scale, building a winning culture.

 

Our Investment

B Capital invested in Code Metal’s Series B. We are proud to partner with the entire Code Metal team on this journey. Learn more at codemetal.ai or in WIRED’s coverage of the company.

 

 


LEGAL DISCLAIMER
All information is as of 2.18.2026 and subject to change. The investments discussed herein are portfolio companies of B Capital; however, such investments do not represent all B Capital investments. Certain statements reflected herein reflect the subjective opinions and views of B Capital personnel. Such statements cannot be independently verified and are subject to change. There can be no assurance any such trends or correlations will continue in the future. Reference to third-party firms or businesses does not imply affiliation with or endorsement by such firms or businesses. It should not be assumed that any investments or companies identified and discussed herein were or will be profitable. Past performance is not indicative of future results. The information herein does not constitute or form part of an offer to issue or sell, or a solicitation of an offer to subscribe or buy, any securities or other financial instruments, nor does it constitute a financial promotion, investment advice or an inducement or incitement to participate in any product, offering or investment. Much of the relevant information is derived directly from various sources which B Capital believes to be reliable, but without independent verification. This information is provided for reference only and the companies described herein may not be representative of all relevant companies or B Capital investments. You should not rely upon this information to form the definitive basis for any decision, contract, commitment or action.


]]>
Why We Invested in JetZero https://b.capital/why-we-invested/why-we-invested-in-jetzero/ Tue, 17 Feb 2026 14:54:28 +0000 https://b.capital/?p=7258 By: Jeff Johnson, Karly Wentz, Nate Johnson and Eric Brook   B Capital is proud to partner with JetZero, a next-generation aircraft manufacturer redefining aviation through breakthrough design and best-in-class aerodynamics. The company’s blended-wing-body aircraft, designed for commercial, cargo and government use cases, can deliver up to a 50% improvement in fuel efficiency compared to...

The post Why We Invested in JetZero appeared first on B Capital.

]]>
By: Jeff Johnson, Karly Wentz, Nate Johnson and Eric Brook

 

B Capital is proud to partner with JetZero, a next-generation aircraft manufacturer redefining aviation through breakthrough design and best-in-class aerodynamics. The company’s blended-wing-body aircraft, designed for commercial, cargo and government use cases, can deliver up to a 50% improvement in fuel efficiency compared to today’s aircraft.1

JetZero unlocks its step-change efficiency gains using technologies that airlines, regulators and airports can adopt in the near future, without waiting decades for new systems or infrastructure. This approach directly addresses airlines’ largest cost driver while fitting seamlessly within today’s airline and airport infrastructure. The first commercial delivery is forecast for the early 2030s, with early flight demonstrations as soon as 2027.

For partner airlines such as United Airlines, Delta Air Lines and Alaska Airlines, this will translate into lower operating costs and an improved passenger experience. The same platform also supports highly efficient cargo configurations and meaningful advantages for transport and tanker missions, including aerial refueling, thereby likely expanding JetZero’s addressable market beyond commercial aviation.

 

Aviation Faces Structural Constraints

For decades, progress in aviation efficiency has been driven by incremental improvements. Advancements in engines, materials and winglets delivered meaningful efficiency gains in prior generations, but additional optimization is yielding diminishing marginal returns. As those returns shrink, meaningful progress requires changes at the aircraft architecture level rather than continued refinement of the tube-and-wing, which has remained largely unchanged for nearly a century.

That legacy design is increasingly misaligned with industry realities: jet fuel is the largest operating cost for airlines, while decarbonization pressure continues to intensify. These challenges are compounded by a Boeing-Airbus duopoly in which unprecedented order backlogs, risk aversion and legacy incentives limit the ability of incumbents to launch disruptive new aircraft programs.

Meanwhile, global demand for air travel continues to grow while the levers available to improve economics and reduce emissions are becoming constrained. Many proposed solutions focus on future propulsion systems that require new infrastructure, new certification pathways or fundamental changes to airline operations. While potentially promising in the long-term, these approaches do little to address the near-term efficiency gap facing the global fleet.

Compounding this challenge is the gap between single-aisle and wide-body aircraft. Many medium-haul routes fall between what narrow-bodies can serve efficiently and what wide-bodies are designed for. Narrow-bodies often lack the required payload, range or seat economics, while wide-bodies are oversized and inefficient for these flights. As a result, airlines are forced to deploy aircraft that are poorly matched to demand, driving excess fuel burn and higher operating costs. A 2018 ICF study estimated that this “missing middle” represents one of the largest underserved segments in global aviation, spanning roughly 20–30% of global airline routes and hundreds of billions of dollars in potential market opportunity. That structural gap persists across global fleets today.2

 

JetZero’s Approach: Aerodynamics Over New Propulsion

The aviation industry is pursuing multiple paths to improve sustainability and economics. Sustainable aviation fuel is a drop-in solution but remains supply constrained and more expensive than conventional jet fuel. Hydrogen and electric propulsion promise deep emissions reductions but require new aircraft architectures, new infrastructure and new regulatory frameworks that will take decades to mature and often face significant range or payload limitations.

JetZero takes a different approach, focusing on the single largest efficiency lever available today: aerodynamics. By fundamentally improving how lift is generated and drag is reduced, JetZero delivers immediate, material efficiency gains while remaining compatible with existing engines, infrastructure and regulatory pathways. These gains compound across the system, reducing fuel burn, structural weight and thrust requirements.

JetZero’s Z4 aircraft is purpose-built for today’s market. With seating capacity of 200-250 passengers and ranges spanning domestic, transatlantic and select long-haul routes, the Z4 is designed as a direct replacement for aging 757 and 767 fleets, while outperforming both modern narrow-body and wide-body alternatives on cost per seat.3 That same efficiency profile translates directly to cargo operations, improving payload and fuel economics on medium- and long-haul freight routes.

Crucially, JetZero achieves these gains using existing engine technology and certified systems. Rather than waiting on hydrogen, batteries or regulatory reinvention, the aircraft is designed to operate within current FAA certification and airline operating frameworks. Advances in digital design tools, modern composites and certified off-the-shelf systems now make large-scale blended-wing-body aircraft manufacturable and certifiable for the first time.

We believe the underlying technology is materially de-risked. The design builds on more than 30 years of government and industry research and over $1B in cumulative investment across NASA, the U.S. Department of War and commercial aerospace programs.4,5 The program is now approaching a major de-risking milestone, with a full-scale non-commercial demonstrator and first flight planned in the near term, reducing remaining technical and execution risk.

 

Built for Real-World Operations and Execution

JetZero is designing the aircraft to fit within existing airport infrastructure and airline operations, without requiring new gates, jet bridges or specialized ground equipment. The blended-wing-body layout enables wider seats, higher ceilings and configurable cabin zones while potentially supporting faster turnaround times and more efficient boarding at congested hubs.

For passengers, this translates to a meaningfully differentiated experience: more spacious seating across all classes, wider aisles, reduced boarding congestion, improved overhead storage and cabin layouts that support quieter, more comfortable travel. Unlike incremental cabin retrofits, these improvements are structural, not cosmetic.

The development program combines aerospace rigor with a modern execution mindset. JetZero is vertically integrated where it matters most, owning the aircraft structure and flight control systems and sourcing certified components from established suppliers to reduce certification risk and non-recurring engineering. This execution model prioritizes capital deployment toward the most complex technical areas, supporting a more capital-efficient path from development through certification and into production.

 

Strong Validation from Partners

JetZero has secured meaningful commercial validation from leading airlines that are committing both capital and aircraft demand. United Airlines has invested in the company and entered into a conditional purchase agreement for 100 aircraft, with options for an additional 100.6 Alaska Airlines has also invested and secured early production positions, while Delta Air Lines is deeply engaged across aircraft design, operations and other contributions.7,8 Together, these partnerships reflect airline confidence not only in the aircraft’s economics but in JetZero’s ability to execute a clean-sheet program at scale.

The company also works closely with the U.S. Air Force on a full-scale demonstrator program supported by over $200 million in non-dilutive funding.9 For the Air Force, the platform is expected to deliver extended range, increased payload flexibility and lower fuel and logistics costs across transport and tanker missions. These capabilities address critical limitations of the aging tanker fleet.

In parallel, JetZero has secured significant non-dilutive state-level support from North Carolina tied to manufacturing and workforce development, further improving the capital efficiency of the program.10

 

The Right Team to Build a Market-Defining Company

Building a clean-sheet commercial aircraft requires rare depth across engineering, certification and industrial execution. JetZero’s leadership team blends experience scaling complex hardware programs with deep commercial aerospace expertise, including senior roles at Tesla, BETA Technologies, Boeing, SpaceX, Gulfstream and Northrop Grumman.

JetZero is led by CEO and co-founder Tom O’Leary, who previously held senior leadership roles at Tesla and served as COO at aerospace startup BETA Technologies. He brings an execution-driven operating mindset shaped by scaling complex hardware programs, along with a strong focus on customer outcomes and disciplined program delivery.

 

A Structural Shift in Commercial Aviation

We believe the aviation industry is at an inflection point. Passenger demand is growing, fuel and emissions constraints are tightening and incumbent OEMs face record backlogs with limited incentive to launch disruptive new programs. Airlines, regulators and governments are aligned around the need for real efficiency gains that can be delivered within current operational and regulatory frameworks.

By delivering a fundamentally more efficient aircraft using proven technologies, optimized for airline economics and flexible across passenger, cargo and government applications, JetZero is positioned to reshape a core segment of aviation. It is redefining what is possible within the constraints that actually matter.

At B Capital, we focus on companies that enable step-change improvements in critical infrastructure systems. JetZero represents that opportunity in aviation. We are proud to invest in JetZero as it advances toward demonstration flight and commercial service, and we look forward to supporting the company as it works to deliver a more efficient, resilient and competitive aviation industry.

 

The investment was led by Jeff Johnson (General Partner, Head of Energy Tech at B Capital), alongside Karly Wentz (Partner, Energy Tech), with investment team members Nate Johnson and Eric Brook.

 

 


LEGAL DISCLAIMER
All information is as of 1.5.2026 and subject to change. The investment discussed herein is a portfolio company of B Capital; however, such investment does not represent all B Capital investments. Certain statements reflected herein reflect the subjective opinions and views of B Capital personnel. Such statements cannot be independently verified and are subject to change. Reference to third-party firms or businesses does not imply affiliation with or endorsement by such firms or businesses. It should not be assumed that any investments or companies identified and discussed herein were or will be profitable. Past performance is not indicative of future results. The information herein does not constitute or form part of an offer to issue or sell, or a solicitation of an offer to subscribe or buy, any securities or other financial instruments, nor does it constitute a financial promotion, investment advice or an inducement or incitement to participate in any product, offering or investment. Much of the relevant information is derived directly from various sources which B Capital believes to be reliable, but without independent verification. This information is provided for reference only and the companies described herein may not be representative of all relevant companies or B Capital investments. You should not rely upon this information to form the definitive basis for any decision, contract, commitment or action.

 

SOURCE

  1. Alaska Airlines, Alaska Airlines announces investment in JetZero to propel innovative aircraft technology and design, August 13, 2024
  2. Boeing, Commercial Market Outlook, 2025-2044, 2025
  3. Delta Air Lines, Delta, JetZero partner to design the future of air travel by advancing first-of-its-kind, 50% more fuel-efficient aircraft for domestic and international routes, March 5, 2025
  4. EESI, U.S. and International Commitments to Tackle Commercial Aviation Emissions, January 31, 2025
  5. European Federation for Transport and Environment, The aviation industry and the stall in aircraft innovation, June 18, 2025
  6. ICF, Making the Case for a Middle of the Market Aircraft, 2018
  7. International Air Transport Association, Net zero 2050: new aircraft technology, December 2025
  8. International Air Transport Association, Reviving the Commercial Aircraft Supply Chain, October 2025
  9. International Air Transport Association, Unveiling the biggest airline costs, June 4, 2024
  10. International Council on Clean Transportation, Fuel burn of new commercial jet aircraft: 1960 to 2024, January 2025
  11. JetZero, United Invests in Next Generation Blended Wing Aircraft Start-Up JetZero, April 24, 2025
  12. McKinsey & Company, Fuel efficiency: Why airlines need to switch to more ambitious measures, March 2022
  13. Politico, Why electric aircraft may never be the next big thing, January 24, 2024
  14. S&P Global, Airlines see relief with $86 jet fuel, SAF costs hinder sustainability: IATA chief, June 2, 2025
  15. Scientific American, Hydrogen-Powered Airplanes Face 5 Big Challenges, May 4, 2024
  16. Seabury Capital Group, JetZero CEO Lands A Silicon Valley Mindset at U.S. Chamber of Commerce Global Aviation Summit, September 17, 2025
  17. University of Illinois, Grainger College of Engineering, Blended wing brings air travel greater range, fuel efficiency, and comfort, June 26, 2024

The post Why We Invested in JetZero appeared first on B Capital.

]]>
Where AI Value Will Be Built Next https://b.capital/insights/where-ai-value-will-be-built-next/ Thu, 29 Jan 2026 15:47:57 +0000 https://b.capital/?p=7195 Not in the Model, in the Enterprise Environment By: Yan-David “Yanda” Erlich, General Partner, B Capital   The Model Isn’t the Moat Anymore For two years, “AI progress” meant “the model got better.” That era is ending. The evidence is stark: according to MIT’s State of AI in Business 2025 report, only 5% of enterprise...

The post Where AI Value Will Be Built Next appeared first on B Capital.

]]>
Not in the Model, in the Enterprise Environment

By: Yan-David “Yanda” Erlich, General Partner, B Capital

 

The Model Isn’t the Moat Anymore

For two years, “AI progress” meant “the model got better.” That era is ending.

The evidence is stark: according to MIT’s State of AI in Business 2025 report, only 5% of enterprise GenAI pilots achieve measurable P&L impact.1 S&P Global found that 42% of companies abandoned most AI initiatives in 2025, up from 17% in 2024.2 The average organization scrapped 46% of AI proof-of-concepts before reaching production.2

These aren’t bad models. They’re bad environments.

Model capability is still improving, but for most enterprises it is no longer the limiting constraint. What matters now is everything around the model: integration, governance, distribution, measurement and the ability to learn in production without breaking trust.

One constraint is consistently underweighted in almost every AI strategy deck I see: organizational fit.

If AI is going to deliver durable value, it must function less like a tool and more like a coworker. One that collaborates with humans, operates inside real team workflows and carries context over time.

The winners won’t be the teams with the “smartest” model. They’ll be the teams with the best environment to deploy, trust and continuously improve AI.

 

The Coworker vs. Tool Distinction

This isn’t semantics. The difference between AI-as-tool and AI-as-coworker determines whether value compounds or collapses.

A tool waits to be invoked. It processes inputs and returns outputs. It has no memory of your organization, no understanding of how your team actually works, no awareness of who should approve what. Every session starts from zero.

A coworker maintains context across interactions. It knows your domain, your team’s terminology and your approval workflows. It can be delegated to, supervised and held accountable. It gets better at its job over time because it learns from outcomes, not just prompts.

The MIT data validates this distinction. Their research found that vendor-built solutions succeed at a 67% rate, while internal builds fail at a 67% rate.1 Why? Vendors who win are the ones building coworker-like systems with deep workflow integration, not generic tools bolted onto existing processes.

Consider what a new human hire experiences: onboarding, permissions, a manager, feedback loops, access to institutional knowledge, clear escalation paths. We don’t hand them a keyboard and expect productivity on day one. Yet that’s exactly how most enterprises deploy AI.

 

The Shift: From Capability Race to Execution Advantage

In practice, this shift shows up when AI performs well in pilots but fails to survive first contact with real workflows. Buyers are no longer asking “Does it ace a benchmark?” They’re asking a different class of questions altogether:

  • Integration: Can it plug into my workflows without rewriting my org chart?
  • Governance: Can it touch sensitive data without creating security, privacy or compliance blowback?
  • Accountability: Who is responsible when it’s wrong?
  • Measurement: Can we evaluate it in production and improve it safely?
  • Scale: Can we roll it out to thousands of users without adoption collapsing?
  • Collaboration: Can it work with my team like a competent new hire, or does it just generate text?

Those are not model questions. Those are execution questions.

And execution compounds. Deployment creates feedback. Feedback enables improvement. Improvement drives adoption. Adoption earns deeper integration. That loop becomes the moat.

McKinsey’s 2025 AI survey confirms this pattern: organizations reporting “significant” financial returns are twice as likely to have redesigned end-to-end workflows before selecting models.3 The execution advantage emerges less from initial capability and more from the ability to learn safely in production over time.

 

5 Ingredients of AI Execution Advantage

By “environment,” I mean the structural conditions that let AI compound in production. These conditions determine whether improvement accumulates or stalls after deployment.

 

1. Integration Surface

How quickly AI can ship into real workflows.

Value compounds fastest when AI lives inside the system of record, removes steps, reduces cycle time and tightens feedback loops. The MIT research shows that ROI is lowest in sales and marketing pilots, where most GenAI budgets are concentrated, and highest in back-office automation where integration is deepest.1

The integration question isn’t “can we connect via API?” It’s “can we embed deeply enough to observe outcomes and improve?”

 

2. Data Rights and Governance

What the system can legally and operationally learn from in production.

If you can’t observe outcomes, you can’t improve. If you can’t improve, you don’t compound. Companies that solve this and can learn from production without violating governance will outperform those that can’t.

 

3. Distribution and Procurement 

How deployments become default, not optional.

AI doesn’t win by demos. It wins by rollout. PwC’s 2025 survey found that 79% of organizations have adopted AI agents at some level, but only 35% report broad adoption, and 68% say half or fewer employees interact with agents in their daily work.4 The gap between “we have AI” and “AI is how we work” is primarily a distribution problem.

 

4. Production Learning Loop 

Evaluation, monitoring and improvement without breaking trust.

Real-world evaluation tied to business KPIs. Monitoring for drift and failure modes. Human routing for uncertainty. Continuous improvement with governance guardrails. Gartner predicts that 30% of GenAI projects will be abandoned after proof-of-concept by the end of 2025.5 Not because the technology failed, but because organizations couldn’t build the infrastructure to improve safely in production.
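As a deliberately simplified sketch of these mechanics, the snippet below wires together the three pieces named above: confidence-based routing to a human, KPI observation in production and a rolling drift check. All class names and thresholds are illustrative assumptions, not a reference to any particular product.

```python
from statistics import mean

class ProductionLoop:
    """Toy production learning loop (illustrative only): route uncertain
    outputs to humans, observe business KPIs, and flag drift when the
    rolling KPI mean falls below a guardrail."""

    def __init__(self, kpi_floor=0.8, confidence_floor=0.6, window=5):
        self.kpi_floor = kpi_floor            # guardrail on the business KPI
        self.confidence_floor = confidence_floor
        self.window = window                  # size of the rolling window
        self.scores = []                      # KPI observations from production

    def route(self, confidence):
        # Uncertain outputs escalate to a human instead of auto-shipping.
        return "human_review" if confidence < self.confidence_floor else "auto"

    def record(self, kpi_score):
        # Observe the business outcome, not just model accuracy.
        self.scores.append(kpi_score)

    def drifting(self):
        # Drift = rolling KPI mean below the guardrail.
        recent = self.scores[-self.window:]
        return bool(recent) and mean(recent) < self.kpi_floor
```

The point of the sketch is structural: routing, measurement and drift detection are separate concerns that must all exist before a deployed system can improve safely.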

 

5. Organizational Fit

AI must function as a coworker, not a tool.

This is the missing pillar that most AI strategies ignore entirely. Enterprises are networks of roles, permissions, incentives and handoffs. “Agentic” only works when AI behaves like a well-scoped teammate: collaboration mechanics inside existing workflows, identity and least-privilege permissions, durable memory and context and on-the-job learning that operates without violating governance.

When I evaluate AI companies, I ask: “Would you hire this system as a junior employee?” If the answer requires caveats about supervision, permissions and trust boundaries, you’ve identified the product work that matters.
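The coworker mechanics above can be made concrete with a toy sketch: an identity that holds explicit, least-privilege permissions, carries durable memory across interactions and escalates anything outside its scope. The class and method names are hypothetical, purely for illustration.

```python
class AICoworker:
    """Toy model of coworker fit (illustrative only): explicit identity,
    least-privilege permissions, durable memory, and an escalation path."""

    def __init__(self, identity, permissions):
        self.identity = identity
        self.permissions = set(permissions)  # least privilege: explicit grants only
        self.memory = []                     # durable context across sessions

    def remember(self, fact):
        # Context accumulates instead of resetting every session.
        self.memory.append(fact)

    def act(self, action):
        # Permitted actions run; everything else escalates to a human.
        if action in self.permissions:
            return f"{self.identity}: done ({action})"
        return f"{self.identity}: escalated ({action})"
```

Notice what a bare tool lacks by comparison: no `memory` that survives the session, no permission boundary and no escalation path when it is out of its depth.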

 

Where AI Value Compounds Fastest

Value concentrates where execution environments support compounding:

Instrumented digital workflows where shipping is fast and telemetry is rich. Software development, customer support, back-office operations. Anywhere outcomes can be observed quickly and iteration is cheap.

High-volume operational workflows with clear accountability and measurable outcomes. Claims processing, compliance review, financial operations. Environments where “better” is quantifiable and feedback is continuous.

Physical operations with telemetry and hard KPIs. Manufacturing, logistics, healthcare delivery. Domains where the system of record captures reality and improvement is directly measurable.

Generic assistants without durable data rights, distribution leverage and a compounding learning loop get competed down to commodity margins.

 

What This Means for Founders

Markets are not just industries. They are execution environments. This favors teams that optimize for compounding environments over early polish or surface-level performance.

Wedge into the system of record. Don’t build alongside the workflow. Become the workflow. The difference between “we integrate with Salesforce” and “we are where deals get done” is the difference between tool and coworker.

Secure data rights early. The legal and operational ability to learn from production is a moat. Companies that negotiate this upfront, while offering clear value exchange, will outperform those who treat it as a Phase 2 problem.

Design for procurement from day one. Audit logs, SSO, role-based access and compliance certifications. These aren’t features; they’re prerequisites for the environments where AI compounds.
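To make the procurement point tangible, here is a minimal sketch of role-based access backed by an append-only audit trail, the kind of plumbing buyers expect before rollout. Field and function names are illustrative assumptions, not any vendor's API.

```python
import json
from datetime import datetime, timezone

AUDIT_LOG = []  # append-only trail; a real system would persist this

def audited(user, role, action, allowed_roles):
    """Illustrative role-based access check that records every attempt,
    permitted or not, to the audit trail."""
    permitted = role in allowed_roles
    AUDIT_LOG.append(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "role": role,
        "action": action,
        "permitted": permitted,
    }))
    return permitted
```

Denied attempts are logged alongside permitted ones; an audit trail that only records successes is useless in an incident review.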

Treat evaluation as product. If you can’t show measurable improvement on business KPIs, you can’t justify continued investment.

Build the AI coworker layer. Collaboration, identity, permissions, memory and handoffs. This is the unsexy work that separates pilots from production systems.

Environments that support compounding often look weaker early yet outperform over time. This allows founders to look wrong early and still be right in the long run.

 

What This Means for Enterprises

Buying AI like ordinary software and expecting it to behave like ordinary software does not work. AI systems improve only when they are treated as production systems with owners, feedback and failure modes.

Establish an AI operating model. Clear owners, defined accountability and incident response. Who is responsible when the AI makes a mistake? If you can’t answer this question, you’re not ready for production.

Tie AI performance to business KPIs. Not accuracy metrics, not user satisfaction scores. Actual business outcomes: revenue, cost, cycle time and error rates.

Reduce fragmentation where learning loops need consistency. Every team using a different AI tool means every team learning in isolation. Consolidation isn’t about cost savings; it’s about compounding.

Treat AI coworker fit as a first-class requirement. When evaluating vendors, ask: “How does this integrate with how my team actually works?” Not how it works in a demo. How it works in your environment, with your permissions, your approval flows and your existing tools.

 

What This Means for Investors

Model quality is no longer the primary diligence question. Instead, evaluate:

Ownership of the integration surface. Does the company control the system of record, or are they dependent on someone else’s platform?

Durable data rights and credible governance. Can they legally and operationally learn from production? Is their data strategy a moat or a liability?

A scalable distribution path. Can they reach thousands of users without a proportional increase in sales and support costs?

Evidence of a production learning loop. Are they improving from deployment, or shipping static models?

A credible path to AI coworker fit. Can they function inside real enterprise environments with real permissions and real accountability?

We believe the best AI investments right now are companies building execution infrastructure, not model capability alone. The model layer is commoditizing; the execution layer is where durable value will be built.

 

How This Shows Up in Our Portfolio

This framework has shaped our investing strategy for some time. A few examples:

Perplexity: Enterprise knowledge work is an execution environment problem. Perplexity’s enterprise offering is explicitly about deploying AI into organizational context: collaboration in Spaces, answers from organizational apps and files, enterprise permissioning, auditability and “no training on your data.” This is governance, distribution and coworker-fit working together in production.

Unblocked: A literal AI coworker for engineering teams. Unblocked plugs into the tools engineers already use, connects code, documentation and conversations, and supplies shared team context that makes other AI coding tools more effective. Enterprise fit is table stakes: SSO, RBAC, audit logs and security posture designed for production.

Goodfire: If you care about production reliability, you eventually care about controlling behavior, not just prompting it. Goodfire is building interpretability tooling that surfaces failure modes, enables behavior design and supports durable fixes. This maps directly to the production learning loop and governance required for AI systems to improve safely.

Axiom: In domains where correctness is existential, value shifts toward systems that can reason rigorously and be evaluated against hard truth. Axiom’s focus on an AI mathematician is a wedge into verifiability-first reasoning. It’s upstream capability in service of downstream production requirements.

 

Where Advantage Compounds

For the next decade, the biggest AI outcomes will not come from “the model got better.”

They will likely come from environments where AI can be deployed, trusted, measured and improved continuously inside real workflows. The environments we choose to build in will determine which AI systems endure.

The data is already pointing the way: 95% of pilots fail not because AI doesn’t work, but because organizations haven’t built the necessary working environment.1 The 5% that succeed share common characteristics: deep workflow integration, clear governance, production learning loops and organizational fit.1

Capability is table stakes. Execution advantage is the moat.

The question for founders, enterprises and investors isn’t “which model is best?” It’s “which environments support compounding?”

Build there, and if you’re already building there, I’d love to talk to you.

 


Yan-David “Yanda” Erlich is a General Partner at B Capital, where he focuses on AI infrastructure and AI coworker investments. Previously, he was COO & CRO at Weights & Biases and a GP at Coatue.

 

 


LEGAL DISCLAIMER
All information is as of 1.21.2026 and subject to change. This content is a high-level overview and for informational purposes only. Certain statements reflected herein reflect the subjective opinions and views of B Capital personnel. Such statements cannot be independently verified and are subject to change. The investments discussed herein are portfolio companies of B Capital; however, such investments do not represent all B Capital investments. It should not be assumed that any investments or companies identified and discussed herein were or will be profitable. Past performance is not indicative of future results. The information herein does not constitute or form part of an offer to issue or sell, or a solicitation of an offer to subscribe or buy, any securities or other financial instruments, nor does it constitute a financial promotion, investment advice or an inducement or incitement to participate in any product, offering or investment. Much of the relevant information is derived directly from various sources which B Capital believes to be reliable, but without independent verification. This information is provided for reference only and the companies described herein may not be representative of all relevant companies or B Capital investments. You should not rely upon this information to form the definitive basis for any decision, contract, commitment or action.

 

SOURCE

  1. MIT Sloan Management Review and Boston Consulting Group, “The State of AI in Business 2025,” 2025. https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf
  2. S&P Global Market Intelligence, “Generative AI Shows Rapid Growth but Yields Mixed Results,” October 2025. https://www.spglobal.com/market-intelligence/en/news-insights/research/2025/10/generative-ai-shows-rapid-growth-but-yields-mixed-results
  3. McKinsey & Company, “The State of AI in 2025: Agents, Innovation, and Transformation,” 2025. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
  4. PwC AI Agent Survey, “AI Agents and Enterprise Adoption,” May 2025. https://www.pwc.com/us/en/tech-effect/ai-analytics/ai-agent-survey.html
  5. Gartner, “Gartner Predicts 30% of Generative AI Projects Will Be Abandoned After Proof of Concept by End of 2025,” July 29, 2024. https://www.gartner.com/en/newsroom/press-releases/2024-07-29-gartner-predicts-30-percent-of-generative-ai-projects-will-be-abandoned-after-proof-of-concept-by-end-of-2025

The post Where AI Value Will Be Built Next appeared first on B Capital.

]]>
Why We Invested in Fervo Energy https://b.capital/why-we-invested/why-we-invested-in-fervo-energy/ Thu, 22 Jan 2026 15:56:04 +0000 https://b.capital/?p=7172 By: Jeff Johnson, Karly Wentz, Nate Johnson and Eric Brook   B Capital is proud to partner with Fervo Energy, the leading developer of next-generation geothermal power, delivering 24/7, carbon free electricity at scale. Fervo Energy applies proven oil and gas drilling and subsurface techniques to unlock scalable geothermal development, enabling Enhanced Geothermal Systems (EGS)...

The post Why We Invested in Fervo Energy appeared first on B Capital.

]]>
By: Jeff Johnson, Karly Wentz, Nate Johnson and Eric Brook

 

B Capital is proud to partner with Fervo Energy, the leading developer of next-generation geothermal power, delivering 24/7, carbon free electricity at scale. Fervo Energy applies proven oil and gas drilling and subsurface techniques to unlock scalable geothermal development, enabling Enhanced Geothermal Systems (EGS) that materially expand where and how geothermal power can be deployed.

 

AI and Electricity Demand Are Driving the Next Wave of Power Infrastructure

U.S. power demand is accelerating at an unprecedented rate as AI-driven compute scales and more sectors electrify. Data centers, industrial facilities, and utilities require clean, high-uptime power sources that can be delivered on compressed timelines to meet projected load growth.

The set of solutions capable of meeting these requirements is narrow. Many alternatives face long permitting timelines, integration and interconnection challenges, or deployment schedules that are too slow for near-term demand. Speed, reliability, and cost have become critical.

Fervo Energy directly addresses this gap. By delivering clean, firm power with short development timelines – and its first commercial-scale project scheduled to come online in 2026 – Fervo Energy is positioned to supply capacity faster and more competitively than most alternatives available today. Combined with improving cost curves and growing regulatory and bipartisan support for geothermal, EGS is emerging as one of the few scalable solutions aligned with the grid’s immediate needs.

 

Why is EGS the Solution?

Geothermal energy is a long-established power source that converts the earth’s natural heat into electricity through wells drilled into hot subsurface rock. Historically, the deployment of geothermal energy depended on a narrow set of natural conditions, where high temperatures, permeable rock, and accessible water reservoirs naturally coexist. Once those sites were fully developed, growth in the geothermal space was increasingly constrained by site availability, rather than by demand.

This is increasingly misaligned with the growing electricity demand we see today. As AI and electrification push electricity demand higher, the grid requires significant sources of clean, reliable power that can be built near population and data center hubs. Traditional geothermal could only be developed in a few geologically ideal locations. EGS addresses this challenge by using established drilling and subsurface engineering techniques to create geothermal reservoirs where heat already exists. This meaningfully expands the sites appropriate for reservoir creation, fundamentally changing the scale and accessibility of geothermal power.

Conventional geothermal in the United States supports an estimated ~40 GW of potential capacity, constrained to rare locations with naturally favorable reservoirs – of which only ~4 GW is currently operating today.1 By contrast, EGS unlocks an estimated 5,500 GW+ of potential capacity by enabling development wherever hot rock exists, which is nearly ubiquitous.2,3 This expands the resource by orders of magnitude, transforming geothermal into a scalable baseload infrastructure that can be developed at a competitive cost on faster timelines, and where the grid needs energy most.

Furthermore, early field pilots demonstrate that EGS scalability is driven by repeatable execution and iteration over time, rather than geology alone. As drilling programs are standardized and wells are developed in sequence, performance improves and costs decline. This execution-driven learning curve positions geothermal to compete on both reliability and economics as deployment scales.

 

Why We Believe in Fervo Energy

As one of the first to demonstrate the viability of EGS at commercial scale, Fervo Energy is defining a new model for geothermal development grounded in repeatability, standardization, and data-driven execution. Fervo Energy anticipates its flagship development, Cape Station in Utah, will deliver 100 MW of clean, firm power beginning in 2026, with a path to 500 MW by 2028 – demonstrating both predictable performance and an attractive cost profile.

Fervo Energy stands out as the clear leader in EGS today, backed by early evidence across customer demand, field execution, and industry partnerships:

  1. Clear and growing demand for clean, firm power: Fervo Energy has secured long-term agreements with hyperscale customers, reflecting strong demand. Public commitments, including Fervo Energy’s 115 MW power purchase agreement with Google and NV Energy, demonstrate how demand for reliable, carbon-free power is being secured through long-duration contracts.
  2. Proven subsurface execution and reservoir development: Fervo Energy has successfully drilled horizontal geothermal wells using controlled fracture stimulation and permanent fiber-optic sensing, demonstrating repeatable execution across multiple drilling cycles. Its early commercial pilot, Project Red, showed meaningful technical progress, including drilling speeds that more than doubled relative to prior efforts, with third-party validation. At Cape Station, the company continues to improve drilling times and operational efficiency with each successive well.
  3. Best-in-class industry partnerships: Early collaboration with Devon Energy and Liberty Energy has strengthened Fervo Energy’s access to drilling expertise, equipment, and subsurface talent, further validating and accelerating its approach to scaling EGS.

We have followed Fervo Energy since first meeting Tim Latimer (CEO and Founder) in the Company’s early days. Over time, it has become clear that Fervo Energy’s technical progress reflects the experience and rigor of the team, as the work has progressed from early academic research to tackling one of the most important opportunities in the modern American energy space at scale.

Tim began his career as a drilling engineer at BHP Billiton, working across complex programs in the Permian and Eagle Ford basins, before earning an MBA and a master’s degree focused on energy and climate at Stanford. Jack Norbeck (CTO and Co-Founder) brings deep geothermal and reservoir engineering expertise shaped by hands-on work at Calpine’s Geysers field, research roles at the U.S. Geological Survey and Berkeley Lab, and a PhD in Energy Resources Engineering from Stanford, while David Ulrey (CFO) adds seasoned infrastructure finance leadership from energy capital markets and long-duration power project finance, along with prior roles in the U.S. Army and at National Oilwell Varco (NOV). Together, the team has demonstrated repeatable subsurface execution across pilots and early commercial development, reinforcing that they are the right team to scale EGS technology.

 

Conclusion

At B Capital, we believe the next decade of power markets will be defined by clean, firm generation that can be delivered with both speed and predictable economics. As investors with deep expertise across energy infrastructure and AI-driven compute, we focus on backing companies that enable the physical infrastructure required to support rising electricity demand, particularly as AI-driven load growth places new strain on power systems globally.

Fervo Energy sits at the center of this shift. By making geothermal a repeatable, scalable resource, the company has the potential to unlock gigawatts of clean, firm power capable of supporting industrial load centers and hyperscale compute. We are proud to partner with Fervo Energy as enhanced geothermal enters its next chapter at grid scale.

The investment was led by Jeff Johnson (General Partner, Head of Energy & Resilience Tech at B Capital), alongside Karly Wentz (Partner, Energy & Resilience Tech), with investment team members Nate Johnson and Eric Brook.

 

 


LEGAL DISCLAIMER
All information is as of 12.3.2025 and subject to change. The investment discussed herein is a portfolio company of B Capital; however, such investment does not represent all B Capital investments. Certain statements reflected herein reflect the subjective opinions and views of B Capital personnel. Such statements cannot be independently verified and are subject to change. Reference to third-party firms or businesses does not imply affiliation with or endorsement by such firms or businesses. It should not be assumed that any investments or companies identified and discussed herein were or will be profitable. Past performance is not indicative of future results. The information herein does not constitute or form part of an offer to issue or sell, or a solicitation of an offer to subscribe or buy, any securities or other financial instruments, nor does it constitute a financial promotion, investment advice or an inducement or incitement to participate in any product, offering or investment. Much of the relevant information is derived directly from various sources which B Capital believes to be reliable, but without independent verification. This information is provided for reference only and the companies described herein may not be representative of all relevant companies or B Capital investments. You should not rely upon this information to form the definitive basis for any decision, contract, commitment or action. Any forward-looking statements are based solely on information provided by the company or on publicly available data and reflect the views of the authors as of the date of publication.

SOURCE

  1. U.S. Department of Energy, Pathways to Commercial Liftoff: Next-Generation Geothermal Power, March 2024.
  2. U.S. Energy Information Administration (EIA), Electricity Existing Capacity by Energy Source, 2024.
  3. U.S. Department of Energy, Pathways to Commercial Liftoff: Next-Generation Geothermal Power, March 2024.

The post Why We Invested in Fervo Energy appeared first on B Capital.

]]>
B Capital Why We Invested: Solve Therapeutics https://b.capital/why-we-invested/b-capital-why-we-invested-solve-therapeutics/ Mon, 24 Nov 2025 17:30:49 +0000 https://b.capital/?p=7067 By: Robert Mittendorff MD, Jason Grosz, Jack Grimes and Ava Soltani   B Capital is proud to partner with Solve Therapeutics, a clinical-stage biotechnology company developing next-generation antibody-drug conjugates (ADCs) for patients with solid tumors. The investment was led by General Partner Robert Mittendorff, MD, MBA, who will join Solve’s board, and investment team members...

The post B Capital Why We Invested: Solve Therapeutics appeared first on B Capital.

]]>
By: Robert Mittendorff MD, Jason Grosz, Jack Grimes and Ava Soltani

 

B Capital is proud to partner with Solve Therapeutics, a clinical-stage biotechnology company developing next-generation antibody-drug conjugates (ADCs) for patients with solid tumors. The investment was led by General Partner Robert Mittendorff, MD, MBA, who will join Solve’s board, and investment team members Jason Grosz, Jack Grimes and Ava Soltani. Solve is advancing two programs in Phase 1 trials, SLV-154 and SLV-324, both of which leverage the company’s proprietary CloakLink™ technology.

 

Continued Evolution of ADCs

ADCs, which deliver cytotoxic payloads directly to tumor cells while reducing damage to healthy tissues, are a transformative therapeutic class and one of the most active areas of innovation in oncology. Their rapid adoption reflects advances in antibody engineering, linker chemistry, and cytotoxic payload design. Despite recent ADC successes, meaningful challenges remain, particularly around reducing toxicity and achieving greater efficacy in solid tumors. We believe the next generation of ADCs will be shaped by improvements across three core dimensions that Solve aims to address:

  • Linkers that enhance stability and reduce hydrophobicity: A key goal in ADC design is to ensure that the drug remains stable in circulation and releases its payload only after reaching the tumor. Hydrophilic linker systems, such as Solve’s CloakLink™ platform, aim to reduce hydrophobicity and improve plasma stability, potentially broadening the therapeutic index of ADCs across solid tumor settings.
  • Payloads optimized for potency and tolerability: ADC payloads must be powerful enough to kill tumor cells while maintaining a manageable safety profile. Selecting best-in-class payloads with optimal potency, a broad bystander effect*, and low affinity to drug efflux pumps will be critical to improving efficacy and slowing resistance.
  • Target and patient selection grounded in precision oncology: Expanding the ADC target universe to capture a broader range of solid tumors and identifying patients most likely to benefit from treatment remain essential to maximizing ADC efficacy. Solve’s novel targets and use of novel diagnostic approaches represent important progress toward enabling more precise patient stratification and better clinical outcomes.

We believe Solve is pioneering the next generation of ADCs through the thoughtful optimization of each of these key dimensions and are excited to see the performance of their best-in-class ADCs in the clinic.

 

A Proven Team with Deep Operational and Oncology Expertise

Beyond its platform and pipeline, Solve is led by a seasoned leadership team with deep oncology, clinical development, and operational experience. Solve’s management has extensive experience working together at oncology biotech companies and a history of successful biotech exits, including VelosBio (acquired by Merck for $2.75B) and Acerta Pharma (acquired by AstraZeneca for $7B). The depth of operational, scientific, and transaction experience within this leadership group positions Solve strongly as it advances its clinical programs.

  • Dave Johnson (CEO): Dave is a biotech executive with more than 30 years of experience in oncology. He previously served as CEO of Acerta Pharma, an oncology-focused pharma company, where he led the company through a critical phase of corporate growth from approximately 40 to 150+ employees. During his tenure, he guided the company from signal-seeking first-in-human trials to more than 20 active clinical studies, leading to its $7B acquisition by AstraZeneca. Dave later founded VelosBio, raising approximately $200M from a top-tier syndicate and advancing the company’s lead ADC into the clinic before its $2.75B acquisition by Merck.
  • Clayton Knox, MD (President): Before joining Solve, Clayton served as President and Chief Business/Operating Officer at VelosBio, where he co-led business operations and corporate strategy, helping guide the company through its $2.75B acquisition by Merck. He previously held senior roles at Acerta Pharma, contributing to its transaction with AstraZeneca, and at Mavupharma, where he led the company through IND preparation and its acquisition by AbbVie.
  • Langdon Miller, MD (Chief Medical Officer): Langdon has more than 30 years of experience in oncology drug development and clinical research. He served as Executive Vice President of Development and CMO at VelosBio, overseeing clinical programs leading up to the company’s acquisition by Merck. His previous roles include senior leadership positions at Calistoga Pharmaceuticals and Gilead, as well as earlier roles at PTC Therapeutics, Pharmacia & Upjohn and the National Cancer Institute.

At B Capital, we believe the next era of oncology will be shaped by therapeutics that combine strong biological rationale, optimized engineering, and precision patient selection. We are proud to support Solve Therapeutics as it advances its mission to deliver safer, more effective targeted therapies for patients with life-threatening cancers, and look forward to partnering with the team as its programs progress through the clinic.

 

*Bystander Effect: The ability of the ADC payload to diffuse to and kill neighboring cells in the tumor microenvironment after being delivered to an antigen-positive tumor cell.

 

 


LEGAL DISCLAIMER
All information is as of 11.19.2025 and subject to change. The investment discussed herein is a portfolio company of B Capital; however, such investment does not represent all B Capital investments. Certain statements herein reflect the subjective opinions and views of B Capital personnel. Such statements cannot be independently verified and are subject to change. Reference to third-party firms or businesses does not imply affiliation with or endorsement by such firms or businesses. It should not be assumed that any investments or companies identified and discussed herein were or will be profitable. Past performance is not indicative of future results. The information herein does not constitute or form part of an offer to issue or sell, or a solicitation of an offer to subscribe or buy, any securities or other financial instruments, nor does it constitute a financial promotion, investment advice or an inducement or incitement to participate in any product, offering or investment. Much of the relevant information is derived directly from various sources which B Capital believes to be reliable, but without independent verification. This information is provided for reference only and the companies described herein may not be representative of all relevant companies or B Capital investments. You should not rely upon this information to form the definitive basis for any decision, contract, commitment or action.

The post B Capital Why We Invested: Solve Therapeutics appeared first on B Capital.

B Capital Highlights AI-Driven Investment Momentum and Global Thought Leadership at 2025 Annual General Meeting and CEO Summit https://b.capital/news-article/b-capital-highlights-ai-driven-investment-momentum-and-global-thought-leadership-at-2025-annual-general-meeting-and-ceo-summit/ Thu, 23 Oct 2025 14:40:25 +0000 https://b.capital/?p=7025

The post B Capital Highlights AI-Driven Investment Momentum and Global Thought Leadership at 2025 Annual General Meeting and CEO Summit appeared first on B Capital.


NEW YORK–(BUSINESS WIRE)–B Capital, a global multi-stage investment firm, today announced key highlights from its 2025 Annual General Meeting (AGM) and Chief Executive Officer (CEO) Summit. The events brought together more than 200 CEOs, investors and industry luminaries to discuss how artificial intelligence (AI) is transforming businesses, investment strategies, leadership and global markets.

With the theme “Venture at AI Velocity,” B Capital’s 2025 AGM and CEO Summit reflected the firm’s dedication to partnering with visionary founders building enduring, technology-enabled businesses around the globe. The event further demonstrated the firm’s commitment to harnessing leading-edge AI technology to drive results.

“Our strategy was designed for this moment,” said Howard Morgan, Chairman and General Partner of B Capital. “With our exceptional team, global reach and commitment to advancing AI responsibly, we’re shaping the next generation of transformative companies.”

Annual General Meeting: Venture at AI Velocity

B Capital’s 2025 AGM spotlighted firm updates and portfolio highlights and emphasized the strong foundation the firm has built to drive the next phase of the AI evolution.

B Capital’s leadership underscored four key differentiators that will help position the firm to succeed in the current market environment:

  • Founders with technology builder/operator DNA.
  • Deep technical expertise across AI, healthcare and resilience tech sectors.
  • Global network, enabling early access to exceptional founders and markets.
  • Value-add approach, leveraging B Capital’s strategic partnership with BCG, internal advisory resources and leading-edge AI engineering team, which supports both the firm and portfolio.

“AI is not a sector, but rather, a foundational capability that amplifies everything we do,” said Eduardo Saverin, Co-Founder and Co-CEO of B Capital. “Our platform connects expertise, technology and global reach in a way that allows us to invest and operate at AI velocity.”

The AGM featured a lineup of key opinion leaders, including:

  • William Ford (CEO, General Atlantic), Lynn Martin (President, NYSE), and Steve Pagliuca (Senior Advisor, Bain Capital) with Romaine Bostick (Bloomberg) – Markets in Motion: Strategies for a Transforming Economy
  • Yan-David Erlich (General Partner, Tech AI, B Capital) with Carina Hong (Co-Founder & CEO, Axiom) and Eric Ho (Co-Founder & CEO, Goodfire) – Expanding the Frontiers of AI Systems
  • Robert Mittendorff, MD (General Partner, Head of Healthcare, B Capital) with Sandeep Gupta (Innovaccer) and Justin Nicols (Sift Healthcare) – Accelerating Health: From Intelligence to Impact
  • Timur Akazhanov (General Partner, Head of Strategic Growth, B Capital) with Linus Bergstrom (BCG), Luke Hansen (CompanyCam) and Andrea Pisoni (Head of AI, B Capital) – AI as an Advantage: Driving Portfolio Value
  • Karly Wentz (Partner, B Capital) with Jeff Johnson (General Partner, Head of Resilience Tech, B Capital), Kunal Girotra (Lunar Energy), and Tom O’Leary (JetZero) – Resilience Tech: Market-Driven Solutions in a Climate 3.0 World
  • Vikas Taneja (Managing Director & Partner, BCG and B Capital Advisor) with David Agus, MD (Ellison Medical Institute), Jennifer Morris (The Nature Conservancy) and Howard Morgan (Chairman & General Partner, B Capital) – Inventing Tomorrow: Breakthroughs That Redefine What’s Possible
  • And three-time Olympic Gold Medalist Shaun White (The Snow League; WHITESPACE) in conversation with Adam Lilling (PLUS Capital) – Going for Gold

CEO Summit: Leadership and Purpose in an Era of Acceleration

Now in its second year, B Capital’s CEO Summit was designed to provide the firm’s portfolio company founders and CEOs with an opportunity to focus on leadership, while also learning from one another and world-class operators across industries. This year, B Capital convened roughly three dozen CEOs building transformative companies across tech, AI, healthcare and resilience tech.

“Our CEO Summit exemplifies the highly curated ecosystem we’ve built at B Capital, bringing together the brightest minds in technology and business to exchange ideas that shape industries,” said Raj Ganguly, Co-Founder and Co-CEO of B Capital. “Innovation is never a solo act – it’s learned, shared and constantly evolving. This summit reinforces how our value-add goes far beyond capital; it’s about building a community that drives meaningful, long-term impact.”

The program, which focused on scaling responsibly, leading through uncertainty and building enduring cultures, featured:

  • Andy Dunn (Founder, Pie; Co-Founder, Bonobos) in conversation with Brad Kay (Founder & CEO, Chief Brand Advisory) – Highs & Lows: Lessons from an Exited Founder
  • Arianna Huffington (Founder & CEO, Thrive Global) with Rich Lesser (Global Chair, BCG) – Achieving Your Vision While Building a Thriving Life
  • Liza Landsman (CEO, The Points Guy) – Early Stage: Scaling from $10M to $100M
  • Russell Dubner (Global CCO, BCG) – Growth: Earning and Building Corporate Trust
  • Jimmy Pitaro (Chairman, ESPN) with Hannah Storm (Anchor, ESPN) – Leading Through Constant Change
  • Karen Page (Board Partner, B Capital), Dan Rosensweig (Executive Chairman, Chegg), and Lisa Shalett (Co-Founder, Extraordinary Women on Boards) in conversation with Alex Konrad (Forbes, Upstarts) – Building and Managing an Effective Board
  • Howard Morgan (Chairman & General Partner, B Capital) with Lucinda Shen (Axios) – Lessons from a Legend

The CEO Summit concluded with a fireside chat with David Rubenstein, Co-Founder and Chairman of The Carlyle Group.

About B Capital

B Capital invests globally in extraordinary founders and businesses shaping the future through technology. With more than $9 billion in assets under management and dedicated stage-based funds, the firm focuses on seed to early- and late-stage venture growth investments, primarily in the technology, healthcare and resilience tech sectors. Founded in 2015, B Capital has an integrated, global team across nine locations in the U.S. and Asia. The firm’s value-add platform, together with the consulting expertise of its strategic partner, The Boston Consulting Group, provides entrepreneurs with the tools and resources to scale quickly and efficiently, expand into new markets and build market-leading businesses. For more information, visit b.capital.


Toward Mathematical Superintelligence: Why We Invested in Axiom https://b.capital/why-we-invested/toward-mathematical-superintelligence-why-we-invested-in-axiom/ Tue, 30 Sep 2025 14:00:53 +0000 https://b.capital/?p=6995

The post Toward Mathematical Superintelligence: Why We Invested in Axiom appeared first on B Capital.

By: Ida Girma and Yan-David (Yanda) Erlich

B Capital is thrilled to partner with Axiom on its mission to build mathematical superintelligence.

 

Mathematics as the Next Frontier

Mathematics demystifies our most complex systems: from the universe swelling through spacetime to the neural circuits firing in our dreams. For centuries, researchers with rare intellect and specialized expertise dedicated their lives to expanding the frontiers of our knowledge. Today, AI promises to usher in a new era of rapid, exponential breakthroughs. Yet our capacity for discovery in highly complex domains remains constrained by the reasoning limitations of even the most advanced AI systems.

Language models produce astonishing results, with emergent capabilities rapidly unfolding. But these models still make reasoning errors in unpredictable, inscrutable ways. While language models’ training inputs are vast, they vary in quality and range from casual, unstructured text to formal, structured data. Post-training techniques better align model behavior with human preferences, resulting in excellent outputs across most contexts. However, these outputs still fall short of the provable correctness necessary in the most critical quantitative disciplines.

 

Axiom’s Differentiated Approach

True precision in quantitative intelligence may well require reimagining reasoning within the sandbox of mathematics. Axiom is leading this bold approach.

The company builds on the insight that the specialized programming language Lean and advanced mathematical proofs offer uniquely rigorous training data. By combining AI, programming languages, and mathematics, Axiom is creating the foundation for verified quantitative reasoning that’s provably correct. The result: a self-improving reasoner, starting with an AI mathematician.

Its scientific and commercial applications are vast. Consider the high-stakes, multi-trillion-dollar fields of alpha discovery in quantitative trading, software and protocol verification, engineering, and frontier scientific research. In each, legibility of logic and certainty of correctness are essential. Black-box reasoning, no matter how convincing, cannot be fully trusted to direct consequential decisions in these domains. A tool that conjectures and proves novel quantitative hypotheses, and ultimately provides reasoning certainty for decisions, will be immensely valuable. Now is the time to build it.

 

The People Driving Mathematical Superintelligence

The Axiom team is uniquely suited to seize this moment and make advanced mathematical reasoning a reality. CEO Carina Hong is a generational mathematician—a winner of the Morgan Prize whose published research spans number theory, combinatorics, theoretical computer science, and probability. She also operates with a fire and focus we recognize in the very best of founders, bridging visionary leadership and relentless execution.

Carina has assembled an exceptionally talent-dense founding team with impressive speed. CTO Shubho Sengupta led Meta FAIR teams that developed OpenGo and CrypTen. Before that, he worked on distributed training systems that shaped Google Brain and was among the earliest CUDA developers. François Charton joins Axiom after pioneering the application of transformers to complex mathematical problems starting back in 2019. Notably, he recently solved a century-old open problem, disproving a 30-year-old conjecture. Hugh Leather’s trailblazing experience applying deep learning to code generation, including building the first LLMs for compilers and GPU code generation, brings yet another critical strength to Axiom’s team.

Every Axiom team member we spend time with—from senior leaders with decades of experience to high-velocity engineers and gifted undergraduate interns—brims with great research intuition and taste, drive, and a passion to shape the future of AI and mathematics. We believe they have the right domain intellect, strategy, execution, and culture to thrive, and they are just getting started.

 

Our Investment in Axiom

B Capital led Axiom’s Seed round, with Yan-David (Yanda) Erlich joining as board director and Ida Girma as board observer. We’re proud to partner with Axiom on this journey toward mathematical superintelligence.

Learn more at axiommath.ai and in Forbes, out today.

 

 


LEGAL DISCLAIMER
All information is as of 9.29.2025 and subject to change. The investment discussed herein is a portfolio company of B Capital; however, such investment does not represent all B Capital investments. Certain statements herein reflect the subjective opinions and views of B Capital personnel. Such statements cannot be independently verified and are subject to change. Reference to third-party firms or businesses does not imply affiliation with or endorsement by such firms or businesses. It should not be assumed that any investments or companies identified and discussed herein were or will be profitable. Past performance is not indicative of future results. The information herein does not constitute or form part of an offer to issue or sell, or a solicitation of an offer to subscribe or buy, any securities or other financial instruments, nor does it constitute a financial promotion, investment advice or an inducement or incitement to participate in any product, offering or investment. Much of the relevant information is derived directly from various sources which B Capital believes to be reliable, but without independent verification. This information is provided for reference only and the companies described herein may not be representative of all relevant companies or B Capital investments. You should not rely upon this information to form the definitive basis for any decision, contract, commitment or action.


The 10 Factors That Guide Our Resilience Tech Investments https://b.capital/insights/the-10-factors-that-guide-our-resilience-tech-investments/ Thu, 18 Sep 2025 19:32:08 +0000 https://b.capital/?p=6973

The post The 10 Factors That Guide Our Resilience Tech Investments appeared first on B Capital.

By: Jeff Johnson, General Partner & Head of Energy & Resilience, B Capital and Karly Wentz, Partner, B Capital

 

The world is entering a new era of disruption driven by geopolitical instability, resource scarcity, weather extremes, and aging infrastructure. At B Capital, we invest in Resilience Tech—companies building the energy, industrial, and infrastructure systems that keep industries, governments and communities running when it matters most. We outlined our approach in Resilience Tech: Systems That Withstand What’s Next.

Building in this space isn’t easy. Compared to other areas of investing, Resilience Tech covers a broad range of businesses, including hardware, software, deep tech and infrastructure. It requires balancing capital intensity with efficiency, navigating regulation and real-world complexity, and proving both technical and commercial viability. As we explored in Climate 3.0: Investing in Scalable, Profitable Climate Solutions, success comes from pairing strong business fundamentals with solutions that can scale.

Below is our 10-point “North Star” growth investment framework—an outline of how we typically underwrite companies and how we believe the most successful companies will be built. It’s designed to assess which technologies will succeed not just in theory, but in practice.

Note: For illustrative purposes only and subject to change. Certain statements herein reflect the subjective opinions and views of B Capital and its personnel. Such statements cannot be independently verified and are subject to change. Actual portfolio construction and returns of investments will vary.

1. Multiple metrics used to inform capital efficiency, including forward burn multiple (projected cumulative cash burn / new annual revenue over next two years) and net value creation multiple ((projected enterprise value at exit – current pre-money valuation) / total equity raised through exit, including current round)

 

1. Market Readiness: Customer Pull, Not Just Potential

The first thing we ask is whether the market is ready, and whether a company can build a real scaled business. Too often, early-stage climate startups fall into what we call the “first-of-a-kind folly”: pilots look promising, but the jump to the first commercial-scale project never happens. To us, a ready market is one where customers are pulling solutions in, with budget lines, procurement processes, and incentives already in place to support adoption. That pull matters because it separates an interesting technology from a business that can scale.

That’s why we prioritize companies earning revenue from their customers’ operating budgets rather than from marketing or innovation budgets. It shows that the product is solving a core need. We also look closely at whether a company has chosen the right entry point for today’s market, not just the market as it might exist a decade from now. Too often we see products designed for the way the world should work in 10 to 20 years, before the infrastructure, incentives or customer behavior are in place to support them. For example, the smart grid companies of the 2010s were working on interesting technologies, but struggled commercially because the market wasn’t yet ready.

In addition to prioritizing real revenue, we look for large markets that can support a scaled VC-backable business. We don’t inflate numbers with a theoretical TAM; instead, we focus on the Serviceable Addressable Market (SAM) at exit—the share that can realistically be captured and turned into revenue.

Best-in-class benchmarks:

  • $10M+ in next-twelve-months (NTM) revenue
  • Signed contracts or conditional purchase orders (POs) into recurring, active budgets
  • SAM > $5B, with a credible path to penetration

These aren’t hard gates; plenty of exceptional companies fall short on one or more of them and still go on to become generational businesses. But they’re the markers that help give us conviction.

 

2. Competitive Advantage: Be a Painkiller

In a space where dozens of companies claim to solve the same problem, differentiation is everything. We look for Resilience Tech startups offering transformative products, not just incremental improvements. Many Resilience Tech solutions have long been vitamins, helpful but not essential. We’re looking for painkillers.

We are most confident in companies that demonstrate a structural moat, whether it be through proprietary technology, superior data, privileged access or unique distribution.

Best-in-class benchmarks:

  • Clearly articulated differentiation that customers care about and are willing to pay for
  • Defensibility through IP, network effects, data or supply chain
  • Emerging winner in their category

In a crowded field, small technical differences rarely stand out. Real breakthroughs come from clear, meaningful advantages.

 

3. Adoption Readiness: Proven Tech that Can Scale

We invest in proven technologies with a clear path to commercialization. Too often, Resilience Tech startups focus on technical milestones and delay what is often the more salient question: How can this solution be adopted at scale? The traditional Technology Readiness Level (TRL) framework falls short in answering this. That’s why we use the Department of Energy’s Adoption Readiness Level (ARL) framework, which evaluates a technology’s commercial viability.

The ARL framework expands the lens to include 17 dimensions across four critical categories:

  • Value Proposition – Does the product solve a real customer problem at a compelling price?
  • Market Acceptance – Does the company have the channels, partnerships, and operating model needed to translate demand into scaled adoption?
  • Resource Maturity – Can the company secure the inputs and partners it needs to scale?
  • License to Operate – Are regulatory, community, and permitting hurdles addressed?

Best-in-class benchmarks:

  • ARL 7+, signaling readiness for scaled deployment (i.e., proven customer demand and market conditions to support adoption)
  • Demonstrated ability to move from pilots to multi-site or repeatable deployments
  • Proactive strategies to overcome deployment risk, not defer it

The ARL framework has become a cornerstone of our investment process because it captures what most TRL-based evaluations miss: a realistic path from prototype to product to platform.

We break this down further in our article on why ARLs are essential to scaling climate tech.

 

4. Unit Economics: Beyond Green Premiums

We believe the most impactful climate solutions are those that become cheaper with scale. If a product depends on a permanent green premium—even as more of it is produced—it will struggle to grow. Our threshold is clear: the business must stand on its own, with no subsidies built into the cost base.

Best-in-class benchmarks:

  • Cost parity or better vs. incumbent solutions
  • A clear path to long-term cost leadership
  • Scalable, margin-positive unit economics without dependency on policy levers

 

5. Regulatory Risk: Beware of the Stroke of the Pen

Policy can be a powerful tailwind, but it shouldn’t be the foundation of a business. In markets like sustainable aviation fuel (SAF) and direct air capture (DAC), where demand today is almost entirely policy-driven, we proceed carefully. If a market could vanish with a single regulatory change, court ruling, or election result, it doesn’t have the durability we look for.

Best-in-class benchmarks:

  • A market-driven value proposition
  • Potential for short-term regulatory upside, but long-term independence necessary
  • Focus on de-risking any policy exposure

Direct subsidies can catalyze adoption by improving near-term unit economics, while broader policy tailwinds create long-term demand shifts, but in both cases, the underlying product must be commercially viable on its own.

 

6. Financing Risk: Match the Story to the Structure

Financing can accelerate a company’s growth or quietly undermine it. The strongest companies raise the right kind of capital at the right time. Not every business is suited for venture funding; some sectors are better served by philanthropies or corporate partners. That’s why every company should think about financing early and build a plan aligned with the types of capital that will realistically be available. For example, in some cases grants or customer pre-payments are available, while project finance, often mistakenly cited as an early option, is typically only accessible once technology is proven and cash flows are contracted.

Success comes from sequencing. Companies must align technical and commercial milestones with financing, building credibility step by step. Too many stumble not because the technology failed, but because the financing story did. The rule is simple: know when cheaper capital will be available to you, tie every dollar to a milestone that de-risks the business, and sequence raises to create options. Great founders don’t just build technology—they match the structure of their financing to the story of their company.

Best-in-class benchmarks:

  • Source of capital aligned with business needs and risk
  • Maximal leverage from lowest cost capital available
  • Clear view into future financing rounds

 

7. Capital Efficiency: Efficient Doesn’t Mean Cheap

Some of the best opportunities in energy and resilience are capital-intensive. That’s not a problem—so long as capital is used efficiently. We evaluate companies using tools we’ve developed internally, such as the two-year forward burn multiple (burn / projected new revenue) and the net value creation multiple ((projected enterprise value uplift between current round and exit) / total projected future equity raised, including current round), to ensure that when we do invest in capital-intensive businesses, there is a clear path to value accretion.
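As a rough illustration, the two metrics described above can be sketched as simple calculations. All figures and function names here are hypothetical, chosen for the example; they are not B Capital's actual models or thresholds.

```python
# Hypothetical sketch of two capital-efficiency metrics for a venture-backed company.

def forward_burn_multiple(projected_burn_2yr, new_annual_revenue_2yr):
    """Projected cumulative cash burn over the next two years divided by
    new annual revenue added over that period. Lower is better."""
    return projected_burn_2yr / new_annual_revenue_2yr

def net_value_creation_multiple(exit_enterprise_value, current_pre_money,
                                total_equity_raised_through_exit):
    """Projected enterprise value uplift between the current round and exit,
    divided by total equity raised through exit (including the current round).
    Higher is better."""
    return (exit_enterprise_value - current_pre_money) / total_equity_raised_through_exit

# Example: a company burns $40M over two years to add $20M of new annual revenue,
# and targets a $1B exit from a $200M pre-money after raising $150M in total.
print(forward_burn_multiple(40e6, 20e6))               # → 2.0
print(net_value_creation_multiple(1e9, 200e6, 150e6))  # → ~5.33
```

On these hypothetical numbers, the company burns $2 for every $1 of new annual revenue and creates roughly $5.30 of enterprise value per dollar of equity raised.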

Best-in-class benchmarks:

  • Business plan fully funded by the raise
  • Strategic capital deployment tied to measurable milestones that increase value
  • A credible path to capital-efficient scale, whether via partnerships, outsourcing, asset-light models or low-cost capital

 

8. Valuation: Don’t Raise Like Tech If You Operate Like Industry

Raising at software-company valuation multiples while running an industrial business with inherently lower multiples creates a misalignment that is almost impossible to unwind as the company grows. Too often, founders pitch with tech-company narratives but deliver industrial-company margins. The result: inflated valuations, misaligned investor expectations and painful resets down the line.

Best-in-class benchmarks:

  • Realistic expectations on valuation multiples
  • Reasonable comps tied to current market conditions
  • Clear revenue growth and margin targets

 

9. Exit: Don’t Price Yourself Out of a Win Too Early

We evaluate exit paths before we invest. The best companies can go public, but acquisitions are a common path. If cap tables or valuations make M&A functionally impossible, founders risk closing off their most likely path to success.

Best-in-class benchmarks:

  • Multiple credible exit pathways
  • Valuation discipline at each stage
  • Alignment between cost of capital, growth strategy, and target acquirers

If the most likely outcome is M&A, don’t raise in a way that blocks that option. For example, avoid setting valuations so high that potential acquirers are priced out.

 

10. Team: Build with Business Builders

We value technical depth—but we invest in business builders. The strongest founders pair technical innovation with the ability to sell and scale products. We look for teams that know how to operate in messy, regulated, infrastructure-heavy sectors. We also place a premium on strong and effective leadership that can guide a company through its various phases of growth, attracting top talent along the way.

Best-in-class benchmarks:

  • Industry experience with the target customer
  • A proven ability to secure contracts
  • Demonstrated leadership and ability to develop a best-in-class team

Commercial capabilities need to grow alongside the technology to position the business for success. Founders that prioritize commercialization in parallel with tech are best positioned to win.

 

Conclusion

Underlying all of this is impact. The strongest Resilience Tech companies deliver measurable improvements: power grids that stay online, supply chains that withstand shocks, infrastructure that endures weather extremes. When outcomes can be tracked in uptime, cost savings or avoided emissions, it drives adoption, validates durability and cements long-term value. Impact also extends beyond the commercial sphere. The best companies advance planetary resilience with clear environmental benefits, while demonstrating political resilience by offering solutions that resonate across the political spectrum. Companies that combine measurable outcomes with broad-based relevance are best positioned to scale and endure.

This framework isn’t about perfection. It’s about alignment. We’re excited to work with founders and teams who are honest about their risks, intentional about their strategies, and bold in their ambition. The most compelling entrepreneurs are magnetic, open-minded and resilient leaders who push through obstacles with unending optimism and inspire others to join them. The strongest companies emerge as industry leaders with deep markets that can be understood by financial markets and analysts. And perhaps most importantly, the most valuable opportunities are those solving the world’s biggest and most urgent challenges.

Our Resilience Tech investment thesis is grounded in a simple belief: the next decade belongs to the companies that can scale. That means prioritizing businesses with proven potential to grow and deliver lasting impact. This framework helps us identify those opportunities, and we hope it’s a useful tool for entrepreneurs and other investors to do the same.

 

 


LEGAL DISCLAIMER
All information is as of 9.10.2025 and subject to change. This content is a high-level overview and for informational purposes only. Certain statements reflected herein reflect the subjective opinions and views of B Capital personnel. Such statements cannot be independently verified and are subject to change. Reference to third-party firms or businesses does not imply affiliation with or endorsement by such firms or businesses. It should not be assumed that any investments or companies identified and discussed herein were or will be profitable. Past performance is not indicative of future results. The information herein does not constitute or form part of an offer to issue or sell, or a solicitation of an offer to subscribe or buy, any securities or other financial instruments, nor does it constitute a financial promotion, investment advice or an inducement or incitement to participate in any product, offering or investment. Much of the relevant information is derived directly from various sources which B Capital believes to be reliable, but without independent verification. This information is provided for reference only and the companies described herein may not be representative of all relevant companies or B Capital investments. You should not rely upon this information to form the definitive basis for any decision, contract, commitment or action.

The post The 10 Factors That Guide Our Resilience Tech Investments appeared first on B Capital.

]]>
Why We Invested: ARTBIO https://b.capital/why-we-invested/why-we-invested-artbio/ Thu, 31 Jul 2025 13:00:12 +0000 https://b.capital/?p=6960 By: Robert Mittendorff MD, Jason Grosz and Jack Grimes   B Capital is proud to partner with ARTBIO, an oncology biotech company pioneering the development of radioligand therapies (RLTs) based on the radionuclide lead-212 (Pb-212), a short-lived alpha-emitting isotope that has the potential to broaden the therapeutic index of radiopharmaceuticals. ARTBIO has developed a differentiated...

The post Why We Invested: ARTBIO appeared first on B Capital.

]]>
By: Robert Mittendorff MD, Jason Grosz and Jack Grimes

 

B Capital is proud to partner with ARTBIO, an oncology biotech company pioneering the development of radioligand therapies (RLTs) based on the radionuclide lead-212 (Pb-212), a short-lived alpha-emitting isotope that has the potential to broaden the therapeutic index of radiopharmaceuticals.

ARTBIO has developed a differentiated pipeline of RLTs for both validated and first-in-class targets, with its lead program, AB001, entering a Phase 1 trial this year for metastatic castration-resistant prostate cancer (mCRPC), an advanced form of prostate cancer that has spread beyond the prostate and progressed on androgen deprivation therapy. With a robust technological foundation, proprietary isotope production platform, and a highly experienced team, we believe ARTBIO has the potential to define the next chapter of precision oncology.

 

RLTs: A Transformative Modality in Oncology

Radiopharmaceuticals have emerged as a transformative modality in oncology, delivering paradigm-shifting clinical results and poised to become a cornerstone of cancer treatment. RLTs, which consist of a targeting vector linked to a radioactive isotope, deliver radiation systemically and precisely to tumor cells that express a target receptor, while minimizing damage to surrounding healthy tissue. Two approved RLTs that rely on the beta-emitting isotope lutetium-177 (Lu-177) – Pluvicto for prostate cancer and Lutathera for neuroendocrine tumors – are rapidly becoming the standard of care for their respective indications while enjoying commercial success. However, these first-generation therapies represent just the tip of the iceberg for this modality, with RLTs exploiting novel targets and more potent radioactive isotopes poised to improve efficacy, combat resistance, and enable indication expansion.

Looking forward we believe that alpha emitters, and Pb-212 in particular, are positioned to unlock the full therapeutic potential of RLTs.

 

Pb-212: A Differentiated Alpha-Emitting Isotope

ARTBIO’s differentiation stems from its use of Pb-212 as the radioactive payload, which we believe to be the ideal isotope for maximizing both safety and efficacy. When compared to beta emitters like Lu-177, we believe Pb-212 offers several key advantages:

  • Higher linear energy transfer (LET): Alpha particles cause double-stranded DNA breaks that rapidly trigger cancer programmed cell death (apoptosis), while beta particles typically cause single-stranded breaks that cells can more easily repair through normal DNA damage response mechanisms.
  • Shorter path length: The alpha particles’ shorter emission range limits the radioactive exposure to healthy tissues surrounding the tumor, improving the therapeutic index.
  • Short half-life: Pb-212’s short half-life (10.6 hours) allows for a higher dose rate – meaning a greater quantity of radiation is absorbed over a shorter time period – which may enhance efficacy and enable more frequent dosing. The shorter half-life of Pb-212 also aligns well with the half-life of commonly used ligands (e.g., small molecules, peptides), potentially translating to a better safety profile by reducing off-target toxicity to healthy tissues.

While Pb-212 is not the only alpha emitter in development, we have strong reason to believe that it will be a best-in-class alpha emitter. The most common alpha emitter in the clinic today, actinium-225 (Ac-225), has a significantly longer half-life than Pb-212 (~10 days vs. 10.6 hours), resulting in prolonged radiation exposure and a narrower therapeutic index.

We believe the ability to irradiate the tumor more rapidly while eliminating the long tail of radiation exposure to healthy tissues will meaningfully expand the therapeutic index of radiopharmaceuticals based on Pb-212.
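To make the half-life comparison concrete, standard exponential decay (N(t)/N0 = 2^(-t/T_half)) shows how much faster Pb-212 delivers its dose than Ac-225. Below is a minimal sketch using the half-lives cited above; the 48-hour window is an arbitrary illustration, not a clinical dosing interval:

```python
def remaining_fraction(hours_elapsed: float, half_life_hours: float) -> float:
    """Fraction of a radioisotope's activity remaining after a given time.

    Standard radioactive decay: N(t) / N0 = 2 ** (-t / T_half).
    """
    return 2 ** (-hours_elapsed / half_life_hours)

# Half-lives cited above: Pb-212 ~10.6 hours, Ac-225 ~10 days (~240 hours)
for label, half_life in [("Pb-212", 10.6), ("Ac-225", 240.0)]:
    frac = remaining_fraction(48, half_life)
    print(f"{label}: {frac:.1%} of activity remains after 48 hours")
```

After two days, only about 4% of Pb-212's initial activity remains, versus roughly 87% for Ac-225, which is the prolonged "long tail" of exposure described above.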

Early clinical experience with Pb-212 RLTs from academic centers and Pb-212-focused biotechs has provided preliminary confirmation of the isotope’s theoretical advantages. In small datasets, these trials have demonstrated:

  • Encouraging efficacy signals: Response rates have been striking, even in Lu-177-resistant patients.
  • Favorable safety profile: Pb-212 shows improved safety compared to Ac-225, with less myelosuppression (bone marrow suppression) and minimal salivary gland uptake, thereby reducing off-target radiation damage to sensitive tissues and improving patient tolerability.

These early findings suggest that Pb-212 may unlock the next phase of radioligand therapy—delivering potent, targeted radiation with a more favorable safety profile.

 

Innovating on Multiple Fronts: A Broad Pipeline & Best-In-Class Pb-212 Generator

ARTBIO’s lead program, AB001, targets PSMA, a clinically validated target for prostate cancer. This program is designed to address key unmet needs in metastatic castration-resistant prostate cancer (mCRPC)—an advanced form of prostate cancer that no longer responds to hormone therapy—with the potential to benefit both Pluvicto-naïve and Pluvicto-experienced patients. Beyond AB001, ARTBIO has built a deep pipeline of novel targets with broad applicability across many solid tumors through both internal R&D and external partnerships with companies like 3B Pharmaceuticals, a leader in peptide-based radiopharmaceutical development, and Parabilis, a biotech pioneering a new class of helical peptides called Helicons.

ARTBIO has also developed a proprietary Pb-212 generator, AlphaDirect, which will be able to reliably produce clinical- and commercial-scale supply of Pb-212. This will allow ARTBIO to overcome the production challenges that have plagued other centrally-manufactured alpha emitters like Ac-225. AlphaDirect delivers >99.9% isotope purity and enables the distributed manufacturing network required to successfully deliver an isotope with a 10.6-hour half-life to patients.

In parallel, ARTBIO has built a strong operational backbone through partnerships with leading regional contract development and manufacturing organizations (CDMOs), including Nucleus RadioPharma, PharmaLogic, SpectronRx, and others. These specialized partners will support the production and distribution of ARTBIO’s Pb-212-based radiopharmaceuticals, positioning ARTBIO for success as it advances multiple RLTs into the clinic.

 

A Highly Experienced Leadership Team Well-Positioned to Shape the Future of Radiopharmaceuticals

The expertise of ARTBIO’s management team and their strategic approach to RLT development is highly impressive. The team has the right combination of commercial experience, scientific acumen, clinical development strategy, and manufacturing expertise:

  • Emanuele Ostuni, PhD (CEO) – Former Head of Europe for Cell & Gene Therapies at Novartis, where he oversaw the commercialization of Kymriah.
  • Nick Pullen, PhD (Chief Scientific Officer) – Former Head of Research at Jnana Therapeutics (biotech specializing in chemoproteomics-driven small molecule drug discovery for rare and immune-mediated diseases), which was acquired for $800M by Otsuka (up to $1.1B with milestones).
  • Margaret Yu, MD (Chief Medical Officer) – Led clinical development for the prostate cancer drugs Zytiga and Erleada as the Prostate Disease Area Leader at Janssen.
  • Philippe Dasse, PharmD (Chief Technical Officer) – Former Head of Technical Operations at Advanced Accelerator Applications (France-based nuclear medicine specialist that developed Lutathera) and continued in that role at Novartis following its $3.9B acquisition. At Novartis, Philippe built and scaled the infrastructure for the commercial launch of Pluvicto.

With a differentiated isotope, broad pipeline, proprietary generator, and a proven leadership team, we believe ARTBIO is uniquely positioned to shape the future of alpha radioligand therapy.

 

 


LEGAL DISCLAIMER
All information is as of 7.25.2025 and subject to change. The investment discussed herein is a portfolio company of B Capital; however, such investment does not represent all B Capital investments. Certain statements reflected herein reflect the subjective opinions and views of B Capital personnel. Such statements cannot be independently verified and are subject to change. Reference to third-party firms or businesses does not imply affiliation with or endorsement by such firms or businesses. It should not be assumed that any investments or companies identified and discussed herein were or will be profitable. Past performance is not indicative of future results. The information herein does not constitute or form part of an offer to issue or sell, or a solicitation of an offer to subscribe or buy, any securities or other financial instruments, nor does it constitute a financial promotion, investment advice or an inducement or incitement to participate in any product, offering or investment. Much of the relevant information is derived directly from various sources which B Capital believes to be reliable, but without independent verification. This information is provided for reference only and the companies described herein may not be representative of all relevant companies or B Capital investments. You should not rely upon this information to form the definitive basis for any decision, contract, commitment or action.


]]>
Resilience Tech: Systems That Withstand What’s Next https://b.capital/insights/resilience-tech-systems-that-withstand-whats-next/ Tue, 29 Jul 2025 13:50:28 +0000 https://b.capital/?p=6938 By: Jeff Johnson and Karly Wentz   In our last piece, we introduced Climate 3.0—our term for a more disciplined, market-driven era of climate technology, focused on scalable, capital-efficient businesses with strong fundamentals. We also acknowledged a tough reality: many climate startups—particularly in the West—have relied on policies and incentives that have become inconsistent, unclear...

The post Resilience Tech: Systems That Withstand What’s Next appeared first on B Capital.

]]>
By: Jeff Johnson and Karly Wentz

 

In our last piece, we introduced Climate 3.0—our term for a more disciplined, market-driven era of climate technology, focused on scalable, capital-efficient businesses with strong fundamentals. We also acknowledged a tough reality: many climate startups—particularly in the West—have relied on policies and incentives that have become inconsistent, unclear or even unavailable. The ground can feel unstable, and voluntary efforts alone won’t build lasting change.

So, the real question is: What kind of foundation can stand the test of time?

Here at B Capital, we believe the answer is Resilience Tech.

Resilience Tech is a refined lens on climate tech, highlighting solutions that strengthen energy, industrial, and infrastructure systems. It includes the tools, infrastructure and platforms that help people, businesses and governments adapt to a changing world—from climate shocks and energy volatility to food insecurity and fragile supply chains. These are the solutions that mitigate systemic risk while creating real economic value and remaining focused on planetary boundaries. Importantly, unlike many traditional climate technologies, Resilience Tech does not depend on incentives or policy mandates. It’s driven by urgent, market-based demand and built to scale on commercial fundamentals. Resilience Tech is underpinned by significant macro trends that we believe will create fertile ground for generation-defining companies over the next decade.

In this article, we define Resilience Tech, highlight the core sectors we’re focused on and explore the structural megatrends shaping long-term demand for resilient systems.

 

What is Resilience Tech?

At its core, Resilience Tech is about building the systems that help societies withstand disruption—whether from climate shocks, geopolitical instability or surging energy demand. We define this emerging category across three interconnected pillars: Energy Resilience, Industrial Resilience and Infrastructure Resilience.

These pillars are deeply interconnected. Together, they form the foundation of a more resilient global economy—one designed to absorb volatility while driving sustainable growth.

Core Investment Themes

We believe the next wave of category-defining companies will emerge at their convergence, fueled by structural megatrends that are creating profound and enduring opportunities.

  • Energy Resilience: Powering the Future
    The rise of AI—and the electrification of buildings, transport and industry—is driving unprecedented electricity demand. Legacy infrastructure can’t keep up. We’re focusing on technologies that enable a more resilient, distributed and intelligent energy system built for the loads of tomorrow.
  • Industrial Resilience: Rebuilding the Global Supply Chain
    Geopolitical fragmentation and economic nationalism are reshaping how goods are made and moved. From revolutionary products to flexible manufacturing to logistics platforms enabling reshoring and regionalization, we’re backing solutions that make industry more responsive, secure, and self-reliant. A key enabler of this shift will be Physical AI, catalyzing the next wave of automation, precision, and adaptability across the supply chain.
  • Infrastructure Resilience: Adapting to a New Reality
    Extreme weather events—floods, wildfires, droughts and heat waves—are no longer future risks. They’re today’s reality. We support technologies that help communities adapt and recover in real time, from predictive climate analytics to systems that safeguard critical infrastructure.

Underpinning all three is a broader principle we call Planetary Resilience: the imperative to proactively manage environmental issues at scale, not just for impact, but for long-term stability and risk management.

In each of these domains, Resilience Tech sits at the intersection of urgent need and investable opportunity. It’s not a niche. We believe it’s the next frontier of essential technology, with significant macro tailwinds that drive a substantial portion of the global economy. The companies building it aren’t just solving problems—they’re designing the systems that will define a more stable, future-ready world.

 

I. Energy Resilience: Powering the Future

For the first time in a generation, electricity demand is structurally rising in the U.S. From 2008 to 2020, U.S. power consumption remained mostly flat—even as the economy and population grew—thanks to gains in energy efficiency.1 It appears that era is over.2

[Chart: US Electricity Demand (TWh)3 — Resurgence in Electricity Demand Growth: AI and electrification of industry, transportation and buildings driving acceleration in power demand]

A new wave of demand is being driven by the combined forces of AI adoption, industrial reshoring and broader electrification across sectors. Global electricity demand is projected to grow by 2-3% annually through 2027, with data centers forecasted to double their electricity consumption this decade.4  At the same time, the shift to electrify buildings, manufacturing and transport is placing unprecedented strain on outdated grid infrastructure.

We see this as one of the most powerful—and potentially profitable—shifts in the economy.

Where We’re Focused:5

  • Enabling renewables and storage: Technologies enabling cleaner, faster and more reliable energy deployment

    Renewables—especially solar—are now the lowest-cost and quickest-to-deploy sources of electricity. While module costs continue to fall, soft costs such as installation, financing, permitting and operations now make up a growing share of total project expenses, creating opportunities to improve economics through software and services. Energy storage solutions, particularly batteries, are essential to managing the intermittency of renewables and are becoming increasingly cost-competitive. Policy shifts are reshaping the market in real time in the United States, but the strong underlying economics of these solutions are likely to remain attractive globally for the long term. This theme is reflected in B Capital portfolio companies like LevelTen, a marketplace for energy transactions, and Omnidian, a provider of performance and protection services for distributed energy systems.
  • Grid congestion solutions: Software and hardware innovations increasing grid flexibility and resilience by enhancing transmission capacity, optimizing load management and improving overall utilization

    As electricity demand accelerates, technologies that improve grid efficiency have become essential. The U.S. grid must expand transmission capacity by more than 60% by 2035 to meet projected demand.6 Once dismissed as “solutions in search of a problem,” smart grid technologies are now being rapidly adopted to relieve today’s constraints—boosting performance, improving reliability and lowering system-level costs. The most successful businesses not only tackle the technical challenges but also overcome business model barriers, ensuring their solutions align with how utilities and grid operators operate and generate revenue.
  • High-impact energy efficiency: Products that reduce energy intensity in critical load centers—such as data centers, factories and commercial buildings—are becoming essential

    Efficiency now directly impacts grid reliability. As each marginal electron grows more valuable, demand is rising for solutions that lower power consumption without compromising performance. This isn’t just about decarbonization; it’s about keeping pace with accelerating demand. Companies that solve these challenges will benefit from durable tailwinds, regardless of shifts in climate policy.

 

II. Industrial Resilience: Rebuilding the Global Supply Chain

Geopolitical shifts and the rapid development of energy and resilience technologies are reshaping global supply chains. Many innovative energy solutions rely heavily on critical minerals, the supply of which is concentrated in just a handful of countries—raising national security concerns for the U.S. and its allies. At the same time, manufacturing capacity remains heavily concentrated in China, creating additional geopolitical and supply chain risks that companies must navigate.

[Chart: U.S. Private Construction between 2003-2024 ($B)7 — Derisking of Climate-related Global Supply Chains: Increasingly fragmented supply chains amid geopolitical tension, driving reshoring]

The result is a push to de-risk supply chains, reduce resource intensity and localize production wherever possible across sectors.

Where We’re Focused:8

  • Resource efficiency and circularity: Technologies that reduce waste and improve the economics of mineral extraction and utilization

    As ore grades decline and demand rises, we see growing interest in technologies that reduce the cost and improve efficiency of resource extraction and recycling. National security concerns are accelerating demand for domestic or allied access to these materials.
  • Domestic and allied manufacturing: Platforms that enable more secure, regionalized supply chains for critical industries

    Between the Biden-era IRA and the Trump-era tariffs, there’s bipartisan momentum behind domestic manufacturing. Global industrial policy is coalescing around resilience, and automation, robotics and AI are changing the unit economics of onshore production.
  • Water resilience: Innovations in reuse, purification and conservation to address growing scarcity and regulatory pressure

    Water is emerging as one of the next major input constraints, particularly for heavy industry, data centers and energy. Though historically underpriced, water is gaining recognition as a key operational and regulatory risk.

This trend is creating new opportunities for companies that can offer lower-risk, economically competitive alternatives in high-leverage supply chain categories. A key catalyst will be Physical AI—AI embedded in robotics, sensors, and industrial systems—to enable smarter, more adaptive infrastructure that can withstand geopolitical shocks, resource constraints, and operational volatility.

 

III. Infrastructure Resilience: Adapting to a New Weather Reality

Extreme weather is no longer a future scenario—it’s a recurring line item on corporate P&Ls. Companies across sectors are now actively planning for wildfires, floods, droughts and heat waves—not just to protect operations, but to reduce insurance costs, meet regulatory requirements and avoid disruption.

[Chart: Combined Cost of U.S. Weather and Climate Disasters (CPI-Adjusted $B)9 — Escalating Impacts of Climate Change: Heat waves, wildfires, floods, drought and extreme weather driving increased need for adaptation and resilience]

This marks a pivot in how we think about weather-driven adaptation. What was once considered reactive is now seen as proactive risk management and a significant cost to be optimized—and increasingly, a source of competitive advantage.

Where We’re Focused:10

We’re concentrating on the most pressing and economically significant climate risks—where disruption is already driving budget shifts, policy changes and new demand for resilient solutions.

  • Climate risk intelligence: Tools that help enterprises and governments measure, model and mitigate weather and disaster exposure

    Asset-intensive businesses are increasingly viewing climate modeling as core infrastructure, with improved data and AI models enabling more powerful use cases and applications. One example is Overstory (a B Capital portfolio company), which helps utilities optimize vegetation management in fire-prone areas—reducing operational costs while lowering wildfire risk.
  • Adaptation technologies: Infrastructure and services that reduce vulnerability to acute and chronic climate stressors

    These include fire-resistant materials, heat-mitigating wearables, flood solutions and drought-proof water systems. Adaptation isn’t a future problem—it’s where capital is flowing today in response to visible disruption.

While mitigation remains critical, adaptation is where urgency is colliding with economic rationale—and where we see growing budget allocations and purchasing behavior.

Looking Ahead

Each of these megatrends—electrification, supply chain realignment, and climate adaptation—is massive in scope. But trends alone don’t generate returns. The key is identifying companies that are not only aligned with these shifts, but also meet our core investment criteria: scalability, strong and predictable unit economics, and commercial readiness—none of which should depend on policy incentives. We’ll explore these criteria further in our next piece.

For now, the message is simple: in Resilience Tech, capital flows to fundamentals—and fundamentals follow the megatrends reshaping our world.

 

 


LEGAL DISCLAIMER
All information is as of 7.22.2025 and subject to change. Certain statements reflected herein reflect the subjective opinions and views of B Capital personnel. Such statements cannot be independently verified and are subject to change. The investments discussed herein are portfolio companies of B Capital; however, such investments do not represent all B Capital investments. It should not be assumed that any investments or companies identified and discussed herein were or will be profitable. Past performance is not indicative of future results. The information herein does not constitute or form part of an offer to issue or sell, or a solicitation of an offer to subscribe or buy, any securities or other financial instruments, nor does it constitute a financial promotion, investment advice or an inducement or incitement to participate in any product, offering or investment. Much of the relevant information is derived directly from various sources which B Capital believes to be reliable, but without independent verification. This information is provided for reference only and the companies described herein may not be representative of all relevant companies or B Capital investments. You should not rely upon this information to form the definitive basis for any decision, contract, commitment or action.

 

SOURCES

  1. U.S. Energy Information Administration, “How the United States Uses Energy,” last updated July 15, 2024, accessed March 27, 2025, https://www.eia.gov/energyexplained/use-of-energy/.
  2. U.S. Energy Information Administration, “Short-Term Energy Outlook,” March 2025, https://www.eia.gov/outlooks/steo/report/elec_coal_renew.php. Accessed March 27, 2025.
  3. IEA, “World Energy Outlook 2024,” data as of October 2024. APS (Announced Pledges Scenario) reflects climate targets and commitments as stated by governments, while STEPS (Stated Policies Scenario) projects outcomes based on currently implemented policies.
  4. IEA (International Energy Agency), Growth in Global Electricity Demand Is Set to Accelerate, January 2025. Available at: https://www.iea.org/news/growth-in-global-electricity-demand-is-set-to-accelerate-in-the-coming-years-as-power-hungry-sectors-expand
  5. For illustrative purposes only, areas of focus are subject to change.
  6. Nathan Shreve, Zachary Zimmerman, and Rob Gramlich, Fewer New Miles: The US Transmission Grid in the 2020s, Grid Strategies with support from Americans for a Clean Energy Grid, July 2024. Available at: https://cleanenergygrid.org/wp-content/uploads/2024/07/GS_ACEG-Fewer-New-Miles-Report-July-2024.pdf
  7. United States Census Bureau, “Construction Spending – Historical Data,” as of 2024.
  8. For illustrative purposes only, areas of focus are subject to change.
  9. National Centers for Environmental Information, “United States Billion-Dollar Disaster Events 1980-2024 (CPI Adjusted).”
  10. For illustrative purposes only, areas of focus are subject to change.


]]>
Part 2: The Buyer’s Playbook – A Strategic Guide to M&A from the Acquirer’s Perspective https://b.capital/insights/part-2-the-buyers-playbook-a-strategic-guide-to-ma-from-the-acquirers-perspective/ Thu, 10 Jul 2025 13:00:02 +0000 https://b.capital/?p=6876 By: Ronan Kennedy   In our first article, we focused on how founders can prepare for a successful exit by anticipating buyer needs and building with optionality in mind. Preparing for M&A: How to Sell Your Company (Instead of Just Being Bought) This second installment shifts focus to the acquirer. Whether you’re scaling into new...

The post Part 2: The Buyer’s Playbook – A Strategic Guide to M&A from the Acquirer’s Perspective appeared first on B Capital.

]]>
By: Ronan Kennedy

 

In our first article, we focused on how founders can prepare for a successful exit by anticipating buyer needs and building with optionality in mind.

Preparing for M&A: How to Sell Your Company (Instead of Just Being Bought)

This second installment shifts focus to the acquirer. Whether you’re scaling into new markets, building out your product ecosystem or deepening your customer relationships, M&A can serve as a powerful, proactive tool to advance your long-term strategy. But success depends on more than identifying a target. It requires thoughtful planning, structured evaluation and a clear vision for how the acquisition will integrate with the company and create lasting value.

Below, we outline a practical approach to buy-side M&A—from identifying the right themes to structuring and integrating deals with intention.

 

Start with Strategy—Not Targets

The best M&A processes don’t start with a target list—they start with a clear vision. Where do you want your business to be in five or 10 years? What capabilities, markets, or scale will it take to get there? That strategic clarity helps turn an acquisition discussion from reactive and opportunistic to intentional and deliberate.

After developing the top-level corporate strategy, we advise companies to map the journey in three steps:

1. Take a Structured Approach to Get from Strategy to Target List
Strategy → Themes → Categories → Companies

  • Themes: How could you achieve the strategic vision? Identify big bets—such as entering adjacent markets, expanding up or down the value chain or enhancing your solution suite.
  • Categories: Break each theme into specific product areas or business models. What does the current product roadmap look like? How could this category improve or enhance the current roadmap? What makes one category more attractive than another? In what sequence should you pursue these categories?
  • Companies: Identify key players in each category—some may be acquisition targets, but others will serve as reference benchmarks, or perhaps potential partners. Determine if one deal will suffice or if multiple will be needed to achieve your goals.

This process helps leadership teams align around not just what they might buy, but why—and ultimately how everything fits into the broader vision and mission statement.

2. Leverage the Six Ms to Evaluate Each Opportunity
Once you’ve identified promising companies, assess them through a structured lens. We use the Six Ms—a simple but powerful framework to evaluate strategic fit and deal viability. Focus not just on the target, but on the potential of the combined business.

  • Market – How does the acquisition reshape your market? Does it expand, consolidate, or deepen share-of-wallet? Is it a high-growth or more durable market—and is it core to your future growth?
  • Management – Is the team strong—and are its members potential future leaders within your organization? Are they looking to stay or move on? Do your styles align enough to work well together?
  • Margins – How would this transaction change your gross and operating margins? Does cash burn increase/decrease? Are the unit economics attractive and sustainable?
  • Monetization – How will the acquisition generate returns—and on what timeline? Will it drive more revenue from existing customers, unlock new markets, or strengthen pricing power and platform value?
  • Moat – How does the acquisition deepen your competitive edge? Does it make you more essential to customers, enable pricing advantages, or offer clearly superior technology to alternatives?
  • Mitigants – What are the risks of the transaction and implementation? How can those be reduced or avoided altogether?

At times, we apply a GPA-style grading system across the Six Ms, weighting each based on what matters most—such as talent in emerging sectors or defensibility in crowded markets. Depending on corporate needs, you can weigh one category more heavily than another, creating a practical tool for aligning internally, guiding board discussions and pinpointing where a strategic premium or tailored deal structure may be justified.
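The GPA-style grading described above reduces to a weighted average across the six categories. The short sketch below makes that concrete; the specific grades, weights and 4.0 scale are illustrative assumptions for this example, not B Capital's actual rubric:

```python
# Illustrative sketch of a GPA-style scoring pass across the Six Ms.
# Grades and weights here are hypothetical examples, not a real rubric.

SIX_MS = ["Market", "Management", "Margins", "Monetization", "Moat", "Mitigants"]

def weighted_gpa(grades, weights):
    """Weighted average grade (on a 0.0-4.0 scale) across the Six Ms."""
    total_weight = sum(weights[m] for m in SIX_MS)
    return sum(grades[m] * weights[m] for m in SIX_MS) / total_weight

# Example: a talent-driven deal in an emerging sector, so Management
# is weighted more heavily than the other categories.
grades = {"Market": 3.3, "Management": 3.9, "Margins": 2.8,
          "Monetization": 3.0, "Moat": 3.5, "Mitigants": 2.5}
weights = {"Market": 1.0, "Management": 2.0, "Margins": 1.0,
           "Monetization": 1.0, "Moat": 1.5, "Mitigants": 1.0}

score = weighted_gpa(grades, weights)  # one comparable number per target
```

Scoring each candidate the same way yields a single comparable number per target, which is what makes the framework useful for internal alignment and board discussions.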

3. Lead with Vision, Not Valuation
Discover → Design → Deliver

Approaching a company shouldn’t start with a number. It should start with understanding the product, clarifying the management team’s motivation and establishing a shared purpose. This is your discovery phase, where you’re learning as much as you can about the target.

The best acquirers start with questions—not term sheets:

  • Do we share a vision? Are we tackling the same problem from different angles?
  • Would joining forces make us stronger—and deliver more value to customers?
  • What’s the right model: partnership, joint venture, or full integration under one voice?

Think of an acquisition more like a transplant than a transaction. Sometimes the host rejects the organ—other times, it’s the organ that rejects the host. Thoughtful planning helps reduce the risk of rejection from both parties!

We often suggest early collaborations or technical pilots to test fit before discussing deal terms. Customer overlap, joint go-to-market or shared infrastructure can reveal true alignment.

At the end of the day, acquisitions are human. Founders have dreams, teams have cultures and customers have choices.

 

Design for Flexibility

Once alignment is established, structure becomes the focus—and it’s here that acquirers can shape real outcomes. As you engage more deeply with the target, you’ll gain a clearer picture of the business: its financials, management team, customer base and product. The Six Ms will come into sharper focus, giving you the insights needed to determine whether to make an offer—and how to structure it.

Negotiations are often collaborative, with both sides surfacing priorities and pain points. As you consider the deal, be sure to address all three legs of the stool: the business purchase (and any future investment), the employment agreement and intangibles that could become deal-breakers if left unresolved. Understanding the other party’s motivations and constraints allows you to craft creative, joint-value solutions that preserve trust—and ensure long-term accretion.

Key structuring tools include:

  • Cash vs. equity. Manage balance sheets, align stakeholders and offer future upside through thoughtful consideration of payment mix.
  • Earn-ins. Though commonly referred to as “earn-outs,” we prefer to position these options as earn-ins, enabling sellers to realize full value based on future performance. This helps align incentives and limit downside for buyers if promised conditions don’t materialize.
  • Employment agreements. Identify critical talent and reflect retention priorities through compensation and employment incentive design. This could also include board governance for larger combinations.
  • Tax-aware structuring. If the sellers are eligible for QSBS or other tax benefits, be thoughtful, as small adjustments in the proposed structure can unlock meaningful value for the founders.
  • Intangibles. These can include important factors like branding, work/office location, org charts and reporting structures, titles, and even media releases to celebrate the successful acquisition.

 

Plan for Integration Before You Close

The biggest risk in M&A isn’t valuation—it’s integration. And too often, integration planning starts after the deal is signed.

At B Capital, we advocate for pre-merger integration, or “Pre-MI,” planning. This involves scenario planning across several workstreams while deals are still being evaluated. Early integration thinking reduces execution risk, identifies obstacles and costs, aligns vision and ensures teams hit the ground running on day one to reduce the chaos of a merger.

Think through:

  • Brand Identity: Will the target be rebranded, sub-branded, or remain independent? What best supports market positioning and value retention?
  • Team Structure: Who will lead which functions post-close? How will reporting lines evolve, and when will changes to healthcare, retirement and other benefits be implemented?
  • Communications: What’s the unified story we’ll tell customers, investors and employees—and when?
  • Infrastructure: How will IT systems, product platforms and development roadmaps be integrated?
  • Finance & Operations: How will accounting systems be consolidated? What’s the plan for managing expenses, vendor contracts, payroll and benefits?
  • Integration Timeline: What’s the roadmap with key milestones across Day 0, Day 10, Day 100 and Year 1?

We bring in detailed pre-MI checklists and playbooks to help teams stress-test integration plans before they become roadblocks. This proactive approach not only accelerates value capture but also preserves momentum during the transition.

Many of these considerations also feed directly into the financial model—shaping both the economics of the merger and the unified vision for the combined business.

 

Driving Value Through Intentional M&A

M&A can be a powerful growth lever—but it’s also complex, time-intensive and high-stakes. Whether you’re a first-time acquirer or an experienced operator refining your strategy, choosing the right partner matters.

The best acquirers treat M&A as a strategic tool—not a reactionary process. They plan ahead, evaluate with discipline and integrate with intention. When done well, an acquisition doesn’t just accelerate your roadmap—it transforms your business, culture and position in the market. It can also open doors to new financing tools and investors who specifically support inorganic growth strategies.

Whether it’s your first deal or part of a broader strategy, success starts with early planning, clear alignment and a focus on the people who will ultimately deliver the value.

 

 


LEGAL DISCLAIMER
All information is as of 7.8.25 and subject to change. Certain statements reflected herein reflect the subjective opinions and views of B Capital personnel. Such statements cannot be independently verified and are subject to change. Reference to third-party firms or businesses does not imply affiliation with or endorsement by such firms or businesses. It should not be assumed that any investments or companies identified and discussed herein were or will be profitable. Past performance is not indicative of future results. The information herein does not constitute or form part of an offer to issue or sell, or a solicitation of an offer to subscribe or buy, any securities or other financial instruments, nor does it constitute a financial promotion, investment advice or an inducement or incitement to participate in any product, offering or investment. Much of the relevant information is derived directly from various sources which B Capital believes to be reliable, but without independent verification. This information is provided for reference only and the companies described herein may not be representative of all relevant companies or B Capital investments. You should not rely upon this information to form the definitive basis for any decision, contract, commitment or action.

The post Part 2: The Buyer’s Playbook – A Strategic Guide to M&A from the Acquirer’s Perspective appeared first on B Capital.

]]>
How to Be Bought (vs. Sold): A Strategic Approach to M&A https://b.capital/insights/preparing-for-ma-how-to-sell-your-company-instead-of-just-being-bought/ Tue, 08 Apr 2025 13:00:21 +0000 https://b.capital/?p=6713 This is the first in a series of articles exploring the evolving M&A landscape. In this piece, we focus on the sell-side perspective—helping founders and executives understand how to position their company for a successful acquisition. By: Ronan Kennedy   When Should a Company Start Preparing for M&A? Mergers and acquisitions (M&A) should always be...

The post How to Be Bought (vs. Sold): A Strategic Approach to M&A appeared first on B Capital.

]]>
This is the first in a series of articles exploring the evolving M&A landscape. In this piece, we focus on the sell-side perspective—helping founders and executives understand how to position their company for a successful acquisition.

By: Ronan Kennedy

 

When Should a Company Start Preparing for M&A?

Mergers and acquisitions (M&A) should always be a core consideration in an executive team’s strategic planning process. As the old saying goes, “It’s better to be bought than sold.”

Even if you aren’t actively seeking to sell, being prepared ensures that when an opportunity arises, leadership can act swiftly and strategically to assess the situation and respond in a way that maximizes value capture. While plans rarely unfold exactly as expected, the act of planning itself is invaluable – ensuring a foundation is in place when the right moment comes.

Anticipating potential scenarios allows founders to stay in control, understand the levers available to them, and shape the outcome on their terms. Knowing your source of value to the acquirer helps with positioning, negotiating, and structuring any potential transaction.

Future articles in this series will explore M&A from the buy-side perspective, but this article focuses on the essential steps a company should take when preparing to sell.

 

Preparing for M&A as a Seller

1. Understanding Your Buyer Base

The first step in preparing for an acquisition is identifying the world of potential buyers and understanding their motivations. A well-defined and diverse buyer base increases the likelihood of securing an optimal deal. Different buyers will have distinct reasons for exploring acquisitions with you, which will influence how they value your company.

  • Buyers seeking cross-sell and upsell opportunities aim to integrate your offerings into their customer base and their offerings into your customer base.
  • Revenue-focused acquirers prioritize your revenue stream to boost financial performance and market share.
  • Technology-driven buyers may be looking for innovation to modernize legacy systems and drive business model transformation.
  • Market expansion buyers see your company as a strategic entry point into new regions or sectors.

By understanding these different motivations, you can assess where your company’s highest source of value may lie. With that understanding, management teams can then tailor their relationships, operations, positioning, and projections to align with the desires of the strongest potential acquirers.

2. Crafting Your Narrative

Once you have identified the potential buyer landscape, the next step is shaping a compelling narrative that connects your company’s current state to what potential buyers need. This could involve:

  • Clear articulation of market positioning in a way that aligns with buyers’ strategic goals, ensuring they see the value in an acquisition.
  • Highlighting key value propositions that make your company an attractive asset.
  • Structuring historical and projected financials in a manner that is consistent with how acquirers will assess you, ensuring they can easily evaluate the return on investment.

Your acquirers may calculate financial and operational metrics using a different methodology than your own. Some acquirers expect to see synergies in one year, others in two or three years.

A well-articulated narrative not only increases your appeal but also influences valuation, deal structure and the speed at which negotiations progress. It also increases your personal value to the acquirer, demonstrating your understanding of the combined vision and, therefore, your role in any transaction’s success.

3. Strengthening Relationships Before the Deal

Successful M&A is not just about financials – it is also about relationships, strategic alignment and trust. Establishing relationships at the right levels within potential acquirers is crucial to ensuring a smooth transaction.

Before engaging in formal deal discussions, it is essential to align on:

  • Shared values, culture and long-term vision to ensure a seamless integration post-transaction.
  • A common understanding of industry challenges and how an acquisition can create mutual benefits.
  • Synergies that make the acquisition logical, enabling both parties to leverage combined strengths.

Startups can proactively build relationships with potential buyers early on through commercial and technical partnerships, servicing common customers, and other engagements. This fosters familiarity before formal discussions even begin. Establishing these connections and aligning on strategic fit early can streamline negotiations and improve the chances of a successful deal.

4. Structuring the Deal and Pricing Considerations

Once alignment is established, structuring the financial aspects of the deal becomes a key focus. Pricing an M&A transaction extends beyond the purchase price—it involves three fundamental components:

  • Business purchase terms – Understanding what’s being purchased, the value of the purchase, and the structure of payment (including earnouts).
  • Employment agreements – Outlining how key executives and employees will be incentivized post-acquisition to ensure continuity and smooth integration.
  • Intangible factors – Assessing elements like cultural fit, brand equity and autonomy plays a critical role in negotiations, valuation, and the transaction’s success.

Founders should take a holistic approach to optimizing value across these dimensions, ensuring that stakeholders are aligned and the business is positioned for a strong outcome.

 

Proactivity vs. Waiting to Be Bought

The most successful acquisitions occur when a company is strategically positioned to be bought—not when it is forced to sell. Proactive M&A conversations often serve as a positive signal to future investors, illustrating a path to liquidity and the sophistication of the management team.

To position your company for an acquisition:

  • Actively engage with potential acquirers and strategic partners to create multiple options.
  • Maintain flexibility and optionality—you can always decline an offer or choose to wait for a better time.
  • Have a proactive strategy for responding to competitive pressures, including scenarios where your competitors are acquired before you.
  • Operate with a long-term vision, running the business in a way that naturally aligns with the interests of potential buyers.

A well-executed exit strategy starts with a structured plan, strong relationships, and an operational approach that aligns with what acquirers seek in a target company.

 

How We Support Our Portfolio Companies

Navigating an M&A process requires deep expertise, a strong network and strategic foresight. At B Capital, our dedicated Capital Advisory team works closely with companies to help them:

  • Develop a structured M&A roadmap and identify potential acquirers with the highest strategic fit.
  • Translate their value into language that resonates with buyers and maximizes positioning.
  • Assist with introductions and backchannel discussions that help move negotiations forward.
  • Provide guidance on industry-standard deal structures and financial considerations to ensure an optimal outcome.
  • Help founders understand key valuation drivers, including pro forma financials, earn-out structures, synergy attribution and realization, and successful integration planning.
  • Evaluate whether hiring an advisor/banker would help or hurt a transaction.

With our deep industry expertise and global network, we empower founders to confidently approach M&A opportunities, ensuring that when the time comes, they are well-prepared to maximize value and achieve a successful outcome.

 

 


LEGAL DISCLAIMER
All information is as of 4.1.25 and subject to change. Certain statements reflected herein reflect the subjective opinions and views of B Capital personnel. Such statements cannot be independently verified and are subject to change. Reference to third-party firms or businesses does not imply affiliation with or endorsement by such firms or businesses. It should not be assumed that any investments or companies identified and discussed herein were or will be profitable. Past performance is not indicative of future results. The information herein does not constitute or form part of an offer to issue or sell, or a solicitation of an offer to subscribe or buy, any securities or other financial instruments, nor does it constitute a financial promotion, investment advice or an inducement or incitement to participate in any product, offering or investment. Much of the relevant information is derived directly from various sources which B Capital believes to be reliable, but without independent verification. This information is provided for reference only and the companies described herein may not be representative of all relevant companies or B Capital investments. You should not rely upon this information to form the definitive basis for any decision, contract, commitment or action.

For the avoidance of doubt, the above information is for reference only and B Capital is not providing broker dealer services to any other party in any potential transaction.

The post How to Be Bought (vs. Sold): A Strategic Approach to M&A appeared first on B Capital.

]]>
RAG and the Future of Intelligent Enterprise Applications: Insights from Startup Leaders https://b.capital/insights/rag-and-the-future-of-intelligent-enterprise-applications-insights-from-startup-leaders/ Wed, 02 Apr 2025 13:00:00 +0000 https://b.capital/?p=6631 This article was originally published by Microsoft for Startups, access the original here. By: Nick Giometti and Rob Ferguson   What This Paper Is About, Who It’s For, and What Reading It Will Do for You AI has never been so powerful, but scaling generative AI (GenAI) applications to enterprise-grade production is hard. This white...

The post RAG and the Future of Intelligent Enterprise Applications: Insights from Startup Leaders appeared first on B Capital.

]]>
This article was originally published by Microsoft for Startups, access the original here.

By: Nick Giometti and Rob Ferguson

 

What This Paper Is About, Who It’s For, and What Reading It Will Do for You

AI has never been so powerful, but scaling generative AI (GenAI) applications to enterprise-grade production is hard. This white paper is intended to be a field guide for navigating the numerous hurdles specific to building robust GenAI applications. Through dozens of interviews with buyers, AI practitioners, and founders, authors Rob Ferguson and Nick Giometti have assembled a set of beliefs and best practices for building production-ready AI systems.

While there’s no “one-size-fits-all approach” to building GenAI applications, this collection of practical advice centers on the assertion that Retrieval-Augmented Generation (RAG) is the best methodology for marrying enterprise-specific context to the emergent capabilities of language models. By the end of this paper, the authors want readers to understand three key areas:

  • Outcomes: What is RAG, and how does it drive value in GenAI applications?
  • Challenges: What are the biggest pains preventing GenAI systems from reaching production?
  • Solutions: How are today’s leading startups solving these problems, and how can enterprises partner with them to build toward the future of intelligent applications?

This paper consists of three chapters and a conclusion.

In the first chapter, author Rob Ferguson establishes the current state of enterprise GenAI adoption and provides a brief primer on RAG.

Next, author Nick Giometti posits that scaling pains can be mapped to three common domains, each representing significant investment opportunities for startups to overcome: mastering context, building trust, and incorporating feedback.

In the third chapter, eight startup leaders each share a lesson from their hard-earned perspectives building in GenAI, and demonstrate how their solutions empower enterprise buyers to create their own intelligent systems.

Lastly, the conclusion highlights the components to prioritize for successful intelligent applications and closes with predictions for how enterprises can build future-proof AI scaffolding to match the ever-evolving capabilities of language models.

 

Chapter 1: Getting to GenAI-Native Applications
Co-Author: Rob Ferguson, Head of AI for Microsoft for Startups

Introduction

This white paper started with a simple mission. My co-author Nick Giometti and I wanted to discover which AI startups are delivering meaningful technology for the enterprise by using the latest generative AI (GenAI) models. We spoke with dozens of startups and were amazed by the creativity and insights from those on the cutting edge. What you’re reading is the result of months of learning about the essential components of building successful AI applications.

We delve into the experiences of leading AI startups to give enterprises a clear understanding of Retrieval-Augmented Generation (RAG). By exploring real-world applications, challenges, and best practices, you’ll gain actionable knowledge to navigate the complexities of RAG integration, make informed decisions about AI adoption, and strategically position your organization for success in building intelligent enterprise applications.

The adoption of GenAI in enterprise environments is moving fast but is still in its early days. While many enterprises have experimented with AI technologies, most implementations remain pilots or proofs of concept (POCs) rather than full-scale deployments. In this chapter, we’ll explore the current landscape of GenAI adoption and the emergence of custom AI copilots, and introduce a framework for understanding the horizons of GenAI technology integration.

 

Adopting Copilots

According to the Microsoft Work Trend Index, 59% of employees are bringing their own AI tools into work. While this raises concerns for business leaders about safely and responsibly integrating AI technology, it highlights the eagerness of employees to leverage AI in their workflows. While 42% of enterprises reported using AI in some capacity, the majority of these implementations aren’t yet fully integrated into enterprise operations.

So far, most people experience GenAI through various copilots, like ChatGPT, Bing Search, and GitHub Copilot. However, startups building custom copilots have seen incredible interest and traction. Early leaders like Harvey, Sierra, and Glean have each raised over $100 million, achieving “unicorn” status.

Custom copilots created by startups are an amazing way to discover what these GenAI models can do. They excel when adapting to existing workflows or integrating to solve longstanding problems. Imagine being a lawyer who had to summarize 10,000 legal documents before GenAI came along. That’s a game-changer.

But as incredible as these custom copilots are, they can still hallucinate facts, they can be expensive, and integrating them into complex enterprise environments often comes with challenges. The most common criticism is simple: a bad copilot promises it can do everything, but it doesn’t do any of those things well.

 

Three Horizons of GenAI Adoption

As the Head of AI for Microsoft for Startups, I, together with my team, not only help the world’s best startups use the best of Big Tech but also help Big Tech use the best of startups. Through this experience, we’ve developed a framework for understanding how GenAI technology is being adopted. We call it the “Three Horizons of GenAI Adoption.”

What we’ve noticed is that the best GenAI companies are really good at controlling the context that GenAI is exposed to during a workflow. They understand the data a user is working with and how they’re working with it. They then perform an incredible balancing act of matching the capabilities of GenAI models within the user’s specific context.

We consider these standout applications “GenAI-native applications” when they’re predictably successful while balancing costs. (I’ve seen many cool demos where you wouldn’t believe the cost to run at scale.) The key is that GenAI-native applications limit context to avoid over-promising and hallucinating information that isn’t within the models’ priors.

Open-ended copilots are an incredible technology, but they’re best at augmenting an individual’s success. GenAI-native applications scale across workflows.

Looking toward the final horizon, we see a tremendous future for agents that work together across application boundaries. These aren’t just autonomous agents that can deliver a result on their own. In this stage, which we call “GenAI systems,” the value of GenAI scales between enterprises by planning operations while safely sharing memory, encapsulating the cost of tasks, working within a secure plugin or API architecture, and explicitly addressing trust and safety concerns. At the final horizon, many experts (myself included) wonder if the value of GenAI could scale to match the entire software industry.

 

Understanding RAG: A Primer

Whenever you see a copilot cite its sources, chances are that RAG techniques are at work. Many people, after studying RAG, might think, “Is this just fancy search?”

It’s easy to get caught up in the latest innovations. However, the real value of any technology lies not in its novelty but in its ability to solve real-world problems and improve business operations. RAG represents only a fraction of what gets built in intelligent enterprise applications, but it plays a crucial role in enabling businesses to understand how the information is retrieved and integrated.

Understanding RAG isn’t about chasing the latest tech trends. It’s about recognizing a powerful tool that can deliver tangible improvements to workflows and decision-making processes. By augmenting large language models (LLMs) with the ability to dynamically retrieve and incorporate relevant information, RAG addresses a fundamental challenge in enterprise AI: combining the broad capabilities of AI with the specific, up-to-date knowledge that businesses rely on.

 

What Is RAG?

RAG enhances the capabilities of LLMs by dynamically incorporating external information during the generation process. Think of it as adding search (or retrieval) capabilities to your LLM. This approach bridges the gap between the vast knowledge embedded in LLMs and the specific, current information needed for accurate and contextual responses.

At its core, RAG is a process. When presented with a query, the system first retrieves relevant information from a curated knowledge base and then uses this information to augment the LLM’s response. This process can be broken down into three main steps.

 

RAG at a Glance

  • Retrieval: Collect up-to-date data.
  • Augmentation: Combine real-time data with GenAI models.
  • Generation: Produce accurate, context-aware results.
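The three steps above can be sketched in a few lines of Python. This is a toy illustration, not a production recipe: the word-overlap retriever and the stubbed generate function are stand-ins for what would, in a real system, be an embedding-based vector search and a call to an actual LLM API.

```python
# Minimal, self-contained sketch of the three RAG steps.
# Retrieval here is naive word-overlap scoring; production systems use
# embeddings and a vector store, and generate() would call a real LLM.

def retrieve(query, knowledge_base, top_k=2):
    """Step 1 - Retrieval: rank documents by word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(knowledge_base,
                    key=lambda doc: len(q_words & set(doc.lower().split())),
                    reverse=True)
    return scored[:top_k]

def augment(query, documents):
    """Step 2 - Augmentation: combine retrieved context with the query."""
    context = "\n".join(f"- {d}" for d in documents)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

def generate(prompt):
    """Step 3 - Generation: stand-in for an LLM call."""
    return f"[LLM response grounded in a prompt of {len(prompt)} chars]"

knowledge_base = [
    "Refunds are processed within 5 business days.",
    "Enterprise plans include 24/7 phone support.",
    "The mobile app supports offline mode on iOS and Android.",
]
query = "How long do refunds take?"
prompt = augment(query, retrieve(query, knowledge_base))
answer = generate(prompt)
```

Because the model only sees the retrieved passages, its answer stays grounded in the knowledge base—and the same passages can be surfaced back to the user as citations, which is how copilots cite their sources.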

 

Why Enterprises Should Adopt RAG

RAG addresses several critical limitations when building applications with LLMs.

Increased Accuracy and Relevance
Leveraging real-time data to reduce misinformation and outdated knowledge.

Domain-Specific Customization
Tailoring AI solutions to address unique industry challenges.

Scalability without Retraining
Enabling efficient updates to knowledge bases without the need for frequent model retraining.

Transparency and Explainability
Promoting trust in AI by clearly showing data sources, enhancing decision-making processes.

 

Conclusion

For now, if you still think of RAG as “fancy search” that helps a GenAI model cite its sources, that’s perfectly fine. We’ll dive deeper into the specifics in Chapter 3. For now, I’ll hand it off to Nick to explain why he invests in infrastructure startups that build with RAG technologies in mind.

 

Chapter 2: B Capital’s Bet on the Future of Enterprise Intelligence
Co-Author: Nick Giometti, Senior Principal at B Capital

Key Themes

  • At B Capital, we’re betting on enterprise incentives to create a vast distribution of specialized intelligences rather than rely on a singular artificial general intelligence (AGI). The future of intelligence is contextual, not general.
  • Scaling intelligent applications is hard. The model layer is only a single component in a larger foundation of opinionated application and infrastructure design choices. Defining best practices in GenAIOps means wrangling interdisciplinary concepts across DevOps, MLOps, and LLMOps.
  • The largest investment opportunities exist for companies building the essential abstractions that empower enterprises to build intelligent systems, which:
    • Master enterprise context.
    • Build and maintain trust.
    • Drive continuous improvement.

 

No Singular Intelligence

What’s the enterprise incentive for artificial general intelligence (AGI)? While AGI captivates the imagination, it’s hard to reconcile our current economic constructs with a world where every job has been displaced by the “one model to rule them all.” What happens to corporate competitive advantage when every enterprise is hiring and selling services generated by the same universally capable AI worker?

At B Capital, we envision a future shaped not by a monolithic general intelligence, but by a diverse ecosystem of highly specialized intelligences. We believe enterprises will thrive by leveraging their proprietary data and domain expertise to build artificial contextual intelligence, rather than relying on generalized models.

By tailoring models, knowledge bases, and retrieval systems to their specific domains, enterprises can:

  • Maximize the value of their unique expertise
  • Drive superior returns on technology investments
  • Maintain and enhance their competitive advantage

This approach aligns AI development with business objectives, ensuring that advancements in artificial intelligence augment rather than replace human capabilities, fostering sustainable growth and innovation.

 

Mastering Context

In the world of intelligent enterprise applications, context is king. Not only does context enhance and filter the performance of general models to achieve specialized outcomes, but it also serves as a guiding design principle: when building for tomorrow, context provides a means of backwards induction. The delta between the state of today’s data, processes, and infrastructure and the desired future outcomes creates a map for filling production gaps.

For AI systems to augment or automate productivity in a manner that approximates and eventually surpasses an enterprise’s human workforce, they need to fully capture the specifics of its unique domain. This contextual mesh includes its language, workflows, regulatory environment, and unique value propositions.

Because enterprise context is constantly evolving, mapping this complex web is no simple task. Businesses must integrate vast, multimodal, unstructured, and dynamic context into their operational use cases to achieve production-ready outcomes. Given the critical yet challenging nature of capturing this context, we see significant value creation opportunities for startups that simplify these complexities.

Investment Opportunities in Context Management

  • Ingesting, structuring, and refining domain-specific data
  • Developing custom knowledge graphs and fine-tuned embeddings
  • Optimizing retrieval strategies to anticipate user-specific and use case–specific context

Startups that navigate, curate, and adapt each enterprise’s unique context form the basis of a portable, Retrieval-Augmented Generation (RAG)–based AI scaffolding. While RAG is the fastest way to bring enterprise data into a model’s context window, the complexity arises from transforming data into a functioning knowledge base. While knowledge is heterogeneous, we believe critical RAG infrastructure will need to answer a core set of questions, like:

  • Where’s my data, and who’s responsible for maintaining it?
  • How do I make my data machine-interpretable without losing context?
  • How do I make my system better at answering my users’ most important questions?

RAG is an immediate and lasting value driver because, regardless of what new model enhancements emerge, maintaining a dynamic collection of each enterprise’s context allows businesses to rapidly experiment toward production. Isolating context as a constant and varying the model allows businesses to maximize AI-driven revenue by identifying more valuable use cases and outcomes, and to minimize costs by optimizing for the cheapest model without sacrificing performance.

While building intelligent applications feels like both a marathon and a sprint, the best advice for both distances is to race your own race, and control what you can control. For most enterprises, what new models can do is a function of frontier research labs. Instead, mastering the context through which AI is applied becomes the highest-impact behavior. At this stage of GenAI adoption, the startups that abstract the context-control plane excite us the most.

 

Building Trust through Transparency and Reliability

In any enterprise environment, AI adoption depends foremost on trust. If employees and customers don’t trust an AI’s decisions, they will hesitate to use it, wasting developers’ time and resources.

Trust is built on two pillars: transparency—how the AI arrives at its conclusions—and reliability—consistent, unbiased, secure performance. Enterprises need assurance that AI systems provide correct information, but they must also be able to prove they do so ethically, legally, and securely.

Building trust is no easy task: increased expectations brought on by impressive demos burden builders as they attempt to scale these evolving capabilities to production. A developer who has built the safest possible system still has to contend with the user’s attention span of 5,000 milliseconds or less. And because users won’t trust any application that fails four or more times in rapid succession, the trust window between alpha and production is incredibly small. Old problems intersecting with new models require a new standard for trust.

Building trust into the next generation of intelligent applications doesn’t just mean overcoming classical problems of safety and robustness, such as role-based access control (RBAC), managing personally identifiable information (PII), and latency. It also means wrangling powerful nondeterministic engines whose emergent behaviors aren’t well understood.

It can take months of iterating from experimentation to production to gain a user’s trust, but only a moment to lose it. Unfortunately, while the path to earning trust is narrow, the branches leading to betrayal are wide. GenAI’s emergent capabilities present a paradox: we love solutions that offer creative ways to solve problems, but we fear what we can’t control.

Earning trust means adhering to a strict contract of expected behaviors; and depending on the use case or industry, those expectations vary widely. That said, we believe there are core dependencies for building trustworthy intelligent applications. The end goal of building effective guardrails is to ensure that the right data is surfaced to the right user at the right time.

Investment Opportunities in Reliability and Transparency

  • Explainability and interpretability tools
  • Governance and compliance frameworks
  • Data privacy and security solutions

Large language models (LLMs) are probabilistic, making it challenging for an AI assistant to explain how it arrived at its answer. Even with chain-of-thought prompting, where the model walks through its ‘reasoning,’ it is still generating the most probable next token in sequence. Rather than revealing the underlying weights and biases encoded in a neural network of billions of parameters, the model demonstrates how it ‘thinks’ a similar prompt might be answered by another model—without offering a direct window into its internal mechanics.

If we can’t control the “why,” then building trustworthy outputs becomes a function of controlling the “what” and the “how.” Without fine-tuning a base model to align it with some guidelines for moderation, governance shifts to critical application and infrastructure choices.

How does shifting transparency and reliability to an approach driven by “how” and “what” guardrails manifest in application design?

These decisions materialize through questions like:

  • What data sources should a RAG application have access to, and should RBAC be handled at the user or source level?
  • What’s the threshold for flagging harmful prompts or escalating them to security teams?
  • When can cached responses be used to save on cost and generate faster answers?

Choices to answer these questions come with trade-offs.

  • Should guardrails be handled at each node in an application or just before an answer is generated?
  • Should intelligent applications be accessed through a single gateway or individual walled gardens?
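
To make the source-level RBAC question concrete, here is a hypothetical guardrail that filters retrieved documents by the user’s permitted sources before any answer is generated. The roles, source names, and record schema below are invented for illustration, not a reference implementation:

```python
# Hypothetical source-level RBAC guardrail: retrieved documents are
# filtered by the user's permitted sources before any answer is
# generated. Roles and source names are illustrative only.
USER_SOURCES = {
    "analyst": {"public_wiki", "finance_reports"},
    "intern": {"public_wiki"},
}

def filter_by_rbac(user_role: str, retrieved: list[dict]) -> list[dict]:
    # Drop any retrieved document whose source the user cannot access.
    allowed = USER_SOURCES.get(user_role, set())
    return [doc for doc in retrieved if doc["source"] in allowed]

docs = [
    {"text": "Q3 margin fell 2%", "source": "finance_reports"},
    {"text": "Office wifi password policy", "source": "public_wiki"},
]
print(filter_by_rbac("intern", docs))
```

Handling the filter at the source level, as sketched here, keeps restricted content out of the context window entirely rather than trusting the model to withhold it.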

We’re excited to support startups that simplify these complexities, enabling enterprises to quickly establish their own standards for trust, reliability, and transparency.

 

Feedback-Driven Systems

Lastly, AI systems, like the businesses they serve, need to evolve continuously to stay relevant and effective. The real world is dynamic: markets shift, customer preferences change, and regulations evolve. This is where feedback-driven systems come in. These systems are designed to learn from their users, their environments, and their outcomes. Feedback loops allow AI to adapt, improving its performance over time and aligning more closely with the enterprise’s evolving needs.

Because GenAI is still in its nascent phase of human-machine interactions, we believe that today’s collaboration with artificial intelligence will appear primitive to our future selves. As we’re only just beginning to understand what these models are capable of, developers face the added challenges of creativity and preference: how do we know what we want if we’ve never seen it before?

Steve Jobs famously said, “You can’t just ask customers what they want and then try to give that to them. By the time you get it built, they’ll want something new.” In a recent tweet, AI researcher Andrej Karpathy built on this sentiment by describing the future of intelligent applications as “Input Optional Products.” He states:

Don’t ask your users for input. Coming up with input is hard, and a barrier to use. Think of users as wanting to play. We have AI predict the input! Design product into autonomous environments. Allow users to play by steering a bit.

The key to building intelligent applications that humans readily accept with little to no intervention lies in the iterative capture of preference data.

Input may be optional, but feedback is essential.

Startups that focus on incorporating preference into the model and application infrastructure will accrue value, as they enable enterprises not just to scale their intelligent systems to production, but also to evolve over time.

Investment Opportunities in Feedback-Driven Systems

  • Implicit and explicit feedback collection tools
  • Active learning frameworks
  • AI performance monitoring and evaluation platforms

Abstracting feedback systems is a multifaceted challenge: answering ‘Is this a good answer?’ is entirely different from answering ‘Is this a good experience?’ Additionally, feedback systems must accommodate increasingly nuanced preferences. Early GenAI applications relied on simple thumbs-up or thumbs-down buttons, but today’s intelligent systems, integrated into browsers or IDEs, enable users to edit outputs directly in-line. The difference between the original machine-generated output and the human-edited version can now be evaluated across dimensions like tone, complexity, and syntax.
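
Capturing that machine-versus-human delta can start very simply. The sketch below uses Python’s difflib to turn an in-line edit into a rough implicit-feedback signal; the field names are illustrative, and a real system would score dimensions like tone and syntax separately:

```python
import difflib

def edit_feedback(model_output: str, human_edit: str) -> dict:
    # Capture implicit preference data from an in-line edit: how much
    # of the model's draft survived, and whether the user changed it.
    ratio = difflib.SequenceMatcher(
        None, model_output.split(), human_edit.split()
    ).ratio()
    return {"retained": round(ratio, 2), "edited": model_output != human_edit}

draft = "The quarterly results were very extremely good"
edited = "The quarterly results were strong"
print(edit_feedback(draft, edited))
```

Logged over many interactions, even this crude retention score begins to reveal which outputs users accept as-is and which they consistently rewrite.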

As intelligent applications advance, so too must the mechanisms for capturing and applying feedback. If we are defined by what we measure, these startups will play a pivotal role in shaping what intelligent systems can ultimately become.

 

Scaling to the Future of Intelligence

Through our interviews, we discovered that these three themes—mastering context, building trust, and enabling feedback-driven evolution—are the cornerstones of scaling AI in enterprise environments. The leaders we spoke with each create software abstractions to simplify these complex challenges and are well positioned to become essential partners for companies looking to implement intelligent enterprise applications. As AI moves from demo environments to mission-critical business operations, mastering these dimensions will be key to success, both for enterprises and for the startups that help them get there.

All information is as of 3.17.2025 and subject to change. Certain statements reflected herein reflect the subjective opinions and views of B Capital personnel. Such statements cannot be independently verified and are subject to change. Reference to third-party firms or businesses does not imply affiliation with or endorsement by such firms or businesses. It should not be assumed that any investments or companies identified and discussed herein were or will be profitable. Past performance is not indicative of future results. The information herein does not constitute or form part of an offer to issue or sell, or a solicitation of an offer to subscribe or buy, any securities or other financial instruments, nor does it constitute a financial promotion, investment advice or an inducement or incitement to participate in any product, offering or investment. Much of the relevant information is derived directly from various sources which B Capital believes to be reliable, but without independent verification. This information is provided for reference only and the companies described herein may not be representative of all relevant companies or B Capital investments. You should not rely upon this information to form the definitive basis for any decision, contract, commitment or action.

 

Chapter 3: Lessons from the Innovators

In our quest to understand the real-world applications and challenges of implementing intelligent systems in enterprise environments, we went on an extensive research journey. We had the privilege of interviewing dozens of startups at the forefront of AI and data-augmented applications. Through these conversations, we gained invaluable insights into the practical considerations, innovative approaches, and emerging best practices in this rapidly evolving field.

From the wealth of information we gathered, we’ve distilled key lessons from eight standout startups. Each of these companies offers a unique perspective on how to effectively leverage technologies like Retrieval-Augmented Generation (RAG) and other AI-driven solutions to address specific business challenges. Their experiences and insights provide a ground-level view of what it takes to build and deploy intelligent applications that deliver real value in enterprise settings.

 

Lesson 1: Don’t Let Perfect Be the Enemy of Production

Enterprises prioritize maximizing profit and return on investment, which often means optimizing existing operations. But when it comes to generative AI (GenAI), even the most advanced companies face the same challenge: you can’t optimize what isn’t in production. The sooner businesses deploy their GenAI applications and gather real customer feedback, the sooner they can decide whether to refine and scale the product or pivot to something new.

In this section we’ll cover what it means to build RAG-powered applications toward minimum viable production.

 

A Brief Aside on Vector DBs

We didn’t interview a single “vector database (DB) company” while writing this white paper.

Vector databases are undeniably important. However, while developers often start by selecting a vector DB and identifying the ideal reference architecture for their intelligent applications, this approach may overlook key considerations. It doesn’t necessarily help developers grasp the trade-offs involved in building GenAI-native applications. We believe the first step should instead focus on understanding the underlying embeddings that define your domain space and the mechanics of retrieving them effectively.

 

A Primer on Embeddings: Are Vectors Just Vibes?

At the heart of modern AI are unique digital fingerprints, known as vectors, that turn data like words into positions in a multidimensional space. There are many ways to capture these sorts of relationships, but any kind of data—including entire documents or even images—can be embedded into a multidimensional vector space. In this space, each entity has a relationship with every other one, no matter how small.

Multi-dimensional Embedding

Imagine you walk into a theoretical “multidimensional” showroom. From your initial vantage point, you notice:

  • A Honda and a Toyota are parked close together in one area.
  • A Porsche is placed slightly apart.
  • A row of pickup trucks is in another section.

Right away, you can tell the cars have been logically grouped together.

How to arrange cars in a multi-dimensional showroom?

As you walk around to other vantage points, you notice additional groupings that might not have been immediately obvious:

  • By fuel efficiency
  • By country of origin

Even if you can’t discern all these dimensions at once, you understand that these cars relate to each other in multiple ways. The closer they’re placed to each other, the more related they are in a particular aspect. The relationship might not be perfectly clear, but you can imagine how a Toyota and a Honda “vibe” together in a way that separates them from a Porsche, even if you’re not exactly sure why.

Understanding Car Brand Relationships

That’s the power of embeddings: they capture intricate relationships in a format that computers can easily work with.

 

Minimum Viable RAG

So how do we “transform” our data so it becomes machine-readable? This is where embedding models come into play. A lightweight embedding model is a specialized tool designed to convert text into these multidimensional representations. In fact, in the simplest possible version of RAG, we can skip the large language model (LLM) altogether and just use embeddings to generate search results.

Defining the Bi-Encoder Approach

  1. One process encodes the search words in our query into embeddings.
  2. Another process converts our candidate documents into embeddings.
  3. We then compare these embeddings by using a measure called “cosine similarity.”

The vector search result is the document with the closest match across the dimensions; therefore, it’s considered the most relevant to the given prompt. In other words, the query searched and retrieved the documents whose “vibes” most closely matched the question’s. Depending on how you want to ‘augment’ the search results, there might be no need for a complex LLM.
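
The three bi-encoder steps above can be sketched in a few lines. The bag-of-words “embedding” below is a deliberately crude stand-in for a real embedding model, but the retrieval mechanics (encode the query, encode each document, compare by cosine similarity) are the same:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in embedding: a bag-of-words count vector. A production
    # bi-encoder would use a learned sentence-embedding model instead.
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    # Cosine similarity: dot product over the product of vector norms.
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query: str, documents: list[str]) -> str:
    # Step 1: encode the query. Step 2: encode each candidate document.
    # Step 3: return the document with the highest cosine similarity.
    query_vec = embed(query)
    return max(documents, key=lambda d: cosine_similarity(query_vec, embed(d)))

docs = [
    "Quarterly revenue report for the sales team",
    "AI applications in healthcare diagnostics",
    "Employee onboarding checklist",
]
print(retrieve("how is AI used in healthcare", docs))
```

Swapping the `embed` function for a real embedding model turns this toy into the semantic search core of a minimum viable RAG system.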

 

Founder Spotlight: LlamaIndex’s Approach to Production-Ready RAG
Jerry Liu, CEO of LlamaIndex

Jerry Liu, CEO of LlamaIndex, has empowered thousands of developers to build production-ready GenAI applications through his company’s flexible RAG framework. Jerry believes that, while optimization is a complex, iterative process, reaching a workable production state is relatively simple. As he sees it, you don’t need the most powerful LLM to start seeing significant results, but “can get reasonably far just as a base layer using relatively cheap embedding models” paired with “a query or rewriting layer.”

LlamaIndex is an immensely popular open-source toolkit for connecting LLMs to external data sources, with a growing community of contributors to ensure constant improvement and innovation. According to Jerry, “The toolkit is intentionally unopinionated because we want to capture all the best practices and give developers optionality.”

Along with vector search, LlamaIndex’s framework provides powerful tools for ingesting and indexing various types of data, from structured databases to unstructured text documents. This flexibility allows developers to connect their LLMs to any data source and optimize for speed and accuracy in retrieval.

But Jerry sees an important distinction in isolating the performance optimizations that result from tinkering with the retrieval system and those derived from fine-tuning the underlying generative language model.

There are pain points of retrieval and then there are pain points around synthesis. It actually helps isolate these two because a lot of times users have bad retrieval. And when you have bad retrieval, this isn’t really an LLM problem anymore. It’s just a recommendation systems problem.

Jerry believes that “you can and should try to optimize your retrieval system” to make sure “you have a good search interface” before putting any “retrieved information into an LLM.”

How can builders optimize retrieval? Jerry highlights several best practices that are available to experiment with in LlamaIndex, both at the prompt level (How can you force the system to ask better questions?) and at the search level (What algorithms are used to find the most contextually relevant information?). Regarding prompting, Jerry has this specific advice: “Take the question and break it down into sub-queries.” What was originally a complex question becomes a series of more simplistic searches that are easier to execute.
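
A minimal illustration of that sub-query idea, assuming a rule-based split on conjunctions; frameworks like LlamaIndex typically use an LLM for this decomposition step, so treat this as a sketch of the pattern rather than the real technique:

```python
import re

def decompose(question: str) -> list[str]:
    # Naive sub-query decomposition: split a compound question on the
    # word "and", commas, and semicolons, then normalize each piece
    # back into a standalone question.
    parts = re.split(r"\band\b|,|;", question)
    return [p.strip().rstrip("?") + "?" for p in parts if p.strip()]

print(decompose("What were Q3 revenues and how did churn change?"))
```

Each resulting sub-query can then be executed against the retrieval system independently, with the answers stitched together afterward.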

When it comes to “the retrieval setting,” builders should try both “hybrid search,” which combines keywords (exact words and phrases) with semantic meaning (contextually similar concepts), as well as “reranking,” an approach that refines the order of retrieved documents based on their relevance to a query. Only after improving the performance of the retrieval system should builders focus on gains from tuning language model synthesis.
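
One way hybrid search and reranking could fit together is a linear blend of a keyword-match score with a semantic-similarity score. The weighting scheme and scores below are illustrative assumptions, not LlamaIndex’s implementation:

```python
def keyword_score(query: str, doc: str) -> float:
    # Exact-match component: fraction of query terms present in the doc.
    terms = set(query.lower().split())
    words = set(doc.lower().split())
    return len(terms & words) / len(terms)

def hybrid_rank(query, docs, semantic_scores, alpha=0.5):
    # Blend keyword and semantic signals, then rerank by the combined
    # value. `semantic_scores` stands in for cosine similarities from
    # an embedding model; `alpha` weights exact match vs. meaning.
    combined = [
        (alpha * keyword_score(query, d) + (1 - alpha) * s, d)
        for d, s in zip(docs, semantic_scores)
    ]
    return [d for _, d in sorted(combined, reverse=True)]

docs = ["patient intake form", "healthcare AI pilot results"]
print(hybrid_rank("AI healthcare results", docs, semantic_scores=[0.2, 0.9]))
```

Production rerankers usually replace the linear blend with a cross-encoder that scores each query–document pair jointly, but the retrieve-then-reorder shape is the same.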

 

Generation

The “Generation” part of RAG is where the GenAI model comes in. Notice: “GenAI” model, not “large language model” (LLM). The reality is, there’s no particular need at this stage to use a very large model. Nearly all of the top startups we talked to use multiple models, and they understand where to use them. Saying “GenAI model” will help your business recognize that it’s not always appropriate to use the “large” model.

The Generation stage leverages the power of GenAI models to synthesize coherent and contextually relevant responses based on the information retrieved in the earlier stage. Let’s see how retrieval and the GenAI model might fit together with a technique called “in-context learning.”

 

The Open-Book Exam

1. User query example
Prompt:

2. Document retrieval
Process: The retriever searches the candidate documents and identifies a document (or section) discussing AI applications in healthcare.

3. Prompt construction
Augmented prompt:

{{insert relevant excerpt or summary}}.

In this simple example, we include all of the relevant text we found in the retrieval step and squeeze it into one long description (a prompt) of what we want the GenAI model to answer.

When the search process successfully retrieves relevant information, the GenAI model can craft a well-informed answer. Researchers describe this as using information retrieval to augment answer generation—or, more simply, as an “open-book exam.” This analogy is particularly fitting: much like a student consulting their textbook during an exam, the RAG system draws upon external sources to bolster its responses. By contrast, traditional prompting relies solely on the model’s internal knowledge—the information “frozen” within its parameters during training: a “closed-book exam.”
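
The prompt-construction step described above might look as follows; the template wording is illustrative rather than a standard, and production systems tune this phrasing heavily:

```python
def build_augmented_prompt(question: str, retrieved_excerpt: str) -> str:
    # Squeeze the retrieved context and the user's question into one
    # prompt for the GenAI model: the "open book" plus the exam question.
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{retrieved_excerpt}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_augmented_prompt(
    "How is AI used in healthcare?",
    "The report notes AI is used for diagnostic imaging triage.",
)
print(prompt)
```

The “using only the context below” instruction is the closed-to-open-book switch: it steers the model toward the retrieved material instead of its frozen training knowledge.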

 

Context Windows

When using RAG in-context, as in the example above, there’s an upper limit to how much information you can squeeze in at one time. This limit is called the “context window length.” As you can imagine, there’s only so much studying you can catch up on during an open-book exam. If the material exceeds the window, you’ll have to condense it before the exam itself.
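
A common way to respect the context window is to pack retrieved chunks greedily, best match first, until a token budget runs out. The whitespace-based token count below is a rough stand-in for a real tokenizer:

```python
def fit_to_context_window(chunks: list[str], max_tokens: int) -> list[str]:
    # Greedily pack retrieved chunks (assumed sorted best-first) until
    # the token budget is exhausted. Token count is approximated by
    # whitespace splitting; real systems use the model's tokenizer.
    selected, used = [], 0
    for chunk in chunks:
        cost = len(chunk.split())
        if used + cost > max_tokens:
            break
        selected.append(chunk)
        used += cost
    return selected

chunks = ["alpha beta gamma", "delta epsilon", "zeta eta theta iota"]
print(fit_to_context_window(chunks, max_tokens=5))
```

Summarizing the overflow instead of dropping it, as LlamaIndex does, is the natural next refinement of this budget-packing step.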

“We were in the business of inventing a bunch of these techniques in the beginning,” Jerry Liu says of building LlamaIndex, “and as a result, you have common core solutions and different techniques. You have to sift through and figure out the best ones.” In fact, one of LlamaIndex’s major innovations was handling long contexts by summarizing and retrieving the most relevant information from large datasets. This allowed LLMs to work with much larger amounts of data than they could otherwise process in a single session.

In this example, we didn’t use a vector DB at all. If you have a small number of documents and few governance rules, you may be able to skip one entirely.

 

Lesson 2: Data Quality Is the Largest Hurdle

Even if we could cram all the right books into our “open-book exam,” having the right information doesn’t always mean we generate the right result.

Common RAG Problems

  • Knowledge updates
  • Data attribution
  • External knowledge
  • Data preparation
  • Mapping data surface area (what’s relevant for in-context)
  • Compute resources
  • Latency requirements
  • Hallucinations

When helping companies navigate these challenges, Jerry Liu says, “Basically, RAG is this sequence of different components, where each component has tunable parameters. And to really optimize the entire system, you have to jointly tune all the parameters at once, which is why there’s so many choices in RAG.” Jerry describes this as a “combinatorial explosion” that keeps standards from forming: “The downside of having so many techniques is that it becomes very hard for developers to figure out best practices.” While the state space of RAG-related challenges is vast, the area where we believe startups are adding the most value today is reimagining and refining data quality to become compatible with LLMs.

 

Unstructured Data Is a Major Challenge

RAG data is a very specific kind of data. Depending on how a document is parsed and ingested into a context window, the way a human interprets it can be completely different from the way a language model sees it.

Any AI system is only as good as the data that powers it, and the most common complaint we hear from builders struggling to reach production is the difficulty of processing unstructured data into machine-readable context. For your enterprise, there’s a good chance that tables in your documents aren’t going to be interpreted correctly by an LLM.

It might be surprising, but the data on the left below is more likely to work with many GenAI models if it’s structured in the format on the right.

 

Founder Spotlight: Building Next-Gen Data Pipelines with Unstructured
Brian Raymond, CEO of Unstructured

Data preparation has long been a critical challenge for enterprises, but the rise of generative AI (GenAI) applications has introduced new complexities. While Microsoft offers advanced tools like Azure Form Recognizer and Azure AI Document Intelligence for extracting data from documents, these were developed for specific, structured ingestion pipelines. Similarly, intelligent document processing platforms like HyperScience and Instabase excel at handling uniform datasets, such as millions of identical file types with consistent layouts.

However, most GenAI applications are designed to work with highly diverse file types and sources. As Brian Raymond, CEO of Unstructured, explains, “They were built for a totally different use case. And the use case was: I have a million documents, but they’re all an identical file type and have identical layouts.” Today’s GenAI-native use cases demand a new generation of data preparation techniques tailored to manage this heterogeneity effectively.

Unstructured provides a powerful platform designed to help companies preprocess and transform their data, regardless of the format (such as PDFs, Word documents, images, and more) or source (data lakes, external applications, or local drives), into formats that are easily digestible by LLMs.

As Brian sees it, chances are that your developers have “an Azure blob that’s full of who knows what. Tons of different file types, an infinite number of document layouts, and I just need to get that to a Vector Database.”

Addressing data ingestion pipelines for GenAI-native applications presents a significant cold-start challenge, often described as “death by a thousand cuts.” The complexity lies in overcoming countless engineering hurdles associated with processing diverse file types and document layouts. To tackle these issues, Unstructured has taken an innovative approach by integrating 500 ingestion libraries into its platform. This comprehensive solution is designed to handle the heterogeneity of file formats and layouts, ensuring seamless data preparation for GenAI applications.

For the enterprise, perhaps the most common reality is a table that has been translated into the wrong format. Brian continues, “We’re simplifying the world, and in our world that means we are spending lots of energy on tables and forms.”

 

Table Detection with Vision Models

In practice, Unstructured developed custom technology, leveraging vision models, to better interpret the role of tables in documents. As Brian explains, “Only the data owners truly understand the extent of information loss when restructuring data for an LLM. The goal is to minimize this loss when transitioning data from raw to ‘RAG-ready.’ You want to preserve as much valuable signal as possible, while also filtering out unnecessary noise.”

Unstructured developed Chipper, a vision transformer model that performs both object detection and optical character recognition (OCR) in a single step, as well as Hi-Res, which Brian describes as a “more traditional object detection model for tables, forms, and other types of data.” Measuring performance optimization within unstructured data pipelines involves a wide range of metrics:

“Plain concatenated text, percentage of words missed, word order, accuracy on columns, accuracy on rows, content accuracy, as well as self-predicted accuracy, all those sorts of things.”
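
One of those metrics, the percentage of words missed, is straightforward to compute from a source document and its extracted text. This is a generic sketch, not Unstructured’s implementation, and real evaluations would also score word order and table structure:

```python
def words_missed(source_text: str, extracted_text: str) -> float:
    # "Percentage of words missed": fraction of source words absent
    # from the extracted output. One of several extraction metrics;
    # word order and row/column accuracy need separate checks.
    source = source_text.lower().split()
    extracted = set(extracted_text.lower().split())
    missed = [w for w in source if w not in extracted]
    return len(missed) / len(source)

print(words_missed("revenue cost margin 2024", "revenue cost 2024"))
```

Tracking this number across file types quickly surfaces which document layouts an ingestion pipeline silently mangles.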

 

Structured vs. Unstructured ETL

Is an entirely new extraction, transformation, and loading (ETL) pipeline needed for enterprises that have existing tools like Fivetran or dbt? When comparing these to structured data pipeline tools, Brian calls out two key distinctions: “They’re terrific at moving data” and “terrific at transforming structured data.” However, he asserts, “They will do nothing to help get it into a RAG-ready state.” Again, Brian calls out a mismatch in previous “modern data stack” and current GenAI use cases:

The language of the modern data stack is SQL, right? And it’s designed to feed data warehouses to feed BI applications. Our world is primarily feeding vector stores and the ‘BI applications’ are really like chatbot UXs powered by foundation models.

Brian sees these use cases as existing in “parallel universes.” These problem spaces are both so massive that Brian believes each has a right to exist and drive meaningful enterprise value. He further elaborates:

There’s enough competition to figure out how to clean and normalize structured data. If you’re talking image and natural language data… from doing scheduling and how you’re architecting the connectors… you need such a complex suite of tooling to do that.

And providing that complex suite of tooling is exactly where Unstructured sees its place in any enterprise RAG application.

 

Defining RAG-Ready Data (and Metadata)

At the core of Unstructured’s value proposition is the notion of helping businesses refine their unique context from crude representations to “RAG-ready data.” Brian qualifies this end state as “Chunked, Vectorized, Summarized JSON.” But context contains not only the source data itself but also key characteristics that surround a file. According to Brian, this hidden context, known as metadata, exists in three buckets:

  1. File-Level: “For example, role-based access controls, versioning, and file path.”
  2. Pipeline Generated: “Document elements, like is this a title, subtitle, header, footer, hierarchy within a document, XY coordinates, page number, language detection.”
  3. Classifiers: “Organizations may want to nest these in this pipeline to generate net new metadata or to populate a knowledge graph with tuples to support their use cases.”

This processed data and metadata work in tandem with retrieval systems to reduce hallucination, improve filtering and reranking, and provide more contextually relevant responses.
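
As an illustration of what a “Chunked, Vectorized, Summarized JSON” record carrying those three metadata buckets might look like; the field names and values are assumptions for the sake of example, not Unstructured’s actual schema:

```python
import json

# Illustrative "RAG-ready" record: one chunk of source text with its
# embedding, summary, and the three metadata buckets described above.
# Field names are assumptions, not Unstructured's schema.
chunk = {
    "text": "Q3 revenue grew 12% year over year.",          # chunked
    "embedding": [0.12, -0.58, 0.33],                       # vectorized
    "summary": "Q3 revenue growth",                         # summarized
    "metadata": {
        "file_level": {"rbac_group": "finance", "file_path": "/reports/q3.pdf"},
        "pipeline_generated": {"element": "table", "page_number": 4},
        "classifiers": {"topic": "earnings"},
    },
}
print(json.dumps(chunk["metadata"]["file_level"]))
```

A retrieval system can then filter on `file_level` fields for access control, rerank using `pipeline_generated` structure, and route queries by `classifiers` labels.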

 

Moving at the Speed of Data

Another confounding factor for unstructured data quality is that context needs to be regularly refreshed; the value of static embeddings rapidly decays in a dynamic world.

How can Unstructured help enterprises keep their context current? Brian describes the world they’re building for as “one where you’re continuously hydrating long-term memory” while “continuously feeding your architecture human-generated data.” That way, RAG systems “counter hallucinations that are out of date” with “new context from your organization.”

 

Lesson 3: AI Techniques Are Adaptable

Up until this point, we’ve largely focused on data ingestion, quality, and retrieval strategies for text and image-based context. What happens when you introduce multimodal data such as audio and video into RAG systems?

 

The Evolution of Content Management and Retrieval

Data types might be heterogeneous, but context is nearly always dynamic. Cody Coleman, CEO of Coactive AI, explains, “When you think about an enterprise, they have very specific things that they care about. Whether it be for their brand, their characters, or their specific IP, a model might not have a notion about it.” Cody took us through an example of finding all of a brand’s logos in a video. “It is faster, more scalable, and cheaper to handle that last mile of getting to the custom taxonomy that matters for that business. As well as, you know, deal with the dynamic nature of the world that we live in – where there’s new things and new trends coming up all the time.”

At Coactive AI, they conceptualize this as dynamic tags found in the underlying data. “Dynamic tags allow us to do scalable multimodal prompts. We can take in content, vectorize text prompts, vectorize image prompts, and we can take these classifiers and vectorize that as well.” Experts might call this “efficient active learning.”

“You can label an entire catalog with a dynamic tag in a matter of seconds or minutes from scratch by providing as few as five examples or a single word.”
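
A dynamic tag built from a handful of examples can be sketched as a centroid over the example embeddings, with catalog items tagged when their vectors land close enough. The toy 2-D vectors and threshold below are illustrative assumptions, not Coactive AI’s method:

```python
import math

def centroid(vectors):
    # Average a handful of example embeddings into one "dynamic tag"
    # vector; as few as five labeled examples can define the concept.
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def tag_catalog(tag_examples, catalog, threshold=0.9):
    # Tag every catalog item whose embedding sits close enough to the
    # tag centroid. Embeddings here are toy 2-D vectors; a real system
    # would use a multimodal encoder over images and video frames.
    tag_vec = centroid(tag_examples)
    return [item for item, vec in catalog if cosine(tag_vec, vec) >= threshold]

examples = [[1.0, 0.1], [0.9, 0.2], [1.1, 0.0]]
catalog = [("logo_frame_1", [1.0, 0.1]), ("beach_scene", [0.1, 1.0])]
print(tag_catalog(examples, catalog))
```

Because the tag is just a vector, adding or removing examples re-labels the whole catalog in one pass, which is what makes the tags “dynamic.”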

For multimodal AI, this seems extremely important. “Visual concepts can’t be described easily in words,” explains Cody. “You need to do a step to define what it’s like visually. You know, the Barbie Movie?” Cody showed us an image of the look and feel of “Barbie-core”:


“Barbie-Core”

“Like you can’t describe it in words or anything like that. You have to describe it through examples and through this kind of process.”

 

RAG vs. Retrieval Augmented Classification (RAC)

Although we tend to think of these models for their generative capabilities, using the techniques Cody mentioned creates an entirely new concept, which we could call “discriminative AI.” Cody called this process “Retrieval Augmented Classification” (RAC).

  • Generative AI (GenAI) creates or generates new content (text, images, and so on).
  • Discriminative AI classifies or makes decisions about existing data.

RAC could be used to rapidly create or refine classifiers or decision making models for specific concepts. Just as RAG improved GenAI by grounding it in retrieved information, RAC could potentially improve discriminative AI by providing it with more relevant context for its decisions. Cody explains, “What we’ve seen from augmenting GenAI with retrieval could also be applied to discriminative AI tasks, potentially leading to more accurate and efficient classification and decision-making processes, especially for complex or specialized domains.”
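To make RAC concrete, here is a minimal sketch that classifies a new item by retrieving its nearest labeled neighbors in an embedding space and taking a majority vote. The bag-of-words embedding, the example data, and the tag names are illustrative stand-ins, not Coactive AI’s implementation, which would use a learned multimodal encoder:

```python
from collections import Counter
import math

# Toy "embedding": normalized bag-of-words term frequencies. A real RAC
# system would use a learned multimodal encoder (text, image, or video).
def embed(text):
    counts = Counter(text.lower().split())
    norm = math.sqrt(sum(v * v for v in counts.values()))
    return {w: v / norm for w, v in counts.items()}

def cosine(a, b):
    return sum(a[w] * b.get(w, 0.0) for w in a)

# A handful of labeled examples is enough to seed a "dynamic tag."
labeled = [
    ("pink dress glitter party", "barbie-core"),
    ("pink sparkle heels glam", "barbie-core"),
    ("quarterly revenue forecast slide", "business"),
    ("board meeting budget deck", "business"),
]

def retrieval_augmented_classify(query, k=3):
    # Retrieve the k nearest labeled neighbors, then majority-vote their tags.
    q = embed(query)
    scored = sorted(labeled, key=lambda ex: cosine(q, embed(ex[0])), reverse=True)
    votes = Counter(label for _, label in scored[:k])
    return votes.most_common(1)[0][0]
```

Because the classifier is just "retrieval plus a vote," adding a new dynamic tag is as cheap as adding a few labeled examples, which matches the few-shot behavior Cody describes.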

 

Founder Spotlight: Coactive AI Takes RAG Multimodal
Cody Coleman, CEO of Coactive AI

Cody Coleman, CEO of Coactive AI, is an expert at distributed retrieval systems.

Coactive AI helps businesses process and derive insights from vast amounts of unstructured image and video data. Their innovative platform streamlines the traditionally manual process of tagging and searching visual content, using advanced AI techniques to make this data searchable without the need for metadata or annotations.

According to Cody, most organizations use a simple “tag-load-search” algorithm to retrieve and enrich content:

  1. Tag: Content is manually or automatically tagged with metadata like keywords and categories.
  2. Load: Content is loaded into a database and associated with the metadata.
  3. Search: The system retrieves results by matching the search query against the stored tags.

Cody suggests we might benefit by shifting from “tag-load-search” to “load-search-tag,” especially with enterprise-scale multimodal AI. “With multimodal AI,” says Cody, “which is what we do at Coactive, we can flip the process on its head with a load-search-tag approach… Here we can load and index the raw images and videos…and then make them searchable by understanding the pixels or the audio directly.” In other words, to scale massively, we can use AI to tag our content much more efficiently.

 

Lesson 4: Using Humans to Build Minimum Viable Preference

We’ve reached the point of the white paper where it’s time for our readers to put their GenAI applications in front of real users. While we’ve covered best practices for mastering internal context (building high-quality, multimodal data ingestion and retrieval pipelines), we’ve yet to discuss the external goals and preferences of our users. Where context is concerned, “it takes two to tango.” Similar to the issue we raised in Lesson 1, if we can only optimize what’s in production, how do we establish a ground truth? What even is a good answer, anyway?

 

Founder Spotlight: Getting to Ground Truth with Labelbox
Manu Sharma, CEO of Labelbox

Manu Sharma, CEO of Labelbox, has seen how critical golden datasets are to building next-gen applications. Manu shared that even the most advanced enterprises struggle with getting their intelligent applications to answer questions in a manner that’s consistent with their users’ expectations. He states, “At a sufficiently large scale, models are not really behaving as how they would like to. It’s usually evident in user engagement or response quality and feedback.”

Labelbox is a data-centric AI platform that specializes in helping enterprises efficiently manage, label, and optimize large datasets for machine learning (ML). The platform is built to streamline the process of creating high-quality training data through its robust annotation tools, allowing teams to annotate, organize, and iterate on datasets across various media types, including images, text, video, and more.

 

What Is a Golden Dataset?

In the context of RAG applications, a golden dataset is a carefully curated collection of data, typically taking the format of:

  • Input: What was the question?
  • Output: What was the answer?
  • Score: How well did the output answer the input’s prompt according to a judge?

The input and output data can be any combination of modalities, but the ultimate goal is to serve as a benchmark for training, fine-tuning, and evaluating RAG applications.

The use of a golden dataset is crucial because it ensures that internal data mastery translates into a seamless and relevant user experience. By leveraging such a dataset, developers can fine-tune their RAG applications to align with real-world use cases, ensuring that the system delivers outputs that meet user expectations and external objectives.
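The input/output/score format described above can be sketched in a few lines. The field names, the 0–1 score scale, and the example records are illustrative assumptions, not a Labelbox schema:

```python
from dataclasses import dataclass

@dataclass
class GoldenRecord:
    input: str    # the question posed to the RAG application
    output: str   # the answer the application produced
    score: float  # judge rating of how well output answers input (0.0-1.0)

golden_dataset = [
    GoldenRecord(
        input="What is our parental-leave policy?",
        output="Employees receive 16 weeks of paid parental leave.",
        score=1.0,  # judged fully correct against the source document
    ),
    GoldenRecord(
        input="Who approves travel expenses?",
        output="Expenses are reimbursed quarterly.",  # answers the wrong question
        score=0.2,
    ),
]

# A simple benchmark number: mean judge score across the dataset.
mean_score = sum(r.score for r in golden_dataset) / len(golden_dataset)
```

Tracking a number like `mean_score` across versions is what turns the golden dataset into a regression benchmark rather than a one-off evaluation.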

 

The Role of Human-in-the-Loop Data in RAG Systems

Manu believes that because these applications are doing things that have never been done before, it’s hard to simulate human feedback. When it comes to frontier behaviors, there’s no substitute for human experts’ preference data:

The primary way to mitigate and to improve the performance of the system is to figure out…where the system failed to produce the right answer…then figure out a human-in-the-loop process to produce a reference experience or example.

Labelbox’s network of expert labelers and its annotation platform solve the cold-start problem of seeding ground truth with a mix of human intelligence augmented with software. Before putting your application in front of paying customers and risking losing them to suboptimal experiences, you can pay to have Labelbox’s experts generate the first batch of production-grade feedback in a shielded environment. Battle testing with paid experts can offer a lower-risk, high-reward hedge toward establishing a foundation of ground truth to improve upon.

 

Building in the Unknown

Intelligent applications are nondeterministic. Their outputs or behaviors can vary even with the same inputs. For Labelbox, that sometimes means helping enterprises understand really complex activities. “To give you a sense,” Manu says, “some of the best models that are pushing the boundaries of coding capabilities, you really need very advanced software engineers to work for many hours to produce the right example of data or to do an eval on which code is particularly good and why that might be the case.”

Human-in-the-loop data is integral in all kinds of AI operation activities. The most common human-in-the-loop processes include:

  • Classification.
  • Reward data.
  • Production data.

According to Manu, “Evals are very important. And even today, it continues to be the case that human reviewed ground truth is the gold standard.” Some of the metrics for assessing RAG quality include “plain concatenated text, percentage of words missed, word order, accuracy on columns, accuracy on rows, content accuracy, as well as self-predicted accuracy – all those sorts of things.”
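One of the metrics Manu mentions, “percentage of words missed,” can be made concrete by comparing a system answer against human-reviewed ground truth. This is a minimal illustrative implementation, not Labelbox’s:

```python
def percent_words_missed(reference: str, answer: str) -> float:
    """Share of ground-truth words that the system answer never produced."""
    ref_words = reference.lower().split()
    answer_words = set(answer.lower().split())
    missed = [w for w in ref_words if w not in answer_words]
    return 100.0 * len(missed) / len(ref_words)

# Ground truth reviewed by a human expert vs. a model's answer.
reference = "the invoice total is 42 dollars"
answer = "the total is 42"
# Missed words: "invoice" and "dollars" -> 2 of 6 reference words
```

Simple word-level metrics like this are cheap proxies; the point of human-reviewed ground truth is that the reference itself is trustworthy enough to anchor them.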

 

Lesson 5: Evolving through Production-driven Development

Production is a continuously evolving process. Even if an application has shipped to paying customers, the initial version will look and behave very differently than later iterations, whether it’s the second, fifth, or hundredth version. Just like our interactions with AI systems, there’s no “single shot” when it comes to production: each deployment is an opportunity to learn, iterate, and improve.

Feedback from production users is critical in shaping both the performance and user experience of an intelligent system. This feedback helps identify areas where the interface could be more intuitive, or where the system’s responses need refinement to better meet user needs.

 

Founder Spotlight: Incorporating Real-Time Feedback with RAGAS
Jithin James, co-founder of RAGAS

As we’ve outlined extensively up to this point, RAG-powered applications have many levers to pull to improve performance. As such, collecting and incorporating feedback into these intelligent systems requires specialized evaluation frameworks. Jithin James, co-founder of RAGAS, has open-sourced his beliefs on what such a specialized system would entail.

When building and evaluating an LLM application, there are several key components to consider. You’ll have assets, such as the datasets and models, the application itself that performs the desired tasks, metrics or evaluation criteria to measure performance, and tools for logging and visualization.

From our perspective, much of the focus remains on two critical aspects: creating a good test set and determining how to effectively measure performance. Both of these areas are nuanced and require deeper exploration. For instance, there are many layers of complexity even within these two areas alone.

RAGAS is an open-source platform aimed at automating the evaluation of RAG systems. It fills a critical gap by offering metrics to evaluate RAG pipelines without relying on human annotations, making it an important tool for developers working on LLM-based applications. It’s a comprehensive framework for assessing these systems across several dimensions, such as faithfulness, precision, and relevance of the retrieved data.
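RAGAS’s faithfulness metric uses an LLM judge to verify the claims in an answer against the retrieved context. As a dependency-free approximation of the idea, the sketch below scores how many answer sentences have mostly-lexical support in the context; the 0.5 threshold and sentence-level granularity are simplifying assumptions, not the library’s actual method:

```python
def faithfulness_proxy(answer_sentences, context, support_threshold=0.5):
    """Fraction of answer sentences whose words mostly appear in the context."""
    context_words = set(context.lower().split())
    supported = 0
    for sentence in answer_sentences:
        words = sentence.lower().split()
        overlap = sum(1 for w in words if w in context_words) / len(words)
        if overlap >= support_threshold:
            supported += 1
    return supported / len(answer_sentences)

context = "the warranty covers parts and labor for two years"
answer = ["the warranty covers parts for two years",
          "repairs are free forever"]  # a claim with no support in the context
```

A low faithfulness score flags exactly the failure mode RAG is supposed to prevent: an answer drifting away from the retrieved evidence.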

 

Customizable Failure Responses

“We handle failures at different levels. At the API level, you can build logic into exactly what you want to do, looking at the output of that, or even at the whole application.”

“We call it Production-driven Development,” says Jithin, “but the whole idea is that, so even when you’re building ML applications, you have a test set, a set of metrics, and then what you do is you try to get an objective way to measure what is happening.”

Key Components of RAG Evaluation

  • Test set creation
  • Metrics for assessing RAG performance
  • Automated evaluation processes

In the RAGAS implementation, “There are two parts of it. 1. The innovations on the model-assisted eval. 2. The innovations on the tracing UI and how to visualize it.” It’s an iterative process, says Jithin.

“You try to figure out why that’s happening; that’s the workflow we are advocating for. The community is slowly figuring this out… It won’t be just a lot of code and paradigms (all this stuff will exist), but also model-assisted evaluation and a testing platform.”

In practice, production-driven development yields a faster time-to-production when working with GenAI models. It improves reliability and performance, and results in better alignment with real-world use cases.

 

Lesson 6: Performance Depends on the Whole System

While many enterprises are successfully integrating GenAI tools into their workflows, our research suggests that very few have moved beyond the stage of building out GenAI applications. There’s a big gap between a proof of concept and a production-ready application. “As your business evolves to Day 2, you might be launching in a new country, and the way you use LLM flows will inevitably be slightly different,” says Jeu George, CEO of Orkes.

 

Founder Spotlight: Orchestrating Intelligent Systems with Orkes
Jeu George, CEO of Orkes

Orkes is a platform specializing in workflow orchestration, designed to help developers build and scale distributed applications with enhanced observability, security, and durability. Orkes builds upon the open-source Conductor project, which the team initially developed at Netflix. This orchestration tool, now widely adopted across industries, allows businesses to streamline complex workflows, including microservices and event-driven architectures.

Jeu explains that the challenge involves, “building out that LLM model and then going to existing applications where some of the services may be written in Java, some may be in Python, some may be in Golang, and now you are trying to use this LLM model in the right place in the right app.” To us, it sounds a lot like GenAI needs DevOps.

 

Model Selection and Routing

Every major AI company we interviewed for this white paper admitted to needing to use multiple models to either balance costs, improve functionality with domain-specific models, or improve system performance in the face of the long latency times experienced with larger models. Although bleeding-edge startups are usually routing between different GenAI models, that might not be the case for enterprises.

As Jeu explains, you have to choose the right model for specific tasks, and dynamic routing is based on performance. “So, when it comes to model routing there isn’t one way of figuring out what is the best model to answer it… The best would be based on the performance or the cost or the latency, whatever might be the criteria there.”

For many enterprises, this is going to involve hybrid approaches: combining LLMs with traditional ML models. Traditional ML classifiers—decision trees, support vector machines (SVMs), k-nearest neighbors (KNN), and so on—still represent the vast majority of deployed AI. They’re often used in tasks where structured data is involved, such as predicting customer churn or classifying images in limited, domain-specific contexts. These models are typically simpler, faster to train, and well-suited to tasks involving smaller, labeled datasets. They also tend to be more interpretable, making them the go-to solution for structured datasets and scenarios where understanding model decisions is critical.

Jeu explains this as “integrating deterministic and probabilistic models”:

“You could have trained a simple, classifier model on your data, so you don’t really need an LLM to do that. Depending upon the question, instead of asking an LLM, I could just route it to my own custom model, which could be very tiny, which would be very inexpensive to run as well.”
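Jeu’s routing idea can be sketched as a dispatcher that tries an inexpensive deterministic classifier first and falls back to an LLM. The intents, canned responses, and the `call_llm` stub are hypothetical placeholders; a production router would also weigh measured cost, latency, and accuracy:

```python
from typing import Optional

def tiny_intent_classifier(query: str) -> Optional[str]:
    """A cheap, deterministic model covering known, narrow intents."""
    if "order status" in query.lower():
        return "Your order status is available under Account > Orders."
    if "reset password" in query.lower():
        return "Use the 'Forgot password' link on the sign-in page."
    return None  # not confident -> defer to the LLM

def call_llm(query: str) -> str:
    # Placeholder for an expensive general-purpose model call.
    return f"[LLM answer for: {query}]"

def route(query: str) -> str:
    # Try the inexpensive deterministic model first; fall back to the LLM.
    answer = tiny_intent_classifier(query)
    return answer if answer is not None else call_llm(query)
```

The design choice is exactly the one Jeu describes: most traffic never needs the large model, so the tiny, inexpensive path handles it, and the LLM is reserved for the open-ended remainder.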

 

Lesson 7: RAG Isn’t a Single Technology

At this point, you likely realize that RAG in practice isn’t really one specific technology. In “Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks,” when Lewis et al. formalized and popularized RAG at Facebook AI Research, they were actually introducing a very specific method of building “RAG models.” The output of the suggested system is a single “contextualized” generative model. The components work together by using ML across the entire system, not just the language model.

 

Large-Scale Systems

“It’s not just about language models or Vector Databases,” says Douwe Kiela, CEO of Contextual AI. “It’s about the entire system… The model is maybe 10-20% of the entire system. And in the end, it’s about how all these parts work together.”

At the point a person realizes that their system might actually be Frankenstein’s Monster, they’re likely surprised it works at all. It goes to show how powerful this technology is that it continues to be useful even when built on stale intelligence. The reality is that these deviations from the optimal system matter mostly when the ingestion, retrieval, or extraction pipelines are complex or operating at a very large scale. As Douwe says, “Extraction, especially at scale, is much more difficult than most people anticipate.”

Harkening back to the original RAG paper, Contextual AI can take any pretrained model and then use end-to-end ML to build what they’ve termed RAG 2.0. By focusing on end-to-end training, they improve performance dramatically for complex systems. In simple terms, they combine all the components of RAG (retrieval, augmentation, and generation) by building a model that optimizes across all of them. Douwe explains, “Our approach is to let them grow up together, learning to work in unison from the beginning. This integrated growth strategy leads to specialization and maximized performance.”

 

RAG and Fine-Tuning

For ML nerds, Contextual AI’s approach is brilliant. By using end-to-end ML, you can back-prop loss through the entire system, not just the language model! Or as Douwe says,

“RAG allows us to generalize to new data as it is received, while fine-tuning is used to maximize the performance of the system.”

It’s a false dichotomy that you have to choose between fine-tuning and RAG. In the RAG 2.0 system, they use an end-to-end pretrained contextual model, which “allows for a fast feedback loop with Kahneman-Tversky Optimization (KTO), enabling direct incorporation of feedback.” In other words, they can rapidly incorporate new information or corrections into their contextual LMs without lengthy retraining processes.

There are many podcasts and experts that will tell you to avoid fine-tuning, but what this advice really represents is that you need to understand where the effort should be spent for your organization.

Is it in maximizing the ability of the model to work with a large corpus of data, or is it in integrating business rules and governance?

 

Founder Spotlight: Building RAG 2.0 with Contextual AI
Douwe Kiela, CEO of Contextual AI

Douwe Kiela, CEO of Contextual AI, is one of the authors of this seminal paper. Contextual AI creates enterprise-grade LLMs. These models are designed to be highly customizable, secure, and efficient for real-world business applications. From Douwe’s point of view, “A typical RAG system today uses a frozen off-the-shelf model for embeddings, a vector database for retrieval, and a black-box language model for generation, stitched together through prompting or an orchestration framework. This leads to a ‘Frankenstein’s monster’ of GenAI: the individual components technically work, but the whole is far from optimal.”

This makes sense with what we learned about embedding models earlier. The embeddings being used in many RAG systems are completely separate from the model they’re being used with. Ingestion of the candidate documents, the embedding model, and retrieval are all components that affect each other. As Douwe explains, in the idealized system, “RAG allows us to generalize to new data as it is received, while fine-tuning is used to maximize the performance of the system.”

 

Lesson 8: Enforce Governance by Building for the Right Trade-Offs

Douwe Kiela explains Contextual AI’s approach as: “We bring the model to the data to ensure privacy. Auditability is also a crucial feature. With our model, because it’s more integrated, you can trace back exactly where the data came from.” Depending on the nature of how data flows through your enterprise and incorporates into your system, this might be the most salient way to build the system. It helps to step back and think about how enterprise users and applications are using search technology now.

 

Vector vs. Keyword Search

Vector search techniques are often called “semantic search,” as opposed to traditional keyword search.

Everyone has used keyword search. You typically try to find something by remembering a unique word in the document or a term that appears frequently. If that exact word doesn’t exist in any document, you get no results. Undoubtedly, every enterprise has a system that uses keyword search somewhere.

Vector search, on the other hand, always returns results, even if the query and documents aren’t closely related. For example, a vector search for “employee onboarding” might return results about “new hire orientation,” even if those exact words aren’t used. However, depending on how the embeddings were created, it might also return results about “layoffs” if, say, fundraising isn’t going well…
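The contrast is easy to see in code. Below, keyword search returns nothing when the query’s exact words are absent, while a toy “semantic” search still surfaces the related document. The hand-made concept map stands in for a learned embedding model and is purely illustrative:

```python
docs = [
    "new hire orientation checklist",
    "quarterly layoff planning memo",
]

def keyword_search(query, docs):
    # Exact-match: a document is returned only if it shares a literal word.
    terms = set(query.lower().split())
    return [d for d in docs if terms & set(d.lower().split())]

# Stand-in for a learned embedding: map words to crude concept ids.
CONCEPTS = {"employee": "staff", "hire": "staff",
            "onboarding": "join", "orientation": "join", "new": "join",
            "layoff": "exit"}

def semantic_search(query, docs):
    # Match on shared concepts rather than literal words; always returns
    # every document, ranked by concept overlap (best match first).
    q = {CONCEPTS.get(w, w) for w in query.lower().split()}
    scored = [(len(q & {CONCEPTS.get(w, w) for w in d.lower().split()}), d)
              for d in docs]
    scored.sort(reverse=True)
    return [d for _, d in scored]
```

Note that `semantic_search` ranks even unrelated documents, which is exactly the double-edged behavior described above: relevant results without exact words, but also low-relevance results that keyword search would simply omit.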

 

Information Access Panic

Immediately after introducing semantic search, the most common “security training” is teaching all the staff to set the confidentiality levels of their documents.

 

Hallucinations

RAG can easily hallucinate, which is why Vectara even maintains the Hughes Hallucination Evaluation Model (HHEM) leaderboard to evaluate how often an LLM introduces hallucinations when summarizing a document.

In practice, RAG techniques are more complex than in these simple examples. For example, Vectara combines vector and keyword search in what they call “Hybrid Retrieval.” Amr Awadallah, CEO of Vectara, explains, “It’s not just vector search, but vector search augmented with keyword search (lexical search). And Vectara uses specialized techniques to optimize how fast the documents are actually retrieved.” Think of the Dewey Decimal system instead of going through documents one by one.
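Hybrid retrieval of the kind Amr describes can be sketched as a weighted blend of a lexical score and a vector-similarity score. The bag-of-words scorers and the 0.5 blending weight below are illustrative assumptions, not Vectara’s internals:

```python
import math
from collections import Counter

def lexical_score(query, doc):
    """Fraction of query terms that appear verbatim in the document."""
    q, d = query.lower().split(), set(doc.lower().split())
    return sum(1 for w in q if w in d) / len(q)

def vector_score(query, doc):
    """Cosine similarity over bag-of-words vectors (embedding stand-in)."""
    qv, dv = Counter(query.lower().split()), Counter(doc.lower().split())
    dot = sum(qv[w] * dv.get(w, 0) for w in qv)
    norm = (math.sqrt(sum(v * v for v in qv.values()))
            * math.sqrt(sum(v * v for v in dv.values())))
    return dot / norm if norm else 0.0

def hybrid_score(query, doc, alpha=0.5):
    # alpha blends exact-match (lexical) and similarity (vector) evidence.
    return alpha * lexical_score(query, doc) + (1 - alpha) * vector_score(query, doc)
```

Blending the two signals lets exact keyword hits boost precision while the vector side keeps recall for paraphrased queries; tuning `alpha` trades one off against the other.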

 

Founder Spotlight: Balancing Great Powers and Greater Responsibilities with Vectara
Amr Awadallah, CEO of Vectara

Enterprises adopting GenAI tools quickly learn that there’s stale, potentially dangerous information with inappropriate access levels, and that many systems have far more visibility than was previously believed. You need “very strict role-based access control,” says Amr Awadallah, CEO of Vectara (he co-founded Cloudera in 2008).

Vectara’s mission is to democratize access to powerful, trustworthy AI solutions, especially in search and information retrieval, helping organizations build GenAI applications while avoiding common pitfalls like bias and copyright issues. For enterprises to be successful at deploying AI tools, understanding retrieval might be the best way to break apart this problem. “The nice thing about RAG,” says Amr, “is because we’re not putting the needles inside the fine-tuning of the model itself, it is practical to limit who can see it and who cannot see it.”
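The strict role-based access control Amr calls for is typically enforced at retrieval time: candidate documents are filtered against the caller’s roles before anything reaches the generator. A minimal sketch with made-up roles and documents:

```python
# Each document carries the set of roles allowed to see it.
DOCUMENTS = [
    {"text": "2025 salary bands by level", "roles": {"hr", "exec"}},
    {"text": "employee handbook: PTO policy", "roles": {"all"}},
    {"text": "pending acquisition term sheet", "roles": {"exec"}},
]

def retrieve(query, user_roles):
    """Return only documents the caller's roles entitle them to see.

    Enforcing the filter at retrieval time means restricted text can
    never be injected into the LLM prompt for an unauthorized user.
    `user_roles` is a set, e.g. {"hr"}.
    """
    visible = [d for d in DOCUMENTS
               if "all" in d["roles"] or user_roles & d["roles"]]
    # (A real system would rank `visible` against `query`; omitted here.)
    return [d["text"] for d in visible]
```

This is the structural advantage of RAG over baking knowledge into model weights: the access check happens on documents, where permissions already live, rather than inside an opaque fine-tuned model.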

 

Conclusion

Through our conversations with startup leaders, a clear pattern emerged: the evolution of RAG and intelligent systems is moving toward increasingly sophisticated and automated interactions. While current implementations focus primarily on retrieval and generation, the future points toward systems that can not only understand and respond but also take concrete actions based on that understanding. This progression reflects a natural maturation of the technology, as explained by Amr Awadallah’s three-phase model.

 

Getting to the Action Engine

Amr taught us that, “at the beginning of any new technical building block, you always will get people building components as opposed to building the block itself, the solution itself… The IKEA developer market is going to be way bigger than the Home Depot developer market.” The technical names are prescriptive vs. descriptive development. “In prescriptive we tell you what to do (a la the recipe from IKEA to assemble a table), while for descriptive we tell you what you can do with the individual pieces (Home Depot raw ingredients) and you have to figure out how to make the table.”

And that’s what we learned in the development of this white paper. Some enterprise use cases are going to involve incredibly specific descriptive development for complex workflows, and some are going to involve adopting more generalized, prescriptive components.

Amr breaks the adoption of this technology into three phases:

  1. Search Engines – The Past “You search, you get back a list of documents, but you must go through them one-by-one until you find the answer. That’s the old world.”
  2. Answer Engines – The Present “You search, and then you get back an answer to a question. You don’t have to click on any documents. The answer is going to be right there.”
  3. Action Engines – The Future “Action engines take the information provided in an answer and use it to complete an action on behalf of the user. And that’s where you hear all these things about agents and AI agents, and that’s what the action is about. It’s now taking the action on my behalf.”

Recap

The landscape of generative AI (GenAI), particularly in Retrieval-Augmented Generation (RAG), is rapidly evolving, paving the way for innovative solutions across various sectors. As organizations transition from basic AI implementations to sophisticated GenAI systems, the key to success lies in a comprehensive understanding of user needs, application context, and data characteristics. To maximize the potential of these technologies, startups and enterprises should prioritize:

  1. Integration and Flexibility: Move toward more integrated, end-to-end systems like RAG 2.0, which promise greater flexibility and performance.
  2. Planning for Model Orchestration: Tools and platforms that can manage the complexities of model selection, data privacy, and workflow management are essential for enterprise AI deployment.
  3. Multimodal Capabilities: AI systems must seamlessly handle various data types and modalities, from text and images to audio and structured data.
  4. Human-AI Collaboration: Effective AI systems will need to balance automation with human expertise, especially in high-stakes or regulated industries.
  5. Continuous Improvement: Regularly assess and enhance AI capabilities through real-world testing to maintain relevance and effectiveness.
  6. Staying Informed: Stay updated on advancements in AI models and implementation techniques to leverage emerging technologies for a competitive edge.

As we look to the future, it’s clear that the success of AI in enterprise settings will depend not just on the power of individual models but on the sophistication of the systems that manage, orchestrate, and optimize these models. The convergence of advanced RAG techniques, intelligent orchestration, and human-centric design promises to unlock new possibilities in AI applications, driving innovation and efficiency across industries.

The road ahead will require continued collaboration between AI researchers, software engineers, and domain experts to build systems that are not only powerful and flexible but also trustworthy and aligned with human values. As these technologies mature, they have the potential to transform how businesses operate, how knowledge is accessed and applied, and how humans and AI collaborate to solve complex problems.

In this evolving landscape, staying informed about the latest developments in RAG, AI orchestration, and related technologies will be crucial for organizations looking to harness the full potential of AI while navigating the challenges of deployment, scalability, and ethical considerations.

Stay ahead of AI advancements – join Microsoft for Startups and gain insights for the future.

Microsoft for Startups empowers early-stage founders with the tools, resources, and mentorship needed for growth and innovation. With access to advanced technologies, expert one-on-one guidance, and a global network, we help startups stay competitive in a rapidly evolving landscape.

The post RAG and the Future of Intelligent Enterprise Applications: Insights from Startup Leaders appeared first on B Capital.

]]>
Climate 3.0: Investing in Scalable, Profitable Climate Solutions https://b.capital/insights/climate-3-0-investing-in-scalable-profitable-climate-solutions/ Thu, 13 Mar 2025 14:50:04 +0000 https://b.capital/?p=6580

The post Climate 3.0: Investing in Scalable, Profitable Climate Solutions appeared first on B Capital.

]]>
By: Jeff Johnson, Don Wood, and Karly Wentz

 

I. Introducing Climate 3.0

Climate investing is at a turning point. After two decades of rapid capital deployment, technological breakthroughs, and shifting policy landscapes, we are now in a new phase—one that moves beyond impact at the expense of economics toward a more disciplined, market-driven approach. This next phase, which we call Climate 3.0, shifts the focus to scaling businesses when they are ready to thrive on their own economic merits.

Past cycles—Cleantech 1.0 and Climate 2.0—helped lay the foundation for today’s market by advancing technology, driving down costs and increasing awareness of climate solutions. There were some high-profile successes in these past cycles; however, many companies struggled to commercialize effectively, resulting in billions invested with limited returns.

Our analysis of these past cycles led us to develop the Climate 3.0 framework, which we’ve been closely studying since mid-2024. The lesson is clear: for climate solutions to succeed, they must be grounded in strong economic fundamentals—not just policy incentives, corporate commitments, or high CO2 emission reductions.

 

Over the past 12 months, we’ve refined our investment strategy to address the challenges presented by previous climate investment cycles, adopting a more fundamentals-driven approach:

  1. Prioritizing strong unit economics and economic value.
  2. Focusing on scalable solutions with demonstrated real-world adoption.
  3. Applying rigorous growth investing principles to climate investments.

This article examines lessons from past cycles, key trends shaping Climate 3.0, and B Capital’s strategy to capitalize on the market opportunity. It is the first in a series exploring our investment methodology, sector-specific opportunities and the companies leading this next phase.

Our thesis: the climate winners of the future will be those delivering sustainable, profitable growth—businesses that succeed not because of subsidies or mandates, but because they offer better, cheaper and more efficient solutions, often with co-benefits such as jobs and national security.

 

II. Climate 1.0 (2006–2012): Betting on Disruption Without the Right Foundations

Climate 1.0 (2006–2012) was the “clean tech” era, fueled by venture capital enthusiasm for renewable energy, electric vehicles (EVs) and biofuels. Investors poured $25B+¹ into climate startups, betting on market disruption and technological breakthroughs.

During this period, some companies, like Tesla, defied the odds, proving that clean technology could compete with legacy industries. However, most failed due to fundamental flaws in unit economics and commercialization.

 

Challenges and Lessons from Climate 1.0

Despite early optimism, most Climate 1.0 companies struggled to achieve sustainable growth due to several key challenges:

  • Capital-intensive business models struggled to scale. Without patient capital or strong market incentives, many startups struggled to secure follow-on funding. High-cost models with long payback periods proved difficult to sustain, particularly in infrastructure-heavy sectors.
  • Markets weren’t ready for broad adoption. While technologies like biofuels were technically viable, their higher costs compared to conventional alternatives hindered adoption. Additionally, customers were reluctant to invest in the infrastructure needed to leverage these innovations. Even the best technology can struggle without favorable market conditions.
  • Government incentives alone couldn’t drive sustainable demand. Many companies relied heavily on subsidies and tax credits, but when policy support shifted, their business models became unsustainable. Long-term viability requires economic competitiveness and cannot rely solely on policy-driven demand.
  • Scientific progress doesn’t follow a set timeline. Breakthroughs can’t be rushed, and many companies struggled when commercialization timelines didn’t align with the pace of scientific discovery. Building a business ahead of the science proved risky and unpredictable.

While Climate 1.0 demonstrated the potential of clean technology, it also underscored the need for better market and industry alignment. The experience reinforced that commercialization is more important than innovation alone. Government support is not a substitute for commercial viability and companies must compete on price and performance to survive. These lessons shaped Climate 2.0, which brought more expertise and capital but introduced new challenges of its own.

 

III. Climate 2.0 (2013–2023): More Capital, More Expertise—But Challenging Fundamentals

Climate 2.0 represented a shift from the early clean tech era, rebranding the sector as “climate tech” and attracting a more sophisticated investor base. Institutional investors, corporate venture arms, high-net-worth individuals, family offices and government-backed initiatives like the Inflation Reduction Act (IRA) helped drive over $200 billion2 in capital into the space. This period was characterized by optimism, a belief in the scalability of climate solutions and a rush to deploy capital into technologies that promised to accelerate the energy transition. Furthermore, many investors backed businesses with high carbon impact, betting that the decarbonization benefits would create value—even when the solutions were significantly more expensive than existing alternatives. However, that assumption proved difficult to sustain.

This wave of investment was also enabled by a historically low-interest rate environment, which allowed capital-intensive models to thrive despite weak underlying economics. With abundant, cheap capital, many climate companies were able to sustain operations and expand aggressively, even if their business models were not fundamentally sound. This environment obscured deep structural challenges, resulting in overinvestment in speculative sectors and inefficient capital allocation.

 

Challenges and Lessons from Climate 2.0

Despite an influx of capital and growing institutional interest, many Climate 2.0 companies struggled to achieve long-term success due to several key challenges:

  • Misaligned incentives weakened business models. Many companies were built around voluntary carbon pricing and corporate commitments, assuming regulatory support and net-zero pledges would create a sustainable revenue stream. But these mechanisms proved volatile and unreliable, leaving businesses exposed when corporate priorities shifted or regulatory frameworks failed to materialize as expected.
  • Valuation bubbles and capital misallocation distorted the market. The SPAC boom and broader tech euphoria led to inflated valuations, with many climate companies raising substantial funding at unrealistic multiples. This drove overfunding of underperforming businesses, while crucial sectors remained underfunded.
  • Overdependence on cheap capital proved unsustainable. Near-zero interest rates fueled rapid growth but encouraged unsustainable practices. Companies focused on expansion and capital-intensive technologies over profitability, creating vulnerabilities when rates rose sharply in 2022,3 making many business models untenable without cheap financing.

While Climate 2.0 brought greater expertise and institutional capital, it also reinforced key investment lessons: commercialization and scale matter more than innovation alone, valuation discipline is essential to avoiding market corrections, and capital efficiency is critical in a higher interest rate environment. Companies must prioritize profitability, scalability and cost discipline—long payback periods and capital-intensive models are difficult to sustain. These lessons have shaped the emergence of Climate 3.0, where market-driven, economically viable businesses will define the next era of climate investing.

 

IV. The Climate 3.0 Investment Framework: A More Disciplined, Market-Driven Approach

Climate 3.0 builds on the lessons from the past two decades of climate investing, shifting toward an investment strategy that prioritizes economic fundamentals, scalability, partnerships and resilience. This phase is driven by long-term structural trends, not policy shifts, and will persist regardless of global political circumstances.

While early-stage innovation is crucial, R&D investment should be prioritized when a clear, scalable commercial path exists. New technologies will drive climate solutions, but the focus must remain on those that are both impactful and economically viable. Business models whose costs—such as labor, materials or operations—rise disproportionately with growth rarely achieve lasting success, as these inflationary models struggle to reach sustainable profitability. In this phase, we must prioritize solutions that deliver both meaningful impact and financial viability.

 

Read more about how B Capital uses the Adoption Readiness Levels (ARL) framework to evaluate whether climate companies can scale beyond innovation in:

After Decades of Focus on Technology Innovation, It’s Time for Climate Companies to Ask: Can We Scale This?

 

What Defines Climate 3.0?

The companies that succeed in Climate 3.0 will share key characteristics that set them apart from the models of the past:

  • Market-Driven and Economically Competitive – The best climate solutions will win because they offer cost savings, efficiency gains or superior performance—not because they rely on subsidies or policy mandates. These solutions will attract ready customers with the budget and willingness to adopt them.
  • Scalable and Capital Efficient – Growth-stage climate companies must demonstrate clear paths to profitability without relying on large, continuous funding rounds. Some of the best companies will also develop innovative partnerships to be more efficient with capital and resources.
  • Resilient to Market Cycles – Climate investing should not be dependent on regulatory volatility, green premiums or short-term economic conditions. The strongest companies will be those that can adapt to changing market dynamics and maintain demand in any environment.
  • Technologically Ready for Deployment – Innovation remains essential, but the focus is on scaling up companies with proven technologies, not speculative R&D-heavy models. The ability to move from pilot stage to full deployment is critical.
  • Aligned with Structural Megatrends – The Climate 3.0 investment landscape is shaped by major shifts such as electrification, supply chain realignment, and climate adaptation. Companies positioned to capitalize on these trends will be best equipped for long-term success.

 

Evolution of Climate Tech Investing4,5,6

(B Capital, 2025)

 

Conclusion: Investing in the Future of Climate at Scale

At B Capital, our analysis of the climate investment landscape has led us to believe that the most impactful climate solutions will come from companies that can scale profitably and endure over time. As growth investors, our role is to back businesses that have moved beyond early-stage development and are positioned for rapid expansion with strong commercial traction.

We care deeply about impact, but we also recognize that the companies that achieve real, lasting change will be those that can compete and thrive on their own economic merits. Venture capital plays a specific role within the broader climate investment landscape, and not all climate solutions are suited for VC funding.

Building on the successes and challenges of Climate 1.0 and 2.0, we now have a clearer understanding of what works and where capital should be deployed for the greatest impact and returns. For growth-stage climate companies, success will come from their ability to scale profitably, withstand market shifts, and deliver durable returns—without depending on subsidies or policy-driven demand.

In the next article, we will explore the megatrends shaping Climate 3.0 in greater depth, outlining the investment opportunities that will define this next era.

 


LEGAL DISCLAIMER
All information is as of 03.12.25 and subject to change. Certain statements reflected herein reflect the subjective opinions and views of B Capital personnel. Such statements cannot be independently verified and are subject to change. Reference to third-party firms or businesses does not imply affiliation with or endorsement by such firms or businesses. It should not be assumed that any investments or companies identified and discussed herein were or will be profitable. Past performance is not indicative of future results. The information herein does not constitute or form part of an offer to issue or sell, or a solicitation of an offer to subscribe or buy, any securities or other financial instruments, nor does it constitute a financial promotion, investment advice or an inducement or incitement to participate in any product, offering or investment. Much of the relevant information is derived directly from various sources which B Capital believes to be reliable, but without independent verification. This information is provided for reference only and the companies described herein may not be representative of all relevant companies or B Capital investments. You should not rely upon this information to form the definitive basis for any decision, contract, commitment or action.

 

SOURCES

  1. Benjamin Gaddy, Varun Sivaram, and Francis O’Sullivan, “Venture Capital and Cleantech: The Wrong Model for Clean Energy Innovation,” MIT Energy Initiative Working Paper, July 2016, https://energy.mit.edu/wp-content/uploads/2016/07/MITEI-WP-2016-06.pdf.
  2. Boston Consulting Group, “From Clean Tech 1.0 to Climate Tech 2.0: A New Era of Investment Opportunities,” June 30, 2023.
  3. International Monetary Fund, Corporate Sector Vulnerabilities and High Levels of Interest Rates (Washington, DC: International Monetary Fund, 2025), https://www.imf.org/en/Publications/Departmental-Papers-Policy-Papers/Issues/2025/01/08/Corporate-Sector-Vulnerabilities-and-High-Levels-of-Interest-Rates-556372.
  4. “State of Climate Tech 2023: How can the world reverse the fall in climate tech investment?” PwC, October 2023; “$32bn and 30% drop as market hits pause in 2023,” Sightline Climate, January 2024.
  5. Information as of September 30, 2024. This information was calculated in the sole discretion of B Capital and therefore such information is based on the best estimates of B Capital. If B Capital were to use different methodologies, the results may be materially different. As a result, prospective investors should not place undue reliance on the information herein and such information should not be used as a basis for investment decisions regarding an investment in any investment fund managed by B Capital. Applying historical average annual energy transition deployment growth of 25% (2013 – 2023, per BNEF), to annual climate tech investment per Sightline Climate from 2024 – 2034.
  6. Average federal funds rate; 2024 represents 2024 average YTD federal funds rate.

The post Climate 3.0: Investing in Scalable, Profitable Climate Solutions appeared first on B Capital.
