Betaworks - Medium

Announcing Betaworks Fund 3.0
Tue, 22 Jul 2025

Today, we’re thrilled to announce that we’ve closed $66 million for Betaworks Fund 3.0 — a pre-seed/seed stage venture fund dedicated to investing in, building with, and supporting exceptional founders.

Betaworks Fund 3.0 builds on the momentum of Fund 1.0 ($48M, 2016) and Fund 2.0 ($46M, 2020) and continues our legacy of backing creative technologists who are shaping the future with ML/AI.

Alongside our core pre-seed/seed investments, Betaworks also backs companies through our Camp program, which has been around since 2016. Camp is a thesis-driven, cohort-based investment program that focuses on zero-to-one product development within a specific theme.

HuggingFace graduated out of the very first Camp program (BotCamp) in 2016, which was focused on NLP and conversational interfaces.

Since Camp’s launch, we’ve backed approximately 100 companies across 12 classes. Cohorts are small by design (averaging eight companies per class) and highly selective.

Camp themes throughout the years include NLP, computer vision, synthetic data/media, voice/audio generation, AI agents, as well as augmentative and application layer AI.

Our next (Fall 2025) cohort — with the theme AI Camp: Interfaces — begins on August 18.

Betaworks has been around since 2008. At that time, it wasn’t a venture firm at all. Betaworks started just like any other venture-backed company, but the product we delivered was startups.

Arguably, Betaworks was the first real venture studio, and it was the birthplace of household names like GIPHY, Dots, Chartbeat, and Bitly. We established ourselves as a pillar of the New York tech ecosystem and have grown alongside it over the past 17 years.

With early bets in companies like Tumblr, Kickstarter, and Everlane, and more recent investments in HuggingFace, RecRoom, The Browser Company, and Granola, we’re thrilled to continue building. After all, the Betaworks team are all builders at heart.


Announcing Betaworks Fund 3.0 was originally published in Betaworks on Medium, where people are continuing the conversation by highlighting and responding to this story.

Apply to Betaworks’ Fall AI Camp: Interfaces
Wed, 04 Jun 2025

A call for founders reimagining how humans experience AI

TL;DR

The next Betaworks Camp theme is Interfaces.

We’re looking for early-stage startups that are rethinking the human-machine boundary through new interface paradigms: voice-native tools, ambient agents, novel input/output flows, multi-modal or invisible interfaces, and sophisticated prompt-based interactions. We want to see products with a strong point of view, products that treat the interface as fundamental.

If you’re designing how AI is experienced, not just what it can do, we want to talk. Apply here.

From “app layer” to “interface layer”

In past Camps, we explored agentic systems, native applications, and the fast-shifting AI stack. We’ve seen products that blurred the line between tools and teammates — systems with cognition, memory, even initiative. We find ourselves often returning to the same question: how should people interact with artificial intelligence?

That’s why we’re excited to see new teams that treat the interface as leverage: not as an afterthought, but as the product surface itself.

This Camp builds on everything we’ve learned, but sharpens the focus. We’re looking for novelty at the point of interaction.

Why now?

We believe this is a pivotal moment for interface innovation. Recent technical unlocks have transformed what’s possible at the interaction layer:

  • Multimodal models blur input and output across media in the same inference space
  • Real-time voice enables natural language as an interface in a way distinct from pure text, and remains underadopted largely because the best possible experiences are still being discovered
  • Memory and long context unlock persistent experiences
  • Agent orchestration, and multi-agent orchestration, allow for multi-step, long running, fault tolerant, and adaptive workflows
  • Ambient and spatial hardware open new canvases for interaction

These developments shift the bottleneck from what AI can do to how users experience it. The opportunity now is to design interfaces that couldn’t exist before AI and that feel inevitable once used. As Mary Meeker put it recently: “The question isn’t whether platforms or specialists win — it’s who can abstract the right layer, own the interface, and capture the logic of work itself.”

What are we looking for?

We’re interested in products where the interface is the invention. Here are some attributes and examples…

🧠 Sophisticated Prompting as Interface

Prompting is UX for AI systems. We want to see:

  • The power of the most advanced LM systems can often mask poor prompting. We want to see novel techniques: strategies that push capabilities, use tools in unexpected ways, and are aware of the computing environments they run in. For example, prompts that presume a complex agentic system, or prompts that are self-reflective, retrying, or adaptive.
  • How good are your prompts, really? What makes them better than your competitors’? How thoughtful are you about maximizing performance?
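To make the self-reflective, retrying pattern concrete, here is a minimal sketch of a generate-critique-retry loop. Everything here is illustrative: `call_model` stands in for whatever LLM API you use, and `stub_model` is a toy so the example runs without one.

```python
# Sketch of a self-reflective, retrying prompt loop (illustrative only;
# `call_model` is a stand-in for any LLM API).
from typing import Callable

def reflective_generate(
    call_model: Callable[[str], str],
    task: str,
    max_retries: int = 3,
) -> str:
    """Ask for an answer, ask the model to critique it, and retry on failure."""
    answer = call_model(f"Task: {task}\nAnswer concisely.")
    for _ in range(max_retries):
        critique = call_model(
            f"Task: {task}\nProposed answer: {answer}\n"
            "Reply OK if correct, otherwise explain the flaw."
        )
        if critique.strip().upper().startswith("OK"):
            return answer
        # Adaptive retry: feed the critique back into the next attempt.
        answer = call_model(
            f"Task: {task}\nPrevious answer: {answer}\n"
            f"Critique: {critique}\nProduce a corrected answer."
        )
    return answer

# Toy stub that "fixes" its answer once it has seen a critique.
def stub_model(prompt: str) -> str:
    if "Critique:" in prompt:
        return "4"
    if "Proposed answer: 4" in prompt:
        return "OK"
    if "Proposed answer:" in prompt:
        return "Wrong: 2 + 2 is not 5."
    return "5"

print(reflective_generate(stub_model, "What is 2 + 2?"))  # -> 4
```

The key design point is that the critique is fed back into the retry prompt, so each attempt adapts rather than resampling blindly.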

⚡ Speed and Throughput as Design Constraints

  • Instantaneous feel: Meeting users where they are with perceived latency close to zero, and latency-aware UIs that keep interactions seamless while models think
  • High-throughput loops: Multimodal, information-dense outputs (can you generate visuals or audio instead of the reams of text that models are happy to produce?)
  • If your product has a client, is it optimized so that it isn’t a bottleneck for interaction?
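One common latency-aware pattern behind the "instantaneous feel" point is streaming partial output as tokens arrive rather than blocking on the full response. The sketch below is generic and illustrative; `fake_token_stream` is a stand-in for a real streaming LLM API.

```python
# Sketch of a latency-aware UI pattern: show partial output as soon as
# tokens arrive instead of waiting for the complete response.
from typing import Iterable, Iterator

def fake_token_stream(prompt: str) -> Iterator[str]:
    # A real API would yield tokens as the model generates them.
    for token in ["Inter", "faces ", "are ", "the ", "product."]:
        yield token

def render_streaming(tokens: Iterable[str]) -> str:
    shown = ""
    for token in tokens:
        shown += token
        # A real UI would repaint here; printing approximates the effect.
        print(shown, end="\r")
    print()
    return shown

result = render_streaming(fake_token_stream("What is the product?"))
```

Perceived latency drops because the user sees the first token almost immediately, even if total generation time is unchanged.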

✨ Delight-inducing Interfaces

Yes, we want to know, “How does it feel?”

  • Experience design as moat: Interfaces that go beyond functionality to actually feel good, and that reward intuition and mastery
  • Sensorially rich UIs: Visual, audio, and haptic design that invites emotion, systems that spark curiosity, exploration, and delight

👀 Interfaces That Aren’t (Just) Interfaces

We’re especially excited by interfaces that are:

  • Invisible or ambient: Assistants that observe, infer, and act
  • Embedded and ephemeral: UIs that come and go based on context (if you’re building a low-latency, spontaneous interface with diffusion LMs, let us know)
  • Rethought primitives: Space, identity, or memory as first-order objects that can be manipulated by AI systems

This Camp is for teams that believe the next major shift isn’t just in the models, but in how we experience what they can do.

How Camp works

Camp is a 13-week, in-person program at Betaworks in NYC. International companies are welcome to apply but must be willing to relocate to New York during the program. Participating companies receive:

  • Up to $500K in investment
  • Support from experts in product, GTM, design, business operations, and fundraising
  • Weekly workshops, founder roundtables, pitch practice and speed-dating with investors
  • Physical office space in NYC’s Meatpacking District (a block from the Whitney Museum and The High Line)
  • A capstone Demo Day in front of a room of high-quality investors and innovators

Build the interface layer of AI

We’re not looking for wrappers. We’re looking for windows into the future — interfaces that engage and surprise us with their novelty, and disappear when they should.

👉 Apply to Betaworks AI Camp: Interfaces



Announcing the Application Layer AI Camp Cohort
Tue, 13 May 2025

Almost exactly one year ago, we loudly proclaimed that the market dynamics were such that value would start to accrue at the application layer of the AI ecosystem.

This was not just by virtue of wicked competition at the foundation layer (which would flatten access to the underlying technology and speed up its advancement), but also by the maturation of the tooling and infrastructure layers of the ecosystem.

“We see AI as a phase change in technology akin to the birth of the internet,” we wrote. “And, just like the internet, the services and products built on top of it can form wave-like patterns — replicating existing workflows, augmenting those workflows, and ultimately paving the way for truly native experiences.”

In the back half of 2024, we invested in nine incredible companies building native experiences in AI, from products that turn designers into coders, to AI for collaboration across massive IT/ERP projects, to the AI-native ‘LinkedIn for gen Z.’

We learned a ton.

We learned that increasing test-time compute, along with other techniques that increase the aggregate reasoning capabilities of AI systems (improved CoT, self-consistency sampling, better RL, etc.), would act as a force multiplier for all the things we care about at the application layer (like resilience, flexibility, and the ability to self-improve). We learned that innovation at the interface level has only just begun.

This led us to the App Layer thesis: a doubling down on AI-powered applications, refined to push us toward the future of work, productivity, automation tools for SMBs, and the overall disruption of antiquated vertical industries.

This cohort is made up of nine teams that are harnessing rapidly improving foundation layer capabilities with purpose built design and architecture; building products that deliver value quickly to a well-defined user profile.

Our co-investors in this cohort include Slack Fund, True Ventures, and Fermont Capital. App Layer Camp is proudly sponsored by Mercury.

So, without further ado, meet the teams:

Decode

getdecode.dev

A whiteboard for your ideas that writes software for you.

Betaworks POV: It’s pretty obvious that language models are getting better at the mechanics of coding every day. Our initial attraction to Decode was rooted in their approach to the interface of programming, rather than the code-gen itself. Theirs is one of the most visual and spatially aware approaches to creating working code that we have seen, one that takes full advantage of the visual and conceptual understanding capabilities of the latest multimodal models. They bring years of practical experience to building the tool that they themselves want.

Problem: AI code generation lacks context from scattered product requirements across Notion, Figma, and Miro, creating friction and disconnected conversations.

Solution: IDE-integrated whiteboarding app for organizing requirements, creating wireframes, and generating code using AI that understands all inputs.

Team

  • Francois Laberge (CEO): 20+ years building developer tools
  • Sriraam Raja (Founding Engineer): AI/learning tools expert, Harvard masters at intersection of AI/technology and education

Target Customers: Product-minded developers who both design and code — frontend and design engineers.

Trampoline AI

trampoline.ai

Complex RFP response management for service companies

Betaworks POV: We actually met the Trampoline team over a year ago, when they were building a universal search tool for businesses. But it wasn’t until Q4 2024 that they found strong product-market fit with their sales-oriented users. They realized that they could take the power of their self-improving architecture, combine it with some of the most powerful UI primitives from the web 2.0 era, and unleash LLMs on the highly complex process of answering RFPs within an organization. Not only are they providing strong value to voracious customers, but they are organically solving the context problem of large projects with collaborative (and literal) building blocks.

Problem: Service companies struggle with complex RFPs requiring significant time and stakeholder coordination, especially challenging for SMBs competing against larger firms.

Solution: AI-powered platform automating RFP response process, saving 30–80% time while improving win rates through intelligent task management and knowledge base maintenance.

Team:

Target Customers: Service and tech companies selling to government/large corporations, including IT consulting, architecture, and maintenance services.

TabTabTab

tabtabtab.ai

AI everywhere on your computer without prompting.

Betaworks POV: At Betaworks, we are obsessed with the idea of removing barriers to using AI, and when we met the TabTabTab team they had actually built something that many of us had been dreaming about for a long time. Sometimes, removing those barriers means building software that lives on the client, so that it can sit on a layer above the operating system and all the other applications you use. In practice, that means TabTabTab has built a product that is invokable in milliseconds, can respond to your keyboard and mouse instantly, and has awareness of everything that you, as the user, do. From there, they can build products that can truly fit into every part of your workday and workflow. This is one where watching the demo really helps to understand the total possibility space.

Problem: AI tools are fragmented and require constant context switching, breaking user flow and requiring manual effort.

Solution: Computer layer that seamlessly connects context from screen/clipboard/apps to AI, starting with LLM-powered copy & paste.

Team: Former Bloomberg engineers with experience at Palantir and Citadel, focused on human-computer interaction.

Target Customers: Computer power users who frequently switch between apps, with users in 59+ countries.

Superposition

www.superposition.ai

AI recruiter for startup founders

Betaworks POV: Superposition ticked several boxes for us: a founder with vast domain expertise and taste, a multimodal interface, and the ambition to not just build tools for service providers but to become the service provider. Edmund’s nuanced and opinionated approach to recruiting combined with the technical chops to successfully run a relatively complex multistep process via AI is a powerful combination that is proving highly effective for customers.

Problem: Early hiring challenges: founders manually screen thousands or use agencies that don’t understand startups.

Solution: AI agent collaborating on hire design, candidate matching, and personalized outreach with automated interview scheduling.

Team:

  • Edmund Cuthbert (CEO): 7-year recruiter, hired Brex engineer #1
  • Xiang Li (CTO): Early YC startup engineer with hiring experience

Target Customers: YC and Betaworks startups (Seed-Series A) hiring engineers in SF, NYC, and London.

Hopper

hopper.dev

Vibe planning for scalable AI coding

Betaworks POV: Hopper is the result of Nathan’s and Austin’s genuine pain and frustration, and of their decades of collective experience managing engineering teams at Amazon, Microsoft, and venture-backed startups. The velocity of scaling engineering teams is determined not by the talent of the coders, but by the efficiency of the handoffs from kernel of an idea, to PRD, to spec, to prod. And the Hopper team was ambitious and fearless enough to tell the truth: the handoff problem can’t be solved without unifying those workflows in a single tool.

Problem: AI coding tools fall short at scale, creating mismatched code in enterprise contexts.

Solution: AI planning companion guiding from idea to detailed tickets with technical auditing and organizational context.

Team: 25+ years leading engineering teams in demanding big tech environments.

Target Customers: Currently in trials with companies seeking scalable AI coding solutions.

Afterimage

Democratizing access to justice for patients

Betaworks POV: When Andrew Gregory told us the story of his journey to Afterimage, we were compelled both by his willingness to validate his own thesis via data — even if it meant doing things that don’t scale (like driving all over the North East to access paper medical records) — as well as his technical chops. But what really sealed the deal was his ‘stop-at-nothing’ ambition to prevent medical injustice in this country, even if it means transforming the status quo as we know it.

Problem: Millions affected by medical errors lack ways to understand incidents or seek justice, causing healthcare frustration.

Solution: Platform helping patients discover what happened and obtain justice when needed.

Team:

  • Andrew Gregory (CEO): Startup founder, AI scientist
  • Peter Anderson: Leading medical malpractice attorney, Morgan & Morgan DC founder

Target Customers: Patients and families feeling mistreated by healthcare providers.

JigsawML

jigsawml.com

AI-powered Cloud Management Platform

Betaworks POV: We’ve looked at a fair amount of developer tooling in the cloud management and dev-ops space. Jigsaw was the first company we talked to that took on the very hard challenge of a code-analysis approach to letting AI understand software architecture. Pracheer, drawing on his deep experience building AWS from its early days and, as a founder of Pinecone, building some of the fundamental technologies at the intersection of AI and data storage/retrieval, has been able to achieve impressive results.

Problem: Complex cloud management requires constant context-specific decisions despite available tools.

Solution: AI-driven visualization and management of enterprise architecture, addressing cost, security, and troubleshooting.

Team: Early AWS engineers and Pinecone founding team member with cloud/AI expertise.

Target Customers: Senior technology executives seeking improved team productivity and quality.

NetAssist

netassist.live/seniors

NetAssist makes digital life simpler for seniors and more effective for caregivers

Betaworks POV: The super simple problem NetAssist is targeting is one that is both highly relatable (at least to me and my fellow millennials) and also highly fitting for this moment in technology, as it could only be solved in this way at this moment in time. By reanimating the OG preferred interface for senior citizens (the phone) and architecting a system that can patiently teach, perform tasks itself, and coordinate among family, caregivers and businesses serving this demographic, NetAssist looks to bring the latest in application layer services to tens of millions who have been left behind by tech.

Problem: America’s seniors battle a perfect storm: fading digital confidence, growing isolation, and overwhelmed support networks. As one of our fastest growing demographics, they need more help than ever while their caregivers and local services buckle under pressure. Today’s fragmented digital landscape is a maze of confusion — devices, tools, and platforms that weren’t built with seniors in mind. The stakes are high: critical services go unused, connections wither, and an entire generation risks digital abandonment while their caretakers suffer from overload.

Solution: NetAssist bridges the digital divide for seniors through intuitive voice and text agents that simplify complex online experiences. Our technology captures users’ needs and seamlessly guides them to solutions, local services, and community connections — while keeping caregivers involved every step of the way.

Team:

NetAssist was born after its founder played family tech support one too many times. Seeing firsthand how difficult simple online tasks were for his mother — from navigating medical systems to exploring connections in her community — he realized millions of seniors and their caregivers face these same challenges daily. The team brings deep experience building and launching new technical products across diverse markets — from Microsoft’s developer tools to AI applications for international aquaculture operations and consumer marketplaces for developing markets. The team is fueled by empathy and armed with technical expertise to lead NetAssist’s mission.

Target Customers: Senior citizens struggling with technology and their caregivers seeking simplified digital solutions.

Graze Social

graze.social

The toolkit for building the future of social

Betaworks POV: We’ve known Devin as someone who has been working at the intersection of machine learning and social systems on the internet for years. In a very short period, while building Graze in the open, they have quickly become the most important technology built on top of both Bluesky and ATProto. Graze provides key technology and, importantly, infrastructure that embeds algorithmic intelligence into feed creation, reimagining how content is composed in a decentralized ecosystem for the benefit of all users.

Problem: Bluesky/ATProto is the best shot we’ve had in 20 years to fix social media, but it’s still in its infancy. Developers face a high learning curve, community leaders can’t make a living, and organizations lack audience control.

Solution: Custom feeds are the new social web foundation. We provide tools, platforms and the first Bluesky-native sponsored content marketplace to help build, grow and monetize custom feeds.

Team: Decade of experience building tools and understanding how platforms shape behavior. Experts in making complex interfaces intuitive and scaling social web products.

Target Customers:

  • Community stewards seeking control and sustainable income
  • Organizations wanting direct audience relationships
  • Developers needing ATProto tools and infrastructure
  • Creators exploring sponsored content opportunities

Learn more about the Betaworks Camp program here.

Applications for H2 2025 will open soon.



Apply to Betaworks’ AI Camp: App Layer for $500k in funding
Tue, 19 Nov 2024

TL;DR (but you should read the whole thing): Camp is an in-residence investment & mentorship program for startups building with frontier technologies. For this upcoming cohort — based on recent developments in tech, interfaces, and the market — we’ve refined our previous thesis around AI Applications and are doubling down. We’re seeking teams that are harnessing rapidly improving foundation layer capabilities with a custom architecture; building products that deliver value quickly to a well-defined user profile; with a special interest in applications that touch the workflows and productivity of individuals, SMBs, and antiquated vertical industries. If that describes you, apply here.

Evolution of a Theme

Compared to previous shifts in technology, AI builders have been spoiled as of late. Spoiled with speed. The speed of progress in the last several years has brought us model after model, each better than the last, and new and powerful techniques to harness them. Massively capitalized and talent-rich foundation model companies are knife fighting, pushing each other to keep leaping forward.

This has created the marketplace dynamics that we have been expecting and hoping for, where value is rapidly being pushed up the stack to the application layer (the fun, exciting layer where everyone gets to do new magical things). That’s why we just invested in nine companies (at $500k each) building native AI at the application layer. You can check out those investments here.

And in 2025, we’re doing it again.

In the process of running AI Camp: Native Applications, and seeing so many incredible companies in the space, we have watched the increasing capabilities of the AI ecosystem’s foundational technologies drive the creation of radically new products. We believe the technology will continue to evolve and that the opportunity is large enough to stand up another camp focused on AI at the application layer, with a refinement of our thesis.

Our first camp was built on the belief that the ecosystem writ large — a competitive, and fast-advancing foundation layer, and an evolving middleware/tooling layer — would pave the way for a flourishing and innovative application layer of products and services.

What we learned in the last six months is that there is unprecedented capacity in the market for new AI enabled products and services, especially given the advancement in reasoning capabilities (System 2 thinking) of the models, increasing context windows, and growing demand for AI (and agentic) software. We’ve gotten an even better sense for what has the greatest chance of venture-scale success and want to double down on our thesis.

Our Observations, and Opinions

Technology Changes

During the last camp (fall 2024), OpenAI had just released the o1 model, aka Strawberry. It was close to what we expected, yet it set an interesting and high bar that merited some recalibration of prior thinking. Also, based on historical precedent, it’s not unrealistic to believe that the other major players are cooking up and shipping similar tech, and that the open side of the AI ecosystem will get close to parity in a short amount of time. So what we are scrambling for API access to and experimenting with now will become ubiquitous in the near future.

So why do we care (beyond jealousy) that it’s now better at math than us?

This is one of the best examples of System 2 thinking that we’ve seen from an LLM system to date.

Without these newer system 2-like structures built around them, a vanilla model is just a pile of weights, purely reactive without the capacity to be reflective, making it much less usable or valuable in work that requires multi-step instructions (such as moderately complex arithmetic), conditional logic and iterative loops, sustained reasoning, deductive reasoning, or contextual continuity.

To anthropomorphize a bit, we’re seeing the ability to be reflective, the way a human would be in determining whether to make an investment or hire a candidate, pulling multiple bits of context together, in the right order, and analyzing each bit individually and as an interconnected whole of bits. And this increased our conviction around the application layer.

Vastly improving techniques like chain of thought (CoT) prompting and self-consistency sampling are a big part of the reason for this. It amounts to something of a decisive shift in the way we increase the aggregate intelligence capabilities of AI systems. The industry continues to get better and better at RL-ing models, making architecture improvements, and of course throwing more tokens and parameters at the problem. But what has been shown perhaps definitively in recent months is that throwing test time compute (including the growing ability to utilize raw context at increasingly mind-boggling length) at the problem creates another axis of techniques (and cost, and investment) to push forward on, and results in a force multiplier for everything that we care about.
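Self-consistency sampling, mentioned above, is simple to sketch: sample several chain-of-thought completions at nonzero temperature and keep the most common final answer. The code below is a hedged illustration; `sample_answer` stands in for a real LLM call, and `scripted_model` is a deterministic toy.

```python
# Sketch of self-consistency sampling: majority vote over several
# independently sampled answers to the same question.
from collections import Counter
from typing import Callable

def self_consistent_answer(
    sample_answer: Callable[[str], str],
    question: str,
    n_samples: int = 5,
) -> str:
    """Return the most common answer across n_samples independent samples."""
    votes = Counter(sample_answer(question) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

# Deterministic toy "model": yields a scripted sequence of sampled answers.
_samples = iter(["12", "13", "12", "12", "11"])

def scripted_model(question: str) -> str:
    return next(_samples)

print(self_consistent_answer(scripted_model, "What is 3 * 4?"))  # -> 12
```

This is test-time compute in miniature: spending more samples per question buys reliability without touching the model's weights.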

The foundation of our camp program is always an emergent technology that unlocks potential for new venture-scale companies. Reasoning — system 2 thinking — is a massive unlock.

Interface/Positioning Changes

One of our core learnings during the last camp was that AI can power software that overflows the boundaries of a tool. When something is dynamically responding to a user with personalized outputs, and evolving its presentation and adapting its responses over the course of many interactions, that is no longer a tool, but a role. And as humans, we can naturally begin to think of that software as an entity with agency, whether it’s a virtual companion who is asking to be treated as such (Ursula), a design tool that actually writes the corresponding code and ships a pull request (Dessn), a note-taking AI app that feels like a college-educated assistant taking notes based on your agenda/goals (Granola), or a camera that knows how to compose photos and direct subjects (Alice Camera). We’re interested in the consequences of this shift for both how products are designed, but also how they are distributed and sold.

Value creation at the application layer, however, is reliant on innovation at the interface level. Really specific HCI design and choices at the user experience level drive so much of what is distinctive about the best products in this area. This was a core component of the last camp, and we are seeing some dramatic shifts in the core modalities — four of the nine companies incorporated voice/audio as a main input, and not one relied primarily on text-based chat — and we expect the sensory aperture of these products to widen.

We believe that voice will continue to have its moment — driven by the rise of NotebookLM, OAI’s Advanced Voice Mode, and reports that Eleven Labs has tripled its revenue in the last year. Sometimes it takes longer than you’d think for the products to catch up with the full capabilities of technology like this, and we’d guess that we have yet to see the products that best take advantage of multimodal and end-to-end audio systems. (We’re personally excited about companies from our last camp, Beebot and Ursula.)

Not all companies in the next camp will be voice-powered. However, we’re excited by the concept of more human-like interfaces combined with agentic software that displays cognition, autonomy, persistence, and memory. This allows for some virtual embodiment of this intelligence, perhaps opening up a broader category of artificial life.

Market Changes

More recently, as we see indications of an asymptote, the open side of the ecosystem has gotten closer to parity with the closed models. Competition is white hot and multidimensional, not just within the closed source models but across the whole of the ecosystem, and creates a peace dividend for incredible technologists at the application layer.

While we were early to this thesis, we’re excited to see that we aren’t alone.

“The most interesting layer for venture capital. ~20 application layer companies with $1Bn+ in revenue were created during the cloud transition, another ~20 were created during the mobile transition, and we suspect the same will be true here,” wrote Sequoia’s Sonya Huang and Pat Grady in October.

The market has also produced a Cambrian explosion of tooling and platforms built on top of foundation layer tech. There is a bewildering array of model routers, vector stores, and tooling and libraries at every layer of abstraction; infrastructure if you want to roll your own; or three or four different varieties of serverless. There are a dozen ways to run evals, and companies that will help you fine-tune and run adapters. It can be confusing at times, but there’s little you can think of that isn’t in the supermarket. When a market borders on oversupply, it’s a great time to be on the demand side and build something on top of it.

Who Should Apply?

AI Camp: App Layer will be foundation-model agnostic, but we will seek teams that are actively selecting and adapting to foundation layer capabilities, with custom architectures that leverage the foundation models to beat the performance of naive prompting.

Given that the latest models can incorporate large amounts of context into their reasoning, work through complex multi-step processes, and even manipulate existing software and websites (and robots?), we then look at the spaces where there is the most context, and the most complex processes (with the most at stake), and the most desire to turn a tool into a role.

If a single human accounts for X amount of data/context, and Y number of multi-step processes, then what does that say about a business? Dozens, hundreds and sometimes thousands of individuals, troves upon troves of data/context, scores of multi-step, interconnected processes all laddering up to one or two north-star goals that themselves evolve over time.

That’s why we’re incredibly excited to continue executing our thesis around AI at the application layer with a focus on the future of work and productivity for individuals, SMBs, and antiquated vertical industries.

We’ve spent a bunch of time harping on what we think are the best things about the tech. But in a lot of ways the things we are most focused on now are “back to basics” in terms of what we think will win.

Attributes we are looking for:

—Products that focus on evident value

  • What the thing does, and why it is new, should be obvious, specific, and provable within the Camp program
  • Founders should be deeply familiar with the problems faced by their customers

— Products that deliver value quickly and are aligned with well-defined users

  • Customers: SMBs, users and prosumers (Bottoms-up/PLG), even some interesting enterprise stuff is cool

— Careful, deliberate thinking about what interfaces best implement the reasoning capabilities in ways that serve users best

  • Have you figured out a new interface or form factor that makes the connection between users and machines better, faster, more fun?
  • We’re really interested in intelligent systems that utilize real time adaptive i/o: end-to-end voice systems, fast copilots, proactive/persistent interfaces

— Positioning that allows for focused data collection and application

  • Even better if you have a theory about how you are going to build a flywheel / moat out of it

— Novel applications of AI that reflect a deep understanding of verticals and the pain points of users currently unaddressed by SaaS

  • Founders with the right combination of technical expertise and domain expertise/lived experience

— Companies that leverage agentic systems coupled with new business models to explore new unlocks with AI

  • Can AI be sold as a managed service, as employees, as personas, as the output of work it produces?

If you are building something in this space, we’d love to hear from you. Apply here.

How Camp Works

Camp will run from late February through May 2025.

Camp consists of 13 weeks in-residence at Betaworks where early stage companies receive support with product development, platform strategy, data science, storytelling, and fundraising from the Betaworks team, dedicated mentors, and our network of industry leaders. Participants get access to the Betaworks shared workspace, located in NYC’s vibrant Meatpacking district, for the duration of the program.

In-person participation is required for the majority of the program — the first and final two weeks are mandatory. Programming includes optional sessions with guests (1–2 per week) and a weekly required all-hands standup. Sessions include speed dating with investors, visits from industry leaders, workshops, founder stories, and live demos. Camp culminates in Demo Day, where each team presents to a room full of investors. See past Demo Days here.

Each participating company will receive a guaranteed investment of up to $500k. Betaworks Ventures will invest up to $250k on an uncapped SAFE note with a 25% discount, and receive a 5% common stock stake in the business. Our syndicate partners will be adding up to $250k total on uncapped SAFEs with the same 25% discount. To summarize, participating companies will receive an uncapped SAFE note with a 25% discount from Betaworks + our syndicate partners, and Betaworks will receive 5% of the company’s common stock. More details here.
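To make the SAFE mechanics above concrete, here is a minimal sketch of how an uncapped SAFE with a 25% discount converts at the next priced round. The numbers are purely illustrative assumptions, not program terms; actual conversion follows the SAFE documents.

```python
# Hedged sketch: uncapped SAFE conversion with a 25% discount.
# All figures below are hypothetical examples, not Betaworks terms.

def safe_conversion(investment: float, round_price_per_share: float,
                    discount: float = 0.25) -> dict:
    """Return the discounted conversion price and resulting share count."""
    # The SAFE converts at the next round's price, reduced by the discount.
    conversion_price = round_price_per_share * (1 - discount)
    shares = investment / conversion_price
    return {"conversion_price": conversion_price, "shares": shares}

# Example: $500k on a SAFE, next round priced at a hypothetical $1.00/share.
result = safe_conversion(500_000, 1.00)
print(result["conversion_price"])  # 0.75
print(round(result["shares"]))     # 666667
```

Because the SAFE is uncapped, only the discount affects the conversion price; with a cap, the investor would convert at the better of the discounted price or the cap-implied price.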

Betaworks has been investing at the forefront of ML/AI since the launch of its first fund in 2016, with portfolio companies that include HuggingFace, Nomic, Flower, Granola, and more. To learn more about Betaworks, we recommend visiting our community space in NYC during one of our regular public events. You can sign up here to stay in the know: beta.works/bytes


Apply to Betaworks’ AI Camp: App Layer for $500k in funding was originally published in Betaworks on Medium, where people are continuing the conversation by highlighting and responding to this story.

]]>
<![CDATA[Meet the Nine Native AI Startups Launching at Betaworks Camp Demo Day]]> https://render.betaworks.com/meet-the-nine-native-ai-startups-launching-at-betaworks-camp-demo-day-866b149de87e?source=rss----1baed5266331---4 https://medium.com/p/866b149de87e Wed, 30 Oct 2024 15:53:46 GMT 2024-10-30T15:53:46.654Z

We’re excited to reveal the full cohort of teams from our latest AI Camp: Native Applications.

These nine teams are building at the application layer, unlocking new, native user experiences only made possible with AI. Read more about how we picked this Camp theme here.

We tend to be drawn to founders who bring a deep understanding of a problem based on direct, personal experience and who can exert their personal taste when coming up with the solution. The founders in this cohort have all lived the problem they are tackling.

We asked each team the same set of questions, about the problem they are solving, why they are the team to solve it, their target customers, and their business model. Read on to learn more about each one!

Alice Camera

Alice Camera is an AI-native camera built to make everyone a professional creator.

Founding team: Vishal Kumar, Vikas Kumar

The Betaworks POV

A big part of our thinking around this camp was that there would be some blend of copilots and agentic AI living at the application layer, but Alice Camera represented the first concrete example of active intelligence transforming what once was a tool to be wielded by humans into an active participant alongside humans. We were thrilled by Vishal’s vision to allow a camera to actually become a photographer.

What is the problem you are trying to solve?

High-quality photo and video content creation is more necessary than ever before, for both creators and brands. The problem is that professional cameras are expensive, outdated, and complex. And capturing footage is only the first part of the battle; the real challenge comes after. Content creation has many fragmented steps — from transferring files, to colour grading, to video editing and visual effects — that can be really time consuming and expensive.

How are you solving the problem?

Alice Camera is an AI-native Micro Four Thirds mirrorless camera that attaches to the user’s phone. It allows everyone to capture professional quality content and has an easy to use interface via your smartphone’s screen. And, it’s built for the AI era — it’s the only professional camera on the market with a powerful Qualcomm Snapdragon chip and a TPU from Google.

These chips allow us to run AI algorithms on-device to automate complex camera functionality. We’re also working on active intelligence for Alice Camera, controlled entirely by your voice. It handles those tedious post-production tasks directly within the camera itself, helping streamline workflows like never before. Our localised camera assistant uses an LLM to do essential content creation workflows; we envision Alice Camera becoming an intelligent and active participant in the content creation process.

We’re not just building a camera that attaches to your phone — we’re bundling essential content creation hardware and software into one seamless package. Alice Camera will be a content studio in your hands.

Why are you the right team to solve it?

We’re a team that understands this market. Vishal, CEO, worked as a data scientist at Sotheby’s but built a side-hustle as a creator with over 30,000 followers. Our CTO Liam is a PhD second-time founder who’s built consumer electronics for creatives and previously studied at Oxford University. Vik, our COO, previously worked at JP Morgan. Ollie previously built a camera at the age of 17 and also studied at Oxford University. And, Maiya is a creator with over 100,000 followers on her personal accounts.

Who are your target customers?

Primary target customers are the 200 million creators and businesses (owners and market executives) who want to create high-quality content for their personal brands or on behalf of others.

What is your business model?

The Alice Camera hardware will cost our users $900, and to get access to our advanced editing software, we plan to charge a monthly subscription fee when we launch it in 2025.

ESAI

ESAI empowers Gen Z to craft their personal narrative for college apps and beyond.

Founding team: Julia Dixon

The Betaworks POV

In a world where hard skills will be democratized via AI and optimized workflows, the real mark of ‘talent’ will be in a person’s soft skills. What is the ‘X’ factor they bring to the job around their creativity, their judgment and decision making, and their taste? Julia comes from a career in college advising and counseling and understands not only how to democratize access to these services via AI, but how to prepare Gen Z for the evolving future ahead of them and help them tell their story and build their personal brand.

What is the problem you are trying to solve?

Over half of today’s job skills will be automated within a decade. As soft skills become the core differentiator, Gen Z’s ability to build and communicate their personal brand becomes crucial for future success. Yet nearly 70% of Gen Z students struggle with the first major test of this skill: crafting a compelling personal narrative for college applications. With public school counselors serving an average of 400 students each and private college advisors charging upwards of $300 per hour, a major inequality has emerged at precisely the moment when students first need to articulate their unique value proposition. Without accessible tools to help students uncover and showcase their authentic story, this unequal playing field threatens to follow Gen Z from college admissions into their careers.

How are you solving the problem?

ESAI democratizes personal narrative building with ethical, AI-powered tools that help students spark, sculpt, and showcase their authentic story. Starting with college applications, the platform automates the expensive advising experience by helping students uncover unlikely connections across their experiences, transform raw materials into compelling narratives, and adapt their story for different opportunities. This builds the storytelling muscles students will need throughout their careers while making professional-grade guidance accessible to everyone at a fraction of the traditional cost.

Why are you the right team to solve it?

As a former college advisor, Julia saw a major inequality emerging as only the wealthiest families could afford resources for their students to stand out and get into the most competitive colleges and universities. She created ESAI to help level the playing field, so students of all backgrounds and resources could have a fair shot at building a story for their future. Leveraging Julia’s background in marketing and Gen Z community-building, ESAI went viral to over 20 million students, helping over 250,000 in their admissions journey over the last year.

Who are your target customers?

Gen Z, starting with undergraduate applicants in the US and international high school students applying to US colleges and universities.

What is your business model?

ESAI is a B2C, freemium subscription model. We work directly with students and families so we can grow with our users throughout their early career journey. Using a social feedback loop, ESAI creates customized sharable assets for each user with the goal of being the place students build and share their story over time.

Additionally, ESAI has some key distribution partners, like the national nonprofit American Student Assistance.

Autoplay

Autoplay: AI that understands user intent and UI to power AI agents that help users navigate and master software in real-time.

Founding team: Sam Nesbitt, Marie Gouilliard

The Betaworks POV

It’s not often that you come across a team that spikes in opposite directions with such a clearly unified vision for the company, but we couldn’t resist that lethal combination when we met the Autoplay team. Marie incorporates a handful of bleeding-edge technical approaches — blended data inputs, inverted RLHF, and the latest research around agents from the gaming world — while Sam continues to get the highest cold DM open rates on LinkedIn that I’ve ever seen. We were completely unsurprised when their pre-seed round came together in a matter of a couple of weeks.

What is the problem you are trying to solve?

Autoplay is solving the problem of product adoption and user engagement for software. The core issue is that users often don’t know what they don’t know, meaning they struggle to fully utilize the software because they are unaware of key features or how best to use them. This leads to poor user engagement, and ultimately churn.

How are you solving the problem?

The product leverages self-driving technology to learn the software and integrates with session replay databases to understand user intent. The AI predicts user goals at both individual and enterprise levels, guiding users through the software in real time, showing only relevant information based on their needs and offering insights into how others in the organization use the same tools.

Who are your target customers?

B2B SaaS companies

What is your business model?

Usage model — based on the amount of input data (session replays) to train the AI for each software.

Ursula

Ursula is an artificial life lab.

Founding team: Pedro Lucca Denardi Passarelli, Jonathan Celio

The Betaworks POV

We all grew up with stories and became attached to the characters in those stories, whether it was comic strips or Disney movies or video games. But once we reach a certain age, a part of our brain knows that those characters are fiction, no matter how emotionally attached we are to them. When we met Pedro and were introduced to Ursula, it was the first time we were not only tricked into believing that this character was ‘alive’, but we were convinced by the genuine belief of the founder that he could create artificial life.

What is the problem you are trying to solve?

Creating lifelike artificially alive characters has been treated purely as a computer science problem. We believe that character development in any medium is fundamentally an artistic process. We believe you need to combine art and technology to create life.

How are you solving the problem?

Our proprietary cognitive architecture is a mix of agentic LLM behavior and The Sims-inspired symbolic AI. These creatures have emotions, experience needs, form memories, and exhibit unique behavioral patterns, while having the ability to interact with the world around them through their animated bodies.

Why are you the right team to solve it?

We are an Emmy Award-winning team of creative technologists who have worked at some of the world’s best gaming companies, generative media startups, and animation studios.

Who are your target customers?

Our first creature is Ursula, a companion for kids, but we intend to launch different characters for different audiences over the course of the company’s life.

What is your business model?

Monthly subscription for some of the apps, but we fully intend to monetize our IP down the line (merch, licensing of characters and tech).

Dessn

Dessn enables product designers to contribute to product building, without coding.

Founding team: Gabriella Hachem, Nim Cheema

The Betaworks POV

It’s absolutely wild that more than a decade after the design revolution, with companies like Figma seemingly taking over the world, there are still massive pain points around the hand-off between designers and developers. The Dessn team excited us because they are capitalizing on the latest in AI to deliver a value proposition to designers and developers that is highly complex on the inside and dead simple on the outside. Often when you integrate AI into a system, you allow for probabilistic results rather than the rigidity of deterministic results, and yet the Dessn team has found a way to give ultra-fine-tuned controls to designers — which is obviously a requirement given designers are some of the most intense control freaks on the planet — while still leveraging the speed and power of AI to make their lives easier.

What is the problem you are trying to solve?

Only developers can contribute to product building right now, creating a huge bottleneck/dependency. This results in 1) a slow and painful back & forth/feedback loop, 2) Tradeoffs on product quality due to “limited resources”, and 3) Different sources of truth between devs & designers/PMs.

Current design tools live very far away from production and have no limitations/constraints that code actually has. The handoff process that exists between the product functions results in tons of info being lost in translation, and therefore a lower quality product.

How are you solving the problem?

Our product is a Chrome extension that overlays on top of your live app (staging or prod) and enables you to make UI changes. Our AI takes on the “burden” of writing the code and pushing it straight to the codebase. Devs just have to approve the code to get it live.

We’re building the translation layer between designer and code, and basically turning any designer into a design engineer.

Why are you the right team to solve it?

Gabriella and Nim have been working on product teams together for the last 7 years. As Head of Product, Gab was constantly frustrated at not being able to directly execute on the decisions she was making. Things were getting lost in translation during the handoff process, resulting in lower quality products built through a slow and painful process. Nim has always lived at the intersection of design, product, and engineering and has always wanted to find a way to enable more people to be there as well. As Head of AI at Planned, he dove deeply into the world of LLMs and realized that this was the technology that could finally unlock what both of them (and the market) have been waiting for.

Who are your target customers?

Target customers are SaaS companies around Series A-B.

What is your business model?

$99/month for automated component mapping, unlimited users and changes.

Sarama

Sarama has built the first system that uses dog vocals to give the deepest insights about their health, behavior, and preferences.

Founding team: Praful Mathur, Shathabi Ravindra

The Betaworks POV

AI is making parts of the world legible that have never been legible before. When we launched this Camp session, we knew that to be true, and we became fascinated with non-language-based transformer models… physics engines, weather models, etc. But we never would have expected that our dogs’ barks were going to be part of that equation. The team at Sarama has the right blend of technical and AI expertise, GTM, and passion for this space to deliver a collar that can eventually translate dog vocalizations into human language and we can’t wait to figure out what our dogs are actually trying to tell us.

What is the problem you are trying to solve?

We’re using our ML to co-develop a language between dogs & people to establish interspecies conversation.

How are you solving the problem?

Our smart collar uses continuous passive audio monitoring to encourage interaction between dogs and their owners, enhancing model training and vocalization translation. Paired with our app, it delivers personalized insights into health, behavioral triggers, and routine anomalies, allowing dog parents a deeper understanding of how their dog is doing, both emotionally and physically.

Feature list:

  • Sleek, lightweight, minimal collar with a microphone, temperature and humidity sensors, heart rate monitoring, GPS, an LED for ambient emotion display, an ML chip, and Bluetooth/WiFi
  • Privacy focused — filters out human speech, keeping only dog barks and important environmental triggers. Plus, some of the ML runs on the device, so the data stays private and secure
  • The collar doesn’t have a D-ring so it’ll be purely functional for monitoring

Why are you the right team to solve it?

Both Peter & Praful have focused on animal communication for years in a research capacity. Peter’s work directly contributed to the funding of Earth Species Project & the establishment of Project CETI. Additionally, the founding team has been deeply involved in founding multiple startups & scaling established multi-billion dollar companies. Our singular goal for 2025 is to maximize adoption of our product to bring in data to improve our models.

Bios:

  • Praful is a 4x founder & early-stage investor who’s worked across logistics and deep tech. He was involved in passing a mandate in Boston around taxi regulation, passing Senate Bill S-734, and organizing the first Uplift Series of talks in SF last year.
  • Abi is a full-stack, technical growth marketer and startup advisor with over a decade of experience in accelerating revenue growth at consumer tech and SaaS companies.
  • Peter is one of the leading experts in machine learning with a focus on animal behavior and bioacoustics processing.

Who are your target customers?

  • Dog parents who dote over their dogs, especially those with additional needs, e.g. disabled dogs, dogs with mobility issues, dogs with separation anxiety, and adult dogs that are starting to show early signs of aging or have continuous health issues.
  • The people we love to interact with buy clothes, premium food, and toys & treats that leave their other friends questioning their sanity, and have an obsession with providing the best care for their dogs.

What is your business model?

Subscription based model. We charge a $35 subscription with a 12 month commitment upfront.

Tato

Tato simplifies complex IT projects with auto-documentation & AI-powered insights

Founding team: Justin Delisle, Vlad Lokshin, Mathieu Chretien

The Betaworks POV

One of the things that these LLMs are very good at is making sense out of high volumes of unstructured data. When we got to know the Tato team, and learned that Justin is a Microsoft distinguished wizard ninja of enterprise architecture (™), we were able to go deeper than we ever have around a seemingly unsexy space: enterprise IT and ERP consulting projects. But the more we investigated, the more we realized that the sheer volume of unstructured data generated by this industry is the perfect fit for a highly customized, AI-powered tool. And on top of that, the Tato team, with vast experience living the problem, was the perfect team to deliver that tool.

What is the problem you are trying to solve?

Complex IT projects fail because people can’t keep up with everything and everyone on these hundred person transformations.

How are you solving the problem?

Tato is added to project interactions like meetings, emails, documents, and project management apps. It auto documents the project and gets the right insights to the right people at the right time.

Why are you the right team to solve it?

  • Justin Delisle, CEO, software engineer with a decade of experience implementing ERP projects and acting as practice leader at a consulting firm.
  • Vlad Lokshin, head of product, has built multiple products 0 to 1 with many millions in revenue.
  • Mathieu Chretien, head of GTM, took US market operations from $0 to $200M at a previous startup.

Who are your target customers?

Consulting firms who work on complex IT projects

What is your business model?

B2B SaaS Subscription.

Hopscotch Labs

Hopscotch Labs is building a City Guide for Airpods. It’s called BeeBot.

Founding team: Dennis Crowley, Max Sklar, AJ Fragoso

The Betaworks POV

Dennis Crowley has been working for 20 years on software that gets people to look up from their devices and experience the world and people around them IRL — Software for the Streets. At each new phase change of technology, from SMS to smartphones, he has capitalized on the latest unlocks toward high-utility and delightful features for end users. With BeeBot, he’s doing the same by leveraging the proliferation of Airpods and the rise of AI and we’re excited to be along for the ride once again.

What is the problem you are trying to solve?

We are continuing the “software that makes cities easier to use” mission from both Dodgeball and Foursquare. We want to make people more aware of their surroundings and more connected to their neighborhoods/cities.

How are you solving the problem?

BeeBot is an example of “an app you don’t have to use”. Once you install the app on your iPhone, BeeBot “turns on” whenever you put Airpods / headphones on. Once it’s on, it’ll occasionally augment your walk with info about what’s nearby — places, people, events, etc. It’s not a walking tour (as it’s not telling you where to go), but more of a “walking assistant”. It’s designed to be proactive and lightweight. You may only hear a few messages per day, and the messages are designed to be short and non-obtrusive (think: two sentences).

Who are your target customers?

Anyone who walks around with Airpods. :)

We’re building for people who live in cities, biased towards locals instead of tourists.

What is your business model?

TBD for now, but most likely subscription ($x/month), plus relationships with local merchants and content providers.

Unternet

Unternet is building Operator, a new, intelligent client for the web.

Founding team: Rupert Manfredi

The Betaworks POV:

A core focus in this camp is innovation around interface, and how to bring agentic, AI-powered touchpoints to average consumers. If the main surface area for most users is the internet, we had a strong suspicion that it would be a browser, but we hadn’t seen a distinct vision attuned to the tenets of the web until we met Rupert. He had been tinkering, sketching, and strategizing his vision for a year while working full-time at Adept and was ready to take the plunge. He’s fully convinced us that the personal computing revolution hasn’t yet begun.

What is the problem you are trying to solve?

All our software today basically exists in “dumb rectangles”, whether that’s windows in your OS or tabs in a browser. These windows are oblivious to what’s going on inside them and what actions apps can take, or who you are and what you’re trying to accomplish.

Why is this a problem? Because any task on our computers involves numerous steps and requires lots of context, but our software environment doesn’t track any of it. We end up manually browsing web pages, searching for menu items, re-entering information, logging on to multiple services (and scattering our personal information across the web in the process). Your computer can’t see the big picture, so you’re stuck managing all the little details.

(A simple example: comparison shopping means manually juggling tabs for different sites, tracking prices and reviews, while cross-referencing your budget spreadsheet — all tasks your computer could help with, but doesn’t.)

AI has the potential to solve this. But while we have the raw models, we’re missing the software building blocks to bring this vision to life. And we need open standards anyone can build on, just like the web — so this becomes ubiquitous and accessible for all, beyond any single company’s reach.

How are you solving the problem?

Like all good stories, our secret plan comes in three parts:

  1. Building a new form of web application — “web applets” — that can be understood & used by AI, while preserving your privacy and data ownership. An extension of the web, and an open standard anyone can build upon and contribute to.
  2. Building an intelligent client for the web that translates user intent into actionable steps within these applets.
  3. Establishing an ecosystem of services to make it easy to build and distribute this software.

Why are you the right team to solve it?

Rupert Manfredi has been working on the fundamental problem of how humans will interact with AI systems for over 6 years (long before it was cool). His diverse experience spans collaborating with ML research teams at Google, innovating on generative UI & browser technologies at Mozilla, and developing workflow augmentation tools using large, multimodal action models at Adept alongside researchers from OpenAI, DeepMind, and Google Brain.

Who are your target customers?

Now: early adopters, and developers interested in building with web applets.

In the future: everyone who uses a web browser to do things.

What is your business model?

There will be a set of services that we can provide, which — while not mandatory — will be a great default for most users and developers building on this platform. In particular:

  • Hosted, privacy-preserving model & sync
  • High-quality information API, for getting better answers than regular web search
  • Third party payments, both for developers and web publishers (who need to be paid for their work!)

Meet the Nine Native AI Startups Launching at Betaworks Camp Demo Day was originally published in Betaworks on Medium, where people are continuing the conversation by highlighting and responding to this story.

]]>
<![CDATA[Deep Dive: Native AI User Experiences]]> https://render.betaworks.com/deep-dive-native-ai-user-experiences-8cb47216c847?source=rss----1baed5266331---4 https://medium.com/p/8cb47216c847 Wed, 28 Aug 2024 15:43:48 GMT 2024-08-28T15:43:48.891Z For the most recent Betaworks Deep Dive (hosted quarterly for our LPs), we focused on AI-native interfaces — this is also the theme of our current Camp.

We talked about our thoughts on soft vs. hard interfaces, permanent vs. disposable interfaces, and self generating software. Check out a selection of clips from our discussion:

Why Do Screens Persist as Portals Into Digital Experiences?

Are Hallucinations Concerning in the Development of New AI Interfaces?

How Are Developers Innovating on the Interface of Native AI Products?

How Will Interfaces Harness All Five Senses?

  • Learn more about some of the companies that we think are innovating with native AI experiences: Granola, Opponent, and Skej.

Deep Dive: Native AI User Experiences was originally published in Betaworks on Medium, where people are continuing the conversation by highlighting and responding to this story.

]]>
<![CDATA[Announcing AI Camp: Native Applications]]> https://render.betaworks.com/announcing-ai-camp-native-applications-e1358061c601?source=rss----1baed5266331---4 https://medium.com/p/e1358061c601 Tue, 28 May 2024 13:35:01 GMT 2024-05-28T21:52:20.173Z

We’re seeking early stage teams who are building at the application layer unlocking new, native user experiences only made possible with AI. Apply here.

We exist in an era of an unusual dissonance. The explosion of compute applied to scaling the capabilities of models that approximate cognition yields new discoveries every day. But it feels fundamentally unsatisfying because so much of those capabilities remain locked away in the weights of models, even as they are proven by research. Our intuition is that the corresponding innovation in the space of applications lags behind the power of the science, and that there are incredible native experiences and companies that can be built and tremendous value for humanity created in filling that gap.

Put differently: making things like this is solved (the original post illustrates the point with two image examples), but we aren’t yet living in the future that we imagine.

In our brainstorming around this theme, we ended up with quite a few scribbles to depict the gap between the underlying technology capabilities, the infrastructure, and the applications we actually want to use. Here’s a look at a slightly polished version of that visualization.

The dissatisfaction described above hits us in different ways on a daily basis. We’ve invested in companies that train models. We’ve invested in companies that make the infrastructure for making better models and improving the outputs of AI systems. And yet, when end users like ourselves are faced with a task or project we are certain could be done faster, better, smarter, by harnessing the power of AI, the ready-made tools we have available to us in that moment are surprisingly limited.

At Betaworks, we see AI as a phase change in technology akin to the birth of the internet. And, just like the internet, the services and products built on top of it can form wave-like patterns — replicating existing workflows, augmenting those workflows, and ultimately paving the way for truly native experiences.

Think back to the original apps on the App Store. They were lighters, bells, flashlights. Nothing of distinctive value, but a clear exploration of the native mobile experience that could leverage things like gyroscopes, accelerometers, touch screens, and other elements of the hardware. Services like Stripe, Unity, New Relic/Mixpanel, and Twilio emerged as important tools/infrastructure within that landscape. The convergence of this type of advanced mobile hardware at scale with the maturation of the infrastructure layer around apps led to brand new, native experiences for consumers. Eventually, we enjoyed high-utility apps like Uber, Instagram, and Dots — apps that couldn’t have existed without the multi-touch interface on the computers we carry around in our pockets.

Now put this in the context of AI.

How frequently do you use native AI?

We’ve been talking to people about this for a while now, and it isn’t 100% easy to explain, so maybe it’s best to share a super simple example:

Take restaurant reservations (thanks to Extensible for this example). Before LLMs were a glint in anyone’s eye, you would see a restaurant, go to OpenTable, and make a reservation.

But now you have Claude and GPT-4! So you teach your LLM how to use the OpenTable API, throw up a chat widget, and tell your app “make me a reservation at Roberta’s.” You write an incredibly awesome system prompt. Congratulations: you may close Xcode; you just made the 2024 version of iFart.

So what would the AI-native version of this app be like? This is what we are trying to find out, because to be honest, we’re not completely sure, in part because AI capabilities imply so many different things. But to get the juices (and applications) flowing, here are some ideas of the kinds of things that can take you from GPT API wrapper to something we’d be more interested in working with you on:

  • The AI system is resilient to the fact that there are multiple Roberta’s locations, and queries the user.
  • Rather than typing in the name of the restaurant, the system accepts a picture that you send it via SMS.
  • The system uses tools, like reading the JPEG metadata to determine which location you are trying to make a reservation at.
  • The system has ample context (it knows when you like to eat, when you have other reservations, and with whom) and kicks off several ancillary workflows in the background, such as making calendar entries and inviting friends.
  • When the system doesn’t find a reservation at the time it wants to make it, it makes a phone call to the restaurant to see if there are unlisted tables.
  • You never even needed to ask it to make the reservation: the system knows tables are hard to get, noticed you were going at a pretty regular time, held the table for you, and sent you an email to confirm whether you actually wanted it.
  • The system has an understanding of what you like about Roberta’s and makes a different reservation based on a high dimensional mapping of pizza concepts when it can’t get the reservation you want.
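The first bullet above, resilience to ambiguity, can be sketched in a few lines. Everything in this toy illustration (the location data, the `ask_user` callback, the function names) is made up for the sake of the example, not any real product’s code:

```python
# Toy sketch: an agent that notices ambiguity (multiple Roberta's locations)
# and queries the user instead of guessing. All data below is illustrative.

LOCATIONS = {
    "Roberta's": ["Bushwick, Brooklyn", "Domino Park, Brooklyn"],
}

def resolve_restaurant(name: str, ask_user) -> str:
    """Return a single location, asking the user when the name is ambiguous."""
    matches = LOCATIONS.get(name, [])
    if len(matches) == 1:
        return matches[0]
    if len(matches) > 1:
        # Ambiguity detected: hand the decision back to the user.
        return ask_user(f"{name} has several locations: {matches}. Which one?")
    raise ValueError(f"No known location for {name!r}")

# Example: a 'user' who always answers with the first option.
print(resolve_restaurant("Roberta's", ask_user=lambda q: LOCATIONS["Roberta's"][0]))
# Bushwick, Brooklyn
```

The point is the control flow, not the lookup table: a naive wrapper would pick a location arbitrarily, while an agent worth the name surfaces the ambiguity.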

We probably wouldn’t do something in restaurant reservations. These are just some ideas. But really, we want you to tell us about what you’re doing at the edge of what is possible with AI.

In the past year+, millions upon millions of dollars have been invested in the picks and shovels of the AI ecosystem. We’ve invested in some of those picks and shovels ourselves. This camp is for people who are using those picks and shovels, plus their own secret sauce, to make highly valuable, engaging products for end users.

Which is why we’re seeking up to 12 startups building at the application layer of AI to deliver net new experiences for end users and businesses.

To be clear, we don’t necessarily mean mobile apps. We’re focused on the application layer, where users can actually interact with, and get value from, the technology. This can take the form of consumer-facing products, software targeted at prosumers, or even business-oriented applications with a bottom-up sales motion. The interface for these things can be anything from voice to chat to GUI, on web or mobile or even spatial computing.

Here are some key attributes we’re looking for:

Native Interface/UX — A unique POV on interface is important. The most obvious interface for AI native applications is natural language/conversation, either via text or voice, but there are likely innovations not yet imagined at the GUI level, not to mention agentic interfaces that may arise within other existing workflows/applications or be altogether disposable/invisible. For example, Dragon (from Opponent) is only accessible via a FaceTime call, so kids interact with the creature just like they would with Grandma.

Personalization — Native AI software doesn’t reach its potential without personalization. But this is both a technical challenge and a UX challenge. Systems for persistent memory, context windows, etc. stand in the way of truly personalized software, but so does the move toward easier onboarding and less consumer friction. How do you pull preferences, priorities, and inherent knowledge out of the mess of a human brain and inject them into a model to produce an output that’s as good as, or better than, what a human could do on their own?

Technical Defensibility/Moat — We are not looking for simple GPT wrappers. Whether that means fine-tuning an open-source model on a specific data set, running a sophisticated RAG pipeline, or having access to some set of proprietary and/or synthetic data, we require that your application bring something beyond a nicely designed interface atop a closed model. Have you invented the next chain-of-thought prompting strategy that everyone will be using in six months? Have you figured out a fine-tuning or adapter strategy that’s more efficient than what everyone else has? Have you figured out how to do something that nobody else can do yet? Show us.

Resilience — We all know that a large obstacle to true mainstream adoption is a high failure rate, one that increases with complexity. We’ve been working with these models for a few years now. Tell us how you are going to delight your users with high-quality experiences that work when users need them most.

Distribution Edge — A lot has changed in the past few years around distribution of consumer software. The old playbook is relatively useless. Most consumer software benefits from scaling its user base: on the obvious end that simply looks like more revenue, but on the subtler end it may look like a proprietary data set that can be used to make the product more valuable. However, who is thinking about scale as a benefit to the end user? We’re looking for products that incentivize growth via a mechanic within the software itself.

Teams that bring a unique perspective on these attributes, and demonstrate an ability to bring some technical expertise alongside that perspective, are highly attractive candidates for camp.

If you think that’s you, please apply here.

Here’s how Camp works:

Camp is a thematic investment and in-residence program for startups building in frontier technologies. Each camp consists of 13 weeks “in-residence” at Betaworks to help early stage companies with product development, platform strategy, data science, branding, and fundraising. Entrepreneurs have access to the Betaworks team, its network, and to a carefully curated group of industry leaders to assist with general company-building needs.

For each participating Camp company, Betaworks Ventures will invest $250k on an uncapped SAFE note with a 25% discount, and receive a 5% common stock stake in the business. Our syndicate partners will be adding up to $250k total on uncapped SAFEs with the same 25% discount.

To summarize, participating companies will receive an uncapped SAFE note with a 25% discount from Betaworks + our syndicate partners, and Betaworks will receive 5% of the company’s common stock. More details here.
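For intuition on those terms, here is a worked sketch of how an uncapped, discounted SAFE converts at a later priced round. Only the $250k check size and the 25% discount come from the terms above; the round price used below is a hypothetical example:

```python
# Sketch of uncapped-SAFE conversion math with a 25% discount.
# The $4.00/share round price is a hypothetical example, not a real term.

def safe_conversion_shares(investment: float, round_price: float,
                           discount: float = 0.25) -> float:
    """Shares received when an uncapped, discounted SAFE converts in a priced round."""
    conversion_price = round_price * (1 - discount)
    return investment / conversion_price

# A $250k SAFE converting in a round priced at $4.00/share
# converts at $3.00/share, i.e. ~83,333 shares.
print(round(safe_conversion_shares(250_000, 4.00)))  # 83333
```

Because the SAFE is uncapped, the conversion price scales with whatever the next round prices the company at; the discount is the only lever.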

We also host regular community events; you can sign up here to stay in the know: beta.works/bytes



]]>
<![CDATA[Introducing the AI Camp: Agents Cohort]]> https://render.betaworks.com/introducing-the-ai-camp-agents-cohort-2096487b5d12?source=rss----1baed5266331---4 https://medium.com/p/2096487b5d12 Tue, 09 Apr 2024 20:59:23 GMT 2024-04-10T21:35:41.995Z

In December 2023 we announced the theme of the 10th Camp program at Betaworks: Agents. This theme choice was based on our belief that agents are a critical part of the AI ecosystem and could well become the catch-all term for natively AI software. There is so much innovation happening in this space — some at the app layer and a lot in middleware.

Out of the more than 250 companies that applied, we picked nine teams, each of which has formed a unique insight about agents and agentic AI and is taking a differentiated approach to solving massive problems. The cohort we assembled is a distinct mix of infrastructure/tooling and application-layer products, with founders from the US, Europe, and Canada. While some of the products are built specifically for developers, others are targeted at the enterprise, and still others at prosumers.

Besides helping these teams build their products, refine their GTM, prepare for launch, and connect with design partners, customers, and investors, Camp is an opportunity for us to spend 13 weeks working closely with them and further develop our thesis around the technologies they are using.

At Betaworks, we use a general framework to help us think about emerging tech, which tends to follow this pattern of development:

  • First wave: replicating existing workflows
  • Second wave: augmentation
  • Third wave: truly native experiences

Our experience with this Camp cohort confirms that agents represent this third wave of AI development. To that end, we think “agents” will become the overarching term for native AI software in the same way that “apps” became the term for native mobile software. Not only do agents represent AI-native software, but they have the potential to become the majority of software in the future, not just a subset of it. In other words, we think agents are going to eat software alive.

You don’t have to take our word for it. Come see for yourself: Camp culminates in Demo Day on Tuesday May 7th at Betaworks in NYC. This is an opportunity to hear the story behind each product and see them in action with live demos.

Read more about each team below, and let us know here if you are interested in attending Demo Day in person at Betaworks.

Twin Labs

Seamlessly delegate repetitive tasks to an AI trained to autonomously control any tool.

Twin Labs is building apps and models to enable knowledge workers to assign repetitive tasks to an AI with the same ease as delegating to a smart intern. Users create “Flows,” natural language descriptions of tasks to run or automate. Twin’s custom action model translates them into the right sequence of actions to execute. Finally, these actions are executed autonomously through its headless browser, which can control and navigate within any application. Twin targets teams (HR, Sales, Finance, Ops, Back-Office) in SMBs struggling to keep up with growth, and looking to automate their repetitive work. Their business model is usage-based.

Hugo Mercier (CEO) was previously the founder and CEO of Dreem ($60m raised, 150 employees, exited). Joao Justi (CTO) is the co-founder of Videosupport (acquired in April 2023) where he led both the ML engine and core product engineering.

Jsonify

Turn webpages and documents into useful structured data automatically

Jsonify has built smart AI agents that can discover, monitor, and extract data from websites and documents, using advanced computer vision models to look at the page and understand structure in the same way that a human does. In other words, it can turn messy websites and documents into sensible structured JSON or CSV data, in seconds — with no human intervention! Jsonify’s customers are tech startups who want to build on top of the AI stack/API, and non-technical businesses who have data needs and don’t want to maintain or build data pipelines themselves. The product has a tiered pricing model (based on the number of tasks run), including a freemium level.

Founder Paul Hunkin has been working in generative AI agents since 2022, has 15+ years of experience in software development, is a 3x previous co-founder/CTO (Quacks.ai, Apellix, ButlerTech), and has worked with Google, NASA, and Sony.

Resolvd AI

Resolvd empowers engineering teams to automate repetitive cloud workflows using AI-powered agents, enabling them to focus on innovation and high-impact work.

Engineering teams are increasingly burdened with maintaining microservice-based cloud infrastructure, diverting their focus from core product development and innovation. Resolvd’s desktop application allows users to effortlessly capture their cloud-related workflows via screen recordings, automatically generating step-by-step documentation and automating those workflows through a powerful CLI/Browser engine. Resolvd is targeting (a) engineering and product teams looking to offload time-consuming cloud-related tasks; (b) engineers who need a more efficient way to manage their workflows; and (c) infrastructure and IT teams that want to reduce bottlenecks and dependencies. Resolvd operates on a freemium, seat-based SaaS model. Individual engineers can build and automate their cloud workflows for free, while enterprises can then upgrade to access advanced features such as collaboration, custom integrations, and secure VPC deployments.

Founder Ananth Manivannan spent three years as a backend software engineer at Capital One during a transformative time in the company, shifting from monolithic, on-prem servers to 100% microservice, cloud-based architecture. This changed how engineering teams worked, and he started building automated tooling to help with these new cloud/infra related workflows.

Floode

Floode is a personalized AI executive assistant that automates routine communication management.

We waste time scrolling through our email inboxes every 15 minutes to identify the 1% of critical information amidst 99% of irrelevant data. To address this issue, Floode is replacing the outdated inbox with a highly personalized AI executive assistant tailored to each user’s needs. Through seamless collaboration with the user, the Floode AI assistant can determine the next steps for each incoming email before processing the information. Floode is currently available as a web app and will be released on mobile by the end of the year.

The Floode AI executive assistant is available starting at $30/month or $300/year. The target customers are executives in tech, especially startup CEOs who lose precious hours juggling hundreds of emails. In the long term, Floode aims to provide an AI executive assistant to anyone communicating online for professional purposes.

Sarah Allali (CEO), formerly at Airbnb, has a background in Cognitive Science and Human-Computer Interaction. Nicolas Cabrignac (CTO) specializes in AI/ML and Human-Computer Interaction. In 2019, they co-founded Moone, one of the first GPT-3 powered AI assistants for managers.

Extensible AI

Capture regressions in your agents before and after deploying to production.

The LLM, the agent logic, and the environment an agent operates in are all subject to change. With so many variables in flux, measuring an agent’s reliability in production and staging can be challenging. Companies are left in the dark about whether their deployed agents are regressing or meeting performance expectations. Currently, agents are tested manually, which leaves gaping holes in coverage, leading to embarrassing and trust-eroding situations when untested, and even tested, scenarios go haywire in production. Extensible is building a commercially usable logging tool for agents (completely free and open-source) as well as a plug-and-play reliability tool on top of it.

Extensible provides high-quality, production-ready, fully open-source tools, helping enterprises set up custom at-scale logging infra at cost. Their target customers are AI Agent companies working on deploying agents to production.

Co-founders Parth and Omkaar bring a unique background of Product, ML, and Distributed Systems to the applied ML Agent space. Parth Sareen (CEO) is finishing up Mechatronics Engineering @ UWaterloo. Former roles include: Distributed Systems Engineer at Autodesk, and internships at Apple, Tesla, Deloitte. Omkaar Kamath (CTO) studies Management Engineering @ UWaterloo. He built an early version of an “agent” while interning at Autodesk (back in 2021, before LLM agents were a thing) to conduct competitor research on a recurring basis for a fraction of the cost. Former internships include Majik Systems and Carta.

Skej

Your New AI Scheduling Assistant

Scheduling the 1+ billion meetings that occur every year is a laborious, time-consuming task that kills productivity and is notoriously hard to automate. Enter Skej: a dynamic AI agent that seamlessly handles all back-and-forth scheduling communications and calendar bookings.

Simply copy Skej on an email, DM, or Slack/Teams conversation and watch it take over the entire scheduling process. Skej simulates the actions of a great executive assistant, and it’s compatible with any existing calendar tool. Skej offers a free tier and a paid subscription where users can unlock premium features and advanced scheduling tools.

Founders (and brothers) Paul and Justin Canetti have built and scaled multiple companies including MAZ Systems (no-code app development platform acquired by PSG Equity) and Bounce House (scheduling platform acquired by Declare Health).

Opponent

Adversarial agents capable of deep play with children.

Attention is essential to healthy development in children. Parents spend time and money to ensure their kids receive quality attention — school, activities, playdates — but what about all the ambient time in between? Today parents are atomized and turn more and more to on-demand digital media. Opponent Systems is building a new kind of attention-giving agent to augment family life. Our first product is a digital dragon you can FaceTime/video call like an extended family member. It incorporates a new architecture for graduating memories into cognitive maps (from made-up games to common sense), and a System 2-inspired faculty for applying those maps to the play at hand. Target customers are the divided parent, the only child, the atomized family.

Founder Ian Cheng previously worked as an artist making multi-agent simulations presented at MoMA, Whitney, MOCA, Serpentine, M Plus, Tate, Leeum, De Young, Venice Biennale. BA Cognitive Science, UC Berkeley.

High Dimensional Research (HDR)

A full stack framework for developers, whether hobbyist or professional, to understand, develop, and deploy agentic applications or online-enabled agents.

Building agentic applications is hard and even the state of the art is unreliable. HDR has developed a batteries-included framework for launching web agents. At the core of this framework is the Collective Memory Index, which transmits information between models to enable any LLM to execute tasks reliably. HDR sells credits on a per use basis. Their target customers are AI developers at all team sizes and levels of technical ability with additional utilities for enterprises.

Tynan (Ty) Daly (CTO) has spent his career in ML across a variety of industries, including training and deploying standard, exotic, and original transformer architectures. Matilde Park (CPO) has spent her career leading teams building video games and peer-to-peer application spaces. Gates Torrey (CEO) has spent his career investing across a variety of asset classes and stages, with a focus on transformative technologies and novel assets.

Mbodi

Enabling internet scale learning in robotics

The fundamental problem for ML in robotics is data scarcity. Mbodi’s tools uniquely provide cross-embodiment dataset transfer learning with generative AI. The current alternatives are spending tens of millions on tele-operation and hardware, or paying Nvidia Monopoly money for their Omniverse.

Mbodi’s target customers are any company or research lab with a robot. Initial targets are the open source enthusiasts like Meta, research labs, and startups as well as service robot companies. Mbodi will provide their unique, cross-embodiment end-to-end learning framework as an open-source alternative to extensive tele-operation and ecosystem lock-in as encountered with Nvidia. Simultaneously, they will charge per token for hosting and servicing multimodal generative AI features for robotics such as inference-time planning, dreaming (training on generated sensor data), a dashboard for observing catastrophic forgetting, and high performance low latency inference endpoints.

Co-founder Sebastian Peralta is a roboticist, AI researcher, and previous network latency minimizer at Google’s public DNS. Co-founder Xavier (Tianhao) Chi was previously a tech lead at Google with extensive technical and product experience.

About Camp
Camp is a thematic investment and in-residence program for startups building in frontier technologies. Betaworks has been investing in AI and machine learning since 2016 — we wrote the first check into HuggingFace as a part of BotCamp. For each cohort, we select 8–10 companies building within a theme to participate in a 13-week program.

For this cohort, each company received $500K in investment from Betaworks and our syndication partners. Participants receive 1:1 mentorship, tailored programming, introductions to investors, and product development-focused guidance. Teams have access to the Betaworks office and event space, and the program culminates in an IRL Demo Day here in New York City.

Our Partners



]]>
<![CDATA[Apply to AI Camp: Agents — the 10th Accelerator Program from Betaworks — Starting Feb 2024]]> https://render.betaworks.com/apply-to-ai-camp-agents-the-10th-accelerator-program-from-betaworks-starting-feb-2024-37f0c70678db?source=rss----1baed5266331---4 https://medium.com/p/37f0c70678db Thu, 07 Dec 2023 14:10:10 GMT 2023-12-14T16:27:37.420Z Application now open for AI Camp: Agents — Starting Feb 2024

We at Betaworks believe that agents will become a key element to the new AI-driven internet. If you are building in this space, please apply.

Betaworks Camp is back in session and this time, we’re looking for startups focused on AI Agents.

Betaworks has been investing in AI and machine learning since 2016, all the way back to the very first Betaworks Camp — Betaworks wrote the first check into HuggingFace as a part of BotCamp. Here’s how it works:

Betaworks will select approximately 10 companies to participate in the 13-week program. Each company will receive $500K in investment.

Alongside that investment from Betaworks and our syndication partners, participants will receive 1:1 mentorship, tailored programming, events, and product development-focused guidance. The program will take place IRL in New York City at the Betaworks offices, culminating in a Demo Day.

Applications are open now — apply here. We will be reviewing applications on a rolling basis with priority given to applicants who submit before Friday 12/29. The final deadline for applications is Friday 1/12. Camp will begin in February 2024.

Why Agents?

Our next AI Camp is focused on agents and the technology that both enables their creation and ensures they fulfill your/their goals. We believe that agents will become a key element to the new AI-driven internet. You can read more about our thesis on agents here.

In our definition, an AI Agent can:

1. Perceive, synthesize, and remember its context;

2. Independently plan a set of actions toward an abstract goal;

3. Use the tools necessary to execute against that goal without human support; and

4. Evaluate the results of its work against the overarching goal.
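As a rough illustration only (not any particular framework; every function name below is a placeholder), the four capabilities above compose into a simple loop:

```python
# Minimal sketch of the four-part agent definition above:
# perceive/remember (1), plan (2), use tools (3), evaluate (4).
# Purely illustrative; this is not a real agent framework.

def run_agent(goal, tools, perceive, plan, evaluate, max_steps=5):
    memory = []                          # (1) remember context across steps
    for _ in range(max_steps):
        context = perceive(memory)       # (1) perceive and synthesize context
        for tool_name, args in plan(goal, context):  # (2) plan toward the goal
            result = tools[tool_name](*args)         # (3) execute with tools
            memory.append((tool_name, args, result))
        if evaluate(goal, memory):       # (4) evaluate against the goal
            break
    return memory
```

In a real system, `plan` would typically be an LLM call and `tools` real APIs; the point here is the loop structure, and in particular that evaluation feeds back into the next round of perception and planning.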

We’ve spent some time identifying the attributes of this category that we’d hope potential Camp companies have a unique POV on:

Autonomous — an agent must be able to perceive and synthesize data and ultimately navigate its own path of decisions via that synthesis

Language — the unlock or “why now” of agents is most certainly the rise of LLMs, which can simulate some manner of cognition, providing a thoroughfare through which agents can process complex information and develop plans. Moreover, natural language is humans’ native processing medium — it is both incredibly precise and highly flexible, which allows for both fine-tuned control and interpretability.

Domain Native Interface — the method by which a human interacts with an agent is still highly TBD. However, our position is that agents (unlike augmentative AI which must score high on human usability) are about utility, and thus, many can be relatively unseen. Agents may very well provide the most utility when they infiltrate an already high-touch/high-visibility interface, rather than paving their own path to the user.

Personalization — By definition (at least by our definition), agents are not about a single call and response. That’s search. Because of this, the overarching goal given by a human user likely means something different to human A than it does to human B. Infrastructure that gives an agent some measure of theory of mind (see Plastic Labs from AI Camp: Augment), rather than developing an agent that reaches for the lowest common denominator, is more likely to provide high utility.

Human Guardrails — Given that agentic AI is still in its infancy, and that it’s predominantly built off of non-deterministic LLMs, the notion of agents skittering off and executing against their own plans without any checks and balances isn’t all that attractive. Our position is that the interface for agentic AI must have some concept of choke points, wherein a human can see the pathing of the agent and either approve or disapprove of the work. Furthermore, agents that can outline and ‘explain’ their work on a step-by-step basis allow for businesses, developers, and even legislators to navigate the next generation of AI and lay down policy.

Disposable — We don’t believe that all AI agents will be disposable, but we believe that disposable software is an important layer to consider when approaching agentic AI. Some of the most taxing and tedious tasks in our lives are things that we do only once or every so often, meaning that the market has not built any high-value tool for that task. Agents represent the ability to efficiently off-load a task without the high cost of developing human-built software against it.

Tool Using — Humans are only as good as the tools they can build and wield. A great deal of the value agents can provide will come from their ability to use tools, as well (see Unakin from AI Camp: Augment).

Deal Structure

For each participating Camp company, Betaworks Ventures will invest $250k on an uncapped SAFE note with a 25% discount, and receive a 5% common stock stake in the business. We’re very excited to be working alongside our friends at Differential Ventures, Mozilla Ventures — as well as a third partner to-be-announced soon — who will be adding $250k total on uncapped SAFEs with the same 25% discount.

To summarize, participating companies will receive a total of $500k on an uncapped SAFE note with a 25% discount from Betaworks + our three syndicate partners, and Betaworks will receive 5% of the company’s common stock.

The Program

What makes Camp different from other accelerators? The team at Betaworks does deep research into the evolution of a new technology, and we make a bet on a cohort of companies that we think are defining that category. We bring that carefully selected group of founders together to learn from one another as they embark into uncharted territory, and tap them into our people-network of portfolio founders, researchers, tech big brains and, of course, investors.

Unlike most accelerator programs, Betaworks Camp focuses specifically on product development and early product-market fit. Alongside the basics of business building, our curriculum goes deep into the focus area (in this case, agentic AI) to uncover the latest techniques, research papers, and tooling on behalf of our cohort of startups.

Each team will be paired with a mentor, and gets direct access to the Betaworks investments team on a weekly basis. We schedule office hours and learning sessions with investors and guest speakers from our roster of former portfolio company founders and investment partners. Previous mentors and guest speakers include: Clem Delangue (Hugging Face), Emad Mostaque (Stability), Mike Mignano (LSVP), Linus Lee (Notion), Hilary Mason (Hidden Door), Naomi Gleit (Meta), Brian Donohue (Instapaper), and Gilad Lotan (BuzzFeed).

We will select up to 12 teams to participate in this cohort. Camp will begin in early February, and lasts for 13 weeks. Programming takes place at the Betaworks offices in NYC’s Meatpacking District, where teams will have access to shared workspace, desks, and conference rooms. In the final week each team will present their product at Demo Day before a room of investors.

What former participants have to say about Camp:

“I’ve done other accelerators, I’ve been a mentor in some, and the spirit of the Betaworks camp is the one that best shows an understanding of what early-stage innovation really is.”

“I feel that, here, I learn by rubbing shoulders with other innovators and getting immersed in the right environment. I love it. Never lose that spirit.”

“A very valuable experience! The program was authentic and everyone brought a lot of experience to the group. My cohort continually shared valuable resources and was inspiring.”

“Overall, it was a great experience, and I would highly recommend the program to other founders. The Betaworks team and the other camp companies were super helpful.”

“I absolutely LOVED spending time here and embedding myself in this wonderful environment.”

If you are working in this space and are looking for mentorship, support, and community, then we hope you will apply to Camp. Apply here.



]]>
<![CDATA[Announcing the next Betaworks Camp program — AI Camp: Agents]]> https://render.betaworks.com/announcing-the-next-betaworks-camp-program-ai-camp-agents-13e9a404ad47?source=rss----1baed5266331---4 https://medium.com/p/13e9a404ad47 Mon, 27 Nov 2023 17:56:22 GMT 2023-12-07T15:06:06.631Z Announcing the next Betaworks Camp program — AI Camp: Agents

Update: the application to AI Camp: Agents is now open! Apply here.

We think agents — fully developed agents and the tools to enable them — represent the next wave of AI and of innovation. In the past nine months, we’ve seen ebbs and flows of buzz around agentic artificial intelligence: flurries of press coverage around Baby AGI and AutoGPT, the temporary excitement around OpenAI’s ChatGPT plug-in ecosystem, and now the excitement around GPTs.

This is just the tip of the iceberg.

Our last Camp cohort focused on augmentative AI technology — software that was purpose-built to pair with a human in their existing or emergent behaviors, workflows, etc. to create a positive-sum outcome. In the midst of that Camp, we saw twinklings of the gravitational pull toward agentic AI.

Moreover, Betaworks has an extensive portfolio in the AI space, including but not limited to Stability, Nomic, Flower, and HuggingFace. HuggingFace is a particularly important part of the emergent agent ecosystem and has historically provided support to Camp companies in the AI space.

Betaworks Camp is a cohort-based investment program at the pre-seed level. Betaworks invests in 8–12 companies, all of which are focused on an irruption-phase technology (in this case, agents). Details on the investment, application timeline, and more are forthcoming. (You can stay in the loop with us here.)

Our next AI Camp is focused on agents and the technology that both enables their creation and ensures they fulfill your/their goals. What defines an agent? In our view, an AI Agent can:

1. Perceive, synthesize, and remember its context;

2. Independently plan a set of actions toward an abstract goal;

3. Use the tools necessary to execute against that goal without human support; and

4. Evaluate the results of its work against the overarching goal.
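The four capabilities above can be sketched as a minimal agent loop. Everything here (the `Agent` class, the stubbed tool, the keyword-matching planner) is illustrative and hypothetical, not a real framework:

```python
# Minimal sketch of the four-capability agent loop: perceive/remember,
# plan, execute with tools, and evaluate. All names are illustrative.

class Agent:
    def __init__(self, tools):
        self.tools = tools      # name -> callable the agent may use
        self.memory = []        # (1) perceived and remembered context

    def perceive(self, observation):
        self.memory.append(observation)

    def plan(self, goal):
        # (2) trivial planner: one step per tool mentioned in the goal
        return [name for name in self.tools if name in goal]

    def act(self, goal):
        # (3) execute each planned step without human support
        results = [self.tools[name]() for name in self.plan(goal)]
        # (4) evaluate: did every step produce a result?
        success = all(r is not None for r in results)
        return results, success

agent = Agent({"search": lambda: "found 3 results"})
agent.perceive("user asked a question")
results, ok = agent.act("use search to answer")
```

A real planner would of course be an LLM call rather than substring matching; the point is only the shape of the loop.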

Modern LLMs need to be enhanced along each dimension of the definition above, and those enhancements will probably arise from infrastructural and framework unlocks rather than purely from more training data or model parameters. While hype around agents has ebbed and flowed for close to 25 years, the infrastructure is now being built, such that the moment for agentic applications and software is coming.

(1) Improved Perception & Synthesis

LLMs do a fine job of modeling both syntax and semantics, but as John Searle outlines in the “Chinese Room” thought experiment, this is really just a function of sufficiently powerful information processing and pattern recognition systems, not necessarily indicative of “true knowledge or understanding” itself. We observe this in LLMs’ ability to confidently hallucinate obviously incorrect or nonexistent information. Adding the capacity for a more holistic understanding of the environment will be critical to ‘agency’. Some of this will certainly be solved by greater context alone, but more optimal methods will probably have to be stacked on top; already we’re seeing people get extant models and context windows to perform better synthesis with certain metacognitive frameworks.

Memory, Reflection, & Metacognition

Metacognition is the ability to think about how one is thinking. Being able to reflect on one’s own information processing (and to link and compare it to prior versions) allows a system to retain context across multiple actions, which is core to developing and executing complex tasks. Other solutions to this problem will involve not just putting more memories into context, but actually knowing which memories or data are most relevant to include.
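A minimal sketch of that relevance-based memory selection: score stored memories against the current query and keep only the top few, rather than stuffing everything into context. Plain token overlap stands in for the embedding similarity a real system would use, and all names are hypothetical:

```python
# Select which memories to include in context by relevance, not recency.
# Token overlap is a toy stand-in for embedding similarity.

def relevance(memory: str, query: str) -> float:
    m, q = set(memory.lower().split()), set(query.lower().split())
    return len(m & q) / max(len(q), 1)

def select_memories(memories: list[str], query: str, k: int = 2) -> list[str]:
    # Keep the k memories most relevant to the current query.
    return sorted(memories, key=lambda m: relevance(m, query), reverse=True)[:k]

memories = [
    "user prefers short answers",
    "user is planning a trip to Lisbon",
    "the weather yesterday was rainy",
]
picked = select_memories(memories, "book a hotel for the Lisbon trip")
```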

(2) Improved Goal Setting & Planning

Once models have done a better job synthesizing their immediate environment, they also need to do a better job of actually coming up with a plan of execution. Currently, the problem seems to be that models do not scope out a “well-shaped” decision tree: sometimes the agent’s decision tree is too bushy (evaluating too many alternative paths), and sometimes it is too tall (evaluating too far down a particular path).

There is obviously no deterministic way to know ex ante exactly how wide or deep the decision tree needs to be. However, there are a couple of potential methods that can help us reduce this problem.

Adversarial scoping

Generative Adversarial Networks (GANs) are an old concept in deep learning — old but not that old! You might recall the LP deep dive we did on GANs in 2018. The basic premise is that one model (the generator) produces the outputs and another model (the discriminator) does some kind of “grading,” until the generator passes a sufficient grading threshold. You can imagine a world where agents are paired with adversarial LLMs whose job is to ‘red-light’ a particular path of decisions as too narrow, too wide, too expensive, and so on.
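The generator/discriminator pairing could look something like the sketch below, with both roles stubbed as plain functions (in practice each would be an LLM call, and every name here is invented for illustration):

```python
# Adversarial scoping sketch: a "generator" proposes plans and a
# "discriminator" red-lights plans that explore too far, until one passes.

def generate_plan(attempt: int) -> list[str]:
    # Hypothetical generator: each retry trims the plan down.
    full = ["research", "compare", "draft", "verify", "publish"]
    return full[: len(full) - attempt]

def discriminator(plan: list[str], max_steps: int = 3) -> bool:
    # Red-light plans that are too tall (too many sequential steps).
    return len(plan) <= max_steps

def scoped_plan(max_attempts: int = 5) -> list[str]:
    for attempt in range(max_attempts):
        plan = generate_plan(attempt)
        if discriminator(plan):
            return plan
    return []

plan = scoped_plan()  # first plan to pass the grading threshold
```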

Cooperative scoping

Alternatively, you can get agents to improve their accuracy by giving them the affordances of a ‘toolkit’ or a ‘team’. For more on this, check out the SayCan paper and the Dreamcoder paper. Teams for agents are also relatively straightforward in that you can split an agent’s workflow into the work of many subordinate agents with more limited scopes.
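Splitting one agent’s workflow across a team can be sketched as a pipeline of subordinate workers with deliberately narrow scopes. The sub-agents here are plain functions standing in for scoped LLM workers; all names are hypothetical:

```python
# Cooperative scoping sketch: a coordinator splits one broad task among
# subordinate "agents," each responsible for a single narrow stage.

def research_agent(task: str) -> str:
    # Limited scope: only gathers material for the task.
    return f"notes on {task}"

def writing_agent(notes: str) -> str:
    # Limited scope: only turns gathered material into a draft.
    return f"draft based on {notes}"

SUBAGENTS = [research_agent, writing_agent]

def run_team(task: str) -> str:
    # Each subordinate agent handles one stage of the workflow in turn.
    result = task
    for agent in SUBAGENTS:
        result = agent(result)
    return result

draft = run_team("market sizing")
```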

Reflecting & Pathfinding

Beyond using an external system to moderate an agent’s behavior, agents can also use frameworks for self-reflection and memory to improve their own performance. Ideas like “Show Your Work” and “Chain of Thought” are already showing up in LLM applications, where an LLM has to outline a plan to solve a problem, solve it, and then reflect on its solution. Merging reflection with decision trees results in something akin to “Tree of Thoughts.” The basic intuition here is that we can take approaches to pathfinding through graph and tree structures from classical computer science and apply them to decision trees, weighing alternative paths against the depth of a given one.
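Concretely, that pathfinding idea can be sketched as classical best-first search over a tree of candidate reasoning steps, where each node’s score stands in for an LLM’s self-evaluation of that partial plan (the tree and scores below are made up for illustration):

```python
# Tree-of-Thoughts-style sketch: best-first search over a thought tree,
# expanding the most promising partial plan first.

import heapq

def best_first(tree, scores, root="start", goal="solved"):
    # tree: node -> child nodes; scores: node -> self-evaluated promise.
    frontier = [(-scores[root], root, [root])]  # max-score via negated keys
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for child in tree.get(node, []):
            heapq.heappush(frontier, (-scores[child], child, path + [child]))
    return None

tree = {"start": ["a", "b"], "a": ["dead-end"], "b": ["solved"]}
scores = {"start": 0.5, "a": 0.9, "b": 0.6, "dead-end": 0.1, "solved": 1.0}
path = best_first(tree, scores)  # explores "a" first, then backtracks to "b"
```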

(3) Improved Executables and Evaluations

There will probably have to be structured ways for LLMs to actually interface with data, applications, APIs, and other agents to effect change in the “real world” of the user. We saw a version of this with ChatGPT plugins, but that fizzled out (as of this writing); that likely has more to do with developers’ reluctance to build inside another company’s application than with the idea of exposing their product to an LLM.

Maybe code synthesis agents will automatically learn to use Stripe APIs, or use LangChain, or maybe there is still a business or product to be built standardizing APIs and making them more accessible to models. We doubt that this is an entirely trivial problem, since even modern state-of-the-art LLMs still have trouble properly structuring JSON outputs, but there are some examples of people attempting to solve this in AI-native ways. The Gorilla paper, for example, describes an LLM fine-tuned specifically for making API calls. The same way a classical developer today pulls a structured code module into a software project, we imagine a world where an AI-native developer (or even an agent) pulls in a task-specific model like this to handle deciding on API calls.
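One small piece such a standardization layer would need: validating a model-emitted API call against a declared schema before executing it, since malformed JSON is exactly where today’s models stumble. The schema format and call shape below are invented for illustration:

```python
# Validate a model-emitted tool/API call before execution:
# well-formed JSON with the expected fields passes; anything else is rejected.

import json

SCHEMA = {"name": str, "args": dict}  # hypothetical minimal call schema

def parse_tool_call(raw: str):
    """Return the call dict if it matches SCHEMA, else None."""
    try:
        call = json.loads(raw)
    except json.JSONDecodeError:
        return None  # model emitted malformed JSON
    if not isinstance(call, dict):
        return None
    for key, expected in SCHEMA.items():
        if not isinstance(call.get(key), expected):
            return None  # missing field or wrong type
    return call

good = parse_tool_call('{"name": "create_charge", "args": {"amount": 500}}')
bad = parse_tool_call('{"name": "create_charge", "args": ')  # truncated JSON
```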

AI Camp: Agents

Incorporating the above synthesis around infrastructural white spaces, we’ve spent some time identifying the attributes of this category that we’d hope potential Camp companies have a unique POV on:

Autonomous — an agent must be able to perceive and synthesize data and ultimately navigate its own path of decisions via that synthesis

Language — the unlock, or “why now,” of agents is most certainly the rise of LLMs, which can simulate some manner of cognition, providing a thoroughfare through which agents can process complex information and develop plans. Moreover, language is humanity’s native processing medium: it is both incredibly precise and highly flexible, allowing for fine-grained control as well as interpretability.

Domain Native Interface — the method by which a human interacts with an agent is still highly TBD. Agents may very well provide the most utility when they infiltrate an already high-touch/high-visibility interface, rather than paving their own path to the user.

Personalization — By definition (at least by our definition), agents are not about a single call and response. That’s search. Because of this, the overarching goal given by a human user likely means something different to human A than it does to human B. Infrastructure that gives an agent some measure of theory of mind (see Plastic Labs from AI Camp: Augment), rather than developing an agent that reaches for the lowest common denominator, is more likely to provide high utility.

Human Guardrails — Given that agentic AI is still in its infancy, and that it’s predominantly built off of non-deterministic LLMs, the notion of agents skittering off and executing against their own plans without any checks and balances isn’t all that attractive. Our position is that the interface for agentic AI must have some concept of choke points, wherein a human can see the pathing of the agent and either approve or disapprove of the work. Furthermore, agents that can outline and ‘explain’ their work on a step-by-step basis allow for businesses, developers, and even legislators to navigate the next generation of AI and lay down policy.

Disposable — We don’t believe that all AI agents will be disposable, but we believe that disposable software is an important layer to consider when approaching agentic AI. Some of the most taxing and tedious tasks in our lives are things that we do only once every so often, meaning that the market has not built any high-value tool for that task. Agents represent the ability to efficiently off-load a task without the high cost of developing human-built software against it.

Tool Using — Humans are only as good as the tools they can build and wield. A great deal of the value agents can provide will come from their ability to use tools, as well (see Unakin from AI Camp: Augment).

There are, indeed, roadblocks sprinkled along the path of AI agents in their march toward prime time. Some are technical: the current context capacity of these reasoning models is limited to a finite amount of data, and models that fail at one step in their chain of reasoning and execution struggle to remember where they left off, instead starting from scratch. Others are political: if the concept of AI is sending a shiver through the public around jobs, safety, and the like, then more autonomous AI is certain to attract some headwinds.

We believe that the potential for value creation is incredibly high and that the timing is right to build a portfolio in this space.

Are you building agentic AI technology? Apply here.


Announcing the next Betaworks Camp program — AI Camp: Agents was originally published in Betaworks on Medium, where people are continuing the conversation by highlighting and responding to this story.

]]>