We’ve built the digital economy on blind trust: trusting vendors, contracts, and terms of service instead of verifying what software actually does. Trust is not a security property. Verifiable compute changes that: you can get cryptographic proof that the code running on a server matches its auditable source. With major regulatory deadlines arriving in 2026, this technology is more relevant than ever.
When was the last time you actually verified that a piece of software does what it claims? Not read the terms of service. Not taken a vendor’s word for it. Actually verified it.
For most people the answer is never. And that’s the problem.
The default model for security today is delegation. You buy cybersecurity insurance. You pay for proprietary tools. You sign contracts with vendors. In every case, you’re paying someone else to make the problem go away.
This works until it doesn’t. And it often doesn’t.
SolarWinds was supposed to be the company that kept others secure. They were the leading security and IT management vendor, trusted by Fortune 500 companies and U.S. government agencies alike. Then in 2020, attackers injected a backdoor into SolarWinds’ own software updates. Because none of their customers could verify what code they were actually running, the compromise spread silently through trusted update channels to thousands of organisations. The company whose entire job was security became the attack vector.
This isn’t an isolated case. It’s the natural consequence of a system built on blind trust.
Think about how many times a day you send data to services you can’t inspect.
You type a prompt into a chatbot. What happens to that data? Is it logged? Used for training? Shared with third parties? You have no way to know. The terms of service say one thing, but terms of service are a legal obligation, not a technical constraint. They describe what a company promises to do, not what the software actually does.
You send a message to a friend over a chat platform. Once you hit send, all you know is that your message went to some server at some IP address. You have no idea how it’s stored, who can read it, or whether the encryption the company advertises is real.
You store files in the cloud. You use a password manager. You submit medical information through a health portal. In each case, you’re trusting that the software behind the interface behaves the way someone told you it would. You have no way to check.
The same problem exists at every scale. A financial institution deploys an AI model to analyse client data. What code is actually running on that server? Is it the model they audited? Has it been modified since deployment? The vendor’s documentation says one thing, but documentation is a promise, not a proof.
We’ve gotten used to this because it’s been the only option. Software services are black boxes. Some people try to protect themselves with privacy-preserving tools like VPNs or Tor. Others just trust blindly because they have no alternative. Neither group can actually verify what the services they depend on are doing with their data.
There’s a widespread belief that well-funded proprietary tools are inherently more secure than open alternatives. This doesn’t hold up to scrutiny: funding can pay for the resources needed to do security well, but it doesn’t produce security by default.
If you can’t inspect it, you can’t verify it. If you can’t verify it, you’re just trusting. And trust is not a security property; it’s the absence of one. Verifiability is a prerequisite for any reasonable level of security.
Legal frameworks help, but they’re reactive. They punish breaches after they happen. They don’t prevent them. A contract that says “we won’t misuse your data” does very little in practical terms to stop software from misusing your data. Only the technical controls in the system can do that.
Real transparency means being able to verify what software is running on a server, what it’s capable of, and what it does with data you send it. Not by reading a blog post or a privacy policy, but by inspecting the actual code and proving it matches what’s deployed.
Almost no system works this way today. When you interact with a web service, you’re interacting with a black box. You send a request and get a response. Everything in between is invisible to you.
Public blockchains got this right in one narrow domain: every transaction is verifiable, every state change is auditable. But blockchains are impractical for most software. The question is whether we can bring a similar level of verifiability to general-purpose computing. It turns out we can.
Confidential compute hardware, specifically secure enclaves combined with remote attestation, makes it possible to provide cryptographic proof of what software is running behind a given domain or IP. Combined with full-source bootstrapping and reproducible builds, this means anyone can independently verify exactly what code a server is executing.
This isn’t theoretical. It’s how Caution works today. Deploy software to an enclave, and anyone can rebuild the image from source, compare it against the live attestation, and get cryptographic proof that the running code matches the auditable source.
The shift is fundamental: from “trust us” to “verify it yourself.” Users no longer have to hope that companies are telling the truth about how their data is handled. Companies no longer have to cross their fingers that the code running in their mission-critical systems is actually what they deployed.
Regulatory deadlines are making this urgent. Multiple major compliance frameworks are reaching enforcement milestones simultaneously. The EU AI Act’s obligations for most high-risk AI systems take effect August 2, 2026, requiring organisations in healthcare, finance, government, and critical infrastructure to demonstrate how their workloads operate, not just claim they are secure. In parallel, the HIPAA Security Rule overhaul (expected to be finalised in 2026, pending regulatory approval) introduces mandatory, prescriptive cybersecurity controls across the entire healthcare sector for the first time, with stricter audit requirements and faster breach notification timelines. Organisations across multiple verticals now face hard deadlines with real enforcement consequences. Verifiable compute is no longer a nice-to-have; it is an essential tool for meeting these compliance requirements.
Verifiability is a missing building block. Not just for security, but for individual freedom and the entire digital economy.
The ability to verify what software does is a prerequisite for trust in a digital world. Without it, even the most carefully architected systems are incomplete. You can choose your own tools, control your own data, build on open standards, but the moment you interact with a service you can’t inspect, you’re back to trusting someone else. Verifiable compute closes that gap. It gives any system the one thing that’s been missing: a way to prove that remote software respects the rules it claims to follow.
This is also why the technology has to be truly open source. Not open core with the important parts behind a paywall. Fully open, and auditable by anyone. If the goal is to remove the need for blind trust, the tool that does it can’t require blind trust either. Anything less would be a contradiction.
We think of this as infrastructure for the open internet. The same way public key cryptography gave individuals the power to communicate privately, verifiable compute gives them the power to interact with services confidently. It’s a primitive that makes other freedoms possible.
We deployed an LLM to a secure enclave and verified exactly what code is running: an industry first for full source bootstrapped, deterministic, fully verifiable, and end-to-end encrypted AI inference.
Large Language Models have transformed nearly every industry, but a fundamental problem remains: how does one use an LLM without exposing sensitive data to third parties?
Tech companies, and AI companies in particular, have a poor track record with user data. Prompts may be logged, used for training, shared with contractors, or retained indefinitely. The LLM’s behaviour may also be biased by its host via hidden system prompts. Privacy policies can change at any time. When data leaves your control, there is no way to verify how it’s handled.
There is significant demand for AI applications that better protect user data privacy. Some attempts use Trusted Execution Environments (TEEs) to isolate data, providing remote attestation as proof. But these solutions fall short: their “proofs” only demonstrate that the deployed code hasn’t changed, not what that code actually is.
Without full verifiability, the trust still lies with the operator alone. But promises are not good enough; we need a concrete way to verify the safety of data sent into third party servers.
Additionally, many confidential compute solutions today terminate TLS outside the enclave, leaving data exposed on the host that runs it. This defeats the purpose of the enclave: the data transits an untrusted system outside the secure boundary. To mitigate this, data must remain encrypted until it is inside the enclave.
Below, we cover how Caution’s approach mitigates both of these risks.
Imagine being able to inspect the exact code powering an online service, and being able to prove it can’t mishandle your data: no logging, no saving, no undesirable behaviour.
In this demo, Caution deploys a verifiable AI inference app, letting you prove precisely what code runs inside a secure enclave.
The verifiable AI inference demo runs a CPU-based LLM and is not optimised for performance. The goal is to demonstrate the verification workflow, not production inference efficiency. Full GPU-backed inference is planned once EnclaveOS V2 is production-ready. With the right partners, we could accelerate this. If you’re interested, please reach out.
The enclave image must build deterministically for verification to work. Our prior enclave experiments with LLMs made this straightforward.
The deployment uses the standard Caution platform workflow:
caution init
git push caution main
caution verify
❯ git push caution test-enclave
...
Deployment successful!
Application: http://<redacted>:8080
Attestation: http://<redacted>:5000/attestation
Run 'caution verify' to verify the application attestation.
❯ caution verify
Verifying enclave attestation...
Challenge nonce (sent): dc695fd5e10b2f0887a0ec163520127b40455defaa31686c4dcee77884c1177c
Requesting attestation...
Verifying attestation...
✓ Certificate chain verified against AWS Nitro root CA
✓ All certificates are within validity period
✓ COSE signature verification passed
✓ Nonce verified (prevents replay attacks)
Challenge nonce (received): dc695fd5e10b2f0887a0ec163520127b40455defaa31686c4dcee77884c1177c
✓ Attestation verified successfully
Remote PCR values (from deployed enclave):
PCR0: 267a49a97b94b57e11ef1fe59c798415d61157c68563d6b2901ef17a48c0c4b82f66c45fc0a156bcf014b742b75a277f
PCR1: 267a49a97b94b57e11ef1fe59c798415d61157c68563d6b2901ef17a48c0c4b82f66c45fc0a156bcf014b742b75a277f
PCR2: 21b9efbc184807662e966d34f390821309eeac6802309798826296bf3e8bec7c10edb30948c90ba67310f7b964fc500a
Manifest information:
App source: https://git.distrust.co/public/llmshell/archive/bd4d093ae51663e21ed29ab2607324080a8704d5.tar.gz (git archive)
Enclave source: https://git.distrust.co/public/enclaveos/archive/attestation_service.tar.gz (git archive)
Reproducing build from current directory...
Build artifacts available at: /home/user/.cache/caution/build/.tmp802BZp/eif-stage
You can review everything that went into building this enclave:
• Containerfile.eif - The complete build recipe
• app/ - Your application files
• enclave/ - EnclaveOS source (attestation-service, init)
• run.sh - Generated startup script
• manifest.json - Build provenance information
Expected PCR values:
PCR0: 267a49a97b94b57e11ef1fe59c798415d61157c68563d6b2901ef17a48c0c4b82f66c45fc0a156bcf014b742b75a277f
PCR1: 267a49a97b94b57e11ef1fe59c798415d61157c68563d6b2901ef17a48c0c4b82f66c45fc0a156bcf014b742b75a277f
PCR2: 21b9efbc184807662e966d34f390821309eeac6802309798826296bf3e8bec7c10edb30948c90ba67310f7b964fc500a
Comparing PCR values...
✓ Attestation verification PASSED
The deployed enclave matches the expected PCRs.
This means the code running in the enclave is exactly what you expect.
Powered by: Caution (https://caution.co)
The files for local reproduction are stored in the cache directory, containing every line of code used to build the software:
❯ tree -I app /home/user/.cache/caution/build/.tmp802BZp/eif-stage/
/home/user/.cache/caution/build/.tmp802BZp/eif-stage/
├── app
│ ├── <omitted full app file system...>
├── build.log
├── Containerfile.eif
├── enclave
│ ├── attestation-service
│ │ ├── Cargo.toml
│ │ └── src
│ │ └── main.rs
│ ├── Cargo.lock
│ ├── Cargo.toml
│ ├── Containerfile
│ ├── init.sh
│ ├── LICENSE.md
│ ├── Makefile
│ ├── README.md
│ ├── src
│ │ ├── aws
│ │ │ ├── Cargo.toml
│ │ │ └── src
│ │ │ └── lib.rs
│ │ ├── init
│ │ │ ├── Cargo.lock
│ │ │ ├── Cargo.toml
│ │ │ └── init.rs
│ │ └── system
│ │ ├── Cargo.toml
│ │ └── src
│ │ └── lib.rs
│ └── udhcpc-script.sh
├── manifest.json
├── output
│ ├── enclave.eif
│ ├── enclave.pcrs
│ └── rootfs.cpio.gz
└── run.sh
69 directories, 1432 files
With the Caution platform, you can:
This enables the first fully verifiable LLM deployment. No trust required: full verification covers every line of code down to the kernel, proving the LLM can’t perform undesirable actions with your data.
Verifiability solves a major problem: knowing exactly what code is running inside of a secure enclave. But there’s a second problem that is often not addressed adequately: TLS termination.
In typical enclave deployments, TLS terminates at a reverse proxy or load balancer outside the enclave. The traffic is then forwarded to the enclave in plaintext. This means the host system within which the secure enclave runs, the very thing the enclave is supposed to protect against, can read every request and response.
While enclave remote attestation without end-to-end encryption still preserves integrity and prevents things like inference manipulation by advertisers, it is useless for confidentiality (in spite of what some marketing teams might imply). A compromised host, a malicious cloud operator, or an attacker with infrastructure access can intercept all data before it ever reaches the protected environment.
Caution solves this with STEVE (Secure Transport Encryption Via Enclave), a freely licensed open source solution which adds a second encryption layer that terminates exclusively inside the enclave.
STEVE uses X25519 key exchange with Ed25519 signatures bound to the enclave’s attestation. Clients verify they’re communicating with the attested enclave before establishing an encrypted channel. The host never sees plaintext application data.
What makes this powerful is that the end-to-end encryption leverages hardware-backed keys accessible only inside the secure enclave. In other words, the security of this setup rests on keys provided by the confidential compute hardware itself.
For client-side applications, a service worker handles encryption transparently, requiring no application changes. For this LLM deployment, prompts and responses are encrypted from the browser all the way into the enclave.
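The attestation binding at the heart of this design can be sketched in a few lines. This is a conceptual model using standard-library hashing only: real STEVE uses X25519 key exchange and Ed25519 signatures, and the `user_data` field name is an assumption, not the actual wire format.

```python
import hashlib
import hmac
import secrets

# Inside the enclave: generate an ephemeral public key and bind it to the
# attestation by embedding its hash in the hardware-signed document.
# (Stand-in random bytes; real STEVE uses X25519 keys signed with Ed25519.)
enclave_pub = secrets.token_bytes(32)
attestation_doc = {"user_data": hashlib.sha256(enclave_pub).hexdigest()}

def client_checks_binding(pubkey: bytes, doc: dict) -> bool:
    """Client side: accept a key for the encrypted channel only if the
    hardware-signed attestation commits to exactly this public key."""
    return hmac.compare_digest(
        hashlib.sha256(pubkey).hexdigest(), doc["user_data"])

# The enclave's real key verifies; a host-substituted key does not, so a
# man-in-the-middle on the host cannot splice its own key into the channel.
assert client_checks_binding(enclave_pub, attestation_doc)
assert not client_checks_binding(secrets.token_bytes(32), attestation_doc)
```

Because the attestation document itself is signed by the enclave hardware, a client that verifies the document and then checks this binding knows the key it is about to use belongs to the attested code, not to the host.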
This combination of full verifiability and true end-to-end encryption is what sets Caution apart from other confidential compute solutions.
Caution is currently available in alpha access for teams testing and deploying reproducible enclaves. Learn more at alpha.caution.co.
We are developing EnclaveOS for broader attestation hardware support and superior isolation beyond AWS Nitro. Here’s what’s coming in 2026:
We invite developers building and operating verifiable compute to join our open Community space on Matrix to ask questions, share ideas, and help us shape the future of verifiable compute.
Attestation without reproducible builds is still a black box because there’s no way to prove that the code in the enclave matches your source. Existing TEE solutions also rely on a single hardware root of trust like TDX, SEV, or Nitro, creating a single point of failure.
Two key innovations solve these problems: reproducible builds, which prove the enclave image matches its auditable source, and multi-platform attestation, which removes the single hardware root of trust.
Caution automates this end to end, making verifiable enclaves dramatically easier to deploy.
Secure enclaves are currently very underutilized, and most deployments today amount to security theater: a single sysadmin retains the power to control or modify them at any time. Enclaves can isolate code at runtime and prove that the code has not changed, but not what that code actually is. The attestation engines supported by most enclave platforms prove that the software running inside an enclave hashes to a specific value called a PCR, but unless that PCR can be independently reproduced from source code, there is no way to confirm what code the enclave is truly running.
Additionally, all enclave solutions today rely on only one type of attestation by a single vendor, exposing their systems to insider, supply chain, and side channel risks during periods when that one engine has a known flaw, which happens from time to time.
This is the core gap in today’s confidential compute stack. Isolation and attestation without reproducibility and platform diversity is insufficient for high security applications.
Caution solves this problem by providing a cloud hosting platform which leverages EnclaveOS at its core, and offers verifiable compute across multiple enclave platforms, each with attestation by at least two different methods.
This post walks through why this matters, why current tooling falls short, and how Caution makes verifiable compute practical.
Deploying verifiable workloads should be straightforward. In practice, it’s a nightmare.
The tooling is fragmented and incomplete. Each enclave platform (AWS Nitro, Intel TDX, AMD SEV) has its own SDK, its own attestation format, and its own deployment quirks, and most builds are non-deterministic, which blocks useful verification. There’s no unified abstraction, so teams end up building custom integrations from scratch and are forced to pick a single platform to support.
Reproducibility is an afterthought. Most enclave deployments can’t actually prove what code is running. Attestations give you a hash, but if you can’t reproduce that hash from source code, you’re trusting whoever built the binary. This is a major shortcoming that essentially all enclave players are exposed to today.
It requires expensive specialists. Companies end up having to hire 3+ security engineers at $300k+ to build and maintain custom enclave infrastructure that avoids trusting a single engineer. Even then, the result is usually a brittle system that’s hard to audit and painful to update.
Vendor lock-in is the norm. Once you’ve built your deployment pipeline for one cloud’s native offering, migrating to another is often a rewrite.
The result: high security verifiable compute remains inaccessible to most teams, and even well-resourced organisations struggle to get it right.
Caution is the generalized verifiable compute platform. It solves the core problems that make truly verifiable compute inaccessible today.
For the developer, Caution provides a single, consistent, and easy way to deploy to Trusted Execution Environments. It turns months of custom infrastructure work and expensive security engineering into one git-driven unified workflow that runs in minutes.
Caution builds a reproducible enclave image, provisions infrastructure, and exposes application and attestation endpoints. Once an enclave is live, anyone can verify exactly what is running inside. Cryptographic proof becomes part of the runtime itself rather than an afterthought, and verifiable compute becomes something real teams can adopt without friction. This is an industry first: no other platform provides end-to-end reproducible verification out of the box.
Caution is designed for portability across clouds and hardware, which strengthens trust minimization by removing dependence on any single vendor. It supports Nitro enclaves today for early access, but Intel TDX, AMD SEV-SNP, and TPM 2.0 attestation are coming in 2026 building on our modular design and abstractions.
This is a fundamentally better way to run software: it replaces blind trust with verifiable proof drawn from multiple types of enclave and attestation hardware simultaneously, removing major single points of failure.
Our mission is to make verifiable compute as much of an industry standard for network services as TLS. The underlying technology primitives were invented for DRM, to remove freedom from users; we intend to flip the script and use the same primitives to upgrade freedom, security, and privacy for everyone.
Three commands take you from code to verified enclave: initialize your app, deploy to an enclave, and verify the deployment matches your source.
Configure your project for Caution by running:
# Initialize an app and generate a Procfile
$ caution init
This creates a .caution/deployment.json config file and generates a Procfile that defines how your application is built and run.
You may need to adjust the generated Procfile depending on your project structure. The Procfile supports the following fields:
| Field | Required | Description |
|---|---|---|
| `build` | Yes | Command to build the application (e.g., `docker build`) |
| `run` | Yes | The binary that will be run inside the enclave |
| `oci-tarball` | No | Path where the OCI image data is exported |
| `source` | No | URL to a source archive (defaults to git origin + latest commit hash) |
| `cpus` | No | Number of CPUs (default: 2) |
| `memory_mb` | No | RAM allocation in MB (default: 512) |
| `metadata` | No | Arbitrary metadata added to the attestation endpoint manifest |
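For illustration, a Procfile using the fields above might look like the following. The commands and paths are placeholders, and the exact syntax may differ from what `caution init` generates for your project:

```
build: docker build -t myapp . && docker save myapp -o image.tar
run: /app/server
oci-tarball: image.tar
cpus: 2
memory_mb: 1024
```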
Push your code to trigger a build and deploy it into a Nitro enclave.
Caution pulls your application, combines it with EnclaveOS to create a reproducible enclave image, provisions infrastructure in your AWS account, and starts the enclave. Build time varies based on application size and compilation requirements.
# Deploy to a Nitro enclave
$ git push caution main
After this command, Caution:
Verify that the deployed enclave matches the source you can audit.
The CLI rebuilds the enclave image locally, sends a challenge to the attestation endpoint and gets a fresh attestation from the running enclave, then compares the resulting hashes and verifies all signatures. A match gives you cryptographic proof of the exact code and configuration running inside the enclave.
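The flow above can be sketched as follows. The function names and the shape of the attestation document are illustrative assumptions, not the Caution API; the real CLI additionally verifies the certificate chain and COSE signature, as shown in the transcript earlier.

```python
import hashlib
import secrets

def verify_attestation(request_attestation, rebuild_pcrs):
    """Sketch of the verify flow: challenge, attest, rebuild, compare.

    `request_attestation` and `rebuild_pcrs` are hypothetical callables
    standing in for the network call and the local reproducible build.
    """
    # 1. Send a fresh random nonce so the attestation cannot be replayed.
    nonce = secrets.token_hex(32)
    doc = request_attestation(nonce)

    # 2. The signed document must echo our nonce back.
    if doc["nonce"] != nonce:
        raise ValueError("stale or replayed attestation")

    # 3. Rebuild the enclave image locally and compare PCR measurements.
    expected = rebuild_pcrs()
    for name, value in expected.items():
        if doc["pcrs"][name] != value:
            raise ValueError(f"{name} mismatch: enclave differs from source")
    return True

# Toy stand-ins modelling an honest enclave and a reproducible local build.
PCRS = {"PCR0": hashlib.sha384(b"image").hexdigest(),
        "PCR1": hashlib.sha384(b"kernel").hexdigest(),
        "PCR2": hashlib.sha384(b"app").hexdigest()}

honest = lambda nonce: {"nonce": nonce, "pcrs": PCRS}
print(verify_attestation(honest, lambda: PCRS))  # prints True
```

The key property is that the expected PCR values come from a build the verifier performs locally, so no party has to be trusted about what the measurements should be.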
# Verify the deployment matches the source
$ caution verify --attestation-url <url>
Attestation without reproducibility is theater. A Nitro attestation tells you the hash of the enclave image, but if you can’t independently produce that same hash from auditable source code, you’re just trusting whoever built the image.
Caution solves this with two verification modes: reproduce (full verification) and PCR (quick verification).
Note: Full verification requires your source code to be available to users. We recommend FOSS licenses for this, but any license works as long as users can access the code.
Reproduce mode is the full verification path and the gold standard. It rebuilds the enclave image from source and compares it against the live attestation, giving you the strongest possible guarantee that the runtime matches the code you can audit.
$ caution verify --attestation-url <url>
During this process, the CLI:
~/.cache/caution/reproductions/local/<unique-id>/eif-stage. If they match, you have cryptographic proof that the running enclave was built from the exact source you audited.
If you’ve already done a reproduce verification (or trust someone who has), you can verify future attestations against known-good PCR values without rebuilding:
$ caution verify --pcrs known-good-pcrs.txt
This is faster but requires you to trust the source of the PCR file. It’s useful for applications that lack public source code, but this means they are also not truly verifiable.
| PCR | Contents |
|---|---|
| PCR0 | Hash of the entire Enclave Image File |
| PCR1 | Hash of the Linux kernel and boot configuration |
| PCR2 | Hash of the application code |
If any of these don’t match expected values, either the code has changed or something is wrong.
Caution’s architecture combines a local CLI, a lightweight control plane, and EnclaveOS inside the enclaves. Each part has a narrow, well-defined role so the path from source code to verified runtime stays simple, auditable, and reproducible. The core components are:
| Component | Purpose |
|---|---|
| CLI | Local tool for managing deployments and verification |
| Gateway | Authentication (FIDO2 passkeys or SSH) and request routing |
| API Backend | Manages state, users, organisations, and orchestrates deployments |
| Enclave Builder | Combines your application with EnclaveOS to produce reproducible enclave images |
| EnclaveOS | Minimal, immutable, deterministic Linux OS that runs inside the enclave |
Together, these components form a reproducible pipeline: your code is built locally or in the builder, combined with EnclaveOS, stored as an enclave image, provisioned into a Nitro enclave, and verified by your local CLI. Every step is transparent and independently auditable.
Under the hood, Caution uses EnclaveOS: a minimal, immutable, deterministic operating system designed for high-security enclave deployments.
It’s in active development, with most new work happening in a variety of other repositories it will later import, but today it supports Nitro Enclaves and is suitable for early-access users; many organisations are already running forks of it in the wild. Even so, it does not yet meet our own standard: we expect to ship our new bootproof engine and nested-VM architecture in Q1 2026, at which point Caution will be suitable for threat models that do not allow completely trusting AWS.
EnclaveOS provides:
Caution serves as the deployment and orchestration layer on top of EnclaveOS. As EnclaveOS adds support for more attestation backends, Caution will enable seamless multi-cloud deployment: same git push workflow, different cloud targets.
You don’t have to trust Caution. Every deployment is verifiable:
Even if Caution’s infrastructure were completely compromised, an attacker couldn’t deploy malicious code without it being detectable via caution verify --url <>.
Caution is 100% freely licensed open source software, not “open core” with paid features hidden behind enterprise tiers. You get the entire platform with nothing held back:
We hope many teams choose our hosted platform or support services so we can continue this work long term, but the software itself is open because the mission is larger than us. Anyone should be able to build, verify, and run secure workloads without artificial barriers.
Our goal is an ecosystem where organisations with sensitive workloads can run enclaves across multiple providers, avoiding walled gardens and reducing single points of failure.
A fully managed service delivering the same functionality as the open source version will be offered in Q1 2026.
Caution is dual-licensed under the GNU Affero General Public License version 3 (GNU AGPLv3) and a commercial licence. The GNU AGPLv3 allows free use, modification, and distribution as long as derivative works remain freely licensed open source software.
For organisations that want to extend a fork of the Caution platform internally with proprietary code, a commercial licence can be purchased upon request to bypass AGPL limitations.
Reach out to us for early access and commercial licensing.
The alpha is open to teams who want to start experimenting with verifiable compute today. The platform already supports:
verification via --attestation-url and --pcrs
Contact us to join the alpha and deploy to a Nitro enclave.
Caution is rapidly evolving toward a fully portable, multi-enclave platform. Here’s what’s coming in early 2026:
What if the software running your systems isn’t what you think? If you had to prove what software is on a system, how would you do it?
Most of today’s technologies are black boxes. From firmware and operating systems to compilers and cloud platforms, opacity is the default. Users can send requests to an API or server, but they cannot verify what software, or whose software, they are really interacting with. The issue impacts organisations internally as well, where system managers can’t verify whether the code they think they deployed is actually what’s running on the server. This is not just a usability issue; it is a systemic design failure, and the result is software stacks riddled with blind spots, where compromise can occur at any stage and remain invisible.
Years of working with high-risk clients and analysing different technologies have led us to realise that the pieces needed for verifiable systems already exist. They remain underutilized because they are misunderstood and difficult to use, a problem we need to solve.
Reproducible builds, secure enclaves, and cryptographic remote attestation each solve parts of the problem. Taken together, they form the building blocks for verifiable compute, which allows software to be verified. Our work is focused on creating the next generation of cloud hosting platform centered around verifiability and elimination of single points of failure present in current market solutions.
Like “zero trust” before it, the term verifiable compute is already being hijacked by marketing teams. Companies throw it around to describe partial solutions, usually just proving a binary hash hasn’t changed. We take a stricter view: verifiable compute means the entire supply chain can be checked. Anything less is not verifiable compute.
The risks of unverifiable systems are not theoretical; they’ve already caused some of the most damaging security incidents of the past decade.
SolarWinds (2020) showed how a compromised software supply chain can cascade globally. Attackers injected malicious code into SolarWinds’ Orion updates, which were then shipped to thousands of companies and U.S. government agencies. Because customers had no way to verify what software they were actually running, the backdoor spread silently through trusted update channels.
This is one of the many breaches which demonstrate that without verifiability across the entire stack, organisations have no reliable way to prove the integrity of the systems they depend on.
Three core technologies make end-to-end software verifiability possible:
Reproducible builds. Reproducible builds force software to be bit-for-bit identical when built from the same source code. They eliminate entire categories of supply chain attacks and would have made incidents like SolarWinds detectable. They allow for integrity verification, without which software is opaque and difficult to verify.
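The core check is simple to state: two independent builds of the same source must produce the same bytes. A toy illustration (the artifact bytes here are placeholders; in practice the digests would come from independent builders compiling a full binary):

```python
import hashlib

def artifact_digest(data: bytes) -> str:
    """Digest of a build artifact. Reproducibility means two independent
    builds of the same source yield the same digest."""
    return hashlib.sha256(data).hexdigest()

# Two independent builders compile the same audited source.
vendor_build = b"\x7fELF...binary bytes..."
independent_build = b"\x7fELF...binary bytes..."

# A SolarWinds-style injected backdoor changes the bytes, so the vendor's
# digest no longer matches what auditors reproduce from source.
tampered_build = vendor_build + b"backdoor"

assert artifact_digest(vendor_build) == artifact_digest(independent_build)
assert artifact_digest(vendor_build) != artifact_digest(tampered_build)
```

The hard part in practice is not the comparison but making builds deterministic in the first place: timestamps, file ordering, and toolchain versions all have to be pinned before two builders can agree bit-for-bit.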
Secure enclaves. Hardware-isolated execution (e.g., IOMMU-backed enclaves) prevents external processes — even privileged ones — from tampering with sensitive workloads. Enclaves give us strong isolation, but isolation alone doesn’t prove what is running.
Remote attestation. Remote hardware attestation (TPM2, Intel TDX, AMD SEV, AWS Nitro, and others) measures the state of a machine and provides cryptographic proof of what software is running. Attestation anchors trust at the hardware layer, but on its own it doesn’t guarantee the software’s provenance or build integrity.
Together, they form the foundation of true verifiable compute: the ability to verify software integrity from the toolchain it’s built with to the hardware it runs on.
Current offerings from the major cloud providers (AWS, Azure, GCP, etc.) are demanding in terms of both expertise and time to set up. They lock users into a single vendor’s ecosystem and force reliance on, and trust in, one type of hardware or firmware. For example, AWS requires implicit trust in its proprietary Nitro Card, a black-box technology that customers cannot independently verify.
New players are building wrappers around enclave and attestation technologies, but most remain focused on narrow use cases such as digital asset wallets or running LLMs. While promising, they provide only surface-level verification, proving that a binary’s hash hasn’t changed without offering full visibility into what is actually running on the server.
In short, no solution on the market today offers full transparency and eliminates single points of failure.
Our team has chosen a no-compromise approach to solving this problem by building a cloud hosting platform, Caution, that:
Is full-source bootstrapped and reproducible.
Is portable across environments: major cloud platforms or bare metal.
Uses multiple hardware attestations.
Uses quorum authentication as a core primitive.
Is fully open source.
Caution is the next generation cloud hosting platform launching in 2026. We believe this marks the beginning of a new era of infrastructure: verifiable, open, and resilient by default.
We’re building Caution in the open. If you’d like to use it, contribute, or partner with us, get in touch.