Privacy Stewards of Ethereum

PSE is a research and development lab with a mission of making cryptography useful for human collaboration. We build open source tooling with things like zero-knowledge proofs, multiparty computation, homomorphic encryption, Ethereum, and more.

# Summon Major Update

### Before

```js
export default function main(a: number, b: number) {
  const plus = a + b
  const gt = a > b
  return [plus, gt]
}

// and separately provide mpcSettings:
const mpcSettings = [
  {
    name: "alice",
    inputs: ["a"],
    outputs: ["main[0]", "main[1]"],
  },
  {
    name: "bob",
    inputs: ["b"],
    outputs: ["main[0]", "main[1]"],
  },
]
```

### After

```js
export default function main(io: Summon.IO) {
  const a = io.input('alice', 'a', summon.number());
  const b = io.input('bob', 'b', summon.number());

  io.outputPublic('plus', a + b);
  io.outputPublic('gt', a > b);
}

// mpcSettings is generated automatically
```

_Shorter, self‑contained, and each output has a real name._

Per-party outputs are also coming, and will fit neatly into this API: `io.output(partyName, outputName, value)`.
Type information is available via [`summon.d.ts`](https://github.com/privacy-scaling-explorations/summon/blob/main/summon.d.ts):

![intellisense demo](/articles/summon-major-update/intellisense-light.webp)

## 2 · Typed Inputs (now with `bool`)

The third argument of `io.input` specifies the type:

![number example](/articles/summon-major-update/number-example-light.webp)

![bool example](/articles/summon-major-update/bool-example-light.webp)

`bool`s now work properly, so you can pass `true`/`false` instead of `1`/`0`. This both improves developer experience and removes unnecessary bits. Output bools are also new, decoding correctly as `true`/`false` (the values you get out of `await session.output()`).

This also sets us up to support arrays/etc. and grow into comprehensive typing à la [zod](https://zod.dev/?id=basic-usage) or [io‑ts](https://github.com/gcanti/io-ts/blob/master/index.md).

## 3 · Public Inputs

Need a single program that adapts to many input sizes/participants? Public inputs let you accept these at **compile time**:

```js
const N = io.inputPublic("N", summon.number())

let votes: boolean[] = []

for (let i = 0; i < N; i++) {
  const vote = io.input(`party${i}`, `vote${i}`, summon.bool())
  votes.push(vote)
}
```

Pass them via CLI:

```bash
summonc program.ts \
  --public-inputs '{ "N": 10 }' \
  --boolify-width 8
```

or the `summon-ts` API:

```js
const { circuit } = summon.compile({
  path: "program.ts",
  boolifyWidth: 8,
  publicInputs: { N: 10 },
  files: { /* ... */ },
})
```

See it in action: **JumboSwap** [circuit](https://github.com/privacy-scaling-explorations/jumboswap/blob/3f81b87/src/circuit/main.ts).

## 4 · Faster Branch Merging

Merging has to occur whenever your program branches on signals:

```js
const value = cond ? x : y
```

Circuits can't evaluate only one side of this like CPUs do, so the Summon compiler has to emit wires for both branches and then merge them together like this:

```js
value = merge(condA, x, condB, y)
// = (condA * x)  +  (condB * y)  // old method
// = (condA * x) XOR (condB * y)  // new method
```

So `+` became `XOR`, which is great because `XOR` is almost free. But why is this allowed? The key is that `condA` and `condB` cannot be true simultaneously. In this example we have `condB == !condA`, but we don't have to rely on that. These conditions are _always_ non-overlapping - there is only ever one "real" branch with `cond == 1`. This means each bit of the addition can never produce a carry, making it equivalent to `XOR`, because `XOR` is 1-bit addition.

This caused some real speedups in our demos:

- [**JumboSwap**](https://mpc.pse.dev/apps/jumboswap): ≈4× faster
- [**Lizard‑Spock**](https://mpc.pse.dev/apps/lizard-spock): ≈20 % faster

## Join Us!

- [Telegram group](https://t.me/+FKnOHTkvmX02ODVl)
- [Discord](https://discord.gg/btXAmwzYJS) (Channel name: 🔮-mpc-framework)
- [GitHub repo](https://github.com/privacy-scaling-explorations/mpc-framework) ⭐️
- [Website](https://mpc.pse.dev)

Thanks for building privacy‑preserving magic with us! 🪄

# Social Recovery SDK: design, implementation and learnings

_Mon, 02 Mar 2026_

submit -> execute interactions with typed contract wrappers. PolicyBuilder provides deterministic policy construction for guardian sets and thresholds, and EIP-712 helpers keep intent hashing/signing consistent between off-chain and on-chain paths. The goal of this layer is to keep wallet integration thin while preserving the protocol guarantees defined in contracts.
### Integration & Example App

The SDK is designed to be integrated directly into existing wallets and dapps, rather than used as a standalone protocol service. A typical integration flow is straightforward: deploy verifier/recovery contracts, configure a guardian policy for each wallet, wire the SDK auth adapters on the client, and expose recovery actions (start, submit proof, execute) in the product UI. This lets teams keep their own wallet UX while delegating recovery-critical enforcement to on-chain policy logic. For implementation details and exact integration steps, follow the project documentation[^5].

To make integration concrete, we also built a local demo dapp: a minimal smart-wallet app with the SDK recovery flow fully wired in. It includes policy setup, guardian-based recovery, and end-to-end execution against a local chain. You can run it locally with Foundry installed, plus Google OAuth configuration for JWT-based authentication (required for zkJWT). For full setup and run instructions, use the dedicated guide[^6].

---

You can find the implementation of the SDK here[^3].

---

## Open questions

While the protocol is production-oriented in scope, a few important open problems remain outside the SDK boundary and are worth calling out explicitly:

1. **Private salt synchronization (owner <-> guardian).** For zkJWT-style guardians, the private salt must be shared in a way that preserves privacy and does not create extra UX friction. The unresolved question is who should generate it and how it should be transferred without forcing users into awkward side channels (for example, manual email exchange). In the current SDK flow, this sync is intentionally left to the wallet owner (manual handoff).

2. **DKIM public-key registry and trustless rotation.** DNSSEC coverage is still incomplete across providers (including major ones), which makes trustless key-rotation handling a real issue for email-based authentication.
Our recommendation is to use self-maintained on-chain registries updated by the domain owner. For long inactivity windows, a fallback migration path to a DAO-supported key registry is a practical safety mechanism.

3. **Cross-chain UX without keystores.** Without shared keystore primitives, users must reconfigure and repeat recovery setup across chains, which degrades UX and increases operational risk. This limitation is out of scope for the current SDK but affects real adoption. We expect proposals like RIP-7728 (L1SLOAD precompile + Keystores) to materially improve this by making cross-chain recovery state more portable.

[^1]: Vitalik Buterin, *Why we need wide adoption of social recovery wallets* (January 2021): https://vitalik.eth.limo/general/2021/01/11/recovery.html
[^2]: ZK Email social recovery docs: https://docs.zk.email/account-recovery/
[^3]: Social Recovery SDK repository: https://github.com/privacy-ethereum/social-recovery-sdk/
[^4]: Specification (SPEC.md): https://github.com/privacy-ethereum/social-recovery-sdk/blob/main/SPEC.md
[^5]: Official documentation: https://privacy-ethereum.github.io/social-recovery-sdk/
[^6]: Example app / integration guide: https://github.com/privacy-ethereum/social-recovery-sdk/tree/main/example

# Pretty Good Payments: Free, Private and Scalable Stablecoin Transactions on Ethereum

_Thu, 26 Feb 2026_

This article is a guest contribution by [aleph_v](https://github.com/aleph-v), an external contributor and developer of Pretty Good Payments. The work was supported by a Cypherpunk Fellowship from Protocol Labs and Web3Privacy Now and a matching grant from EF PSE.

In 1991, Phil Zimmermann released Pretty Good Privacy, a tool that brought practical encryption to everyday email.
It wasn't perfect in every theoretical sense, but it was good enough to meaningfully protect ordinary people, and it changed what privacy meant on the internet. Pretty Good Payments carries that same ambition into finance. It combines zero-knowledge proofs with a novel economic model that turns idle deposits into the fuel that runs the network. Sender, receiver, amount, and transaction graph are all hidden from observers. The whole system settles directly to Ethereum, secured by cryptographic proofs and economic incentives enforced on L1. And it's free — users pay zero fees.

## The Problem

Ethereum users face two persistent frustrations: high transaction costs and zero privacy. When you pay a freelancer in USDC, your employer can see your salary. Your landlord can see your net worth. Anyone who learns a single address of yours can trace your entire financial life — no hack required, just a block explorer. Every transfer is recorded permanently on a public ledger, and anyone can follow the money. For the privilege of this transparency, you pay gas fees that can spike unpredictably.

Existing privacy solutions come with their own costs: high fees, centralization risks, limited token support, or clumsy user experiences. Layer 2 rollups have reduced costs dramatically, but they've done little for privacy. Pretty Good Payments asks a more ambitious question: *"what if there were no fees at all?"*

## What It Feels Like

You tap send, enter the amount, and confirm. Behind the scenes, a zero-knowledge proof is generated in roughly half a second and submitted to a sequencer. The sequencer validates the transaction and returns a preconfirmation — a commitment to include it in the next block — in milliseconds, as fast as the API can respond. To the user, it feels no different from sending a message. The recipient sees the funds arrive moments later. Neither party pays a cent in fees — only the intended amount moves.
The entire transaction stays completely private: sender, receiver, and amount are all hidden. Final settlement on Ethereum follows after a challenge period, during which anyone can prove fraud if they find it. The preconfirmation is backed by the sequencer's staked ETH — submitting a transaction they preconfirmed and then failing to include it means losing their stake.

Whether you're a freelancer receiving payment, a fintech building a payment app, or a DAO distributing grants — this is what "free and instant" looks like in practice: zero fees, sub-second responsiveness, and up to 400 transactions per second across the network.

## How It Works: The Big Picture

Pretty Good Payments is technically classified as a rollup, though it may not match the mental image that word conjures. It settles directly to Ethereum, its transaction data lives on Ethereum, and its security comes from Ethereum, using an open set of sequencers and standard ERC-20 tokens. Think of it as a smart contract system that batches private transactions and posts them to L1, with a fraud proof mechanism that lets anyone keep sequencers honest.

### Privacy

When you deposit tokens into Pretty Good Payments, they're converted into encrypted notes: sealed envelopes containing value. Each note records the token type, the amount, a random blinding factor, and a public key identifying the owner. All of this is hashed together using a cryptographic function called Poseidon, so the on-chain record reveals nothing about what's inside.

When you want to spend a note, you generate a zero-knowledge proof: a mathematical demonstration that you know the private key for that note, that the note exists in the system, and that your transaction balances. The proof reveals none of the underlying details. An observer sees that *a valid transaction happened* but learns nothing about who sent it, who received it, or how much was transferred. This is the same foundational approach used by Zcash, adapted for Ethereum.
### Free Transactions Through Yield

Here's where Pretty Good Payments gets creative. When you deposit tokens, the system puts them to work immediately, routing them into yield-generating vaults like Aave or Compound that earn interest on deposited assets. The yield generated by everyone's deposits is what pays the sequencers, the participants who batch and submit transactions to Ethereum.

In a traditional rollup, sequencers charge you fees. In Pretty Good Payments, sequencers earn from the collective yield pool instead. Their share is proportional to how much blob space they fill with transactions — process more transactions, earn more yield. Priority sequencers who commit to reliable block production earn a 2x multiplier during their exclusive submission windows.

To put concrete numbers on it: a single Ethereum block can carry up to 21 EIP-4844 blobs, each holding roughly 273 private transactions — over 5,700 transactions per block, at a cost of 0.01 to 0.001 cents (not dollars) per transaction. In a recent demo, for example, the team submitted 1,638 transactions for $0.06. Submitting one such batch every block would sustain 126 transactions per second for a year at a total cost of only about $145k (roughly $0.06 per block). To cover that cost using only low-risk DeFi or Ethereum staking yield earning 3% APY, you would need just $4.8 million of TVL, which is a very achievable number for a wide range of applications.

The result: users transact for free, and sequencers get paid. Your deposited funds still belong to you, and you can withdraw them at any time. For most users, the convenience of free private transactions far exceeds the yield they'd otherwise earn, and they might even get yield back if there are enough deposits.

## How Security Works

Pretty Good Payments is a stage two rollup by default, the highest level of rollup decentralization and security.
Its security is grounded entirely in Ethereum and in open participation: anyone can become a sequencer, and anyone can hold sequencers accountable. The system uses an optimistic model — blocks are assumed valid unless proven otherwise. After a sequencer posts a batch, there is a challenge period during which anyone can submit a fraud proof. If no fraud is proven within the window, the block is finalized. If fraud is proven, the block and everything after it are rolled back, and the sequencer's stake is slashed.

### Open Sequencing

Anyone can become a sequencer by staking ETH, and the stake acts as a security bond: submit invalid data and you lose it. To ensure reliable block production, time is divided into epochs. Each epoch has a closed period, where a designated priority sequencer gets an uncontested window to submit, followed by an open period, where any staked sequencer can participate. The open period guarantees that the system keeps producing blocks even if a priority sequencer is offline, keeping the network live and censorship-resistant.

This openness is particularly relevant for businesses. Any company can register as a sequencer and batch transactions on behalf of its own users, with no approval process and no dependency on a third-party sequencer. A business stakes a small amount of ETH, runs the sequencer software, and submits its customers' transactions directly to Ethereum. The entire relationship is between the business, its users, and the Pretty Good Payments smart contracts.

This opens up real use cases across finance and payments. A financial institution could settle zero-coupon bonds through Pretty Good Payments, using private notes to represent positions and settling them on-chain without revealing the counterparties and notional amounts.
A fintech provider could integrate Pretty Good Payments as the payment rail behind a consumer app, giving users instant, free stablecoin transfers with the same feel as any polished payment product, and offering super-fast preconfirmations and free transactions to its own users. Since the system is generic over many types of assets, many kinds of end users can be served within the same privacy set, with all companies contributing to each other's privacy.

### Permissionless Fraud Proofs

After a sequencer submits a batch of transactions, it enters a challenge period. During this window, anyone can examine the data and prove fraud if they find it. Being a challenger just means running the software.

The system guards against every way a sequencer could cheat: wrong deposit values, double-spends, invalid proofs, incorrect state updates. If a challenger proves any of these, the sequencer loses their entire stake, with half going to the challenger as a reward and the other half burned. The fraudulent batch and everything submitted after it are rolled back. This makes fraud strictly unprofitable, because even if a sequencer colludes with a challenger, the burned half guarantees a net loss. The system further simplifies the process with single-round fraud proofs: the challenger submits all evidence in one transaction and the contract verifies everything on the spot.

## Ethereum-Keyed Accounts: Programmable Privacy

Any note in Pretty Good Payments can be owned by an Ethereum address — including a smart contract. Spending requires an on-chain approval through the system's Transaction Registry, so private value can move in and out of on-chain logic atomically. The authorizing address is visible, but the destination, amount, and transaction graph remain hidden. Users can always transfer from an Ethereum-keyed note into a standard ZK-keyed note, making the visible portion a brief, deliberate moment in a longer private flow.
This means that you can maintain full programmable composability with Ethereum while also using private transfers of arbitrary assets. This opens up programmable privacy across Ethereum:

- **Trustless private swaps**: A DEX contract matches orders on-chain but settles through private note transfers — the counterparty and amount stay hidden.
- **Treasury management**: A multisig or DAO treasury governed by on-chain votes distributes payroll or grants where the total outflow is visible but individual recipients and amounts are not.
- **Escrow and subscriptions**: Escrow logic is transparent and auditable on L1, while the actual movement of value to end recipients happens privately.

## The Road Ahead

Pretty Good Payments sits at a new point in the rollup landscape, one where privacy and zero fees aren't competing luxuries but complementary features of the same economic model. Yield-funded sequencing aligns incentives in a way that traditional fee markets can't: users want free transactions, sequencers want deposits that generate yield, and both get what they want.

What comes next is getting Pretty Good Payments into the hands of builders: fintechs looking for a payment rail, institutions that need private settlement, and developers who want to build on programmable privacy. If you or your company is interested in these features, please reach out to the team for help and advice on deployment and integration.

---

*Pretty Good Payments is open source.
Explore the architecture, run a sequencer, or start building on top of it at the [project repository](https://github.com/aleph-v/pretty_good_payments).*

# PSE February 2026 Newsletter

_Thu, 19 Feb 2026_

# Revocation in zkID: Merkle Tree-based Approaches

_Tue, 03 Feb 2026_

### Time Complexity

| Operation             | LeanIMT                           | SMT                                     |
| --------------------- | --------------------------------- | --------------------------------------- |
| Insert                | $O(\log n)$                       | $O(\log n)$                             |
| Generate Merkle Proof | $O(\log n)$                       | $O(\log n)$                             |
| Verify Merkle Proof   | $O(\log n)$                       | $O(\log n)$                             |
| Search                | $O(n)$; $O(1)$ if index is cached | $O(\log n)$; guided traversal using key |

### LeanIMT

$n$: Number of leaves in the tree.

- The time complexity of Insert, Generate Merkle Proof and Verify Merkle Proof is $O(\log n)$.
- The search operation has time complexity $O(n)$, since it requires traversing all the leaves to find the desired one, as the structure does not support direct leaf access.
- If the index of the leaf is saved (cached), since its position does not change when new values are added, the search cost improves from $O(n)$ to $O(1)$.

### SMT

$n$: Maximum supported leaves in the tree.

- The time complexity of Insert, Generate Merkle Proof and Verify Merkle Proof is $O(\log n)$.
- The search operation remains $O(\log n)$, as the key guides direct traversal from the root to the corresponding leaf.

### Space Complexity

| Structure | Complexity | Notation                         |
| --------- | ---------- | -------------------------------- |
| LeanIMT   | $O(n)$     | $n$ = number of leaves           |
| SMT       | $O(n)$     | $n$ = number of non-empty leaves |

### LeanIMT

$n$: Number of leaves in the tree. The space complexity of the LeanIMT is $O(n)$.

### SMT

$n$: Number of non-empty leaves in the tree. The space complexity of the SMT is $O(n)$.
### Practical Depth Ranges

- LeanIMT: In practice, LeanIMTs are used with relatively small tree depths, typically between 10 and 32 levels, since they grow dynamically with the number of inserted elements. This corresponds to handling between roughly $2^{10}$ (≈ 1k) and $2^{32}$ (≈ 4B) leaves, which is sufficient for most incremental use cases.
- SMT: SMTs usually have fixed depths between 64 and 256, depending on the key space size. This depth is constant regardless of how many keys are actually used. A tree with depth $d$ can theoretically address up to $2^{d}$ unique keys (leaves). In practice, only a very small subset of this key space is populated, and the SMT efficiently stores only non-default nodes.

### Complexity Analysis Insights

LeanIMTs scale with the amount of actual data, while SMTs scale with the size of the key space. SMTs efficiently store key/value maps using direct access paths and sparse storage, whereas LeanIMTs rely on linear search when the index of an element is not cached.

## Benchmarks

LeanIMT and SMT implementations were benchmarked across Circom circuits, browser environments, Node.js environments, and Solidity smart contracts to evaluate their overall efficiency and practicality. These benchmarks are designed to reflect how each data structure would typically be used in real-world revocation systems.

The benchmarks shown here focus on the most representative measurements for each environment:

- Circuits: non-linear constraints and Zero-Knowledge (ZK) artifact sizes.
- Browser: tree recreation, Merkle proof generation, and ZK proof generation.
- Node.js: tree insertions and ZK proof verification.
- Smart contracts: tree insertions, ZK proof verification, and deployment costs.

Many additional benchmarks are available and can be generated using the repository.

### Running the benchmarks

To run the benchmarks, follow the instructions in the repository README files.
The repository provides scripts and commands for running circuit, browser, Node.js, and smart contract benchmarks.

- GitHub repository: https://github.com/vplasencia/vc-revocation-benchmarks
- Browser App: https://vc-revocation-benchmarks.vercel.app

### System Specifications and Software Environment

All the benchmarks were run in an environment with these properties:

**System Specifications**

- Computer: MacBook Pro
- Chip: Apple M2 Pro
- Memory (RAM): 16 GB
- Operating System: macOS Sequoia version 15.6.1

**Software environment**

- Node.js version: 23.10.0
- Circom compiler version: 2.2.2
- Snarkjs version: 0.7.5

### Circuit Benchmarks

The circuits are written using the Circom DSL.

### Number of Non-linear Constraints

Fewer constraints indicate a more efficient circuit.

![LeanIMT vs SMT: Number of Constraints Across Tree Depth](/articles/revocation-in-zkid-merkle-tree-based-approaches/constraints-absolute.webp)

![Relative Efficiency: Ratio of Constraints](/articles/revocation-in-zkid-merkle-tree-based-approaches/constraints-ratio.webp)

### Proof Size

The Groth16 proof size is always fixed, independent of circuit size: ~805 bytes (in JSON format).

### ZK Artifact Size

#### WASM File Size

- SMT ≈ 2.2–2.3 MB
- LeanIMT ≈ 1.8 MB

#### ZKEY File Size

From tree depth 2 to 32:

- SMT: grows from 1.4 MB to 5.9 MB.
- LeanIMT: grows from 280 kB to 4.4 MB.

#### Verification Key JSON File Size

Constant at ~2.9 kB for both.

### Circuit Insights

- At every tree depth, SMT has between ~1560 and ~1710 more constraints than LeanIMT.
- While the absolute difference grows slowly with depth, the relative ratio decreases: for small depths, SMT can have over 4x more constraints, but by depth 32 it is only about 1.22x more.
- This shows that LeanIMT provides a large relative improvement for small trees, while still maintaining an absolute advantage for larger trees.
- The WASM artifacts remain almost constant in size for both (1.8 MB vs 2.3 MB).
- LeanIMT produces smaller proving keys across all depths.
- Both exhibit near-linear ZKEY growth as tree depth increases, but LeanIMT remains consistently lighter, up to 25–30% smaller than SMT.
- LeanIMT is more efficient overall since both its WASM and ZKEY files are lighter.

### Browser Benchmarks

### Recreate Tree

| Members | SMT Time | LeanIMT Time |
| ------- | -------- | ------------ |
| 128     | 232.3 ms | 17.6 ms      |
| 512     | 1 s      | 78.7 ms      |
| 1024    | 2.1 s    | 139.2 ms     |
| 2048    | 4.6 s    | 273.0 ms     |

### LeanIMT Performance

| Members   | Recreate Tree Time |
| --------- | ------------------ |
| 10 000    | 1.2 s              |
| 100 000   | 11.9 s             |
| 1 000 000 | 1 m 59.9 s         |

![LeanIMT vs SMT: Recreate Tree Browser](/articles/revocation-in-zkid-merkle-tree-based-approaches/recreate-tree-browser.webp)

### 128–2048 credentials

- Generate Merkle Proof (both): ~5 ms
- Non-Membership ZK Proofs (SMT): 446–590 ms
- Membership ZK Proofs (LeanIMT): 337–433 ms

### LeanIMT 10K–1M credentials

- Generate Merkle Proof: ~5 ms
- Membership ZK Proofs: 382–477 ms

### Browser Insights

- LeanIMT remains faster across all operations in the browser.
- Since the LeanIMT typically handles around 100,000 credentials or more, while the SMT manages only hundreds or thousands, the SMT can appear faster when recreating the tree due to its smaller size.
- Both SMT and LeanIMT are practical for browser-based applications.
- LeanIMT is ideal for systems that need frequent updates and fast client-side proof generation, while SMT is better when non-membership proofs are required.

### Node.js Benchmarks

![LeanIMT vs SMT: Insert Function Node.js](/articles/revocation-in-zkid-merkle-tree-based-approaches/insert-node.webp)

- ZK Proof verification is constant at roughly 9 ms across all depths.

### Node.js Insights

- LeanIMT insertions are significantly faster than SMT insertions at the same number of leaves.
However, since a LeanIMT in a real-world system can handle 1 million or more credentials, while an SMT typically manages only hundreds or thousands of revoked credentials, SMT insertions can appear faster in practice due to the smaller tree size.

- ZK proof verification times are similar, since they depend primarily on the proof system rather than the data structure.

### Smart Contract Benchmarks

- Insert 100 leaves into the tree.
- Verify one ZK proof with a tree of 10 leaves.

### Function Gas Costs

| Operation       | LeanIMT (gas) | SMT (gas) |
| --------------- | ------------- | --------- |
| Insert          | 181,006       | 1,006,644 |
| Verify ZK Proof | 224,832       | 224,944   |

### Deployment Costs

| Contract       | LeanIMT (avg gas)        | SMT (avg gas)        |
| -------------- | ------------------------ | -------------------- |
| Bench Contract | 461,276                  | 436,824              |
| Verifier       | 350,296                  | 349,864              |
| Library        | 3,695,103 _(PoseidonT3)_ | 1,698,525 _(SmtLib)_ |

### Smart Contract Insights

- LeanIMT is significantly cheaper for insert operations, reducing gas consumption by around 82% compared to SMT.
- Verification costs are identical between both, as they share the same Groth16 verifier logic.
- Both implementations remain practical for mainnet deployment.
- LeanIMT costs a bit more to deploy due to the Poseidon library.

## Takeaways

- Membership proofs are faster to compute than non-membership proofs.
- Overall, LeanIMT offers better performance for membership proofs and client-side use cases, while SMT remains the preferred option when non-membership proofs are required.
- Since revoked credentials are usually far fewer than valid ones, non-membership proofs over a list of revoked credentials are often more efficient in practice.

## Future Directions

This work highlights the following directions that could be explored to further improve privacy-preserving revocation systems with Merkle tree-based solutions:

- Further optimization of SMT implementations across different programming languages.
- The design of a new data structure to support more efficient non-membership proofs.

## Acknowledgement

I would like to thank Ying Tong, Zoey, Privado ID/Billions (Oleksandr and Dmytro), Kai Otsuki, and the PSE members for all the feedback, insights, ideas, and direction they shared along the way. Their support and thoughtful conversations were incredibly helpful in shaping this work.

[^1]: Sparse Merkle Tree (SMT) paper: https://docs.iden3.io/publications/pdfs/Merkle-Tree.pdf
[^2]: LeanIMT paper: https://zkkit.org/leanimt-paper.pdf
[^3]: Ethereum Privacy: Private Information Retrieval: https://pse.dev/blog/ethereum-privacy-pir
[^4]: Revocation report: https://github.com/decentralized-identity/labs-privacy-preserving-revocation-mechanisms/blob/main/docs/report.md

# Client-Side GPU Acceleration for ZK: A Path to Everyday Ethereum Privacy

_Mon, 26 Jan 2026_

# Measuring the Privacy Experience on Ethereum

_Thu, 08 Jan 2026_

**💬 Qualitative themes referenced:**

- Theme 1: Clarity of privacy scope
- Theme 5: Verification anxiety
- Theme 7: Educational & mental model gaps

**Qualitative hypothesis:** Users believe they understand what is private on-chain, but struggle to accurately identify what is hidden, visible, or still inferable.

**Quantitative results:**

- **Importance of privacy:** **3.3 / 4** (High)
- **Satisfaction with current privacy:** **1.7 / 4** (Low, net dissatisfied)
- **Confidence in current privacy guarantees:** **2.4 / 4** (Moderate)
- **Confidence privacy will remain intact in the future:** **1.9 / 4** (Low)

Despite high experience levels, confidence remains limited.
Users care deeply about privacy, but do not feel secure that they understand or can rely on existing protections. **Interpretation:** This validates the qualitative finding that privacy tools fail to clearly communicate scope. Users are not rejecting privacy, they are uncertain what they are actually getting. ## 2. Motivation: Privacy as Control, Not Secrecy **Quantitative results (free-text + ranking):** Users consistently frame privacy as: - **Control:** choosing what is revealed, to whom, and when - **Freedom:** a digital extension of fundamental rights - **Security hygiene:** protection against scams, extortion, profiling, and physical risk **Top motivations:** 1. Personal safety & security (~60%) 2. Anti-profiling / identity separation (~55%) 3. Asset and balance protection (~50%) **Top perceived risks:** - Targeted attacks and scams - Loss of funds or access - Surveillance by governments or large platforms **Interpretation:** The survey confirms that privacy demand is principled and pragmatic, not ideological or fringe, aligning directly with qualitative insights. ## 3. 
Usage: Widely Tried, Rarely Habitual **Quantitative results:** A clear pattern emerges: - **Active tools** (stealth addresses, mixers, shielded pools) have **high reach (≈70%)** but **low habitual use (≈15–17%)** - **Passive or infrastructure tools** (private mempools, RPCs) have lower reach (~50%) but higher daily usage (~23%) | **Tool Category** | **Reach** | **Habit** | **Usage** | | --- | --- | --- | --- | | **Stealth / One-time Addresses** | **73%** (54 users) | 15% (11 users) | Wide but Sporadic | | **Mixers or Privacy Pools** | **70%** (52 users) | 17% (13 users) | Wide but Sporadic | | **Shielded Pools** | **69%** (51 users) | 17% (13 users) | Wide but Sporadic | | **ZK Identity / Proofs** | 68% (50 users) | 16% (12 users) | Wide but Sporadic | | **Private Mempools / MEV** | 68% (50 users) | **23%** (17 users) | Stickier | | **Private Voting** | 59% (44 users) | 9% (7 users) | Sporadic | | **Private L2s / Rollups** | 57% (42 users) | 13% (10 users) | Moderate | | **Private Relayers** | 54% (40 users) | 9% (7 users) | Sporadic | | **Private / Custom RPCs** | 51% (38 users) | **23%** (17 users) | Niche Stickier | ![image.png](/articles/privacy-experience-report/px-usage.png) **Interpretation:** The moment privacy requires users to leave their normal flow, usage drops sharply. Privacy that runs in the background is more likely to stick. ## 4. 
Technical Friction: Usability Is the Primary Blocker **Quantitative results:** Top blockers: - **Complex or hard to use:** 58% (43 votes) - High gas costs: 32% (24 votes) - Regulatory uncertainty: 31% (23 votes) - Missing in wallet or favorite dapps: 30% (22 votes) Additional signals: - **~86%** of respondents have abandoned a privacy flow at least once - Top reasons: confusion and uncertainty about safety - The most requested feature, **cited by 74% of all users**, is **private sends as the default** in existing wallets **User quotes** - *"I need a switch in my wallet to turn on private mode."* - *"Unclear what it would do... Unsure the tool was safe."* - *"Native wallet support for stealth addresses... making privacy seamless like HTTPS."* **Interpretation:** This strongly confirms the qualitative finding that privacy UX is fragile. Abandonment is the norm, not the exception. --- ## 5. Trade-offs: Time Is Acceptable, Workflow Breakage Is Not Users are willing to trade **time**, but not **cost or workflow disruption**: - 69% will wait a few minutes longer - 53% accept 2–3 extra screens - Only ~25% accept higher fees or network switching | **Trade-off** | **Votes** | **Percentage** | **Verdict** | | --- | --- | --- | --- | | **Wait up to a few minutes longer** | **47** | **69.1%** | **😍 Highly Acceptable** | | 2–3 extra confirmations or screens | 36 | 52.9% | 🙂 Acceptable | | Using a separate wallet or account | 26 | 38.2% | 😐 Borderline | | Signing multiple transactions | 18 | 26.5% | ☹️ High Friction | | Switching to a different network or L2 | 18 | 26.5% | ☹️ High Friction | | Pay up to ~5% more in fees | 17 | 25.0% | ☹️ High Friction | | Lower compatibility with some dapps | 8 | 11.8% | 😡 Unacceptable | | Withdrawal delays up to 1 day | 8 | 11.8% | 😡 Unacceptable | | Fixed deposit/withdrawal sizes | 6 | 8.8% | 😡 Unacceptable | **Interpretation:** Privacy can be slower, but it must remain affordable and integrated into existing workflows. --- ## 6. 
Trust & Verification: Don’t Trust, Verify (But Make It Legible) **Quantitative results:** Top trust signals (See appendix 3 for the full table): - Open-source code (61%) - Clear docs and architecture explanations (~46%) - Transaction previews/simulations (52%) Social proof and branding rank significantly lower. | **Top 5 Trust Factors** | **Votes** | **Percentage** | | --- | --- | --- | | **Open-source code** | **45** | **60.8%** | | Clear docs on how it works | 34 | 45.9% | | Transparent architecture | 34 | 45.9% | | Clear explanation of trade-offs | 34 | 45.9% | | Referrals or endorsements from trusted people | 20 | 27.0% | **Interpretation:** Users want verification, but only if it is surfaced in human-readable ways. Trust must be designed into the interface, not outsourced to reputation. --- ## 7. Confidence vs Capability: Why Adoption Fails Even for Experts ![Confidence vs capability in crypto privacy.png](/articles/privacy-experience-report/px-confidence.png) To synthesize multiple themes, we segmented users by **confidence** and **capability**: - **High confidence / High capability (36.5%):** still abandon flows ~70% of the time - **High confidence / Low capability (31.1%):** optimism without practice - **Low confidence / High capability (13.5%):** *highest abandonment (~90%)* and lowest trust - **Low confidence / Low capability (18.9%)** **Interpretation:** Technical skill does not eliminate anxiety. The most capable users are often the most cautious, reinforcing that adoption failure is driven by unclear guarantees and weak mental models, not lack of education. 
--- ## Synthesis: What Quantitative + Qualitative Together Tell Us Across both research phases, the same story repeats: - Privacy demand is high and principled - Satisfaction and confidence are low - Friction and ambiguity dominate behavior - Defaults, previews, and clarity matter more than cryptographic sophistication alone --- ## Actionable Recommendations (Community Invitation) This research points to challenges that cannot be solved by any single team or protocol. We see these recommendations as **invitations to the Ethereum community** (wallet teams, protocol developers, UX designers, researchers, and educators) to collaborate on improving the privacy experience together. 1. **Wallet-native privacy primitives** - Private send / receive as first-class wallet features - Shared UX patterns for privacy presets (e.g., Quick Private, Maximum Privacy) 2. **Standardized privacy scope visualization** - Community-aligned patterns for showing what is hidden, visible, and inferable - Reusable components for transaction privacy previews and confirmations 3. **Confidence-building UX patterns** - Sandbox or test modes for private transactions - Progressive disclosure designs that support anxious power users 4. **Shared trust and verification standards** - Common transparency checklists (open source, architecture, simulations) - Consistent terminology across wallets and dapps 5. **Passive-by-default privacy infrastructure** - MEV protection, private RPCs, and address hygiene as defaults - Tooling that works without requiring behavior changes 6. **Context-aware privacy design** - Prioritize financial and identity-linked actions first - Explore programmable privacy for compliance-friendly use cases We invite builders and researchers to experiment with these directions, share learnings, and help define what “usable privacy” should look like on Ethereum. In short: 1. **Make privacy native:** integrate private sends and protections directly into wallets 2. 
**Expose privacy scope clearly:** show what is hidden, visible, and inferable 3. **Add previews and confirmations:** reduce verification anxiety 4. **Design for anxious power users:** sandbox modes, progressive disclosure, safe defaults 5. **Standardize trust signals:** consistent transparency across tools 6. **Favor passive protections:** private infrastructure as default 7. **Respect context:** prioritize financial and identity-linked actions --- ## Conclusion This quantitative survey validates and strengthens our previously published qualitative findings. Privacy on Ethereum is not failing because users do not care, but because **the experience does not meet the psychological requirements of trust, clarity, and confidence**. Solving privacy adoption is therefore not only a cryptographic challenge, but a **design and UX challenge**. Addressing this gap is the fastest path to making privacy usable, trusted, and ultimately normal on Ethereum. --- ## Appendix ### Qualitative Themes and How We Tested Them Quantitatively | **Theme** | **Hypothesis (based on interview insights)** | **Purpose of Testing It** | | --- | --- | --- | | **1. Clarity of privacy scope** | Users believe they know what’s private on-chain, but in reality, most cannot accurately identify what data is visible or protected. | Measure how well people actually understand privacy boundaries. | | **2. Trust transparency** | Users place more trust in *brands* (e.g., Flashbots, Railgun) than in *verifiable proofs* (e.g., audits or on-chain evidence). | Quantify how trust forms: social vs technical trust. | | **3. Technical friction** | Complex setup and multi-step flows (extra wallets, ENS, signatures) are major barriers, even for technically skilled users. | Assess how much friction affects adoption intent. | | **4. Usability and defaults** | Users assume privacy settings are enabled by default, and rarely change them manually. | Confirm the behavioral gap between assumption and action. | | **5. 
Verification anxiety** | Lack of clear confirmations or test environments causes users to hesitate or limit fund size in private transactions. | Measure confidence thresholds and safety needs. | | **6. Context-specific motivation** | Privacy priorities depend on context: users care most in financial or identity-linked actions, less in social or governance contexts. | Rank contexts by perceived privacy need. | | **7. Educational & mental model gaps** | Even experienced users struggle to explain how privacy tech (e.g., stealth addresses, shielded pools) actually works. | Measure comprehension and need for educational support. | ### Blockers when using on-chain privacy tools | **Blocker** | **Votes** | **Percentage** | | --- | --- | --- | | **Complex or hard to use** | **43** | **58%** | | High gas or transaction costs | 24 | 32% | | Regulatory or policy uncertainty | 23 | 31% | | Missing in my wallet or favorite dapps | 22 | 30% | | Too few people use it / Privacy feels weak | 20 | 27% | | Hard to verify what is private | 15 | 20% | | Security concerns (e.g. 
fear of hacks) | 13 | 17% | | Doesn’t work the same across apps or chains | 11 | 15% | | My activity does not feel sensitive enough | 8 | 10% | | I want onchain reputation (airdrops, social graph) | 7 | 9% | | Social stigma or reputation risk | 4 | 5% | | Other | 6 | 8% | ### Trust factors when using on-chain privacy tools | **Trust Factor** | **Votes** | **Percentage** | | --- | --- | --- | | **Open-source code** | **45** | **60.8%** | | Clear docs on how it works | 34 | 45.9% | | Transparent architecture | 34 | 45.9% | | Clear explanation of trade-offs | 34 | 45.9% | | Referrals or endorsements from trusted people | 20 | 27.0% | | Logical in-app UX with info and context | 19 | 25.7% | | Widely used in production and time-tested | 18 | 24.3% | | Strong security practices (bug bounties) | 16 | 21.6% | | Independent audits | 16 | 21.6% | | Clear website/language explaining function | 16 | 21.6% | | Reproducible builds | 14 | 18.9% | | Clear changelogs | 14 | 18.9% | | Verifiable releases and contracts | 14 | 18.9% | | Transparent team identity and track record | 10 | 13.5% | | Verified listings on reputable directories | 1 | 1.4% | ### Open data We are sharing the full, anonymized survey responses so anyone can analyze the results and draw their own conclusions. The CSV includes all questions and raw answers. Feel free to remix, chart, or join with your own data. - [Download the dataset](/articles/privacy-experience-report/px-user-survey-2025-results.csv). 
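To make remixing the open dataset easier, here is a minimal Python sketch for tallying one multiple-choice question from the CSV. The column name `top_blocker` is hypothetical; inspect the actual header row first, since the published file's column names may differ:

```python
import csv
from collections import Counter

def tally(rows, column):
    """Count answer frequencies for one survey question.
    `rows` is an iterable of dicts (e.g. from csv.DictReader);
    blank answers are skipped."""
    return Counter(r[column] for r in rows if r.get(column))

# Hypothetical usage against the published CSV (column name assumed):
# with open("px-user-survey-2025-results.csv", newline="") as f:
#     print(tally(csv.DictReader(f), "top_blocker").most_common(5))
```

From there it is a short step to charting the counts or joining them with your own data.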
]]> privacy user experience privacy experience <![CDATA[do not pass "Go"]]> https://pse.dev/blog/tlsnotary-do-not-pass-go https://pse.dev/blog/tlsnotary-do-not-pass-go Thu, 01 Jan 2026 00:00:00 GMT <![CDATA[PSE December 2025 Newsletter]]> https://pse.dev/blog/pse-december-2025 https://pse.dev/blog/pse-december-2025 Thu, 18 Dec 2025 00:00:00 GMT newsletter <![CDATA[PSE November 2025 Newsletter]]> https://pse.dev/blog/pse-november-2025 https://pse.dev/blog/pse-november-2025 Thu, 11 Dec 2025 00:00:00 GMT MPT equivalence. Intended for indexers and light clients: smaller UBT roots are advantageous for db-size-sensitive PIR and bandwidth/overhead-sensitive light clients - [Geth/Reth instances behind .onion service](https://github.com/CPerezz/torpc/pull/2#issuecomment-3491141977), handling `eth_sendRawTx` originating locally (forward to Tor network) or received from behind .onion service (forward to ethp2p), building to Torpc by Carlos - Spun up and [Nodejs-tested](https://voltrevo.github.io/tor-hazae41/) Snowflake Tor proxy instance for websocket-based access to Tor from wallets/frontends - Integrate Echalot Tor lib into Viem.js, with Metri wallet being the first willing adopter - [Investi](https://hackmd.io/@alizk/BJwFha2agl)gated [PIR](https://hackmd.io/@keewoolee/Skvu0BDRle) (private information retrieval) schemes and began early experimentation - Contributed to Kohaku on [Helios](https://github.com/ethereum/kohaku-commons/pull/19) and provided support and follow-up on [TEE-ORAM](https://hackmd.io/@tkmct/BywaGeY2le) integration]]> newsletter <![CDATA[Why Users Don't Use Privacy (Yet): Insights from On-Chain Privacy Experience]]> https://pse.dev/blog/privacy-experience-report https://pse.dev/blog/privacy-experience-report Tue, 02 Dec 2025 00:00:00 GMT "I thought shielded would mean my vote would always be private… weird that I had to hover to see details." 
![Snapshot UI](/articles/privacy-experience-report/snapshot-UI1.webp) Snapshot UI > "There are so many leaks if I'm using Alchemy… what is the point?" ![Privacy Pool Github](/articles/privacy-experience-report/Privacy-Pool-Github_1.webp) Privacy Pools Github **Design implication:** → Tools need **explicit, contextual privacy indicators** (e.g., _"Your address is hidden until reveal phase"_) and **plain-language explanations** of privacy boundaries. ### **Pattern 2: Lack of Trust Transparency** _Behavior: Trust decisions were driven by brand reputation, not by verifiable or visible assurances._ - Users "trusted" Flashbots or Railgun because they'd heard of them, not because the interface provided proof. - Even technically advanced users questioned how much custody or data the service retained. **Quotes:** > "I trusted Shutter because the personal risk is low and I've heard of them, not because the UI proved anything." > "I've heard of Railgun before, so I'd trust it a little bit more" > "If the last release was three months ago and not many stars, I don't feel confident." ![Railgun Github](/articles/privacy-experience-report/Railgun-Github_1.webp) Railgun Github > "Only you and Fluidkey can see all your transactions… Fluidkey team? Operator? What does that mean?" ![Fluidkey UI](/articles/privacy-experience-report/Fluidkey-UI_1.webp) Fluidkey UI **Design implication:** → Build **visible trust cues** (audits, social proof, age of project) and integrate **verifiable trust mechanisms** like on-chain proofs or audit links. ### **Pattern 3: Overly Technical Setup and Cognitive Overload** _Behavior: Participants found setup flows fragmented, verbose, or opaque, especially when required to buy ENS, deploy tokens, or manage RPCs._ - Even power users noted "a ton of clicks and signatures" with little feedback on what each did. - Non-technical users struggled to understand why new wallets, seeds, or denominations were needed. 
**Quotes:** > "There were a ton of clicks and signatures, I didn't even know what I was agreeing to." > "Why do I need to buy an ENS just to test?" ![Snapshot UI](/articles/privacy-experience-report/Snapshot UI_2.webp) Snapshot UI ![Snapshot UI](/articles/privacy-experience-report/Snapshot UI_3.webp) Snapshot UI > "I would never trust online generated seed, that's the basic of crypto security." ![Privacy Pool UI](/articles/privacy-experience-report/Privacy Pool UI.webp) Privacy Pools UI **Design implication:** → Simplify setup with **guided onboarding**, **progressive disclosure**, and **test modes** for safe experimentation. ### **Pattern 4: Usability Frictions: Defaults, Navigation, and Feedback** _Behavior: Users struggled with hidden controls, unclear defaults, and missing confirmations._ - Privacy options were buried ("tiny text in the Voting tab"). - Defaults often undermined privacy ("Any" = public). - Feedback was fleeting or unclear ("confirmation disappears too fast"). **Quotes:** > "Defaults matter, it should default to shielded." > "Where are the privacy controls? It's just this tiny text." ![Snapshot/Shutter UI](/articles/privacy-experience-report/Snapshot_Shutter UI_1.webp) Snapshot/Shutter UI > "If it's private by default, that's perfect. I shouldn't have to think about it." ![Flashbot UI](/articles/privacy-experience-report/Flashbot UI_1.webp) Flashbot UI **Design implication:** → Adopt **privacy-by-default**, ensure **clear visual status indicators**, and maintain **persistent confirmation messages** for key actions. ### **Pattern 5: Verification Anxiety and Fear of Loss** _Behavior: Users feared doing irreversible or unverified actions (e.g., sending funds or proofs without visible confirmation)._ - Several wanted test modes or dry runs before risking real funds. - Even confident users double-checked contract addresses or waited to see funds reappear. **Quotes:** > "There's no testing mode. I wouldn't send 1 ETH through something untested." 
![Flashbot UI](/articles/privacy-experience-report/Flashbot UI_2.webp) Flashbot UI > "I want to see the contract before confirming the transaction." ![Etherscan of Privacy Pool tx](/articles/privacy-experience-report/Etherscan of Privacy Pool tx.webp) Etherscan of Privacy Pools tx ![Privacy Pool contract on Etherscan](/articles/privacy-experience-report/Privacy Pool contract on Etherscan.webp) Privacy Pools contract on Etherscan > "I wouldn't download something random, even on this machine." ![Railgun UI](/articles/privacy-experience-report/Railgun UI_1.webp) Railgun UI **Design implication:** → Provide **sandbox or test networks**, **verifiable confirmations**, and **transaction visibility before finalization**. ### **Pattern 6: Context-Specific Privacy Motivation** _Behavior: Motivation to use privacy tools varied by context._ - Some wanted privacy for governance (voting), others only for large transfers or identity separation. - "Compliant privacy" was seen by technical users as "not real privacy." **Quotes:** > "Compliant privacy is like giving up, it's not really privacy at all." > "For large fund transfers I'd plan ahead, so waiting isn't a big issue." ![Privacy Pool UI](/articles/privacy-experience-report/Privacy Pool UI_2.webp) Privacy Pools UI **Design implication:** → Offer **flexible privacy levels** and **context-aware defaults** that adapt to intent (e.g., governance vs payments). ### **Pattern 7: Educational Gaps and Mental Model Mismatches** _Behavior: Even advanced users struggled to articulate how features like stealth addresses, shielded voting, or relayers work._ - Ambiguous labels ("Power user," "Shielded," "ASP") created anxiety or alienation. - Users appreciated inline explanations and step-by-step guidance. **Quotes:** > "A normal user probably doesn't know what stealth addresses are, even I'm not sure I could define it." 
![Fluidkey UI](/articles/privacy-experience-report/Fluidkey UI_3.webp) Fluidkey UI > "'Power user' makes me feel like maybe I'm not technical enough." ![Fluidkey UI](/articles/privacy-experience-report/Fluidkey UI_4.webp) Fluidkey UI **Design implication:** → Use **layered education** (simple upfront, expandable detail), avoid jargon, and provide **interactive onboarding tours**. ### **Pattern 8: Desired Qualities in Privacy Tools** _Behavior: Across all interviews, users consistently valued:_ - **Transparency:** showing what's happening - **Control:** ability to verify and customize - **Safety nets:** test modes, confirmations, and clear recovery paths - **Reputation & longevity:** older, audited, widely used projects feel safer **Quotes:** > "Anything that makes me feel a little bit more safe is important, like links to audits, social proof." ![Fluidkey UI](/articles/privacy-experience-report/Fluidkey UI_5.webp) Fluidkey UI > "Older apps that have been around longer feel safer." **Design implication:** → Frame privacy as _trustable infrastructure_ , emphasizing stability, safety, and proof over abstraction. ## Map to pain points vs opportunities and provide design suggestions _Summarize the themes we identified as pain points and opportunities_ | Theme | Core Pain Point | Design Opportunity | | ----------------------------- | ------------------------------- | ----------------------------------------- | | 1. Clarity of privacy scope | Users can't tell what's private | Add visible privacy indicators | | 2. Trust verification | Users rely on brand, not proof | Include audits and on-chain verifiability | | 3. Technical friction | Setup is complex | Simplify and guide onboarding | | 4. Default behaviors | Wrong defaults expose users | Privacy-by-default UI | | 5. Fear of loss | Lack of testing or visibility | Provide test mode and confirmations | | 6. Varying privacy motivation | Context-dependent needs | Offer adaptive privacy modes | | 7. 
Education & communication | Jargon-heavy UX | Layered explanations, plain language | ## **Call to Action: Shaping the Future of On-Chain Privacy** This research is an open invitation to the ecosystem. We hope designers, developers, researchers, and privacy advocates can collaborate in addressing the challenges uncovered here. **Contribute to Future Work:** 1. **Identify Technical Challenges** Many user pain points appear UX-related but are rooted in deep technical limitations. Building verifiable privacy, safe testing, and seamless defaults requires cryptographic innovation, infrastructure evolution, and better developer tooling. 2. **Expand Quantitative Understanding** Complement this qualitative study with large-scale quantitative analysis (We're actively collecting responses at Devconnect! Fill out [the survey here](https://pad.ethereum.org/form/#/2/form/view/IFZv0NuHEXd-eqIBh0o+C88F9V6+WVcBGKEb1d2LJcE/) for us to better understand your perspective on privacy tools). Measure and prioritize privacy needs, attitudes, and usage barriers across user segments, such as technical vs. non-technical and high vs. low privacy motivation, to guide where investment will have the most impact. 3. **Prototype and Share Solutions** Pilot "privacy-by-default" interfaces, testnet-safe flows, and verifiable trust cues. Publish learnings openly to accelerate shared progress. 4. **Build an Open Privacy UX Community** If you're a designer, developer, or researcher passionate about privacy experience, contribute ideas, case studies, or experiments. Together, we can make privacy a _default expectation_, not an afterthought. 5. **Broaden Role and Feature Coverage** This study focused on specific user roles and product features, for instance DAO managers in governance tools or deposit flows in privacy wallets. Future research should explore the full ecosystem of participants and functionalities to provide a more holistic view of the Privacy Experience (PX) across contexts. 
]]> privacy user experience privacy experience <![CDATA[State of Private Voting 2026]]> https://pse.dev/blog/state-of-private-voting-2026 https://pse.dev/blog/state-of-private-voting-2026 Wed, 12 Nov 2025 00:00:00 GMT
## Inside the Report Private voting is a critical component of decentralized governance, ensuring that participants can cast their votes without fear of coercion or retribution. As the landscape of decentralized autonomous organizations (DAOs) and on-chain elections continues to evolve, the need for robust private voting mechanisms becomes increasingly apparent. The [Shutter Network](https://www.shutter.network/) and PSE teams have prepared a report to provide an overview of the current state of private voting protocols on Ethereum, examining various solutions, their strengths and weaknesses, and recommendations for future development. The report covers the following areas: - The need for private voting - The challenges of private voting - In-depth analysis of private voting protocols, including: - [Freedom Tool](https://freedomtool.org/) - [MACI V3](https://maci.pse.dev/) - [Semaphore V4](https://semaphore.pse.dev/) - [Shutter - Shielded Voting](https://www.shutter.network/shielded-voting) - [SIV](https://siv.org/) - [Incendia](https://incendia.tech/) - [DAVINCI](https://davinci.vote/) - [Aragon/Aztec](https://research.aragon.org/nouns.html) - [Cicada](https://github.com/a16z/cicada) - [Enclave](https://www.enclave.gg/) - [Kite](https://arxiv.org/pdf/2501.05626) - [Shutter - Permanent Shielded Voting](https://www.shutter.network/shielded-voting) - Recommendations - Future work The following is a summary table comparing the different private voting protocols based on the defined properties. You can find more details and information in the full report. We have since published an update with new properties: "Quantum Resistance" and "Open Source", which you can read about in the full PDF. ![Private voting protocols comparison table](/articles/state-of-private-voting-2026/table.webp) We hope you find this report useful and informative as we continue to explore and develop private voting solutions for decentralized governance. 
We believe the community is doing an amazing job pushing the boundaries of this particular field, and we look forward to seeing how these protocols evolve. Reach out with your feedback, questions, or suggestions for future work in the comments on Twitter / X via [@zkMACI](https://twitter.com/zkMACI) or [@ShutterNetwork](https://x.com/ShutterNetwork). Privacy is normal. ]]>
privacy governance voting DAOs MACI
<![CDATA[PSE Retreat Synthesis Report]]> https://pse.dev/blog/pse-retreat-synthesis-report https://pse.dev/blog/pse-retreat-synthesis-report Wed, 08 Oct 2025 00:00:00 GMT retreat privacy ethereum <![CDATA[Thank You Sam, Welcome Andy and Igor]]> https://pse.dev/blog/a-thank-you https://pse.dev/blog/a-thank-you Wed, 01 Oct 2025 00:00:00 GMT PSE privacy Ethereum leadership <![CDATA[Constant-Depth NTT for FHE-Based Private Proof Delegation]]> https://pse.dev/blog/const-depth-ntt-for-fhe-based-ppd https://pse.dev/blog/const-depth-ntt-for-fhe-based-ppd Thu, 25 Sep 2025 00:00:00 GMT **Security note.** Using a smaller field (32‑bit) changes soundness margins for RS‑based protocols (code distance, rate, soundness error). Our focus here is *kernel* benchmarking; end‑to‑end security must be re‑established at the protocol layer (e.g., by adjusting evaluation domain sizes/rounds). We flag this so readers don’t conflate kernel timing with final system security. ## 3.5 Constant‑depth NTT layout We implement the NTT in a constant-depth, 2D layout. The goal is to keep multiplicative depth fixed (2–3) regardless of input size; we do this with 2D blocking plus plaintext twiddles and depth‑1 sub‑transforms: 1. Sub-transforms: Split the input into smaller subsequences, run NTTs on each at depth-1 (all in parallel). 2. Twiddle & merge (fused): apply plaintext twiddles and run the small group NTTs in a single pass (1 multiplication layer). This way, the whole transform costs one multiplication layer per recursion level (a fixed, predetermined depth) instead of $\log n$. All index movement is handled via 2D packing and plaintext matrix multiplications; our implementation performs no ciphertext rotations and no keyswitches (counts = 0). ## 3.6 NTT Sizes and Batching - Field: prime $p = 2^{32} - 2^{20} + 1 = 4,293,918,721$. - Ring: dimension $2^{14}=16,384$ (BFV). - Circuit source: witnesses from Circom (Semaphore-v4, zk-Twitter), ported to this field. 
- Witness size: from ~1k values up to ~2 million (zk-Twitter). We pack field elements into ciphertext slots. The packing size is chosen near $\sqrt{M}$ for witness length $M$, rounded to a power of two. Inputs are then padded so the ciphertext count is also a power of two. This keeps the matrix shape balanced for the 2D NTT. For a witness of length $M$, we use $\text{lanes} \approx 2^{\lfloor \log_2 \sqrt{M} \rfloor}$ per ciphertext and #CT $= \lceil M/\text{lanes} \rceil$, pad #CT to a power of two, and run an NTT of length #CT. Cost model: this constant‑depth layout uses $O(d\cdot n^{1+1/d})$ ct–pt multiplies (e.g., $O(n^{1.5})$ at $d=2$), trading multiplies for reduced depth (vs. $O(n\log n)$ at $\log n$ depth). ## 3.7 Metrics & Instrumentation - Reported metrics: wall‑clock runtime, ct–pt multiplies, ct–ct additions. - Rotations/keyswitches: not used by this kernel (see §3.5), so we omit those columns. - Noise budget: we did not report a before/after delta for this kernel; adding this is straightforward and left as future work. - Sanity checks: each run decrypts and compares against a plaintext NTT to confirm correctness (excluded from timing). ## 3.8 Hardware & run controls * **Machine:** Intel Xeon Platinum 8375C (Ice Lake, AVX‑512), 1 socket, 8 cores/16 threads (SMT=2), base 2.90 GHz; L3 54 MiB; 128 GiB RAM. Appendix A (Reproducibility) lists full toolchain, parameters, and the exact cargo command used to run these benchmarks. --- This setup lets us answer the narrow question the paper left open—in the exact field and ring parameters we now target: **what does a constant‑depth NTT actually cost** (depth, op counts, milliseconds) when you run it the way an FHE‑evaluated *Ligero* prover would. # 4. Results **Headline:** 1.94 s (5.6k @ depth=3), 4.50 s (22k @ depth=4), 121.1 s (2.25M @ depth=4). *Lower-bound kernel timings.* In an end-to-end FHE-SNARK, NTTs are evaluated at a higher ciphertext modulus (i.e., more levels), so wall-clock will be modestly higher. 
Reading the tables: lower time is better; counts shown are ct–pt multiplies and ct–ct additions; rotations/keyswitches are zero in this kernel. | Witness size | Best depth | Time (s) | Throughput | | -----------: | ---------: | -------: | ---------: | | 5,570 | 3 | 1.94 | ~2.86k elems/s | | 22,280 | 4 | 4.50 | ~4.95k elems/s | | 2,250,280 | 4 | 121.11 | ~18.6k elems/s | We measure three witness scales—from **\~5.6k** up to **\~2.25M** entries—spanning the tweaked Semaphore v4, its original-sized variant, and a zk‑Twitter–scale input. ### 4.1 Semaphore v4 (tweaked to 32‑bit field) Witness entries: **5,570** | depth | time (s) | ct–pt multiplies | ct–ct additions | | ----: | ---------: | ---------: | --------: | | 1 | 11.4158 | 16,384 | 16,256 | | 2 | 2.6045 | 3,072 | 2,816 | | 3 | **1.9449** | **2,048** | **1,664** | | 4 | 2.7946 | 2,816 | 2,304 | | 5 | 2.5411 | 2,048 | 1,408 | * **Best:** depth **3** → **1.94 s** (\~**2.86k elems/s**, \~**0.35 ms/elem**). * **Speedup vs depth‑1:** \~**5.9×**. * **Note:** past depth‑3, overhead outweighs the smaller op counts. ### 4.2 Semaphore v4 (original‑size, same field) Witness entries: **22,280** | depth | time (s) | ct–pt multiplies | ct–ct additions | | ----: | ---------: | ---------: | --------: | | 1 | 45.4127 | 65,536 | 65,280 | | 2 | 6.4632 | 8,192 | 7,680 | | 3 | 5.3701 | 6,144 | 5,376 | | 4 | **4.4982** | **4,096** | **3,072** | | 5 | 6.6192 | 6,144 | 4,864 | * **Best:** depth **4** → **4.50 s** (\~**4.95k elems/s**, \~**0.20 ms/elem**). * **Speedup vs depth‑1:** \~**10.1×**. * **Observation:** as size grows, the sweet spot shifts from **3 → 4**. 
### 4.3 zk‑Twitter–scale (similar witness size on 32‑bit field) Witness entries: **2,250,280** | depth | time (s) | ct–pt multiplies | ct–ct additions | | ----: | -----------: | ----------: | ----------: | | 1 | 11,729.7076 | 16,777,216 | 16,773,120 | | 2 | 387.0503 | 524,288 | 516,096 | | 3 | 160.5035 | 196,608 | 184,320 | | 4 | **121.1053** | **131,072** | **114,688** | | 5 | 133.9219 | 131,072 | 110,592 | * **Best:** depth **4** → **121.11 s** (\~**18.6k elems/s**, \~**53.8 µs/elem**). * **Speedup vs depth‑1:** \~**97×**. * **Note:** depth‑5 trims ops slightly but adds memory traffic and twiddle‑load overhead; beyond depth‑4 that overhead outweighs the saved multiplies, so **depth‑4** wins. ### 4.4 Takeaways * **Constant‑depth works.** Depth‑1 (naïve matrix NTT) is impractical at scale; depth **3–4** is **5–97× faster** across our sizes. * **Size decides the sweet spot.** Small (\~5.6k) prefers **3**; medium/large (22k–2.25M) prefers **4**. * **Cost shifts to data movement.** After the sweet spot, runtime flattens even as op counts drop—overheads (layout, scheduling, memory, twiddle loads) dominate. * **Feasible at real scale.** With **depth‑4**, a single Ice Lake socket processes **\~2.25M** field elements in **\~2 minutes**. # 5. Discussion & Conclusion **What we showed.** Constant‑depth NTT over ciphertexts is **practical** at real scales in the FHE‑SNARK (Ligero/RS) setting. On a single Intel Xeon Platinum 8375C (Ice Lake, AVX‑512) socket and a 32‑bit field: * Depth‑1 (naïve matrix) is a non‑starter at scale. * Depth **3–4** delivers **5×–97×** speedups and keeps depth bounded. * The **sweet spot shifts with size**: \~5.6k entries → **depth‑3**; ≥22k up to \~2.25M → **depth‑4**. **What this means for builders.** * **NTT isn’t the blocker.** With a constant‑depth layout, the transform fits inside typical BFV depth budgets and runs in minutes even at \~2.25M elements. 
* **Optimize for data movement.** Once depth is capped, runtime flattens as op counts fall—**memory traffic and scheduling** take over. Co‑design your **packing** (near‑square), **stride sets**, and **batch shape** with upstream/downstream steps.
* **Pick depth first, then tune.** Start at **depth‑3** (small/medium) or **depth‑4** (large), then adjust packing and ring parameters for your throughput/memory envelope.

**On the 32‑bit field.** We ported the circuits to $p=4{,}293{,}918{,}721$ to exercise the kernel. That choice is fine for NTT benchmarking, but **protocol soundness** in RS/Ligero must be re‑established for smaller moduli (e.g., domain size/rounds). See §3.2 “Rationale: prime choice” for how this prime satisfies NTT root requirements, aligns with OpenFHE packing, and how to lift to a \~50‑bit effective modulus via CRT or a 64‑bit NTT‑friendly prime. For production fields:

* Use **CRT** across several 32‑bit primes, or
* Switch to a **64‑bit prime** (e.g., Goldilocks) and expect roughly linear cost growth in ct–pt multiplies (constants depend on HEXL paths).

**Limits of this work.**

* **Kernel only.** We did not measure the full FHE‑SNARK pipeline.
* **Metrics coverage.** We reported time and ct–pt/ct–ct counts. Rotations/keyswitches are not used by this kernel (counts = 0), and we have not yet added a simple before/after noise‑budget delta.
* **One machine profile.** Results are single‑socket Ice Lake; microarchitecture changes will shift constants.

**Where to push next.**

* **R1CS modulus/porting:** R1CS circuits over a smaller prime field are non‑standard; existing BN254/BLS12‑based gadgets don't carry over as‑is. Re‑audit soundness and constraints under the new modulus (e.g., range checks, hash/curve gadgets), and update any protocol‑level parameters accordingly.
* **Witness extension under HE:** End‑to‑end proving requires RS witness extension executed under HE; we did not explore this here.
Tooling is currently sparse—build generators that perform extension, packing/padding, and correctness checks under HE to integrate with the NTT kernel.
* **Hardware:** explore GPU offload for rotations/KS; widen AVX‑512 utilization.
* **End‑to‑end:** plug this NTT into an E2E prover under FHE, re‑tune RS parameters for target soundness, and report wall‑clock and communication together.

**Bottom line.** The FHE‑SNARK paper left constant‑depth NTT unmeasured. We filled that gap with a concrete implementation and numbers across **\~1k → \~2.25M** elements. With **depth‑3–4**, NTT is **depth‑stable and fast enough**; the next wins will come from **layout and bandwidth** (rotations, if introduced in future variants), not the butterfly.

---

# Appendix A. Reproducibility

- **Repo:** https://github.com/tkmct/fhe-snark
- **HE libs:** OpenFHE v1.2.4 (shared libs on system); Intel HEXL v1.2.5 enabled at OpenFHE build time. If relevant, also record exact commit hashes and build flags.
- **CPU:** Intel Xeon Platinum 8375C (Ice Lake), x86_64, 1 socket, 8 cores/16 threads (SMT=2), base 2.90 GHz; caches: L1d 384 KiB (8×), L1i 256 KiB (8×), L2 10 MiB (8×), L3 54 MiB (1×); NUMA nodes: 1; AVX‑512 supported; virtualization: KVM.
- **Memory:** 128GiB ]]>
FHE ZKP
<![CDATA[ZK-Kit: Cultivating the Garden of ProgCrypto]]> https://pse.dev/blog/zk-kit-cultivating-the-garden-of-progcrypto https://pse.dev/blog/zk-kit-cultivating-the-garden-of-progcrypto Tue, 23 Sep 2025 00:00:00 GMT zero-knowledge ZK-Kit
<![CDATA[PSE September 2025 Newsletter]]> https://pse.dev/blog/pse-september-2025-newsletter https://pse.dev/blog/pse-september-2025-newsletter Tue, 16 Sep 2025 00:00:00 GMT newsletter
<![CDATA[PSE Roadmap: 2025 and Beyond]]> https://pse.dev/blog/pse-roadmap-2025 https://pse.dev/blog/pse-roadmap-2025 Fri, 12 Sep 2025 00:00:00 GMT Ethereum Privacy Roadmap Zero-Knowledge Decentralization FHE MPC TEE
<![CDATA[PSE August 2025 newsletter]]> https://pse.dev/blog/pse-august-2025-newsletter https://pse.dev/blog/pse-august-2025-newsletter Fri, 08 Aug 2025 00:00:00 GMT zero-knowledge cryptography client-side-proving fully-homomorphic-encryption decentralized-identity zk-governance privacy TLSNotary MACI Semaphore ZK-Kit zkid PlasmaFold vOPRF mpc pod2 mopro research development
<![CDATA[The case for privacy in DAO voting]]> https://pse.dev/blog/the-case-for-privacy-in-dao-voting https://pse.dev/blog/the-case-for-privacy-in-dao-voting Thu, 07 Aug 2025 00:00:00 GMT privacy governance DAOs MACI zkSNARKs
<![CDATA[Metal MSM v2: Exploring MSM Acceleration on Apple GPUs]]> https://pse.dev/blog/mopro-metal-msm-v2 https://pse.dev/blog/mopro-metal-msm-v2 Mon, 28 Jul 2025 00:00:00 GMT

| Scheme \ Input size (times in ms) | 2^12 | 2^14 | 2^16 | 2^18 | 2^20 | 2^22 | 2^24 |
| :--- | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
| Arkworks v0.4.x (CPU, Baseline) | 6 | 19 | 69 | 245 | 942 | 3,319 | 14,061 |
| Metal MSM v0.1.0 (GPU) | 143 (-23.8x) | 273 (-14.4x) | 1,730 (-25.1x) | 10,277 (-41.9x) | 41,019 (-43.5x) | 555,877 (-167.5x) | N/A |
| Metal MSM v0.2.0 (GPU) | 134 (-22.3x) | 124 (-6.5x) | 253 (-3.7x) | 678 (-2.8x) | 1,702 (-1.8x) | 5,390 (-1.6x) | 22,241 (-1.6x) |
| ICME WebGPU MSM (GPU) | N/A | N/A | 2,719 (-39.4x) | 5,418 (-22.1x) | 17,475 (-18.6x) | N/A | N/A |
| ICICLE-Metal v3.8.0 (GPU) | 59 (-9.8x) | 54 (-2.8x) | 89 (-1.3x) | 149 (+1.6x) | 421 (+2.2x) | 1,288 (+2.6x) | 4,945 (+2.8x) |
| ElusAegis' Metal MSM (GPU) | 58 (-9.7x) | 69 (-3.6x) | 100 (-1.4x) | 207 (+1.2x) | 646 (+1.5x) | 2,457 (+1.4x) | 11,353 (+1.2x) |
| ElusAegis' Metal MSM (CPU+GPU) | 13 (-2.2x) | 19 (-1.0x) | 53 (+1.3x) | 126 (+1.9x) | 436 (+2.2x) | 1,636 (+2.0x) | 9,199 (+1.5x) |

> Negative values indicate slower performance relative to the CPU baseline. The performance gap narrows for larger inputs.

Notes:

- For ICME WebGPU MSM, input size 2^12 causes M3 chip machines to crash; sizes not listed on the project's GitHub page are shown as "N/A".
- For Metal MSM v0.1.0, the 2^24 benchmark was abandoned due to excessive runtime.

While Metal MSM v2 isn't faster than CPUs across all hardware configurations, its open-source nature, competitive performance relative to other GPU implementations, and ongoing improvements position it well for continued advancement.

## Profiling Insights

Profiling on an M1 Pro MacBook provides detailed insights into the improvements from v1 to v2:

| metric | v1 | v2 | gain |
|---|---|---|---|
| end-to-end latency | 10.3 s | **0.42 s** | **24x** |
| GPU occupancy | 32 % | **76 %** | +44 pp |
| CPU share | 19 % | **<3 %** | –16 pp |
| peak VRAM | 1.6 GB | **220 MB** | –7.3× |

These metrics highlight the effectiveness of v2's optimizations:

- **Latency Reduction:** A 24-fold decrease in computation time for 2^20 inputs.
- **Improved GPU Utilization:** Occupancy increased from 32% to 76%, indicating better use of GPU resources.
- **Reduced CPU Dependency:** CPU share dropped below 3%, allowing the GPU to handle most of the workload.
- **Lower Memory Footprint:** Peak VRAM usage decreased from 1.6 GB to 220 MB, a 7.3-fold reduction.

Profiling also identified buffer reading throughput as a primary bottleneck in v1, which v2 mitigates through better workload distribution and sparse matrix techniques. See detailed profiling reports: [v1 Profiling Report](https://hackmd.io/@yaroslav-ya/rJkpqc_Nke) and [v2 Profiling Report](https://hackmd.io/@yaroslav-ya/HyFA7XAQll).
## Comparison to Other Implementations

Metal MSM v2 is tailored for Apple's Metal API, setting it apart from other GPU-accelerated MSM implementations:

- **Derei and Koh's WebGPU MSM on BLS12**: Designed for WebGPU, this implementation targets browser-based environments and may not fully leverage Apple-specific hardware optimizations.
- **ICME labs WebGPU MSM on BN254**: Adapted from Derei and Koh's WebGPU work for the BN254 curve, it is ~10x slower than Metal MSM v2 for inputs from 2^16 to 2^20 on M3 MacBook Air.
- **cuZK**: A CUDA-based implementation for NVIDIA GPUs, operating on a different hardware ecosystem and using different algorithmic approaches.

Metal MSM v2's use of sparse matrices and dynamic workgroup sizing provides advantages on Apple hardware, particularly for large input sizes. While direct benchmark comparisons are limited, internal reports suggest that v2 achieves performance on par with or better than other WebGPU/Metal MSM implementations at medium scales.

It's worth noting that the state-of-the-art Metal MSM implementation is [Ingonyama's ICICLE-Metal](https://medium.com/@ingonyama/icicle-goes-metal-v3-6-163fa7bbfa44) (since ICICLE v3.6). Readers can try it by following:

- [ICICLE Rust MSM example](https://github.com/ingonyama-zk/icicle/tree/main/examples/rust/msm)
- [Experimental BN254 Metal benchmark](https://github.com/moven0831/icicle/tree/bn254-metal-benchmark)

Another highlight is [ElusAegis' Metal MSM implementation](https://github.com/ElusAegis/metal-msm-gpu-acceleration) for BN254, which was forked from version 1 of Metal MSM. To the best of our knowledge, his pure GPU implementation further improves the allocation and algorithmic structure to add more parallelism, resulting in **2x** faster performance compared to Metal MSM v2.
Moreover, by integrating this GPU implementation with optimized MSM on the CPU side from the [halo2curves](https://github.com/privacy-scaling-explorations/halo2curves) library, he developed a hybrid approach that splits MSM tasks between CPU and GPU and then aggregates the results. This strategy achieves an additional **30–40%** speedup over a CPU-only implementation. This represents an encouraging result for GPU acceleration in pairing-based ZK systems and suggests a promising direction for Metal MSM v3.

## Future Work

The Metal MSM team has outlined several exciting directions for future development:

- **SIMD Refactoring:** Enhance SIMD utilization and memory coalescing to further boost performance.
- **Advanced Hybrid Approach:** Integrate with Arkworks 0.5 for a more sophisticated CPU-GPU hybrid strategy.
- **Android Support**: Port kernels to Vulkan compute/WebGPU on Android, targeting Qualcomm Adreno (e.g., Adreno 7xx series) and ARM Mali (e.g., G77/G78/G710) GPUs.
- **Cross-Platform Support:** Explore WebGPU compatibility to enable broader platform support.
- **Dependency Updates:** Transition to newer versions of [objc2](https://github.com/madsmtm/objc2) and [objc2-metal](https://crates.io/crates/objc2-metal), and Metal 4 to leverage the latest [MTLTensor features](https://developer.apple.com/videos/play/wwdc2025/262/), enabling multi-dimensional data to be passed to the GPU.

Beyond these technical improvements, we are also interested in:

- **Exploration of PQ proving schemes:** With the limited acceleration achievable from pairing-based proving schemes, we're motivated to explore PQ-safe proving schemes that have strong adoption potential over the next 3–5 years. These schemes, such as lattice-based proofs, involve extensive linear algebra operations that can benefit from GPUs' parallel computing capabilities.
- **Crypto Math Library for GPU:** Develop comprehensive libraries for cryptographic computations across multiple GPU frameworks, including Metal, Vulkan, and WebGPU, to expand the project's overall scope and impact.

## Conclusion

Metal MSM v2 represents a leap forward in accelerating Multi-Scalar Multiplication on Apple GPUs. By addressing the limitations of v1 through sparse matrix techniques, dynamic thread management, and other novel optimization techniques, it achieves substantial performance gains for Apple M-series chips and iPhones. However, two challenges remain:

- First, GPUs excel primarily with large input sizes (typically around 2^26 or larger). Most mobile proving scenarios use smaller circuit sizes, generally ranging from 2^16 to 2^20, which limits the GPU's ability to fully leverage its parallelism. Therefore, optimizing GPU performance for these smaller workloads remains a key area for improvement.
- Second, mobile GPUs inherently possess fewer cores and comparatively lower processing power than their desktop counterparts, constraining achievable performance. This hardware limitation necessitates further research into hybrid approaches and optimization techniques to maximize memory efficiency and power efficiency within the constraints of mobile devices.

Addressing these challenges will require ongoing algorithmic breakthroughs, hardware optimizations, and seamless CPU–GPU integration. Collectively, these efforts pave a clear path for future research and practical advancements that enable the mass adoption of privacy-preserving applications.
## Get Involved

We welcome researchers and developers interested in GPU acceleration, cryptographic computations, or programmable cryptography to join our efforts:

- [GPU Exploration Repository](https://github.com/zkmopro/gpu-acceleration/tree/v0.2.0) (latest version includes Metal MSM v2)
- [Mopro](https://github.com/zkmopro/mopro) (Mobile Proving)

For further inquiries or collaborations, feel free to reach out through the project's GitHub discussions or directly via our [Mopro community on Telegram](https://t.me/zkmopro).

## Special Thanks

We extend our sincere gratitude to [Yaroslav Yashin](https://x.com/yaroslav_ya), Artem Grigor, and [Wei Jie Koh](https://x.com/weijie_eth) for reviewing this post and for their valuable contributions that made it all possible.

[^1]: Bootle, J., & Chiesa, A., & Hu, Y. (2022). "Gemini: elastic SNARKs for diverse environments." IACR Cryptology ePrint Archive, 2022/1400: https://eprint.iacr.org/2022/1400
[^2]: Lu, Y., Wang, L., Yang, P., Jiang, W., Ma, Z. (2023). "cuZK: Accelerating Zero-Knowledge Proof with A Faster Parallel Multi-Scalar Multiplication Algorithm on GPUs." IACR Cryptology ePrint Archive, 2022/1321: https://eprint.iacr.org/2022/1321
[^3]: Wang, H., Liu, W., Hou, K., Feng, W. (2016). "Parallel Transposition of Sparse Data Structures." Proceedings of the 2016 International Conference on Supercomputing (ICS '16): https://synergy.cs.vt.edu/pubs/papers/wang-transposition-ics16.pdf ]]>
metal msm gpu client-side proving zkp
<![CDATA[A trustless and simple 14k+ TPS payment L2 from Plasma and client-side proving]]> https://pse.dev/blog/a-trustless-and-simple-14k-tps-payment-l2-from-plasma-and-client-side-proving https://pse.dev/blog/a-trustless-and-simple-14k-tps-payment-l2-from-plasma-and-client-side-proving Fri, 18 Jul 2025 00:00:00 GMT intmax plasma l2 scaling zero-knowledge proofs validity proofs data availability
<![CDATA[PSE July 2025 Newsletter]]> https://pse.dev/blog/pse-july-2025-newsletter https://pse.dev/blog/pse-july-2025-newsletter Wed, 16 Jul 2025 00:00:00 GMT zk cryptography ethereum privacy scaling folding voprf
<![CDATA[Unboxing iO: Building From Familar Crypto Blocks]]> https://pse.dev/blog/unboxing-io-building-from-familar-crypto-blocks https://pse.dev/blog/unboxing-io-building-from-familar-crypto-blocks Fri, 11 Jul 2025 00:00:00 GMT
<![CDATA[PSE June 2025 newsletter]]> https://pse.dev/blog/pse-june-2025-newsletter https://pse.dev/blog/pse-june-2025-newsletter Tue, 08 Jul 2025 00:00:00 GMT newsletter post quantum cryptography private proof delegation client side proving machina io voprf tlsnotary maci zk-kit mpc framework mopro semaphore
<![CDATA[Under-Constrained Bug in BinaryMerkleRoot Circuit (Fixed in v2.0.0)]]> https://pse.dev/blog/under-constrained-bug-in-binary-merkle-root-circuit-fixed-in-v200 https://pse.dev/blog/under-constrained-bug-in-binary-merkle-root-circuit-fixed-in-v200 Tue, 01 Jul 2025 00:00:00 GMT zero-knowledge ZK-Kit circom Semaphore MACI
<![CDATA[Ethereum Privacy: Private Information Retrieval]]> https://pse.dev/blog/ethereum-privacy-pir https://pse.dev/blog/ethereum-privacy-pir Wed, 18 Jun 2025 00:00:00 GMT

> ℹ️ For the rest of the post, we will focus on Single Server PIR.

### Query Format

For single server PIR, we can categorize schemes based on their query format.

#### Key based

The PIR query is formulated using a unique identifier for the desired data item. The server treats its data store as a key-value database.
This offers the advantage that users do not need to know a numerical index corresponding to their target item, as the scheme handles the key-value mapping directly. However, this approach comes with extra costs for database encoding and the mapping process itself. An example is [Chalamet PIR](https://eprint.iacr.org/2024/092.pdf), which has a [Rust implementation](https://github.com/claucece/chalamet)!

#### Index based

This approach involves preprocessing the database by enumerating all items into an ordered structure, like a flat array. To formulate the query, the client needs to specify the numerical index of the element they want. Index-based PIR schemes are more efficient, but require maintaining and updating the item-to-index mapping whenever the underlying database changes. For frequently changing databases, keeping this map updated can be a challenge, but using an append-only mapping strategy can simplify it.

## Application 1: Private Merkle Proof Generation

Merkle proofs enable verification of existence within a Merkle tree. When combined with zero-knowledge circuits, they allow you to prove that something belongs to a tree without revealing what it is. As an example, if the tree is a group of people, you can prove you're a member without disclosing your identity. [Semaphore](https://docs.semaphore.pse.dev/) leverages this, enabling you to cast a **signal**, which can be a vote or a message, as a provable group member while preserving your anonymity. [World ID](https://world.org/world-id) uses Semaphore to allow you to anonymously and securely verify that you are a [real and unique human](https://world.org/blog/developers/the-worldcoin-protocol).

Group members are stored in a [Lean Incremental Merkle Tree](https://github.com/privacy-scaling-explorations/zk-kit/tree/main/papers/leanimt) (LeanIMT). It's a Poseidon-based append-only binary tree, with 32-byte nodes and leaves.
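As a minimal sketch of the structure (with a stand-in hash in place of Poseidon, and ignoring incremental updates), the rule that keeps the tree "lean" is that a node without a sibling is carried up to the next level unchanged:

```python
def lean_imt_root(leaves, hash2):
    """Root of a LeanIMT-style tree: siblings are hashed pairwise;
    an unpaired node is propagated up as-is (no zero-padding)."""
    level = list(leaves)
    while len(level) > 1:
        nxt = [hash2(level[i], level[i + 1])
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2 == 1:
            nxt.append(level[-1])  # lone right-edge node carried up unchanged
        level = nxt
    return level[0]

# Toy hash so the shape is visible; the real tree uses Poseidon.
h = lambda a, b: (a, b)
print(lean_imt_root([1, 2, 3], h))  # ((1, 2), 3)
```

Because this shape depends only on the number of leaves, the node indices needed for an inclusion proof can be derived from the tree `size` alone, which is what makes the PIR approach below workable.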
World ID has [~13 million members](https://world.org/blog/world/world-id-faqs), and the tree is too large for users to store and update locally. They currently rely on an [indexing service](https://whitepaper.world.org/#verification-process) that retrieves the inclusion proof on behalf of the user.

> 🚨 But this allows the indexer to associate the requester's IP address with their public key

So, in order to [scale Semaphore](https://hackmd.io/@brech1/scale-semaphore) while keeping its privacy guarantees, we should find a way around it.

### Proposed Solution

Users can fetch the tree `root` and `size` without disclosing any information. The LeanIMT has a deterministic structure based on its `size`, so we can compute the indices involved in the Merkle proof for a given leaf (see the [implementation](https://github.com/privacy-scaling-explorations/zk-kit.rust/blob/main/crates/lean-imt/src/stateless.rs)). With the calculated indices, users can then use PIR to fetch them and generate the Merkle proofs locally.

This solution is feasible only if the tree nodes and leaves needed for the Merkle proof can be determined without accessing the full tree data. Otherwise, a workaround would need to be developed, such as running a [Multi-Party Computation](https://mirror.xyz/privacy-scaling-explorations.eth/v_KNOV_NwQwKV0tb81uBS4m-rbs-qJGvCx7WvwP4sDg) with the server to execute a `get_merkle_path_indices(i)` function without the server learning the leaf index. This path would be **highly impractical**, as it is neither straightforward nor efficient.

## Application 2: Private Ethereum State Reads

Ethereum's [World State](https://epf.wiki/#/wiki/EL/data-structures?id=world-state-trie), stored in Merkle-Patricia Tries, maps addresses to Externally Owned Accounts and contract state. Standard RPC calls (`eth_getBalance`, `eth_call`, `eth_getStorageAt`) for tasks like balance checks expose:

- Queried addresses or contract slots.
- Query timing and frequency.
- User's IP and device fingerprint.

This metadata allows the RPC endpoint to profile and potentially deanonymize users. With around [320 million active EOAs](https://etherscan.io/chart/address) as of June 2025, local trie storage is impractical for standard consumer hardware. This becomes even worse after [EIP-7702](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-7702.md), since EOAs can now also store code.

### Proposed Solution

PIR can enable private state reads for balance lookups and contract state reads. For instance, wallets could fetch Ether or token balances and NFTs anonymously. A great working example of this is [sprl.it](https://sprl.it/) for ENS resolution.

> ⚠️ Keep in mind that contract state reads wouldn't be possible in many cases where the contracts are already too big (jump to the benchmarks for specifics), or if we don't want to disclose which contracts we're interested in.

Unlike Semaphore's LeanIMT, Merkle-Patricia Tries lack a deterministic structure based on known parameters, complicating data index computation. There are at least two ways around this:

- **Key-Based PIR**: Schemes like [Chalamet PIR](https://eprint.iacr.org/2024/092.pdf) query by address, treating the trie as a key-value database, which is also [Ethereum's case](https://geth.ethereum.org/docs/fundamentals/databases), avoiding index computation but increasing encoding and usage costs.
- **Index-Based PIR with MPC**: A server-side `address -> index` map is maintained and queried via 2-Party Computation, followed by index-based PIR.

#### What about logs?

[Logs](https://info.etherscan.com/what-is-event-logs/) could be a way around the huge state if we limit the scope in time and the operations we're interested in (+1 for [EIP-7708](https://ethereum-magicians.org/t/eip-7708-eth-transfers-emit-a-log/20034)).
Even though indexing problems don't get easier, building something like a private [EthReceipts](https://github.com/daimo-eth/ethreceipts) can be practical due to the small size of the base data, providing a solution for one of the most common use cases for block explorers. [QuietRPC](https://github.com/oskarth/quietrpc/tree/main) is exploring a private RPC solution based on logs and two-hop PIR.

### Data Verification

PIR guarantees private retrieval but not data correctness, and standard RPC assumes provider honesty. The `eth_getProof` RPC method could be used to verify data, like [light clients do](https://github.com/a16z/helios?tab=readme-ov-file#ethereum), but it would expose the accessed information. State verification with PIR is currently not practical: privately fetching and calculating the indices needed for a Merkle proof is too complex for Ethereum's MPT. This shouldn't discourage us from implementing PIR-based solutions, since privacy is guaranteed, but it's an open question how to offset the server trust.

## Respire

[Respire](https://eprint.iacr.org/2024/1165) is a single-server PIR scheme tailored for databases of small records (~256 bytes). This is a common scenario in blockchain applications that rely on Merkle trees to store their data, since nodes tend to be short hashes. Its security relies on the computational hardness of lattice problems, specifically Ring-LWE. A [deep dive session](https://www.youtube.com/watch?v=Nf4IZ2kTPN4) on Respire is available, presented by Professor David J. Wu.

### Efficiency for Small Records

Respire is optimized for retrieving small records. Retrieving a single 256-byte record from a database of over a million entries requires only about 6 KB of total data exchange, so the communication overhead is minimized, enabling high rates (record size/response size).

### Batch Query Support

Respire supports batch queries, using cuckoo hashing, allowing a client to request multiple items simultaneously.
For Merkle proofs, this means a client can fetch all sibling hashes along a path (e.g., 20 hashes) in one go. Retrieving 10–20 records might cost only about 1.5x the bandwidth of a single record query. However, the computational costs for the database encryption and server answer increase significantly with the batch size.

### No Offline Setup/Hint

Unlike some PIR schemes that require a large "hint" to be pre-downloaded by the client or an expensive one-time offline setup by the server for each client, Respire operates without such a client-side pre-download.

### Implementation

Respire has a [Rust implementation](https://github.com/AMACB/respire) available.

### Mechanics

These are the steps involved in a Respire private information retrieval:

1. **Server Preprocessing**: The server preprocesses the database into a structure ready to perform homomorphic operations. If the database is updated, this process must run again.
2. **Client Public Parameters Generation**: The client generates reusable public parameters (similar to a public key) that the server uses to process the query. They only have to be generated and transmitted once, at the beginning of the client-server connection.
3. **Client Query Generation**: The client constructs an encrypted query vector. Conceptually, this vector has a 1 at the index of the desired item and 0s elsewhere, but the entire vector is encrypted, so the server can't see the position of the 1.
4. **Server Answer Computation**: The server homomorphically applies this encrypted query to the encrypted database and returns the encrypted answer.
5. **Client Decryption**: The client decrypts to obtain the plaintext answer.

## Performance Evaluation

A [benchmarking codebase](https://github.com/brech1/tree-pir) was created for the performance evaluation, with the main goal of testing Respire with LeanIMTs. The trees are generated with the `semaphore-rs` group module, and then flattened.
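Stripped of the encryption, steps 3–4 of the Mechanics section above reduce to selecting a record via an inner product with a one-hot vector. A toy plaintext sketch of that structure (in Respire the query vector is encrypted under Ring-LWE, so the server computes this sum without learning the position of the 1):

```python
def pir_answer(db, query):
    """Server side of steps 3-4: inner product of the (conceptually
    one-hot) query vector with the database records."""
    return sum(q * record for q, record in zip(query, db))

db = [17, 42, 99, 7]
query = [1 if i == 2 else 0 for i in range(len(db))]  # client wants index 2
print(pir_answer(db, query))  # 99
```

A real scheme additionally packs many records into ring elements; the sketch only shows why the server's work is linear in the database size, which is what the benchmarks below measure.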
A group of $2^n$ members (or leaves) means dealing with a $2^{n+1} - 1$ element database. [Criterion](https://github.com/bheisler/criterion.rs) was used as the benchmarking crate; the settings for each benchmark can be explored in the repository. All elements (leaves and nodes) are 32 bytes long.

The benchmarks were executed on an AWS EC2 [r7i.8xlarge](https://aws.amazon.com/ec2/instance-types/#Memory_Optimized) instance:

- **Processor**: 4th Generation Intel Xeon
- **vCPUs**: 32
- **Memory**: 256 GB DDR5
- **Cost**: ~2 USD/hr

All measurements are on a single thread. A much cheaper instance type could be used, but benchmarking a DB of $2^{25}$ elements makes the DB encoding quite memory-intensive. Raw results are published [here](https://docs.google.com/spreadsheets/d/1aWtgHQB3GE7rmz0DwWZLO2zrvnvD1WuNbzSyhygye_M/edit?usp=sharing).

Each section has results for both single and multiple element (or batch) benchmarks:

- **Single Element Request**: Executes a request for a single 32-byte record.
- **Batch Request**: Executes a request of $n-1$ elements for a database of size $2^n$. This is tied to LeanIMTs being binary trees, so we could need up to $n$ siblings and the root to construct the Merkle proofs.

**Database Size** is expressed in number of elements. To keep in mind:

- **$2^{13}$** = 8,192 elements (0.25 MB)
- **$2^{17}$** = 131,072 elements (4 MB)
- **$2^{21}$** = 2,097,152 elements (64 MB)
- **$2^{25}$** = 33,554,432 elements (1 GB)

### Computational

This table displays the execution times of each step of the PIR request.
| Database Size | Setup (ms) | Encode DB (s) | Query (ms) | Answer (s) | Extract (µs) |
| :------------ | ---------: | ------------: | ---------: | ---------: | -----------: |
| **Single** | | | | | |
| $2^{13}$ | 47.185 | 0.038145 | 0.425 | 0.16466 | 24.132 |
| $2^{17}$ | 47.559 | 0.76478 | 0.424 | 0.30939 | 27.503 |
| $2^{21}$ | 47.498 | 15.105 | 0.426 | 1.1226 | 24.803 |
| $2^{25}$ | 47.018 | 278.51 | 0.429 | 9.567 | 29.909 |
| **Batch** | | | | | |
| $2^{13}$ | 49.96 | 2.0136 | 10.299 | 4.8099 | 148.41 |
| $2^{17}$ | 56.888 | 4.822 | 12.179 | 6.0517 | 256.96 |
| $2^{21}$ | 57.191 | 52.777 | 17.339 | 12.544 | 252.55 |
| $2^{25}$ | 56.39 | 1220.73 | 68.88 | 85.54787 | 259 |

**Client Side**

Client side operations (`setup`, `query_generation`, `answer_extraction`) are quite performant on both single and batch requests. Extraction times are sub-ms, the setup is relatively constant with a worst case of 57 ms, and the query can take up to 68 ms on batch requests but is constant at 0.42 ms for single-element ones.

**Server Side**

The server side operations carry the heavy burden. Database encoding times grow significantly with the database size, becoming a real issue for frequently updated databases, especially with large batches on big databases. For single requests, it could take up to 4.5 minutes to encode a database of 33M elements. For batch requests, 20 minutes—but keep in mind that this is heavily impacted by the large batch of 20 elements: since Respire uses [Cuckoo Hashing](https://cs.stanford.edu/~rishig/courses/ref/l13a.pdf) to allow batch requests, larger batches require more redundancy, and so more processing. Smaller batches could have much better results.

Generating the encrypted answer is relatively acceptable, with worst cases of 9.5 s for single requests and 85 s for batch requests.

### Communication

This table displays the sizes of the transmitted elements between client and server.
| Database Size | Public-Params Size (MB) | Query Size (KB) | Answer Size (KB) |
| :------------ | ----------------------: | --------------: | ---------------: |
| **Single** | | | |
| $2^{13}$ | 11.17 | 0.91 | 3.02 |
| $2^{17}$ | 11.17 | 1.89 | 3.01 |
| $2^{21}$ | 11.17 | 5.41 | 3.03 |
| $2^{25}$ | 11.17 | 19.05 | 3.02 |
| **Batch** | | | |
| $2^{13}$ | 11.33 | 23.49 | 12.95 |
| $2^{17}$ | 12.19 | 35.27 | 12.94 |
| $2^{21}$ | 12.19 | 96.63 | 12.97 |
| $2^{25}$ | 12.19 | 753.59 | 14.94 |

**Public Parameters**: ~12 MB, but they only have to be transmitted once.

**Query**: Low for single queries, up to 0.75 MB for a batch of 24 elements.

**Answer**: Constant 3 KB for single queries and a maximum of 15 KB for batch queries.

### Roundtrip

Below is a table displaying the total roundtrip times with and without the client public parameters generation and the database encoding, which can be skipped if the database hasn't been updated.

| Database Size | Roundtrip (s) | Roundtrip + Setup + Encode (s) |
| :------------ | ------------: | -----------------------------: |
| **Single** | | |
| $2^{13}$ | 0.1651 | 0.2504 |
| $2^{17}$ | 0.3098 | 1.1222 |
| $2^{21}$ | 1.1230 | 16.2755 |
| $2^{25}$ | 9.5674 | 288.1244 |
| **Batch** | | |
| $2^{13}$ | 4.8202 | 6.8838 |
| $2^{17}$ | 6.0639 | 10.9428 |
| $2^{21}$ | 12.5613 | 65.3955 |
| $2^{25}$ | 85.6168 | 1306.4031 |

This table gives us a clear view of the overhead of implementing PIR on top of current data retrieval operations.

## Challenges

### Dynamic State

Benchmark results show that database encoding takes significant time and grows rapidly for large datasets. For dynamic data, whether growing Semaphore groups or Ethereum's state, continuously re-encoding a multi-gigabyte database for PIR is impractical.

#### Possible Solution: Periodic Snapshots

In this approach, the server captures the database state at defined intervals.
The interval length will depend on the resources one is willing to spend on computation and on how quickly the database is updated; the data retrieved will always be at least as old as the `db_encoding` time. The snapshot's root hash can be attested on-chain so clients can verify integrity.

For periodic balance checks or group membership proofs, a delay of a few minutes won't interfere. For voting applications, one could simply close voter registration at a cutoff time, and this wouldn't be an issue at all. But this solution does not cover real-time private queries over frequently changing datasets. A possible fallback is replacing PIR with a TEE-based relayer.

### Scalability

Merkle trees with more than $2^{20}$ elements become difficult to manage. Applying PIR to the entire dataset is not possible, so combining this with Merkle forests or focusing on high-value subsets (e.g., EOA balances, contract storage) is necessary.

> ℹ️ There are proposals that already suggest Ethereum nodes will move towards this [partial statelessness](https://ethresear.ch/t/a-local-node-favoring-delta-to-the-scaling-roadmap/22368) paradigm.

### Server-Side Computational Load

PIR servers require significant computational resources, which adds costs for implementors. Incentives might need to be put in place.

### Integrations

End-user applications have to integrate PIR. This comes with an extra cost, since there are not many implementations out there and none with JavaScript/TypeScript or WASM binding libs.

## Alternative Solutions

### Trusted Execution Environments

A Trusted Execution Environment is a secure and isolated environment that runs alongside the main operating system on a machine. TEEs are currently being explored for [Private Proof Delegation](https://pse.dev/en/blog/tee-based-ppd). I highly recommend reading the PPD report, since it details the trust assumptions involved in dealing with them.

TEEs can be used as secure enclaves.
A secure enclave processes user queries, forwards them to backend nodes, and returns responses without exposing the query content to the relay operator.

### Privacy Preserving Routing

Network-level privacy is also available through Tor and [mixing networks](https://en.wikipedia.org/wiki/Mix_network).

- [Tor](https://spec.torproject.org/intro/index.html) is a distributed overlay network designed to anonymize **low-latency** TCP-based applications such as web browsing, secure shell, and instant messaging.
- Mixing networks are routing protocols that create hard-to-trace communications using a chain of proxy servers. [Nym](https://nym.com/docs/network), for instance, packetises and mixes together IP traffic from many users inside the Mixnet.

Note that this only protects against IP-based tracking; it does not hide the query content itself.

### RPC Rotation

Wallets could rotate queries across multiple RPC providers, ensuring no single provider sees the user's full query history. This distributes the data and makes it harder for any one entity to profile the user. Each provider still learns about the specific queries sent to it, and if providers collude, they can reconstruct the user's activity. Adding to this collusion risk, there aren't many reliable RPC providers out there, and fewer still that don't share your data. This is one of the easiest solutions to implement, but it hides neither the query content nor the user's IP; it just makes deanonymization somewhat harder. The equivalent solution for private Merkle proof generation is to store the tree on multiple servers, making it possible to retrieve different IDs from different servers.

### Light Clients and Portal Network

This path only applies to Ethereum state reads. Light clients like [Helios](https://github.com/a16z/helios) and decentralized data networks like the [Portal Network](https://ethportal.net/overview) aim to reduce reliance on centralized RPC providers.
- **Light clients** should be able to request specific state proofs from peers, but implementing peer-to-peer networking for light clients is a difficult task, and they currently rely on centralized RPC providers.
- The **Portal Network** distributes data queries across multiple nodes, but comes with high latency costs.

This would decentralize data access and reduce the risk of centralized logging, but individual peers still learn about the queries they serve.

## Ethereum Privacy Roadmap

There have been some roadmap proposals around improving Ethereum L1 privacy:

- [pcaversaccio's Roadmap](https://hackmd.io/@pcaversaccio/ethereum-privacy-the-road-to-self-sovereignty)
- [Vitalik's Roadmap](https://ethereum-magicians.org/t/a-maximally-simple-l1-privacy-roadmap/23459)

Enhancing privacy is a critical priority for Ethereum's ecosystem. The solutions outlined in this writeup can reduce metadata leakage and provide network-level protections, giving users control over their data. One of the most immediate reasons for taking this direction is security: [data leaks](https://www.coinbase.com/blog/protecting-our-customers-standing-up-to-extortionists) and bad practices have already proven to be [dangerous](https://www.france24.com/en/live-news/20250527-france-foils-new-crypto-kidnapping-plot-arrests-over-20-source). In the mid term, proofs of inclusion could be necessary to [vote in an election](https://docs.semaphore.pse.dev/V2/use-cases/private-voting), and you would disclose your identity when generating them. As adoption increases, the right to privacy should be prioritized.

## Conclusion

Privacy is essential for Ethereum, and PIR offers a path for private reads in and around it, hiding not just who you are but what you look up. Still, practical deployments face some frictions. Re-encoding large trees for every change is too expensive, so the data may be stale, depending on the cadence of periodic snapshots.
Deterministic trees such as LeanIMT let clients compute indices on their own, but arbitrary tries still need either key-based PIR or a simple 2PC for index lookup, both of which add overhead. The computational burden is concentrated on the servers, so RPC providers or dedicated operators must be incentivized through fees or other rewards. Wallets and dApps need accessible WASM libraries to integrate PIR; until those are available, users can fall back on alternatives such as rotating RPC endpoints, using mixnets, or TEE-based relayers. If we're going towards a fully private Ethereum, a possible roadmap for data-access privacy could be:

- **Near term**: Focus on network-level anonymity with Tor, mixnets, and rotating RPCs.
- **Mid term**: Implement and use TEE-based relayers. Combined with light clients, this would enable confidential verification and eliminate provider trust.
- **Long term**: As PIR advances, work on making it easily adoptable by wallets and DApps.

## Resources

- [A Maximally Simple L1 Privacy Roadmap](https://ethereum-magicians.org/t/a-maximally-simple-l1-privacy-roadmap/23459)
- [Call Me By My Name: Simple, Practical Private Information Retrieval for Keyword Queries](https://eprint.iacr.org/2024/092)
- [Ethereum Privacy: The Road to Self-Sovereignty](https://hackmd.io/@pcaversaccio/ethereum-privacy-the-road-to-self-sovereignty)
- [How to Raise the Gas Limit, Part 1: State Growth](https://www.paradigm.xyz/2024/03/how-to-raise-the-gas-limit-1)
- [Private Information Retrieval and Its Applications: An Introduction, Open Problems, Future Directions](https://arxiv.org/abs/2304.14397)
- [Respire: Single-Server Private Information Retrieval with Sublinear Online Time](https://eprint.iacr.org/2024/1165)
- [Scaling Semaphore](https://hackmd.io/@brech1/scale-semaphore)
- [Semaphore Documentation](https://docs.semaphore.pse.dev/)
- [TEE based private proof delegation](https://pse.dev/en/blog/tee-based-ppd)
- [TreePIR: Sublinear-Time and Polylog-Bandwidth
Private Information Retrieval from DDH](https://eprint.iacr.org/2023/204) - [(WIP) A validation on Valid-Only Partial Statelessness](https://hackmd.io/_wVNey49QTmbd0Nm9lrU8A) - [Worldcoin - A New Identity and Financial Network](https://whitepaper.world.org/) ]]> ethereum privacy pir <![CDATA[zkPDF: Unlocking Verifiable Data in the World's Most Popular Document Format]]> https://pse.dev/blog/zkpdf-unlocking-verifiable-data https://pse.dev/blog/zkpdf-unlocking-verifiable-data Fri, 13 Jun 2025 00:00:00 GMT zkpdf zero-knowledge proofs privacy pdf zkid digilocker pdfs <![CDATA[Efficient Client-Side Proving for zkID]]> https://pse.dev/blog/efficient-client-side-proving-for-zkid https://pse.dev/blog/efficient-client-side-proving-for-zkid Wed, 11 Jun 2025 00:00:00 GMT client-side proving zkp zero-knowledge proofs zkid post-quantum benchmarks <![CDATA[PSE May 2025 newsletter]]> https://pse.dev/blog/pse-may-2025-newsletter https://pse.dev/blog/pse-may-2025-newsletter Fri, 30 May 2025 00:00:00 GMT newsletter post quantum cryptography private proof delegation client side proving machina io voprf tlsnotary maci zk-kit mpc framework mopro semaphore <![CDATA[MPCStats Retrospective: Lessons from a Privacy-Preserving Stats Platform]]> https://pse.dev/blog/mpc-retrospective https://pse.dev/blog/mpc-retrospective Fri, 23 May 2025 00:00:00 GMT MPC privacy data analysis Devcon TLSNotary <![CDATA[Integrating Mopro Native Packages Across Mobile Platforms]]> https://pse.dev/blog/mopro-native-packages https://pse.dev/blog/mopro-native-packages Thu, 22 May 2025 00:00:00 GMT
_iOS zkEmail App Example_

_Android zkEmail App Example_

_Flutter App for iOS & Android zkEmail Example_

Notice that, with Mopro and [Noir-rs](https://github.com/zkmopro/noir-rs), we port zkEmail into native packages while keeping the proof size aligned with Noir's Barretenberg backend CLI. The API logic transfers directly to mobile platforms with no extra work or glue code needed!

### How it worked before Mopro

Previously, integrating ZKPs into mobile applications involved more manual work and platform-specific implementations. For example, developers might have used solutions like:

- **Swoir:** [Swoir](https://github.com/Swoir/Swoir/tree/main)
- **noir-android:** [noir_android](https://github.com/madztheo/noir_android/tree/main)

These approaches often required developers to handle bridging code and manage dependencies separately for each platform, unlike the streamlined process Mopro now offers. With Mopro, developers can leverage pre-built native packages and import them directly via package managers. Combined with automated binding generation, this significantly reduces the need for manual API crafting and platform-specific glue code. While developers still write their application logic in platform-specific languages, Mopro simplifies the integration of core ZK functionality, especially when leveraging Rust's extensive cryptography ecosystem.

## Under The Hood

Developing native packages involved tackling several technical challenges to ensure smooth and efficient operation across platforms. This section dives into two key challenges we addressed:

1. Optimizing static library sizes for iOS to manage package distribution and download speeds.
2. Ensuring compatibility with Android's release mode to prevent runtime errors due to code shrinking.

### Optimizing Static Library Sizes for iOS

#### Why Static Linking?

UniFFI exports Swift bindings as a static archive (`libmopro_bindings.a`). Static linking ensures all Rust symbols are available at link time, simplifying Xcode integration.
However, it bundles all Rust dependencies (the Barretenberg backend, rayon, big-integer math), resulting in larger archive sizes.

#### Baseline Size

The full build creates an archive around **≈ 153 MB** in size. When uploading files over 100 MB to GitHub, Git LFS takes over, replacing the file with a text pointer in the repository while storing the actual content on a remote server like GitHub.com. This setup can cause issues for package managers that try to fetch the package directly from a GitHub URL on release. While uploading large files may be acceptable on some package management platforms or remote servers like Cloudflare R2, the large file size slows down:

- CocoaPods or SwiftPM downloads
- CI cache recovery
- Cloning the repository, especially on slower connections

#### Our Solution: Zip & Unzip Strategy

To keep development fast and responsive, we compress the entire `MoproBindings.xcframework` before uploading it to GitHub and publishing it to CocoaPods, reducing its size to about **≈ 41 MB**. We also found that by customizing `script_phase` in the `.podspec` (see our implementation in [`ZKEmailSwift.podspec`](https://github.com/zkmopro/zkemail-swift-package/blob/b5c3a94f8580b0332ced2c8409a1017530a56e38/ZKEmailSwift.podspec#L93-L103)), we can unzip the bindings during `pod install`. This gives us the best of both worlds: (1) smaller packages for distribution and (2) full compatibility with Xcode linking. The added CPU cost is minor compared to the time saved on downloads.

#### Comparison With Android

On Android, dynamic `.so` libraries (around 22 MB in total) are used, with symbols loaded lazily at runtime to keep the package size small. In contrast, because of iOS's constraints on third-party Rust dynamic libraries in App Store builds, static linking with compression is currently the most viable option, to the best of our knowledge.
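The compress half of this zip & unzip strategy can be sketched with Python's standard `zipfile` module. This is an illustrative sketch, not the project's actual release script, and the paths are assumptions:

```python
import zipfile
from pathlib import Path

def zip_xcframework(framework_dir: str, archive_path: str) -> None:
    """Compress an .xcframework directory into a single release archive."""
    root = Path(framework_dir)
    with zipfile.ZipFile(archive_path, "w", compression=zipfile.ZIP_DEFLATED) as zf:
        for file in root.rglob("*"):
            if file.is_file():
                # Store paths relative to the framework's parent so the
                # archive unzips back into "MoproBindings.xcframework/...".
                zf.write(file, file.relative_to(root.parent))

def unzip_xcframework(archive_path: str, dest_dir: str) -> None:
    """Restore the framework, e.g. from a CocoaPods script_phase."""
    with zipfile.ZipFile(archive_path) as zf:
        zf.extractall(dest_dir)
```

In the real pipeline, the unzip step runs inside the `.podspec`'s `script_phase` during `pod install`, as linked above.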
### Ensuring Android Release Mode Compatibility

Another challenge we tackled was compatibility with Android's release mode. By default, Android's release build process applies [code shrinking and obfuscation](https://developer.android.com/build/shrink-code) to optimize app size. While beneficial for optimization, this process caused a `java.lang.UnsatisfiedLinkError` in Mopro apps. The root cause was that code shrinking interfered with [JNA (Java Native Access)](https://mozilla.github.io/uniffi-rs/latest/kotlin/gradle.html#jna-dependency), a crucial dependency for UniFFI, which we use for Rust-to-Kotlin bindings. The shrinking process removed or altered parts of JNA that the bindings needed, leading to the `UnsatisfiedLinkError` when the app tried to call the native Rust code.

#### The Fix: Adjusting Gradle Build Configurations

Our solution, as detailed in [GitHub Issue #416](https://github.com/zkmopro/mopro/issues/416), is a configuration adjustment in the consuming application's `android/build.gradle.kts` file (or `android/app/build.gradle` for older Android projects). Developers using Mopro need to explicitly disable code and resource shrinking for their release builds:

```kotlin
android {
    // ...
    buildTypes {
        getByName("release") {
            // Disables code shrinking, obfuscation, and optimization for
            // your project's release build type.
            isMinifyEnabled = false
            // Disables resource shrinking, which is performed by the
            // Android Gradle plugin.
            isShrinkResources = false
        }
    }
}
```

#### Impact and Future Considerations

This configuration keeps JNA, and consequently the UniFFI bindings, intact, allowing Mopro-powered Android apps to build and run successfully in release mode. The approach aligns with recommendations in the official Flutter documentation for handling [similar issues](https://docs.flutter.dev/deployment/android#shrink-your-code-with-r8).
While this increases the final app size slightly, it guarantees the stability and functionality of the native ZK operations. We are also actively exploring ways to refine this in the future to allow for optimized builds without compromising JNA's functionality.

## The Road Ahead

### a. Manual Tweaks for Cross-Platform Frameworks

Cross-platform frameworks like React Native and Flutter require additional glue code to define modules, as they straddle multiple runtimes. Each layer needs its own integration. For example, in our [zkEmail React Native package](https://github.com/zkmopro/zkemail-react-native-package), we use three separate wrappers:

- \[TypeScript\] [`MoproReactNativePackageModule.ts`](https://github.com/zkmopro/zkemail-react-native-package/blob/main/src/MoproReactNativePackageModule.ts): declares the public API and lets the React Native code-gen load the native module.
- \[Swift\] [`MoproReactNativePackageModule.swift`](https://github.com/zkmopro/zkemail-react-native-package/blob/main/ios/MoproReactNativePackageModule.swift): loads bindings into Objective-C–discoverable classes.
- \[Kotlin\] [`MoproReactNativePackageModule.kt`](https://github.com/zkmopro/zkemail-react-native-package/blob/main/android/src/main/java/expo/modules/moproreactnativepackage/MoproReactNativePackageModule.kt): loads bindings and bridges via JNI.

Similarly, for our [zkEmail Flutter package](https://github.com/zkmopro/zkemail_flutter_package), a comparable set of wrappers is employed:

- \[Dart\] [`zkemail_flutter_package.dart`](https://github.com/zkmopro/zkemail_flutter_package/blob/main/lib/zkemail_flutter_package.dart): defines the public Dart API for the Flutter plugin, invoking methods on the native side via platform channels.
- \[Swift\] [`ZkemailFlutterPackagePlugin.swift`](https://github.com/zkmopro/zkemail_flutter_package/blob/main/ios/Classes/ZkemailFlutterPackagePlugin.swift): calls the underlying Rust-generated Swift bindings.
- \[Kotlin\] [`ZkemailFlutterPackagePlugin.kt`](https://github.com/zkmopro/zkemail_flutter_package/blob/main/android/src/main/kotlin/com/zkmopro/zkemail_flutter_package/ZkemailFlutterPackagePlugin.kt): bridges Dart calls to the Rust-generated Kotlin bindings.

### b. Support for Custom Package Names

Initially, we encountered naming conflicts when the same XCFramework was used in multiple Xcode projects. Allowing fully customizable package names is an ongoing effort: initial progress was made in [issue#387](https://github.com/zkmopro/mopro/issues/387), a partial fix landed in [PR#404](https://github.com/zkmopro/mopro/pull/404), and further work is tracked in [issue#413](https://github.com/zkmopro/mopro/issues/413).

## What's Next: Shaping Mopro's Future Together

Currently, the Mopro CLI creates app templates via the `mopro create` command once bindings have been generated with `mopro build`. Our vision is to enhance this by automatically generating fully customized native packages, including all necessary glue code for cross-platform frameworks, potentially through a new command (perhaps `mopro pack`) or by extending existing commands. We believe this will significantly streamline the developer workflow. If you're interested in shaping this feature, check out the discussion and contribute your ideas in [issue #419](https://github.com/zkmopro/mopro/issues/419). By achieving this, we aim to unlock seamless mobile proving capabilities, simplifying adoption for developers leveraging existing ZK solutions or porting Rust-based ZK projects. Your contributions can help make mobile ZK development more accessible for everyone! If you find this interesting, feel free to reach out to the Mopro team on Telegram: [@zkmopro](https://t.me/zkmopro), or better yet, dive into the codebase and open a PR! We're excited to see what the community builds with Mopro. Happy proving!
]]>
<![CDATA[Are elliptic curves going to survive the quantum apocalypse?]]> https://pse.dev/blog/are-elliptic-curves-going-to-survive-the-quantum-apocalypse https://pse.dev/blog/are-elliptic-curves-going-to-survive-the-quantum-apocalypse Tue, 20 May 2025 00:00:00 GMT <![CDATA[TEE based private proof delegation]]> https://pse.dev/blog/tee-based-ppd https://pse.dev/blog/tee-based-ppd Tue, 20 May 2025 00:00:00 GMT <![CDATA[Hello, World: the first signs of practical iO]]> https://pse.dev/blog/hello-world-the-first-signs-of-practical-io https://pse.dev/blog/hello-world-the-first-signs-of-practical-io Mon, 12 May 2025 00:00:00 GMT indistinguishability obfuscation cryptography Diamond iO lattice <![CDATA[Reflecting on MACI Platform: What We Built, Learned, and What’s Next]]> https://pse.dev/blog/reflecting-on-maci-platform https://pse.dev/blog/reflecting-on-maci-platform Thu, 01 May 2025 00:00:00 GMT MACI Privacy Public Goods RetroPGF Zero Knowledge Governance <![CDATA[Introducing Trinity]]> https://pse.dev/blog/introducing-trinity https://pse.dev/blog/introducing-trinity Mon, 28 Apr 2025 00:00:00 GMT mpc zero-knowledge proofs privacy cryptography kzg halo2 garbled circuit zero knowledge 2pc plonk computational integrity private transactions security infrastructure/protocol <![CDATA[Towards a Quantum-Safe P2P for Ethereum]]> https://pse.dev/blog/towards_a_quantum-safe_p2p_for_ethereum https://pse.dev/blog/towards_a_quantum-safe_p2p_for_ethereum Tue, 22 Apr 2025 00:00:00 GMT quantum computing p2p ethereum cryptography networking post-quantum security infrastructure/protocol <![CDATA[Code Optimizations in the Landscape of Post-Quantum Cryptography]]> https://pse.dev/blog/code-optimizations-in-the-landscape-of-post-quantum-cryptography https://pse.dev/blog/code-optimizations-in-the-landscape-of-post-quantum-cryptography Mon, 07 Apr 2025 00:00:00 GMT post-quantum cryptography pqc cryptography optimization security algorithms zero-knowledge proofs elliptic curves research 
infrastructure/protocol <![CDATA[Circom MPC: TL;DR and Retrospective]]> https://pse.dev/blog/circom-mpc-tldr-and-retrospective https://pse.dev/blog/circom-mpc-tldr-and-retrospective Thu, 06 Mar 2025 00:00:00 GMT " - IsPositive can be replaced with "> 0" - x = d \* q + r can be written as "q = x / d" **Scaling, Descaling and Quantized Aware Computation** Circomlib-ML "scaled" a float to int to maintain precision using $10^{18}$: - for input $a$, weight $w$, and bias $b$ that are floats - $a$, $w$ are scaled to $a' = a10^{18}$ _and_ $w' = w10^{18}$ - $b$ is scaled to $b' = b10^{36}$_,_ due to in a layer we have computation in the form of $aw + b \longrightarrow$ the outputs of this layer is scaled with $10^{36}$ - To proceed to the next layer, we have to "descale" the outputs of the current layer by (int) dividing the outputs with $10^{18}$ - say, with an output $x$, we want to obtain $x'$ s.t. - $x = x'*10^{18} + r$ - so effectively in this case $x'$ is our actual output - in ZK $x'$ and $r$ are provided as witness - in MPC $x'$ and $r$ have to be computed using division (expensive) For efficiency we replace this type of scaling with bit shifting, i.e. - instead of $*10^{18}$ ($*10^{36}$) we do $*2^s$ ($*2^{2s}$)where $s$ is called the scaling factor - The scaling is done prior to the MPC - $s$ can be set accordingly to the bitwidth of the MPC protocol - now, descaling is simply truncation or right-shifting, which is a commonly supported and relatively cheap operation in MPC. - $x' = x >> s$ **The "all inputs" Circom template** Some of the Circomlib-ML circuits have no "output" signals; we patched them to treat the outputs as 'output' signals. 
The following circuits were changed:

- ArgMax, AveragePooling2D, BatchNormalization2D, Conv1D, Conv2D, Dense, DepthwiseConv2D, Flatten2D, GlobalAveragePooling2D, GlobalMaxPooling2D, LeakyReLU, MaxPooling2D, PointwiseConv2D, ReLU, Reshape2D, SeparableConv2D, UpSampling2D

_**Some templates (Zanh, ZeLU and Zigmoid) are "unpatchable" due to their complexity for MPC computation.**_

### Keras2Circom Patches

> keras2circom expects a convolutional NN; we forked keras2circom and created a [compatible version](https://github.com/namnc/keras2circom).

### Benchmarks

After patching Circomlib-ML, we can run benchmarks separately for each patched layer above.

**Docker Settings and running MP-SPDZ on multiple machines**

For all benchmarks we inject synthetic network latency inside a Docker container. We have two settings with fixed latency and bandwidth:

1. One region: Europe & Europe
2. Different regions: Europe & US

We used `tc` to limit latency and set a bandwidth:

```bash
tc qdisc add dev eth0 root handle 1:0 netem delay 2ms
tc qdisc add dev eth0 parent 1:1 handle 10:0 tbf rate 5gbit burst 200kb limit 20000kb
```

Here we set the delay to 2 ms and the rate to 5 Gbit/s to imitate running within the same region (the commands are applied automatically when you run the script). There's a [Dockerfile](https://github.com/namnc/circom-mp-spdz/blob/main/Dockerfile), as well as different benchmark scripts in the repo, to make testing and benchmarking easier. If you want to run these tests yourself:

1. Set up the Python environment:

```bash
python3 -m venv .venv
source .venv/bin/activate
```

2. Run a local benchmarking script:

```bash
python3 benchmark_script.py --tests-run=true
```

3. Build and run the Docker container:

```bash
docker build -t circom-mp-spdz .
docker network create test-network
docker run -it --rm --cap-add=NET_ADMIN --name=party1 --network test-network -p 3000:3000 -p 22:22 circom-mp-spdz
```

4. In the Docker container:

```bash
service ssh start
```

5.
Run the benchmarking script that simulates multiple machines:

```bash
python3 remote_benchmark.py --party1 127.0.0.1:3000
```

6. Deactivate the venv:

```bash
deactivate
```

**Benchmarks**

Below we provide benchmarks for each layer separately; a model that combines different layers will yield the corresponding combined performance.

![](/articles/circom-mpc-tldr-and-retrospective/_gT634uo_O9kx4ogisxtj.webp)

![](/articles/circom-mpc-tldr-and-retrospective/1EZeKTAV2tO1M-t1kwtk2.webp)

Accuracy of the circuits compared to the Keras reference implementation:

![](/articles/circom-mpc-tldr-and-retrospective/RWD7aoy3r8bs-uMc0d45D.webp)

Our benchmarks only give a taste of MPC-ML performance; any interested party can estimate the approximate performance of a model that combines different layers. ]]> circom mpc secure multi-party computation privacy cryptography zero-knowledge proofs machine learning mp-spdz computational integrity research <![CDATA[Intmax: a scalable payment L2 from plasma and validity proofs]]> https://pse.dev/blog/intmax-a-scalable-payment-l2-from-plasma-and-validity-proofs https://pse.dev/blog/intmax-a-scalable-payment-l2-from-plasma-and-validity-proofs Tue, 04 Mar 2025 00:00:00 GMT intmax plasma l2 scaling zero-knowledge proofs validity proofs plonky2 data availability <![CDATA[Lattice-Based Proof Systems]]> https://pse.dev/blog/lattice-based-proof-systems https://pse.dev/blog/lattice-based-proof-systems Tue, 18 Feb 2025 00:00:00 GMT <![CDATA[Retrospective: Summa]]> https://pse.dev/blog/retrospective-summa https://pse.dev/blog/retrospective-summa Mon, 10 Feb 2025 00:00:00 GMT summa proof of reserves zero-knowledge proofs cryptography privacy security centralized exchange halo2 ethereum infrastructure/protocol <![CDATA[Web2 Nullifiers using vOPRF]]> https://pse.dev/blog/web2-nullifiers-using-voprf https://pse.dev/blog/web2-nullifiers-using-voprf Thu, 30 Jan 2025 00:00:00 GMT voprf nullifiers zero-knowledge proofs privacy web2 identity cryptography mpc
anonymity/privacy infrastructure/protocol <![CDATA[Certificate Transparency Using NewtonPIR [PAPER WITHDRAWN]]]> https://pse.dev/blog/certificate-transparency-using-newtonpir https://pse.dev/blog/certificate-transparency-using-newtonpir Tue, 28 Jan 2025 00:00:00 GMT certificate transparency newtonpir privacy security private information retrieval fhe cryptography web security tls zero-knowledge <![CDATA[Self-Sovereign Identity & Programmable Cryptography: Challenges Ahead]]> https://pse.dev/blog/self-sovereign-identity-programmable-cryptography-challenges-ahead https://pse.dev/blog/self-sovereign-identity-programmable-cryptography-challenges-ahead Thu, 23 Jan 2025 00:00:00 GMT self-sovereign identity ssi privacy cryptography zero-knowledge proofs verifiable credentials digital identity identity standards ethereum <![CDATA[Mopro: Comparison of Circom Provers]]> https://pse.dev/blog/mopro-comparison-of-circom-provers https://pse.dev/blog/mopro-comparison-of-circom-provers Tue, 21 Jan 2025 00:00:00 GMT mopro circom zero-knowledge proofs witness generation proof generation snarkjs rapidsnark mobile performance toolkits <![CDATA[Retrospective: Trusted Setups and P0tion Project]]> https://pse.dev/blog/retrospective-trusted-setups-and-p0tion-project https://pse.dev/blog/retrospective-trusted-setups-and-p0tion-project Wed, 15 Jan 2025 00:00:00 GMT trusted setups p0tion powers of tau kzg zero-knowledge proofs cryptography ethereum groth16 security infrastructure/protocol <![CDATA[Why We Can't Build Perfectly Secure Multi-Party Applications (yet)]]> https://pse.dev/blog/why-we-cant-build-perfectly-secure-multi-party-applications-yet https://pse.dev/blog/why-we-cant-build-perfectly-secure-multi-party-applications-yet Tue, 14 Jan 2025 00:00:00 GMT mpc secure multi-party computation fhe fully homomorphic encryption <![CDATA[AnonKlub: Reflections on Our Journey in Privacy-Preserving Solutions]]> 
https://pse.dev/blog/anonklub-reflections-on-our-journey-in-privacy-preserving-solutions https://pse.dev/blog/anonklub-reflections-on-our-journey-in-privacy-preserving-solutions Tue, 01 Oct 2024 00:00:00 GMT zk-ecdsa privacy ethereum zero-knowledge proofs circom halo2 spartan anonymity identity cryptography <![CDATA[Secure Multi-Party Computation]]> https://pse.dev/blog/secure-multi-party-computation https://pse.dev/blog/secure-multi-party-computation Tue, 06 Aug 2024 00:00:00 GMT mpc secure multi-party computation privacy cryptography garbled circuit secret sharing oblivious transfer security circuits threshold cryptography <![CDATA[The next chapter for zkEVM Community Edition]]> https://pse.dev/blog/the-next-chapter-for-zkevm-community-edition https://pse.dev/blog/the-next-chapter-for-zkevm-community-edition Wed, 05 Jun 2024 00:00:00 GMT zkevm zero-knowledge proofs ethereum scaling cryptography rollups zkvm research infrastructure/protocol proof systems <![CDATA[Unleashing Potential: Introducing the PSE Core Program]]> https://pse.dev/blog/unleashing-potential-introducing-the-pse-core-program https://pse.dev/blog/unleashing-potential-introducing-the-pse-core-program Wed, 24 Apr 2024 00:00:00 GMT [ { "title": "WEEK 0: PRE-REQUISITES", "items": [ "Course overview and resources", "Git, GitHub, and PR workflow basics", "Introduction to ZKPs and Number Theory" ] }, { "title": "WEEK 1: CRYPTOGRAPHIC BASICS", "items": [ "Getting started with Circom", "Basics of encryption and hash functions", "Digital signatures and elliptic curve cryptography" ] }, { "title": "WEEK 2: MORE CRYPTO + ZKPS", "items": [ "Circom crash course + practice", "KZG Commitments and zkSNARKs", "Overview of Trusted Setups and Groth16" ] }, { "title": "WEEK 3: HACKATHON", "items": [ "A break from studying", "One week to build something with your new skills!" 
] }, { "title": "WEEK 4: PLONK WEEK", "items": [ "Learn Rust and complete Rustlings", "Deep dive into PLONK", "Make a presentation and blog post on PLONK" ] }, { "title": "WEEK 5: TECHNOLOGIES + APPLICATIONS", "items": [ "Halo2 introduction and practical", "Study of FHE and MPC", "Explore Semaphore, Bandada, TLSNotary, ZKEmail" ] } ] ### Frequently Asked Questions { "type": "multiple", "size": "xs", "items": [ { "value": "who-can-apply", "label": "Who can apply?", "children": "The Core Program is open to university students based in Japan, South Korea, Taiwan, Costa Rica, Ecuador and Argentina with a basic understanding of programming. If you're currently enrolled in a mathematics or computer science program, you’re likely an excellent fit." }, { "value": "structure", "label": "What is the structure of the program?", "children": "We use a hybrid learning model with the majority of learning happening online and weekly in-person meetings for discussions and problem-solving. The program consists of three stages: 1) self-driven exploration & guidance, 2) hands-on circuit writing, and 3) open-source project contribution." }, { "value": "time-commitment", "label": "How much time will I need to commit?", "children": "We’re looking for dedicated students who can commit 40 hours a week from mid-July to September. You will be required to attend in-person meetups once a week and make presentations." }, { "value": "remote", "label": "Can I participate remotely?", "children": "Unfortunately no, the weekly in-person sessions are required for in-depth discussions and collaborative problem-solving." }, { "value": "gain", "label": "What will I gain from this program?", "children": "Upon completing the program, you'll have comprehensive knowledge about programmable cryptography, a bolstered GitHub portfolio, and opportunities to apply for grants for further research and contributions." 
}, { "value": "questions", "label": "What if I have more questions?", "children": "For any further questions or additional information, you can join our [Telegram group](https://t.me/+ebGauHbpDE0yZGIx)!" } ] } ]]> education zero-knowledge proofs privacy cryptography proof systems open-source security public goods <![CDATA[Advancing Anon Aadhaar: what's new in v1.0.0]]> https://pse.dev/blog/advancing-anon-aadhaar-whats-new-in-v100 https://pse.dev/blog/advancing-anon-aadhaar-whats-new-in-v100 Wed, 14 Feb 2024 00:00:00 GMT anonaadhaar privacy zero-knowledge proofs digital identity identity circom ethereum cryptography <![CDATA[Zero to Start: Applied Fully Homomorphic Encryption (FHE) Part 1]]> https://pse.dev/blog/zero-to-start-applied-fully-homomorphic-encryption-fhe-part-1 https://pse.dev/blog/zero-to-start-applied-fully-homomorphic-encryption-fhe-part-1 Thu, 21 Dec 2023 00:00:00 GMT )). FHE is an extension of public key cryptography; the encryption is "homomorphic" because it works on the principle that for every function performed on unencrypted text (Plaintext), there is an equivalent function for encrypted text (Ciphertext). ![Homomorphic Encryption](/articles/zero-to-start-applied-fully-homomorphic-encryption-fhe-part-1/PoAkyRxFZ5v2OieE-iRPS.webp) Homomorphic Encryption FHE shares fundamental components with traditional cryptography like encryption, decryption, and key generation. In addition to this, it uniquely enables arithmetic operations such as addition and multiplication on ciphertexts. There are generally four categories of homomorphic encryption: 1. **Partially homomorphic**: enables only one type of operation (addition or multiplication). RSA is an example of partially homomorphic encryption only using multiplication and not addition. 2. **Somewhat homomorphic**: limited for one operation but unlimited for the other. For example, limited multiplications but unlimited additions. 3. 
**Leveled homomorphic**: limited operations for both addition and multiplication 4. **Fully homomorphic**: unlimited operations for both addition and multiplication (and others). ![](/articles/zero-to-start-applied-fully-homomorphic-encryption-fhe-part-1/QkcoPW4EGRdD9wBEpqHb4.webp) In the past, the difficulty in achieving FHE was due to the "noise" that accumulated with every subsequent operation. Once the accumulated noise exceeds a threshold, decryption becomes impossible. Craig Gentry proposed the first FHE scheme in 2009, where he solved this problem with a method called bootstrapping. Bootstrapping recursively evaluates the decryption circuit to reduce and manage noise accumulation. ## **Why is FHE important?** Fully Homomorphic Encryption (FHE) signifies a groundbreaking shift in privacy, enabling data-centric systems to inherently preserve privacy with minimal data exposure. FHE, built using lattice-based cryptography, also offers the notable advantage of being post-quantum resistant, ensuring robust security against potential future threats from quantum computing. Some [general](https://homomorphicencryption.org/wp-content/uploads/2018/10/CCS-HE-Tutorial-Slides.pdf?ref=blog.sunscreen.tech) FHE use cases include: - Private inference & training: FHE could be used to protect the privacy of both the model and data (likely 3-5 years away). - Encrypted searches: query an encrypted file and only see the result of your specific query without the entire contents of the database revealed, also known as Private Information Retrieval (PIR). - Policy Compliance & Identity Management: Secure identity management by enabling the processing of identity-related data without exposure, allowing organizations to comply with regulators' KYC policies.
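The partially homomorphic property of textbook RSA mentioned above is easy to see concretely: multiplying two ciphertexts yields a ciphertext of the product of the plaintexts. A minimal sketch with tiny, insecure parameters chosen purely for illustration:

```python
# Toy demonstration that textbook RSA is partially homomorphic:
# multiplying ciphertexts multiplies the underlying plaintexts.
# Tiny, insecure parameters — for illustration only.

p, q = 61, 53
n = p * q              # modulus
phi = (p - 1) * (q - 1)
e = 17                 # public exponent, coprime with phi
d = pow(e, -1, phi)    # private exponent (Python 3.8+ modular inverse)

def encrypt(m): return pow(m, e, n)
def decrypt(c): return pow(c, d, n)

m1, m2 = 7, 9
c1, c2 = encrypt(m1), encrypt(m2)

# Multiply the ciphertexts without ever decrypting them
c_prod = (c1 * c2) % n
assert decrypt(c_prod) == (m1 * m2) % n  # 63
```

Note that this only works for multiplication: there is no corresponding way to add two RSA ciphertexts, which is exactly what makes the scheme *partially* rather than fully homomorphic.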
![General FHE Use Cases](/articles/zero-to-start-applied-fully-homomorphic-encryption-fhe-part-1/qZBR43OiJJQubIwL1iIc2.webp) General FHE Use Cases Fully Homomorphic Encryption (FHE) holds immense significance in blockchain technology because it can perform encrypted data computations within a trustless environment. We won't dive into the importance of privacy on the blockchain and how off-chain ZKPs are not the complete solution, but Wei Dai's article [Navigating Privacy on Public Blockchains](https://wdai.us/posts/navigating-privacy/) is a great primer. Here are some theoretical blockchain use cases that FHE could facilitate: - [Private Transactions](https://eprint.iacr.org/2022/1119.pdf): the processing of confidential transactions by smart contracts, allowing private transactions in dark pools, AMMs, blind auctions, and voting. - [MEV](https://collective.flashbots.net/t/frp-10-distributed-blockbuilding-networks-via-secure-knapsack-auctions/1955) (Maximal Extractable Value) Mitigation: FHE could potentially allow proposing blocks and ordering transactions while ensuring Pre-execution, failed execution, and post-execution privacy, offering a potential solution to prevent front-running. - Scaling: [Leveraging](https://www.fhenix.io/fhe-rollups-scaling-confidential-smart-contracts-on-ethereum-and-beyond-whitepaper/) [FHE Rollups](https://www.fhenix.io/wp-content/uploads/2023/11/FHE_Rollups_Whitepaper.pdf) presents a scalable approach to execute private smart contracts utilizing the security derived from Ethereum for state transitions - [Private Blockchains](https://eprint.iacr.org/2022/1119.pdf): encrypted chain states that are programmatically decrypted via consensus using Threshold FHE. 
![FHE: Blockchain Use Cases](/articles/zero-to-start-applied-fully-homomorphic-encryption-fhe-part-1/duTnCuiIvMfqdk3ZERSe2.webp) FHE: Blockchain Use Cases The applied use cases for FHE are far-reaching, but there are non-trivial technical challenges to overcome, and many are still being explored today. At its core, FHE ensures secure **data processing**, which, combined with other cryptographic primitives, can be incredibly powerful. In our exploration of Applied FHE, we dive deeper into real-world applications and use cases. ## **ZKP, MPC, & FHE** The terms ZKPs, MPC, and FHE have often been misused and interchanged and have been the source of much confusion. The post [Beyond Zero-Knowledge: What's Next in Programmable Cryptography?](https://mirror.xyz/privacy-scaling-explorations.eth/xXcRj5QfvA_qhkiZCVg46Gn9uX8P_Ld-DXlqY51roPY) provides a succinct overview and comparison of Zero-Knowledge Proofs (ZKPs), Multi-Party Computation (MPC), Fully Homomorphic Encryption (FHE) and Indistinguishability Obfuscation (iO). All fall under the broader umbrella of programmable cryptography. To briefly summarize how the three concepts are connected: **[Multi-Party Computation (MPC)](https://www.youtube.com/watch?v=aDL_KScy6hA&t=571s)**: MPC, when described as a **_general function_**, is any setup where mutually distrustful parties can individually provide inputs (private to others) to collaboratively compute a public outcome. MPC is also the term used to describe the **_technology_** itself, where randomized data shares from each individual are delegated for compute across servers.
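The "randomized data shares delegated across servers" idea can be sketched with additive secret sharing, one of the simplest secret-sharing constructions (the field modulus and three-server setup below are arbitrary choices for illustration):

```python
import random

PRIME = 2**61 - 1  # field modulus (an arbitrary Mersenne prime)

def share(secret, n=3):
    """Split a secret into n random additive shares over the field."""
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

# Each party splits its private input across three servers
alice_shares = share(42)
bob_shares = share(100)

# Each server adds the shares it holds; no single server sees 42 or 100
server_sums = [a + b for a, b in zip(alice_shares, bob_shares)]

# Combining the servers' results reveals only the public output
assert reconstruct(server_sums) == 142
```

Any single share is a uniformly random field element, so a server learns nothing about the input it helps compute on; real MPC protocols add machinery (e.g. verifiability, multiplication) on top of this basic idea.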
![MPC](/articles/zero-to-start-applied-fully-homomorphic-encryption-fhe-part-1/poh6Brvlh1qyBiYpgPxyP.webp) MPC To add to the confusion, it is also often used to describe MPC **_use cases_**, most notably in the context of [Distributed Key Generation](https://en.wikipedia.org/wiki/Distributed_key_generation) (DKG) and [Threshold Signature Schemes](https://link.springer.com/referenceworkentry/10.1007/0-387-23483-7_429#:~:text=Threshold%20signatures%20are%20digital%20signatures,structure%20of%20a%20threshold%20scheme.) (TSS). Three leading technologies form the [building blocks](https://open.spotify.com/episode/4zfrPFbPWZvn6fXwrrEa5f?si=9ab56d47510f4da0) of MPC **_applications_**: [Garbled Circuits (GC)](https://www.youtube.com/watch?v=La6LkUZ4P_s), Linear Secret Sharing Schemes (LSSS), and Fully Homomorphic Encryption (FHE). These can be used either together or on their own. ![MPC & ZKPs](/articles/zero-to-start-applied-fully-homomorphic-encryption-fhe-part-1/XiqL4MvjssDILJ59mDR5_.webp) MPC & ZKPs **Zero-Knowledge Proofs (ZKPs):** A method that allows a single party (prover) to prove to another party (verifier) knowledge about a piece of data without revealing the data itself. Using both public and private inputs, ZKPs enable the prover to present a true or false output to the verifier. ![ZKPs](/articles/zero-to-start-applied-fully-homomorphic-encryption-fhe-part-1/8YgbCNa_VDgqwUo3y5qaG.webp) ZKPs In Web3 applications, the integration of ZKPs alongside FHE becomes crucial for constructing private and secure systems. ZKPs are vital because they can be used to generate proofs of correctly constructed FHE ciphertexts. Otherwise, users could encrypt unverified gibberish, corrupting the entire FHE circuit evaluation. Note the difference in ZKPs, FHE, and MPCs, where the input element of each primitive is distinct when evaluating the exposure of private data.
- In ZKPs, private data contained in the input is only _visible to the prover_ - In MPC, private data contained in each input is only _visible to the owner_ - In FHE, private data contained in the input is encrypted and is **_never revealed_** While MPC is network bound, FHE and ZKPs are compute bound. The three primitives also differ regarding relative computation costs and interactiveness required between parties. ![ZKPs, MPC, FHE, computation costs and interactiveness](/articles/zero-to-start-applied-fully-homomorphic-encryption-fhe-part-1/fkjwJBfIJ2VkIGLKsqK1D.webp) ZKPs, MPC, FHE, computation costs and interactiveness In summary, - ZKPs focus on proving the truth of a statement without revealing the underlying data; they are useful for preserving a private state for the prover. - MPC enables joint computation; it is useful when users want to keep their state private from others. - FHE allows computations on encrypted data without decryption; it is non-interactive and useful for preserving privacy throughout the entire computation process. FHE is an extension of public key cryptography, not a replacement for ZKPs or MPC. Each can act as an individual building block and serve a distinct cryptographic purpose. An assessment needs to be made of which primitive should be applied, and where, within different applications. ## **The State of FHE Today** Early concepts of FHE developed in the 1970s-90s laid the theoretical groundwork for homomorphic encryption. However, the real breakthrough came with Gentry's solution for FHE in 2009. The initial construction, however, was far too slow for practical use: performance at the time was close to 30 minutes per bit operation, and it was only applicable in a single-key setting.
Much of the research published following Gentry's paper has been focused on performance improvements that address these issues through: - [refining schemes](https://eprint.iacr.org/2021/315.pdf) - [reducing computation complexity](https://eprint.iacr.org/2023/1788) - [faster bootstrapping](https://eprint.iacr.org/2023/759), and - [hardware acceleration](https://eprint.iacr.org/2023/618) FHE is not possible with Ethereum today due to the size of ciphertexts and the cost of computation on-chain. It is estimated that, at the current rate of hardware acceleration, we may see applications in production by 2025. Zama's implementation of [fhEVM](https://docs.zama.ai/fhevm/) is a fork of Ethereum; they have several [tools](https://docs.zama.ai/homepage/) available: - **[TFHE-rs](https://docs.zama.ai/tfhe-rs)**: Pure Rust implementation of TFHE for boolean and small integer arithmetic over encrypted data - **[fhEVM](https://docs.zama.ai/fhevm)**: Private smart contracts on the EVM using homomorphic encryption There are some challenges with Zama's fhEVM approach that have yet to be addressed. Networks using Zama's fhEVM are limited to about 2 FHE transactions per second (tps). Compared to Ethereum's ~15 tps this is not far off; however, it will need to be greatly improved for many time-sensitive applications. Additionally, operations on encrypted integers are much more difficult to perform than on plaintext integers. For example, on an Amazon m6i.metal machine (one of Amazon's top machines costing $2-4k per month to operate): - adding or subtracting two **encrypted** uint8 values takes around 70ms - adding **plaintext** uint8 values is essentially free and instant on any modern device There are also limitations to the size of unsigned integers available in the fhEVM context. Encrypted uint32 values are the largest possible in the fhEVM, while uint256 are the largest in the standard EVM and are used frequently by many protocols on Ethereum.
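To get a feel for what "operations on encrypted integers" means, here is a sketch using textbook Paillier encryption — an additively homomorphic scheme, far simpler than the TFHE used by fhEVM, shown here with tiny insecure parameters purely for illustration:

```python
import math, random

# Tiny, insecure parameters — illustration only
p, q = 61, 53
n = p * q
n2 = n * n
g = n + 1                      # standard simplified choice of generator
lam = math.lcm(p - 1, q - 1)   # Carmichael function of n

def L(x): return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # decryption constant

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = encrypt(17), encrypt(25)
# Multiplying ciphertexts *adds* the plaintexts — no decryption needed
assert decrypt((a * b) % n2) == 42
```

Even this toy example shows why encrypted arithmetic is costly: a single encrypted addition requires modular exponentiations over a modulus squared, whereas plaintext addition is a single machine instruction.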
Due to the challenge of operating on encrypted values in the fhEVM it is currently unreasonable to run validators at home, which makes this more suitable for networks with a smaller, more trusted validator set. [Sunscreen](https://docs.sunscreen.tech/) is another project actively working on FHE; they have a Rust-based FHE compiler using the BFV scheme with a [playground](https://playground.sunscreen.tech/). They've deployed a [blind auction](https://demo.sunscreen.tech/auctionwithweb3) proof of concept on SepoliaETH. [Fhenix](https://docs.fhenix.io/), a team working on a modular "FHE blockchain extension", plans on launching their testnet in January 2024. They also recently released their [whitepaper on FHE-Rollups](https://www.fhenix.io/fhe-rollups-scaling-confidential-smart-contracts-on-ethereum-and-beyond-whitepaper/). In the last five years, significant advancements have been made to make FHE more usable. Shruthi Gorantala's [framework](https://youtu.be/Q3glyMsaWIE?si=TbhlNxGsozbalIHU&t=1278) for thinking about FHE development as a hierarchy of needs is particularly helpful. The performance improvements listed above address deficiency needs and are contained in Layers 1-3 within the FHE tech stack. For FHE to realize its full potential, we also need to address the growth needs listed in Layers 4-5. ![FHE Hierarchy of Needs](/articles/zero-to-start-applied-fully-homomorphic-encryption-fhe-part-1/ZQ48QaY9vXvlwn-4Eh2B9.webp) FHE Hierarchy of Needs A critical aspect of systems integration is figuring out how to combine FHE technology with other privacy-enhancing primitives like ZKPs and MPC in a way that suits each unique trust model and protocol. Continue to [Part 2: Fundamental Concepts, FHE Development, Applied FHE, Challenges and Open Problems](https://mirror.xyz/privacy-scaling-explorations.eth/wQZqa9acMdGS7LTXmKX-fR05VHfkgFf9Wrjso7XxDzs). 
]]> fhe cryptography privacy security ethereum education post-quantum cryptography zero-knowledge proofs <![CDATA[Zero to Start: Applied Fully Homomorphic Encryption (FHE) Part 2]]> https://pse.dev/blog/zero-to-start-applied-fully-homomorphic-encryption-fhe-part-2 https://pse.dev/blog/zero-to-start-applied-fully-homomorphic-encryption-fhe-part-2 Thu, 21 Dec 2023 00:00:00 GMT fhe cryptography privacy lattice-based cryptography threshold cryptography mpc security ethereum private transactions education <![CDATA[Beyond Zero-Knowledge: What's Next in Programmable Cryptography?]]> https://pse.dev/blog/beyond-zero-knowledge-whats-next-in-programmable-cryptography https://pse.dev/blog/beyond-zero-knowledge-whats-next-in-programmable-cryptography Thu, 09 Nov 2023 00:00:00 GMT cryptography programmable cryptography zero-knowledge proofs mpc secure multi-party computation fhe fully homomorphic encryption io privacy security <![CDATA[UniRep Ceremony: An Invitation to the Celestial Call and UniRep v2]]> https://pse.dev/blog/unirep-ceremony-an-invitation-to-the-celestial-call-and-unirep-v2 https://pse.dev/blog/unirep-ceremony-an-invitation-to-the-celestial-call-and-unirep-v2 Tue, 24 Oct 2023 00:00:00 GMT unirep trusted setup ceremony privacy zero-knowledge proofs identity reputation anonymity/privacy cryptography infrastructure/protocol <![CDATA[Continuing the Zero Gravity Journey]]> https://pse.dev/blog/continuing-the-zero-gravity-journey https://pse.dev/blog/continuing-the-zero-gravity-journey Thu, 19 Oct 2023 00:00:00 GMT zkml zero-knowledge proofs weightless neural networks halo2 lookup compression folding schemes feature selection machine learning research zero gravity <![CDATA[Announcing Anon Aadhaar]]> https://pse.dev/blog/announcing-anon-aadhaar https://pse.dev/blog/announcing-anon-aadhaar Thu, 21 Sep 2023 00:00:00 GMT anonaadhaar privacy zero-knowledge proofs digital identity identity ethereum proof of personhood credentials <![CDATA[TLSNotary Updates]]> 
https://pse.dev/blog/tlsnotary-updates https://pse.dev/blog/tlsnotary-updates Tue, 19 Sep 2023 00:00:00 GMT tlsn mpc secure multi-party computation privacy data portability cryptography selective disclosure security zero-knowledge proofs infrastructure/protocol <![CDATA[From CEX to CCEX with Summa Part 1]]> https://pse.dev/blog/from-cex-to-ccex-with-summa-part-1 https://pse.dev/blog/from-cex-to-ccex-with-summa-part-1 Thu, 14 Sep 2023 00:00:00 GMT 100x more than performing it without having to generate a ZK proof. Summa uses [Halo2](https://github.com/privacy-scaling-explorations/halo2), a proving system that was originally built by [Zcash](https://github.com/zcash/halo2). Beyond high proving speed, Halo2 allows the reuse of existing and reputable trusted setups such as the [Hermez 1.0 Trusted Setup](https://docs.hermez.io/Hermez_1.0/about/security/#multi-party-computation-for-the-trusted-setup) for any application-specific circuit. The reader is now fully equipped with the background to understand the functioning of any part of the Summa ZK Proof of Solvency Protocol. ## End Part 1 [Part 2](https://mirror.xyz/privacy-scaling-explorations.eth/f2ZfkPXZpvc6DUmG5-SyLjjYf78bcOcFeiJX2tb2hS0) dives into a full Proof of Solvency flow. At each step, a detailed explanation of the cryptographic tooling being used is provided. The path toward establishing an industry-wide standard for proof of solvency requires the definition of a protocol that is agreed upon by Exchanges, Cryptographers, and Application Developers. The goal is to collaborate with Exchanges during a Beta program to bring Summa to production and, eventually, come up with an [EIP](https://github.com/summa-dev/eip-draft) to define a standard. Complete this [Google Form](https://forms.gle/uYNnHq3vjNHi5iRh9) if your Exchange (or Custodial Wallet) is interested in joining the program.
Furthermore, if you are interested in sharing feedback or simply entering the community discussion, join the [Summa Solvency Telegram Chat](https://t.me/summazk). Summa is made possible because of contributions from [JinHwan](https://github.com/sifnoc), [Alex Kuzmin](https://github.com/alxkzmn), [Enrico Bottazzi](https://github.com/enricobottazzi). ]]> summa proof of solvency zero-knowledge proofs merkle tree cryptography privacy security centralized exchange computational integrity infrastructure/protocol <![CDATA[From CEX to CCEX with Summa Part 2]]> https://pse.dev/blog/from-cex-to-ccex-with-summa-part-2 https://pse.dev/blog/from-cex-to-ccex-with-summa-part-2 Thu, 14 Sep 2023 00:00:00 GMT 0x123 owns 20 ETH at t -> Exchange owns 20 ETH at t` The Exchange needs to sign an arbitrary off-chain message like _"these funds belong to XYZExchange"_ for each of these addresses, and then submit these signatures, together with the addresses and the message, to the SSC. ![](/articles/from-cex-to-ccex-with-summa-part-2/91YwYrQX4G0dQvsmQhILf.webp) The SSC operates optimistically by storing the triples `{signature, address, message}` within its state **without** performing any verification of their correctness. Any external actor can verify the correctness of those signatures and, if anything wrong is spotted, kick off a dispute. The ownership of an address can be proven only once and "reused" across any number of Proof of Solvency rounds (although providing it at each round would decrease the likelihood of a [friend attack](https://hackmd.io/j85xBCYZRjWVI0eeXWudwA#Proof-of-Assets-PoA---attacks-by-the-exchange)). If the Exchange moves the funds to a new address for any reason, this procedure has to be run again only for this new address. This phase happens asynchronously from a Proof of Solvency round. Up to now, only crypto addresses have been taken into account. But what if the Exchange is holding reserves in fiat currencies?
In that case, the Proof of Ownership of these assets can still be carried out by, inevitably, having to trust the bank. In such a scenario, the bank would need to sign a certificate that attests that _XYZExchange holds x$ in their bank_. This certificate can be used during a Proof of Solvency Round (next section). ## Proof of Solvency Round In this phase, both the assets and the liabilities are snapshotted at a specific timestamp `t` to kick off a Proof of Solvency Round. Within a round, the Exchange needs to provide a ZK `ProofOfSolvency` that constrains their assets to be greater than their liabilities at `t`. Furthermore, the Exchange is required to generate a `ProofOfInclusion` for each user, which proves that the user has been accounted for correctly within the liabilities tree. ![](/articles/from-cex-to-ccex-with-summa-part-2/F83GSyDCOEo8yRVKWZCE_.webp) ### 1\. Snapshot In order for a Proof of Solvency round to start, the Exchange has to snapshot the state of its _liabilities_ and its _assets_ at a specific _timestamp t_. For the liabilities, it means building a Merkle Sum Tree² out of the database containing the users' entries at _t_ . The logic of building the Merkle Sum Tree is the one [previously described](https://mirror.xyz/privacy-scaling-explorations.eth/_1Y6ExFD_Rs3oDxwx5_kWAj_Tl_L9c0Hm7E6SVJei0A). For the assets, it means fetching, from an archive node, the balances of the addresses controlled by the Exchange, as proven in **AddressOwnership**, at the next available block at _t_ for each asset involved in the Proof of Solvency. This operation happens entirely locally on the Exchange premises. No ZK program is involved at this step. No data is shared with the public. Building the Merkle Sum Tree doesn’t require auditing or oversight. 
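A minimal sketch of a liabilities Merkle Sum Tree may help make the snapshot step concrete. This is a simplification: one currency instead of one balance per cryptocurrency, SHA-256 standing in for the Poseidon hash Summa uses inside Halo2, and no ZK proof — just the data structure:

```python
import hashlib

def H(*parts):
    h = hashlib.sha256()
    for part in parts:
        h.update(str(part).encode())
    return h.hexdigest()

def leaf(username, balance):
    # Each node is a pair: (hash, summed balance)
    return (H(username, balance), balance)

def parent(left, right):
    lh, lb = left
    rh, rb = right
    # A parent hashes its children and carries the sum of their balances upward
    return (H(lh, lb, rh, rb), lb + rb)

def build_tree(entries):
    level = [leaf(u, b) for u, b in entries]
    while len(level) > 1:
        level = [parent(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]  # (root_hash, total liabilities)

users = [("alice", 20), ("bob", 50), ("carol", 10), ("dave", 40)]
root_hash, total_liabilities = build_tree(users)

# Solvency check: assets controlled by the exchange must cover liabilities
exchange_assets = 150
assert exchange_assets >= total_liabilities  # 150 >= 120
```

The root therefore commits simultaneously to every user entry and to the grand total of liabilities, which is what the solvency circuit later compares against the asset sums.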
Any malicious operation that the Exchange can perform here, such as: - Adding users with negative balances - Excluding users - Understating users’ balances will be detected when the Proof of Inclusion (step 3) is handed over to individual users for verification. Note that the protocol doesn’t have to worry if the Exchange is adding fake users to the Merkle Sum Tree. Each user added to the tree would increase the liabilities of the Exchange, which is against their interest. This is true as long as 1) the user balance is not negative and 2) the accumulated sum of the balances doesn’t overflow the prime field. These constraints are enforced at step 3. ### 2\. Proof of Solvency In order to prove³ their Solvency at time _t_, the Exchange needs to provide cryptographic proof that constrains the assets controlled by the Exchange at _t_ to be greater than the liabilities at _t_. It is necessary to avoid the liabilities denominated in a cryptocurrency being backed by assets denominated in a different currency. This may result in a solvency “only on paper”, which may collapse due to a lack of liquidity or rate volatility. Because of this, each asset is compared against the total liabilities denominated in the same cryptocurrency. The Proof of Solvency is generated leveraging the following ZK [Circuit](https://github.com/summa-dev/summa-solvency/blob/master/zk_prover/src/circuits/solvency.rs). ![](/articles/from-cex-to-ccex-with-summa-part-2/ueB3hQDWFAAZhHSZLV2vN.webp) **inputs** - The private inputs `penultimate_level_left_hash, penultimate_level_left_balances[], penultimate_level_right_hash and penultimate_level_right_balances[]` represent the two nodes in the penultimate level of the Merkle sum tree and can be extracted from the Merkle Sum Tree data structure built in the previous step.
- The public input `ccex_asset_sums[]` represents the amount of assets owned by the Exchange for each cryptocurrency that is part of the Round as per the assets Snapshot performed in the previous step. **constraints** - Perform a hashing of the penultimate nodes of the tree to get the Root Node (`root_hash` and `root_balances[]`). `root_balances[]` is an intermediary value that represents an array of the balances stored in the Root Node. In particular, Summa uses Poseidon hashing, which is a very efficient hashing algorithm when used inside zkSNARKs. - Check that the liability sums for each cryptocurrency in the `root_balances[]` are less than the respective `ccex_asset_sums[]` passed as input. In the example, the Exchange is generating a Proof of Solvency for multiple assets `N_ASSETS`, therefore the length of the arrays `penultimate_level_left_balances[]`, `penultimate_level_right_balances[]`, `ccex_asset_sums[]`, and `root_balances[]` is equal to `N_ASSETS`. **(public) output** - `root_hash` of the Merkle Sum Tree After the proof is generated locally by the Exchange, it is sent for verification to the SSC along with its public inputs `ccex_asset_sums[]`, `root_hash` and the timestamp. The SSC verifies the validity of the proof. On successful verification, the Contract stores the public inputs. The immutability of a Smart Contract guarantees that people have consistent views of such information. If the same data were published on a centralized server, it would be subject to modification by a malicious exchange. This attack is described in [Broken Proofs of Solvency in Blockchain Custodial Wallets and Exchanges, Chalkias, Chatzigiannis, Ji - 2022 - Paragraph 4.4](https://eprint.iacr.org/2022/043.pdf). At this point, it's worth noting that no constraints on `ccex_asset_sums[]` are performed within the Circuit nor within the Smart Contract. Instead, Summa adopts an optimistic approach in which these data are accepted as they are.
As in the case of `AddressOwnership`, external actors can kick off a dispute if the Exchange is trying to claim ownership of assets in excess of what they actually own. Spotting a malicious exchange is very straightforward: it only requires checking whether the assets controlled by the Exchange Addresses at the next available block at timestamp `t` match `ccex_asset_sums[]`. ### 3\. Proof of Inclusion Up to this point, the Exchange has proven its solvency, but the liabilities could have been calculated maliciously. For example, an Exchange might have arbitrarily excluded "whales" from the liabilities tree to achieve a dishonest proof of solvency. Proof of Inclusion means proving that a user, identified by their username and balances denominated in different currencies, has been accounted for correctly in the liabilities. In practice, it means generating a ZK proof that an entry `username -> balanceEth`, `balanceBTC`, ... is included in a Merkle sum tree with a root equal to the one published on-chain in the previous step. The Proof of Inclusion is generated⁴ leveraging the following zk [Circuit](https://github.com/summa-dev/summa-solvency/blob/master/zk_prover/src/circuits/merkle_sum_tree.rs). ![](/articles/from-cex-to-ccex-with-summa-part-2/ERA8zoNXOyP0Wk7ZQxdpU.webp) **inputs** - The private inputs `username` and `balances[]` represent the data related to the user whose proof of inclusion is being generated. - The private inputs `path_indices[]`, `path_element_hashes[]` and `path_element_balances[][]` represent the Merkle Proof for the user leaf node. - The public input `leaf_hash` is generated by hashing the concatenation of `username` and `balances[]`. Note that it would have been functionally the same to expose `username` and `balances[]` as public inputs of the circuit instead of `leaf_hash` but that would have made the proof subject to private data leaks if accessed by an adversarial actor.
Instead, by only exposing `leaf_hash`, a malicious actor that comes across the proof cannot access any user-related data. **constraints** - For the first level of the Merkle Tree, `leaf_hash` and `balances` represent the current Node while `path_element_hashes[0]` and `path_element_balances[0][]` represent the sibling Node. - Performs the hashing between the current Node and the sibling Node `H(leaf_hash, balances[0], ..., balances[n], path_element_hashes[0], path_element_balances[0][0], path_element_balances[0][n])` to get the hash of the next Node. In particular, `path_indices[0]` is a binary value that indicates the relative position between the current Node and the sibling Node. - Constrains each value in `balances[]` and `path_element_balances[0][]` to be within the range of 14 bytes to avoid overflow and negative values being added to the tree. - For each currency `i` performs the sum between `balances[i]` of the current Node and the `path_element_balances[0][i]` of the sibling Node to get the balances of the next Node. - For any remaining level `j` of the Merkle Tree, the next Node from level `j-1` represents the current Node, while `path_element_hashes[j]` and `path_element_balances[j][]` represent the sibling Node - Performs the hashing between the current Node and the corresponding sibling Node to get the hash of the next Node - Constrains each balance of the current Node and each balance of the corresponding sibling Node to be within the range of 14 bytes to avoid overflow and negative values being added to the tree. - For each currency `i` performs the sum between the balances of the current Node and the balances of the sibling Node to get the balances of the next Node. In the example, the Exchange is generating a Proof of Solvency for multiple assets `(N_ASSETS)`. All the users' information is stored in a Merkle Sum Tree with height `LEVELS`. `path_indices` and `path_element_hashes` are arrays of length `LEVELS`.
`path_element_balances` is a bidimensional array in which the first dimension is the `LEVELS` and the second is `N_ASSETS`. **(public) output** - `root_hash` of the Merkle Sum Tree, the result of the last level of hashing. The proof is generated by the Exchange and shared with each individual user. Nothing is recorded on a blockchain in this process. The proof doesn't reveal to the receiving user any information about the balances of any other users, the number of users of the Exchange or even the aggregated liabilities of the Exchange. The verification of the Proof of Inclusion happens locally on the user device. It involves verifying that: - The cryptographic proof is valid - The `leaf_hash`, public input of the circuit, matches the combination `H(username, balance[0], ..., balance[n])` of the user with balances as snapshotted at _t_ - The `root_hash`, public output of the circuit, matches the one published on-chain in step 2. If any user finds out that they haven't been included in the Merkle Sum Tree, or have been included with an understated balance, a warning related to the potential non-solvency of the Exchange has to be raised, and a dispute should be opened. The rule is simple: if enough users request a Proof of Inclusion and they can all verify it, it becomes evident that the Exchange is not lying or understating its liabilities. If just one user cannot verify their Proof of Inclusion, it means that the Exchange is lying about its liabilities (and, therefore, its solvency). At the current state of ZK research, the user has to verify the correct inclusion inside the Merkle Sum Tree **in each** Proof of Solvency Round. An [experimental feature](https://github.com/summa-dev/summa-solvency/pull/153) using more advanced Folding Schemes, such as [Nova](https://www.youtube.com/watch?v=SwonTtOQzAk), would allow users to verify their correct inclusion in any round **up to the current round** with a single tiny proof.
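The arithmetic relation the inclusion circuit enforces can be sketched outside of ZK: recompute the root from the user's leaf and Merkle path, range-check every balance along the way, and compare against the published root. This is a simplification — SHA-256 in place of Poseidon, a single currency, and no actual proof — purely for illustration:

```python
import hashlib

RANGE = 2 ** (8 * 14)  # balances must fit in 14 bytes to prevent overflow

def H(*parts):
    h = hashlib.sha256()
    for p in parts:
        h.update(str(p).encode())
    return h.hexdigest()

def verify_inclusion(leaf_hash, balance, path, onchain_root_hash):
    """Recompute the root from a leaf and its Merkle-sum path.
    `path` is a list of (index, sibling_hash, sibling_balance)."""
    node_hash, node_bal = leaf_hash, balance
    for index, sib_hash, sib_bal in path:
        # Range-check every balance to rule out negatives / overflow
        assert 0 <= node_bal < RANGE and 0 <= sib_bal < RANGE
        if index == 0:  # current node is the left child
            node_hash = H(node_hash, node_bal, sib_hash, sib_bal)
        else:           # current node is the right child
            node_hash = H(sib_hash, sib_bal, node_hash, node_bal)
        node_bal += sib_bal  # carry the balance sum upward
    return node_hash == onchain_root_hash

# Four-user tree: alice(20) bob(50) | carol(10) dave(40)
leaves = [H("alice", 20), H("bob", 50), H("carol", 10), H("dave", 40)]
n01 = (H(leaves[0], 20, leaves[1], 50), 70)
n23 = (H(leaves[2], 10, leaves[3], 40), 50)
root = (H(n01[0], 70, n23[0], 50), 120)

# Bob's path: his leaf is the right child, then n01 is the left child
path = [(1, leaves[0], 20), (0, n23[0], 50)]
assert verify_inclusion(H("bob", 50), 50, path, root[0])
```

If the Exchange omitted Bob or understated his balance, the recomputed root would not match the one stored on-chain, and the verification would fail.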
## What makes for a good Proof of Solvency Summa provides the cryptography layer required for an Exchange to run a Proof of Solvency. But that's not all; there are further procedures outside of the Summa protocol that determine the legitimacy of a Proof of Solvency process. ### Incentive Mechanism As explained in the **Proof of Inclusion** section, the more users verify their correct inclusion in the Liability Tree, the more sound this proof is. If not enough users verify their Proofs of Inclusion, a malicious Exchange can manipulate or discard the liabilities of users and still be able to submit a valid Proof of Solvency without being detected. The probability of this happening is denoted as the **failure probability**. The Failure Probability is common to any Proof of Solvency scheme as described by [Generalized Proof of Liabilities, Chalkias and Ji - section 5](https://eprint.iacr.org/2021/1350.pdf). A finding of the paper is that within an Exchange of 150M users, only 0.05% of users verifying inclusion proofs can guarantee an overwhelming chance of detecting an adversarial Exchange manipulating 0.01% of the entries in the Liabilities Tree. To reduce the Failure Probability, the Exchange is invited to run programs to incentivize users to perform the Proof of Inclusion Verification. For example, the Exchange can provide a discounted trading fee for a limited period of time for each user that successfully performs the Proof Verification. On top of that, the percentage of users that performed such verification can be shared with the public as a metric of the soundness of such a Proof of Solvency process. ### Proof of Inclusion Retrieval The finding related to Failure Probability described in the previous paragraph relies on the hypothesis that the Exchange doesn’t know which users will verify their inclusion proof in advance.
Instead, if the Exchange knows this information, they could exclude from the Liabilities Tree those users who won’t verify their correct inclusion. This would lead to a higher failure probability. But how would the Exchange know? If the process of retrieving the Proof of Inclusion is performed on demand by the user, for example, passing through the Exchange UI, the Exchange gets to know which users are actually performing the verification. If this process is repeated across many rounds, the Exchange can forecast with high probability the users who are not likely to perform verification. A solution to this issue is to store the proofs on a distributed file system such as IPFS (remember that the proof doesn’t reveal any sensitive data about the Exchange’s users). Users would fetch the data from a network of nodes. As long as these nodes don’t collude with the Exchange, the Exchange won’t know which proof has been fetched and which has not. An even more exotic solution is to rely on [Private Information Retrieval](https://en.wikipedia.org/wiki/Private_information_retrieval) techniques. This solution necessitates that the Exchange generates all Proofs of Inclusion simultaneously. Even though this operation is infinitely parallelizable, it introduces an overhead for the Exchange when compared to the on-demand proof generation solution. A further cost for the Exchange involves the storage of such proofs. ### Frequency Proof of Solvency refers to a specific snapshot in time. Even though the Exchange might be solvent at _t_, nothing stops them from going insolvent at _t+1s_. The Exchange can potentially borrow money just to perform the Proof of Solvency and then return it as soon as it is completed. Increasing the frequency of Proof of Solvency rounds makes such [attacks](https://hackmd.io/@summa/SJYZUtpA2) impractical. [BitMEX](https://blog.bitmex.com/bitmex-provides-snapshot-update-to-proof-of-reserves-proof-of-liabilities/) performs it on a bi-weekly basis.
While this is already a remarkable achievement, with the technology provided by Summa it can be performed on a per-minute basis. From a performance point of view, the main bottleneck is the creation and update of the Merkle Sum Tree. This process can be sped up by parallelizing it on machines with many cores. Surprisingly, prover time is not a bottleneck, given that a proof can be generated in seconds (or milliseconds) on any consumer device. Another way to avoid such attacks is to enforce proofs of solvency in the past: practically, the Exchange is asked to perform a Proof of Solvency round for a randomly sampled past timestamp.

### Independent Dispute Resolution Committee

The whole Proof of Solvency flow requires oversight of the Exchange's actions at three steps:

- When the Exchange submits the `AddressOwnership` proof, the validity of the signatures must be checked
- When the Exchange submits the `ProofOfSolvency`, the validity of the `asset_sums` used as input must be checked
- When a user verifies their `ProofOfInclusion`, the validity of the user data used as input must be verified.

Performing the first two verifications might be overwhelming for many users. Instead, a set of committees (independent of both Summa and the Exchange) might be assigned to perform them and raise a flag whenever a malicious proof is submitted. While the first two verifications can be performed by anyone, the last can only be performed by the user receiving the proof, since they are the only one (beyond the Exchange) with access to their user data. Note that the Exchange can unilaterally modify the users' data in their database (and even in the interface that the users interact with).
Because of that, resolving a dispute over the correct accounting of a user within the Liabilities Tree is not a trivial task, as described in [Generalized Proof of Liabilities, Chalkias and Ji - section 4.2](https://eprint.iacr.org/2021/1350.pdf). A solution is to bind this data to a timestamped commitment signed by both the user and the Exchange. By signing the data, the user approves its status; therefore any unsigned data added to the Liabilities Tree can be unambiguously identified as malicious.

### UX

Once a user receives a Proof of Inclusion, the verification can be performed in many ways. For example, the whole verification can happen by clicking a magic _verify_ button. But given that the premise of a Proof of Solvency protocol is to not trust the Exchange, a user is unlikely to trust a black-box API that the Exchange provides to verify the proof. A more transparent way to design the verification flow is to let users fork a repo and run the verification code locally. A similar approach is adopted by both [BitMEX](https://blog.bitmex.com/bitmex-provides-snapshot-update-to-proof-of-reserves-proof-of-liabilities/) and [Binance](https://www.binance.com/en/proof-of-reserves). While this latter approach is preferable, it may also seem intimidating and time-consuming for many users. A third way is a user-friendly open-source interface (or, even better, many interfaces) run by independent actors that allows the verification of such proofs. In this scenario, the soundness of the verification process is guaranteed by the auditability of the code and the independence of the operators, without sacrificing UX. Alternatively, the verification function can be exposed as a _view function_ in the Summa Smart Contract.
In that case, the benefit would be twofold:

- The code running the verification is public, so everyone can audit it
- Many existing interfaces, such as Etherscan, already let users interact with the Smart Contract and call the function.

## Conclusion

This blog post presented in detail how an Exchange can start providing a Proof of Solvency to their users as a first step towards a fully Cryptographically Constrained Exchange (CCEX). By doing so, the users of the Exchange can benefit from the privacy and performance of a Centralized Exchange (in terms of transaction settlement speed and close-to-zero fees), while still having a cryptographic proof that their deposits are covered. A follow-up blog post will provide a more practical tutorial on how to leverage the Summa Backend to perform the operations described above.

The path toward an industry-wide standard for proof of solvency requires the definition of a protocol agreed upon by Exchanges, Cryptographers, and Application Developers. The goal is to collaborate with Exchanges during a Beta program to bring Summa to production and, eventually, come up with an [EIP](https://github.com/summa-dev/eip-draft) to define a standard. Complete this [Google Form](https://forms.gle/uYNnHq3vjNHi5iRh9) if your Exchange (or Custodial Wallet) is interested in joining the program.

### Benchmark Notes

1. All the benchmarks for a round assume an Exchange with 500k users performing a Proof of Solvency for 20 different cryptocurrencies. The benches are run on a MacBook Pro 2023, M2 Pro, 32GB RAM, 12 cores. All the benchmarks can be reproduced [here](https://github.com/summa-dev/summa-solvency/tree/benches-blogpost/zk_prover) by running `cd zk_prover` and `cargo bench`.
2. `461.08s` to build a Merkle Sum Tree from scratch. At any subsequent round, the process only requires leaf updates, so the required time is significantly reduced.
3.
`460.20s` to generate the proof of `1568` bytes. The proof costs `395579` gas units for onchain verification. 4. `3.61s` to generate the proof of `1632` bytes. The verification of the proof takes `6.36ms`. ]]> summa proof of solvency zero-knowledge proofs merkle sum tree cryptography privacy security centralized exchange proof of inclusion infrastructure/protocol <![CDATA[Bandada is live!]]> https://pse.dev/blog/bandada-is-live https://pse.dev/blog/bandada-is-live Wed, 23 Aug 2023 00:00:00 GMT bandada semaphore privacy zero-knowledge proofs identity credentials security infrastructure/protocol <![CDATA[p0tion V1.0 Release]]> https://pse.dev/blog/p0tion-v10-release https://pse.dev/blog/p0tion-v10-release Tue, 08 Aug 2023 00:00:00 GMT p0tion trusted setups zero-knowledge proofs groth16 ceremony maci rln cryptography security toolkits <![CDATA[Rate-Limiting Nullifier (RLN)]]> https://pse.dev/blog/rate-limiting-nullifier-rln https://pse.dev/blog/rate-limiting-nullifier-rln Tue, 01 Aug 2023 00:00:00 GMT rln rate-limiting nullifier nullifiers zero-knowledge proofs spam protection privacy anonymity/privacy cryptography security infrastructure/protocol <![CDATA[The Power of Crowdsourcing Smart Contract Security for L2 Scaling Solutions]]> https://pse.dev/blog/the-power-of-crowdsourcing-smart-contract-security-for-l2-scaling-solutions https://pse.dev/blog/the-power-of-crowdsourcing-smart-contract-security-for-l2-scaling-solutions Tue, 18 Jul 2023 00:00:00 GMT <![CDATA[Learnings from the KZG Ceremony]]> https://pse.dev/blog/learnings-from-the-kzg-ceremony https://pse.dev/blog/learnings-from-the-kzg-ceremony Tue, 11 Jul 2023 00:00:00 GMT kzg ceremony trusted setups ethereum cryptography scaling web development security infrastructure/protocol wasm <![CDATA[zkEVM Community Edition Part 1: Introduction]]> https://pse.dev/blog/zkevm-community-edition-part-1-introduction https://pse.dev/blog/zkevm-community-edition-part-1-introduction Tue, 23 May 2023 00:00:00 GMT zkevm 
zero-knowledge proofs ethereum scaling l2 rollup infrastructure/protocol proof systems virtual machine computational integrity <![CDATA[zkEVM Community Edition Part 2: Components]]> https://pse.dev/blog/zkevm-community-edition-part-2-components https://pse.dev/blog/zkevm-community-edition-part-2-components Tue, 23 May 2023 00:00:00 GMT

"\[Zero knowledge proofs\] deliver _scalability_ by exponentially compressing the amount of computation needed to verify the integrity of a large batch of transactions." [\- Eli Ben-Sasson](https://nakamoto.com/cambrian-explosion-of-crypto-proofs/)

A ZK proof involves two parties: the prover and the verifier. In a zkEVM, the prover generates the proof of validity and the verifier checks that the proof is correct. An L1 proof of validity confirms every transaction on Mainnet Ethereum. For a [ZK-rollup](https://ethereum.org/en/developers/docs/scaling/zk-rollups/), the proof of validity confirms every L2 transaction on the rollup and is verified as a single L1 transaction.

Zero-knowledge proofs offer the same level of security as re-executing transactions to verify their correctness. However, they require less computation and fewer resources during verification. This means that more people can participate in maintaining the network by running nodes and contributing to consensus. Nodes using specialized hardware will be required to generate proofs of validity, but once the proof is posted on-chain, nearly any node will be able to verify it with a low-resource cryptographic operation. A zkEVM makes it theoretically possible to run an Ethereum [node on your phone](https://youtu.be/hBupNf1igbY?t=590).

## SNARKs

The zkEVM uses [zkSNARKs](https://blog.ethereum.org/2016/12/05/zksnarks-in-a-nutshell): a type of ZK protocol that is general purpose and capable of turning nearly any computation into a ZK proof.
Before zkSNARKs, building ZK proofs was a highly specialized math problem that required a skilled cryptographer to create a unique ZK protocol for every new function. The discovery of zkSNARKs turned the creation of ZK protocols from a specialized math problem into a [generalized programming task](https://archive.devcon.org/archive/watch/6/zkps-and-programmable-cryptography/?tab=YouTube).

[zkSNARKs stand for Zero-Knowledge Succinct Non-interactive ARguments of Knowledge](https://z.cash/technology/zksnarks/). Zero-knowledge refers to the protocol's capacity to prove a statement is true "without revealing any information beyond the validity of the statement itself." Though the ZK part tends to get the most attention, it is in fact optional and unnecessary for zkEVMs. The most relevant property is succinctness.

![https://www.youtube.com/watch?v=h-94UhJLeck](/articles/zkevm-community-edition-part-2-components/Sd2dQ6Q8Y2nPIgO0cqr9j.webp)

https://www.youtube.com/watch?v=h-94UhJLeck

Succinct proofs are short and fast to verify. It must take less time to verify a SNARK than to recompute the statements the SNARK is proving. Quickly verifying transactions via short proofs is how zkEVMs achieve scalability.

In a non-interactive proof, a single proof is submitted, and the verifier can either accept or reject it as valid. There is no need to go back and forth between the prover and verifier. The proof of validity is created once and stored on-chain, where it can be verified by anyone at any time.

## Opcodes

Every time a user makes a transaction on Ethereum, they set off a chain of instructions that change the state of the [Ethereum Virtual Machine (EVM)](https://ethereum.org/en/developers/docs/evm/). These instructions are [opcodes](https://ethereum.org/en/developers/docs/evm/opcodes/). Opcodes are the language of the EVM, and each opcode has a distinct function specified in the Ethereum [yellow paper](https://ethereum.org/en/developers/tutorials/yellow-paper-evm/).
Opcodes can read values from the EVM, write values to the EVM, and compute values in the EVM. Popular programming languages such as [Solidity](https://soliditylang.org/) must be translated, or compiled, to opcodes that the EVM can understand and run. Opcodes change the state of the EVM, whether that is the balance of ETH in an address or data stored in a smart contract. All the changes are distributed or updated to every node in the network. Each node takes the same inputs or transactions and should arrive at the same outputs or state transition as every other node in the network – a secure and decentralized, but slow and expensive, way to reach consensus.

The zkEVM attempts to prove that the EVM transitioned from its current state to its new state correctly. To prove the entire state transitioned correctly, the zkEVM must prove each opcode was executed correctly. To create a proof, circuits must be built.

## Circuits

SNARKs are created using [arithmetic circuits](https://en.wikipedia.org/wiki/Arithmetic_circuit_complexity), a process also known as [arithmetization](https://medium.com/starkware/arithmetization-i-15c046390862). Circuits are a necessary intermediate step between EVM opcodes and the ZK proofs that validate them. A circuit defines the relation between public (revealed) and private (hidden) inputs. A circuit is designed so that only a specific set of inputs can satisfy it. If a prover can satisfy the circuit, that is enough to convince the verifier that they know the private inputs without having to reveal them. This is the zero-knowledge part of zkSNARKs: the inputs do not need to be made public to prove they are known.

![https://archive.devcon.org/archive/watch/6/eli5-zero-knowledge/?tab=YouTube](/articles/zkevm-community-edition-part-2-components/rvCrquqQ87uVWOD6dvtg_.webp)

https://archive.devcon.org/archive/watch/6/eli5-zero-knowledge/?tab=YouTube

To create a SNARK, you must first convert a function to circuit form.
Writing a circuit breaks down the function into its simplest arithmetic logic of addition and multiplication. Because addition can express linear computations and multiplication can express nonlinear ones, these two simple operations become highly expressive when stacked together and applied to polynomials.

![Polynomials are math expressions with "many terms." ](/articles/zkevm-community-edition-part-2-components/gizYcrA2NKJ4Ow11FlxqJ.webp)

Polynomials are math expressions with "many terms."

In the context of this article, it is only necessary to know that polynomials have two useful properties: they are easy to work with, and they can efficiently encode a lot of information without needing to reveal all the information they represent. In other words, polynomials can be succinct: they can represent a complex computation yet remain short and fast to verify. For a complete explanation of how zkSNARKs work and why polynomials are used, [this paper](https://arxiv.org/pdf/1906.07221.pdf) is a good resource. For a practical explanation of how polynomial commitment schemes are applied in Ethereum scaling solutions, check out [this blog post](https://scroll.io/blog/kzg).

With the basic mathematical building blocks of polynomials, addition, and multiplication, circuits can turn nearly any statement into a ZK proof. In circuit form, statements become testable: verifiable and provable.

![Visualization of a simple arithmetic circuit https://node101.io/blog/a_non_mathematical_introduction_to_zero_knowledge_proofs](/articles/zkevm-community-edition-part-2-components/G1B3_UHeZ8CLMErT4K3pr.webp)

Visualization of a simple arithmetic circuit https://node101.io/blog/a\_non\_mathematical\_introduction\_to\_zero\_knowledge\_proofs

In a circuit, gates represent arithmetic operations (addition or multiplication). Gates are connected by wires, and every wire has a value.
In the image above:

- The left-hand circuit represents the equation _a + b = c_
- The right-hand circuit represents the equation _a x b = c_

The input wires are _a_ and _b_, and they can be made public or kept private. The output wire is _c_. The circuit itself and the output _c_ are public and known to both the prover and verifier.

![Example of a slightly more complex circuit https://nmohnblatt.github.io/zk-jargon-decoder/definitions/circuit.html](/articles/zkevm-community-edition-part-2-components/R9tDApVpc4eEEVAVoiFYo.webp)

Example of a slightly more complex circuit https://nmohnblatt.github.io/zk-jargon-decoder/definitions/circuit.html

In the image above, the circuit expects:

- Inputs _x₀_, _x₁_, and _x₂_
- Output _y = 5x₀ + 3(x₁ + x₂)_

For a prover to demonstrate they know the private inputs without revealing them to the verifier, they must be able to complete the circuit and reach the same output known to both parties. Circuits are designed so that only the correct inputs can go through all the gates and arrive at the publicly known output. Each step is iterative and must be done in a predetermined order to satisfy the circuit logic. In a well-designed circuit, there should be no feasible way for a prover to make it through the circuit without knowing the correct inputs.

In the zkEVM Community Edition, circuits must prove that each transaction, all the opcodes used in the transaction, and the sequence of the operations are correct. As building circuits is a new and rapidly evolving field, there is still no "right way" to define the computation the circuit is trying to verify. To be practical, circuits must also be written efficiently, minimizing the number of steps required while still being capable of satisfying the verifier. The difficulty of building a zkEVM is compounded by the fact that the skills required to build the necessary components are rare.
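The gate-and-wire intuition from the circuit figures above can be made concrete with a toy evaluation. This is only a sketch of the wiring, not the zkEVM's actual arithmetization: a real SNARK constrains these gates over a finite field rather than evaluating them in the clear.

```python
# Toy evaluation of the example circuit y = 5*x0 + 3*(x1 + x2), expressed
# only with addition and multiplication gates, mirroring the figure.
def add_gate(a, b):
    return a + b

def mul_gate(a, b):
    return a * b

def circuit(x0, x1, x2):
    w1 = add_gate(x1, x2)    # wire: x1 + x2
    w2 = mul_gate(3, w1)     # wire: 3 * (x1 + x2)
    w3 = mul_gate(5, x0)     # wire: 5 * x0
    return add_gate(w3, w2)  # output wire y

# The prover's private inputs "satisfy" the circuit iff they reproduce the
# public output y known to both prover and verifier.
assert circuit(1, 2, 3) == 5 * 1 + 3 * (2 + 3)  # y = 20
```

Each intermediate wire (`w1`, `w2`, `w3`) corresponds to the output of one gate; in a SNARK, each such gate becomes an algebraic constraint that the prover's witness must satisfy.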
The Community Edition is an attempt to overcome both the technical and organizational challenges of building a consensus-level compatible zkEVM. The goal is to create a public good that serves as a common point of collaboration for the zkEVM community. --- The zkEVM Community Edition is possible thanks to the contribution of many teams including the [PSE](https://appliedzkp.org/), [Scroll Tech](https://scroll.io/), and [Taiko](https://taiko.xyz/) along with many individual contributors. Teams such as [Zcash](https://electriccoin.co/) have also researched and developed proving systems and libraries that have greatly benefited zkEVM efforts. The zkEVM Community Edition is an open-source project and can be accessed in the [main repo](https://github.com/privacy-scaling-explorations/zkevm-specs). If you're interested in helping, you can learn more by visiting the [contribution guidelines](https://github.com/privacy-scaling-explorations/zkevm-circuits/blob/main/CONTRIBUTING.md). The Community Edition is being built in public and its current status can be viewed on the [project board](https://github.com/orgs/privacy-scaling-explorations/projects/3/views/1). For any general questions, feel free to ask in the [PSE Discord.](https://discord.com/invite/sF5CT5rzrR) --- _This series of articles intends to provide an overview of the zkEVM Community Edition in a way that is broadly accessible. 
Part 2 is a summary of the common components used in most zkEVMs._ _[Part 1: Introduction](https://mirror.xyz/privacy-scaling-explorations.eth/I5BzurX-T6slFaPbA4i3hVrO7U2VkBR45eO-N3CSnSg)_ _[Part 3: Logic and Structure](https://mirror.xyz/privacy-scaling-explorations.eth/shl8eMBiObd6_AUBikXZrjKD4fibI6xUZd7d9Yv5ezE)_ ]]> zkevm zero-knowledge proofs ethereum scaling snarks circuits computational integrity cryptography proof systems education <![CDATA[zkEVM Community Edition Part 3: Logic and Structure]]> https://pse.dev/blog/zkevm-community-edition-part-3-logic-and-structure https://pse.dev/blog/zkevm-community-edition-part-3-logic-and-structure Tue, 23 May 2023 00:00:00 GMT The zkEVM Community Edition is possible thanks to the contribution of many teams including the [PSE](https://appliedzkp.org/), [Scroll Tech](https://scroll.io/), and [Taiko](https://taiko.xyz/) along with many individual contributors. Teams such as [Zcash](https://electriccoin.co/) have also researched and developed proving systems and libraries that have greatly benefited zkEVM efforts. The zkEVM Community Edition is an open-source project and can be accessed in the [main repo](https://github.com/privacy-scaling-explorations/zkevm-specs). If you're interested in helping, you can learn more by visiting the [contribution guidelines](https://github.com/privacy-scaling-explorations/zkevm-circuits/blob/main/CONTRIBUTING.md). The Community Edition is being built in public and its current status can be viewed on the [project board](https://github.com/orgs/privacy-scaling-explorations/projects/3/views/1). For any general questions, feel free to ask in the [PSE Discord.](https://discord.com/invite/sF5CT5rzrR) --- _This series intends to provide an overview of the zkEVM Community Edition in a way that is broadly accessible. 
Part 3 reviews the general logic and structure of the zkEVM Community Edition._ _[Part 1: Introduction](https://mirror.xyz/privacy-scaling-explorations.eth/I5BzurX-T6slFaPbA4i3hVrO7U2VkBR45eO-N3CSnSg)_ _[Part 2: Components](https://mirror.xyz/privacy-scaling-explorations.eth/AW854RXMqS3SU8WCA7Yz-LVnTXCOjpwhmwUq30UNi1Q)_ ]]> zkevm zero-knowledge proofs ethereum scaling circuits cryptography computational integrity halo2 lookup tables proof systems <![CDATA[ZKML: Bridging AI/ML and Web3 with Zero-Knowledge Proofs]]> https://pse.dev/blog/zkml-bridging-aiml-and-web3-with-zero-knowledge-proofs https://pse.dev/blog/zkml-bridging-aiml-and-web3-with-zero-knowledge-proofs Tue, 02 May 2023 00:00:00 GMT zero-knowledge proofs machine learning privacy circom computational integrity ethereum web3 cryptography research toolkits <![CDATA[PSE Security: What’s New]]> https://pse.dev/blog/pse-security-what-is-new https://pse.dev/blog/pse-security-what-is-new Tue, 25 Apr 2023 00:00:00 GMT zk security l2 formal-verification circom audits <![CDATA[The zk-ECDSA Landscape]]> https://pse.dev/blog/the-zk-ecdsa-landscape https://pse.dev/blog/the-zk-ecdsa-landscape Tue, 18 Apr 2023 00:00:00 GMT

). This means there are millions of ECDSA keys for Ethereum addresses ready to be utilised. To support privacy on-chain with these existing keys, we need to add some extra logic to our smart contracts. Instead of directly verifying signatures, we can verify ECDSA signatures and execute arbitrary logic inside ZKPs, verify those proofs on-chain, and then execute our smart contract logic. Thus, without any change to Ethereum itself, we can support privacy where users want it.

## Use Cases

### Mixers

Mixers were one of the first widespread use cases for ZKPs on Ethereum, with Tornado Cash handling [over $7B](https://home.treasury.gov/news/press-releases/jy0916).
Tornado Cash prevents double spending by using an interactive nullifier, a special piece of data the user must hold onto to access their funds. Keeping this nullifier secure can be just as important as keeping a private key secure, but in practice it needs to, at some point, be in plaintext outside the wallet or secure enclave in order to generate the ZKP. This is a significant UX problem, especially for a security-conscious user who has already gone to great lengths to protect their private key. zk-ECDSA can solve this by generating a nullifier deterministically from the private key, while keeping the user anonymous. This is a subtle problem, and existing ECDSA signatures aren't quite suitable. We explain the PLUME nullifier, the top contender to solve this problem, below.

#### Blacklists

Financial privacy is good, but it can have downsides. The US Treasury [accused Tornado Cash](https://home.treasury.gov/news/press-releases/jy0916) of laundering over $455M of funds stolen by a US-sanctioned North Korean hacker group. Tornado Cash itself was subsequently sanctioned. There may be a middle ground, where privacy is preserved for normal users but authorities can prevent hackers from receiving their funds. The following is not an ideal scheme, as it gives authorities the power to freeze funds of law-abiding citizens, but it is a start. In order to get your funds out of a compliant mixer, you must prove in a ZKP that you own an address that deposited funds, has not already retrieved its funds, and _does not belong to a blacklist_. This means having to do a proof of non-membership inside the ZKP.

### Private Safes

Many projects use safes like [Safe](https://safe.global/) (formerly Gnosis Safe) to control funds split between multiple parties. Generally this means using your personal key to sign votes for how the money is spent; those votes are then sent to the chain and executed when enough parties agree.
However, this means publicly linking your personal finances to some project, which is generally not desirable. Instead of sending a publicly readable signature, the user can send a ZKP proving their vote without revealing their identity on-chain. [zkShield](https://github.com/bankisan/zkShield) is an example of a private safe in development.

It may be surprising that we don't need nullifiers for safes, since they are usually required for private financial applications. If you wanted to keep your votes private from other owners of the same safe, you would need nullifiers. However, people sharing a safe are generally cooperative, so the sensible approach by zkShield is to create non-private signatures off-chain with efficient-ecdsa, and verify them in a ZKP. Nullifiers are also often used in financial applications to prevent double-spending, but that is irrelevant here because safes don't have an inbuilt payment system.

### Private Voting

Voting on, for example, a DAO proposal (or on political candidates or legislation!) should generally be done privately to prevent retribution, bribery, and collusion. Instead of collating signatures, we can collate ZKPs, provided they output a deterministic nullifier to prevent double votes.

### Airdrops

Many projects such as [Arbitrum](https://arbitrum.io/) and [ENS](https://ens.domains/) have introduced a governance token as they mature. This is generally done to reward early users and give the community power over the protocol. However, if a token holder wants to vote on a proposal anonymously, they will have to sell their token, use a mixer, buy the token back at another address, and then vote with that unlinked address. Instead, we could offer airdrops anonymously by default. To do this, you simply make a list of all the addresses eligible for the drop, hash them into a Merkle tree, and allow people to claim their tokens by proving membership in that list.
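The claim flow just described can be sketched in a few lines. This is a hypothetical illustration: sha256 stands in for a circuit-friendly hash such as Poseidon, and in the real system the `verify` step runs inside a ZK circuit so the leaf (and therefore the claimer's address) stays hidden.

```python
# Sketch of the airdrop flow: hash eligible addresses into a Merkle tree,
# then check a membership path against the public root.
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def build_tree(leaves):
    """Return all levels of the tree; levels[-1][0] is the root."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        lvl = levels[-1]
        levels.append([h(lvl[i], lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels

def merkle_proof(levels, index):
    """Sibling hashes along the path, with a flag for left/right order."""
    path = []
    for lvl in levels[:-1]:
        path.append((lvl[index ^ 1], index % 2))
        index //= 2
    return path

def verify(root, leaf, path):
    node = leaf
    for sibling, is_right in path:
        node = h(sibling, node) if is_right else h(node, sibling)
    return node == root

addresses = [f"0xaddr{i}".encode() for i in range(8)]  # toy eligible list
leaves = [h(a) for a in addresses]
levels = build_tree(leaves)
root = levels[-1][0]          # published on-chain
proof = merkle_proof(levels, 5)
assert verify(root, leaves[5], proof)
```

Only the root goes on-chain; each claimer proves (inside a ZKP) that they know an address whose leaf verifies against that root.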
Airdrops usually offer granular rewards, giving more tokens to earlier users, etc. Unfortunately, high granularity would reduce the anonymity set. The easiest implementation would be for every address to receive the same amount. You could also mitigate the loss of privacy while allowing different rewards by letting people claim the airdrop for multiple addresses at a time, and offering multiple rewards per address, though this would introduce additional complexity in the circuit.

### Private NFTs

Privacy can be used in creative ways in NFTs too. For example, you could allow any CryptoPunk holder to mint a "DarkPunk," where their original address is not linked to their original CryptoPunk. This would be done by taking a snapshot of addresses holding CryptoPunks, and gating the mint by requiring a ZKP that shows you own some address in that list. Note that any set of addresses could be used for gating, e.g. people who lost money in The DAO hack, or people who have burned 100+ ETH. Similarly, a new NFT project could allow private minting. First you'd buy a ticket publicly on-chain, then privately prove you are a ticket holder to mint the NFT. This could be implemented with an interactive nullifier, but zk-ECDSA could be used to save on-chain costs at the expense of prover time.

### Message Boards

zk-ECDSA will also enable off-chain use cases. Anonymity can be a useful tool for voicing controversial ideas, or giving a voice to less powerful people. Suppose a DAO is trying to coordinate on how to spend its treasury: political factions inevitably form, and it can be hard to oppose consensus or to get your voice heard. Instead of relying on a traditional message board where every message is tied to a username, you can conduct discussions anonymously, or pseudonymously, using ZKPs rather than signatures directly.
Traditional anonymous boards are subject to sybil attacks, but on a zk message board you have to prove membership in a group and/or prove you are using a unique deterministic pseudonym derived from your public key. [heyanoun.xyz](https://www.heyanoun.xyz/) from PersonaeLabs is a project exploring this area.

### Gated Content

zk-ECDSA can be used as authentication for access to web content. For example, suppose you want to create some private content for [Nouns NFT](https://nouns.wtf/) holders. The standard solution would be "Sign in with Ethereum", where you would verify your address, and the server could verify that you own a Noun on-chain. However, this gives the website your personal financial details for that address, which may be enough to track and target you. This is dangerous, especially since you are known to hold a valuable NFT. Instead, we can create "Sign in as Noun" functionality by simply proving you own an address in the set of Nouns holders.

Using zk-ECDSA is still not easy. You have to carefully choose the right library and proof system for your use case. There are two critical questions: do you need nullifiers, and do you need on-chain verification? It's important to choose the right tool for your use case, because prover time can be radically improved if you don't need nullifiers or on-chain verification. Most of the work below was done at [PersonaeLabs](http://personaelabs.org/) and [0xparc](https://0xparc.org/). As part of this grant, I wrote the initial verifier circuit for the nullifier library.

### Merkle Tree Basics

The circuits for most applications require some kind of signature/nullifier verification, plus set membership. [Merkle trees](https://en.wikipedia.org/wiki/Merkle_tree) are a simple, efficient method of proving set membership, where security relies on a hash function. A circom implementation of Merkle trees originating from Tornado Cash has been well battle-tested.
During my grant I used a Merkle tree with the Poseidon hash, a hash function that's efficient in ZK circuits. [This implementation](https://github.com/privacy-scaling-explorations/e2e-zk-ecdsa/blob/a5f7d6908faac1aab47e0c705bc91d4bccea1a73/circuits/circom/membership.circom#L13), which verifies a public key, signature, and Merkle proof, may be a useful starting point for your application. Note that you should remove the public key check if it is unnecessary, and swap the signature verification out for the most efficient version possible given your constraints.

### Non-Membership

Merkle trees don't naturally enable us to prove that an address is _not_ in a given list. There are two possible modifications that make this possible, and the first is [probably the best option](https://alinush.github.io/2023/02/05/Why-you-should-probably-never-sort-your-Merkle-trees-leaves.html). The recommended approach is using a sparse Merkle tree. A sparse Merkle tree of addresses contains every possible address arranged in order. Since Ethereum addresses are 160 bits, the Merkle tree will be of depth 160 (note the amazing power of logarithmic complexity!), meaning Merkle proofs can still be efficiently verified in a ZKP circuit. The leaves of the tree will be 1 if the address is included in the set, and 0 if it is not. So by providing a normal Merkle proof that the leaf corresponding to an address is 0, we prove that the address is not in the list.

The alternative is sorting a list of addresses, and using 2 adjacent Merkle proofs to show that the address's point in the list is unoccupied. This is the approach [I used](https://github.com/privacy-scaling-explorations/e2e-zk-ecdsa/pull/76) in this grant, but I wouldn't recommend it due to the complexity of the circuit, and the additional proof required to show that the list is sorted, which introduces [systemic complexity](https://vitalik.ca/general/2022/02/28/complexity.html).
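The sparse-tree idea can be sketched outside any circuit. This is a toy sketch under stated assumptions: sha256 stands in for Poseidon, only non-empty subtrees are materialized (empty subtrees use precomputed default hashes), and in the real system the path verification happens inside a ZKP.

```python
# Sparse Merkle non-membership: a depth-160 tree over all 2^160 addresses,
# where a leaf is 1 iff the address is in the set. A normal Merkle proof
# that the leaf at an address is 0 proves non-membership.
import hashlib

DEPTH = 160

def h(a: bytes, b: bytes) -> bytes:
    return hashlib.sha256(a + b).digest()

# default[d] = hash of an all-empty subtree of height d
default = [b"\x00"]
for _ in range(DEPTH):
    default.append(h(default[-1], default[-1]))

class SparseTree:
    def __init__(self, members):
        self.leaves = {addr: b"\x01" for addr in members}

    def node(self, depth, index):
        """Hash of the subtree of height `depth` rooted at `index`."""
        lo, hi = index << depth, (index + 1) << depth
        if not any(lo <= a < hi for a in self.leaves):
            return default[depth]          # fully empty subtree
        if depth == 0:
            return self.leaves.get(index, b"\x00")
        return h(self.node(depth - 1, 2 * index),
                 self.node(depth - 1, 2 * index + 1))

    def proof(self, addr):
        """Sibling hashes along addr's path, bottom to top."""
        return [self.node(d, (addr >> d) ^ 1) for d in range(DEPTH)]

def verify(root, addr, leaf, siblings):
    node = leaf
    for d, sib in enumerate(siblings):
        node = h(sib, node) if (addr >> d) & 1 else h(node, sib)
    return node == root

tree = SparseTree({0xAAAA, 0xBBBB})
root = tree.node(DEPTH, 0)
# Non-membership of 0xCCCC: a valid Merkle proof that its leaf is 0.
assert verify(root, 0xCCCC, b"\x00", tree.proof(0xCCCC))
# The same machinery proves membership of 0xAAAA (leaf = 1).
assert verify(root, 0xAAAA, b"\x01", tree.proof(0xAAAA))
```

Note how membership and non-membership use the identical verification path; only the claimed leaf value (1 or 0) changes, which keeps the circuit simple.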
### Off-chain, no nullifiers

The fastest way to privately verify a signature is [spartan-ecdsa](https://personaelabs.org/posts/spartan-ecdsa/), with a 4 second proving time in a browser. ECDSA uses elliptic curves, and the specific curve used for Ethereum signatures is called secp256k1. Spartan-ecdsa is fast primarily because it uses right-field arithmetic via a related elliptic curve called secq256k1. Since secp256k1's base field is the same as secq256k1's scalar field, the arithmetic is simple, but this means we have to use a proof system defined over secq256k1, such as [Spartan](https://github.com/microsoft/Spartan) (note that Groth16, PlonK, etc. aren't available, as they rely on [pairings](https://medium.com/@VitalikButerin/exploring-elliptic-curve-pairings-c73c1864e627), which aren't available for secq256k1). Unfortunately, Spartan does not yet have an efficient verifier that runs on-chain (though [this is being worked on](https://github.com/personaelabs/spartan-ecdsa/tree/hoplite)). Ultimately, this is just a way to verify ECDSA signatures in ZKPs, so, like all plain ECDSA schemes, it can't be used as a nullifier.

### On-chain, no nullifiers

A predecessor to spartan-ecdsa is [efficient-ecdsa](https://personaelabs.org/posts/efficient-ecdsa-1/). The difference is that it uses expensive wrong-field arithmetic implemented with multi-register big integers. The current implementation is in circom, which is a natural frontend to any R1CS proof system such as Groth16, and has built-in support for PlonK and fflonk. This means it can be verified on-chain at minimal cost. However, the prover is significantly slower than spartan-ecdsa's, since the circuit requires 163,239 constraints compared to spartan-ecdsa's astonishing 8,076. Efficient-ecdsa is a major ~9x improvement over 0xparc's initial [circom-ecdsa](https://github.com/0xPARC/circom-ecdsa) implementation, achieved by computing several values outside the circuit.
### Nullifiers Nullifiers are deterministic values that don't reveal one's private identity, but do prove set membership. These are necessary for financial applications to prevent double spending, in addition to private voting and pseudonymous messaging. Intuitively, an ECDSA signature should work as a nullifier, but it is not, in fact, deterministic on the message/private key. ECDSA signatures include a random scalar (known [in the Wikipedia article](https://en.wikipedia.org/wiki/Elliptic_Curve_Digital_Signature_Algorithm#Signature_generation_algorithm) as _k_) which is used to hide the private key. Even if this scalar is [generated pseudorandomly](https://www.rfc-editor.org/rfc/rfc6979), there is no way for the verifier to distinguish between a deterministic and a random version of the same signature. Therefore, new schemes are required. [This blog](https://blog.aayushg.com/posts/nullifier) contains a more detailed exploration of the problem, including a solution called PLUME. The [PLUME nullifier](https://github.com/zk-nullifier-sig/zk-nullifier-sig) is the only existing candidate solution for this problem. There is some work required to get these into wallets, and the circuits (for which [I wrote](https://github.com/zk-nullifier-sig/zk-nullifier-sig/pull/7) the initial implementation as part of this grant) are not yet audited or production ready. PLUME's circom implementation currently has ~6.5 million constraints, and even with optimisation, I suspect it will always be more expensive than efficient-ecdsa or spartan-ecdsa, as the verification equations are inherently longer. ## My Work My grant ended up being a fairly meandering path toward the state of the art in zk-ECDSA. My main contribution, as I see it, is the [circuit for the PLUME nullifier](https://github.com/zk-nullifier-sig/zk-nullifier-sig/pull/7), as well as transmitting an understanding of zk-ECDSA in-house, and now hopefully to the outside world.
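As an aside, the property a nullifier needs, determinism on the key/message pair, can be shown with a toy discrete-log sketch. This is not PLUME itself (PLUME works over secp256k1 with a proper hash-to-curve and a correctness proof); it is only the core shape, nullifier = H(message)^sk, in a small multiplicative group:

```js
// Toy deterministic nullifier: nullifier = H(message)^sk mod p.
// Unlike an ECDSA signature, there is no random scalar k involved, so the
// same key and message always produce the same value.
const p = 2n ** 61n - 1n; // small prime modulus for the toy group

// Square-and-multiply modular exponentiation.
function modPow(base, exp, mod) {
  let result = 1n;
  base %= mod;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % mod;
    base = (base * base) % mod;
    exp >>= 1n;
  }
  return result;
}

// Stand-in for hash-to-group: any deterministic map from message to element.
function H(msg) {
  let acc = 7n;
  for (const ch of msg) acc = (acc * 131n + BigInt(ch.codePointAt(0))) % p;
  return acc === 0n ? 1n : acc;
}

const nullifier = (sk, msg) => modPow(H(msg), sk, p);
```

A verifier that sees the same nullifier twice knows the same key acted twice, without learning which key; that is the double-spend protection described above. The zero-knowledge part, proving the nullifier was computed from a key in the membership set, is what the multi-million-constraint circuit handles.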
The initial exploratory work included Merkle tree based membership and non-membership proofs, and public key validation in circom using [circom-ecdsa](https://github.com/0xPARC/circom-ecdsa) (which is the founding project in this space). About halfway through the grant I realised how critical nullifiers are for most applications, and pivoted to working on the PLUME nullifiers. ### Membership/Non-membership proofs The first task was to make a basic circuit that proves membership in an address set. I used a [modified version](https://github.com/ChihChengLiang/poseidon-tornado) of Tornado Cash for the Merkle proof, and circom-ecdsa for the signature verification (because I wasn't yet aware of efficient-ecdsa or spartan-ecdsa). We were also interested in non-membership proofs for use cases like the [gated mixer](https://mirror.xyz/privacy-scaling-explorations.eth/djxf2g9VzUcss1e-gWIL2DSRD4stWggtTOcgsv1RlxY#blacklists) above. [I did this](https://github.com/privacy-scaling-explorations/e2e-zk-ecdsa/pull/76) with a simple sorted Merkle tree, and two adjacent Merkle proofs showing that the address is not between them. I have since been [convinced](https://alinush.github.io/2023/02/05/Why-you-should-probably-never-sort-your-Merkle-trees-leaves.html) that sparse Merkle trees are a more robust solution, and we intend to implement this. ### Public Key Validation Part of the signature verification algorithm involves validating that the public key is in fact a valid point on an elliptic curve ([Johnson et al 2001](https://www.cs.miami.edu/home/burt/learning/Csc609.142/ecdsa-cert.pdf) section 6.2). In previous applications this was done outside the circuit, which was possible because the full public key set was known ahead of time. However, we were interested in use cases where developers would be able to generate arbitrary address lists, such as [gated web content](https://mirror.xyz/privacy-scaling-explorations.eth/djxf2g9VzUcss1e-gWIL2DSRD4stWggtTOcgsv1RlxY#gated-content).
The problem is, it's non-trivial to go from an address list to a public key list, as not all addresses have an associated signature from which we can deduce the public key. This means that the developers would not necessarily be able to validate the public keys for every address in the list. The solution was to [implement public key verification inside the circuit](https://github.com/privacy-scaling-explorations/e2e-zk-ecdsa/blob/a5f7d6908faac1aab47e0c705bc91d4bccea1a73/circuits/circom/membership.circom#L138-L177) using primitives from circom-ecdsa. This means that any ZKP purporting to prove membership must also use a valid public key. It is not exactly clear how important this check is, and you should think about it on a case-by-case basis for your use case. It is probably not necessary for an anonymous message board, for example, since the worst attack one could possibly achieve with an invalid public key is falsifying an anonymous signature. However, in order to do that, one has to know the Keccak-256 preimage of some address, in which case they, in practice, hold secret information (the public key) which is somewhat equivalent to a private key. More work needs to be done to characterise the cases where we need to verify the public key. ### Plume Nullifiers Having improved our understanding, we brainstormed use cases, and found (as can be seen [above](https://mirror.xyz/privacy-scaling-explorations.eth/djxf2g9VzUcss1e-gWIL2DSRD4stWggtTOcgsv1RlxY#use-cases)) that the lack of nullifiers was blocking many interesting applications. The PLUME nullifier scheme had not yet been implemented in a zero-knowledge circuit, and since I now had some experience with circom-ecdsa, I was well situated for the job. I wrote it in circom, with circom-ecdsa, ultimately ending up with 6.5 million constraints (about 2M in hashing, and 4.5M in elliptic curve operations).
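For reference, the public key check that had to move inside the circuit is small when written outside it: a point (x, y) is a valid secp256k1 point only if it satisfies the curve equation y² = x³ + 7 over the base field. A plain-JS sketch:

```js
// secp256k1 base field prime: 2^256 - 2^32 - 977.
const P = 2n ** 256n - 2n ** 32n - 977n;

// A point is on secp256k1 iff y^2 ≡ x^3 + 7 (mod P), with coordinates in range.
function isOnSecp256k1(x, y) {
  if (x <= 0n || x >= P || y <= 0n || y >= P) return false;
  return (y * y) % P === (x * x * x + 7n) % P;
}

// Standard generator coordinates, which should pass the check:
const Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798n;
const Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8n;
```

Inside a circom circuit the same equation has to be evaluated with multi-register big-integer arithmetic, which is what makes the in-circuit version expensive.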
This was by far the most challenging part of the grant (future grantees be warned - don't be too optimistic about what you can fit in one milestone). One interesting bug, which demonstrates the difficulties of a low-level language like circom, arose when I simply wasn't getting the right final hash result out. It turned out (after many log statements) that part of the algorithm implicitly compresses a particular elliptic curve point before hashing it. This compression is so trivial in JS you barely notice it, but I ended up having to write it from scratch in [these two rather nice subcircuits](https://github.com/zk-nullifier-sig/zk-nullifier-sig/pull/7/files#diff-f59503380952aa2926ad22e3f7fcfb442043dd90242d81f70ffff91094f46d8fR243-R294). Another subtlety was that an elliptic curve equation calculating _a/b^c_ inexplicably started giving the wrong result on ~50% of inputs for _c_. It turned out that my circom code was right, but the JS that I was comparing against used the wrong modulus, taking `CURVE.p` rather than `CURVE.n`, which essentially confuses the base and scalar fields of the elliptic curve. And, since `CURVE.p` is still rather large, and the value whose modulus was being taken was quite small, the result was usually the same, which accounts for the confusing irregularity of the bug! ### Proving Server For on-chain nullifiers especially, the proving time is very high, so we wanted to create a trusted server which would generate the proof for you. However, this server must be trusted with your privacy, so it will be deprecated as proving times improve. ## Conclusion The frontier for private applications on Ethereum is about to burst wide open. The cryptography and optimisations are almost solved. Now we need a new wave of projects with sleek UX focused on solving real problems for Ethereum's users. If you want to make any of the use cases above a reality, check out [our repo](https://github.com/privacy-scaling-explorations/e2e-zk-ecdsa) to get started.
]]> zk-ecdsa privacy zero-knowledge proofs ethereum circom cryptography nullifiers PLUME identity signature verification <![CDATA[Semaphore v3 Announcement]]> https://pse.dev/blog/semaphore-v3-announcement https://pse.dev/blog/semaphore-v3-announcement Thu, 09 Feb 2023 00:00:00 GMT semaphore privacy zero-knowledge proofs anonymity/privacy identity voting/governance ethereum security toolkits infrastructure/protocol <![CDATA[Semaphore Community Grants: Awarded Projects]]> https://pse.dev/blog/semaphore-community-grants-awarded-projects https://pse.dev/blog/semaphore-community-grants-awarded-projects Tue, 24 Jan 2023 00:00:00 GMT semaphore grants privacy zero-knowledge proofs anonymity/privacy identity sybil resistance social ethereum public goods <![CDATA[Announcing MACI v1.1.1]]> https://pse.dev/blog/announcing-maci-v111 https://pse.dev/blog/announcing-maci-v111 Wed, 18 Jan 2023 00:00:00 GMT maci zero-knowledge proofs privacy voting/governance collusion resistance ethereum security quadratic funding public goods infrastructure/protocol <![CDATA[Zkitter: An Anon-friendly Social Network]]> https://pse.dev/blog/zkitter-an-anon-friendly-social-network https://pse.dev/blog/zkitter-an-anon-friendly-social-network Wed, 11 Jan 2023 00:00:00 GMT "Man is least himself when he talks in his own person. > > Give him a mask, and he will tell you the truth." > > \- Oscar Wilde Zkitter is a social experiment. Philosophically, it is an experiment in whether the Oscar Wilde quote above is true. Does the option of anonymity, by separating reputation from speech, create a space for more open and honest self-expression? What would happen if the option to be anonymous was available as a default and widely considered to be a "normal" thing to do? How might the conversation change when the content of what's being said is detached from the reputation of the person saying it? 
As an anon or using a pseudonym, people can say what they really believe, and honest conversation is ultimately the most valuable thing for important topics like governance decisions. Because the stakes are so high, and decisions may potentially last decades or even centuries, debate must be as authentic as possible. Though [DAO](https://ethereum.org/en/dao/) governance may come to mind for most people reading this article, using anonymity, pseudonyms, or aliases to debate controversial topics is not new. In the late 1700s, when the newly formed United States of America was deciding between a weak or strong constitution (governance protocol in crypto-speak), the bulk of the conversation took place between anons. Writers of [the Federalist Papers](https://en.wikipedia.org/wiki/The_Federalist_Papers) argued for a strong constitution while the authors of the [Anti-Federalist Papers](https://en.wikipedia.org/wiki/Anti-Federalist_Papers) took the opposite side. Both sides used pseudonyms or aliases such as Publius, Cato, and Brutus to express their arguments as a collective and as individuals. To this day, historians are not completely certain who wrote which paper. Modern crypto and its various sub-cultures are built on the work of the anon [Satoshi Nakamoto](https://nakamoto.com/satoshi-nakamoto/) (along with many other anonymous and pseudonymous contributors) so it should be no surprise that anonymity is a regular feature of crypto-related discussions on platforms like Twitter. The idea for Zkitter is to go a step further and create a space where anons are not outliers but first-class citizens – where privacy is the default, going anonymous is as trivial as toggling between dark mode and light mode, and decentralization and censorship resistance are part of the architecture of the system. In other words, align the values of the platform with the values of the community.
## Using Zkitter Zkitter offers many of the basic functions people have come to expect from a social network. Where things get interesting are the anonymity options. **Signup** When signing up you can decide whether to create an account using an Ethereum address or [ENS name](https://ens.domains/), which will be displayed as your username, or to create an anonymous account. ![https://www.zkitter.com/signup](/articles/zkitter-an-anon-friendly-social-network/dBqPvJok48PmEavi4ziVB.webp) https://www.zkitter.com/signup To join Zkitter anonymously, you need to verify your reputation on an existing social network. [Interep](https://mirror.xyz/privacy-scaling-explorations.eth/w7zCHj0xoxIfhoJIxI-ZeYIXwvNatP1t4w0TsqSIBe4) imports a reputation from an existing platform to help prevent spammers or bots from creating many anonymous accounts. You can currently import your Twitter, Reddit, or Github reputation to Zkitter. Thanks to the magic of ZK proofs, the information from your Twitter account is not linked to your anon identity on Zkitter – Interep only verifies that you meet the reputation criteria and does not collect or store any details about either account. ![https://docs.zkitter.com/faqs/how-to-create-an-anonymous-user](/articles/zkitter-an-anon-friendly-social-network/srqVAqctPfgTFRapL_qCp.webp) https://docs.zkitter.com/faqs/how-to-create-an-anonymous-user Once your reputation is verified, instead of a username, your Zkitter posts will simply show your reputation tier. When you join Zkitter, you will sign a message to generate a [new ECDSA key pair](https://docs.zkitter.com/developers/identity) and write the public key to a [smart contract](https://arbiscan.io/address/0x6b0a11f9aa5aa275f16e44e1d479a59dd00abe58) on Arbitrum. The ECDSA key pair is used to authenticate messages and recover your Zkitter identity – so you aren't using your Ethereum account private key to sign for actions on Zkitter. 
**Posting** Posting to Zkitter will feel pretty familiar, but with some extra options. You can choose whether to post as yourself, or anonymously – even if you don't have an anonymous account. You can decide who you want to allow replies from, as well as whether the post will appear on the global feed or only on your own. If you've connected your Twitter account, you can also mirror your post to Twitter. **Chat** Any Zkitter user – anon or publicly known – has the option to chat anonymously. ![https://docs.zkitter.com/faqs/how-to-chat-anonymously](/articles/zkitter-an-anon-friendly-social-network/AjfTdRvCPiIPjnguqjrpV.webp) https://docs.zkitter.com/faqs/how-to-chat-anonymously Known identities and anonymous identities can interact with each other in private chats or on public threads. ## Private, on-chain identity Zkitter is possible because of composability. The platform combines a variety of zero knowledge primitives and puts them all into one user-friendly package. The base primitive of Zkitter is [Semaphore](https://mirror.xyz/privacy-scaling-explorations.eth/ImQNsJsJuDf_VFDm9EUr4njAuf3unhAGiPu5MzpDIjI), a private identity layer that lets users interact and post content anonymously. Semaphore IDs allow users to prove they are in a group and send signals as part of a group without revealing any other information. Interep is the anti-Sybil mechanism of Zkitter. Because users are anonymous and anyone can join the network permissionlessly, Zkitter is susceptible to Sybil attacks. Interep allows new users to prove they possess a certain level of reputation from existing social networks. ![https://www.zkitter.com/explore](/articles/zkitter-an-anon-friendly-social-network/a-6ZAmTQi43YjwOUaKBz5.webp) https://www.zkitter.com/explore [RLN](https://mirror.xyz/privacy-scaling-explorations.eth/aKjLmLVyunELnGObrzPlbhXWu5lZI9QU-P3OuBK8mOY) provides spam protection for Zkitter and is also integrated with the [zkchat](https://github.com/njofce/zk-chat) encrypted chat function. 
RLN allows the protocol to set a limit on how many messages a user can send in a certain amount of time, and a user who breaks the spam rules can be [identified and removed](https://rate-limiting-nullifier.github.io/rln-docs/what_is_rln.html#user-removal-slashing). A social platform with basic privacy guarantees and protections from spam and Sybil attacks allows users to explore how anonymity affects speech. Whether the option to interact anonymously is useful, or even interesting, will depend on what happens on social experiments like Zkitter. With no name, phone number, or email address to tie your digital identity to the one you use in the physical world, what would you say? How would you be different? ## Join the experiment If you are interested in experimenting with anonymous thread posting or chatting, you can [try Zkitter now](https://www.zkitter.com/home). If you have any comments or feedback, please let us know by using [#feedback](https://www.zkitter.com/tag/%23feedback/) directly on [Zkitter](http://zkitter/) or by joining the [PSE Discord channel](https://discord.gg/jCpW67a6CG). To help build Zkitter, check out the [Github repo here](https://github.com/zkitter) or learn more by reading the [docs.](https://docs.zkitter.com/developers/identity) Zkitter is being built anonymously by [0xtsukino](https://www.zkitter.com/0xtsukino.eth/) with contributions from [AtHeartEngineer](https://github.com/AtHeartEngineer), [r1oga](https://github.com/r1oga), and others. 
]]> zkitter semaphore interep rln privacy anonymity/privacy social identity zero-knowledge proofs sybil <![CDATA[UniRep Protocol]]> https://pse.dev/blog/unirep-protocol https://pse.dev/blog/unirep-protocol Wed, 04 Jan 2023 00:00:00 GMT unirep semaphore privacy reputation zero-knowledge proofs anonymity/privacy identity ethereum social infrastructure/protocol <![CDATA[Devcon VI Recap]]> https://pse.dev/blog/devcon-vi-recap https://pse.dev/blog/devcon-vi-recap Wed, 16 Nov 2022 00:00:00 GMT zero-knowledge proofs semaphore privacy ethereum pse zkevm zkopru interep identity public goods <![CDATA[RSA Verification Circuit in Halo2 and its Applications]]> https://pse.dev/blog/rsa-verification-circuit-in-halo2-and-its-applications https://pse.dev/blog/rsa-verification-circuit-in-halo2-and-its-applications Mon, 14 Nov 2022 00:00:00 GMT rsa halo2 zero-knowledge proofs cryptography circuits security zkp modular exponentiation digital signatures <![CDATA[Semaphore Community Grants]]> https://pse.dev/blog/semaphore-community-grants https://pse.dev/blog/semaphore-community-grants Wed, 21 Sep 2022 00:00:00 GMT semaphore privacy zero-knowledge proofs grants anonymity/privacy identity ethereum public goods social toolkits <![CDATA[Interep: An on-ramp for reputation]]> https://pse.dev/blog/interep-on-ramp-for-reputaion https://pse.dev/blog/interep-on-ramp-for-reputaion Tue, 13 Sep 2022 00:00:00 GMT 10,000 karma and >5000 coins on Reddit is enough “proof of humanity” for most applications. Everything else on your Reddit profile is irrelevant to the question of whether you’re a real person. Interep is designed to be a modular piece of a larger reputation system that developers can plug into their stack. Like the building blocks that make up [decentralized finance (DeFi)](https://ethereum.org/en/defi/), these pieces can be permissionless, composable, and interoperable – and may one day form a rich and complex system of identity on Ethereum.
It may not be possible to completely eliminate Sybil attackers and spam bots, but Interep is providing a powerful building block to bring pieces of reputation onto Ethereum. Over time, these reputational building blocks can construct more provably human identities. ## Interep in action Reputation can be simply defined as recognition by other people of some characteristic or ability. In the Interep system, _providers_ act as the other people, _parameters_ represent characteristics or abilities, and _signals_ are proof of recognition. Interep begins by defining a source of reputation, then calculating the parameters of reputation, before allowing users to privately signal that they meet pre-defined reputation criteria. ### Providers Reputation is only as good as its source. Interep itself does not provide a source of reputation but rather acts as a bridge to make reputation portable from different sources. The current Interep system includes Reddit, Twitter, Github, and [POAP NFTs](https://poap.xyz/) as sources of reputation, referred to as providers. The Interep process of exporting reputation begins when users select a provider. A provider such as Reddit (via [OAuth](https://docs.interep.link/technical-reference/groups/oauth)) shares information about a user’s profile with Interep. Interep takes the authenticated information and calculates a reputation score. The type of provider is used to generate a new identity and the reputation score determines which group the user can join. ### Parameters Reputational parameters determine a user’s reputation level as either gold, silver, or bronze. The more difficult-to-fake parameters a user has, the higher their reputation level, and the higher the probability of them being an authentic or non-malicious user.
Reddit parameters are defined as the following: - Premium subscription: true if the user has subscribed to a premium plan, false otherwise; - Karma: amount of user's karma; - Coins: amount of user's coins; - Linked identities: number of identities linked to the account (Twitter, Google). To be included in the Reddit gold reputation level, a user would need to have a premium subscription and a minimum of 10000 karma, 5000 coins, and 3 linked identities. ```json [ { "parameter": "premiumSubscription", "value": true }, { "parameter": "karma", "value": { "min": 10000 } }, { "parameter": "coins", "value": { "min": 5000 } }, { "parameter": "linkedIdentities", "value": { "min": 3 } } ] ``` https://docs.interep.link/technical-reference/reputation/reddit Defining parameters and reputation levels is an ongoing and collaborative process – one that you can help with by [contributing your knowledge and opinions to the Interep survey.](https://docs.google.com/forms/d/e/1FAIpQLSdMKSIL-3RBriGqA_v-tJhNJOCciQEX7bwFvOW7ptWeDDhjpQ/viewform) ### Signals If a user meets the criteria for the Reddit gold reputation level, they can now join the group with other users who have met the same criteria. Once you are a member of an Interep group, you can generate zero-knowledge proofs and signal to the world in a private, publicly verifiable way that you are very likely human. If you’re interested in seeing Interep in action, the smart contracts have been [deployed to Goerli](https://goerli.etherscan.io/address/0x9f44be9F69aF1e049dCeCDb2d9296f36C49Ceafb) along with a [staging app for end users.](https://goerli.interep.link/) ## Preserving privacy with Semaphore Interep integrates zero-knowledge proofs through [Semaphore](https://semaphore.appliedzkp.org/) so users only reveal the minimum amount of information necessary to join a group. Using Interep, Reddit users can keep everything about their profiles private including the exact number of karma or coins they possess. 
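For illustration, the gold-level rules above can be restated as a predicate over a profile object. This is a hypothetical sketch of the published criteria, not Interep's actual server-side code:

```js
// The Reddit "gold" criteria, mirroring the JSON above.
const redditGoldCriteria = {
  premiumSubscription: true,
  karma: { min: 10000 },
  coins: { min: 5000 },
  linkedIdentities: { min: 3 },
};

// A profile qualifies if every parameter matches exactly (booleans) or
// meets the minimum (numeric { min } rules).
function meetsCriteria(profile, criteria) {
  return Object.entries(criteria).every(([param, rule]) =>
    typeof rule === "object" ? profile[param] >= rule.min : profile[param] === rule
  );
}
```

Crucially, only the one-bit outcome of a check like this is ever revealed; the underlying karma and coin counts stay private.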
The only information revealed is that they meet the group’s requirements. Semaphore is a privacy protocol with a few simple, but powerful, functions: 1. Create a private identity 2. Use private identities to join a group 3. Send signals and prove you are a member of a group anonymously A Semaphore group is simply a Merkle tree, with each leaf corresponding to a unique identity. Interep checks a user’s reputation and adds them to a group based on their reputation level. After joining a group, users can generate valid zero knowledge proofs that act as an anonymous proof-of-membership in a group and prove they meet a certain reputation criteria. Platforms and dapps can verify if a user belongs to a group by verifying the users' zk-proofs and be confident that anyone in an Interep group has met the reputation requirements without having to identify individual users. ![https://github.com/interep-project/presentations/blob/main/DevConnect%20Amsterdam%20April%202022.pdf](/articles/interep-on-ramp-for-reputaion/3SOeA46pjr3NO8Q_ghtZg.webp) https://github.com/interep-project/presentations/blob/main/DevConnect%20Amsterdam%20April%202022.pdf To join an Interep group, you must first generate a Semaphore ID. Semaphore IDs are always created in the same way: they are derived from a message signed with an Ethereum account. On Interep, the Semaphore ID is generated using information from a provider such as Reddit, Github, Twitter, or POAP NFTs. These are called “deterministic identities” because the identity is generated using a specific message. A Reddit Semaphore ID and a Github Semaphore ID would be two different identities because they were generated using two different messages or inputs. ```js const identity = new Identity("secret-message") ``` https://semaphore.appliedzkp.org/docs/guides/identities#create-deterministic-identities Interep and Semaphore are interrelated. Semaphore acts as a privacy layer capable of generating and verifying zero-knowledge proofs. 
Interep bridges reputation from a variety of external sources while also managing user groups. Together, they create a system where off-chain reputation can be privately proven and verified on the blockchain. You can generate a Semaphore identity using the [@interep/identity](https://github.com/interep-project/interep.js/tree/main/packages/identity) package. Learn more about how Semaphore works in [this post](https://mirror.xyz/privacy-scaling-explorations.eth/ImQNsJsJuDf_VFDm9EUr4njAuf3unhAGiPu5MzpDIjI). ## Using reputation on-chain Establishing reputation on-chain is an important step to unlocking new use cases or improving existing use cases, many of which have been difficult to develop due to concerns about sybil attacks, a desire to curate the user base, or resistance to rebuilding existing reputation in a new environment. Some possibilities include: **Social Networks** Social networks (even decentralized ones) are generally meant for humans. Requiring users to have multiple sources of reputation to engage on a platform provides a practical anti-Sybil solution for a social network, while reputation tiers give users who have worked to establish a reputation elsewhere a head start on credibility. **Specialized DAOs** DAOs or any digital organization can filter out or select for specific reputation parameters. For example, a protocol needing experienced developers would prize a higher Github reputation. A DAO focused on marketing may only accept members with a certain Twitter follower count. Organizations especially focused on community building may only want members who have proven reputations on Reddit. **Voting** Online voting has long been a promise of blockchain technology, but strong privacy and identity guarantees have been missing. Anonymous on-chain reputation helps us get closer to a world where eligible humans can privately vote using the blockchain.
Voting goes beyond traditional elections and can be used for [on-chain governance](https://vitalik.ca/general/2021/08/16/voting3.html) and [quadratic funding](https://ethereum.org/en/defi/#quadratic-funding) where unique humanity is more important than holding the most tokens. **Airdrops** Everyone likes an airdrop, but no one likes a Sybil attack. Reputation as “proof of individual” could help weed out users who try to game airdrops with multiple accounts while preserving more of the token allocation for authentic community members. ## Limitations Interep can do a lot of things, but it can’t do everything. Some current limitations include: - Centralization of reputation service: only Interep can calculate reputation. - Data availability: groups are currently saved in a MongoDB instance, which presents a single point of failure. This could be mitigated in the long term by moving to distributed storage. - Members cannot be removed. - The Interep server is a point of centralization. If a malicious actor gained access, they could censor new group additions or try to analyze stored data to reveal links between IDs. - It is possible for someone with access to the Interep database to determine which provider accounts have been used to create Interep identities. - The way reputation is calculated is still very basic. We’d love your [feedback](https://docs.google.com/forms/d/e/1FAIpQLSdMKSIL-3RBriGqA_v-tJhNJOCciQEX7bwFvOW7ptWeDDhjpQ/viewform) on how to make it more robust! ## Credible humanity Existing web2 platforms can be centralized, opaque, and reckless with their users’ private information – all problems the Ethereum community is actively building solutions and alternatives for. However, these platforms also have well-developed user bases with strong reputational networks and social ties. All the digital reputation amassed over the years need not be thrown away in order to build a new decentralized version of the internet.
With a pragmatic reputational on-ramp our Ethereum identities can become much more than a history of our financial transactions. They can become more contextual, more relational, more social, and more credibly human. ## Build with Interep If you’d like to learn more or build with Interep, check out our [documentation](https://docs.interep.link/), [presentation](https://www.youtube.com/watch?v=CoRV0V_9qMs), and Github [repo](https://github.com/interep-project). To get involved, join the conversation on [Discord](https://discord.gg/Tp9He7qws4) or help [contribute](https://docs.interep.link/contributing). Interep is possible thanks to the work of contributors including [Geoff Lamperd](https://github.com/glamperd) (project lead) and [Cedoor](https://github.com/cedoor). ]]> interep reputation privacy identity ethereum semaphore sybil resistance zero-knowledge proofs web3 social networks <![CDATA[A Technical Introduction to Arbitrum's Optimistic Rollup]]> https://pse.dev/blog/a-technical-introduction-to-arbitrums-optimistic-rollup https://pse.dev/blog/a-technical-introduction-to-arbitrums-optimistic-rollup Mon, 29 Aug 2022 00:00:00 GMT arbitrum optimistic rollup l2 scaling ethereum rollup infrastructure/protocol education <![CDATA[A Technical Introduction to MACI 1.0 - Privacy Stewards of Ethereum]]> https://pse.dev/blog/a-technical-introduction-to-maci-10-privacy-scaling-explorations https://pse.dev/blog/a-technical-introduction-to-maci-10-privacy-scaling-explorations Mon, 29 Aug 2022 00:00:00 GMT X + Y = 15 I can prove that I know 2 values, X and Y that satisfy the equation without revealing what those two values are. When I create a zk-SNARK for my answer, anyone can use the SNARK (a group of numbers) and validate it against the equation above to prove that I do know a solution to that equation. The user is unable to use the SNARK to find out my answers for X and Y. 
For MACI, the equation is much more complicated but can be summarized as the following equations: > encrypt(command1) = message1 > > encrypt(command2) = message2 > > encrypt(command3) = message3 > > … > > Command1 from user1 + command2 from user2 + command3 from user3 + … = total tally result Here, everyone is able to see the messages on the blockchain and the total tally result. Only the coordinator knows what the individual commands/votes are by decrypting the messages. So, the coordinator uses a zk-SNARK to prove they know all of the votes that: 1. Encrypt to the messages present on the blockchain 2. Sum to the tally result Users can then use the SNARK to prove that the tally result is correct, but cannot use it to prove any individual's vote choices. Now that the core components of MACI have been covered, it is helpful to dive deeper into the MACI workflow and specific smart contracts. ## 3\. Workflow The general workflow process can be broken down into 4 different phases: 1. Sign Up 2. Publish Message 3. Process Messages 4. Tally Results These phases make use of 3 main smart contracts — MACI, Poll and PollProcessorAndTallyer. These contracts can be found on the [MACI github page](https://github.com/privacy-scaling-explorations/maci/tree/dev/packages/contracts). The MACI contract is responsible for keeping track of all the user sign ups by recording the initial public key for each user. When a vote is going to take place, users can deploy a Poll smart contract via MACI.deployPoll(). The Poll smart contract is where users submit their messages. One MACI contract can be used for multiple polls. In other words, the users that signed up to the MACI contract can vote on multiple issues, with each issue represented by a distinct Poll contract. Finally, the PollProcessorAndTallyer contract is used by the coordinator to prove on-chain that they are correctly tallying each vote. 
This process is explained in more detail in the Process Messages and Tally Results sections below. ![](https://miro.medium.com/max/1400/0*NA8cwQvAhZoX7Pia) ## a. Sign Up The sign up process for MACI is handled via the MACI.sol smart contract. Users need to send three pieces of information when calling MACI.signUp(): 1. Public Key 2. Sign Up Gatekeeper Data 3. Initial Voice Credit Proxy Data The public key is the original public key mentioned in the sections above, which the user will need in order to vote. As explained in earlier sections, they can change this public key later once voting starts. The user's public key used to sign up is shared amongst every poll. MACI allows the contract creator/owner to set a "signUpGateKeeper". The sign up gatekeeper is meant to be the address of another smart contract that determines the rules to sign up. So, when a user calls MACI.signUp(), the function will call the sign up gatekeeper to check whether this user is allowed to sign up. MACI also allows the contract creator/owner to set an "initialVoiceCreditProxy". This represents the contract that determines how many votes a given user gets. So, when a user calls MACI.signUp(), the function will call the initial voice credit proxy to check how many votes they can spend. The user's voice credit balance is reset to this number for every new poll. Once MACI has checked that the user is valid and retrieved how many voice credits they have, MACI stores the following user info into the Sign Up Merkle Tree: 1. Public Key 2. Voice Credits 3. Timestamp ![](https://miro.medium.com/max/1400/0*h6otS_gfiZ2Wjvoq) ## b. Publish Message Once it is time to vote, the MACI creator/owner will deploy a Poll smart contract. Then, users will call Poll.publishMessage() and send the following data: 1. Message 2. Encryption Key As explained in sections above, the coordinator will need to use the encryption key in order to derive a shared key.
The coordinator can then use the shared key to decrypt the message into a command, which contains the vote. Once a user publishes their message, the Poll contract will store the message and encryption key into the Message Merkle Tree. ## c. Process Messages Once the voting is done for a specific poll, the coordinator will use the PollProcessorAndTallyer contract to first prove that they have correctly decrypted each message and applied them correctly to create an updated state tree. This state tree keeps an account of all the valid votes that should be counted. So, when processing the messages, the coordinator will not keep messages that are later overridden by a newer message inside the state tree. For example, if a user votes for option A, but then later sends a new message to vote for option B, the coordinator will only count the vote for option B. The coordinator must process messages in groups so that proving on chain does not exceed the data limit. The coordinator then creates a zk-SNARK proving their state tree correctly contains only the valid messages. Once the proof is ready, the coordinator calls PollProcessorAndTallyer.processMessages(), providing a hash of the state tree and the zk-SNARK proof as input parameters. The PollProcessorAndTallyer contract will send the proof to a separate verifier contract. The verifier contract is specifically built to read MACI zk-SNARK proofs and determine whether they are valid. So, if the verifier contract returns true, then everyone can see on-chain that the coordinator correctly processed that batch of messages. The coordinator repeats this process until all messages have been processed. ## d. Tally Votes Finally, once all messages have been processed, the coordinator tallies the votes of the valid messages. The coordinator creates a zk-SNARK proving that the valid messages in the state tree (proved in the Process Messages step) contain votes that sum to the given tally result.
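In plain TypeScript, the computation the coordinator proves in these two steps — keep only each user's latest message, then sum the surviving votes — might be sketched as follows (toy types and names; real MACI performs this inside zk-SNARK circuits, over batches of messages):

```typescript
// Toy sketch (hypothetical types and names) of what the coordinator proves:
// keep only each user's latest message, then sum the surviving votes.
type Message = { user: string; option: string; weight: number };

// Later messages override earlier ones from the same user.
const latestPerUser = (messages: Message[]): Map<string, Message> => {
  const state = new Map<string, Message>();
  for (const m of messages) state.set(m.user, m);
  return state;
};

// Sum the votes that survived the override step.
const tallyVotes = (messages: Message[]): Record<string, number> => {
  const result: Record<string, number> = {};
  for (const m of latestPerUser(messages).values()) {
    result[m.option] = (result[m.option] ?? 0) + m.weight;
  }
  return result;
};

const msgs: Message[] = [
  { user: "alice", option: "A", weight: 5 },
  { user: "bob", option: "A", weight: 2 },
  { user: "alice", option: "B", weight: 5 }, // overrides alice's earlier vote for A
];
console.log(tallyVotes(msgs)); // { B: 5, A: 2 } — only alice's latest vote counts
```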
Then, they call PollProcessorAndTallyer.tallyVotes() with a hash of the correct tally results and the zk-SNARK proof. Similarly to the processMessages function, the tallyVotes function will send the proof to a verifier contract to ensure that it is valid. The tallyVotes function is only successful if the verifier contract returns that the proof is valid. Therefore, once the tallyVotes function succeeds, users can trust that the coordinator has correctly tallied all of the valid votes. After this step, anyone can see the final tally results and the proof that these results correctly reflect the messages sent to the Poll contract. The users won't be able to see how any individual voted, but will be able to trust that these votes were properly processed and counted. ![](https://miro.medium.com/max/1400/0*7Le2odbX7e2etpxR) ## 4\. Conclusion MACI is a huge step forward in preventing collusion for on-chain votes. While it doesn't prevent all possibilities of collusion, it does make it much harder. MACI can already be [seen](https://twitter.com/vitalikbuterin/status/1329012998585733120) in use at [clr.fund](https://blog.clr.fund/round-4-review/), which has users vote on which projects receive funding. When the possible funding amount becomes very large, users and organizations have a large incentive to collude to receive parts of these funds. This is where MACI can truly make a difference: protecting the fairness of important voting processes such as those at clr.fund. ]]> maci zero-knowledge proofs privacy voting/governance ethereum collusion resistance smart contracts cryptography <![CDATA[Meet COCO!
- Privacy Stewards of Ethereum]]> https://pse.dev/blog/meet-coco-privacy-scaling-explorations https://pse.dev/blog/meet-coco-privacy-scaling-explorations Mon, 29 Aug 2022 00:00:00 GMT prediction market social ethereum web3 user experience public goods community curation moderation infrastructure/protocol <![CDATA[Rate Limiting Nullifier: A spam-protection mechanism for anonymous environments]]> https://pse.dev/blog/rate-limiting-nullifier-a-spam-protection-mechanism-for-anonymous-environments https://pse.dev/blog/rate-limiting-nullifier-a-spam-protection-mechanism-for-anonymous-environments Mon, 29 Aug 2022 00:00:00 GMT rln nullifiers zero-knowledge proofs privacy anonymity/privacy security cryptography merkle tree infrastructure/protocol ethereum <![CDATA[Release Announcement: MACI 1.0 - Privacy Stewards of Ethereum]]> https://pse.dev/blog/release-announcement-maci-10-privacy-scaling-explorations https://pse.dev/blog/release-announcement-maci-10-privacy-scaling-explorations Mon, 29 Aug 2022 00:00:00 GMT _if you can prove how you voted, selling your vote becomes very easy. Provability of votes would also enable forms of coercion where the coercer demands to see some kind of proof of voting for their preferred candidate._ To illustrate this point, consider an alleged example of collusion that [occurred in round 6 of Gitcoin grants](https://gitcoin.co/blog/how-to-attack-and-defend-quadratic-funding/) (a platform for quadratic funding software projects which contribute to public goods). 
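To see why such collusion pays, recall quadratic funding's matching rule: a project's match grows with the square of the sum of the square roots of its contributions, so many small contributions attract far more matching than one large one. A toy computation (standard CLR formula, simplified to ignore matching-pool normalisation; the numbers are invented):

```typescript
// Toy CLR match (standard quadratic funding formula, ignoring matching-pool
// normalisation): match = (sum of sqrt(contributions))^2 - sum(contributions).
const clrMatch = (contributions: number[]): number => {
  const sumSqrt = contributions.reduce((acc, c) => acc + Math.sqrt(c), 0);
  const total = contributions.reduce((acc, c) => acc + c, 0);
  return sumSqrt ** 2 - total;
};

// The same total of 100 units, contributed in two different ways:
console.log(clrMatch([100]));              // 0    — one large contributor attracts no match
console.log(clrMatch(Array(100).fill(1))); // 9900 — 100 small accounts attract a huge match
```

This asymmetry is exactly what a ring of coordinated real accounts can exploit, which is why sybil resistance alone is not enough and collusion resistance matters.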
In _[How to Attack and Defend Quadratic Funding](https://gitcoin.co/blog/how-to-attack-and-defend-quadratic-funding/)_, an author from Gitcoin highlights a tweet in which a potential grant beneficiary appeared to offer 0.01 ETH in exchange for matching funds: ![](https://miro.medium.com/max/1360/0*_aKOFcRGzjl4RcBB.png) They explain the nature of this scheme: > _While creating fake accounts to attract matching funds can be prevented by sybil resistant design, **colluders can easily up their game by coordinating a group of real accounts to “mine Gitcoin matching funds” and split the “interest” among the group**._ Finally, MACI is important because crypto communities are increasingly adopting Decentralised Autonomous Organisations (DAOs), which [govern through token voting](https://vitalik.ca/general/2021/08/16/voting3.html). The threat of bribery attacks and other forms of collusion will only increase if left unchecked, since such attacks target a fundamental vulnerability of these systems. ## What’s new? In this release, we rearchitected MACI’s smart contracts to allow for greater flexibility and separation of concerns. In particular, we support multiple polls within a single instance of MACI. This allows the coordinator to run and tally many elections either sequentially or concurrently. ![](https://miro.medium.com/max/1400/0*i0MnnOBj18B_62Zt) We’ve kept the ability for developers to provide their own set of logic to gate-keep signups. For instance, application developers can write custom logic that only allows addresses which own a certain token to sign up once to MACI in order to participate in polls. An additional upgrade we have implemented is greater capacity for signups, votes, and vote options. With MACI 1.0, a coordinator can run a round that supports more users, votes, and choices than before, even with the same hardware. We adopted iden3’s tools for [faster proof generation](https://github.com/iden3/rapidsnark).
Furthermore, we rewrote our zk-SNARK circuits using the latest versions of [snarkjs](https://github.com/iden3/snarkjs), [circom](https://github.com/iden3/circom), and [circomlib](https://github.com/iden3/circomlib). We also developed additional developer tooling such as [circom-helper](https://github.com/weijiekoh/circom-helper) and [zkey-manager](https://github.com/appliedzkp/zkey-manager). Finally, we significantly reduced gas costs borne by users by replacing our incremental Merkle tree contracts with a modified [deposit queue mechanism](https://ethresear.ch/t/batch-deposits-for-op-zk-rollup-mixers-maci/6883). While this new mechanism achieves the same outcome, it shifts some gas costs from users to the coordinator. A comparison of approximate gas costs for user-executed operations is as follows: ![](https://miro.medium.com/max/972/1*m3G3FB9x1-3X23HER3A4oQ.png) We are also looking forward to collaborating with other projects and supporting their development of client applications and new use cases. For instance, the clr.fund team has indicated that they would like to upgrade their stack to MACI v1.0, and other projects have expressed interest in adopting MACI. We hope that through collaboration, the Ethereum community can benefit from our work, and vice versa. ## Further work There is plenty of space for MACI to grow and we welcome new ideas. We are keen to work with developers who wish to do interesting and impactful work, especially folks who would like to learn how to build applications with zk-SNARKs and Ethereum. ## Negative voting We thank [Samuel Gosling](https://twitter.com/xGozzy) for completing a grant for work on [negative voting](https://github.com/appliedzkp/maci/pull/283). This allows voters to use their voice credits to not only signal approval of a vote option, but also disapproval. Please note that the negative voting branch, while complete, is currently unaudited and therefore not yet merged into the main MACI codebase.
## Anonymisation A [suggested upgrade to MACI is to use ElGamal re-randomisation for anonymity of voters](https://ethresear.ch/t/maci-anonymization-using-rerandomizable-encryption/7054). While all votes are encrypted, currently the coordinator is able to decrypt and read them. With re-randomisation, the coordinator would not be able to tell which user took which action. We are working on tooling that makes it easier for coordinators to interface with deployed contracts and manage tallies for multiple polls. This will allow users to generate proofs and query inputs and outputs from existing circuits through an easy-to-use API. We hope that this will drive more adoption of MACI and reduce the need for bespoke infrastructure. ## Trusted setup Unlike many other ZKP projects, MACI does not have an official [trusted setup](https://zeroknowledge.fm/133-2/). Instead, we hope to assist teams implementing MACI in running their own trusted setups. For instance, [clr.fund recently completed a trusted setup](https://blog.clr.fund/trusted-setup-completed/) (on a previous version of MACI) for a specific set of circuit parameters. Other teams may wish to use a different set of parameters on MACI 1.0, which calls for a different trusted setup. ## Conclusion This release marks a step towards solving the hard problem of preventing collusion in decentralised voting and quadratic funding systems. We are excited to share our work. Please get in touch if you are a developer interested in getting involved in any way.
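To make the re-randomisation idea from the Anonymisation section concrete, here is a toy ElGamal sketch over a small prime field (illustrative only; a real implementation would use an elliptic-curve group and proper parameter choices). Re-randomising a ciphertext yields a fresh, unlinkable ciphertext that still decrypts to the same value:

```typescript
// Toy ElGamal re-randomisation over a tiny prime group. All parameters and
// names are invented for illustration; this is not MACI's actual construction.
const p = 1019n; // small prime (toy)
const g = 2n;    // generator (toy)

const powMod = (b: bigint, e: bigint, m: bigint): bigint => {
  let r = 1n;
  b %= m;
  while (e > 0n) {
    if (e & 1n) r = (r * b) % m;
    b = (b * b) % m;
    e >>= 1n;
  }
  return r;
};

const sk = 77n;               // coordinator's secret key
const pk = powMod(g, sk, p);  // public key

type Ct = { c1: bigint; c2: bigint };

const encrypt = (m: bigint, r: bigint): Ct => ({
  c1: powMod(g, r, p),
  c2: (m * powMod(pk, r, p)) % p,
});

// Multiply in an encryption of 1 with fresh randomness r2: same plaintext,
// completely different-looking ciphertext.
const rerandomize = (ct: Ct, r2: bigint): Ct => ({
  c1: (ct.c1 * powMod(g, r2, p)) % p,
  c2: (ct.c2 * powMod(pk, r2, p)) % p,
});

const decrypt = (ct: Ct): bigint => {
  const s = powMod(ct.c1, sk, p);          // shared secret
  return (ct.c2 * powMod(s, p - 2n, p)) % p; // divide via Fermat inverse
};

const ct = encrypt(5n, 13n);
const ct2 = rerandomize(ct, 99n);
console.log(decrypt(ct2) === 5n, ct.c1 !== ct2.c1); // true true
```

Anyone can re-randomise without knowing the secret key, which is what breaks the link between a user's original ciphertext and the one the coordinator later processes.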
]]> maci privacy zero-knowledge proofs collusion resistance voting quadratic funding ethereum zk-snarks governance infrastructure/protocol <![CDATA[Unirep: A private and non-repudiable reputation system]]> https://pse.dev/blog/unirep-a-private-and-non-repudiable-reputation-system https://pse.dev/blog/unirep-a-private-and-non-repudiable-reputation-system Mon, 29 Aug 2022 00:00:00 GMT unirep privacy reputation zero-knowledge proofs identity semaphore ethereum social anonymity/privacy infrastructure/protocol <![CDATA[BLS Wallet: Bundling up data - Privacy Stewards of Ethereum]]> https://pse.dev/blog/bls-wallet-bundling-up-data-privacy-scaling-explorations https://pse.dev/blog/bls-wallet-bundling-up-data-privacy-scaling-explorations Fri, 26 Aug 2022 00:00:00 GMT bls wallet scaling rollups ethereum l2 account abstraction cryptography infrastructure/protocol security <![CDATA[Semaphore V2 is Live! - Privacy Stewards of Ethereum]]> https://pse.dev/blog/semaphore-v2-is-live-privacy-scaling-explorations https://pse.dev/blog/semaphore-v2-is-live-privacy-scaling-explorations Fri, 26 Aug 2022 00:00:00 GMT semaphore privacy zero-knowledge proofs anonymity/privacy identity ethereum cryptography infrastructure/protocol security toolkits <![CDATA[Zkopru Ceremony: Final Call and Failed Contributions]]> https://pse.dev/blog/zkopru-ceremony-final-call-and-failed-contributions https://pse.dev/blog/zkopru-ceremony-final-call-and-failed-contributions Fri, 26 Aug 2022 00:00:00 GMT zkopru trusted setup ceremony zero-knowledge proofs privacy cryptography ethereum security <![CDATA[ZKOPRU on Testnet - Privacy Stewards of Ethereum]]> https://pse.dev/blog/zkopru-on-testnet-privacy-scaling-explorations https://pse.dev/blog/zkopru-on-testnet-privacy-scaling-explorations Fri, 26 Aug 2022 00:00:00 GMT Please note that from here on, when we say ETH we are referring to GörliETH. Don't send mainnet ETH to your ZKOPRU wallet yet! 
Once you've got your ETH, make sure MetaMask is connected to the Görli testnet and head to the ZKOPRU [wallet](https://zkopru.network/). You'll need to connect an Ethereum account using MetaMask. Select the account you want to use and click _Next_, then _Connect_. You'll see a popup asking your permission to sync — the ZKOPRU wallet runs a client in the browser which needs to sync with the ZKOPRU network. MetaMask will prompt you to sign to unlock your ZKOPRU account and start the sync. ![](https://miro.medium.com/max/1400/0*TWLX-_TdNK0uWoR-) Syncing Zkopru The sync process could take a few minutes. Wait until the bottom left shows _Fully synced 100%_. If the site is blurred, double-check that MetaMask is connected to Görli. If you weren't connected to Görli you may need to refresh the page in order to start the sync. ![](https://miro.medium.com/max/1400/1*bG__U_qysCQ9xBqgrE2FtQ.png) ZKOPRU main page ## Deposit In order to start transacting on ZKOPRU, you'll need to deposit your ETH from Görli into ZKOPRU. On the left side of the main page, click _Deposit_. You'll see options to deposit ETH, ERC20s or both at the same time. The deposit transaction will require some ETH for the L1 transfer and an additional fee for the coordinator. We recommend you deposit at least 0.01 ETH — you'll also need it to pay coordinator fees for any ZKOPRU transactions. After confirming your transaction in MetaMask, head to the _History_ tab to check the deposit status. ![](https://miro.medium.com/max/1400/1*LY_SezdWuD4vTCsZaOYIkw.png) Depositing ## Transfer (Send & Receive) In order to make a private transfer on ZKOPRU, go to _Send_ on the main page, enter the recipient address, select the asset and amount you want to send and enter the fee for the coordinator. Remember that the recipient's ZKOPRU address is different from the Ethereum address — the recipient can generate it by clicking _Receive_ on the ZKOPRU main page, then copy it to send to you.
![](https://miro.medium.com/max/1400/0*34CuL1JkOPxxBuYx) ZKOPRU Address ![](https://miro.medium.com/max/1400/1*JTChF3QmNF6UTWZO42CHew.png) Transfer After hitting _Send_, your transaction is relayed to the coordinator. The actual transfer can take a while if there is not a lot of activity on the network, because the coordinator has to accumulate enough transactions that the combined fees will cover the cost of submitting a batch. Since GörliETH is free you can splash out a bit and use a 2500 Gwei transaction fee to help the poor coordinator submit the batch right away. We are building an instant finality mechanism to make that faster in the future :). After the transfer you will see something like this in the _My Wallet_ section: ![](https://miro.medium.com/max/634/0*Vz3tHJi4T7GddChn) This means that your available balance is currently locked until the transfer succeeds. ZKOPRU, like Bitcoin, uses the UTXO model and you can see your notes' info by hovering over the "i" next to your balance. ## Withdraw If you want your assets back on Görli, you'll need to withdraw them from ZKOPRU. Head to _Withdraw_ on the main page, select the asset you want to withdraw and enter the amount as well as the fee for the coordinator. The withdrawal will be initiated once the coordinator has enough transactions lined up to make submission of the batch economical (this can be a few hours). Unlike a transfer, you won't be able to meaningfully speed up the withdrawal via a higher transaction fee. ZKOPRU, like other optimistic rollups, requires a 7-day delay period for withdrawals. So even if you pay enough to incentivize the coordinator to submit the batch a few minutes sooner, you'll still have to wait 7 days for your assets to be available. This delay serves an important security function, but it's a UX annoyance — we're also working on an instant withdrawal mechanism so you'll have options to get around the withdrawal delay in the future.
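The coordinator's batching economics described above can be modelled in a couple of lines (invented numbers, sketch only — the real coordinator's policy is more involved):

```typescript
// Toy model of the coordinator economics: a batch is worth submitting once
// the queued transaction fees cover the cost of the L1 submission.
// All numbers are invented for illustration.
const shouldSubmitBatch = (queuedFeesGwei: number[], l1CostGwei: number): boolean =>
  queuedFeesGwei.reduce((total, fee) => total + fee, 0) >= l1CostGwei;

console.log(shouldSubmitBatch([500, 700], 2000));       // false — keep accumulating
console.log(shouldSubmitBatch([500, 700, 2500], 2000)); // true — one generous fee tips the batch over
```

This is why a single high-fee transaction can make the whole batch economical right away, while withdrawals, capped by the 7-day delay, gain nothing from the same trick.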
![](https://miro.medium.com/max/1400/0*Jdkh8xVV1w2s3TjF) ## UI Research Rachel, our awesome designer, has conducted user acceptance testing with users who don't work in crypto. Users with varying levels of crypto knowledge were asked to complete tasks like adding and withdrawing assets, and describe their experience and impressions. It was especially interesting to hear our users' first reactions to features we're excited about, like multi-asset deposits — a good reminder that a new feature is also a new experience for a user, and it's our job to get them oriented so they can be as excited about it as we are. You can find the report [here](https://github.com/zkopru-network/resources/tree/main/ui-ux/wallet). We hope it will be useful for others working on similar design challenges! ## Conclusion ZKOPRU is on testnet! Now [go ahead and make some GörliETH private](https://zkopru.network/wallet). If everything goes smoothly for a few weeks on testnet, we will cut an official release. Stay tuned for the next post, where we will explain more details on how to run a coordinator and how ZKOPRU can be deployed to mainnet. If you are interested in learning more about ZKOPRU check out our [Twitter](https://twitter.com/zkoprunetwork), [Medium](https://medium.com/privacy-scaling-explorations) and [documentation](https://docs.zkopru.network/). Join our brand new [Discord](http://discord.gg/vchXmtWK5Z) and please report any bugs and issues there. Contributors are welcome — see our [good first issues](https://github.com/zkopru-network/zkopru/labels/good%20first%20issue) on Github. ]]> zkopru privacy scaling zero-knowledge proofs l2 optimistic rollup transaction privacy ethereum wallets infrastructure/protocol <![CDATA[Zkopru Trusted Setup Ceremony]]> https://pse.dev/blog/zkopru-trusted-setup-ceremony https://pse.dev/blog/zkopru-trusted-setup-ceremony Fri, 26 Aug 2022 00:00:00 GMT Menu >Logout, then Login, and launch again. 
It won't run any circuits, but it might pick up your hashes and allow you to tweet. Your browser might go blank; just refresh and restart, and it will pick up where you left off. Don't see your contribution hash for any or all circuits? In that case something went wrong and your contribution was discarded. We will give any participant with failed contributions a second chance. Encountering any issues? Let us know in the Zkopru Telegram group. ## How to verify? After your participation you will be presented with a contribution hash. We will make the files available to download and you will be able to verify your contribution (see more info [here](https://github.com/glamperd/setup-mpc-ui#verifying-the-ceremony-files)). You can also contribute via the CLI if you want more control; ask about it in our [telegram](https://t.me/zkopru) group. ## What's the timeline? The ceremony will run for at least 2 weeks from now. Once we have enough contributions we will announce a public random beacon for the last contribution. ## Want to learn more? Source code for the ceremony is available [here](https://github.com/glamperd/setup-mpc-ui#verifying-the-ceremony-files). Contribution computation is performed in the browser. The computation code is compiled to WASM, based on the repo above, a fork of Kobi Gurkan's phase 2 computation module which has been [audited](https://research.nccgroup.com/2020/06/24/security-considerations-of-zk-snark-parameter-multi-party-computation/). We made these unaudited changes: for the WASM build, we return the result hash to the caller; also for the WASM build, we report progress by invoking a callback; and we corrected errors in progress report count totals. ## More Questions? [Join](https://t.me/zkopru) our Telegram group.
]]> zkopru trusted setup ceremony zero-knowledge proofs privacy scaling optimistic rollup ethereum security cryptography <![CDATA[Zkopru Trusted Setup Completed - Privacy Stewards of Ethereum]]> https://pse.dev/blog/zkopru-trusted-setup-completed-privacy-scaling-explorations https://pse.dev/blog/zkopru-trusted-setup-completed-privacy-scaling-explorations Fri, 26 Aug 2022 00:00:00 GMT zkopru trusted setup ceremony zero-knowledge proofs privacy scaling ethereum security <![CDATA[Zkopru: Wat, Y & Wen - Privacy Stewards of Ethereum]]> https://pse.dev/blog/zkopru-wat-y-wen-privacy-scaling-explorations https://pse.dev/blog/zkopru-wat-y-wen-privacy-scaling-explorations Fri, 26 Aug 2022 00:00:00 GMT zkopru optimistic rollup zero-knowledge proofs privacy scaling ethereum l2 transaction privacy utxo infrastructure/protocol