<![CDATA[Reddio Technology Blog]]>
https://blog.reddio.com/
Ghost 5.89
Fri, 20 Mar 2026 09:08:39 GMT

<![CDATA[Reddio DeFi Genesis: On-Chain Yield Begins]]>
Time-Weighted Staking + Monthly APY


]]>
https://blog.reddio.com/defi-genesis/
Fri, 30 May 2025 13:42:30 GMT
Time-Weighted Staking + Monthly APY
Reddio DeFi Genesis: On-Chain Yield Begins

Reddio proudly unveils DeFi Genesis — a 3-month staking campaign combining time-weighted rewards and monthly APY mechanics to reward early, committed participants in our ecosystem.

Staking will begin on Ethereum for the initial phase and migrate smoothly to the Reddio Mainnet by the end of June.


🧩 How It Works

  • Total Reward Pool: 21,000,000 RDO (0.21% of total supply)
  • Duration: 3 months (June–August)
  • Monthly Distribution: 7,000,000 RDO
  • Formula: User Reward = (Staked Amount × Seconds Held) ÷ Total Weighted Stake × Monthly Pool
  • Unstake anytime
  • Monthly payouts are calculated and settled at the end of each month
  • Early withdrawal: Any wallet that unstakes before the month-end snapshot keeps only 70% of its earned reward. The remaining 30% is redistributed to wallets that stayed staked for the entire month as a loyalty bonus.
  • Staking will occur exclusively on Ethereum, with a planned migration to the Reddio Mainnet by end of June
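The reward mechanics above can be sketched in a few lines of Python. This is a hypothetical illustration of the published formula and the 70/30 early-withdrawal split; the function names and structure are ours, not Reddio's contract code.

```python
# Hypothetical sketch of the time-weighted reward formula; not official
# Reddio code. Pool size and the 70/30 split come from the campaign text.

MONTHLY_POOL = 7_000_000  # RDO distributed each month

def weight(staked_amount: float, seconds_held: float) -> float:
    """A staker's weight: amount multiplied by time held."""
    return staked_amount * seconds_held

def monthly_reward(staked_amount: float, seconds_held: float,
                   total_weighted_stake: float,
                   unstaked_early: bool = False) -> float:
    """Pro-rata share of the monthly pool; early exit keeps only 70%."""
    share = weight(staked_amount, seconds_held) / total_weighted_stake
    reward = share * MONTHLY_POOL
    # The forfeited 30% is redistributed to full-month stakers (not modeled here).
    return reward * 0.70 if unstaked_early else reward

# Example: two wallets staking 1,000,000 RDO each over a 30-day month,
# one for the full month and one joining halfway through.
month = 30 * 24 * 3600
total = weight(1_000_000, month) + weight(1_000_000, month / 2)
full_month = monthly_reward(1_000_000, month, total)       # 2/3 of the pool
half_month = monthly_reward(1_000_000, month / 2, total)   # 1/3 of the pool
```

As the example shows, halving the time staked halves the weight, which is exactly the behavior the time-weighted entry table describes.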

📈 APY by Entry Time (Time-Weighted)

| Stake Start Date | Weight Estimate | Estimated APY |
| --- | --- | --- |
| Day 1 | 100% | 100% |
| Day 10 | ~66% | ~66% |
| Day 20 | ~33% | ~33% |
| Day 29 | ~3% | ~3% |
Rewards are based on how long and how much you stake — not just a snapshot.

📈 Estimated APY by Total TVL (Monthly)

| Total RDO Staked Across Partners | Monthly Yield | Estimated Annual APY |
| --- | --- | --- |
| 100M RDO | 7% | ~28% |
| 200M RDO | 3.5% | ~14% |
| 300M RDO | 2.3% | ~9.3% |
The earlier you stake and the fewer total stakers there are, the greater your reward.

📅 Campaign Timeline

  • May 30 — Announcement + TGE
  • June 1 — Month 1 begins
  • June 30 — Month 1 snapshot + rewards
  • July 1 — Month 2 begins
  • July 31 — Month 2 snapshot + rewards
  • August 1 — Month 3 begins
  • August 31 — Final snapshot + rewards

👉 Start staking once you claim: https://airdrop.reddio.com/

Staking Partners
Native - Credit-Based Liquidity Pools
Native is an on-chain platform for building liquidity that is openly accessible and cost-effective.
QuBit
more coming...

Stake early. Stake long. Build with Reddio.


]]>
<![CDATA[Chapter 1: Understanding Blockchain Scalability Challenges]]>
Introduction


]]>
https://blog.reddio.com/chapter-1-understanding-blockchain-scalability-challenges/
Sun, 16 Feb 2025 17:30:23 GMT
Chapter 1: Understanding Blockchain Scalability Challenges

Blockchain technology promises a decentralized, secure, and transparent approach to handling digital transactions and computation. However, as its adoption grows, scalability remains a significant barrier preventing blockchain networks from achieving mass adoption. Unlike traditional centralized systems like VISA or PayPal, which process thousands of transactions per second (TPS), major blockchain networks like Ethereum and Bitcoin struggle to achieve even a fraction of that throughput.

Scalability is critical not just for financial transactions but also for broader applications like gaming, AI-driven agents, and supply chain tracking. This chapter introduces blockchain scalability issues, examining real-world bottlenecks, past challenges, and industry efforts to redefine what scalability means in a decentralized system.


The Scalability Gap: A Comparative View

To understand blockchain’s scalability problem, we must compare its performance with traditional financial networks:

| System | Transactions Per Second (TPS) | Notes |
| --- | --- | --- |
| VISA | 24,000 | Centralized payment network |
| PayPal | 193 | Centralized digital payments |
| Ethereum | ~20 | General-purpose smart contracts |
| Bitcoin | ~7 | Secure but slow settlement |
| Solana | ~4,000 | High TPS, but network outages |
| Aptos | ~160,000 | Uses parallel execution (MoveVM) |

Ethereum is the second-largest cryptocurrency by market cap after Bitcoin, but it is much more than just a digital asset. Ethereum is a decentralized computing platform capable of running a wide variety of applications, including an entire ecosystem of decentralized finance (DeFi) protocols. However, despite its versatility, Ethereum’s ability to process only 20 TPS presents a major bottleneck.

As a decentralized world computer, Ethereum facilitates smart contracts, DeFi applications, and NFT transactions. If it is to serve as the backbone of an open, global financial system, it must be capable of handling a significantly higher transaction load. However, in its current form, Ethereum’s execution model requires network-wide consensus for every transaction, which severely limits throughput.

This limitation raises two fundamental concerns:

  1. Network Congestion: When too many users submit transactions simultaneously, the network struggles to handle the load.
  2. Gas Fees: Increased competition for block space leads to rising transaction fees, making blockchain transactions costly.

Scalability, however, is not unique to Ethereum. It is a universal challenge faced by virtually all blockchain networks, from Bitcoin’s 7 TPS to high-throughput chains like Solana and Aptos. Each blockchain approaches scalability differently, often making trade-offs between throughput, decentralization, and security, a concept we will explore in detail in later chapters.

This book is not just about Ethereum’s scalability; it is about blockchain scalability as a whole. Blockchain researchers are exploring Layer 2 scaling solutions, sharding, and alternative consensus mechanisms, and this book will examine these approaches in depth.


Understanding Gas Fees and Transaction Costs in Ethereum

Ethereum, like many other blockchains, operates as a decentralized computing platform, enabling smart contracts and decentralized applications (dApps) to execute code in a trustless manner. However, executing computations and storing data on Ethereum requires resources, which leads us to the concept of gas.

What Is Gas?

Gas is a fundamental unit in Ethereum that measures the computational work required to process transactions and execute smart contracts. Every operation performed by the Ethereum Virtual Machine (EVM) consumes a certain amount of gas.

For example:

  • A simple ETH transfer (sending ETH from one address to another) costs 21,000 gas.
  • Interacting with smart contracts (e.g., swapping tokens on Uniswap) may require significantly more gas, depending on the complexity of the operation.

Gas itself is not a currency—it is just a measurement unit. However, gas must be paid for in ETH on Ethereum, and this cost fluctuates based on network demand.

Why Do Gas Fees Fluctuate?

Ethereum transactions do not process at a fixed cost. Instead, users must bid for block space by offering a gas price, measured in gwei (1 gwei = 0.000000001 ETH). When the network is congested, users must compete to get their transactions included in the next block, leading to higher fees. For example, during the 2021 NFT boom, minting an NFT could cost upwards of $200 in gas fees, making it inaccessible for many users.

Gas fees depend on three main factors:

  1. Gas Limit: The maximum amount of gas a transaction is allowed to consume.
  2. Base Fee: A dynamically adjusted minimum fee set by Ethereum’s protocol based on network congestion.
  3. Priority Fee (Tip): An optional tip paid to miners/validators to prioritize transactions.
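To make the interplay of these three factors concrete, here is a minimal sketch of how a fee is computed under Ethereum's EIP-1559 fee model. The base-fee and tip values are illustrative examples, not live network numbers.

```python
# Illustrative fee computation under EIP-1559; the 30 gwei base fee and
# 2 gwei tip below are example values, not current network conditions.

GWEI_IN_ETH = 1e-9  # 1 gwei = 0.000000001 ETH

def tx_fee_eth(gas_used: int, base_fee_gwei: float, priority_fee_gwei: float) -> float:
    """Total fee in ETH: gas consumed times (base fee + tip), quoted in gwei."""
    return gas_used * (base_fee_gwei + priority_fee_gwei) * GWEI_IN_ETH

# A simple ETH transfer consumes 21,000 gas.
fee = tx_fee_eth(gas_used=21_000, base_fee_gwei=30, priority_fee_gwei=2)
# 21,000 gas × 32 gwei = 672,000 gwei = 0.000672 ETH
```

In practice the base fee is burned and only the tip goes to the validator, but the user's total cost is the sum shown here.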

The Impact of Gas Fees on dApps and Users

Gas fees play a crucial role in network security—they prevent spam attacks by making transactions costly. However, they also pose significant challenges:

  • High costs make small transactions impractical (e.g., buying a $5 NFT with a $50 gas fee).
  • Variable fees lead to unpredictable transaction costs.
  • Smart contract interactions (e.g., DeFi swaps, NFT minting) can be prohibitively expensive during peak congestion.

Ethereum Gas Fees and Scalability in Action

To better understand how gas fees and scalability impact the network, let’s examine a real-world example: CryptoKitties, one of the first dApps to expose Ethereum’s scalability limitations.


Case Study: CryptoKitties and Network Congestion

CryptoKitties, one of Ethereum’s earliest viral dApps, allowed users to breed and trade digital cats on-chain. The architecture was simple:

  • A web frontend connected to Ethereum smart contracts.
  • Smart contracts handling breeding, trading, and storage of the NFT assets.


Unlike traditional applications with centralized databases and backends, CryptoKitties relied entirely on Ethereum smart contracts for logic execution.

The Scaling Problem

At launch, CryptoKitties’ popularity overloaded the Ethereum network, causing:

  • Severe network congestion as thousands of users submitted transactions simultaneously.
  • Spikes in gas fees, making simple transactions expensive.
  • Delays in transaction confirmation, leading to a poor user experience.

At its peak, CryptoKitties accounted for over 10% of Ethereum’s total transaction volume, causing gas fees to spike by 500%. This event highlighted the limitations of blockchain scalability and prompted the industry to search for better performance metrics.

While Ethereum’s gas fees are a well-known example of transaction costs, other blockchains face similar challenges. Here’s how some popular networks handle fees and the trade-offs involved:

Bitcoin: Simplicity at a Cost

Bitcoin uses a fee market where users bid for block space. While this model is simple, it can lead to high fees during periods of congestion, as seen in December 2017 when average fees reached $55.

Solana: Low Fees, High Throughput

Solana offers extremely low fees (e.g., $0.00025 per transaction) due to its high throughput. However, its network has experienced congestion and outages during peak demand, highlighting the challenges of scaling without compromising reliability.

Binance Smart Chain: Lower Fees, Fewer Validators

BSC’s gas fees are paid in BNB and are generally lower than Ethereum’s. However, its smaller validator set raises concerns about centralization, and fees can still spike during periods of high demand.

Cardano: Predictable Fees

Cardano uses a fixed fee structure (e.g., 0.17 ADA per transaction), making costs predictable. However, its current throughput of ~250 TPS may limit its ability to handle large transaction volumes.

These examples illustrate that transaction costs and scalability challenges are universal in blockchain technology, though each network approaches them differently.

| Blockchain | Fee Mechanism | Average Fee | Challenges |
| --- | --- | --- | --- |
| Ethereum | Gas fees (bid-based) | $10–$50 (varies) | High fees during congestion |
| Bitcoin | Fee market (bid-based) | $1–$50 (varies) | High fees during congestion |
| Solana | Fixed fee | $0.00025 | Network congestion, outages |
| Binance Smart Chain | Gas fees (paid in BNB) | $0.10–$0.50 | Centralization concerns |
| Cardano | Fixed fee | 0.17 ADA | Limited throughput |
| Avalanche | Gas-like fees (paid in AVAX) | $0.01–$0.10 | Complexity of subnets |
| Polygon | Layer 2 fees (settled on Ethereum) | $0.01–$0.05 | Reliance on Ethereum for final settlement |

What Does “Scalability” Really Mean?

The term scalability is frequently used in blockchain discussions, but defining it precisely is challenging. Does it mean:

  • Higher transactions per second?
  • Faster block finalization?
  • More efficient use of hardware resources?
  • Achieving greater throughput without centralization?

Lack of a Formal Definition

In multiprocessor computing, scalability is commonly discussed, but a widely accepted technical definition is lacking. In a seminal research paper, Mark D. Hill notes:

“Scalability is a frequently claimed attribute of multiprocessor systems. While the basic concept is intuitive, there is no generally accepted definition of scalability.” 1

This ambiguity extends to blockchain. Without a standard metric, projects often define scalability in ways that serve their marketing rather than technical clarity.


Defining Scalability: Lessons from Databases

Scalability is a concept that transcends blockchain technology. To clearly define blockchain scalability, it’s helpful to first explore how scalability is defined and measured in traditional databases—systems that have been optimizing for performance and growth for decades. By understanding the principles of database scalability, we can better appreciate the unique challenges and opportunities in blockchain systems.

What Is Database Scalability?

In the context of databases, scalability refers to the system’s ability to handle increasing workloads—such as more users, transactions, or data—without degrading performance. A scalable database can grow to meet demand, whether by adding more resources to a single machine (vertical scaling) or distributing the workload across multiple machines (horizontal scaling).

How Is Database Scalability Measured?

Database scalability is typically quantified using the following metrics:

  • Throughput: The number of transactions or queries the system can process per second (TPS or QPS).
  • Latency: The time it takes to complete a single transaction or query.
  • Resource Utilization: How efficiently the system uses hardware resources (e.g., CPU, memory, storage).
  • Elasticity: The ability to scale up or down dynamically in response to changing workloads.

These metrics provide a clear framework for evaluating scalability, whether in centralized databases or decentralized blockchains.
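Two of these metrics, throughput and latency, are simple to measure against any transaction-processing function. The sketch below uses an in-memory key-value write as a stand-in workload; it is a toy harness for illustration, not a real benchmark suite.

```python
# Toy harness measuring throughput (TPS) and mean latency for an arbitrary
# per-transaction function. The workload is a stand-in, not a database client.
import time

def measure(process_tx, n: int):
    """Run n transactions; return (throughput in TPS, mean latency in seconds)."""
    latencies = []
    start = time.perf_counter()
    for i in range(n):
        t0 = time.perf_counter()
        process_tx(i)
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return n / elapsed, sum(latencies) / n

# Stand-in workload: an in-memory key-value write.
store = {}
tps, mean_latency = measure(lambda i: store.__setitem__(i, i * i), 10_000)
```

Real benchmarks such as TPC-C add carefully specified workload mixes, warm-up phases, and percentile latencies on top of this basic shape.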


Fundamental Problems in Blockchain Scalability

While traditional databases have largely solved scalability through centralized or semi-centralized approaches, blockchains face unique challenges due to their decentralized nature. The core problems in blockchain scalability stem from three fundamental requirements:

  1. Replicated Computation → Every node in the network processes all transactions, leading to redundant computation.
  2. Replicated Storage → Every node stores all historical data, resulting in significant storage overhead.
  3. Consensus Overhead → Nodes must agree on the total ordering of transactions, which introduces communication and coordination costs.

These requirements create a scalability trilemma: achieving high throughput, low latency, and decentralization simultaneously is extremely difficult.

For example:

  • In Bitcoin and Ethereum, every node processes all transactions and stores the entire blockchain, limiting throughput and increasing latency.
  • Consensus protocols like Proof of Work (PoW) or Proof of Stake (PoS) add significant overhead, further reducing scalability.

Fundamental Challenges for Scalable Blockchain Systems

To address these problems, the blockchain community is exploring whether it’s possible to achieve:

  • Partial Transaction Processing → Can nodes process only a subset of transactions, rather than all of them?
  • Partial Data Storage → Can nodes store only a portion of the blockchain data, rather than the entire history?
  • Efficient Consensus → Can consensus protocols be optimized to reduce communication overhead while maintaining security?

These challenges are often framed in terms of three key properties:

  • State Validity → Ensuring that the state of the blockchain (e.g., account balances, smart contract states) is correct and consistent across nodes.
  • Data Availability → Ensuring that all necessary data is available for validation, even if nodes only store partial data.
  • Byzantine Adversary Resistance → Ensuring that the system remains secure and consistent even in the presence of malicious actors.

Defining Blockchain Scalability

Given these challenges, we can define blockchain scalability as the ability of a blockchain system to:

  • Increase throughput (transactions per second) without significantly increasing latency.
  • Reduce resource usage (computation, storage, and communication) while maintaining decentralization and security.
  • Scale dynamically to handle growing workloads, such as more users, transactions, or smart contract interactions.

Unlike traditional databases, blockchain scalability must be achieved without compromising the core principles of decentralization, security, and immutability. This makes scalability one of the most pressing challenges in blockchain technology today.


Learning from Traditional Systems: Benchmarking Scalability

Now that we have defined blockchain scalability and examined its fundamental challenges, a natural question arises: how do we measure scalability effectively? The blockchain industry still lacks a universal standard for benchmarking scalability, making it difficult to compare different systems objectively.

To better understand the importance of benchmarking, we can turn to database systems, which have been optimizing for performance and scalability for decades. The benchmarking methodologies used in databases provide valuable insights into how structured performance evaluation can drive improvements and innovation.


Benchmarking Databases: A Systematic Approach

In the database industry, benchmarking plays a crucial role in evaluating performance, scalability, and efficiency. Over decades, database systems have developed structured benchmarking methodologies that help compare different architectures under standardized conditions. These benchmarks are essential because they provide a consistent, repeatable way to measure how systems handle increasing workloads, allowing developers and researchers to optimize performance.


How Are Databases Benchmarked?

Databases are benchmarked using standardized testing frameworks that assess performance across various workloads. Some of the most widely used database benchmarks include:

  • TPC-C – Measures online transaction processing (OLTP) performance, simulating real-world e-commerce workloads.
  • TPC-H – Evaluates decision-support systems and complex queries.
  • YCSB (Yahoo! Cloud Serving Benchmark) – Designed for benchmarking NoSQL databases and key-value stores.
  • OLTPBench – A framework that supports multiple transactional workloads for relational databases.

Each benchmark focuses on key performance indicators such as:

  • Throughput (Transactions Per Second, TPS) – Measures how many transactions the system can process within a given time.
  • Latency – Assesses the delay between submitting a query and receiving a response.
  • Scalability – Evaluates how well the system adapts as the number of users, queries, or nodes increases.
  • Concurrency Handling – Determines the system’s ability to process multiple operations simultaneously.
  • Resource Utilization – Examines how efficiently the system uses CPU, memory, and storage.

These benchmarks follow rigorous methodologies, ensuring fair comparisons across different database architectures, whether relational (SQL) or NoSQL systems.


Why Is Benchmarking Important?

  1. Standardization – It allows for objective comparisons between different database implementations.
  2. Optimization – Helps engineers identify bottlenecks and optimize performance.
  3. Scalability Insights – Demonstrates how a database performs under real-world, high-load conditions.
  4. Industry Adoption – A well-established benchmark can influence technology adoption by enterprises.

Without proper benchmarking, database performance claims would be inconsistent, misleading, or difficult to verify. The structured benchmarking frameworks provide scientific rigor to ensure that improvements in performance are measurable and reproducible.


Challenges in Benchmarking Databases

While database benchmarking has been widely adopted, it is not without challenges:

  • Diverse Workloads → Different databases are optimized for different use cases (OLTP vs. OLAP), making direct comparisons difficult.
  • Hardware Variability → Performance can be heavily influenced by underlying infrastructure, requiring careful test standardization.
  • Tuning & Optimization → Some databases require extensive manual tuning to perform well in benchmarks, which may not reflect real-world conditions.
  • Scalability Metrics → Traditional benchmarks measure centralized scalability, but distributed systems introduce new variables like consistency models, replication lag, and fault tolerance.

Despite these challenges, database benchmarking remains one of the most reliable ways to evaluate system performance, providing valuable insights for system architects and engineers.


Relevance to Blockchain Benchmarking

Understanding how databases are benchmarked helps us appreciate why benchmarking blockchains is even more complex. Unlike traditional databases, blockchains introduce decentralization, consensus mechanisms, and cryptographic constraints, making performance evaluation far more challenging.

In the next section, we’ll explore how the blockchain industry is attempting to develop standardized benchmarking frameworks, such as BLOCKBENCH, to measure blockchain scalability systematically.


Benchmarking Blockchain Scalability: BlockBench & Gas Per Second

As blockchain adoption grows, the need for scalability benchmarking becomes increasingly important. Unlike traditional databases, where performance can be measured using well-established benchmarks like TPC-C and YCSB, blockchain lacks a universal standard for measuring scalability. This makes it difficult to compare different blockchain implementations objectively.

Two emerging approaches—BlockBench and Gas Per Second (GPS)—offer early attempts to standardize blockchain performance metrics.


BlockBench: A First Step Toward Blockchain Benchmarking

BlockBench 2 is one of the earliest frameworks developed to benchmark private (permissioned) blockchains. It introduces a structured methodology for evaluating blockchain scalability, focusing on three key layers:

  1. Consensus Layer – Measures how different consensus algorithms (e.g., PBFT, PoW, PoA) affect performance.
  2. Data Layer – Analyzes blockchain storage models and how they impact read/write speeds.
  3. Execution Layer – Benchmarks smart contract execution speed and efficiency, particularly for EVM-based chains.

BlockBench evaluates throughput, latency, and fault tolerance using real-world workloads, such as key-value storage benchmarks (YCSB) and OLTP-style transactions.

However, while BlockBench provides a useful starting point, its focus is primarily on private blockchains, making it less relevant for public, high-throughput blockchains like Ethereum.


Gas Per Second: A More Accurate Measure for EVM Chains

While Transactions Per Second (TPS) is commonly used to measure blockchain performance, it has limitations—not all transactions consume the same computational resources. A more precise metric, Gas Per Second (GPS) 3, offers a better way to benchmark Ethereum and EVM-compatible blockchains.

Why GPS Matters

  • Gas measures computational effort, not just transaction count.
  • GPS accounts for both execution and storage costs, making it a better performance indicator than TPS.
  • Helps prevent DoS attacks by ensuring that performance evaluations account for resource usage, not just raw transaction count.

GPS is calculated as:

Gas Per Second = (Target Gas Usage Per Block) / (Block Time)

This metric allows researchers and developers to compare execution performance across different Ethereum-based Layer 1 and Layer 2 chains, offering a standardized way to assess scalability.
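Plugging in the parameters used in the cited Reth article, a 15,000,000 target gas per block and a 12-second block time, gives a concrete baseline for Ethereum mainnet:

```python
# Worked example of the gas-per-second formula above, using the 15M gas
# target and 12 s block time from the cited Reth article.

def gas_per_second(target_gas_per_block: int, block_time_s: float) -> float:
    return target_gas_per_block / block_time_s

ethereum_gps = gas_per_second(15_000_000, 12)  # 1,250,000 gas per second
```

Layer 1 and Layer 2 chains advertising higher throughput can be compared on the same footing by substituting their own gas targets and block times.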


The Road Ahead for Blockchain Benchmarking

While BlockBench and GPS are steps in the right direction, blockchain benchmarking is still in its infancy. A comprehensive performance benchmark should account for:

  1. Execution scalability – How efficiently smart contracts are processed.
  2. State growth impact – The cost of managing increasing blockchain state sizes.
  3. Hardware utilization – How well different clients optimize CPU and storage usage.
  4. Cross-chain interoperability – Performance across modular execution layers.

Standardizing blockchain benchmarks will require ongoing collaboration between developers, researchers, and infrastructure providers. As blockchains move beyond experimental scaling models, rigorous benchmarking will be essential to ensuring that new architectures deliver real performance gains without compromising decentralization or security.


Why This Matters

Understanding database scalability provides a useful benchmark for evaluating blockchain scalability. However, the decentralized nature of blockchains introduces unique constraints that require innovative solutions. By addressing the fundamental problems of replicated computation, replicated storage, and consensus overhead, and tackling the fundamental challenges of state validity, data availability, and Byzantine adversary resistance, the blockchain community can pave the way for scalable, high-performance systems.

In the next section, we’ll explore how these challenges are being addressed through Layer 1 and Layer 2 solutions, as well as technologies like sharding and rollups.

  1. Mark D. Hill, “What Is Scalability?” Available here
  2. Tien Tuan Anh Dinh et al., “BLOCKBENCH: A Framework for Analyzing Private Blockchains.” Available here
  3. Georgios Konstantopoulos, “Reth’s path to 1 gigagas per second, and beyond.” Available here
]]>
<![CDATA[Introducing the Reddio Foundry Program]]>

]]>
https://blog.reddio.com/introducing-the-reddio-foundry-program/
Fri, 07 Feb 2025 03:41:30 GMT

Reddio is excited to launch the Reddio Foundry Program, an initiative designed to empower visionary developers and forward-thinking companies to redefine what’s possible with blockchain. Built on Reddio’s cutting-edge parallel zkEVM infrastructure, the Foundry Program enables groundbreaking projects in AI, DeFi, on-chain gaming, and beyond.

Join a growing ecosystem that’s unlocking the next era of blockchain scalability, innovation, and impact.


Why Join the Reddio Foundry Program?

Technology and innovation thrive when communities come together. The Reddio Foundry Program is your gateway to tools, resources, and exclusive opportunities that can help you forge something truly remarkable.

As part of the program, you will:

• 🌟 Showcase Your Work: Participate in Demo Days and gain exposure to top investors, partners, and the broader blockchain community.

• 🛠 Get Technical Support: Collaborate directly with Reddio’s engineers to optimize your applications for performance and scalability.

• 💸 Access Exclusive Benefits: Enjoy gas fee waivers, grants, and other unique perks designed to support your project’s growth.


What Kind of Companies and Ideas Are We Looking For?

We’re searching for trailblazing projects that push boundaries in these focus areas:

🤖 AI Agents & First-Ever MAVI On-Chain

Reddio is revolutionizing how AI agents interact and transact. From meme token agents to micropayment agents, we enable intelligent, automated financial operations. One of our partners has already deployed the first-ever Multi-Agent Verifiable Interop Framework (MAVI), creating exciting new opportunities for cross-agent interactions on-chain.

🔗 AI Inference On-Chain

Imagine advanced AI models running entirely on the blockchain. Reddio is powering the next leap in blockchain-integrated AI, unlocking Autonomous AI and enabling complex inference operations on-chain. If your project explores AI and blockchain intersections, we want to hear from you.

💹 Orderbook-Based DEX & DeFi

We’re reimagining decentralized finance with a fully on-chain order book and matching engine. By enhancing parallel execution and optimizing high-frequency orders, Reddio delivers seamless trades and liquidity solutions. Projects pushing the boundaries of DeFi are a perfect fit for our ecosystem.

🎮 On-Chain Gaming

Game logic executed entirely on-chain is the future of decentralized gaming. Reddio provides the infrastructure for developers to create immersive, trustless gaming experiences with unparalleled scalability and reliability. If you’re building the next big thing in gaming, we’re ready to support you.


What Does It Mean to Join the Foundry?

The Reddio Foundry Program is not just about building—it’s about leading. Members of the program are innovators, creators, and visionaries who are shaping the future of blockchain technology.

Your Journey in the Foundry Program:

1️⃣ Build: Deploy your smart contracts on Reddio’s testnet and unlock the power of parallel zkEVM.

2️⃣ Collaborate: Work closely with our expert team to refine and scale your solutions.

3️⃣ Shine: Take your place in the spotlight with opportunities to showcase your work at Reddio Demo Days and beyond.


Ready to Forge Ahead?

1️⃣ Understand the Technology: Learn about Reddio’s high-performance parallel zkEVM and how it can power your ideas. Start here: docs.reddio.com/zkevm/overview.

2️⃣ Gear Up Your Ideas: Refine your vision and strategize how to leverage Reddio’s cutting-edge infrastructure for your project.

3️⃣ Deploy on Testnet: Once ready, visit https://docs.reddio.com/zkevm/developerguide to deploy your smart contract and test your solution on the testnet.

4️⃣ Apply to the Program: Submit your application here to join the Reddio Foundry Program and gain access to exclusive support, resources, and opportunities to showcase your innovation.

The Reddio Foundry Program is designed to elevate your project to new heights, offering the support, resources, and recognition you need to succeed. Whether you’re a startup or an established company, this is your chance to forge something extraordinary with Reddio.


Let’s Build the Future, Together

This is your opportunity to be part of a movement that is reshaping the blockchain landscape. At Reddio, we believe that the future is built by those who dare to dream—and we’re here to make that dream a reality.

Don’t wait. Apply today and start forging the future with the Reddio Foundry Program. 🚀

]]>
<![CDATA[Join the Reddio Advocate Program: Driving the Future of Blockchain and AI]]>

]]>
https://blog.reddio.com/join-the-reddio-advocate-program-driving-the-future-of-blockchain-and-ai/
Wed, 22 Jan 2025 10:44:54 GMT

Reddio is on a mission to revolutionize the blockchain space with our cutting-edge parallel zkEVM technology, optimized for high-performance applications and AI integration. We’re looking for passionate advocates to help accelerate Reddio’s adoption globally and bring our vision of a scalable, efficient blockchain solution to communities worldwide.

Why Become a Reddio Advocate?

Technology and community are at the core of expanding the Reddio ecosystem. As a Reddio Advocate, you’ll play a crucial role in educating and connecting with developers, innovators, and blockchain enthusiasts about the potential of Reddio to transform how we interact with decentralized technologies.

As part of the Reddio Advocate Program, you will:

Connect with Innovators: Join a network of engineers, researchers, and tech leaders at the forefront of blockchain and AI.

Professional Growth: Gain recognition in the industry and opportunities to advance your career within the blockchain space.

Exclusive Access: Attend leading conferences and receive invites to Reddio-only events.

Networking Opportunities: Build relationships with key players in the industry through our exclusive, advocate-only communication channels.

Special Swag: Get your hands on stylish Reddio gear.

What Does It Take to Be a Reddio Advocate?

Reddio Advocates are dynamic, engaged, and committed to spreading the word about our technology. Responsibilities include:

Hosting and Organizing Events: Whether virtual or in-person, bring the blockchain community together under the Reddio banner.

Content Creation: From blogs to tutorials, create engaging content that resonates with both newcomers and seasoned developers.

Community Interaction: Be an active voice in forums and social media, guiding discussions and providing insights.

Mentorship: Help onboard new community members and support them in understanding and utilizing Reddio’s technology.

Apply to Become a Reddio Advocate Today

Ready to make a mark in the blockchain industry with Reddio? Fill out our Advocate Application form. Our team reviews each application thoroughly and reaches out to potential candidates for a follow-up discussion. If selected, you’ll be welcomed into the program, paired with a mentor, and given all the tools and resources needed to succeed.

This is an exciting time to join as Reddio continues to grow and innovate. As the blockchain landscape evolves, Reddio Advocates will be at the forefront, leading the charge in smart contract development and AI on blockchain.

To learn more about Reddio, visit our website, and follow us on Twitter.

Embark on this journey with us to shape the future of blockchain technology and build a community that harnesses the full potential of Reddio’s innovations.

]]>
<![CDATA[Reddio Technology Overview: From Parallel EVM to AI Integration]]>In the fast-evolving world of blockchain technology, performance optimization has become a critical concern. Ethereum’s roadmap has made it clear that Rollups are central to its scalability strategy. However, the serial nature of EVM transaction processing remains a bottleneck, unable to meet the demands of high-concurrency scenarios of

]]>
https://blog.reddio.com/reddio-technology-overview-from-parallel-evm-to-ai-integration/67581451eca10a0001824caaTue, 10 Dec 2024 10:28:44 GMT

In the fast-evolving world of blockchain technology, performance optimization has become a critical concern. Ethereum’s roadmap has made it clear that Rollups are central to its scalability strategy. However, the serial nature of EVM transaction processing remains a bottleneck, unable to meet the demands of high-concurrency scenarios of the future.

In a previous article—"Exploring the Path to Parallel EVM Optimization with Reddio"—we provided a brief overview of Reddio’s parallel EVM design. Today, we will delve deeper into its technical solutions and explore scenarios where it intersects with AI.

Since Reddio’s technical framework incorporates CuEVM, a project leveraging GPUs to enhance EVM execution, let’s begin with an introduction to CuEVM.

Overview of CUDA

CuEVM is a project that accelerates the EVM using GPUs by translating Ethereum EVM opcodes into CUDA kernels for parallel execution on NVIDIA GPUs. Leveraging the parallel computing capabilities of GPUs improves the execution efficiency of EVM instructions. CUDA, a term familiar to NVIDIA GPU users, stands for Compute Unified Device Architecture: a parallel computing platform and programming model developed by NVIDIA. It enables developers to harness GPU parallel computing for general-purpose computation (such as crypto mining, ZK operations, etc.), beyond just graphics processing.

As an open parallel computing framework, CUDA is essentially an extension of the C/C++ language, allowing any programmer familiar with low-level C/C++ to quickly get started. A key concept in CUDA is the kernel function, a type of C++ function.

// define the kernel function
__global__ void kernel_evm(cgbn_error_report_t *report, CuEVM::evm_instance_t *instances, uint32_t count) {
    int32_t instance = (blockIdx.x * blockDim.x + threadIdx.x) / CuEVM::cgbn_tpi;
    if (instance >= count) return;
    CuEVM::ArithEnv arith(cgbn_no_checks, report, instance);
    CuEVM::bn_t test;
}

Unlike regular C++ functions that execute once, kernel functions execute N times in parallel across CUDA threads when invoked using the <<<...>>> syntax.

#ifdef GPU
    // TODO remove DEBUG num instances
    // num_instances = 1;
    printf("Running on GPU %d %d\n", num_instances, CuEVM::cgbn_tpi);
    // run the evm
    kernel_evm<<<num_instances, CuEVM::cgbn_tpi>>>(report, instances_data, num_instances);
    CUDA_CHECK(cudaDeviceSynchronize());
    CUDA_CHECK(cudaGetLastError());
    printf("GPU kernel finished\n");
    CGBN_CHECK(report);
#endif

Each CUDA thread is assigned a unique thread ID and is organized hierarchically into blocks and grids, which manage a large number of parallel threads. Developers use NVIDIA’s nvcc compiler to compile CUDA code into programs runnable on GPUs.


CuEVM Workflow

With a basic understanding of CUDA, we can examine CuEVM’s workflow.

The main entry point of CuEVM is run_interpreter, which accepts transactions to be processed in parallel in the form of a JSON file. The inputs consist of standard EVM content, requiring no additional handling or translation by developers.

void run_interpreter(char *read_json_filename, char *write_json_filename, size_t clones, bool verbose = false) {
    CuEVM::evm_instance_t *instances_data;
    CuEVM::ArithEnv arith(cgbn_no_checks, 0);
    printf("Running the interpreter\n");
}

Within run_interpreter(), the CUDA-defined <<<...>>> syntax is used to invoke the kernel function kernel_evm(). As mentioned earlier, kernel functions are executed in parallel within the GPU.

void run_interpreter(char *read_json_filename, char *write_json_filename, size_t clones, bool verbose = false) {
    cJSON_ArrayForEach(test_json, read_root) {
#ifdef GPU
        // TODO remove DEBUG num instances
        // num_instances = 1;
        printf("Running on GPU %d %d\n", num_instances, CuEVM::cgbn_tpi);
        // run the evm
        kernel_evm<<<num_instances, CuEVM::cgbn_tpi>>>(report, instances_data, num_instances);
        CUDA_CHECK(cudaDeviceSynchronize());
        CUDA_CHECK(cudaGetLastError());
        printf("GPU kernel finished\n");
        CGBN_CHECK(report);
#endif
    }
}

Inside kernel_evm(), the function evm->run() is called. This function contains numerous branch conditions to convert EVM opcodes into CUDA operations.

namespace CuEVM {
    __host__ __device__ void evm_t::run(ArithEnv &arith) {
        while (status == ERROR_SUCCESS) {
            if ((opcode & 0xF0) == 0x80) {  // DUPX
                error_code = CuEVM::operations::DUPX(arith, call_state_ptr->gas_limit, 
                                                     *call_state_ptr->stack_ptr, opcode);
            } else if ((opcode & 0xF0) == 0x90) {  // SWAPX
                error_code = CuEVM::operations::SWAPX(arith, call_state_ptr->gas_limit, 
                                                      *call_state_ptr->stack_ptr, opcode);
            } else if ((opcode >= 0xA0) && (opcode <= 0xA4)) {  // LOGX
                error_code = CuEVM::operations::LOGX(arith, call_state_ptr->gas_limit, 
                                                     *call_state_ptr->stack_ptr, 
                                                     *call_state_ptr->message_ptr, call_state_ptr);
            } else {
                switch (opcode) {
                    case OP_STOP:
                        error_code = CuEVM::operations::STOP(*call_state_ptr->parent->last_op);
                        break;
                    case OP_ADD:
                        error_code = CuEVM::operations::ADD(arith, call_state_ptr->gas_limit, 
                                                            *call_state_ptr->stack_ptr);
                        break;
                }
            }
        }
    }
}

For example, the EVM opcode OP_ADD (addition) is translated into cgbn_add, utilizing CGBN (Cooperative Groups Big Numbers), a high-performance library for multi-precision integer arithmetic in CUDA.

__host__ __device__ int32_t ADD(ArithEnv &arith, bn_t &gas_used, CuEVM::evm_stack_t &stack) {
    cgbn_add_ui32(arith.env, gas_used, gas_used, GAS_VERIFY);
    int32_t error_code = CuEVM::gas_cost::has_gas(arith, gas_used);
    if (error_code == ERROR_SUCCESS) {
        bn_t a, b, r;
        error_code |= stack.pop(arith, a);
        error_code |= stack.pop(arith, b);
        cgbn_add(arith.env, r, a, b);
        error_code |= stack.push(arith, r);
    }
    return error_code;
}

These steps effectively translate EVM opcodes into CUDA operations. CuEVM implements EVM functionality on CUDA, and the run_interpreter() method ultimately returns the computation results, including the world state and other information.

At this point, the basic logic of CuEVM’s operation has been explained.

While CuEVM is capable of processing transactions in parallel, its primary purpose (and main use case) is fuzz testing. Fuzzing is an automated software testing technique that feeds large volumes of invalid, unexpected, or random data into a program and observes its behavior, identifying potential bugs and security vulnerabilities.
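The idea can be sketched in a few lines of Go. This is purely illustrative, not CuEVM code: `runToy` is a made-up two-opcode interpreter, and the fuzz loop simply checks that random inputs produce errors rather than panics. Each random input is an independent test case, which is why fuzzing parallelizes so naturally.

```go
package main

import (
	"fmt"
	"math/rand"
)

// runToy interprets a byte string with two opcodes: 0x01 PUSH1 n, 0x02 ADD.
// It returns an error instead of panicking on malformed input — exactly the
// property a fuzzer verifies by throwing random bytes at it.
func runToy(code []byte) (int, error) {
	var stack []int
	for pc := 0; pc < len(code); pc++ {
		switch code[pc] {
		case 0x01: // PUSH1: push the next byte onto the stack
			if pc+1 >= len(code) {
				return 0, fmt.Errorf("truncated PUSH1 at %d", pc)
			}
			pc++
			stack = append(stack, int(code[pc]))
		case 0x02: // ADD: pop two values, push their sum
			if len(stack) < 2 {
				return 0, fmt.Errorf("stack underflow at %d", pc)
			}
			a, b := stack[len(stack)-1], stack[len(stack)-2]
			stack = stack[:len(stack)-2]
			stack = append(stack, a+b)
		default:
			return 0, fmt.Errorf("invalid opcode 0x%02x", code[pc])
		}
	}
	if len(stack) == 0 {
		return 0, nil
	}
	return stack[len(stack)-1], nil
}

func main() {
	// Minimal fuzz loop: every random input is independent of the others,
	// so the whole loop could be spread across threads or GPU lanes.
	rng := rand.New(rand.NewSource(1))
	for i := 0; i < 10000; i++ {
		buf := make([]byte, rng.Intn(16))
		rng.Read(buf)
		runToy(buf) // must never panic, only return errors
	}
	fmt.Println("10000 random inputs survived")
}
```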

Fuzzing is inherently suited for parallel processing. However, CuEVM does not handle transaction conflicts, as this is outside its scope. Integrating CuEVM into a system requires external conflict resolution mechanisms.

As discussed in the previous article, Reddio employs a conflict resolution mechanism to sort transactions before feeding them into CuEVM. Thus, Reddio’s L2 transaction sorting mechanism can be divided into two parts: conflict resolution and CuEVM parallel execution.
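The split into conflict resolution followed by parallel execution can be sketched as follows. This is an illustrative Go sketch, not Reddio's sequencer code; `Tx` and `batchByConflict` are hypothetical names. Transactions are grouped so that no two transactions in a batch touch the same account, and each batch could then be handed to a parallel executor such as CuEVM without state conflicts.

```go
package main

import "fmt"

// Tx is a hypothetical transaction together with the accounts it touches.
type Tx struct {
	ID       int
	Accounts []string
}

// batchByConflict greedily assigns each transaction to the first batch in
// which none of its accounts has been touched yet, opening a new batch when
// no existing one fits. Within a batch, no two transactions conflict.
func batchByConflict(txs []Tx) [][]Tx {
	var batches [][]Tx
	var used []map[string]bool // accounts touched per batch
	for _, tx := range txs {
		placed := false
		for i := range batches {
			conflict := false
			for _, a := range tx.Accounts {
				if used[i][a] {
					conflict = true
					break
				}
			}
			if !conflict {
				batches[i] = append(batches[i], tx)
				for _, a := range tx.Accounts {
					used[i][a] = true
				}
				placed = true
				break
			}
		}
		if !placed {
			m := map[string]bool{}
			for _, a := range tx.Accounts {
				m[a] = true
			}
			batches = append(batches, []Tx{tx})
			used = append(used, m)
		}
	}
	return batches
}

func main() {
	txs := []Tx{
		{1, []string{"alice", "bob"}},
		{2, []string{"carol"}},
		{3, []string{"bob", "dave"}}, // conflicts with tx 1 on "bob"
	}
	for i, batch := range batchByConflict(txs) {
		fmt.Println("batch", i, "->", batch)
	}
}
```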

Layer 2, Parallel EVM, and AI: A Converging Path

Parallel EVM and Layer 2 are only the starting points for Reddio, as its roadmap clearly outlines plans to incorporate AI into its narrative. By leveraging GPU-based high-speed parallel processing, Reddio is inherently well-suited for AI operations:

  1. GPU’s strong parallel processing capabilities make it ideal for convolution operations in deep learning, which are essentially large-scale matrix multiplications, optimized for GPUs.
  2. GPU’s hierarchical thread structure corresponds to the data structures in AI computations, utilizing thread over-provisioning and warp execution units to improve efficiency and hide memory latency.
  3. Computational intensity, a critical metric for AI performance, is enhanced by GPUs through features like Tensor Cores, which improve the efficiency of matrix multiplications, balancing computation and data transmission.
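To see why point 1 holds, note that every element of a matrix product can be computed independently of the others. The Go sketch below is illustrative only: it uses one goroutine per output row as a CPU stand-in for the thousands of GPU threads a real kernel would launch.

```go
package main

import (
	"fmt"
	"sync"
)

// matMul computes C = A×B with one goroutine per output row. Every C[i][j]
// depends only on row i of A and column j of B, which is exactly why matrix
// multiplication (and thus convolution) maps so well onto GPU threads.
func matMul(a, b [][]float64) [][]float64 {
	n, k, m := len(a), len(b), len(b[0])
	c := make([][]float64, n)
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			row := make([]float64, m)
			for j := 0; j < m; j++ {
				for p := 0; p < k; p++ {
					row[j] += a[i][p] * b[p][j]
				}
			}
			c[i] = row // each goroutine writes a distinct row: no contention
		}(i)
	}
	wg.Wait()
	return c
}

func main() {
	a := [][]float64{{1, 2}, {3, 4}}
	b := [][]float64{{5, 6}, {7, 8}}
	fmt.Println(matMul(a, b)) // [[19 22] [43 50]]
}
```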

AI and Layer 2 Integration

In Rollup architectures, the network includes not only sequencers but also roles like validators and forwarders that verify or collect transactions. These roles often use the same client software as sequencers but with different functions. In traditional Rollups, these secondary roles are often passive, defensive, and public-service-oriented, with limited profitability.

Reddio plans to adopt a decentralized sequencer architecture where miners provide GPUs as nodes. The Reddio network could evolve from a pure L2 solution into a hybrid L2+AI network, enabling compelling AI + blockchain use cases:

  1. AI Agent Interaction Network: AI agents, such as those executing financial transactions, can autonomously make complex decisions and execute high-frequency trades. L1 blockchains cannot handle such transaction loads. Reddio’s GPU-accelerated L2 greatly improves transaction parallelism and throughput, supporting the high-frequency demands of AI agents. It reduces latency and ensures smooth network operation.
  2. Decentralized Compute Market: In Reddio’s decentralized sequencer framework, miners compete using GPU resources. The resulting GPU performance levels could support AI training. This market enables individuals and organizations to contribute idle GPU capacity for AI tasks, lowering costs and democratizing AI model development.
  3. On-chain AI Inference: The maturation of open-source models has standardized AI inference services. Using GPUs for efficient inference tasks while balancing privacy, latency, and verification (e.g., via ZKP) aligns well with Reddio’s EVM parallelization capabilities.

Conclusion

Layer 2 solutions, parallel EVM, and AI integration may seem unrelated, but Reddio has ingeniously combined them using GPU computing. By enhancing transaction speed and efficiency on Layer 2, Reddio improves Ethereum’s scalability. Integrating AI opens new possibilities for intelligent blockchain applications, fostering innovation across industries.

Despite its promise, this domain is still in its infancy and requires substantial research and development. Continued iteration, market imagination, and proactive action from pioneers like Reddio will drive the maturity of this innovation. At this critical intersection, Reddio has taken bold steps, and we look forward to more breakthroughs in this space.

]]>
<![CDATA[Guide to Depositing STONE Tokens on Reddio]]>Reddio has now enabled staking deposits for the STONE token, allowing users to earn double the Reddio staking points.

Here’s a step-by-step guide to help you get started:

Step 1: Access the STAKESTONE Staking Page

Go to the STAKESTONE staking page, connect your wallet, and switch to the

]]>
https://blog.reddio.com/guide-to-depositing-stone-tokens-on-reddio/672d921a84b1b70001f70a49Fri, 08 Nov 2024 04:26:42 GMT

Reddio has now enabled staking deposits for the STONE token, allowing users to earn double the Reddio staking points.

Here’s a step-by-step guide to help you get started:

Step 1: Access the STAKESTONE Staking Page

Go to the STAKESTONE staking page, connect your wallet, and switch to the Ethereum network:

https://app.stakestone.io/u/eth/stake


Step 2: Deposit ETH to Convert to STONE

In the DEPOSIT field, enter the amount of ETH you wish to deposit. The current exchange rate is 1 ETH = 1 STONE. For example, if you deposit 0.1 ETH, you will receive 0.1 STONE tokens.

  • Currently, converting ETH to STONE provides an annual yield of 3.08% in ETH. This reward will be distributed when STONE tokens can be exchanged back for ETH in the future.

Step 3: Navigate to Reddio’s Deposit Page

After converting your ETH to STONE, go to the Stone - Fi page on STAKESTONE and directly access Reddio’s deposit page (or use this link: https://points.reddio.com/deposit?invite_code=CTJPM).


Step 4: Select STONE Tokens on Reddio

On Reddio’s DEPOSIT page, click on the dropdown menu next to the ETH option and select STONE tokens.


Step 5: Enter the Amount of STONE to Deposit

After selecting STONE, enter the amount you wish to deposit.

Currently, each 1 STONE token can generate 200 Reddio Staking Points per day, which is double the rate of depositing ETH.
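Using the rates stated in this guide (200 points per STONE per day, double the implied 100 per ETH), a rough points estimate looks like this. `stakingPoints` is a hypothetical helper for illustration, not a Reddio API:

```go
package main

import "fmt"

// Daily Reddio Staking Points per token, taken from the rates in the guide:
// 1 STONE earns 200 points/day, double the rate of ETH (implying 100).
const (
	pointsPerStonePerDay = 200
	pointsPerEthPerDay   = 100
)

// stakingPoints estimates total points for a given amount held over a
// number of days at a flat per-token daily rate.
func stakingPoints(amount float64, perTokenPerDay int, days int) float64 {
	return amount * float64(perTokenPerDay) * float64(days)
}

func main() {
	// e.g. 0.1 STONE held for 30 days
	fmt.Println(stakingPoints(0.1, pointsPerStonePerDay, 30))
}
```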


Step 6: Check Your Deposit on Reddio

Once the deposit is complete, click on your wallet address in the top right corner, then go to PROFILE to check your deposit details.

Your staked STONE tokens will appear as rsvSTONE, allowing you to enjoy dual rewards from both Stakestone and Reddio without risk.

]]>
<![CDATA[Reddio’s Optimization of EVM through Multi-threaded Parallelization]]>The Ethereum Virtual Machine (EVM) is widely recognized as both the "execution engine" and the "smart contract execution environment" of Ethereum, making it one of the blockchain's most critical components. A public blockchain consists of an open network with thousands of nodes, each potentially

]]>
https://blog.reddio.com/reddios-optimization-of-evm-through-multi-threaded-parallelization/67249cbe84b1b70001f70a23Fri, 01 Nov 2024 09:29:28 GMT

The Ethereum Virtual Machine (EVM) is widely recognized as both the "execution engine" and the "smart contract execution environment" of Ethereum, making it one of the blockchain's most critical components. A public blockchain consists of an open network with thousands of nodes, each potentially differing in hardware specifications. To ensure that smart contracts produce identical results across all nodes, achieving "consistency," it is essential to establish a uniform environment across various devices. Virtual machines make this possible.

The EVM enables smart contracts to run uniformly across different operating systems, such as Windows, Linux, and macOS. This cross-platform compatibility ensures that every node, regardless of its system, achieves the same results when executing a contract. A prime example of such technology is the Java Virtual Machine (JVM).


The smart contracts we typically see on block explorers are first compiled into EVM bytecode before being stored on the blockchain. When the EVM executes a contract, it reads the bytecode sequentially. Each instruction in the bytecode (opcode) has an associated gas cost. The EVM tracks the gas consumption of each instruction during execution, with the consumption depending on the complexity of the operation.

Furthermore, as the core execution engine of Ethereum, the EVM processes transactions in a serial manner. All transactions are queued in a single line and executed in a specific order. Parallel processing is not used because the blockchain must strictly maintain consistency. A batch of transactions must be processed in the same order across all nodes. If transactions were processed in parallel, it would be difficult to accurately predict the transaction order unless a corresponding scheduling algorithm is introduced, which would add complexity.


In 2014-15, due to time constraints, the Ethereum founding team chose a serial execution method because it was simple to design and easy to maintain. However, as blockchain technology has evolved and the user base has grown, the demand for higher TPS (transactions per second) and throughput has increased. With the emergence and maturity of Rollup technology, the performance bottleneck caused by EVM's serial execution has become increasingly apparent on Ethereum Layer 2.

As a key component of Layer 2, the Sequencer handles all computation tasks as a single server. If the efficiency of external modules working with the Sequencer is sufficiently high, the final bottleneck will depend on the efficiency of the Sequencer itself. At this point, serial execution becomes a significant obstacle.

The opBNB team once achieved extreme optimization of the DA layer and data read-write modules, allowing the Sequencer to process up to around 2000 ERC-20 transfers per second. While this number may seem high, if the transactions being processed are much more complex than ERC-20 transfers, the TPS value will inevitably decrease. Therefore, parallelization of transaction processing will be an inevitable trend in the future.

Next, we will delve into more specific details to explain the limitations of the traditional EVM and the advantages of a parallel EVM.

Two core components of Ethereum transaction execution

At the code module level, besides the EVM, another core component related to transaction execution in go-ethereum is stateDB, which is used to manage account states and data storage in Ethereum. Ethereum uses a tree structure called Merkle Patricia Trie as the database index (or directory). Each time a transaction is executed by the EVM, certain data stored in stateDB is modified, and these changes are eventually reflected in the Merkle Patricia Trie (hereafter referred to as the global state tree).


Specifically, stateDB is responsible for maintaining the state of all Ethereum accounts, including both EOA (Externally Owned Account) accounts and contract accounts. The data it stores includes account balances, smart contract code, and more. During transaction execution, stateDB performs read and write operations on the relevant account data. After the transaction execution is complete, stateDB needs to submit the new state to the underlying database (such as LevelDB) for persistence.

In summary, the EVM interprets and executes smart contract instructions, altering the blockchain’s state based on the computation results, while stateDB acts as the global state storage, managing all account and contract state changes. Together, they build Ethereum’s transaction execution environment.
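As a simplified illustration of this division of labor, the sketch below commits a toy account state to a single hash. A real client uses a Merkle Patricia Trie rather than a flat hash, but the property shown is the same: executing a transaction changes the state, and committing the state yields a new root.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
)

// stateRoot computes a toy commitment over account balances: sort the keys,
// then hash the concatenated entries. Sorting makes the result deterministic
// regardless of Go's randomized map iteration order.
func stateRoot(state map[string]uint64) [32]byte {
	keys := make([]string, 0, len(state))
	for k := range state {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	h := sha256.New()
	for _, k := range keys {
		fmt.Fprintf(h, "%s=%d;", k, state[k])
	}
	var root [32]byte
	copy(root[:], h.Sum(nil))
	return root
}

func main() {
	state := map[string]uint64{"alice": 100, "bob": 50}
	before := stateRoot(state)
	state["alice"] -= 10 // "execute" a transfer of 10 from alice to bob
	state["bob"] += 10
	after := stateRoot(state)
	fmt.Println(before != after) // any state change produces a new root
}
```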

The Process of Serial Execution

There are two types of transactions in Ethereum: EOA transfers and contract transactions. EOA transfers are the simplest transaction type, which is ETH transfers between ordinary accounts. These transactions do not involve contract calls and are processed very quickly. Due to the simplicity of the operation, the gas fee charged for EOA transfers is very low.

Unlike simple EOA transfers, contract transactions involve calling and executing smart contracts. When processing contract transactions, the EVM interprets and executes each bytecode instruction in the smart contract. The more complex the contract logic, the more instructions are involved and the more resources are consumed.

For example, processing an ERC-20 transfer takes about twice as long as an EOA transfer. For more complex smart contracts, such as transactions on Uniswap, it takes even longer, potentially being several times slower than an EOA transfer. This is because DeFi protocols require handling complex logic such as liquidity pools, price calculations, and token swaps during transactions, requiring very complex calculations.

So, in the serial execution model, how do the EVM and stateDB work together to process transactions?

In Ethereum’s design, transactions within a block are processed sequentially, one by one. Each transaction (tx) has an independent instance used to perform the specific operations of that transaction. Although each transaction uses a different EVM instance, all transactions share the same state database, which is stateDB.

During transaction execution, the EVM continuously interacts with stateDB, reading relevant data from stateDB and writing the modified data back to stateDB.


Let's take a look at how the EVM and stateDB collaborate to execute transactions from a code perspective:

  1. The processBlock() function calls the Process() function to handle the transactions within a block.
// processBlock attempts to process and commit a block into the blockchain.
func (bc *BlockChain) processBlock(block *types.Block) error {
    //...
    // Initialize the state database
    statedb := bc.state.NewBlockchainState()

    // Process the block's transactions and retrieve receipts
    receipts, logs, usedGas, err := bc.processor.Process(block, statedb, bc.vmConfig)
    if err != nil {
        return err
    }

    // Commit state changes and block to the database
    return bc.writeBlockWithState(block, receipts, logs, statedb, true)
    //...
}
  2. A for loop is defined in the Process() function, where transactions are executed one by one.
func (p *StateProcessor) Process(block *types.Block, statedb *state.StateDB, cfg vm.Config) (*ProcessResult, error) {
    // Iterate over and process the individual transactions
    for i, tx := range block.Transactions() {
        msg, err := TransactionToMessage(tx, signer, header.BaseFee)
        if err != nil {
            return nil, fmt.Errorf("could not apply tx %d [%v]: %w", i, tx.Hash().Hex(), err)
        }

        statedb.SetTxContext(tx.Hash(), i)

        receipt, err := ApplyTransactionWithEVM(msg, p.config, gp, statedb, blockNumber, blockHash, tx, usedGas, vmenv)
        if err != nil {
            return nil, fmt.Errorf("could not apply tx %d [%v]: %w", i, tx.Hash().Hex(), err)
        }

        receipts = append(receipts, receipt)
        allLogs = append(allLogs, receipt.Logs...)
    }

    //...
}

  3. After all transactions are processed, the processBlock() function calls the writeBlockWithState() function, which then calls the statedb.Commit() function to commit the state changes.
// writeBlockWithState writes a block's state to the chain database.
func (bc *BlockChain) writeBlockWithState(block *types.Block, receipts types.Receipts, logs []*types.Log, statedb *state.StateDB, cfg vm.Config) error {
    //...

    // Commit the state changes to the blockchain
    root, err := statedb.Commit(true)
    if err != nil {
        return err
    }

    // Write the block and state root to the chain database
    if err := bc.db.WriteBlockWithState(block, root, receipts, logs, statedb.IntermediateRoot(false).Bytes()); err != nil {
        return err
    }
    if emitHeadEvent {
        // Emit a head event if this is the canonical head
        bc.chainHeadFeed.Send(events.ChainHeadEvent{Block: block})
    }

    return nil
    //...
}

Once all transactions in a block have been executed, the data in stateDB is committed to the global state tree (Merkle Patricia Trie) mentioned earlier, generating a new state root (stateRoot). The state root is an important parameter in each block, recording the "compressed result" of the new global state after the block execution.

It’s easy to understand the bottleneck of the EVM's serial execution model: transactions must be queued and executed sequentially. If there is a time-consuming smart contract transaction, other transactions can only wait until it is completed, which clearly cannot fully utilize CPU and other hardware resources, significantly limiting efficiency.

Multi-threaded Parallel Optimization of EVM

To compare serial execution and parallel execution with a real-life example, the former is like a bank with only one counter, while parallel EVM is like a bank with multiple counters. In a parallel mode, multiple threads can be started to process multiple transactions simultaneously, resulting in several times the efficiency improvement. However, the tricky part lies in the issue of state conflicts.

If multiple transactions attempt to modify data for the same account and are processed simultaneously, conflicts may arise. For example, if only one NFT can be minted and both Transaction 1 and Transaction 2 request to mint it, fulfilling both requests would obviously result in an error. Such situations require coordination. In practice, state conflicts occur more frequently than the example mentioned, so if transaction processing is to be parallelized, measures to handle state conflicts are essential.

Reddio's Parallel Optimization of EVM

As a ZKRollup project for EVM, Reddio’s parallel optimization approach involves allocating one transaction to each thread and providing a temporary state database in each thread, called pending-stateDB. The specific details are as follows:

  1. Multi-threaded Parallel Transaction Execution: Reddio sets up multiple threads to process different transactions simultaneously, with each thread operating independently. This approach can enhance transaction processing speed several times over.
  2. Allocating a Temporary State Database for Each Thread: Reddio assigns each thread an independent temporary state database (pending-stateDB). During transaction execution, each thread does not directly modify the global stateDB but instead temporarily records state changes in the pending-stateDB.
  3. Synchronizing State Changes: Once all transactions within a block have been executed, the EVM sequentially synchronizes the state changes recorded in each pending-stateDB to the global stateDB. If no state conflicts occurred during transaction execution, the records in the pending-stateDB can be successfully merged into the global stateDB.

We optimized the handling of read and write operations to ensure that transactions can correctly access state data and avoid conflicts.

Write Operations: All write operations (i.e., state modifications) are not directly written to the global stateDB but are first recorded in the WriteSet of the pending-state. After transaction execution is completed, the state changes are then merged into the global stateDB through conflict detection.


Read Operations: When a transaction needs to read a state, the EVM first checks the ReadSet of the pending-state. If the ReadSet indicates that the required data is available, the EVM reads it directly from the pending-stateDB. If the corresponding key-value pair is not found in the ReadSet, it retrieves the historical state data from the global stateDB of the previous block.
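A minimal Go sketch of this read path, with illustrative names rather than Reddio's actual types: a read is served from the transaction's own buffered state when possible, and otherwise falls back to the global stateDB of the previous block, recording the value in the ReadSet.

```go
package main

import "fmt"

// pendingState buffers one transaction's reads and writes. Reads fall back
// to the previous block's global state when the key has not been seen yet.
type pendingState struct {
	global   map[string]int // state after the previous block (read-only here)
	readSet  map[string]int // values observed from the global state
	writeSet map[string]int // values this transaction has written
}

func (p *pendingState) Read(key string) int {
	if v, ok := p.writeSet[key]; ok { // our own earlier write wins
		return v
	}
	if v, ok := p.readSet[key]; ok { // already read once in this tx
		return v
	}
	v := p.global[key] // fall back to the global stateDB
	p.readSet[key] = v
	return v
}

func (p *pendingState) Write(key string, v int) {
	p.writeSet[key] = v // never touches the global stateDB directly
}

func main() {
	p := &pendingState{
		global:   map[string]int{"alice": 100},
		readSet:  map[string]int{},
		writeSet: map[string]int{},
	}
	fmt.Println(p.Read("alice")) // served from the global state
	p.Write("alice", 90)
	fmt.Println(p.Read("alice")) // now served from the WriteSet
}
```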


The key issue in parallel execution is state conflicts, which become particularly prominent when multiple transactions attempt to read or write the state of the same account. To address this, Reddio introduces a conflict detection mechanism:

  • Conflict Detection: During transaction execution, the EVM monitors the ReadSet and WriteSet of different transactions. If it detects that multiple transactions are attempting to read or write the same state item, it considers this a conflict.
  • Conflict Handling: When a conflict is detected, the conflicting transactions are marked for re-execution. To prevent repeated conflicts and indefinite re-execution, transactions are re-queued with adjusted priority or in a sequence that reduces the likelihood of recurring conflicts. Additionally, conflict resolution mechanisms—such as lock-based access controls or transaction isolation strategies—are implemented.

After all transactions are completed, the change records from multiple pending-stateDBs are merged into the global stateDB. If the merge is successful, the EVM commits the final state to the global state tree and generates a new state root. The performance improvement from multi-threaded parallel optimization is evident, especially when handling complex smart contract transactions.
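The merge step can be sketched as follows, again with illustrative names and assuming the per-transaction ReadSets and WriteSets described above: WriteSets are applied in transaction order, and any transaction whose ReadSet or WriteSet overlaps a key already written within the batch is marked for re-execution.

```go
package main

import "fmt"

// txResult holds the ReadSet and WriteSet one thread recorded while
// executing a single transaction against its pending-stateDB.
type txResult struct {
	id       int
	readSet  map[string]bool
	writeSet map[string]int
}

// mergeResults applies each transaction's WriteSet to the global state in
// order, skipping (and returning for re-execution) any transaction that
// read or wrote a key an earlier transaction in this batch already wrote.
func mergeResults(global map[string]int, results []txResult) (retry []int) {
	written := map[string]bool{} // keys committed so far in this batch
	for _, r := range results {
		conflict := false
		for k := range r.readSet {
			if written[k] {
				conflict = true
			}
		}
		for k := range r.writeSet {
			if written[k] {
				conflict = true
			}
		}
		if conflict {
			retry = append(retry, r.id)
			continue
		}
		for k, v := range r.writeSet {
			global[k] = v
			written[k] = true
		}
	}
	return retry
}

func main() {
	global := map[string]int{"nft": 0}
	results := []txResult{
		{1, map[string]bool{"nft": true}, map[string]int{"nft": 1}},
		{2, map[string]bool{"nft": true}, map[string]int{"nft": 2}}, // stale read
	}
	fmt.Println(mergeResults(global, results)) // tx 2 must re-execute
	fmt.Println(global["nft"])                 // only tx 1's write landed
}
```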


According to research on parallel EVM, in low-conflict workloads (where there are fewer conflicting transactions or transactions occupying the same resources in the transaction pool), benchmarked TPS shows an improvement of about 3–5 times compared to traditional serial execution. In high-conflict workloads, theoretically, if all optimization methods are applied, the difference in improvement can even reach up to 60 times.

Summary

Reddio’s multi-threaded parallel optimization scheme for the EVM significantly enhances transaction processing capacity by allocating a temporary state database for each transaction and executing transactions in parallel across different threads. By optimizing read and write operations and introducing conflict detection, EVM-based blockchains can achieve large-scale parallelization of transactions while ensuring state consistency, addressing the performance bottlenecks of traditional serial execution. This lays an important foundation for the future development of Ethereum Rollups.

]]>
<![CDATA[Reddio Integrates World ID, Enhances User Authentication and Security for Points System]]>August, 2024 – Reddio, the most performant parallel zkEVM Layer 2, has seamlessly integrated World ID, a digital passport embedded in the Worldcoin network, into Reddio’s points system.

As AI continues to advance, differentiating people from bots across the internet becomes increasingly critical. By integrating with World ID

]]>
https://blog.reddio.com/reddio-integrates-world-id-enhances-user-authentication-and-security-for-points-system/66ba129df970ca0001e544faMon, 12 Aug 2024 13:52:19 GMT

August, 2024 – Reddio, the most performant parallel zkEVM Layer 2, has seamlessly integrated World ID, a digital passport embedded in the Worldcoin network, into Reddio’s points system.

As AI continues to advance, differentiating people from bots across the internet becomes increasingly critical. By integrating with World ID through the developer API, Reddio enhances user authentication for its points system. World ID helps ensure the validity of each user account, enabling fair participation in the points system and improving the efficiency of point transactions within the Reddio community.

Enhanced Security and Sybil Resistance: By leveraging World ID’s trailblazing, permissionless proof-of-personhood (PoP) protocol, Reddio aims to ensure that only humans, not AI, can participate in the Reddio points system. This blocks malicious bots and strengthens resistance against Sybil attacks, improving overall security for all users.

Fair and Equitable Ecosystem: The integration helps differentiate between bots and humans, fostering a fairer ecosystem where genuine users are rewarded. This equitable approach promotes a more authentic and engaged community within Reddio.

Privacy Protection and User Autonomy: World ID employs Zero-Knowledge Proofs (ZKP) to safeguard user credentials. Users maintain exclusive control over their World IDs through their devices, preserving autonomy and trust.

“Worldcoin's Proof-of-Personhood protocol aligns perfectly with our mission to make Web3 more accessible yet more secure, promoting fairness within the community,” said Neil Han, CEO of Reddio. Reddio’s integration with World ID reinforces privacy, security, and fairness in the Web3 landscape.

To celebrate this collaboration, Reddio is offering all World ID users an exclusive +10% reward bonus. Just link your World ID with your Reddio account in the points system on the reddio.com website today to receive this benefit. Join Reddio today to experience the groundbreaking Layer 2 network with its own parallel zkEVM and enjoy these exclusive rewards.

What is World ID?

World ID is a secure, permissionless protocol that empowers millions of individuals to prove they’re unique and human online while preserving their privacy. It doesn’t require personal information such as a name, phone number or email. With World ID, individuals can sign in to web, mobile and decentralized applications, and privately confirm their humanness — verified by the Orb — to obtain a high level of assurance.

About Reddio

Reddio is a high performance parallel Ethereum-compatible Layer 2, leveraging zero-knowledge technology to achieve unrivaled computation scale with Ethereum-level security, backed by Paradigm and Arena.

Website:https://reddio.com/

Twitter:https://x.com/reddio_com

Discord:https://discord.com/invite/reddio

]]>
<![CDATA[Argent X vs Braavos - Which Starknet Wallet is Right for You?]]>

Argent X and Braavos are the two most popular wallets in the Starknet ecosystem.

While it can be tough to choose a favorite, we have listed some differences below in case it can help you to decide.

* The number of accounts was accessed through Voyager https://voyager.online/analytics?page=
]]>
https://blog.reddio.com/argent-x-vs-braavos/664b027af970ca0001e52601Wed, 26 Jun 2024 03:44:48 GMT

Argent X and Braavos are the two most popular wallets in the Starknet ecosystem.

While it can be tough to choose a favorite, we have listed some differences below in case it can help you to decide.

Argent X vs Braavos - Which Starknet Wallet is Right for You?
* The number of accounts was accessed through Voyager https://voyager.online/analytics?page=accounts

Based on the above chart, we can see that Argent entered the market much earlier than Braavos. Both wallets continue to gain users since their launch and we expect this trend to continue as Starknet grows in popularity.

Both wallets are cross-platform and provide similar offerings. In all honesty, the functional differences are minor. If you feel more assured by wallets that have been audited by third parties like Consensys, you may want to consider Argent. If you are interested in a newer wallet with an active Discord server (206k members), you may want to consider Braavos.

At the end of the day, it might just come down to which UX you prefer.

However, it is worth mentioning that some long-time users of the Argent X wallet have experienced access issues. When Argent upgraded from Cairo 0 to Cairo 1.0, some users found themselves unable to access their funds. Argent does have an active Discord server but unfortunately, the support there has been lacking and the access issues have not been resolved for some users.

We know first-hand as some of our team members are long-time users and are still unable to access their funds.

Newer accounts on Argent do not seem to have this issue, but it is worth bearing in mind how the company chooses to address (or not address) users' troubles and the level of support it is generally willing to provide.

With that being said, both Argent X and Braavos are the two most popular Starknet wallets for the reasons given above and you can't go too wrong with choosing either one of them.

If you're still unsure at this point, why not just try both?

As a provider of a developer-friendly SDK, let us know which wallet you prefer and the features you like and/or would like to see so that we may consider adding those functionalities when developing tools for our developers.

]]>
<![CDATA[Open Sourcing Itachi – The Future of Decentralized Sequencer in Blockchain Technology]]>Reddio, a public blockchain backed by investments from Tiger Cub Fund, Arena Holdings, and Paradigm, announces that its pioneering Layer 2 (L2) and Layer 3 (L3) application chain sequencer framework, Itachi, is now officially open source. This key milestone is part of Reddio team's ongoing commitment to transparency

]]>
https://blog.reddio.com/introducing-itachi-the-future-of-decentralized-sequencr-in-blockchain-technology/662f63b2f970ca0001e525a8Mon, 29 Apr 2024 09:14:05 GMT

Reddio, a public blockchain backed by investments from Tiger Cub Fund, Arena Holdings, and Paradigm, announces that its pioneering Layer 2 (L2) and Layer 3 (L3) application chain sequencer framework, Itachi, is now officially open source. This key milestone is part of Reddio team's ongoing commitment to transparency and community-driven innovation.

Itachi: A Developer Friendly Framework for L2&L3 Development

Developed on the modular Yu framework and written in Golang by the Reddio team, Itachi offers unparalleled flexibility and customization for developers. This open-source sequencer is a pivotal advancement in blockchain technology, offering robust scaling solutions for decentralized application deployment.

Endorsement from Industry Leader

The framework has received recognition from numerous crypto OGs, including Eli Ben-Sasson, CEO of StarkWare, who praised the Itachi framework: 'It’s great to see community-driven efforts to drive Starknet's path towards decentralization.'

Key Features and Innovations

  • Modular Customization: Itachi simplifies blockchain development, akin to building a web service, allowing easy integration of various modules such as multiple virtual machines (VMs) and Data Availability (DA) layers.
  • Multiple VM and DA Support: Itachi supports CairoVM out of the box, and is designed to support a variety of VMs, including but not limited to EVM, Solana Virtual Machine (SVM), zkWASM, RISC0, MoveVM, and parallel EVMs without resource conflicts. It supports integration with multiple DA layers like Ethereum, Avail, Celestia, and more.
  • Advanced Prover and Anti-MEV Capabilities: Itachi aggregates multiple proving systems and coordinates zk proving tasks, enhancing security and offering specialized anti-Maximal Extractable Value (MEV) features to protect all DeFi dApp users on the platform.
  • Special L2 Consensus: Itachi’s unique L2 consensus protocol is designed for high throughput and inherits the security and permissionless nature of L1, setting new standards in blockchain operations. Meanwhile, Itachi supports integration with mainstream consensus protocols like PoW, PoS, PBFT, dBFT, HotStuff, etc. Developers can also customize their own consensus protocols for higher performance and deeper customization.
  • High Performance: Itachi excels in performance, delivering high transaction throughput (TPS) data under various testing conditions and hardware setups.
  • Layer 3 Appchain Compatibility: For projects with specific needs that L2 cannot satisfy, Itachi facilitates the development of custom L3 Appchains, enhancing real-time performance and throughput as required by applications like RTT games.
  • Interoperability Across dApps: Itachi ensures low-latency and reduced gas fees for cross-dApp interactions, enabling transactions such as a DeFi dApp triggering an action in a full-chain game, exemplifying seamless interoperability on the Itachi platform.

Community and Developer Engagement

Reddio actively engages with developers and has developed Itachi based on their input. Currently, gaming companies such as Boyaa, XAR Labs, Mississippi, Metascan, Mizu, Crystal Fun, TG.Bet, DEX companies like EdgeX, SphereX, etc. are testing Itachi for their Appchain launch and exploring potential partnerships.

The founder of Boyaa shared, "I have been thoroughly impressed with the architecture and capabilities of Itachi during our ongoing evaluation. It’s clear that Itachi’s framework could significantly enhance our blockchain infrastructure. The level of detail and thought put into Itachi’s design by the Reddio team aligns well with our needs for performance, transparency and security. We look forward to potentially integrating this technology and exploring how it can further our ambitions in the blockchain space.”

The source code for Itachi is now publicly available. We encourage developers around the world to contribute and extend its capabilities. With this open-source initiative, Itachi is poised to foster innovation and drive further advancements in the blockchain space.

Looking Ahead

We invite developers and blockchain enthusiasts to explore Itachi’s capabilities and propel its growth. Information on getting started with Itachi can be found in our developer guide.

For more information about Itachi and to join our community, please visit Reddio's official website or follow us on Twitter.

About Reddio:

Reddio is a one-stop solution to scale up Ethereum. Reddio provides a variety of amazing tools, such as a StarkEx-powered Layer 2, a zkVM Layer 2 and a high performance decentralized sequencer Itachi, just to name a few, to empower developers to take blockchain development to the next level.

Backed by Tiger Cub Fund, Arena Holdings, and Paradigm, Reddio simplifies convoluted blockchain development and maintains a robust blockchain infrastructure so customers can easily ramp up deployment. Reddio products are highly scalable, robust, and reliable.

]]>
<![CDATA[Introducing Itachi: A Fully Decentralized Modular Sequencer for Appchain, Starting from Starknet]]>"Itachi" introduces a revolutionary, fully decentralized sequencer framework tailored for Layer 2 (L2)/L3 Appchains, kicking off with Starknet integration. Itachi is built on the Yu framework, which was developed by the Reddio team in Golang. Golang is known for its modularity and customisation capabilities and so the

]]>
https://blog.reddio.com/decentralized-modular-sequencer/660d2857f970ca0001e5257dWed, 03 Apr 2024 10:01:31 GMT

"Itachi" introduces a revolutionary, fully decentralized sequencer framework tailored for Layer 2 (L2)/L3 Appchains, kicking off with Starknet integration. Itachi is built on the Yu framework, which was developed by the Reddio team in Golang. Golang is known for its modularity and customisation capabilities and so the Yu framework is more developer-friendly than Substrate and/or Cosmos SDK. Itachi simplifies the transaction process for users, ensuring legality and preparing transactions for consensus and execution, including checks for signature validity and preventing replay attacks.

Central to Itachi's functionality is its consensus mechanism, periodically generating L2/L3 blocks and executing them via the CairoVM to ensure seamless synchronisation with Cairo-state. This process underscores Itachi's efficiency in handling transactions, storing block data, and interfacing with Layer 1 through proof generation and Data Availability (DA) data construction.

Looking ahead, the release of Itachi's source code will empower developers to expand its capabilities, potentially integrating EVM/ZK modules for a ZK Layer 2/Layer 3 solution or BitVM/ZK modules for a BTC Layer 2 solution. Additionally, Itachi powers Reddio's own Starknet-compatible zkVM Layer 2, which offers high-performance capabilities and allows seamless deployment of Starknet smart contracts to Reddio's zkVM Layer 2 without modifications.

Now, let's dive deep into how Itachi did that.

Composition of Sequencers

Overview

The sequencer plays a crucial role in Layer 2 (L2), particularly within the ZK Rollup architecture. The core function of a sequencer is to be responsible for ordering transactions, executing them, delegating proof generation to a prover, and then sending the proof and data back to Layer 1 (L1).

Introducing Itachi: A Fully Decentralized Modular Sequencer for Appchain, Starting from Starknet


The sequencer is essentially a blockchain with special functionalities. Beyond the essential components required by traditional blockchains, they also need capabilities for interfacing with ZK provers and for connecting with L1 Data Availability (DA). For ecosystems like Starknet, compatibility with Cairo contracts through a CairoVM is necessary. To implement additional distinctive features, such as customized consensus or transaction packaging methods, more modules are required. Often, these functional modules cannot be accomplished solely through smart contracts.

Introducing Itachi: A Fully Decentralized Modular Sequencer for Appchain, Starting from Starknet

Due to the high cost of developing a new blockchain from scratch, many Rollup sequencers opt to fork the source code of go-ethereum and make various degrees of modifications or encapsulations to reuse the underlying components of go-ethereum as much as possible. However, for Madara and Itachi, we utilize a blockchain framework to complete this task. The advantage is that it allows for decoupling between some of the blockchain's underlying components and the core functionalities that need to be added to the sequencer. This way, should there be a need to modify or add functionalities, development iterations can be carried out at a lower cost.


Unlike Madara, which uses Substrate, Itachi uses the Yu blockchain framework, a highly customizable, Layer-2-native modular blockchain framework developed in Golang by the Reddio team. This framework offers developers a Web-API-like development experience, making it easier for developers to get started in blockchain development.

Let's take a look at the operational mechanics and processes of a sequencer compatible with Cairo contracts from a blockchain perspective:

Introducing Itachi: A Fully Decentralized Modular Sequencer for Appchain, Starting from Starknet

For User

  1. Users submit transactions to the sequencer/Itachi via RPC. Transactions can also be received through broadcasts from other nodes in the P2P network.
  2. After receiving a transaction through RPC, the sequencer first conducts a legality check and preprocessing on the transaction, such as:
  • Checking if the signature is valid
  • Verifying whether the transaction data size is too large
  • Ensuring there are no duplicate transactions (to prevent replay attacks)

  3. Once the transaction data passes these checks, it is placed into the txpool (transaction pool) and simultaneously broadcast to other nodes in the public network via the P2P network.

Consensus

  1. Through the consensus system, the sequencer periodically generates L2 blocks. It batches a certain number of transactions from the txpool into a block and broadcasts this block to other nodes via the P2P network.
  2. Next, the block with packaged transactions is handed over to the CairoVM. The CairoVM executes the transactions in the block in sequence and synchronizes their execution state to the Cairo-state.
  3. After execution by the CairoVM, the returned data such as actualFees and traces, and partial data like the state-root are first constructed and filled into the block. Once the complete block is constructed, it is added to the end of the blockchain for storage.
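The three consensus steps above can be condensed into a sketch of the block-production loop. All names here are illustrative stand-ins, not Itachi's real interfaces; the CairoVM execution is mocked:

```go
package main

import "fmt"

// Block holds the batched transactions plus fields filled in after
// execution, such as the state root.
type Block struct {
	Number uint64
	Txs    []string
	Root   string
}

type sequencer struct {
	txpool []string
	chain  []Block
}

// produceBlock batches the pending transactions into a block, "executes"
// them to derive a state root (a real sequencer hands the batch to the
// CairoVM and synchronizes the Cairo-state), completes the block, and
// appends it to the chain for storage.
func (s *sequencer) produceBlock() Block {
	b := Block{Number: uint64(len(s.chain) + 1), Txs: s.txpool}
	s.txpool = nil
	b.Root = fmt.Sprintf("root-after-%d-txs", len(b.Txs)) // execution stand-in
	s.chain = append(s.chain, b)
	return b
}

func main() {
	s := &sequencer{txpool: []string{"invoke", "declare"}}
	b := s.produceBlock()
	fmt.Println(b.Number, b.Root) // 1 root-after-2-txs
}
```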

For Layer 1

  1. Based on the executed block and the state difference (stateDiff), Data Availability (DA) data is constructed and sent to the DA layer.
  2. Whenever a new block is executed, it is sent to an external prover. The prover generates a zk-proof and uploads it to Ethereum Layer 1 (ETH L1) for verification.

How We Build the Sequencer

Now, let's take a look at the process and components of building a sequencer with the Yu framework.

Introducing Itachi: A Fully Decentralized Modular Sequencer for Appchain, Starting from Starknet
  1. When a transaction is initiated from the client to the blockchain, it first undergoes a check by the txpool. Only after passing this check is it entered into the transaction pool and then broadcast to other blockchain nodes. Transactions broadcast from other nodes received via the P2P network are checked and, if approved, placed into the txpool (without being broadcast further).
  2. The process of land running begins with the generation of blocks, which involves a series of procedures: mining and creating blocks, broadcasting, validating blocks from other nodes, executing transactions within the blocks, and storing the blocks in the chain, among others. There is a great deal of flexibility in this process; you can implement any consensus protocol you prefer, set block times, decide how transactions are packaged, and choose when transactions are executed, etc. Interactions with the blockchain, txdb and state also occur during this process.

After each block passes through the logic of all custom tripods within the land, the next block is generated and processed, and the cycle continues.

Introducing Itachi: A Fully Decentralized Modular Sequencer for Appchain, Starting from Starknet
Internal Process Diagram of the Land

Next, let's take a look at the component composition of Yu:

Introducing Itachi: A Fully Decentralized Modular Sequencer for Appchain, Starting from Starknet

Core Components

The following are interfaces: blockchain, txdb, and txpool, all with built-in default implementations. Developers can reimplement these interfaces if they have special requirements.

  • blockchain: This is the chain structure responsible for storing block data as well as organizing the chain's structure and fork logic.
  • txdb: Yu's database, which stores all the specific transaction data from blocks, receipts produced by executed transactions, etc.
  • txpool: The transaction pool, responsible for verifying and caching transactions received from external sources.
  • state: Stores the state, holding the state after each transaction is executed.
  • tripod: The fundamental minimum unit for running the blockchain and allowing developers to customize logic. Developers can customize multiple tripods, arrange them in sequence, and load them into the land for the framework to call. This is the core of the entire framework.

Within the tripod, the two most crucial functions are writing and reading:

  • writing is designed for developers to customize and implement freely. It will be subjected to consensus and execution by all nodes in the network. For instance, in Starknet, the four types of transactions - declare, deployAccount, invoke, L1Handle - require customized writing for implementation.
  • reading is also for developers to freely customize and implement. However, it will not undergo consensus across the network and will only be executed on the local node. For example, in Starknet, the operation of call transactions needs to be implemented through customized reading.

Underlying Components

  • storage: This is the storage layer, supporting various forms of storage such as KV, SQL, and FS, all of which are interfaces. Developers can specify the required storage engine (for example, KV currently has pebble and boltdb as storage engines). Currently, the storage within the state is implemented using pebble, while the storage inside the blockchain and txdb is implemented using sqlite.
  • p2p: This is the peer-to-peer network used for discovering nodes within the network and for the propagation of transactions and blocks.
  • crypto(keypair): This is the component for asymmetric encryption algorithms for public and private keys, currently supporting sr25519 and ed25519. This component is in the form of an interface, allowing for the extension of other encryption algorithms.

In summary, Yu has already provided developers with some of the essential components required for a blockchain. Based on this, we only need to implement some modules that are unique to the sequencer.

For a Starknet sequencer, it is essential for the sequencer to be able to execute Cairo contracts. The execution of Cairo contracts requires the involvement of CairoVM and Cairo-state. Therefore, we need to integrate the sequencer with CairoVM and Cairo-state. To achieve this, we can use Yu to develop a tripod named "Cairo," enabling it to call CairoVM to execute contracts.

Additionally, we use the Proof of Authority (POA) tripod as the consensus mechanism for our sequencer at this stage. It's important to note that POA is a default implementation provided by Yu, so we don't need to start from scratch; we can simply reference it. However, POA is intended as a transitional measure towards a decentralized network, and we plan to develop a new L2 consensus to replace POA in the future.

Let's take a closer look at the core logic of the "Cairo" tripod, with some details omitted for brevity:


type Cairo struct {
    *tripod.Tripod
    cairoVM    vm.VM
    cairoState *CairoState
}

func NewCairo(cairoVM vm.VM, cairoState *CairoState) *Cairo {
    cairo := &Cairo{
        Tripod:     tripod.NewTripod(),
        cairoVM:    cairoVM,
        cairoState: cairoState,
    }

    cairo.SetWritings(cairo.ExecuteTxn)
    return cairo
}

func (c *Cairo) ExecuteTxn(ctx *context.WriteContext) error {
    tx := new(TxRequest)
    if err := ctx.BindJson(tx); err != nil {
        return err
    }
    receipt, err := c.cairoVM.Execute(c.cairoState, tx)
    if err != nil {
        return err
    }
    ctx.EmitExtra(receipt)
    return nil
}

Next, import both the "Cairo" and "POA" tripods into Yu's startup function:

func main() {
    poaTripod := poa.NewPoa()
    cairoTripod := cairo.NewCairo(cairoVM, cairoState)
    startup.DefaultStartup(poaTripod, cairoTripod)
}

With that, a sequencer compatible with Cairo contracts is easily completed. The overview diagram would roughly look like this:

Introducing Itachi: A Fully Decentralized Modular Sequencer for Appchain, Starting from Starknet

What's Next

We are publishing the source code in two weeks' time. By then, developers will be able to try it out and start adding more modules to the sequencer, such as EVM/ZK to wrap it up as a ZK Layer 2, or BitVM/ZK for a BTC Layer 2.

If an Appchain is not your thing, we will be launching our zkVM Layer 2 powered by our Itachi sequencer, which will allow you to deploy your Starknet smart contracts smoothly to Reddio's zkVM Layer 2 without any additional modification. Come back and try our high-performance zkVM Layer 2 powered by Itachi!

Follow us on Twitter to stay tuned for all the updates.

]]>
<![CDATA[Optimizing the timeout issue in Geth when querying large-span sparse logs through KV.]]>Before we begin, let's take a look at a request sent to Geth:

{
  "method": "eth_getLogs",
  "params": [
    {
      "fromBlock": "0x0",
      "toBlock": "0x120dc53",
      "address": "0xb62bcd40a24985f560b5a9745d478791d8f1945c",
      "topics": [
        [
          "0xcfb473e6c03f9a29ddaf990e736fa3de5188a0bd85d684f5b6e164ebfbfff5d2"
        ]
      ]
    }
  ],
  "
]]>
https://blog.reddio.com/optimizing-the-timeout-issue-in-geth-when-querying-large-span-sparse-logs-through-kv/65f11d9cf970ca0001e52561Wed, 13 Mar 2024 03:41:21 GMT

Before we begin, let's take a look at a request sent to Geth:

{
  "method": "eth_getLogs",
  "params": [
    {
      "fromBlock": "0x0",
      "toBlock": "0x120dc53",
      "address": "0xb62bcd40a24985f560b5a9745d478791d8f1945c",
      "topics": [
        [
          "0xcfb473e6c03f9a29ddaf990e736fa3de5188a0bd85d684f5b6e164ebfbfff5d2"
        ]
      ]
    }
  ],
  "id": 62,
  "jsonrpc": "2.0"
}

The semantics of this request are clear: it searches for logs with the topic "0xcfb473e6c03f9a29ddaf990e736fa3de5188a0bd85d684f5b6e164ebfbfff5d2" on the address "0xb62bcd40a24985f560b5a9745d478791d8f1945c" from block 0x0 to block 0x120dc53 (decimal 18930771) in the entire Ethereum blockchain.

However, when we send this request to our own ETH node, we experience a 30-second wait time followed by a timeout response:

{
    "jsonrpc": "2.0",
    "id": 62,
    "error": {
        "code": -32002,
        "message": "request timed out"
    }
}

Why is this happening?

To understand the reason behind this, we need to start with the working mechanism of Geth for the eth_getLogs query.

Bloom Filter

In Ethereum, there is a data structure called the Bloom filter.

Optimizing the timeout issue in Geth when querying large-span sparse logs through KV.
  • Bloom Filter is a probabilistic data structure used to quickly determine whether an element belongs to a set. It is often used to efficiently check if an element exists in a large dataset without actually storing the entire dataset.
  • The core idea of a Bloom Filter is to use multiple hash functions and a bit array to represent elements in the set. When an element is added to the Bloom Filter, its hash values from multiple hash functions are mapped to corresponding positions in the bit array, and these positions are set to 1. When checking if an element exists in the Bloom Filter, the hash values of the element are again mapped to the corresponding positions in the bit array, and if all positions are 1, it is considered that the element may exist in the set, although there may be a certain probability of false positives.
  • The advantages of Bloom Filters are that they occupy relatively small space and provide very fast membership tests. They are suitable for scenarios where a certain false positive rate can be tolerated, such as fast retrieval or filtering in large-scale datasets.

In Ethereum, the Bloom filter for each block is calculated based on the transaction logs in the block.

In each Ethereum block, transaction logs are an important part where contract events and state changes are stored. The Bloom filter is constructed by processing the data in the transaction logs. Specifically, for each transaction log, it extracts key information such as the contract address and log data. Then, using hash functions, it converts this information into a series of bit array indices and sets the corresponding positions in the bit array to 1.

In this way, each block has a Bloom filter that contains the key information from the transaction logs. When querying for a specific contract address or other information related to the transaction logs in a block, the Bloom filter can be used to quickly determine whether the relevant information exists. This query method improves efficiency by avoiding direct access and processing of a large amount of transaction log data.
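To make the mapping concrete, here is a stdlib-only Go sketch of a 2048-bit log bloom in the style of Ethereum's logsBloom: three bits are set per element, each index taken from the low 11 bits of a byte pair of the element's hash. Note that real Ethereum derives the indices from a Keccak-256 hash; this sketch substitutes SHA-256 so it runs without external packages:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// Bloom is a 2048-bit (256-byte) filter, like a block's logsBloom.
type Bloom [256]byte

// add sets three bits derived from the first three byte pairs of the
// element's hash (each pair masked to 11 bits, i.e. an index mod 2048).
func (b *Bloom) add(data []byte) {
	h := sha256.Sum256(data)
	for i := 0; i < 6; i += 2 {
		bit := binary.BigEndian.Uint16(h[i:i+2]) & 0x7ff
		b[255-bit/8] |= 1 << (bit % 8)
	}
}

// mayContain reports whether all three bits for the element are set.
// A false result is definitive; a true result may be a false positive.
func (b *Bloom) mayContain(data []byte) bool {
	h := sha256.Sum256(data)
	for i := 0; i < 6; i += 2 {
		bit := binary.BigEndian.Uint16(h[i:i+2]) & 0x7ff
		if b[255-bit/8]&(1<<(bit%8)) == 0 {
			return false
		}
	}
	return true
}

func main() {
	var bloom Bloom
	bloom.add([]byte("some-log-topic"))
	fmt.Println(bloom.mayContain([]byte("some-log-topic"))) // true
	fmt.Println(bloom.mayContain([]byte("another-topic")))  // false, barring a false positive
}
```

With only three bits set per element in 2048 bits, a single-topic lookup is cheap, but across millions of blocks the accumulated false positives are exactly what makes wide-range eth_getLogs queries slow.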

We can see the Bloom value of a specific block by sending the following request:

{"method":"eth_getBlockByNumber","params":["latest",false],"id":1,"jsonrpc":"2.0"}

The response is as follows:

{
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "baseFeePerGas": "0xbf7b14fbe",
...
        "hash": "0x86f79ed4401eb79c899c3029c54c679fd91f22c6b81a651c78c0f664b1316ce6",
        "logsBloom": "0xf4b37367e1d1dd2f9fb1e3b298b7fe61e7f40b0dbc71fcf4af5b1037f67238294d3257ffd35b2f3dcde1db20fb77139edb3f086cff3a79bda56575baac7ead457a4ef95c7fc7bf7afec2e00fbeaae6ff5daa5a9d8b5698ce5bfdf66ac8741c3e9e4d364c1e631dc326cdfe97fc6bfedfe2ae47fb14aeb70d938b5dde00dac77aab17bad6976ddedd30c5a57a3bcd563f826dc319d9914dea66614dee59d5346a8b7a076c63966af73ee7d7f4daffac4c86ff9f79c90efd82c5ab3d8299bb04f874d1a4420c3f4ef825dc0b0b2a6e7b434da4b74f0d6b9816a87eed4f35323d0094f8ee2e33531560db2e7feebe191a888da87499f9ff555cbc5f9e36e89dbd07",
        "stateRoot": "0x48f5a24f3f42d5f5650fbaaccf6713ba0b85d371e0ec7c5d5e1d16df47"
}

The logsBloom field in the response represents the Bloom filter value of that block. Its length is 512 hexadecimal characters, which corresponds to 2048 bits in binary.

So, when we want to query for a specific topic, Geth calculates the Bloom filter needed based on the topic and then iterates through the logsBloom field of each block within the specified range (from fromBlock to toBlock) to determine if the block may contain the corresponding log.

It is important to note that due to the probabilistic nature of Bloom filters and the limited size of the Bloom filter (2048 bits) compared to the vast number of blocks in Ethereum, there will be a large number of blocks that may result in false positives. This means that the Bloom filter may match, but the actual log may not exist in those blocks, leading to a significant amount of query time and even timeouts.

This issue has been discussed in the Geth GitHub repository.

However, there hasn't been a mature solution yet. One approach discussed is to increase the length of the Bloom filter. Another approach is to introduce a subscription mechanism where clients can subscribe to similar requests and allow Geth to query in the background without requiring synchronous queries that may result in timeouts.

Reverse KV

This article attempts to propose an approach to mitigate this issue, and we have observed its effectiveness in our internal Proof of Concept (PoC). For the previous example provided, where we are searching for logs with the topic "0xcfb473e6c03f9a29ddaf990e736fa3de5188a0bd85d684f5b6e164ebfbfff5d2" on the address "0xb62bcd40a24985f560b5a9745d478791d8f1945c", which is very sparse (only two records), we can use a caching layer that maps topics to the blocks where they exist.

For example, for the given topic, we can store the block number "0xed6e42" (decimal 15560258) as its value.

In this way, when we encounter a similar request, we can quickly determine the block where the log exists based on the topic and reassemble the request. In this case, since we already know that the log is in block "0xed6e42", we can modify the request as follows (note that both fromBlock and toBlock are set to "0xed6e42"):

{
  "method": "eth_getLogs",
  "params": [
    {
      "fromBlock": "0xed6e42",
      "toBlock": "0xed6e42",
      "address": "0xb62bcd40a24985f560b5a9745d478791d8f1945c",
      "topics": [
        [
          "0xcfb473e6c03f9a29ddaf990e736fa3de5188a0bd85d684f5b6e164ebfbfff5d2"
        ]
      ]
    }
  ],
  "id": 62,
  "jsonrpc": "2.0"
}

We can then send the modified requests to the actual Geth backend, optimizing the response time.

Implementation

Our key-value (KV) storage implementation uses Pebble, a Go key-value store similar to LevelDB/RocksDB, which is also the underlying storage engine for Juno.

Block

To implement storage, we first need to fetch blocks. Since we have our own ETH node, we can start fetching blocks from the first block on our node. The key code snippet is as follows:

if LatestBlockNumberInDBKey < LatestBlockNumberOnChain {
	for i := LatestBlockNumberInDBKey; i < LatestBlockNumberOnChain; i++ {
		fmt.Println("Dealing with block number: ", i)
		// Get the topic list for this block number
		topicList, err := getTopicListByBlockNumber(i)
		if err != nil {
			fmt.Printf("Failed to get topic list: %v\n", err)
			// Return so the caller can retry from LatestBlockNumberInDB
			return
		}
		fmt.Printf("Total topics: %d\n", len(topicList))
		if len(topicList) > 0 {
			for _, topic := range topicList {
				SetKeyNumberArray(topic, i)
			}
			// Flush writes to disk
			FlushDB()
		}
		// Record progress so a restart resumes from here
		SetKeyNumber("LatestBlockNumberInDB", i)
	}
}

Since values in Pebble are byte arrays, we need two helper functions:


func SetKeyNumberArray(key string, number uint64) error {
	// Check if key exists, if exists, read array and append number to array
	// If not exists, create array with number
	keyByteSlice := []byte(key)
	existingNumberArray, err := GetKeyNumberArray(key)
	if err != nil {
		return err
	}
	if existingNumberArray != nil {
		// Append number to array
		existingNumberArray = append(existingNumberArray, number)

		// Deduplicate array
		existingNumberArray = DeduplicateUint64Array(existingNumberArray)

		numberByteSlice := Uint64ArrayToByteSlice(existingNumberArray)
		err := DB.Set(keyByteSlice, numberByteSlice, pebble.NoSync)
		if err != nil {
			return err
		}
		return nil
	} else {
		// Create array with number
		numberArray := []uint64{number}
		numberByteSlice := Uint64ArrayToByteSlice(numberArray)
		err := DB.Set(keyByteSlice, numberByteSlice, pebble.NoSync)
		if err != nil {
			return err
		}
		return nil
	}
}

func GetKeyNumberArray(key string) ([]uint64, error) {
	keyByteSlice := []byte(key)
	value, closer, err := DB.Get(keyByteSlice)
	if err != nil {
		// Check if key is not found
		if err == pebble.ErrNotFound {
			return nil, nil
		}
		return nil, err
	}
	defer closer.Close()

	numberArray := ByteSliceToUint64Array(value)
	return numberArray, nil
}

Since we aim for performance during the Set operation, we use pebble.NoSync mode for writing. After processing each block, we manually call FlushDB to flush the writes to disk.
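The snippets above call three helpers that are not shown in the post (`Uint64ArrayToByteSlice`, `ByteSliceToUint64Array`, `DeduplicateUint64Array`). A minimal sketch, assuming a fixed-width big-endian encoding of each block number, might look like:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// Uint64ArrayToByteSlice encodes each uint64 as 8 big-endian bytes,
// so an array of N block numbers becomes an 8*N byte Pebble value.
func Uint64ArrayToByteSlice(numbers []uint64) []byte {
	buf := make([]byte, 8*len(numbers))
	for i, n := range numbers {
		binary.BigEndian.PutUint64(buf[i*8:], n)
	}
	return buf
}

// ByteSliceToUint64Array is the inverse decoding.
func ByteSliceToUint64Array(b []byte) []uint64 {
	numbers := make([]uint64, len(b)/8)
	for i := range numbers {
		numbers[i] = binary.BigEndian.Uint64(b[i*8:])
	}
	return numbers
}

// DeduplicateUint64Array removes duplicates while preserving order.
func DeduplicateUint64Array(numbers []uint64) []uint64 {
	seen := make(map[uint64]bool, len(numbers))
	out := numbers[:0]
	for _, n := range numbers {
		if !seen[n] {
			seen[n] = true
			out = append(out, n)
		}
	}
	return out
}

func main() {
	enc := Uint64ArrayToByteSlice([]uint64{15560258})
	fmt.Println(ByteSliceToUint64Array(enc)) // round-trips to [15560258]
}
```

A fixed-width encoding keeps decoding trivial; a varint encoding would save space for small block numbers at the cost of a more involved decoder.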

Handler

Once we have fetched the blocks, we can handle user POST requests. The general approach is as follows:

  • Parse the user's POST request. If the request is for eth_getLogs, split the user's topics and query the KV database.
  • Retrieve the corresponding block range from the KV database based on the topics.
  • Consolidate the block range.
  • Reassemble the user's request by replacing fromBlock and toBlock with the consolidated block range.
  • Send the reassembled requests to the Geth backend and concatenate the returned data.
  • Return the data to the user.
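The third step, consolidating the block range, can be sketched as follows. `ConsolidateRanges` is a hypothetical helper that merges the block numbers returned from the KV store into the minimal set of contiguous [from, to] spans, each of which becomes one rewritten eth_getLogs request:

```go
package main

import (
	"fmt"
	"sort"
)

// BlockRange is an inclusive [From, To] span usable as
// fromBlock/toBlock in a rewritten eth_getLogs request.
type BlockRange struct {
	From, To uint64
}

// ConsolidateRanges merges block numbers (possibly unsorted,
// possibly duplicated) into the minimal set of contiguous ranges.
func ConsolidateRanges(blocks []uint64) []BlockRange {
	if len(blocks) == 0 {
		return nil
	}
	sorted := append([]uint64(nil), blocks...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })

	ranges := []BlockRange{{From: sorted[0], To: sorted[0]}}
	for _, b := range sorted[1:] {
		last := &ranges[len(ranges)-1]
		if b == last.To || b == last.To+1 {
			last.To = b // extend the current span
		} else {
			ranges = append(ranges, BlockRange{From: b, To: b})
		}
	}
	return ranges
}

func main() {
	fmt.Println(ConsolidateRanges([]uint64{5, 1, 2, 3, 7, 7, 8})) // → [{1 3} {5 5} {7 8}]
}
```

Merging adjacent blocks keeps the number of backend requests small when a topic appears in consecutive blocks.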

Demo

In our local environment, we verified that with this KV caching layer the above query returns results in approximately 450ms. Note that the majority of the 450ms is round-trip time (RTT) between the local environment and our ETH node; the actual KV response time is within 20ms.

Optimizing the timeout issue in Geth when querying large-span sparse logs through KV.

Future Plans

In the example above, we have completed a PoC for optimization. However, there are still several areas that need improvement, such as:

  • Handling multiple topic intersections and queries
  • Dealing with large block ranges for a specific topic, where the caching layer needs to be able to prevent requests from reaching the actual Geth backend to avoid excessive backend pressure
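For the first item, one possible approach, assuming each per-topic block list is kept sorted (as it is when blocks are processed in ascending order), is a linear merge-intersection of the lists, since an AND query can only match blocks present in every topic's list. This is a sketch under that assumption, not the shipped implementation:

```go
package main

import "fmt"

// IntersectSorted returns the block numbers present in both sorted,
// deduplicated slices. For an AND query over several topics, the
// candidate blocks are the intersection of all per-topic lists.
func IntersectSorted(a, b []uint64) []uint64 {
	var out []uint64
	i, j := 0, 0
	for i < len(a) && j < len(b) {
		switch {
		case a[i] < b[j]:
			i++
		case a[i] > b[j]:
			j++
		default:
			out = append(out, a[i])
			i++
			j++
		}
	}
	return out
}

func main() {
	topicA := []uint64{100, 200, 15560258}
	topicB := []uint64{200, 300, 15560258}
	fmt.Println(IntersectSorted(topicA, topicB)) // → [200 15560258]
}
```

The merge runs in O(len(a)+len(b)) and can be folded left-to-right across any number of topic lists, shrinking the candidate set at each step.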

We have already deployed this caching layer on an internal test node and open-sourced the program on GitHub at: https://github.com/reddio-com/eth_logcache. We welcome interested developers to use it and contribute to its further development.

]]>
<![CDATA[Twilio CEO Jeff Lawson Stepped Down, Twilions Follow His Path]]>Two weeks ago, Twilio co-founder Jeff Lawson stepped down as CEO. This story has more than the usual fanfare about "investors forcing the founder to quit". While there are lots of articles talking about that aspect, I, however, want to focus on my experience as an ex-Twilion and

]]>
https://blog.reddio.com/twilio-ceo-jeff-lawson-stepped-down-twilions-follow-his-path/65ab5586f970ca0001e52529Sat, 20 Jan 2024 05:37:19 GMT

Two weeks ago, Twilio co-founder Jeff Lawson stepped down as CEO. This story carries more than the usual fanfare about "investors forcing the founder to quit". While there are plenty of articles covering that aspect, I want to focus on my experience as an ex-Twilion: my journey at Twilio, why I left, and how I decided to start my own company, Reddio.

Twilio, at its core, makes telecommunication accessible to developers with its easy-to-use APIs. Back in 2006, integrating SMS with a webpage could easily take a month. First, I had to find a reliable open source SMS gateway and run it on a server. Then, I had to negotiate a deal with telcos to connect their SMS gateway to my server. Only then could I finally push an SMS from the webpage to customers' phones. It took far longer than necessary. When I found out that Twilio reduced this complicated procedure to 5 minutes, I was sold. With its sleek experience and robustness, Twilio made it effortless. After I joined Twilio as its 3rd employee in APAC, as a Solution Engineer, I got the chance to talk to developers like myself, and their feedback was always consistent: they just loved Twilio.

I was fortunate enough to have lunch with Jeff during my onboarding in San Francisco, and he asked a lot of questions about APAC developers: what rules they implement in their own companies, how they assess technical decisions, what the ecosystem there looks like, and so on. He was eager to understand the market and wanted to figure out the best way for Twilio to expand into the region. Everyone in the San Francisco office was determined and self-driven, right up to the founder himself. We had tremendous growth in APAC afterwards, almost doubling revenue every year, and we all felt that we belonged at Twilio. I was also lucky enough to find my passion as a technology advocate, building tools for developers during my tenure there.

Twilio CEO Jeff Lawson Stepped Down, Twilions Follow His Path
The three-person APAC team won Twilio's Best Team award in 2016

However, after the Twilio IPO, the revenue number became the only goal and things started to change, so I decided to leave Twilio two years after the IPO. After leaving, a question burned within me: how could I find another company like the earlier, developer-first Twilio to work for? I found none. So I thought to myself: why don't I build another Twilio of my own? I saw quite a few ex-Twilions follow this path after they left, too.

Looking across industries, Web3 came into my view because it is still in its early stages. Like telco engineers in the early days, Web3 developers have to spend months learning and coding just to make the non-essential pieces work, when they should be spending most of their time on the tasks essential to their primary business. This is especially prominent for app and game developers.

Even the company name Reddio was inspired by Twilio. That's how much I love Twilio and how eager I am to build another company like it, from the company culture to the way it develops and delivers products.

Our company zoomed into Web3 and talked to developers about their struggles, and Reddio is building tools to solve their needs. That's how Reddio ended up with our own zk Layer 2, wrapped in different APIs, so that developers can easily integrate Web3 without needing to learn any blockchain programming language such as Solidity or Cairo. Meanwhile, we focus on making our products easy to use, stable, and scalable. From there, we continued to build, received recognition from Paradigm, and secured their funding to support building our products and fueling our growth.

I have followed Jeff quite consistently and read his book 'Ask Your Developer' when it was released. I have learned a lot from him and from Twilio. At this juncture, I can only wish him all the best in his future adventures, and I look forward to his next mission when he is ready to reveal it. In the meantime, ex-Twilions like myself have been building our own Twilios along the way, and I cannot wait to see how things come together and more ex-Twilion companies come to life.

]]>
<![CDATA[Building an ERC20 Token App on Starknet with Starknet React: A Comprehensive Guide]]>This guide offers steps to build a React web application with Starknet React to interact with an ERC20 smart contract on Starknet. You can also get the source code from our GitHub repo. Readers will:

  • Understand how to implement the ERC20 interface
  • Discover ways to engage with contracts within a React
]]>
https://blog.reddio.com/building-a-erc20-token-app-on-starknet-a-comprehensive-guide/6590211cf970ca0001e523c2Tue, 16 Jan 2024 17:26:12 GMT

This guide offers steps to build a React web application with Starknet React to interact with an ERC20 smart contract on Starknet. You can also get the source code from our GitHub repo. Readers will:

  • Understand how to implement the ERC20 interface
  • Discover ways to engage with contracts within a React application
  • Design their own ERC20 token and initiate it on Starknet

A prerequisite for this guide is a foundational understanding of both the Cairo programming language and ReactJS. Additionally, ensure Node.js and NPM are installed on the system.

The guide will walk through creating an ERC20 token named reddiotoken and crafting a web3 interface for functionalities such as balance verification and token transfer.

Deploy an ERC20 smart contract on Starknet

Follow this guide to deploy the smart contract. If you do it correctly, you will have an ERC20 smart contract named 'reddio token' deployed on the Starknet testnet, with 1,000,000 tokens minted, and you will have its contract address, which is important for the next integration steps. For convenience, we have already integrated the Starknet project into the repo, so you can compile and deploy directly.

Introduction to Starknet.js and Starknet React

Starknet.js is a JavaScript/TypeScript library designed to connect your website or decentralized application (dApp) to Starknet. It aims to mimic the architecture of ethers.js, so if you are familiar with ethers, you should find Starknet.js easy to work with.

Building an ERC20 Token App on Starknet with Starknet React: A Comprehensive Guide

To make Starknet.js integration easy for React developers, the community developed starknet-react (documentation), inspired by wagmi, providing a collection of React hooks tailored for Starknet.

Integrating Starknet React into your React App

To integrate starknet-react, you just need to add the following dependencies to your React app.

"@starknet-react/chains": "^0.1.0",
"@starknet-react/core": "^2.0.0",
"get-starknet-core": "^3.2.0",

The StarknetConfig component in starknetprovider.tsx lets you specify wallet connection options for users through its connectors prop.

export function StarknetProvider({ children }: { children: React.ReactNode }) {
  const { connectors } = useInjectedConnectors({
    // Show these connectors if the user has no connector installed.
    recommended: [argent(), braavos()],
    // Hide recommended connectors if the user has any connector installed.
    includeRecommended: "onlyIfNoConnectors",
    // Sort connectors alphabetically by their id.
    order: "alphabetical",
  });

  return (
    // React context that provides access to
    // starknet-react hooks and shared state
    <StarknetConfig
      chains={[goerli]}
      provider={reddioProvider({ apiKey })}
      connectors={connectors}
      // autoConnect={false}
    >
      {children}
    </StarknetConfig>
  );
}

Establishing connection and managing account

After defining the connectors in the config, you can use a hook to access them. This enables users to connect their wallets.

export default function Connect() {
  const { connect, connectors } = useConnect();

  return (
    <div className="flex justify-center gap-8">
      {connectors.map((connector) => (
        <button
          className="btn"
          onClick={() => connect({ connector })}
          key={connector.id}
          disabled={!connector.available()}
        >
          Connect {connector.id}
        </button>
      ))}
    </div>
  );
}

Once connected, the useAccount hook provides access to the connected account, giving insights into the connection's current state. State values like isConnected and isReconnecting update automatically, easing UI updates. This is particularly useful for asynchronous processes, removing the need for manual state management in your components.

Smart contract interactions

Similar to wagmi, Starknet React has useContractRead for read operations on smart contracts. These operations are independent of the user's connection status and don't require a signer. The useBalance hook simplifies retrieving balances without needing an ABI.

  // Convenience hook for getting
  // formatted ERC20 balance
  const { data: balance } = useBalance({
    address,
    token: CONTRACT_ADDRESS,
    // watch: true <- refresh at every block
  });

Unlike wagmi, the useContractWrite hook benefits from Starknet's native support for multicall transactions. This improves user experience by facilitating multiple transactions without individual approvals.

  const calls = useMemo(() => {
    if (!amount || !to || !contract || !balance) return;

    // format the amount from a string into a Uint256
    const amountAsUint256 = cairo.uint256(
      BigInt(Number(amount) * 10 ** balance.decimals)
    );

    return contract.populateTransaction["transfer"](to, amountAsUint256);
  }, [to, amount, contract, balance]);

  // Hook returns function to trigger multicall transaction
  // and state of tx after being sent
  const { write, isPending, data } = useContractWrite({
    calls,
  });

To improve the App's UX, you can refer to the Starknet React documentation to integrate more hooks into the sample app.

Summary

In this blog, we walked you through how to easily integrate Starknet React and quickly build a token app. Hopefully it helps you understand the mechanism and get your own app up and running quickly. We will publish more sample apps like this; you can follow our GitHub page for updates, or join our Discord if you want to contribute to the source code or have any questions.

]]>
<![CDATA[Ethereum Layer 2 Unleashed: Year 2023 in Review with Reddio]]>Reddio steadfastly maintains its mission: to simplify blockchain technologies for Web 2 developers. By honing our focus on Ethereum Layer 2, we've made it exceptionally user-friendly for developers to engage and innovate.

With such mission in mind, over the past year, we focus on listening to developers'

]]>
https://blog.reddio.com/ethereum-layer-2-unleashed-year-2023-in-review-with-reddio/65910c9df970ca0001e523fbSun, 31 Dec 2023 09:47:35 GMT

Reddio steadfastly maintains its mission: to simplify blockchain technologies for Web 2 developers. By honing our focus on Ethereum Layer 2, we've made it exceptionally user-friendly for developers to engage and innovate.

With this mission in mind, over the past year we have focused on listening to developers' needs and consistently delivering for them.

1. Enhancing Reddio StarkEx Layer 2

StarkEx is the most mature zk Layer 2 technology and has accumulated $1.16T, with 125M NFTs minted on the network. By partnering with StarkWare, Reddio delivered on-chain asset support and asset trading for Layer 2 in 2022.

To complete the tech stack in 2023, we stabilized and enhanced Reddio's own sequencer for StarkEx, iterated on the APIs based on developers' feedback, and launched a StarkEx block explorer and an L1<>StarkEx L2 bridge. With the recent launch of ERC20 trading APIs, we are completing the stack for StarkEx Layer 2. To enable developers to finish their first task in 5 minutes and accelerate their integration, we enhanced the JS and Unity SDKs and launched Java, Go, Python, and C# libraries along with quickstart demos.

Ethereum Layer 2 Unleashed:  Year 2023 in Review with Reddio

2. Working on Starknet smart contract templates

With Starknet becoming more mature, we received more feedback about the difficulties of building smart contracts on Starknet. To help developers resolve these difficulties, we delivered a family of smart contract templates for deploying applications faster on Starknet, covering the most common development requests such as ERC20, token airdrop, ERC721, ERC1155, ERC721 staking, and an NFT marketplace.

3. Deliver Starknet node services with the lowest global latency

There are very few node service providers for Starknet, and none of the existing providers has focused on low latency. We benchmarked the existing solutions and chose the best one, with a fine-tuning mechanism, to make these node services available to all developers.

4. Deliver Reddio's Starknet Layer 2

The more we talked to Starknet developers, the more we realised an alternative Starknet is needed, one that provides 0 gas fees and high performance for fully on-chain games, DEXes, and DApps. Hence, we launched our Starknet Layer 2, powered by Madara, to do just that. The testnet is ready and has been battle-tested with a few of our own customers, and we are preparing it for a larger audience in January 2024.

5. Start Reddio's own Starknet sequencer for Appchain

While extensively testing the Starknet Layer 2 testnet, we realised Madara is far from production-ready, especially when it comes to specific requirements. With a core infra engineer from Scroll joining us, and with our experience building a StarkEx sequencer, we are confident in our ability to build a more stable and much higher-performance sequencer for developers. Our sequencer will support both CairoVM and EVM, with a tick-based system for different genres of fully on-chain games. We are building the stack now and will announce the open-source stack in Q2 2024.

Ethereum Layer 2 Unleashed:  Year 2023 in Review with Reddio

Grand Roadmap in 2024 and beyond

Without a grand roadmap, it's hard for developers to follow along. Here's the grand roadmap developers have been asking for, most of which will be available in 2024.

  1. Improve StarkEx developer experience further for developers to onboard with Reddio
    This will include improvements to the onboarding journey in the dashboard, more tutorials, quickstarts, SDKs, and libraries.
  2. Reddio's Starknet Layer 2 with 0 gas and high performance
    We have been advocating 0 gas for Layer 2 for years, and successfully launched a StarkEx-powered Layer 2 with 0 gas. Now we bring that experience to Starknet and launch our own Starknet Layer 2, powered by the most customizable sequencer, developed by Reddio.
  3. Incentive plan for developers and users to onboard with the best technologies
    Starting with a point system, we will keep track of what developers have been building on either StarkEx Layer 2 or Starknet Layer 2. We will also launch our staking plan to incentivize users once they have deposited their assets to Reddio's own Layer 2.
  4. One click to launch L2/L3 Appchain powered by the best technology in the industry
    By launching our own L2, Reddio will provide the infrastructure and developer experience so that developers can easily fulfill and/or service large customers' blockchain requirements.  
  5. 5 minutes to get simple task done without learning Cairo or Solidity for your own Appchain
    Ultimately, Reddio's goal is to assist all developers. In doing so, we make it easy for Web2 developers to use our L2/L3 infra, with a very minimal learning curve.
Ethereum Layer 2 Unleashed:  Year 2023 in Review with Reddio
Reddio Platform

The Reddio team is very excited about 2024. We anticipate that 2024 will be a great year for L2/L3 and we are ecstatic to share our technology with the greater developer community. We invite interested partners to talk to us and build together with us on the Reddio Starknet Layer 2. Together, we can lay the foundation for L2/L3 Appchain. As such, we especially welcome partners working on wallets, bridges, AMMs, and NFT marketplaces to talk to us.

2024, Onward and Upward!

]]>