Jekyll feed, last updated 2025-09-18T05:56:39+00:00
https://b00f.github.io//feed.xml
b00f: I write, so I am
Mostafa Sedaghat Joo

BLS Threshold Signature using Shamir’s Secret Sharing (2025-07-11)
https://b00f.github.io//cryptography/bls-threshold-shamir-secret-sharing

In this post, I’m going to explain how we can use Shamir’s Secret Sharing to build a threshold signature scheme on top of the BLS signature scheme.

Threshold Signature

A Threshold Signature allows a group of participants to collectively sign a message such that any subset of at least $k$ out of $n$ participants can generate a valid signature, while fewer than $k$ cannot. This is extremely useful in distributed systems, where we want fault tolerance and decentralization in signing operations without having to trust a single key holder.

Shamir’s Secret Sharing

Adi Shamir, in his seminal 1979 paper “How to Share a Secret”, introduced a method to split a secret into parts using Lagrange interpolation over a finite field. This technique is known as Shamir’s Secret Sharing (SSS) and remains one of the most widely used methods for secure secret splitting.

In a nutshell, a secret can be embedded in a random polynomial:

\[f(x) = a_0 + a_1x + a_2x^2 + \cdots + a_{k-1}x^{k-1}\]

Here, $ a_0 $ is the secret, and the rest of the coefficients $a_1, a_2, \dots, a_{k-1}$ are chosen randomly.

We then evaluate this polynomial at $n$ different non-zero values $ x_1, x_2, \dots, x_n $ to generate $n$ shares: $ (x_i, f(x_i)) $. These shares are distributed to participants.

To recover the secret $ a_0 = f(0) $, we only need $k$ shares. Using Lagrange interpolation, we compute the Lagrange coefficient $ \lambda_j $ for each share, and reconstruct the secret:

\[\lambda_j = \prod_{\substack{m=0 \\ m \ne j}}^{k-1} \frac{x_m}{x_m - x_j}\]

Then the secret is recovered as:

\[a_0 = f(0) = \sum_{j=0}^{k-1} y_j \cdot \lambda_j\]

BLS Signature

The Boneh–Lynn–Shacham (BLS) signature scheme is a digital signature scheme based on elliptic curves and bilinear pairings. The signer has a private key $ sk \in \mathbb{Z}_q $ and a public key $ pk = sk \cdot G $, where $ G $ is a generator of an elliptic curve group.

To sign a message $ m $, the signer first hashes it to a point on the curve: $ H(m) \in G_1 $, then computes the signature:

\[\sigma = sk \cdot H(m)\]
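The post does not show verification, but it is worth stating. Assuming public keys live in $G_2$ (so $pk = sk \cdot G$ with $G$ a generator of $G_2$) and $e$ is the bilinear pairing, a signature $\sigma$ on $m$ is valid if and only if:

\[e(\sigma, G) = e(H(m), pk)\]

This holds by bilinearity of $e$: $e(sk \cdot H(m), G) = e(H(m), G)^{sk} = e(H(m), sk \cdot G)$.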

BLS Threshold Signature

In a BLS threshold signature scheme, a trusted dealer splits the secret signing key into $ N $ shares using Shamir’s Secret Sharing and distributes one share to each participating party.

Each party can independently compute a partial signature. As long as $ K $ or more parties sign the same message, the final signature can be reconstructed.

Here’s how it works:

  1. Let $ \mathcal{T} = \{ i_1, i_2, \dots, i_t \} $ be the indices of the $ t $ participants (where $ t \ge K $) who provide partial signatures.

  2. Each participant computes their partial signature:

    \[\sigma_{i_j} = s_{i_j} \cdot H(m)\]

    where:

    • $ s_{i_j} $ is the secret share of participant $ i_j $
    • $ H(m) \in G_1 $ is the hash of the message mapped to the curve
  3. The combiner (aggregator) computes the final signature:

    \[\sigma = \sum_{j=1}^{t} \lambda_{i_j} \cdot \sigma_{i_j}\]

    where $ \lambda_{i_j} $ is the Lagrange coefficient:

    \[\lambda_{i_j} = \prod_{\substack{k=1 \\ k \ne j}}^{t} \frac{i_k}{i_k - i_j}\]
  4. Substituting the partial signatures:

    \[\sigma = \sum_{j=1}^{t} \lambda_{i_j} \cdot (s_{i_j} \cdot H(m)) = \left( \sum_{j=1}^{t} \lambda_{i_j} \cdot s_{i_j} \right) \cdot H(m)\]

    By Lagrange interpolation, we know:

    \[\sum_{j=1}^{t} \lambda_{i_j} \cdot s_{i_j} = s\]

    where $ s $ is the original secret key.

Thus, the final signature is:

\[\sigma = s \cdot H(m)\]

which is exactly the standard BLS signature.

Mostafa Sedaghat Joo
Rust Implementation of CGGMP21 (2025-06-13)
https://b00f.github.io//cryptography/cggmp21-rust

In one of my previous roles, I was part of a cryptography team responsible for implementing threshold signatures for ECDSA. This work is particularly important for many blockchains—like Bitcoin and Ethereum—that use ECDSA, which unfortunately doesn’t natively support multi-signature or threshold signature schemes.

While Bitcoin has recently adopted Schnorr signatures, which natively support multi-signatures and are elegantly simple (something I really like), ECDSA remains dominant in the ecosystem. For threshold ECDSA, one of the most efficient and well-designed protocols to date is CGGMP21.

The protocol is detailed in the paper “UC Non-Interactive, Proactive, Threshold ECDSA with Identifiable Aborts.” The acronym CGGMP comes from the initials of the authors’ last names: Canetti, Gennaro, Goldfeder, Makriyannis, and Peled.

What sets CGGMP21 apart is its non-interactive design: thanks to a pre-signing phase, the actual signing can be completed efficiently without repeated rounds of interaction. It also features proactive security and identifiable aborts, making it resilient and robust for real-world use.

We had successfully implemented CGGMP21 internally, but unfortunately the project remained closed-source and was never published, despite how valuable it could have been to the wider ecosystem.

Recently, I came across a new Rust implementation of CGGMP21 that is open-source. It’s exciting to see the community making this cutting-edge cryptography more accessible, and I highly recommend checking it out if you’re working on threshold signing for ECDSA.

Define Interfaces Properly (2025-04-23)
https://b00f.github.io//others/define_interfaces_properly

I was recently reviewing some code and came across this interface designed for interacting with Redis:

package port

import (
	"time"
)

type RedisPort interface {
	Set(key string, value string, ttl time.Duration) error
	Get(key string) (string, error)
	Del(keys ...string) error
	Exists(key string) (bool, error)
	SetJSON(key string, val any, ttl time.Duration) error
	GetJSON(key string, out any) error

	Close() error
}

At first glance, it gets the job done. But there are a few points that could make it even better.

Interfaces Should Be Descriptive

Interfaces or traits are descriptive, not blueprints for specific implementations. They describe the common behavior of a group of types.

For example, imagine describing a flower. You might mention its petals, color, and fragrance—properties common to all flowers. But you wouldn’t define a Rose as an interface; that’s too specific.

Naming an interface RedisPort is too specific. It’s better to name it something more general, like CachePort.

Interfaces Should Be Cohesive

Another improvement is to remove the Close() method. It doesn’t align with the rest of the interface, which focuses on caching operations. Including Close() is like adding “can die” to describe a flower. Sure, everything dies, but it’s not relevant to the core behavior we’re describing.

Interfaces should expose harmonious methods that embody a single responsibility or a cohesive set of behaviors.

Interfaces Can Support Partial Implementation

Some languages like Rust allow partial implementation of traits. While Go doesn’t support this natively with interfaces, you can achieve similar functionality using embedded structs with method promotion.

A Cleaner, More Flexible Interface

Let’s apply these improvements into a cleaner, more robust interface:

package port

import (
	"encoding/json"
	"time"
)

type Option func(*Options)

type Options struct {
	// Time-To-Live for cached item.
	TTL time.Duration
}

func WithTTL(ttl time.Duration) Option {
	return func(opt *Options) { opt.TTL = ttl }
}

type CachePort interface {
	Set(key string, value string, opts ...Option) error
	Get(key string) (string, error)
	Del(keys ...string) error
	Exists(key string) (bool, error)
}

type CacheWithJSON struct {
	CachePort
}

func (c CacheWithJSON) SetJSON(key string, val any, opts ...Option) error {
	bz, err := json.Marshal(val)
	if err != nil {
		return err
	}
	return c.Set(key, string(bz), opts...)
}

func (c CacheWithJSON) GetJSON(key string, out any) error {
	str, err := c.Get(key)
	if err != nil {
		return err
	}
	return json.Unmarshal([]byte(str), out)
}

With this setup, you only need to implement the core cache methods like Set and Get, and the JSON helpers come built in at no extra effort. Additionally, the interface is easily extensible through options, providing the flexibility to support additional features in the future.
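To see the method promotion in action, here is a toy end-to-end example: the interface and JSON wrapper from above, plus a hypothetical in-memory backend (memCache is illustrative only and ignores TTL options for brevity):

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"time"
)

type Option func(*Options)

type Options struct {
	TTL time.Duration
}

type CachePort interface {
	Set(key string, value string, opts ...Option) error
	Get(key string) (string, error)
	Del(keys ...string) error
	Exists(key string) (bool, error)
}

type CacheWithJSON struct {
	CachePort
}

func (c CacheWithJSON) SetJSON(key string, val any, opts ...Option) error {
	bz, err := json.Marshal(val)
	if err != nil {
		return err
	}
	return c.Set(key, string(bz), opts...)
}

func (c CacheWithJSON) GetJSON(key string, out any) error {
	str, err := c.Get(key)
	if err != nil {
		return err
	}
	return json.Unmarshal([]byte(str), out)
}

// memCache is a toy in-memory CachePort backend; it ignores TTL for brevity.
type memCache struct {
	data map[string]string
}

func (m *memCache) Set(key, value string, opts ...Option) error {
	m.data[key] = value
	return nil
}

func (m *memCache) Get(key string) (string, error) {
	v, ok := m.data[key]
	if !ok {
		return "", errors.New("key not found")
	}
	return v, nil
}

func (m *memCache) Del(keys ...string) error {
	for _, k := range keys {
		delete(m.data, k)
	}
	return nil
}

func (m *memCache) Exists(key string) (bool, error) {
	_, ok := m.data[key]
	return ok, nil
}

func main() {
	// Wrap the core cache; SetJSON/GetJSON are promoted from CacheWithJSON.
	cache := CacheWithJSON{CachePort: &memCache{data: map[string]string{}}}

	type user struct{ Name string }
	_ = cache.SetJSON("u:1", user{Name: "alice"})

	var u user
	_ = cache.GetJSON("u:1", &u)
	fmt.Println(u.Name) // prints: alice
}
```

memCache implements only the four core methods, yet the wrapped value exposes SetJSON and GetJSON as well: that is Go's method promotion doing the work that partial trait implementations do in Rust.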

From Devil’s Bridge to Devil’s House (2024-08-04)
https://b00f.github.io//others/devil_bridge

During my college years, a group of friends and I decided to visit the Azar-Barzin-Mehr fire temple, a site believed to have been built around 2,500 years ago. This fire temple was one of the three major fire temples in the ancient Persian Empire, where the sacred fire was said to have burned continuously for centuries.

We knew the temple was located on top of a mountain near a village, so we set off, hoping the villagers could guide us. Surprisingly, no one in the village had heard of Azar-Barzin-Mehr or the fire temple. Our hope began to fade until we met an old man who spoke of a “Devil’s House.” Intrigued, we asked him about it, and he described a structure that, according to local lore, could not be destroyed because it was built by the devil himself. We realized this must be the fire temple; its endurance through the ages had given it an almost supernatural reputation. With his help, we located the mountain and began our climb. It was challenging, but finally, we stood before the ancient temple, amazed by its strength and historical significance.

Recently, I came across a fascinating story about the Devils’ Bridges in Europe. These ancient bridges, still standing and functional after centuries, are covered in mystery. Locals call them devil’s bridges, believing their construction to be beyond human ability, attributing them instead to the devil.

Devils' Bridge

Both stories share a common theme: the incredible cleverness of ancient engineers. Whether it’s the fire temple in Persia or the bridges in Europe, these structures resist time and doubt. The names of their creators may be lost to history, but their remarkable works continue to stand tall, drawing our attention and inspiring wonder.

From Long-range to 51% problems (2023-09-17)
https://b00f.github.io//blockchain/long_range_and_nothing_at_stake_problems

A blockchain is defined as a chain of blocks, where each block contains a set of transactions. Based on this definition, an attacker, or a group of attackers, could select any block as a base point and attempt to create an alternate fork.

In this article, we’ll explore these potential malicious behaviors.

Long Range problem

In a Long-Range problem, an adversary chooses a block from the past and tries to create an alternate chain starting from that block. The main goal is to rewrite the blockchain’s history: by doing so, they can add or remove the transactions they desire. Because the fork begins from an earlier point in the blockchain, this is termed a “Long-Range” problem.

Long Range problem

Proof-of-Work

In the case of Bitcoin, since the hashing power has constantly increased over time, rewriting past blocks requires a huge amount of energy. It is almost impossible for the adversary to rewrite the last 6 blocks.

Bitcoin hash rate

Indeed, Satoshi Nakamoto showed that an adversary with 10% of the mining power has less than a 0.1% chance of rewriting the last 5 blocks (less than one hour of history), while an adversary with 45% of the mining power drops below the same probability only after 340 blocks (about 56 hours, or 2.5 days).

Bitcoin - hash power vs number of blocks

The chart below shows the probability of rewriting blocks by an adversary with 10% of the mining power.

Bitcoin - number of blocks vs probability
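These numbers come from the attacker-success calculation in section 11 of the Bitcoin whitepaper. A small Go port of that calculation (the function name is mine) reproduces them:

```go
package main

import (
	"fmt"
	"math"
)

// attackerSuccess returns the probability that an attacker controlling a
// fraction q of the hash rate ever catches up from z blocks behind,
// following the formula in the Bitcoin whitepaper:
//   P = 1 - sum_{k=0}^{z} (lambda^k e^-lambda / k!) * (1 - (q/p)^(z-k)),
// with p = 1 - q and lambda = z * q / p.
func attackerSuccess(q float64, z int) float64 {
	p := 1.0 - q
	lambda := float64(z) * (q / p)
	sum := 1.0
	poisson := math.Exp(-lambda) // Poisson term for k = 0
	for k := 0; k <= z; k++ {
		sum -= poisson * (1.0 - math.Pow(q/p, float64(z-k)))
		poisson *= lambda / float64(k+1) // advance to the k+1 term
	}
	return sum
}

func main() {
	// An adversary with 10% of the hash rate rewriting 5 blocks succeeds
	// with probability below 0.1%, matching the whitepaper's table.
	fmt.Printf("q=0.10, z=5:   P = %.7f\n", attackerSuccess(0.10, 5))
	fmt.Printf("q=0.45, z=340: P = %.7f\n", attackerSuccess(0.45, 340))
}
```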

Unfortunately, many proof-of-work blockchains don’t have a hash rate growth like Bitcoin’s. In some blockchains, like Bitcoin SV, the hashing power even dropped over time.

Bitcoin-SV hash rate

Proof-of-Stake

This problem appears more serious in Proof-of-Stake (PoS) blockchains. In these blockchains, an adversary can obtain (or even purchase) the private keys of early validators. If they possess enough keys, they can build a new fork starting from early blocks (even from the genesis block!). Since there is no actual work involved, just staking, reorganizing the blockchain becomes very easy.

Solution

So, as you can see, most blockchains appear vulnerable to the long-range problem. However, this problem is not serious and can even be ignored: any chain that suddenly appears and rewrites old history can simply be rejected, and miners or validators keep extending the chain they already know.

51% and Nothing-at-stake problems

Attackers may choose to create a fork at the tip of the blockchain. By doing so, they can execute a double-spending transaction. For example, in one fork, Eve sends coins to Alice, and in another fork, she sends the same coins to Bob. This problem is known as the “51% attack” in proof-of-work blockchains and “Nothing-at-stake” in proof-of-stake blockchains.

Nothing at stake problem

Proof-of-Work

The consensus mechanism in Proof-of-Work blockchains doesn’t provide safety [1]: there might be more than one valid block at each height.

Liveness over safety

In the case of Bitcoin, forks can happen, but very rarely. And since the hashing power is high, the chance of an adversary creating a fork longer than two blocks is really low. This is why many exchanges make you wait for at least 2 block confirmations before withdrawing or depositing your Bitcoins.

Most people think that proof-of-work blockchains are more secure than other consensus types, but in the real world, Proof-of-Work blockchains are more vulnerable to this problem. For example, Ethereum Classic has suffered 51% attacks, including one in which a single miner injected more than 3,000 blocks.

Proof-of-Stake

If the consensus mechanism in a proof-of-stake blockchain doesn’t provide safety, it is vulnerable to the Nothing-at-stake problem.

In blockchains like Ethereum, adversaries with enough stake can create a valid fork. To counter this, Ethereum has a slashing mechanism to punish validators that double-vote on different forks.

However, other blockchains prioritize safety. In these cases, there’s no way to have a fork at the tip of the blockchain.

Safety over liveness

If a fork happens, it is the result of a bad implementation or, even worse, of the majority of nodes (with more than ⅔ of the total stake) not being loyal and honest. So the Nothing-at-stake problem is not possible in these blockchains. Blockchains like Algorand and Pactus are not vulnerable to the Nothing-at-stake problem, and therefore need no slashing mechanism.


  1. To understand more about the different consensus mechanisms, read here

Secure your cloud server (2023-05-21)
https://b00f.github.io//linux/securing_cloud_server

One of the best practices to secure a Linux cloud server is to avoid using the root account for day-to-day operations. If you’ve just created a new server in AWS or on another cloud platform, you’ll need to create a new username and avoid connecting to the server using the root account.

Let’s do this step by step.

Assume you’ve just created a new Linux server. This guide is based on Debian or Ubuntu systems. However, for other distributions, most of the commands will be more or less the same.

Step 1: Check the current users

Connect to the server using SSH:

$ ssh root@<ip_address>

To determine which users are members of the root and sudo groups, inspect the /etc/group file:

# cat /etc/group | grep "sudo\|root"

After executing the above command, you might see an output similar to:

root:x:0:
sudo:x:27:

This output indicates that there are currently no users assigned to the sudo or root groups.

You can check which users have SSH login capabilities by examining the /etc/passwd file.

# cat /etc/passwd | grep /bin/

Executing this command may produce an output like:

root:xxxxxxxxxxxxxx:/bin/bash
sync:xxxxxxxxxxxxxx:/bin/sync
admin:xxxxxxxxxxxx:/bin/bash

This indicates that the users root, sync, and admin have login shells such as bash or sync.

To remove all users except root:

# userdel -r admin
# userdel -r sync

The -r flag deletes the user along with their home directory. While it’s not strictly necessary to remove the sync user, we will.

Step 2: Add a New User

Let’s proceed to add a new user and grant it sudo execution privileges.

First, ensure that sudo is installed:

# apt update
# apt install sudo

Now, create a new user named pactus (feel free to choose a different name):

# adduser pactus

Then, add this user to the sudo group:

# adduser pactus sudo

To confirm everything is set up correctly, use the id command:

# id pactus

Running # cat /etc/group | grep "sudo\|root" again should now display the new user as a member of the sudo group.

Step 3: Enable SSH Login for New User

Now, we’ll set up the server to allow connections from the new user.

First, switch the security context to the new account to ensure new folders and files have the appropriate permissions:

# su - pactus

Create the ~/.ssh directory:

$ mkdir ~/.ssh
$ chmod 700 ~/.ssh

The public key used for SSH access as root is stored in the /root/.ssh/authorized_keys file. We will copy this file to ~/.ssh/authorized_keys, change its ownership to the new user, and then delete the original file. This will prevent SSH login as root.

$ sudo cp /root/.ssh/authorized_keys ~/.ssh
$ sudo chown pactus:pactus ~/.ssh/authorized_keys
$ sudo rm /root/.ssh/authorized_keys

Now, let’s ensure that the new user can use SSH to connect to the server. Open a new terminal on your local machine and run:

$ ssh pactus@<ip address>

Step 4: Disable SSH login for root

At this point, we can disable SSH remote login for the root account.

Open sshd_config:

$ sudo vim /etc/ssh/sshd_config

Search for the authentication options and modify the root login permission by setting it to ‘no’ as shown below:

PermitRootLogin no

Also, ensure that SSH password login is disabled:

PasswordAuthentication no

After making these changes, restart the sshd service:

$ sudo systemctl restart sshd

You will no longer be able to log in using the root account.

Next step, lock the root account:

$ sudo passwd --delete --lock root

This command deletes the root password and locks the account. You can test whether root is locked:

$ su -

References:

Secure Secure Shell

Thoughts on Sybil Attacks (2023-04-14)
https://b00f.github.io//blockchain/sybil_attack

A Sybil attack occurs when an attacker or a group of attackers creates multiple fake identities to gain control or influence over a network.

It seems that the best solution to prevent Sybil attacks is to weight votes within the network in a way that cannot be easily tampered with or modified. However, the question remains: how can this be done?

Bitcoin addresses this issue by weighting nodes based on the work they contribute to the consensus protocol, instead of relying solely on identity, such as IP addresses: “If the majority were based on one-IP-address-one-vote, it could be subverted by anyone able to allocate many IPs. Proof-of-work is essentially one-CPU-one-vote.” [1]

Proof-of-Stake blockchains weigh users based on the money in their accounts [2]. This money is usually locked and cannot be transferred.

An alternative approach is to weight nodes based on their reputations. EigenTrust prevents Sybil attacks by using a reputation management algorithm in peer-to-peer networks.

Sybil attacks are challenging to address, and a combination of various methods may be required to mitigate the risks associated with such attacks.


Consensus problem (2022-11-10)
https://b00f.github.io//blockchain/consensus_problem

Special thanks to David Rusu and John Leonard for their suggestions and review.

Consensus in Distributed Systems

Consensus is a critical problem in distributed systems, as it allows multiple parties to reach agreement on a single value or set of values. A protocol that solves the consensus problem must have several key properties, including:

  • Fault tolerance: If all correct parties propose the same value, any correct party must decide on that value. This ensures that a correct party will not accept a value proposed by a faulty or Byzantine node, and it makes the protocol fault-tolerant. Some papers refer to this property as “integrity” or “validity.”

  • Safety: Every correct party must decide on the same value, and there should be no fork in the system. A safety property ensures that something bad does not happen [1]. Some papers refer to this property as “agreement.”

  • Liveness: Eventually, every correct party will decide on some value. A liveness property ensures that something good eventually does happen [1]. Some papers refer to this property as “termination.”

Network synchrony

Network synchrony is an important factor to consider when designing a consensus protocol. In a completely asynchronous model, it is assumed that messages are eventually delivered and processes eventually respond, but no assumption is made about how long this may take. In contrast, partially synchronous models introduce the concept of time and assume known upper bounds on message transmission and response times [1].

FLP impossibility

The FLP [2] impossibility theorem states that it is impossible for a completely asynchronous consensus protocol to tolerate even a single faulty process:

No completely asynchronous consensus protocol can tolerate even a single unannounced process death. We do not consider Byzantine failures, and we assume that the message system is reliable: it delivers all messages correctly and exactly once. [3]

FLP impossibility

This statement, known as the “FLP impossibility,” strongly demonstrates that:

No consensus protocol is totally correct in spite of one fault. [3]

However, this does not mean that consensus is impossible in practice. Instead, it highlights the need for more refined models of distributed computing that reflect more realistic assumptions about processor and communication timings, and for less strict requirements on the solution to the consensus problem.

There are several approaches to solving the consensus problem, each with its own trade-offs and assumptions:

Prioritize Safety

One way to overcome the FLP impossibility is to prioritize safety over liveness. The Paxos consensus protocol was the first protocol to solve the consensus problem by making this assumption. Indeed, in Paxos, there may be situations where the protocol cannot terminate.

Probabilistic Binary Agreements

Some protocols address the FLP impossibility by introducing a random function within the agreement protocol. In this scenario, when the network diverges, each party randomly chooses either zero or one, like flipping a coin. Even though these protocols employ a random function, they ensure safety, meaning that eventually all parties agree on the same value: zero or one. This approach was first proposed by Ben-Or [4] in 1983.

Prioritize Safety

Prioritize Liveness

Bitcoin sidesteps the FLP impossibility by prioritizing liveness over safety. In Bitcoin’s design, more than one valid block may exist at a given height within the network. This means that the protocol allows for the possibility of forks in the blockchain, sacrificing safety to ensure liveness.

Prioritize liveness

Partial Synchrony

The DLS [5] consensus algorithm was the first protocol to introduce a partially synchronous model. It was proposed by Cynthia Dwork, Nancy Lynch, and Larry Stockmeyer in their paper “Consensus in the Presence of Partial Synchrony,” published in 1988.

Most consensus protocols follow this assumption by setting an upper-bound timer that prevents the system from waiting indefinitely in the event of node failure. From an FLP perspective, a synchronous consensus algorithm can provide safety, but with weaker liveness.

Synchronous consensus

CAP theorem

The Strong CAP Principle [6], also known as the CAP theorem or Brewer’s theorem, is widely known in the field of distributed systems. It states that a distributed data store can only guarantee two out of the following three properties:

  • Consistency: Every read receives the most recent write or an error.
  • Availability: Every request receives a (non-error) response, without the guarantee that it contains the most recent write.
  • Partition tolerance: The system continues to operate despite an arbitrary number of messages being dropped (or delayed) by the network between parties.

Consistency and availability

It’s worth noting that the FLP impossibility theorem and the CAP theorem are not the same thing [7], although they both relate to distributed systems. The FLP impossibility theorem deals with the problem of achieving consensus in a distributed system, while the CAP theorem deals with the problem of achieving consistency in a distributed database.

Now let’s look at the possible combination of CAP theorems.

Consistency and Availability (CA)

Protocols like Paxos are consistent and available as long as there is no network partition. If the network becomes partitioned (e.g., half of the network can’t see the other half), the consensus process halts.

Consistency and availability

Availability and Partition tolerance (AP)

The Bitcoin consensus protocol (Nakamoto consensus) prioritizes availability over consistency. This is why Bitcoin is resilient against network partitioning, but it can also result in forks.

Consistency and availability


  1. Chapter on Distributed Computing

  2. FLP stands for Fischer, Lynch, and Paterson, the authors of the paper “Impossibility of Distributed Consensus with One Faulty Process.”

  3. Impossibility of Distributed Consensus with One Faulty Process

  4. Another Advantage of Free Choice: Completely Asynchronous Agreement Protocols

  5. Consensus in the Presence of Partial Synchrony

  6. Strong CAP Principle: Strong Consistency, High Availability, Partition-resilience; pick at most 2. See Harvest, Yield, and Scalable Tolerant Systems.

  7. There is an interesting discussion about this on Quora.

We are not with you (2022-07-19)
https://b00f.github.io//literature/fyodor_dostoevsky_brothers_karamazov

One of the most profound and fascinating pieces of literature is the story of “The Grand Inquisitor.”

“The Grand Inquisitor” is actually a short story within the main story of “The Brothers Karamazov,” a novel by Fyodor Dostoevsky. The story is set in Spain during the Spanish Inquisition, a time when the Catholic Church was the main power in Europe. This passage, which is almost independent of the rest of the book, narrates the return of Jesus, performing miracles in Spain. The Church discovers Him and orders His capture. Later, Jesus is arrested by the Inquisition.

A dialogue happens between Jesus and the Grand Inquisitor. The talk is intense, profound, and philosophical. The Grand Inquisitor threatens to burn Jesus and says to Him:

“We are not with you, but with him (Satan), and that is our secret! We corrected your great work. Our kingdom is built on your name. Why have you come back now to trouble us? Go, don’t come to us anymore.”

Security or Freedom

In my view, the story is a philosophical battle between freedom and security. The Grand Inquisitor argues that humanity cannot handle the burden of true freedom, as granted by Jesus, and that people would rather have security and happiness, even if it means sacrificing that freedom.

If you haven’t read the book, you can watch this short video. Anytime I watch this video, I learn something new.

What makes a blockchain secure? (2022-03-17)
https://b00f.github.io//blockchain/blockchain_security

Consider a blockchain project with thousands of lines of code as a black box. There may be critical bugs or issues hidden within it. Each of them can act as a potential bomb, and if someone finds and triggers them, they can cause serious damage. But how does a blockchain secure itself against these potential threats?

The answer is “Decentralization”. Decentralization makes a blockchain secure, even if it has potential bugs or issues.

If a blockchain is not decentralized, it doesn’t have value, and therefore, no one will care about it. Even if a hacker bothers to find the bomb, it doesn’t matter. No one gets hurt by exploding a bomb in the desert.

But what about a decentralized blockchain? What happens if someone finds one of these “bombs” within the blockchain? There are two real-world scenarios that can happen here:

  1. A white hat hacker finds the issue first and tries to fix it before it causes any trouble, like defusing a bomb before it explodes. The Bitcoin Transaction Malleability and Block Merkle calculation exploits are good examples of how a development team can fix potential problems before they cause any serious damage.

  2. A black hat hacker finds the issue first and exploits it for personal gain or to ruin the blockchain’s reputation. This damage can be destructive. But even in this case, if a blockchain is truly decentralized, the community can recover the blockchain from the damage. It’s hard to imagine a scenario worse than the DAO attack on Ethereum. In the DAO attack, more than 3.64 million ETH, about 5% of the total supply, was hacked. But the community decided to undo this attack by forking the blockchain. It was controversial and funny, but it worked!

In conclusion, decentralization makes a blockchain secure, not the algorithm, but the algorithm is important to make a blockchain decentralized. A poorly implemented or overly complex blockchain can’t be decentralized. It is a chicken and egg story.
