
New Generation Blockchain Does Shard

Michael Holdmann
6 min read · Oct 4, 2022


Addressing the NP-Complete problem in all current blockchains

By: David Beberman, Co-Founder & CTO, Prasaga Foundation

Why blockchain scaling is possible with sharding, even though it is an NP-complete problem in general

The P=NP question is often illustrated with problems like the “traveling salesman” problem. In general terms, an NP-complete problem is one for which the only known way to find a solution is exhaustive search. Further, it has been shown that all NP-complete problems are equivalent to each other: if a polynomial-time solution is ever found for one of them, then all of them have one. Since much of cryptography depends on the assumption that P≠NP, such a solution would be a huge real-world problem. One might consider at least part of computer science to be the practice of avoiding NP-complete problems by redefining the problems. Blockchain scaling via sharding is no exception.

There are a couple of great articles showing that, in general, blockchain scaling via sharding is NP-complete: “The Consequences of Scalable Blockchains” and “Sharding Is Also NP-Complete”, both by DataFinnovation on Medium. These articles are completely correct, given the assumptions of the problem as described in them. Following the computer-science practice of avoiding NP-complete problems, if the premises are changed, then the result can avoid NP-completeness for the general blockchain use case. The author of the articles essentially states this several times.

As SagaChain’s main design goal is to provide blockchain scalability via sharding, here are the differences in assumptions and thus the expected differences in results.

1: Here is a paragraph from one of the articles: “We are going to call something “scalable” when it can process more transactions per unit time than a single node on the network. If you always need to wait for some designated leader/verifier/miner/whatever to finalize transactions then you’re never going to run faster, in the worst-case, than that node.”

This is a true statement. We differ on the interpretation of what it means. If there is a single blockchain with a designated leader node, then it is obviously true that the chain can only run as fast as that leader node. This says nothing, however, about the case where there are multiple blockchains, each with a separate leader. Therefore, given the above statement, the question becomes: how do we run multiple leaders on multiple blockchains in parallel? If that can be accomplished, then the articles' earlier definition of scalability, processing transactions faster than a single node on the blockchain, can be satisfied.

2: As a comparison, we can view a multicore CPU as roughly equivalent to a multi-sharded blockchain. If we ignore the consensus overhead for the moment and consider each shard as having a single leader, this reduces to a distributed processing model. Although both the multicore CPU and the distributed processing model run into NP-complete problems for some workloads, there are a large number of problems that do scale and are not NP-complete. If this were not true, we wouldn't all be carrying around multicore CPUs in our pockets (i.e., smartphones).

The solution to scalability for distributed processing and multicore CPUs relies on data separation. If all the data must be processed serially at all times, then distributed processing and multicore CPUs will fail to deliver scalability. This can also be viewed as the well-known result of Amdahl's law.
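
To make the Amdahl's law point concrete, here is a minimal sketch (illustrative only, not from the referenced articles): if a fraction of the work must remain serial, the speedup from spreading the rest across shards is bounded no matter how many shards are added.

```python
def amdahl_speedup(serial_fraction: float, n_workers: int) -> float:
    """Maximum speedup when `serial_fraction` of the work cannot be
    parallelized and the rest is spread evenly across `n_workers`
    (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_workers)

# If 5% of transaction processing must be serialized (e.g., cross-shard
# rendezvous), 16 shards give roughly 9x, and no number of shards exceeds 20x.
for n in (1, 4, 16, 64, 1_000_000):
    print(n, round(amdahl_speedup(0.05, n), 2))
```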

The scalability problem, ignoring consensus for the moment, can be defined as establishing data separation that allows for parallel, independent processing. This does not preclude periodic or aperiodic rendezvous points, where the results of the data processing must be serialized at that point in time, and only at that point.

3: The articles include an assumption in the problem statement that all shards can communicate and share data instantaneously, in order to show that even with this assumption the problem is NP-complete. Although in the real world such an assumption is obviously impossible, there is a reasonable approximation of it. If the rate of communication between the shards, with respect to certain data, is much greater than the rate at which blocks are processed on each shard, then from the viewpoint of block processing such communication can be considered effectively instantaneous. Thus this assumption is neither a limiting point nor a significant issue, and the articles do not consider it to be either. Given this description, the assumption can be carried over into a non-NP-complete model.
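
As a hedged sketch of that “logically instantaneous” approximation (the function name and threshold below are illustrative, not SagaChain parameters): cross-shard messages can be treated as instantaneous as long as their latency is a negligible fraction of the block interval.

```python
def is_logically_instantaneous(cross_shard_latency_ms: float,
                               block_interval_ms: float,
                               max_ratio: float = 0.01) -> bool:
    """Treat cross-shard communication as effectively instantaneous when its
    latency is a negligible fraction of the time spent producing a block.
    `max_ratio` is an illustrative threshold, not a protocol constant."""
    return cross_shard_latency_ms / block_interval_ms <= max_ratio

# Example: 20 ms of gossip between shards vs. a 4,000 ms block interval.
print(is_logically_instantaneous(20, 4_000))   # True  (0.5% of a block)
print(is_logically_instantaneous(500, 4_000))  # False (12.5% of a block)
```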

4: A paragraph in the articles states: “The property we are going to look at is: reshuffling lists of ok-on-their-own transactions with a no-negative-balance requirement into a single merged list where the maximum number of possible transactions go through. And doing this reordering “efficiently.” We take “efficiently” to mean “in polynomial time.””

Again, although this is true, it is not the property we need for blockchain scalability via sharding. We do not need to merge the transactions into a single list. What is needed instead is to show that at any given point in time, where time is defined by the processing of a block, data processed in a transaction on one shard is not accessed or processed on another shard at the same time. That is, any given piece of data is processed serially across all the shards, but multiple unrelated pieces of data can be processed in parallel. When there is a dependency between various pieces of data, some form of rendezvous must be performed.
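
As an illustration of that property (the data structures here are hypothetical, not SagaChain's actual model): transactions that touch disjoint sets of accounts can go into the same parallel batch, while transactions sharing an account are serialized relative to each other. Note that this greedy grouping does not attempt the optimal merged reordering that the articles prove NP-complete; any valid conflict-free grouping is enough.

```python
def parallel_batches(transactions):
    """Greedily group (tx_id, accounts_touched) pairs into batches whose
    members touch disjoint accounts. Each batch can run in parallel across
    shards; the batches themselves run serially (the rendezvous points)."""
    batches = []  # list of (tx_ids, accounts_touched_so_far)
    for tx_id, accounts in transactions:
        for tx_ids, touched in batches:
            if touched.isdisjoint(accounts):   # no shared data: safe in parallel
                tx_ids.append(tx_id)
                touched |= accounts
                break
        else:                                  # conflicts with every batch: new batch
            batches.append(([tx_id], set(accounts)))
    return [tx_ids for tx_ids, _ in batches]

txs = [("t1", {"alice", "bob"}),
       ("t2", {"carol", "dave"}),   # disjoint from t1 -> same batch
       ("t3", {"bob", "erin"})]     # shares "bob" with t1 -> next batch
print(parallel_batches(txs))        # [['t1', 't2'], ['t3']]
```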

Given the two assumptions, that communication between shards is logically instantaneous and that various pieces of data are independent over a given period of time, the leaders of multiple shards can process data in parallel, which will, by definition, exceed the transaction processing of any single leader. This meets the articles' definition of scalability while avoiding NP-completeness.

5: Now let's relax the assumption that a shard consists only of a leader, and define a shard as consisting of a leader plus a set of nodes that reach consensus. The definition of transaction processing speed is expanded to include the block consensus time. The question then becomes, equivalently: can multiple shards process blocks faster than a single shard? Scalability is then defined not as a blockchain processing transactions faster than a single node, but as a sharded blockchain processing blocks faster than a single chain. If the above assumptions hold, the result meets this definition of scalability.

6: Let's examine the data independence question. Looking at the fundamentals of a blockchain transaction in general, ignoring current implementations, a transaction involves two or more parties. Assuming a large number of parties and a large number of transactions, whenever two or more transactions have no party's data in common, those transactions can be processed in parallel. Given that transactions can be expected to be randomly distributed across many parties, data independence with respect to transactions will increase, and thus so will the opportunity for parallel execution.

Given a random distribution of transactions over parties, an independence that increases with the number of parties, a sharded blockchain, and the assumption of instantaneous communication between shards with respect to block production, the definition of scalability is reached without contradicting the NP-completeness results.
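
A quick back-of-the-envelope simulation (purely illustrative, assuming uniformly random two-party transactions) of why independence grows with the number of parties: the chance that two random transactions share a party falls roughly in proportion to the party count.

```python
import random

def conflict_rate(num_parties: int, num_txs: int = 2_000, seed: int = 0) -> float:
    """Fraction of transaction pairs sharing at least one party, assuming each
    transaction involves two parties chosen uniformly at random."""
    rng = random.Random(seed)
    txs = [frozenset(rng.sample(range(num_parties), 2)) for _ in range(num_txs)]
    pairs = conflicts = 0
    for i in range(num_txs):
        for j in range(i + 1, num_txs):
            pairs += 1
            conflicts += not txs[i].isdisjoint(txs[j])
    return conflicts / pairs

# More parties -> fewer conflicting pairs -> more transactions that can be
# scheduled onto shards in parallel.
for parties in (100, 1_000, 10_000):
    print(parties, round(conflict_rate(parties), 4))
```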

7: The outstanding question is how to design the data model so that it increases the opportunity for data independence with respect to transactions and parties, and so that this independence information can be communicated between the shards. This is a hard problem, but it is not an NP-complete one.
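
One common way to carve out that kind of data separation (shown purely as a hypothetical sketch; this is not a description of SagaChain's actual mechanism) is to make each party's state owned by exactly one shard, for example by hashing the account identifier. A transaction whose accounts all live on one shard can then be processed by that shard's leader alone, and only transactions spanning shards need a rendezvous.

```python
import hashlib

NUM_SHARDS = 8  # illustrative value, not a SagaChain parameter

def shard_of(account_id: str) -> int:
    """Deterministically map an account to the single shard that owns its state."""
    digest = hashlib.sha256(account_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

def needs_cross_shard_rendezvous(accounts: set[str]) -> bool:
    """True when a transaction touches accounts owned by more than one shard."""
    return len({shard_of(a) for a in accounts}) > 1

print(shard_of("alice"), shard_of("bob"))
print(needs_cross_shard_rendezvous({"alice", "bob"}))
```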

SagaChain’s model is entirely focused on solving this problem given the assumptions as described.


Michael Holdmann

Founder & CEO at prasaga.com, a Foundation building a Decentralized GlobalOS and a Single, World Class Tree.