The off-chain matching engine really fascinates me and doesn’t get enough appreciation imo. Its deterministic nature, plus having to deal with state channels across multiple blockchains, smart contracts, signing with elliptic-curve cryptography and so on, makes these protocols incredibly complex.
But recently this made me wonder: how exactly will this scale to handle the needed throughput and latency?
At the peak of the last bull run, Binance was handling on average 50k TPS and probably upwards of 100-200k during the most active hours. If we are to onboard millions or even billions of people, can Nash handle it?
Is there some form of horizontal scaling through sharding in mind?
Such complex protocols, where consensus has to be reached across multiple nodes in a distributed network, will always have lower throughput than a single-node exchange, no matter how you slice it.
I saw some numbers thrown around, like 100k TPS and 10 ms latency on trades. Has this performance been reached?
Even a staggering 100k TPS wouldn’t be enough if we’re to do billions in volume.
I found a technical breakdown vid from Crypto Tuber and it was worth the watch.
It also touches on the TPS of the matching engine; maybe it clears some things up.
In a year or two, when our ME launch architecture has been completely replaced by something better, we will discuss its technical details and properties, probably with a paper at a technical conference.
But for now, our goal is to service the demand for the next 6 months.
If we use peak market activity to estimate performance requirements, we get from 0.005 to 0.01 trades per second per user. That means that to support our volume target for the next 6 months we should be able to handle about 5,000 TPS (trades per second). Our current architecture can do that.
To support 1 billion users we would need to handle 5-10M TPS. Our current architecture cannot do that, but in the process of building the current system we identified several opportunities for improvement. So as soon as the current system is deployed we will start thinking about the next iteration; the protocol is not a problem, but we will swap the implementation. The good news is that we already planned to do that as part of decentralization. That is absolutely normal in engineering: by building something you get better each time.
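To make the arithmetic above explicit, here is a minimal back-of-the-envelope sketch in Python. The per-user trade rates (0.005-0.01 trades per second per active user) and the 1-billion-user figure come from the post itself; the user count behind the 5,000 TPS target is inferred from those rates, and none of this reflects Nash's actual internal capacity model.

```python
# Back-of-the-envelope TPS estimate based on the figures quoted above.
# Assumption: 0.005-0.01 is the peak trade rate per active user, in trades/second.

TRADES_PER_USER_PER_SEC = (0.005, 0.01)  # low and high estimates at peak activity

def required_tps(users: int) -> tuple[float, float]:
    """Return (low, high) TPS needed to serve `users` concurrently active users."""
    low, high = TRADES_PER_USER_PER_SEC
    return users * low, users * high

# ~5,000 TPS (the 6-month target) corresponds to roughly 0.5-1M active users.
print(required_tps(1_000_000))      # (5000.0, 10000.0)

# 1 billion users would require 5-10M TPS, beyond the launch architecture.
print(required_tps(1_000_000_000))  # (5000000.0, 10000000.0)
```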
Still very relevant. I will say our system has improved greatly since that message; we are very far from its capacity limit. We have improved performance so much that we were able to greatly reduce the size of the machines running it and save some $ on the cloud bill.