Order Matching System

The off-chain matching engine really fascinates me and doesn’t get enough appreciation, imo. Its deterministic nature, plus having to deal with state channels across multiple blockchains, smart contracts, signing with elliptic-curve cryptography and so on, makes these protocols incredibly complex.
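
To unpack the "deterministic" part: given the same sequence of incoming orders, the engine must always produce exactly the same fills. Here is a minimal, hypothetical Python sketch of deterministic price-time priority matching; it is only an illustration of the general idea, not Nash's actual engine.

```python
import heapq
from itertools import count

seq = count()  # arrival sequence number; ties at equal price break by time (FIFO)

class OrderBook:
    """Toy limit-order book with deterministic price-time priority matching."""

    def __init__(self):
        self.bids = []  # min-heap of (-price, seq, [qty]); best bid on top
        self.asks = []  # min-heap of (price, seq, [qty]); best ask on top

    def submit(self, side, price, qty):
        """Match an incoming order against the book; rest any remainder."""
        fills = []
        book = self.asks if side == "buy" else self.bids
        while book and qty > 0:
            key, _, rest = book[0]
            best_price = key if side == "buy" else -key
            if (side == "buy" and price < best_price) or \
               (side == "sell" and price > best_price):
                break  # incoming order no longer crosses the spread
            traded = min(qty, rest[0])
            fills.append((best_price, traded))  # fills at the resting price
            qty -= traded
            rest[0] -= traded
            if rest[0] == 0:
                heapq.heappop(book)  # resting order fully consumed
        if qty > 0:  # rest the remainder on the book
            key = -price if side == "buy" else price
            heapq.heappush(self.bids if side == "buy" else self.asks,
                           (key, next(seq), [qty]))
        return fills

book = OrderBook()
book.submit("sell", 100.0, 5)
print(book.submit("buy", 101.0, 3))  # [(100.0, 3)]
```

Replaying the same order stream always yields the same fills, which is what lets independent nodes verify the engine's output.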

But recently this made me wonder: how exactly will this scale to handle the needed throughput and latency?

At the peak of the last bull run, Binance was handling 50k TPS on average, and probably upwards of 100-200k during the most active hours. If we are to onboard millions or even billions of people, will Nash handle it?
Is there some form of horizontal scaling through sharding in mind?

Protocols this complex, where consensus has to be reached across multiple nodes in a distributed network, will always have lower throughput than single-node exchanges, no matter how you slice it.
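
To make that concrete with a rough, hypothetical bound: if each batch of trades must clear at least one consensus round-trip between nodes before the next batch can be sequenced on the same partition, throughput is capped by round-trip time times batch size. All numbers below are assumptions for illustration, not measurements of any real system.

```python
# Hypothetical back-of-envelope bound for one consensus partition.
# Assumes one network round-trip (RTT) per consensus round and one
# batch of trades sequenced per round; real protocols pipeline rounds,
# so treat this as intuition, not a model of a specific system.
RTT_MS = 10          # assumed inter-node round-trip time (ms)
BATCH_SIZE = 1_000   # assumed trades sequenced per round

rounds_per_second = 1_000 / RTT_MS
max_tps = rounds_per_second * BATCH_SIZE
print(max_tps)  # 100000.0 -- a single-node engine pays no RTT at all
```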

I saw some numbers thrown around, like 100k TPS and 10ms latency on trades. Has this performance been reached?

Even a staggering 100k TPS wouldn’t be enough if we’re to do billions in volume.

Would appreciate a response from the team! :nash_token: :v:

15 Likes

Good question.
Looking forward to the response from the team about balancing TPS and decentralization.

I found a technical breakdown video from Crypto Tuber and it was worth the watch.
It also covers the matching engine's TPS, so maybe it clears some things up.

2 Likes

Surprisingly good video ^

In a year or two, when our ME launch architecture has been completely replaced by something better, we will discuss its technical details and properties, probably in a paper at a technical conference.

But for now, our goal is to service the demand for the next 6mo.

5 Likes

Many people will read what you just said in either a positive or a negative light.

Negative:

  1. the ME architecture will be outgrown by developing technology, making it obsolete.
  2. that Nash is not prepared for huge volume.

Positive:

  1. Nash is the company that will make its own ME obsolete by creating better technology.

Which is it?

1 Like

All of those.

If we use the peak market to estimate performance requirements, we get roughly 0.005 to 0.01 trades per second per user. That means to support our volume target in 6mo we should be able to handle about 5,000 TPS (trades per second). Our current architecture can do that.

To support 1 billion users we would need to handle 5-10M TPS. Our current architecture cannot do that, but in the process of building our current system we identified several opportunities for improvement. So as soon as the current system is deployed we will start thinking about the next iteration; the protocol is not a problem, but we will swap the implementation. The good news is that we already planned to do that along with decentralization. That is absolutely normal in engineering: by building something, you become better each time.
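
To spell out the arithmetic (the per-user rate is from this post; the implied user counts are my own inference from it):

```python
# Back-of-envelope capacity math using the figures in the post above:
# roughly 0.005 to 0.01 trades per second per user at market peak.
PER_USER_TPS = (0.005, 0.01)

def required_tps(users):
    """Return the (low, high) TPS needed to serve `users` at peak load."""
    return users * PER_USER_TPS[0], users * PER_USER_TPS[1]

print(required_tps(1_000_000))      # (5000.0, 10000.0)  -> the ~5k TPS target
print(required_tps(1_000_000_000))  # (5000000.0, 10000000.0) -> 5-10M TPS
```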

25 Likes

Interesting. I have worked with a few major futures exchanges, and the current NEX TPS is either superior to or in line with those architectures.

2 Likes

Hello Fabio, are these numbers still accurate for reference? Could you also elaborate on what the 6mo volume target is? Thank you, good sir.

1 Like

@canesin

Still very relevant. I will say our system has improved greatly since that message; we are very far from its capacity limit. We have improved performance so much that we were able to greatly reduce the size of the machines running it and save some money on the cloud bill.