"Trust Yourselves". But trusting nash is also required?

Nash is non-custodial, but I'm not sure I can agree with Nash's "trust yourselves" line.
I'd like to talk about what being truly non-custodial means, and how much we have to trust Nash as an exchange rather than just ourselves.
This topic is meant as a discussion, not as an accusation. I am not accusing the team, simply exploring how safe Nash can be considered.

So the problem I have is: how do we know the code that's live on the Nash exchange is trustworthy? For all we know, some future maintenance could let the team secretly make everyone who deposits to a trading contract deposit their funds straight into Nash's own wallets, and your funds would be gone. Or the smart contract could be changed so that Nash always has access to your funds as well.
Again, this is extremely unlikely to happen, but imagine someone threatens an employee's family to force such a change. Are there requirements for pushing updates? Are multiple people's approvals required?
What about private keys? They could be made visible to Nash itself with some malicious code. (But the same applies to MyEtherWallet's website, I suppose.)

I see there is a GitHub available, but unfortunately I am not a coder myself (beyond some basics), and it would probably be too much to go through anyway.
Even then, we would still need to trust the team to run the GitHub code on the live website, instead of pushing things that aren't on GitHub yet.

Again, I apologise if this reads as baseless FUD or accusations, but I think it's healthy to have a discussion on this topic.

Thanks for reading!


Good questions,

We can’t change a smart contract, we can only do migrations.

Regarding a malicious update: that is certainly one attack vector we needed to take into consideration when designing Nash's systems. There are two scenarios:

  1. Supply-chain attack: a malicious update is pushed for a dependency.
  2. Rogue employee: a team member goes rogue and wants to perform an attack.

(1) is something we deal with at several points. First, we audited all the dependencies we use and pinned them (meaning we freeze the versions); we then try to be extra strict about not updating dependencies, and keep them to a minimum. We also use hash verification in our code, for example:
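A minimal sketch of what hash-pinned dependency verification looks like in general. The archive name, contents, and digest here are made-up illustrations, not Nash's actual dependencies or tooling:

```python
# Sketch of hash pinning: record a digest when a dependency is audited,
# then refuse any archive that no longer matches it.
import hashlib


def sha256_digest(data: bytes) -> str:
    return "sha256:" + hashlib.sha256(data).hexdigest()


# Digest recorded when the dependency was first audited (example data).
audited_archive = b"left-pad-1.3.0 contents"
PINNED = {"left-pad-1.3.0.tar.gz": sha256_digest(audited_archive)}


def verify_dependency(name: str, data: bytes) -> bool:
    """Accept the archive only if it still matches the pinned digest."""
    return PINNED.get(name) == sha256_digest(data)


# An unchanged archive verifies; a tampered one (a malicious update) fails.
assert verify_dependency("left-pad-1.3.0.tar.gz", audited_archive)
assert not verify_dependency("left-pad-1.3.0.tar.gz", b"tampered contents")
```

Package managers offer the same idea natively (e.g. lockfiles with integrity hashes, or pip's hash-checking mode).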

This avoids attacks such as the one that happened to IOTA via a malicious update of MoonPay.

(2) For day-to-day merging of code we require QA and peer review, and all code is first deployed to a preview version (a version that runs in production but behind a firewall, where we can sanity-check it).

But against a critical rogue actor, Nash's systems have a special sauce: the Nash controller, nashctl. nashctl knows the public keys of the people who can create deploys, so every version of Nash goes through a pipeline of tests that outputs a container (think of this as a static copy of the code). That image is then cryptographically signed by the stakeholders and the update is pushed. Different pieces have different levels of requirements; this is effectively like a "multisig" for deployments.
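The "multisig for deployments" idea above can be sketched as a threshold check over signatures on the image digest. This is purely illustrative, not nashctl's implementation: real deployment signing would use public-key signatures, but HMAC tags stand in here so the example stays self-contained, and the signer names and threshold are invented:

```python
# Illustrative threshold-signature check for a container image digest.
# HMAC tags stand in for real public-key signatures.
import hashlib
import hmac

# Keys of the stakeholders allowed to sign deploys (made-up names/keys).
SIGNER_KEYS = {"alice": b"key-a", "bob": b"key-b", "carol": b"key-c"}
THRESHOLD = 2  # distinct signatures required before a deploy is accepted


def sign(signer: str, image_digest: bytes) -> bytes:
    return hmac.new(SIGNER_KEYS[signer], image_digest, hashlib.sha256).digest()


def verify_deploy(image_digest: bytes, signatures: dict) -> bool:
    """Accept only if enough distinct known signers signed this exact image."""
    valid = {
        s for s, sig in signatures.items()
        if s in SIGNER_KEYS and hmac.compare_digest(sig, sign(s, image_digest))
    }
    return len(valid) >= THRESHOLD


image = hashlib.sha256(b"container image bytes").digest()
sigs = {"alice": sign("alice", image), "bob": sign("bob", image)}
assert verify_deploy(image, sigs)                           # threshold met
assert not verify_deploy(image, {"alice": sigs["alice"]})   # one signer is not enough
```

The key property is that no single person holds enough signing power to push a deploy alone, which is what defeats the coerced-employee scenario discussed below.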

This is how it usually looks when people try to get code into "master", with the required review:

Here is the testing pipeline:

And here you can see nashctl verifying the signatures, deploying a payload and warning us on Slack:



"if someone threatens an employee's family in order to make this"

That would not work, because this person would not be able to push the code without all the signatures that nashctl needs, and those people are spread across the globe. I, for example, was never able to sign deploys, which is how it should be, since I am quite a public person around Nash.


Thank you for the reply; that must have taken quite some time to write!
Good to be reassured of the security in place before an update can be pushed.

Is there a way for Nash users to verify that the code that's been made public is in fact what runs live on the website? In that case, people could check GitHub when a lot of funds are involved, to make sure it all works as claimed.

The way it works on the internet is that the certificate of a domain is verified by the browser, to guarantee that the server deploying the frontend is the same one that owns the domain. To make that more secure we use HSTS with a six-month binding. This means someone who managed to get a certificate for Nash would need to wait six months to deploy the attack; otherwise the browser would scream that it is the wrong certificate.
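For readers unfamiliar with HSTS: the binding is just a `max-age` directive (in seconds) on the `Strict-Transport-Security` response header, which the browser remembers. A small sketch of parsing and checking it (the header value is an example, not Nash's actual configuration):

```python
# Parse a Strict-Transport-Security header and check its max-age
# against a roughly six-month binding.
SIX_MONTHS = 60 * 60 * 24 * 182  # about six months, in seconds


def hsts_max_age(header: str) -> int:
    """Return the max-age directive of an HSTS header, or 0 if absent."""
    for directive in header.split(";"):
        name, _, value = directive.strip().partition("=")
        if name.lower() == "max-age":
            return int(value)
    return 0


header = "max-age=15768000; includeSubDomains"
assert hsts_max_age(header) >= SIX_MONTHS
```

Once a browser has seen such a header over a valid HTTPS connection, it refuses plain-HTTP access to the domain for the whole `max-age` window.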

We have, however, started work on and discussed internally another innovation (yes, so many): an untrusted client, a client that knows signatures and could verify the validity of the certificate under a web of trust. To do so we would need to leverage a browser extension and poke at the security of the currently open tab. Mozilla has experimental support for this in webRequest. When researching it I filed a bug in Chromium (the base for Chrome) and followed it, but since we moved to other solutions that has been on ice :smiley:

PS: Maybe in a year or two we will be able to ship a signed desktop client based on the web one. For now we are doing frequent updates, so a desktop client would be bad for providing a good service to users.


Sounds quite good. Thanks again for the effort to explain. In this sense, "Trust Yourselves" makes just as much sense at Nash as at MyEtherWallet.
A desktop client later on would be even better, as the code could then be inspected directly (if open-sourced). But I don't think people will have any trust issues with the web client.

Have a good weekend!
