WalletWasabi: Bitcoin Core integration summary

Created on 5 Mar 2020 · 8 comments · Source: zkSNACKs/WalletWasabi

This issue summarizes the work and progress being done on Bitcoin Core integration.

It was decided to hold back on Bitcoin Core integration for now, for these reasons:

  • Filters are incompatible - Wasabi uses compressed ScriptPubKeys. (This could be solved by generating both filter types on the wallet side, but it means we would have to maintain the old filters on the coordinator for old clients; see the sketch after this list.)
  • The basic filters are too big: syncing against a local Bitcoin Core instance (from segwit activation) ends up with 5 GB of filter data (and more than twice the time it would take to sync with the coordinator).
  • Upgrading the coordinator has a bad risk/reward ratio (it works fine for now; consider an upgrade in a few years).
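
For context on the first bullet, here is a minimal sketch of the two filter types, using NBitcoin's GolombRiceFilterBuilder (which Wasabi already depends on). The method names are from my reading of the library, and the P2WPKH narrowing is simplified; the real Wasabi index also covers spent bech32 prevouts, which is what the Bech32UtxoSet mentioned below tracks.

```csharp
using System.Collections.Generic;
using System.Linq;
using NBitcoin;

public static class FilterSketch
{
	// BIP 158 "basic" style: every output scriptPubKey, raw bytes.
	// (The real BIP also indexes the scripts of spent prevouts; elided here.)
	public static GolombRiceFilter BuildBasicStyleFilter(Block block) =>
		Build(block, AllOutputScripts(block).Select(s => s.ToBytes()));

	// Wasabi style: only native segwit scripts, compressed encoding.
	// (Simplified: the real index also tracks spent bech32 UTXOs.)
	public static GolombRiceFilter BuildWasabiStyleFilter(Block block) =>
		Build(block, AllOutputScripts(block)
			.Where(s => s.IsScriptType(ScriptType.P2WPKH))
			.Select(s => s.ToCompressedBytes()));

	private static IEnumerable<Script> AllOutputScripts(Block block) =>
		block.Transactions.SelectMany(tx => tx.Outputs).Select(o => o.ScriptPubKey);

	private static GolombRiceFilter Build(Block block, IEnumerable<byte[]> entries) =>
		new GolombRiceFilterBuilder()
			.SetKey(block.GetHash())
			.SetP(20)       // Wasabi's parameters; BIP 158 basic uses P=19, M=784931
			.SetM(1 << 20)
			.AddEntries(entries)
			.Build();
}
```

The point being: the two filters differ both in which scripts they index and in how each entry is encoded (raw vs compressed ScriptPubKey bytes), so serving old clients means building and storing both.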

Coordinator upgrade considerations and possible solutions:

  • Mandatory upgrade - a bad idea: too disruptive, with no justification
  • Coordinator versioning (keep the old coordinator filters and also serve new filters) - maintenance overhead
  • Maintain our own version of Bitcoin Core with the filters we need - maintenance overhead

Discussions and PRs on the Bitcoin Core repo:
https://github.com/bitcoin/bitcoin/issues/18222
https://github.com/bitcoin/bitcoin/pull/18223

Syncing filters with a local instance of Bitcoin Core:
https://github.com/zkSNACKs/WalletWasabi/pull/3207

Labels: question, research

All 8 comments

> and more than twice the time it would take to sync with the coordinator

I wonder why it takes longer. Is it because the same false-positive rate is set for the filter, but since the coordinator filters have fewer entries, they result in fewer false-positive block downloads?

And for the general idea. Could there be an option for users to select:

[ ] Use coordinator filters (network download, might be faster, SPV)
[ ] Use BIP 158 filters (download from local Bitcoin Core, might be slower, no coordinator trust)

@MarcoFalke I have an intuition that the slower sync is due to two reasons:

  • In general, a larger data size to process: 250 MB vs 5 GB.
  • Higher false positive rate, thus more blocks downloaded, thus more data to process.

But I have been surprised by unintuitive nuances of block filters, so I might be completely off here.
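
To put rough numbers on that intuition, a back-of-envelope sketch (all figures are placeholders, not measurements):

```csharp
// Rough, illustrative arithmetic only; the numbers are placeholders.
// A Golomb-coded filter with parameter M matches a random query with
// probability ~1/M, so every wallet script tested against every block's
// filter occasionally forces a spurious full-block download.
double m = 784_931;          // BIP 158 basic filter M
int blocks = 140_000;        // roughly, from segwit activation to early 2020
int walletScripts = 1_000;   // hypothetical wallet with 1k tracked scripts
double extraBlocks = blocks * walletScripts / m;
System.Console.WriteLine($"~{extraBlocks:F0} false-positive block downloads");
// ~178 extra blocks at ~1-2 MB each, on top of parsing 5 GB of filters.
```

If these placeholders are anywhere near right, a few hundred spurious block downloads is small next to parsing 5 GB of filters, which would suggest data size dominates over false positives — but only a real benchmark can tell.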

I'll properly address this issue later; for now I'm just leaving something relevant here.

Case study: against options

Too often, designers faced with a security decision bow out, and instead leave
the choice as an option: protocol designers leave implementors to decide, and
implementors leave the choice for their users. This approach can be bad for
security systems, and is nearly always bad for privacy systems.

  • Extra options often delegate security decisions to those least able to understand what they imply.
  • Options make code harder to audit by increasing the volume of code, by
    increasing the number of possible configurations exponentially, and by guaranteeing that non-default configurations will receive little testing in the field.

https://www.researchgate.net/publication/228348285_Anonymity_loves_company_Usability_and_the_network_effect

@MarcoFalke I didn't benchmark the different parts of the sync process, so it's hard to answer that; perhaps it's not a bad idea to try to find out which parts of the sync are slower.

@MaxHillebrand you're probably right: more blocks are likely fetched as a result of the higher false-positive rate.

@nopara73 Thanks for the link on why giving that option is bad.

Going back to the specialized filter idea: In the long term I don't see how a special filter type is going to help. Assuming that something like taproot is going to happen, there won't be a way to distinguish scriptPubKeys by type any more (p2sh and p2pkh will look the same). Also assuming that long term more than 50% of all users/txs are going to use the new format, the savings from picking a blockfilter that specializes by omitting legacy scriptPubKeys will be less pronounced. So users are initially promised fast rescan and low bandwidth/low disk usage, but as the ecosystem adopts the new scriptPubKey format, they will see everything slow down even when they create a completely fresh wallet. It might be best if the performance they see is not affected by the behavior of other blockchain participants.

And to some extent, giving wallet designers the option to choose between the all-encompassing filter and the just-taproot-or-whatever filter is an option that they maybe shouldn't face according to your linked study.

@MarcoFalke It's a gamble. The idea is:

Whoever was going to change to segwit already did, and most changed to wrapped segwit instead of the native one. That slowed down bech32 adoption greatly.
On the other hand, there are segwit v2 scripts coming, which will fully obsolete segwit v1.

Thus the gamble is that the tech will evolve faster than segwit v1 grows. It was a successful gamble for the past 1.5 years; the future will tell if it works for long enough.

On the other hand, I think we'll change to segwit v2 soon enough, so we can play this game again for as long as it works, or until another script type is figured out.

Since segwit v1 adoption has mostly run its course and segwit v2 is coming, I think segwit v1 filters will never grow large enough to ruin the light-wallet feel.

Even if something is miscalculated here, there's still the sync from wallet creation idea.

@dangershony @lontivero

@luke-jr noted in https://github.com/bitcoin/bitcoin/pull/18223#issuecomment-595339831 that we can have the value and script for the inputs from getblock RPC verbosity 3! With this in mind I don't even want to delegate the filter creation to Core, as we keep more control over it in the future and, more importantly, we can do this in a backwards-compatible way without replacing the filters on every client's computer.
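
As a rough illustration, here is a hypothetical extension method in the spirit of the proposed GetKnotsBlockAsync. It assumes NBitcoin's generic SendCommandAsync, and the prevout field layout is an assumption inferred from the linked PR discussion; the exact JSON shape may differ between Knots versions.

```csharp
using System.Threading.Tasks;
using NBitcoin;
using NBitcoin.DataEncoders;
using NBitcoin.RPC;
using Newtonsoft.Json.Linq;

public static class RpcClientExtensionsSketch
{
	// Hypothetical counterpart of the proposed GetKnotsBlockAsync.
	public static async Task<JObject> GetVerboseBlockAsync(this RPCClient rpc, uint256 blockHash)
	{
		// Verbosity 3 is the Knots feature discussed above;
		// stock Core only supports verbosity 0-2 here.
		var response = await rpc.SendCommandAsync("getblock", blockHash.ToString(), 3);
		var json = (JObject)response.Result;
		foreach (var tx in json["tx"])
		{
			foreach (var vin in tx["vin"])
			{
				var prevOut = vin["prevout"]; // assumed field name; absent for the coinbase input
				if (prevOut is null)
				{
					continue;
				}
				// The spent output's script and value arrive inline, so no extra
				// gettxout/getrawtransaction round trips are needed to learn them.
				var script = new Script(Encoders.Hex.DecodeData(prevOut["scriptPubKey"]["hex"].Value<string>()));
				var value = Money.Coins(prevOut["value"].Value<decimal>());
			}
		}
		return json;
	}
}
```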

I tested this feature of Knots and bootstrapped it for Wasabi: https://github.com/zkSNACKs/WalletWasabi/pull/3212

It works properly. @dangershony would you mind taking this forward and implementing the getblock call in Wasabi, with unit tests?

My roadmap proposal is:

  • [x] Port Knots to Wasabi
  • [ ] Implement getblock verbosity 3 in RPCClientExtensions.cs; something like GetKnotsBlockAsync (see the sketch above). With unit tests.
  • [ ] Greatly simplify our current index builder service with the new information. (Remove the Bech32UtxoSet class.)
  • [ ] Build the filters and compare it to the current ones.
  • [ ] Merge, deploy Knots to our backend. @luke-jr actually has a maintained PPA, so we don't even have to find an alternative to the abandoned Core PPA.
  • [ ] Reimplement the index builder from the bottom up with a shitload of unit tests (preferably utilizing TDD) to make sure it's an IHostedService, so we can just push it into our HostedServices class, which will handle its start and stop on both the backend and the client side (see the skeleton after this list).
  • [ ] Replace our old index builder with the new one.
  • [ ] Test if it creates the same filters.
  • [ ] Add it to the client side.
  • [ ] Make sure Core still works even though it cannot create the filters. (So users who run their own Core, rather than the Core shipped with Wasabi, can still use it as before.)
  • [ ] Implement the remaining things that Wasabi cannot work without and that the backend currently provides. (I only have the price data in mind, but there may be other things that block the wallet from working if we turn off the backend.)
  • [ ] Look at other parts of the code where this getblock input script richness can be used to remove unnecessary RPC requests.
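
For the IHostedService item, here is a bare-bones skeleton of the shape I mean; all names are placeholders, not the eventual design:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public class IndexBuilderServiceSketch : IHostedService
{
	private CancellationTokenSource _stopping;
	private Task _loop;

	public Task StartAsync(CancellationToken cancellationToken)
	{
		// Kick off the long-running build loop; HostedServices calls this
		// on both the backend and the client side.
		_stopping = new CancellationTokenSource();
		_loop = Task.Run(() => BuildIndexLoopAsync(_stopping.Token));
		return Task.CompletedTask;
	}

	public async Task StopAsync(CancellationToken cancellationToken)
	{
		_stopping?.Cancel();
		if (_loop != null)
		{
			await _loop.ConfigureAwait(false); // loop exits promptly once cancelled
		}
	}

	private async Task BuildIndexLoopAsync(CancellationToken token)
	{
		try
		{
			while (!token.IsCancellationRequested)
			{
				// Fetch the next block over RPC, build its filter, persist it (elided).
				await Task.Delay(TimeSpan.FromSeconds(10), token).ConfigureAwait(false);
			}
		}
		catch (OperationCanceledException)
		{
			// Normal shutdown path.
		}
	}
}
```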

Just for the record, I believe here @nopara73 is talking about SegWit v0 [native segwit, as Wasabi is using now, not v1] and SegWit v1 [taproot, which Wasabi will support ASAP, not v2].
