This idea was originally proposed by @muratyasin:
Creating and maintaining a Patricia tree for storing the state of balances and putting its root hash in the block header would also solve the trust problem of light clients. Currently, a light client has to trust the answers coming from the API server. However, if an API server is compromised, it can return incorrect answers to a targeted set of light clients. To the best of my knowledge, there is currently no way for light clients to verify the answers coming from the API server. If the answers are accompanied by a Merkle state proof, then light clients that only store the block headers can verify them.
Link: https://github.com/neo-project/neo/issues/302#issuecomment-412445739
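For illustration only, here is a minimal sketch of the verification step the proposal describes, assuming a plain binary Merkle proof (Neo's actual structure would be a Merkle Patricia proof, and the function names here are made up):

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_state_proof(leaf: bytes, proof: list[tuple[str, bytes]], state_root: bytes) -> bool:
    """Fold the leaf hash with each sibling hash in the proof ('L' or 'R'
    marks which side the sibling sits on) and compare the result against
    the state root the light client already holds from a block header."""
    node = sha256(leaf)
    for side, sibling in proof:
        node = sha256(sibling + node) if side == "L" else sha256(node + sibling)
    return node == state_root
```

In this hypothetical flow, the light client asks the API server for a balance entry plus such a proof, and accepts the answer only if it hashes up to the state root in a header it already stores.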
This is a very good idea, and necessary to guarantee trust on storage data.
+1
Can someone give a practical example of how a light client would use this to solve the trust issue? I would be interested in implementing it.
I guess it doesn't help light clients, because they still need to trust the RPC server. But the RPC node itself will be able to know for sure that its own storage is fully correct.
One idea here, Erik, is to perhaps allow the hash computation of storage for each contract too... so it could be possible to build "light clients" with the storage of a single application, or for a company to know exactly the hash of its own data.
Actually, you can find a usage scenario in https://arxiv.org/pdf/1806.03914.pdf.
If the light client has to take an irreversible action based on the answer coming from a full node, it has to verify the answer and cannot just trust the RPC server.
For light clients to reproduce the execution they will need to access a past version of the storage, which is not available at the top block height... Vitor and I are working on a prototype to offer such a service, and I guess wingy also has some ongoing work on it. But again, light clients will need to trust this storage (unless they try several different servers at random), because our storage server could fake specific entries, and a light client can only be fully sure of the storage hash if it has the full storage, like any heavy node. If the top hash tree were hashed against the storage of each contract, a "light node" with a specific contract's storage would be able to validate its contract without the whole storage.
That would be awesome, and would also help ensure consistent contract storage between versions/nodes. At the moment we have to randomly resync, not knowing which node's view of contract storage is correct, e.g. in https://github.com/neo-project/neo-cli/issues/191
One idea here, Erik, is to perhaps allow the hash computation of storage for each contract too... so it could be possible to build "light clients" with the storage of a single application, or for a company to know exactly the hash of its own data.
This would be nice, but is probably too expensive to do.
In fact, if we compute a Merkle tree with the storage hashes of each contract as the leaves, we only need to store the "StateRoot" (or Storage Merkle Tree root) on the blockchain. In this case, we could actually trust RPC nodes that send the complete Merkle tree via an RPC call (RPC getStorageMerkleTree). No one will be able to fake a single storage entry of any contract if the root is correct. On the other hand, nodes will only need to update the hashes of the contracts affected in that block, which is quick (and can be done in parallel). In fact, this strategy tends to be much faster than simply hashing the whole storage, because we can exploit the natural parallelism of having multiple contracts, and the storages are independent.
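Just to illustrate the argument (a rough sketch, not actual neo code; `contract_storage_hash`, `state_root` and `update_after_block` are hypothetical names): per-contract hashes can be recomputed only for contracts touched in a block, in parallel, and then folded into one root.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def contract_storage_hash(storage: dict) -> bytes:
    """Hash one contract's storage (bytes key -> bytes value) over sorted
    key/value pairs, so the result does not depend on insertion order."""
    h = hashlib.sha256()
    for key in sorted(storage):
        h.update(sha256(key) + sha256(storage[key]))
    return h.digest()

def state_root(contract_hashes: dict) -> bytes:
    """Fold the per-contract hashes (sorted by contract script hash, bytes)
    into a single root. A real implementation would build a proper Merkle
    tree so single-contract proofs stay logarithmic in size."""
    h = hashlib.sha256()
    for script_hash in sorted(contract_hashes):
        h.update(script_hash + contract_hashes[script_hash])
    return h.digest()

def update_after_block(contract_hashes: dict, touched: dict) -> bytes:
    """Only contracts whose storage changed in the block are rehashed, and
    those rehashes are independent, so they can run in parallel."""
    with ThreadPoolExecutor() as pool:
        digests = pool.map(contract_storage_hash, touched.values())
        for script_hash, digest in zip(touched, digests):
            contract_hashes[script_hash] = digest
    return state_root(contract_hashes)
```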
Ok, I did some heavy studying of the Merkle Patricia Trie used in Ethereum, and it's simple and efficient. That would fully solve storage auditing on Neo too... so, soon enough I'll start some experiments using the data in https://github.com/neoresearch/neo-storage-audit.
Guys, thanks to @rodoufu we are well advanced in building this MerklePatricia tree to handle storage hashes; it is very efficient and deterministic with respect to insertion/deletion order. As a first step, we could add it to the local database (together with an RPC call), so storage validation will start to work. As a second step we put it in the block header to strengthen the solution.
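For anyone curious, here is a toy sketch of the determinism property mentioned above (this is not the implementation in the PR; real MPTs use nibble paths, node types and a different encoding, and this sketch assumes fixed-length keys): the root hash depends only on the key/value set, so insertion/deletion order does not matter.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class ToyPatriciaTrie:
    """Toy Patricia-style trie over fixed-length keys, hashed with SHA-256.
    Branch nodes split on the next hex nibble; a leaf keeps its remaining
    path. The root is recomputed from the item set, so it is independent
    of the order of put/delete calls."""

    def __init__(self):
        self._items = {}  # hex key -> value bytes

    def put(self, key: bytes, value: bytes) -> None:
        self._items[key.hex()] = value

    def delete(self, key: bytes) -> None:
        self._items.pop(key.hex(), None)

    def root(self) -> bytes:
        return self._hash(sorted(self._items.items()), 0)

    def _hash(self, items, depth: int) -> bytes:
        if not items:
            return sha256(b"empty")
        if len(items) == 1:
            key, value = items[0]
            # Leaf node: remaining key path plus the value.
            return sha256(b"leaf:" + key[depth:].encode() + value)
        # Branch node: group children by the hex nibble at this depth.
        groups = {}
        for key, value in items:
            groups.setdefault(key[depth], []).append((key, value))
        payload = b"branch:"
        for nibble, group in sorted(groups.items()):
            payload += nibble.encode() + self._hash(group, depth + 1)
        return sha256(payload)
```

Inserting the same entries in any order, or inserting and then deleting an entry, yields the same root, which is what makes this kind of structure suitable for storage audits.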
We created a draft proposal on how Merkle Patricia trees could be useful to create safer storage and to provide a door out of UTXO model: https://github.com/neo-project/proposals/pull/74
Everyone in the community is invited to join us as a co-author :)
Waiting for the pull request review at https://github.com/neo-project/neo/pull/528
Nice feature!! Next year we finish that xD Happy 2019 to everyone :)
Happy new year to everyone.
Hope 2019 is as successful as 2018, full of development, great ideas and insights.
Thanks to everyone that motivated and supported us.
By putting the state in the current block, we would need to run the whole execution during block proposal, hurting TPS, and I believe the path being followed now on Neo3 is to allow applications to be processed afterwards. However, it could still be included in the next block header, once the previous state has already been processed. I'll give an opinion on why this is not currently desirable either.
Neo Blockchain has an interesting philosophy that we can keep applying fixes to it, and the state is connected to the protocol specification itself, not to the code. So, if the code has a bug and we "lock" the state in the block, we could never correct the issue.
So, I think it's better to have the state hash pushed via p2p messages, signed by consensus nodes, so that nodes can still keep track of the "official" state, while not locking it forever into the blockchain.
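As a sketch of what a node consuming such a p2p message could check (the message layout, the `verify_signature` helper and the threshold are assumptions for illustration, not an actual Neo protocol definition):

```python
import hashlib
from dataclasses import dataclass

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

@dataclass
class StateRootMessage:
    """Hypothetical p2p payload: the state root for a block height, plus
    signatures from consensus nodes over (height, root)."""
    height: int
    root: bytes
    signatures: dict  # consensus node pubkey (hex) -> signature bytes

def signed_payload(msg: StateRootMessage) -> bytes:
    return sha256(msg.height.to_bytes(4, "little") + msg.root)

def accept_state_root(msg: StateRootMessage, consensus_pubkeys: list, verify_signature) -> bool:
    """Accept the root only if at least n - f consensus nodes signed it,
    mirroring the dBFT fault threshold. verify_signature(pubkey, payload,
    sig) is a stand-in for a real ECDSA check."""
    n = len(consensus_pubkeys)
    f = (n - 1) // 3
    payload = signed_payload(msg)
    valid = sum(
        1 for pk in consensus_pubkeys
        if pk in msg.signatures and verify_signature(pk, payload, msg.signatures[pk])
    )
    return valid >= n - f
```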
Should we close this?
So, if the code has a bug and we "lock" the state in the block, we could never correct the issue.
If you fix the issue, it will be with another transaction, so you will use this state as a valid one. Could you explain in more detail when this could be a problem?
Erik gave a good example some time ago... about a little-endian format implemented as big-endian on a native contract. I'll give another one: suppose we wrongly implement some opcode, allowing unintentional stack access in a program, or access to forbidden context information, cheating the execution. This could leak funds from a contract, for example, similar to the DAO hack. If we fix the issue and re-run the chain on the new version, previous states would now change, and the hack wouldn't happen.
If we lock the state hash, we would never be able to do any protocol change, as all protocol changes would be breaking changes for someone.
With this approach, the protocol itself can be improved, which is common in all platforms.
We have ways to protect user assets even without this... for example, see the Sólid States NEP.
Another thing we should take care of is to not un-fault a tx, as this is not beneficial in any case... once faulted, faulted forever.
I think that if we are in this situation, either you find the error in that block and stop the chain at that moment, or it will never be fixed, because this storage will be used in the next transactions. If you fix the opcode and alter the behavior, you will get different future results. Any change to the VM should be done without backwards compatibility, controlled by the header.
I prefer to save the states in the block header
Let's see if we can make cross-chain work perfectly without this... I think we can.