Like the pruning mode in the Bitcoin client, it would be useful to have an option to prune old blocks so that disk usage is much lower, making it easier to run a fully validating node (a much closer approach to "don't trust, verify"). Although such a node can't serve old blocks to newcomers, it still adds security by validating blocks against the consensus rules and propagating only valid ones.
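For reference, Bitcoin Core exposes this as a single config option (this part is real; the minimum retention target is 550 MiB):

```
# bitcoin.conf — Bitcoin Core's pruning mode: validate everything,
# then discard raw block data beyond a ~550 MiB retention target
prune=550
```

Equivalently on the command line: `bitcoind -prune=550`.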
Adding to this, perhaps add an archive mode with a block-number option at which to start: it behaves the same as a full archive node, but only keeps historical state data from that block onward.
For dapp devs this would be of great value: you don't care about blocks before your contract deployment, so you could serve a full archive node from a dev machine instead of paying to keep one in AWS or the like.
For example, if I am only interested in state starting at block 6M, the node syncs and auto-prunes everything up to that block, and from that point on it starts storing state. I could then host an archive node on my local machine with a much smaller SSD footprint.
I would also want to be able to export/import this data to share it with fellow devs.
We could then work fast against a local node holding exactly the state data we need, without paying AWS fees for a shared 2 TB+ archive node. A rough sketch of what this could look like is below.
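To illustrate, here is a sketch of how the request might surface on the command line. The `--archive.from` flag is purely hypothetical, invented here for illustration; `geth export`/`geth import` are existing commands, but note they move block data, not state, so sharing pruned state would need a new mechanism:

```
# hypothetical flag: full-archive semantics, but keep historical state
# only from block 6,000,000 onward (does not exist in geth today)
geth --syncmode full --gcmode archive --archive.from 6000000

# existing geth commands: export/import a block range to share with other devs
# (these handle blocks, not the state data discussed above)
geth export chain.rlp 6000000 6100000
geth import chain.rlp
```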
This is a huge feature, a must-have, especially when you're short on fast storage (SSD/NVMe) and don't want to keep old blocks around.
related: #10760
Closing issue due to its stale state.