ZeroNet: History and log for ZeroNet site file changes (a.k.a. Archiving, Versioning)

Created on 18 Dec 2017 · 5 comments · Source: HelloZeroNet/ZeroNet

http://127.0.0.1:43110/Talk.ZeroNetwork.bit/?Topic:1513607828_13aB79mzuRLgDYzQthn9wzycjw77WyDXh6/File+History

If someone changes some file, other peers should be able to easily know which files changed and what the files' previous data were.

All 5 comments

Please provide a description of the issue in the issue body for those of us viewing these from work.

As always, show your effort before opening a feature request. Don't say "I want this" without thinking about how to implement said ideas. In addition, it doesn't hurt to check your grammar.

History and log for ZeroNet site files change - _polar posted on Dec 18, 2017_
If someone change[d] some file, other peers should be [able to] easily know which files [were] changed and what the files['] previous data [were].

_gitcenter · on Dec 18, 2017_
> File History
Git!

_polar · on Dec 18, 2017_
I need a built-in feature. And git performs poorly with a HUGE number of files.

_gitcenter · on Dec 18, 2017_
Built-in? You mean, in ZeroNet? Then there is no such feature.
There may be a service on ZeroNet that offers it, but that's not built-in.

_polar · on Dec 18, 2017_
A great example: https://wiki.installgentoo.com/index.php/Lainchan

One lainon summed up the events very nicely: "Appleman didn't bother to change credentials on the server's control panel so when he requested a password reset Kalyx got the email instead of him. Realizing he had access to the server's control panel, Kalyx decided to be a dick and erase the server entirely, thinking appleman could just restore it from back ups. Appleman in a remarkable feat of stupidity kept lainchan's back ups on the same server it was hosted on, so those got nuked at the same time as the rest of the site."

Since the site owner has all the power to nuke all files AND almost all users have no backups, a built-in file history is important.

_gitcenter · on Dec 18, 2017_
GitHub owners can erase all data, too. Do you know a service which has no problems?

_polar · on Dec 18, 2017_
But we have clones, right?

_gitcenter · on Dec 18, 2017_
Which one?

_polar · on Dec 18, 2017_
Normal git.
If someone erases all data in a git repo and pushes to the remote server, we still have a local copy.

_gitcenter · on Dec 18, 2017_
What stops you from using Git for ZeroNet?

_polar · on Dec 18, 2017_
Nothing; I can use git on my site. But I want to apply this feature to all sites.

_gitcenter · on Dec 18, 2017_
So you basically want a backup system?

_polar · on Dec 18, 2017_
I don't use a time-based job scheduler: a full scan / full copy of every file in /data performs poorly.
Instead I monitor file changes: when a file changes, I copy it to a mirror folder, so the mirror folder stores every old version of the files since I started seeding.
But I can't get the history data from before I started seeding, and other peers can't easily read my mirror.
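
A minimal sketch of this mirror-folder approach, using the third-party watchdog library; the data/ and mirror/ paths and the timestamp-suffix naming are assumptions, not ZeroNet conventions:

```python
# Sketch only: watch data/ and snapshot each changed file into mirror/,
# suffixed with a timestamp, so every version seen while seeding is kept.
import shutil
import time
from pathlib import Path

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

DATA_DIR = Path("data")      # assumed ZeroNet data directory
MIRROR_DIR = Path("mirror")  # assumed archive of old versions

class MirrorHandler(FileSystemEventHandler):
    def on_modified(self, event):
        if event.is_directory:
            return
        src = Path(event.src_path)
        rel = src.relative_to(DATA_DIR)
        # The timestamp suffix keeps earlier snapshots from being overwritten.
        dest = MIRROR_DIR / rel.parent / f"{rel.name}.{int(time.time())}"
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dest)

if __name__ == "__main__":
    observer = Observer()
    observer.schedule(MirrorHandler(), str(DATA_DIR), recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()
```

As the comment notes, this only captures versions written after the watcher starts, and the mirror layout stays private to one peer.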

_gitcenter · on Dec 18, 2017_
In git, you can force-push. You can deny force-pushes, but only on the remote server. On ZeroNet, there is no server, so you cannot forbid anybody from force-pushing.

OP was talking about a kind of archive system. When the publisher pushes new changes, the recipients should cache the old files for a while, and the user should be able to revert the changes if necessary. I will break this down into two main components:

  1. Network protocol design
  2. Data storage challenges

Network protocol design

It may be useful to locate old files via DHT (#57). To look up an old file, use its hash digest as the key. Because only the manifests (content.json) are signed, the old manifest must be retrieved first.
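
A hedged sketch of that lookup order. dht_get and verify_signature are hypothetical placeholders (no such API exists in ZeroNet today); the point is only that the signed old manifest is fetched and verified first, and the plain files are then authenticated by the digests it lists:

```python
import hashlib
import json

def dht_get(digest: str) -> bytes:
    """Hypothetical DHT lookup keyed by hash digest (see #57)."""
    raise NotImplementedError

def verify_signature(manifest: dict) -> bool:
    """Hypothetical check of the manifest signature against the site address."""
    raise NotImplementedError

def fetch_old_version(old_manifest_digest: str) -> dict:
    # 1. Locate the old content.json by its digest; verify we got those bytes.
    manifest_bytes = dht_get(old_manifest_digest)
    assert hashlib.sha512(manifest_bytes).hexdigest() == old_manifest_digest

    # 2. Only manifests are signed, so this is the one signature check.
    manifest = json.loads(manifest_bytes)
    if not verify_signature(manifest):
        raise ValueError("old manifest has an invalid signature")

    # 3. Fetch each old file by the digest the manifest records for it.
    #    (Real content.json stores a truncated SHA-512; full digests are
    #    used here for simplicity.)
    files = {}
    for path, info in manifest["files"].items():
        data = dht_get(info["sha512"])
        assert hashlib.sha512(data).hexdigest() == info["sha512"]
        files[path] = data
    return files
```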

Data storage challenges

No idea.

Old data storage:
If some files (manifests and data) are changed or deleted, peers move the old files to a "recycle-bin" folder instead of just overwriting or deleting them.

Get the history:
If a user fetches a site's history, other peers provide the old manifests and data.
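
A minimal sketch of both halves, assuming old versions are filed under a content-addressed data/__recycle_bin__/ folder (the folder name and layout are assumptions, not an existing ZeroNet convention). Storing files by digest deduplicates identical versions and lets a peer answer history requests by hash:

```python
import hashlib
import shutil
from pathlib import Path

RECYCLE_BIN = Path("data/__recycle_bin__")  # assumed location

def recycle(path: Path) -> Path:
    """Move the current version of `path` into the recycle bin, keyed by
    the SHA-512 digest of its content, instead of overwriting/deleting it."""
    digest = hashlib.sha512(path.read_bytes()).hexdigest()
    RECYCLE_BIN.mkdir(parents=True, exist_ok=True)
    dest = RECYCLE_BIN / digest
    if dest.exists():
        path.unlink()            # identical content is already archived
    else:
        shutil.move(str(path), dest)
    return dest

def serve_history_request(digest: str) -> bytes | None:
    """Answer another peer's request for an old file by its digest."""
    dest = RECYCLE_BIN / digest
    return dest.read_bytes() if dest.exists() else None
```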


It isn't necessary to ensure that all history data are always available (for example, it is OK if some peers provide the current manifest but no peers provide the old manifests).

This feature is useful for sites to avoid unexpected bad outcomes (for example: data written before a sync completes getting lost).
