Store and access files based on the file's hash (or merkle root for big files)
data/__static__/[download_date]/[filename].[ext]
data/__static__/[download_date]/[hash].[ext]
data/__static__/[download_date]/[partial_hash].[ext]
data/__static__/[partial_hash]/[hash].[ext]

Possible alternatives to the static content root directory (instead of data/__static__/):
data-static/
data/__immutable__/

Variables:
http://127.0.0.1:43110/f/[hash].[ext] (for non-big file)
http://127.0.0.1:43110/bf/[hash].[ext] (for big file)
The file name could optionally be added as follows, but the hash does not depend on the filename:
http://127.0.0.1:43110/f/[hash]/[anyfilename].[ext]
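For illustration, here is a minimal sketch in Python of how these URL variants could be parsed (a hypothetical helper, not actual ZeroNet code; it assumes the 64-hex-character hashes used in the examples):

import re

# Hypothetical parser for the /f/ and /bf/ URL variants above.
# The optional trailing filename is kept only for display, since the hash
# does not depend on it.
STATIC_URL_RE = re.compile(r"^/(f|bf)/([0-9a-f]{64})(?:\.[A-Za-z0-9]+)?(?:/([^/]+))?$")

def parse_static_url(path):
    match = STATIC_URL_RE.match(path)
    if not match:
        return None
    kind, file_hash, filename = match.groups()
    return {"big_file": kind == "bf", "hash": file_hash, "filename": filename}

print(parse_static_url(
    "/f/602b8a1e5f3fd9ab65325c72eb4c3ced1227f72ba855bef0699e745cecec2754/any_file.jpg"))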
data/__static__/__add__: Copy the file to this directory, visit the ZeroHello Files tab, then click on "Hash added files".

For directory uploads we need to generate a content.json that contains the references to the other files.
Basically these would be sites where the content.json is authenticated by its sha512t hash instead of the public address of the owner.
Example:
{
  "title": "Directory name",
  "files_link": {
    "any_file.jpg": {"link": "/f/602b8a1e5f3fd9ab65325c72eb4c3ced1227f72ba855bef0699e745cecec2754", "size": 3242},
    "other_dir/any_file.jpg": {"link": "/bf/602b8a1e5f3fd9ab65325c72eb4c3ced1227f72ba855bef0699e745cecec2754", "size": 3821232}
  }
}
These directories can be accessed on the web interface using http://127.0.0.1:43110/d/{sha512t hash of generated content.json}/any_file.jpg
(file list can be displayed on directory access)
The downloaded files and the content.json are stored in the data/__static__/[download_date]/{Directory name} directory.
Each file in the directory is also accessible using
http://127.0.0.1:43110/f/602b8a1e5f3fd9ab65325c72eb4c3ced1227f72ba855bef0699e745cecec2754/any_file.jpg
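As a sketch of how the /d/ address could be derived (assuming sha512t means sha512 truncated to 256 bits, matching the 64-character hashes above; the exact serialization rules are not specified in the proposal):

import hashlib
import json

def sha512t(data):
    # Assumption: sha512t = sha512 truncated to 256 bits (64 hex characters)
    return hashlib.sha512(data).hexdigest()[:64]

generated_content_json = json.dumps({
    "title": "Directory name",
    "files_link": {
        "any_file.jpg": {"link": "/f/602b8a1e5f3fd9ab65325c72eb4c3ced1227f72ba855bef0699e745cecec2754", "size": 3242},
    },
}, sort_keys=True).encode("utf8")

directory_hash = sha512t(generated_content_json)
print("http://127.0.0.1:43110/d/%s/any_file.jpg" % directory_hash)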
As an optimization, if the files are accessed using a directory reference, the peer list can be fetched from other peers using findHashId/getHashId, without accessing the trackers.
Announcing and keeping track of peers for a large number (10k+) of files can be problematic.
Send tracker requests only for large (10MB+) files.
To get the peer list for smaller files we use the current getHashfield/findHashId solution.
Cons:
Announce all files to zero:// trackers; reduce the re-announce time to e.g. 4 hours (re-announce within 1 minute if a new file is added)
(sending this number of requests to bittorrent trackers could be problematic)
Don't store peers for files that you have 100% downloaded.
Request for 10k files: 32 bytes * 10k = 320 kB (optimal case)
Change the tracker communication to request a client id token and only communicate hash additions/deletions until the expiry time.
The token expiry time extends with every request.
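A minimal client-side sketch of such a token session (the message shape, field names and the 4-hour expiry are assumptions taken from the lines above, not an actual tracker protocol):

import time

class TrackerSession:
    def __init__(self, token, expiry=4 * 3600):
        self.token = token
        self.expiry = expiry
        self.announced = set()
        self.expires_at = time.time() + expiry

    def delta_announce(self, current_hashes):
        current = set(current_hashes)
        added = current - self.announced
        removed = self.announced - current
        self.announced = current
        self.expires_at = time.time() + self.expiry  # expiry extends with every request
        return {"token": self.token, "add": sorted(added), "del": sorted(removed)}

session = TrackerSession(token="abc123")
print(session.delta_announce({"hash1", "hash2"}))  # first request sends everything
print(session.delta_announce({"hash2", "hash3"}))  # later requests send only the delta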
Accept some risk of hash collisions and allow the tracker to specify how many characters it needs from the hashes
(based on how many hashes it stores)
Estimated request size to announce 22k files:
Cons:
Downloading all optional files in a site, or all files uploaded by a specific user, won't be possible anymore:
The optional files will no longer be stored in the files_optional node of the user's content.json.
Add a files_link node to content.json that stores the files uploaded in the last X days
(with sha512, ext, size, date_added nodes), as sketched below.
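For illustration, a single entry of such a files_link node might look like this (a sketch in Python; the exact field layout is an assumption based on the nodes listed above, and the values are made up):

files_link = {
    "any_file.jpg": {
        "sha512": "602b8a1e5f3fd9ab65325c72eb4c3ced1227f72ba855bef0699e745cecec2754",
        "ext": "jpg",
        "size": 3242,
        "date_added": 1567296000,  # unix timestamp
    }
}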
Why not directly abandon the protocol?
Please never do duplicate work.
Starting from content-addressing, are you going to implement DHT and the other stuff that is already in IPFS?
What's your opinion on IPZN?
BTW, you can find me on Telegram @blurhy
Adding IPFS protocol support is also a possible option, but I don't want to depend on an external application
DHT support with many uploaded files would be very inefficient: e.g. if you want to announce your IP for 100,000 files, you have to connect to thousands of different computers, because the DHT buckets are distributed between random computers.
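To make that cost concrete, a back-of-the-envelope estimate (all numbers here are illustrative assumptions):

# Announcing N hashes to a Kademlia-style DHT stores each hash on the k nodes
# closest to it; because the hashes land in random buckets, the announcer
# contacts roughly min(N * k, network_size) distinct nodes.
N = 100_000          # files to announce
k = 8                # replication factor per hash
network_size = 50_000
print(min(N * k, network_size))  # -> 50000 distinct nodes in this sketch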
So you don't agree on modularity?
Adding IPFS protocol support
Instead of saying it's adding IPFS support, I'd say it's a radical change
DHT support with many uploaded files would be very inefficient: e.g. if you want to announce your IP for 100,000 files, you have to connect to thousands of different computers, because the DHT buckets are distributed between random computers.
What's ZeroNet for?
Anti-censorship? Right?
There's a saying, 'grapes are sour when you can't eat them'
While modularity is important, using IPFS as the main backend doesn't look good to me. One of the reasons is depending on an external tool (that's what nofish said). We can never be sure there are no radical changes that might make ZeroNet stop working. Also, we'd have to spend much more time switching to IPFS and making it compatible with classic ZeroNet than we would if we just tuned the classic protocol a bit.
I might want to reword that better: I'm not against an IPFS-based system, but that shouldn't be connected to ZeroNet.
tuned the classic protocol a bit.
Are you sure?
Anyway, a bit is not enough, or IPFS wouldn't have so much code.
that shouldn't be connected to ZeroNet.
Yeah, but it will be connected to IPZN
We can never be sure there are no radical changes that might make ZeroNet stop working
Basically, impossible.
Anyway, a bit is not enough, or IPFS wouldn't have so much code.
It is.
Yeah, but it will be connected to IPZN
Sure, you can develop a decentralized system yourself, but don't call it ZeroNet. If it turns out to be better, we'll switch.
Basically, impossible.
POSIX is going to be alive for quite a long while. Same with Python 3.7.
Additionally, I'm not quite sure but I believe that IPFS + IPZN is slower than classic ZeroNet.
It is.
It isn't.
Sure, you can develop a decentralized system yourself, but don't call it ZeroNet. If it turns out to be better, we'll switch.
I don't want to call it ZeroNet
I'm not quite sure but I believe that IPFS + IPZN is slower than classic ZeroNet.
It depends on IPFS. Do you know that DHT is not the only option for routing in IPFS?
@mkg20001
It isn't.
It is. We have a rather big base for new features.
I don't want to call it ZeroNet
Ok, then don't say that IPZN is better than ZeroNet. It might be better than ZeroNet in the future if you finish it. @krixano and I worked on another decentralized system that could possibly better (though somewhat non-classical), but we didn't end up implementing it. We didn't advertise it here and there.
It depends on IPFS. Do you know that DHT is not the only option for routing in IPFS?
Quite probably, but adding DHT (and others) to ZeroNet should be easier than switching to a completely different architecture.
a rather big base for new features.
But why does IPFS have so much code?
Is the code unneceasry? No.
on another decentralized system that could possibly better
What?
We didn't advertise it here and there.
As a result, I know nothing about your project.
to ZeroNet should be easier than switching to a completely different architecture.
Maybe, but as I said, are the tons of code of IPFS unnecessary?
It implies there are a lot of features that need to be done.
When you find all the features are done, you will also realize you have re-implemented IPFS.
I mean IPFS has more features and we should not duplicate work; just switch to IPFS.
So I'd rather re-implement application layer code instead of lower layer code.
That's easier
Is the code unneceasry? No.
It unneceasry (sic) for ZeroNet's usecases.
What?
A typo, sorry; it should be "possible be better".
As a result, I know nothing about your project.
Yes, that's what I'm talking about! Don't announce before an alpha version, we don't want another BolgenOS.
Maybe, but as I said, are the tons of code of IPFS unnecessary?
Some of them are unnecessary for ZeroNet usecases.
It implies there are a lot of features that need to be done.
See above.
That's easier
That's how it works today: add 10 levels of abstraction and let it figure itself out! We should think about being better, not being easy to build.
It unneceasry (sic) for ZeroNet's usecases.
Do you want more awesome features?
Don't announce before an alpha version
We need ideas and improvements on paperwork to achieve a better design
We should think about being better, not being easy to build.
Modularity is better, as well as easier to build
another decentralized system
What's that project?
Do you want more awesome features?
List them. I believe that most of them can be easily implemented in classic ZeroNet and even more easily after adding content-addressed files.
We need ideas and improvements on paperwork to achieve a better design
It looks like you learned a new buzzword "IPFS" and now you're saying "IPFS supports more features, go use IPFS!" First, say what you're missing and how rewriting all ZeroNet code to support IPFS will be faster or easier (that's what you're appealing to) than adding them as classic ZeroNet plugins.
Modularity is better, as well as easier to build
We don't want to depend on an external service. We could separate ZeroNet into a backend and a frontend later when we grow bigger, but we can't just take someone else's project and use it, mostly because we can't add features/fix bugs if the IPFS guys don't like that.
What's that project?
This is not related to ZeroNet mostly, so I'll keep short. Think of it as a decentralized gopher-like system.
List them
For example, FileCoin.
mostly because we can't add features/fix bugs
Why don't you add more features to tcp/ip/http/https?
It looks like you learned a new buzzword "IPFS"
I doubt that you have ever read about IPFS.
For example, FileCoin
Another non-buzzword one please. And even then, FileCoin can be implemented as a plugin.
Why don't you add more features to tcp/ip/http/https?
Is this sarcasm?
I doubt that you have ever read about IPFS.
Sure I did. Please don't ignore my questions and answer: what IPFS features can't be added to ZeroNet?
FileCoin can be implemented as a plugin.
Huh, do you think you guys have enough effort?
sarcasm
Yeah, of course.
I mean your concern is nonsense and will never happen, because it's infrastructure like http.
what IPFS features can't be added to ZeroNet?
Nonsense
IPLD claims to be a "merkle forest" that supports all datatypes
Implementing IPLD into ZeroNet would therefore require first writing IPLD-compatible data-types to add the zeronet-objects into the IPLD-layer
Thus we'd have to integrate ZeroNet into IPLD anyway and this discussion is IMO completely pointless
Additionally we have a p2p framework that tries to solve the needs of everyone, so people can focus on their apps and not the network stuff, called libp2p. Ethereum recently made the switch and ZeroNet could do that as well, since if anything's missing in libp2p, it can simply be added, thus squaring the value of the framework for both sides
Thus I find it entirely pointless to fight over what's best
My point is: Let's join together instead of fighting, so I created the idea of adding ZeroNet into IPLD which I tried to achieve with ZeroNet-JS (but gave up since summer holidays were over 😅)
What could possibly go wrong? In the end, if we find a way to add layers to libp2p to circumvent gfw by hiding it in plain HTTP traffic, it would benefit every p2p app. Not just ZeroNet. So we don't need 3 wheels if we can all work on one for everyone.
Huh, do you think you guys have enough effort?
Don't make us do what you want to. Do it yourself: either write your own network or bring features to ours.
I mean your concern is nonsense and will never happen, because it's infrastructure like http.
What the heck, merger sites were added, optional files were added, big files were added!...
Nonsense
It looks like a classic "no u".
@mkg20001 Your arguments look better. While I wouldn't use IPFS, libp2p might be a better solution because it's at least used by many projects, so it's unlikely that breaking changes are added. So, is the plan to switch to libp2p?
Do it yourself
Of course, open source is voluntary
merger sites were added, optional files were added, big files were added!...
You think these are features? They're just workarounds for bad design
While I wouldn't use IPFS
Go and read the IPFS papers again
I don't know what to say.
Of course, open source is voluntary
Right. nofish can't be forced to do something unless he or the ZeroNet community finds it important (in the latter case, we'll either end up with forking or with convincing nofish). Go find those who like your idea and start development.
You think these are features? They're just workarounds for bad design
Uh, what? Sure, big files might be a hotfix but how is optional/required file separation bad design?
We can even start at IPFS homepage:
Take a look at what happens when you add a file to IPFS.
See? Add a file. ZeroNet is not just about files: PeerMessage works without files and it should stay that way.
PeerMessage works without files and it should stay that way.
Huh, you definitely don't know about pubsub and IPFS's plan for the dynamic web
Huh, you definitely don't know about pubsub and IPFS's plan for the dynamic web
Quite probable. Now show me a working implementation of pubsub and the IPFS-like dynamic web in Python.
You were saying "well pubsub is not ready yet" and "IPFS development is slow", and now you're asking us to switch to something that's not ready!
pubsub
Take a look at https://gitlab.com/ipzn/ipzn/wikis/Notes, these are WIP IPFS features
switch to something that's not ready!
So what? Just wait. Do you want a toy project?
So what? Just wait.
Yes. You can't implement something before its dependencies are ready!
how is optional/required file separation bad design?
Because of ZeroNet's default behaviour: keeping all files and never deleting them.
How is the optional file feature hard to implement?
Because of ZeroNet's default behaviour: keeping all files and never deleting them.
That's because it used to be a correct solution back then, when ZeroNet was small. You can't just make a project and say "it's finished", you have to adapt it endlessly.
How is the optional file feature hard to implement?
It's not hard, nofish implemented it when it started being important.
That's because it used to be a correct solution back then, when ZeroNet was small. You can't just make a project and say "it's finished", you have to adapt it endlessly.
So I am going to start a project from scratch, based on IPFS, for a better design
add layers to libp2p to circumvent gfw by hiding it in plain HTTP traffic
As an IPFS plugin, maybe
Let's start tracing your point:
* You want to use IPFS as backend;
* Adding IPFS is better because of modular architecture;
* Modular architecture allows adding features separately.
Right?
Let's start tracing your point:
* You want to use IPFS as backend;
* Adding IPFS is better because of modular architecture;
* Modular architecture allows adding features separately.
Right?
And IPFS has a team working on it, possibly full-time
Great. So the only reason to switch to IPFS is because of more features. Now list them for us -- dumb people who are too silly to understand it.
@imachug The plan for ZeroNet-JS was to have both znv2 (the zeronet msgpack RPC protocol) and a custom one on top of libp2p. Additionally the plan for IPLD-integration was to store objects locally using IPLD (instead of directly using the FS) and then exchange them with other ZeroNet clients as "plain" ZeroNet objects.
(Also I had an idea to replace the SQL for multi-user with something that has SQL-syntax but doesn't do as much I/O and computation as sqlite, to fix performance)
That way we can experiment with new things, without breaking compatibility to the "mainnet" too much
The reason why I even started this project is because, from my point of view, it looked like this:
IPLD: "Let's build bridges, not walls, by building a common base-implementation for all kinds of DAGs"
ZeroNet: "Let's run our own torrent system called bigfiles"
libp2p: "Let's build upon common standards where possible, to keep problems with compatibility at a minimum"
ZeroNet: "Let's re-invent mDNS for discovery, because, heck, we can"
If we continue that path (with all of p2p, not just zeronet), we'll just have another silo problem, just at another point of the protocol
If we combine our forces, through efforts such as multiformats (which tries to "support it all") then we'll have a truly decentralized internet.
Great. So the only reason to switch to IPFS is because of more features. Now list them for us -- dumb people who are too silly to understand it.
Wait. It took me about two months to get an overview of IPFS. So making you understand how great IPFS is is not possible in a few words.
You basically don't want to admit that ZeroNet is not the best one nowadays
custom one on top of libp2p
No, IPFS has pubsub now
I think more introductions/docs are needed for IPZN.
Too much misunderstanding.
@blurHY We literally don't have clear goals with IPZN defined yet. I don't even fully understand it. It's better to focus on extensibility with maintained compatibility than "starting from scratch", or IPZN won't be better than any of the projects it claimed to replace. After all, we're building bridges not walls.
Wait. It took me about two months to get an overview of IPFS. So making you understand how great IPFS is is not possible in a few words.
Seriously? libp2p's feature is "a small layer for building huge decentralized apps", ZeroNet's is "we have sites, sites are stored by everyone who wants to store them, you can download a site and even post comments". What's IPFS's feature?
You basically don't want to admit that ZeroNet is not the best one nowadays
I understand that ZeroNet might not be going in the correct direction, but I'm pretty sure that using IPFS is not the correct way.
No, IPFS has pubsub now
See, IPFS created pubsub and it's not compatible with other protocols! Wow!
I think more introductions/docs are needed for IPZN.
That's right.
We literally don't have clear goals with IPZN defined yet
I have some free time tomorrow to write the docs.
You can understand IPFS as a global filesystem plus a communicator among all peers.
In this way, you can almost build any type of decentralized application.
I call the ZeroNet support in IPZN a bridge, because ZeroNet may not support some features of IPFS, so it's only partly compatible.
What's IPFS's feature?
a global filesystem and a communicator among all peers
Any decentralized web can be broken down into these two parts
See, IPFS created pubsub and it's not compatible with other protocols! Wow!
We can have a bridge to ZeroNet via a custom protocol on libp2p
Just to clarify: The reason why I'm all behind integrating/building on top of IPLD/libp2p is that those projects try to be "compatible" by design, by for example allowing to swap out the DHT or pubsub implementation as needed, while on the other hand we have custom-built protocol stacks that make fixed assumptions and thus are harder to connect with each other (Edit: That is, aside from not having to re-invent the wheel)
Also, py-libp2p made substantial progress and should be ready to use quite soon, so that's something worth taking a look.
Hm. A big part of the world-wide community switched to libp2p, so using it might make sense (one of the reasons is making government blocks harder). But I think that most (if not all) IPFS features are available (or will be soon) in ZeroNet.
try to be "compatible" by design
When you always want to have compatibility, there's no space left for innovation.
However, I don't mean IPZN is not compatible, just partly compatible
I think that most (if not all) IPFS features are available (or will be soon) in ZeroNet.
Why do you want to copy their features?
Why do you want to copy their features?
Reread my comment please. See, I said that most features are available right now, and switching to IPFS will take a lot more time than just adding one or two features. BTW, you didn't say how IPFS is better than all other distributed file systems (e.g. ZeroNet).
BTW, you didn't say how IPFS is better than all other distributed file systems (e.g. ZeroNet).
Reread my comments too.
switching to IPFS will take a lot more time
More time? Do you think the application layer is harder than the infrastructure?
@blurHY
When you always want to have compatibility, there's no space left for innovation.
You should've read my ZeroNet-JS related comment: "That way we can experiment with new things, without breaking compatibility to the "mainnet" too much"
Also, libp2p is extensible. That's my point. If it works on a small scale, it can become part of libp2p and thus everyone profits.
@imachug
See, I said that most features are available right now, and switching to IPFS will take a lot more time than just adding one or two features
I'd go with that statement as well and we should also stop making this mess bigger than it is, by reusing parts that the IPLD/libp2p projects offer us and can be easily integrated _especially_ for future additions, since that way we can finally stop re-inventing things.
@HelloZeroNet I'd really like to hear your opinion on that as well.
It seems that while in the centralized space we currently have movements to get away from silos such as big companies and move forwards to federation, a similar movement exists in the p2p space to make different protocols partly interoperable (such as Juan Benet's vision of "the merkle forest") or build common base-frameworks (such as libp2p is doing for network-related things) and thus make the best of all available to everyone, everywhere.
More time? Do you think the application layer is harder than the infrastructure?
Yay, some sensible discussion finally! The application layer is less difficult than the infrastructure layer, but ZeroNet depends on its internals rather heavily, so changing the application-infrastructure bridge will take much time, and we could spend that time porting features from IPFS. We'll also stay separate from IPFS, which is a feature. (i.e.: having a compatible interface is good, using the same implementation is bad. That's how competition works)
Adding IPFS is troublesome, whilst using libp2p should be a lot easier (after all, that's just a very low-level protocol), and if it gives a lot more features than what we currently have, I'd go for it.
@blurHY @mkg20001 You're working on a single project and working as a marketer and a developer (respectively), but it looks the opposite way round from my point of view :)
@imachug Nice to hear! What about the other related parts, though? Like re-using IPLD + BitSwap for the exchange of objects across libp2p? It would make sense to use those as well, since they already integrate pretty well with libp2p and all custom parts of ZeroNet could be added as extensions to libp2p.
some sensible discussion finally
I've already written about it, uh, you didn't see my comments.
will take much time,
No, it's very easy because we have stuff like orbit-db that has already done a lot for us.
But the really hard and interesting part is stage 2, the blockchain portion.
Adding IPFS is troublesome, whilst using libp2p should be a lot easier (after all, that's just a very low-level protocol), and if it gives a lot more features than what we currently have, I'd go for it.
@imachug About the projects: There are two projects right now. ZeroNetJS (mine) and IPZN (ours). I'm mostly referring to ZeroNet-JS, which already has a big codebase (but sadly it's a messy one as well)
@imachug About the projects: There are two projects right now. ZeroNetJS (mine) and IPZN (ours). I'm mostly referring to ZeroNet-JS, which already has a big codebase (but sadly it's a messy one as well)
Yeah, it can be integrated in IPZN for compatibility to ZeroNet
So compatibility is not a problem
I've already written about it, uh, you didn't see my comments.
I'm sorry in this case, but I guess I just ignored it as it was surrounded by flood.
blockchain portion
I'd think of it as a bug actually -- using blockchain where it's not required is a bad idea.
We can use both high-level stuff and access low-level stuff
But that'll take a lot of effort and I don't see what we'll get in the end clearly yet.
@blurHY How will it help to add ZeroNetJS's ZeroNet-incompatible features into IPZN for compatibility with ZeroNet? Could you please explain that to me?
using blockchain where it's not required is a bad idea.
I'll explain it tomorrow, I can't clarify what it is in a few words
Nice to hear! What about the other related parts, though? Like re-using IPLD + BitSwap for the exchange of objects across libp2p? It would make sense to use those as well, since they already integrate pretty well with libp2p and all custom parts of ZeroNet could be added as extensions to libp2p.
We'd better start with something obvious like a low-level protocol and add more stuff in the future. Also, that's the first time I've ever heard of BitSwap so I'll have to spend some time reading on that...
How will it help to add ZeroNetJS's ZeroNet-incompatible features into IPZN for compatibility with ZeroNet? Could you please explain that to me?
Uh, if your project can be compatible with zeronet using libp2p, we can add this to IPFS as an ipfs-plugin.
add ZeroNetJS's ZeroNet-incompatible features
What's this?
the first time I've ever heard of BitSwap
Haha, you don't know that
Uh, if your project can be compatible with zeronet using libp2p, we can add this to IPFS as an ipfs-plugin.
I believe it makes a lot more sense to make IPFS a plugin, not the other way round...
@imachug BitSwap is part of IPFS's way of exchanging IPLD data over the network. Once it's implemented in Python as well, it might be useful, since that reduces the maintenance burden for @HelloZeroNet. It's explained in the paper that describes other parts of IPFS as well, such as its dag.
Haha, you don't know that
Haha, you don't know what abjasdgljkhgjklhsdfljkgh is.
to make IPFS a plugin
Anyways ZeroNet is not well designed.
So we should focus on its successor
Haha, you don't know that
@blurHY Welcome to real-life, where not everyone knows everything. Put your _schadenfreude_ somewhere else. Don't know what that means? Ha... no, really, stop it...
Haha, you don't know that
@blurHY Welcome to real-life, where not everyone knows everything. Put your _schadenfreude_ somewhere else. Don't know what that means? Ha... no, really, stop it...
I have 20 GB of offline dictionaries ...... by simply pressing ctrl+c, ctrl+alt+c
Anyways ZeroNet is not well designed.
So we should focus on its successor
@blurHY That's not how it works. Only a gradual transition is (in most cases) even feasible. And you're not going to suddenly find your way around that!
Anyways ZeroNet is not well designed.
Prove that. Adding features to ZeroNet is easy, you can't call that bad design.
you can't call that bad design.
IPLD's design is much better
IPLD's design is much better
"No u".
I have 20 GB of offline dictionaries ......
@blurHY
That says what exactly? That you're good at storing data? I have internet access as well, just fyi.
Look, at that point it doesn't even look like you want to argue, you just want to troll. Better to be quiet than to throw out such meaningless nonsense.
@imachug
Because there's a known better solution/design than ZeroNet
I have internet access as well,
That's fast
PS: I just noticed this issue is almost the most-commented one on ZeroNet
Because there's a known better solution/design than ZeroNet
Which. One? After you find it, prove that it's better.
Because there's a known better solution/design than ZeroNet
Then why are you even bothering to argue about compatibility with ZeroNet? Go ahead, do your own thing; I doubt it will be more compatible than ZeroNet currently is.
Why are you even putting your ego so much into this? I've seen the world, and there are times when X is better than Y and then there are times where it's the opposite way.
Sometimes rust is the better tool to make a thing, sometimes it's javascript. Sometimes it's both. Sometimes it's neither. The ends justify the means. What's your end for IPZN? Unity of all protocols? Or dominance/betterness above others?
Because there's a known better solution/design than ZeroNet
Which. One? After you find it, prove that it's better.
IPFS
It's basically impossible to explain in a few sentences, you should learn it yourselves
It's basically impossible to explain in a few sentences, you should learn it yourselves
In this case, go write your own ZeroNet-like network on top of IPFS. And don't argue with "I can't do that myself", nofish made ZeroNet without anyone's help.
@blurHY
To continue my comment above:
If unity is on your mind, it would only be reasonable to continue supporting all others as well, to keep them for the specific usecases where they are superior.
But one-size-fits-all has been the biggest joke of all times.
There are moments where IPFS is good, there are moments when something as plain and simple as "SSH to my server and download that file" is the better one.
Global, local, private, there are many contexts and what matters in each is a different story. KBFS is good for small teams but horrible for public stuff, IPFS is great for publicizing information but bad for privatizing it, tor is good for anonymity but horrible for speed
So, with that said, what's your standing point in this debate?
Do you understand how nonsensical it is to tell Facebook guys that React is awful and they should switch to Vue? That's about the same. These are just two different things, and, while they're designed for about the same purpose, they're different; sometimes React is better, sometimes Vue is.
If unity is on your mind, it would only be reasonable to continue supporting all others as well,
For example, Dat will not be supported, because it overlaps with IPFS and IPFS has more features.
IPFS is currently the best solution, so I won't give up using features that only exist in IPFS for the sake of compatibility
That's about the same
Here that's not the same
So, with that said, what's your standing point in this debate?
@blurHY I want this specific and meta question answered. And nothing more. All else was just context.
Here that's not the same
"IPFS is better! Go learn why yourself! I'm not going to spend my time discussing that!" Ok, we're not going to spend our time supporting your project. Bye.
So, with that said, what's your standing point in this debate?
@blurHY I want this specific and meta question answered. And nothing more. All else was just context.
ZeroNet is a toy project
ZeroNet is a toy project
@blurHY Says who? It's being actively used. And if you're using linux, then congrats on using the world's biggest toy project, if that is your point. Past intent may not always equal future intent.
If you can't answer it properly or you still think your arguments are the ones always being superior, then feel free to leave. I'll wait.
You have two users trying to explain to you why you're not the one that is always right, and neither are we.
I agreed with @imachug, for example, that BitSwap isn't easy to implement and it might make sense to postpone it until we have libp2p added.
But no. You just continue spewing nonsense.
ZeroNet is a toy project
Remember how Apple started.
Discuss it tomorrow
@blurHY All I wanted to get you to understand, was that hammering screws and screwing nails is both equally stupid and that's why there are different tools, that work for different problems. Protocols like tor have pros and cons, as well as ZeroNet and IPFS.
But you claim a hammer to be the world's best tool and that everyone else should be hammered down, in your opinion.
That's not how you get support. That's how you get hate.
Life is about making the lives of others better, such that your own life improves within the process. And not commanding others to do it for you or forcing your beliefs upon them.
IPZN and ZeroNet are not different tools.
Ok, so it looks like content-addressed data is what's better in IPFS (or, at least, that's what you think).
Question 1: how to serve dynamic sites? IPNS sounds like the only option. Same with user-content.
Question 2: this proposal looks rather neat and should fix centralization problems caused by site-based architecture. It looks like we could do the following:
By this time, we basically just rebuilt ZeroNet and returned to hub-based architecture. Now, if that's not what you're looking for, where did I miss your point?
The topic starter's proposal hashes individual files, not directories, and is not a replacement for the current, site-based storage.
Use cases: User file uploads, media files on sites
how to serve dynamic sites?
Hmm, read the wiki carefully please https://gitlab.com/ipzn/ipzn/wikis/home#ipfs
Any decentralized web can be summarized as two portions, a global and decentralized file system, and a communicator.
and not directories
I'd recommend that one though. It's rather common to group related files into a single directory (or even into nested directories). I'd really love that feature.
@blurHY Right, but that doesn't answer my question: if we have files, and file trees are signed by user/site/hub/whatever keys, how's that different from ZeroNet?
how's that different from ZeroNet?
There's no difference between user content and site content naturally.
We are just gathering related data.
Ok, so, basically, do you just want user data to be independent of site owner? In this case, this change should be rather easy.
to be independent of site owner
How user data is stored is just how the data is represented.
We can still apply the site owner's rules to user data
@blurHY Lol, you call zeronet a toy project, yet in your ipfs readme you represent your project as
IPZN is a new ZeroNet based on IPFS, which means that it uses IPFS (libp2p) protocol and IPLD data structure.
you are representing your project as based on this project. What are you trying to achieve? Are you trying to gain users by playing some arguments?
you are representing your project as based on this project
It's a new ZeroNet, and it's also described that way for your easier understanding.
ipfs readme
btw, it's ipzn
@imachug I have added my idea for directories
@all: Please try to keep it on-topic, it's very hard to follow this way
I have added my idea for directories
Great. This proposal looks promising.
While I would like ZeroNet to be more modular and support more protocols, I don't like creating a completely new incompatible project for this. Here is why:
I agree that it is good to have multiple compatible implementations of ZeroNet in multiple languages
For example I have attempted something with https://github.com/ZeroNetJS/zeronet-js which had 2 swarms instead of just one (libp2p and znv2-protocol), but was compatible with vanilla zeronet
Edit: Moving libp2p-releated discussion here https://github.com/HelloZeroNet/ZeroNet/issues/2198
@filips123
I disagree.
Firstly, you should separate the application layer in order to make a protocol stack.
Secondly, supporting too many protocols reduces performance and efficiency; some of the protocols you listed on the issue are outdated as well.
Thirdly, when you are done separating layers, it's almost IPZN.
Fourthly, IPZN is not a project starting from scratch; it's based on IPFS, which has already done a lot of the work for you.
Fifthly, IPZN is compatible with ZeroNet, thanks to @mkg20001's work.
Fifthly, IPZN is compatible with ZeroNet, thanks to @mkg20001's work.
@blurHY Is IPZN using ZeroNetJS? If that's not currently the case, then I urge you to please use the correct form: That it is planned.
I'm referring to ZeroNetJS most of the time, since in comparison to IPZN it already defines a clear architecture
Also, stop disagreeing. You're slowing everything.
Propose a solution instead.
And, no, IPZN isn't currently feasible as-is. First make it so it's a feasible drop-in replacement.
IPZN isn't currently feasible as-is.
As I said, it works as long as IPFS's pubsub works, it's not some complicated stuff.
Secondly, supporting too many protocols reduces performance and efficiency; some of the protocols you listed on the issue are outdated as well.
Explain a bit please...
Efficiency:
If you want to make a site available on all supported protocols/networks, you have to abstract their common features from them.
Outdated, e.g. FreeNet, GNUNet.
And it seems nonsense to support BitTorrent and WebTorrent; what's the purpose? Does it make ZeroNet more robust?
Firstly, you should separate the application layer in order to make a protocol stack.
ZeroNet already contains plugins for some low-level network communications. Other protocols can also be added as new plugins. I agree that it would be good to have ZeroNet more modular, but this can be done in the existing code.
But too much layering and modularity aren't really needed. ZeroNet is a full self-contained network with support for sites, users and (almost) real-time updates. So it is not really needed to be very layered as most features are already built-in. But IPFS is only a file system for storage without any other functionalities. For it, it is needed to be modular as developers need to implement most functionalities themselves.
I'm not really seeing IPFS as ZeroNet competitor but as an addition. So ideally, IPFS would be a plugin to ZeroNet so it would be able to use either ZeroNet protocol, BitTorrent protocol, IPFS protocol or any other protocol depending on what is needed.
If you want to make a site available on all supported protocols/networks, you have to abstract their common features from them.
If this is implemented in a good way, it can also be efficient. And it is good to support multiple protocols, to make the network bigger and reach more users.
It could actually be more efficient. For example, you could use BitTorrent, IPFS or Swarm (depending on what is most appropriate/available/not blocked) for big static content, as they are mostly made for it. Then you could use the ZeroNet protocol or libp2p for dynamic content (see the sketch below). It is the same for DNS systems for ZeroNet.
But to do this, you don't need a completely new project. Most of the things can be done with plugins and some of them with some core changes.
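A minimal sketch of such per-content protocol selection (the chooser and the preference order are assumptions; the protocol names are taken from the comment above):

# Pick a transport per content type, skipping blocked protocols.
BIG_STATIC_PREFERENCE = ["bittorrent", "ipfs", "swarm"]
DYNAMIC_PREFERENCE = ["zeronet", "libp2p"]

def choose_protocol(is_big_static, blocked=frozenset()):
    preference = BIG_STATIC_PREFERENCE if is_big_static else DYNAMIC_PREFERENCE
    for protocol in preference:
        if protocol not in blocked:
            return protocol
    return None  # nothing usable; the caller decides what to do

print(choose_protocol(True, blocked={"bittorrent"}))  # -> "ipfs"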
Outdated, e.g. FreeNet, GNUNet.
What makes Freenet and GNUNet "outdated" and IPFS "very modern"? By development and releases, certainly nothing, as both (actually all three) of them have a very active development history. And they also have a lot of users. Ok, IPFS is newer and has some better features, but how would you make sure that there will be no better solution than IPFS in the future?
And it seems nonsense to support BitTorrent and WebTorrent, what's the purpose. Does it make Zeronet more robust ?
Why not? Yes, to make ZeroNet more robust.
If more protocols would be supported, it would be harder to block all of them. Also see above and other comments for more details.
But too much layering and modularity aren't really needed.
Sure?
IPFS is only a file system for storage without any other functionalities.
Sure? I mentioned pubsub so many times before.
support multiple protocols, to make the network bigger and reach more users.
Do you have that effort? If not, a single best protocol is enough.
Most of the things can be done with plugins
IPFS is not a 'plugin' or an alternative protocol, it will be the main protocol.
IPFS "very modern"
Yeah, of course. IPFS is just fking modern.
how would you make sure that there will be no better solution than IPFS in the future?
That's not the thing we should think about now.
Yes, to make ZeroNet more robust.
Not really
However, the main reason is that I want to take full control of the project. I forgot to say this, uh
@HelloZeroNet I want to mention something here that's somewhat related to this and deduplication - currently, if we have two separate zites that use the same merged zites type, both of these zites will have a database that contains the exact same information. I believe we should be able to fix this by having one database for a merged zite type handled by the core, and then these two zites would be able to query from this one database by using the standard dbQuery.
But there's a major problem with this: both of these zites could have different dbschemas.
So... just something to think about. This isn't a full proposal or anything, which is why it's not in a separate issue.
De-duplicating the database is not possible, because the dbschema is defined by the merger site, not by the merged one.
You can share database between sites by using Cors permission and Page.cmd("as", "dbQuery", ["Anything"])
De-duplicating the database is not possible,
If we don't have a 'database', we can.
Refer to orbit-db
De-duplicating the database is not possible, because the dbschema is defined by the merger site, not by the merged one.
Yeah, I know about how merger sites work, hence... "But there's a major problem with this: both of these zites could have different dbschemas." (I've also created many sites that are merger sites, but it seems you've forgotten about that... or you just don't pay attention to ZeroNet devs).
I don't think it's as impossible as you think, but I do think it's a bit hard to do.
You can share database between sites by using Cors permission and Page.cmd("as", "dbQuery", ["Anything"])
Right... but this doesn't have write permission, so you have to end up resorting to merged zites for that.
Yeah, I know about how merger sites work, hence..
Sure I know, but this is a public conversation and I try to give answers that are as general as possible to make them useful to others as well.
I didn't read all of those extensive comments.
Why start something like the browser wars?
It's pretty simple to keep compatibility. Use a .dat folder inside every site to keep its versioning snapshots on every update, and IPFS for generating hashes of the blobs inside this folder. Main files could also have their IPFS hash for de-duplication.
Keeping dirty folder structure?
I don't care about the dirty compatibility
It's pretty simple to keep compatibility. Use a .dat folder inside every site to keep its versioning snapshots on every update, and IPFS for generating hashes of the blobs inside this folder. Main files could also have their IPFS hash for de-duplication.
That's... not how that works. It may seem intuitive, but it will turn out to get really horrible really quickly or it won't work for most of the cases (for example every user would need to track their own .dat folder for user data, but this wouldn't be enforceable, at least without tons of hacks)
Keeping dirty folder structure?
I don't care about the dirty compatibility
_Every second spent fighting in this thread is a second that could've been used to write code that actually proves one solution better than another_
Every second spent fighting in this thread is a second that could've been used to write code that actually proves one solution better than another
Of course, I will soon give you all a proof-of-concept, but for now I have to wait.
You don't understand the theory, and so you say 'give me the proof-of-concept'.
The first thing you could do is to make your search engine work better than Zirch.
@blurHY: You've added nothing meaningful to this conversation. Yes, we get it: you're deeply and hopelessly infatuated with IPFS, presumably due to the psychology of previous investment in your InterPlanetary ZeroNet (IPZN). The overwhelming majority of us disagree with your hardline position. Can we please move constructively on?
Also, stop belligerently polluting this and other ZeroNet threads. This includes #2198, #2194, #2189, #2062, and the list goes on and on. #2090 is probably the only issue where you offer sound advice unburdened by sarcasm, vitriol, or one-liner exhortations extolling the virtues of IPFS and denigrating everything else in existence.
More threads like #2090. Fewer threads like this and all of your other commentary. 谢谢 (thank you).
The first thing you could do is to make your search engine work better than Zirch.
As I said, it's centralized and it has already been abandoned.
Moreover, it's not the point.
PS: I have nothing more to say, for you are too lazy to discover other things better than ZeroNet.
As I said, it's centralized and it has already been abandoned.
Both ZeroNet and IPFS support a decent search engine.
As I said, it's centralized and it has already been abandoned.
Both ZeroNet and IPFS support a decent search engine.
It will soon be banned in China when it is used by many people
Hello @HelloZeroNet, I think the first thing you must do is separate the static content from any .json! The content should be under a different folder, not in the same one as the json! Also it would be great to have an option to use ZeroNet with no headers and no sandbox, simply serving static content! Verification is already done by the network using the Bitcoin address! Much like IPFS, the content is accessible under the (hash), in this case the Bitcoin address.
The main problem with ZeroNet is that it is very difficult for most people to unset the headers and correctly proxy all requests to the backend from the clearnet. For example, I am able to use any normal TLD and proxy it to the ZeroNet backend with the frame and headers stripped. The back-end proxies incoming requests from the front-end to 127.0.0.1:43110/raw/example.bit. On the front-end I add my own headers.
I can work perfectly on my local machine, publish and sign everything on localhost, and use ZeroNet over Tor. The back-end downloads my updates... Then people who come to my example.org domain will be proxied correctly to the ZeroNet back-end behind the firewall, where the STATIC example.bit is located, and voila! Everything works perfectly. I think this is even better than IPFS!
You only need a server which acts as the front-end and some ZeroNet backend servers as load balancers. All publishing/updating is done ONLY on localhost. :) Decentralized? Most certainly!
I think this is even better than IPFS dnslink, which clearly exposes the gateway...
I will possibly write a guide on how to proxy any TLD to content hosted in ZeroNet. As I said, it is way better than IPFS. Just rethink how many people you allow to use ZeroNet. By allowing all headers to be removed and the frame to be disabled, SOLELY on purpose to act as a back-end, you will eventually open up ZeroNet to the entire world! The frame and headers you included in ZeroNet are only useful when there is no need to proxy traffic to it, like when someone installs it for the first time on his/her local machine... The back-end doesn't need to sign anything, just download the updated sites...
@BugsAllOverMe See #2214 for DNS support.
With a single-user site it would be easier to allow downloading big files by default too. Some websites contain large files in specific directories. It can be confusing to users and site owners that some users cannot access large files the easy way, because there is no mode to set up automatic download of all large files when the user downloads the zite. A button would be easier: one that shows the total size of the large files in the site, and restricts large-file downloads only if the user clicks it when the site loads because he does not want them. If the user downloads the zite, clicks the button to skip the large files, and later decides he wants them after all, a top-right button that shows the total size of all large files in the current zite and downloads them all with one click would be better. Also, allow a simple button click in the client to resume a single large-file download: when a user clicks a large file, the client should show a non-frustrating message with a download button, without site owners needing to integrate a +Seed button. And this would work with both single-user and multi-user zites. These easy things could solve many seeding, downloading and +Seed button integration and usage issues. Many large files are currently hard to download, poorly seeded, or dead. If we combine this with the proposal, we can get healthier files from the combination of site-independent file storage and de-duplication.
@HelloZeroNet Here is ZeroTalk comment that says:
"
All big files should be identified by hashsum and replaceable easily if hashsum matched. [...]
Nofish confirmed that this isn't how ZeroNet currently works.
"
I do not understand what that means, but I wanted to note that it would be good if the "Content addressed data" feature did not modify the files/big files I "upload" to zeronet and kept them identical to the source file that is outside of the zeronet folders - so the two can be properly deduplicated by the operating system and replaced/symlinked/hardlinked.
Also, it would be a benefit if the name of the bigfile located in zeronet contained (though it would not necessarily have to be equal to) the original human-friendly file name, so one does not need to rely only on zeronet to find the file.
Use SHAKE-256 for file hash.
@slrslr If you add (copy or symlink) a file or a directory to the data/__static__/__add__ dir, then you will be able to list them on ZeroHello (and on other sites you give access to) and initiate the hashing process. When the process is done, the file will be moved to data/__static__/[current_dir] and the generated hash (and the shareable url) will be returned to the site
This way you don't have to write or store your files multiple times and the file names and the content will be the same.
If you upload the file using http post, then I think there is no way to avoid it.
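A rough sketch of that flow (the paths are from the proposal above; keeping the original file name, the sha512t truncation and the date format of the target directory are assumptions):

import hashlib
import os
import shutil
import time

ADD_DIR = "data/__static__/__add__"

def hash_added_files():
    target_dir = os.path.join("data/__static__", time.strftime("%Y-%m-%d"))
    os.makedirs(target_dir, exist_ok=True)
    for name in os.listdir(ADD_DIR):
        source = os.path.join(ADD_DIR, name)
        hasher = hashlib.sha512()
        with open(source, "rb") as f:
            for block in iter(lambda: f.read(1024 * 1024), b""):
                hasher.update(block)
        file_hash = hasher.hexdigest()[:64]  # sha512t: truncated to 256 bits
        shutil.move(source, os.path.join(target_dir, name))  # keeps the file name
        print("Shareable url: /f/%s/%s" % (file_hash, name))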
Note that content-addressed data should also be accessible from mutable addresses. This would be useful for updating content and getting the newest version easily.
This could be similar to IPNS, where it could use existing zite functionality with public and private keys, so a zite would actually link to the content-addressed data.
It would be important that, when the same file is uploaded under another file name, the program can detect it and merge the seeders/leechers, independently of the site. If we upload the same file to ZeroUp and KopyKate, the filename, and thus the file hash, changes, but the file is absolutely the same. This way the users who seed the same file can't be merged: the program cannot detect that somebody uploaded the same file to ZeroUp, KopyKate etc. It would be easier if the user were warned when the current file already exists somewhere else, on another site, because otherwise you need extra copies of the same file seeded by different people, which is a huge waste of resources. It is not a good thing that ZeroUp, KopyKate etc. rename files, e.g. the original file.mp4 to 1234567890-file.mp4. This way you and the other users can't seed the same file that exists on another site without manually editing the .json file. Existing seeders are lost if the same file is re-uploaded or uploaded with another name, and unnecessary copies are stored, which takes up space on your hard drive.
The problem will be solved by this feature as the files will be stored/shared independently from the sites.