Currently ZeroNet uses a wrapper for the sidebar and notifications UI, and embeds site content with an iframe. This proposal fixes some of the issues caused by iframing by embedding the ZeroNet UI into the site content instead.
Current architecture:
┌─────────────────────────────────────────────────────┐
│ WRAPPER (src/Ui/template/wrapper.html)              │
│ Stores secrets (e.g. wrapper key)                   │
│ ┌─────────────────────────────────────────────────┐ │
│ │ WS UPLINK                                       │ │
│ │ UiWebsocket API                                 │ │
│ └─────────────────────────────────────────────────┘ │
│ ┌─────────────────────────────────────────────────┐ │
│ │ NOTIFICATIONS (<div>)                           │ │
│ │ ╔═════════════════════════════════════════════╗ │ │
│ │ ║ SITE NOTIFICATIONS (<div>)                  ║ │ │
│ │ ║ Unsafe content (possible XSS)               ║ │ │
│ │ ╚═════════════════════════════════════════════╝ │ │
│ │ ┌─────────────────────────────────────────────┐ │ │
│ │ │ WRAPPER NOTIFICATIONS (<div>)               │ │ │
│ │ │ Safe content (no leaking or spoofing)       │ │ │
│ │ └─────────────────────────────────────────────┘ │ │
│ └─────────────────────────────────────────────────┘ │
│ ┌─────────────────────────────────────────────────┐ │
│ │ SIDEBAR (<div>)                                 │ │
│ │ Site and key management, safe                   │ │
│ │ ┌─────────────────────────────────────────────┐ │ │
│ │ │ SITE DATA                                   │ │ │
│ │ │ Title, description, donate links, etc.      │ │ │
│ │ └─────────────────────────────────────────────┘ │ │
│ └─────────────────────────────────────────────────┘ │
│ ╔═════════════════════════════════════════════════╗ │
│ ║ IFRAME SANDBOX                                  ║ │
│ ║ Unsafe content (managed by site code)           ║ │
│ ╚═════════════════════════════════════════════════╝ │
└─────────────────────────────────────────────────────┘
Safety layers are separated with a double border.
No software has zero bugs, and this includes security issues. There are many possible attack points here.
Proposed architecture:
┌─────────────────────────────────────────────────────┐
│ HTML PAGE                                           │
│ ┌─────────────────────────────────────────────────┐ │
│ │ PREFIX (shadow DOM)                             │ │
│ │ Practically invisible to site content           │ │
│ │ ┌─────────────────────────────────────────────┐ │ │
│ │ │ SITE NOTIFICATIONS (<div>)                  │ │ │
│ │ │ Unsafe content (possible XSS)               │ │ │
│ │ └─────────────────────────────────────────────┘ │ │
│ │ ╔═════════════════════════════════════════════╗ │ │
│ │ ║ WRAPPER NOTIFICATIONS (<iframe>)            ║ │ │
│ │ ║ Safe content (no leaking or spoofing)       ║ │ │
│ │ ╚═════════════════════════════════════════════╝ │ │
│ │ ╔═════════════════════════════════════════════╗ │ │
│ │ ║ SIDEBAR (iframe)                            ║ │ │
│ │ ║ Site and key management                     ║ │ │
│ │ ║ ╔═════════════════════════════════════════╗ ║ │ │
│ │ ║ ║ GATE (iframe)                           ║ ║ │ │
│ │ ║ ║ ┌─────────────────────────────────────┐ ║ ║ │ │
│ │ ║ ║ │ WS UPLINK                           │ ║ ║ │ │
│ │ ║ ║ │ UiWebsocket API (ADMIN)             │ ║ ║ │ │
│ │ ║ ║ └─────────────────────────────────────┘ ║ ║ │ │
│ │ ║ ╚═════════════════════════════════════════╝ ║ │ │
│ │ ╚═════════════════════════════════════════════╝ │ │
│ │ ╔═════════════════════════════════════════════╗ │ │
│ │ ║ GATE (iframe)                               ║ │ │
│ │ ║ ┌─────────────────────────────────────────┐ ║ │ │
│ │ ║ │ WS UPLINK                               │ ║ │ │
│ │ ║ │ UiWebsocket API (site)                  │ ║ │ │
│ │ ║ └─────────────────────────────────────────┘ ║ │ │
│ │ ╚═════════════════════════════════════════════╝ │ │
│ └─────────────────────────────────────────────────┘ │
│ ┌─────────────────────────────────────────────────┐ │
│ │ SITE DATA                                       │ │
│ │ Unsafe content (managed by site code)           │ │
│ └─────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────┘
Sure, it might look more complicated at first glance, but security comes at a cost.
The resulting HTML (the one the browser receives) consists of a prefix and the real site .html file.
The prefix is a "magic" HTML snippet that sets up an analogue of the old wrapper by creating a shadow DOM node. This ensures that the sidebar and notifications are shown correctly, independent of the main site's styles, and that they don't affect the site itself.
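A minimal sketch of the idea (the element id, markup and sandbox attributes here are hypothetical, not the actual implementation):
<div id="zeronet-prefix"></div>
<script>
// "closed" mode: site code can't reach the UI through element.shadowRoot
var uiRoot = document.getElementById("zeronet-prefix").attachShadow({mode: "closed"});
uiRoot.innerHTML =
    '<style>/* UI styles, isolated from site CSS */</style>' +
    '<div class="notifications"></div>' +
    '<iframe class="sidebar" sandbox="allow-scripts"></iframe>';
</script>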
A gate is an iframe that acts as a gate between UiWebsocket and its user. The gate handler uses the Referer header to check what permissions the websocket should have.
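Roughly, the gate itself is a dumb relay; the interesting part (assigning permissions from the Referer) happens server-side when the gate page is served. A sketch (the message format and endpoint details are assumptions):
// Inside the gate iframe: relay messages between the embedding page and the websocket.
var ws = new WebSocket("ws://" + location.host + "/Websocket");
var queue = [];
ws.onopen = function () {
    queue.forEach(function (msg) { ws.send(msg); });
    queue = [];
};
window.addEventListener("message", function (event) {
    var msg = JSON.stringify(event.data);
    if (ws.readyState === WebSocket.OPEN) ws.send(msg); else queue.push(msg);
});
ws.onmessage = function (event) {
    window.parent.postMessage(JSON.parse(event.data), "*");
};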
ETA: at most 1 week if no major issues are found
@HelloZeroNet @geekless @rllola @DATSEC @ValdikSS @filips123 @anthyg @anoadragon453 @github-zeronet
Waiting for review!
I first expected compatibility issues, but, surprisingly, the latest Firefox and Chrome work well in both Incognito and classic mode.
I've found a minor issue with pushState/replaceState: sites could impersonate other sites, because http://127.0.0.1:43110/talk.zeronetwork.bit and http://127.0.0.1:43110/me.zeronetwork.bit are same-origin. This can, however, be fixed by overriding history.pushState.
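Something along these lines (a sketch; the exact validation rule is an assumption):
var origPushState = history.pushState.bind(history);
history.pushState = function (state, title, url) {
    // Only allow URLs that stay under the current site's root path
    var siteRoot = "/" + location.pathname.split("/")[1] + "/";
    if (url != null && new URL(url, location.href).pathname.indexOf(siteRoot) !== 0) {
        throw new Error("pushState outside of the site is not allowed");
    }
    return origPushState(state, title, url);
};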
Nice presentation. Very hard to get your head around without something to kick the tires on. Hopefully it won't take much effort to make the concept testable enough to validate that it doesn't break apps or core ZN functionality like the sidebar.
Does this impact the URL at all? Currently, the Angular app needs to set the base href so references work. Otherwise, they try to access content from the root. e.g., "/site.css".
In reality, it needs to do some parsing of the URL so you don't have to hard-code the site address. There is a current issue with .bit domains that I haven't tested with an app, because I don't have a .bit domain. I just know it is an issue today with relative referencing.
It's not clear if this prefix concept in the DOM can have an impact today. One thing that makes this challenging to predict up-front is you can't realistically know what all the third party libraries are doing in a Node-based app until you see an error. You have the core ones used to build a basic Angular, React or Vue app, then you have a bunch more you add for app functionality (such as charting). Later, we upgrade these, hoping they don't break.
What's the minimal you can do (time wise) to be able to have a branch where we can test the highest risk portions of the design, such as the new PREFIX (shadow DOM)? It doesn't have to do everything or provide new functionality. And, for testing, it can still rely on NOSANDBOX, so long as it provides a way to test new scenarios. Just need to validate that it doesn't break apps.
What's the minimal you can do (time wise) to be able to have a branch where we can test the highest risk portions of the design, such as the new PREFIX (shadow DOM)?
If you're asking for a high-risk PoC (meaning unsafe, i.e. can be abused by sites), it'll probably be finished today (it's 9am for me). By now, ZeroTalk, static sites, ZeroSites and ZeroHello all mostly work (except localStorage stuff and such).
Ok, so, are there any sites using wrapperWebNotification / wrapperCloseWebNotification at all yet? I have only one, but it's not released yet, and I don't know of other people using these commands, so I don't think it makes sense to support them.
@imachug Those commands should probably remain available, but marked as deprecated and removed in the next major version.
Also, how would you prevent cross-origin access? Because all sites would be on the same origin, any site would be able to access all data from other sites, even without permission.
I think that this would be a good improvement. This would also allow Progressive Web Apps, with Web Manifests and Service Workers available.
Can you share your unsafe PoC when it is available? I would like to test it.
Those commands should probably remain available, but marked as deprecated and removed in the next major version.
Sure, but innerLoaded, wrapperPushState and wrapperReplaceState are still useful, so we can probably keep them (or remove them in two major versions or something).
@HelloZeroNet
Also, how would you prevent cross-origin access? Because all sites would be on the same origin, any site would be able to access all data from other sites, even without permission.
We'd compare Origin/Referer to the request URL, and if it doesn't match, we'd use X-Frame-Options and Content-Security-Policy for iframes and just 403 Forbidden for data.
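The check itself is simple; sketched in JS here for illustration (ZeroNet's server side is Python):
// Decide whether a request for siteAddress may be served, given its Referer header
function isSameSite(siteAddress, referer) {
    if (!referer) return true; // e.g. direct navigation, no header sent
    var path = new URL(referer).pathname; // e.g. "/1SiteAddress/index.html"
    return path.split("/")[1] === siteAddress;
}
// On mismatch: X-Frame-Options/CSP for iframes, plain 403 Forbidden for data.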
Can you share your unsafe PoC when it is available? I would like to test it.
Sure, ping me tomorrow if I forget to send you a link.
I'm currently a bit stuck with the site-in-site (aka picture-in-picture) mode implementation. A good old three-year-old StackOverflow thread without an answer. Please tell me if you know how to fix that issue.
Unfortunately I couldn't find a way to fix that issue, so I'm switching to another architecture:
┌─────────────────────────────────────────────────────┐
│ HTML PAGE                                           │
│ Secret (wrapper key [site])                         │
│ ┌─────────────────────────────────────────────────┐ │
│ │ PREFIX (shadow DOM)                             │ │
│ │ Practically invisible to site content           │ │
│ │ ┌─────────────────────────────────────────────┐ │ │
│ │ │ SITE NOTIFICATIONS (<div>)                  │ │ │
│ │ │ Unsafe content (possible XSS)               │ │ │
│ │ └─────────────────────────────────────────────┘ │ │
│ │ ╔═════════════════════════════════════════════╗ │ │
│ │ ║ WRAPPER NOTIFICATIONS (<iframe>)            ║ │ │
│ │ ║ Safe content (no leaking or spoofing)       ║ │ │
│ │ ╚═════════════════════════════════════════════╝ │ │
│ │ ╔═════════════════════════════════════════════╗ │ │
│ │ ║ SIDEBAR (iframe)                            ║ │ │
│ │ ║ Site and key management                     ║ │ │
│ │ ║ Secret (wrapper key [ADMIN])                ║ │ │
│ │ ║ ┌─────────────────────────────────────────┐ ║ │ │
│ │ ║ │ WS UPLINK                               │ ║ │ │
│ │ ║ │ UiWebsocket API (ADMIN)                 │ ║ │ │
│ │ ║ └─────────────────────────────────────────┘ ║ │ │
│ │ ╚═════════════════════════════════════════════╝ │ │
│ │ ┌─────────────────────────────────────────────┐ │ │
│ │ │ WS UPLINK                                   │ │ │
│ │ │ UiWebsocket API (site)                      │ │ │
│ │ └─────────────────────────────────────────────┘ │ │
│ └─────────────────────────────────────────────────┘ │
│ ┌─────────────────────────────────────────────────┐ │
│ │ SITE DATA                                       │ │
│ │ Unsafe content (managed by site code)           │ │
│ └─────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────┘
Basically, we're getting rid of the iframe gate and using a wrapper key instead. This architecture is still secure, but it's more error-prone, so I'll have to be careful.
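So both uplinks authenticate with a key baked into the served HTML, roughly like the current wrapper does (a sketch; the exact parameter name is an assumption):
// The page gets wrapper key [site], the sidebar iframe gets wrapper key [ADMIN];
// the server decides the permission level by looking the key up.
var ws = new WebSocket("ws://" + location.host + "/Websocket?wrapper_key=" + wrapperKey);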
What is the difference between site and wrapper notifications? Could both of them use a DIV or IFRAME, or even be in the same element?
Site notifications are possibly insecure and don't store sensitive information; wrapper notifications are always secure and may store sensitive information. For example, the "Database was rebuilt" notification is a site notification, while "Enter private key: [...]" (which you get when you press Sign & publish) is a wrapper notification.
It's unsafe to use a div for wrapper notifications. But it's possible to use an iframe for site notifications, though that's overcomplicated and slow.
@filips123 Forgot to ping you
I've noticed that a lot of wrapper code uses jQuery. We probably don't want to pollute sites' environment, so I'm trying to port as much code as possible to Vanilla JS.
Making sidebar work turned out to be a bit more difficult than expected, so you'll have to wait a bit.
Looking good, but the history access could be problematic, and there are probably multiple ways to recover the original function, e.g. using an iframe:
history.pushState = console.log
ƒ log() { [native code] }
$("<iframe id='itemp'>hello</iframe>").appendTo(document.body)[0].contentWindow.history.pushState
ƒ pushState() { [native code] }
Sure; but what would you do after getting pushState? pushState("", "", "http://facebook.com") won't work, because http://facebook.com and about:blank are not same-origin.
@imachug Maybe changing the URL to the address of another ZeroNet site. All sites would be on the same origin, but I don't know if HTTP CORS headers actually prevent this.
@filips123 Eh, that's why we're replacing pushState. If you're referring to nofish's iframe PoC, it won't work, because http://127.0.0.1:43110 and about:blank aren't same-origin.
@imachug Ok, this makes sense. And you probably can't open http://127.0.0.1:43110 in an iframe, right?
Also, would it be possible to differentiate between legitimate and malicious iframes? Because a site can use an iframe just to display other content in it (an embedded game, a sidebar... hosted on it) or to attack a different site.
Ok, this makes sense. And you probably can't open http://127.0.0.1:43110 in an iframe, right?
Well, it's possible (the site-in-site case), but it's restricted, so you can't call pushState anyway.
Also, would it be possible to differentiate between legitimate and malicious iframes? Because a site can use an iframe just to display other content in it (an embedded game, a sidebar... hosted on it) or to attack a different site.
It's impossible to attack a site with an iframe because there's a sandbox. The only way of communication is postMessage. I'm still thinking about issues, but I don't see any yet.
Sure; but what would you do after getting pushState? pushState("", "", "http://facebook.com") won't work, because http://facebook.com and about:blank are not same-origin.
You are right, but it works this way:
> history.replaceState = "nope"
> history.replaceState = $("<iframe id='itemp'>hello</iframe>").appendTo(document.body)[0].contentWindow.history.replaceState
> history.replaceState("", "", "/AnyUrl")
> window.location.href
"http://127.0.0.1:43110/AnyUrl"
What browser are you using? That sounds like a major issue
Chrome, but just tested and also works in FF
Do you have any ideas on how to solve this? Switching to site_address.zero or a similar scheme would help here (and solve other problems too), but that won't work well with proxies and is also a major change.
Not really; maybe we could set up a setInterval that monitors window.top.location, but it's pretty hackish.
@HelloZeroNet What if the site somehow deleted/replaced setInterval?
@filips123 The prefix code is run before site code, so we can save setInterval beforehand.
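Something like this (a sketch):
// Run by the prefix, before any site code:
var safeSetInterval = window.setInterval.bind(window);
// Even if the site later overwrites window.setInterval, the saved copy keeps working:
safeSetInterval(function () {
    // hypothetical check that the URL still points to the current site
}, 1000);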
@imachug Can site code access the prefix's intervals? If so, the site could maybe guess (or brute-force) all interval IDs and clear them (clearInterval), which would disable the security check.
What's the minimal you can do (time wise) to be able to have a branch where we can test the highest risk portions of the design, such as the new PREFIX (shadow DOM)?
If you're asking for a high-risk PoC (meaning unsafe, i.e. can be abused by sites), it'll probably be finished today (it's 9am for me). By now, ZeroTalk, static sites, ZeroSites and ZeroHello all mostly work (except localStorage stuff and such).
One of the most useful steps I've learned to do in projects early is test high risks. A high risk is any threat to the success of the project. We begin by listing all risks, rating them High, Medium, Low, then focusing on mitigating High risks in the next step. Your goal is to at least lower to Medium.
Typically, a risk is high because there are unknowns. You may be including a new third party library, and don't know if this library will work because you've never used it before.
To mitigate, you test it to verify assumptions, bringing the risk down to Med, Low or None, because you can at least verify that the library meets core assumptions of functionality.
Your goal is to eliminate all high-level risk with minimal effort ASAP, before spending a lot of time on the project. What you don't want to do is spend 6 weeks developing something only to run into a show stopper or a reason others can't use your project, when you could have identified it up-front with a little bit of test code.
Obviously, if you have major security concerns, they could be a high level risks, too, because they can doom the project if you can't find a resolution. But, in the case of what you're testing, a high level risk is that the shadow DOM breaks the types of applications you are trying to enable with this project.
So, you'd want to create a test, as easy and minimal as possible, that allows us to then test the Angular 8 app I created. That is why this test can still run under NOSANDBOX: this isn't production code. Its purpose is only to prove that the shadow DOM isn't a show stopper.
The code can be throw-away solely for the purpose of the test (testing a 3rd party library), or it can be code ultimately used in the project.
TOTALLY OFFTOPIC:
@HelloZeroNet Can you check out this thread in UNLIMIT TALK, please? (not sure how best to reach you).
Can you clone ZT Talk into a new "ZeroNet Development" ZT with high user limits like UNLIMIT TALK, or give me or imachug your blessing to do so?
I've noticed that a lot of wrapper code uses jQuery. We probably don't want to pollute sites' environment, so I'm trying to port as much code as possible to Vanilla JS.
I can't speak to React or Vue, but I can say that, in order to get Angular working, I have to include:
<script>document.write('<base href="' + document.location + '" />');</script>
which effectively becomes the site's base (hard-coding the site address would also work, but isn't practical):
<base href="/1Gtzk5w72SmSx7GW6Y1JaGLgPfkXH2Wanz/">
or, if I inspect it:
<base href="http://127.0.0.1:43110/1Gtzk5w72SmSx7GW6Y1JaGLgPfkXH2Wanz/?wrapper_nonce=a23982da7c19577d48c588e311f256b5a272db7bd0dc13212555479a0e5594f0">
I don't know much about locking down access beyond this, because until ZN, I've always assumed the browser is insecure and relied 100% on the server side for security, tokenizing the UI with cookies and AJAX/WebSockets. Obviously, ZN is unique in this respect.
I'm just wondering if the "base href" can play a role in helping you lock down a site. You probably already dismissed it with good reason. :)
@github-zeronet I'm about to finish sidebar, I'll give you a PoC soon.
Can site code access the prefix's intervals? If so, the site could maybe guess (or brute-force) all interval IDs and clear them (clearInterval), which would disable the security check.
Due to how browsers implement setInterval (note: browsers, not the spec), site code can disable our intervals, and I don't think there's a good solution. I could have fun handling iframe creation and replacing pushState there, but it looks too fragile. Maybe we should just say that changing the URL to another site isn't a problem (and you can use the sidebar if you want to make sure you're browsing the correct site).
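For illustration: in current browsers, timer IDs come from one shared, sequential counter, so site code can clear intervals it never created:
var probe = window.setInterval(function () {}, 100000);
window.clearInterval(probe);
// Any interval the prefix registered earlier has a smaller ID:
for (var i = 0; i <= probe; i++) window.clearInterval(i);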
Maybe we should just say that changing the URL to another site isn't a problem
But it probably is. A site could pretend to be another site and trick the user into entering sensitive information.
Uh, that's why I said "and you can use the sidebar if you want to make sure you're browsing the correct site". I agree that it's not the best solution, but that's the best thing we can do without big changes to ZeroNet.
If we allow big changes, however... The best thing to do would be to utilize browsers' same-origin policy, so each site would have its own origin. This is possible to do with a local DNS server -- we'd proxy *.bit and *.zero to 127.0.0.1 and delegate all other requests to the default DNS server. This would solve, uh, all the problems we're having. The only issue here is that we'd have to make the user change proxy settings.
@HelloZeroNet If we're having problems with such a simple thing as the History API, we'll probably have more problems in the future. Thus, I see no reason to continue developing this issue without switching to better domain names like http://talk.zeronetwork.bit and http://1talkfrmwvbnsoof4iokay9euxtbtjipt.zero instead of http://127.0.0.1:43110/talk.zeronetwork.bit and http://127.0.0.1:43110/1TaLkFrMwvbNsooF4ioKAY9EuxTBTjipT. Please note that this change won't break proxies -- we'd use https://talk.zeronetwork.bit.zn.amorgan.xyz and https://1talkfrmwvbnsoof4iokay9euxtbtjipt.zn.amorgan.xyz.
Pinging people from related issues:
Choosing what hostname format to use has become more important than before. We should switch as soon as possible.
For now, we have to deal with case-insensitive hostnames -- that breaks Bitcoin addresses, which are case-sensitive. Possible solutions:
1. When announcing sites, we'd announce lowercase addresses instead. When checking signatures, we'd compare recovered_address.lower() to the real (lowercase) site address. One con: 1 and L are different, while their lowercase counterparts 1 and l look similar.
2. Switch new sites to an address format that is lowercase by design, e.g. bech32.
I'd choose the first solution, but I'm open to other ideas.
We can have both: bech32 for new sites, lower case for old sites.
Ok, are you fine with me making a PR for this?
My main problem with this solution is that I (and probably many others) would not trust any application to modify the proxy settings of the browser. Another problem is that it would make deployment of the client harder, especially if the remote machine does not have domain names configured (e.g. on a LAN network).
@HelloZeroNet Or just use both (along with other cryptographies) and let the user decide what they want to use.
Discussion should be moved to #2087, which is the relevant issue for this problem. So, domains should end with the TLD .zeronet (but still allow access via 127.0.0.1), and proxies should be accessible via domain.bit.proxy.com. But it should still be possible to access sites the normal way.
but still allow access via 127.0.0.1
Do you understand that it breaks the goal of the same-origin policy? That's the reason why I asked to speed up domain naming scheme standardization.
My main problem with this solution is that I (and probably many others) would not trust any application to modify the proxy settings of the browser.
Seriously, ZeroNet core can do more harm than changing proxy settings, so there's no reason to worry about those.
Another problem is that it would make deployment of the client harder, especially if the remote machine does not have domain names configured (e.g. on a LAN network)
We could set 192.168.12.34:53 as the DNS server then. Another solution is to use ports for this (ip:43110 for site 1, ip:43111 for site 2, ip:43112 for site 3, etc.). But yeah, using IPs is troublesome with this solution.
Do you know a better solution?
Do you understand that it breaks the goal of the same-origin policy?
The code could serve the old/current version of the wrapper when accessed from the same origin.
Also, you can simply use a proxy PAC file to set 127.0.0.1 as the proxy for all .zeronet domains. To start, ZeroNet needs to be modified to allow those domains and handle them properly. This should not be so hard, as part of this is already in ZeroNet, but it doesn't work properly.
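A minimal PAC file for this could look like the following (a sketch, assuming the proposed .zeronet TLD):
function FindProxyForURL(url, host) {
    if (dnsDomainIs(host, ".zeronet")) {
        return "PROXY 127.0.0.1:43110"; // let the local ZeroNet instance serve it
    }
    return "DIRECT"; // everything else goes out normally
}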
The next steps would be to redesign the wrapper code and create more complete ways to handle URL rewrites (DNS servers, proxies, extensions...).
I think this unnecessarily increases code size, and also makes site support troublesome -- you'd have to make sure that your sites work in both (rather different) cases.
imachug, how could you change the DNS server just for ZN without impacting other tabs, sites, browsers, apps, or the OS? I'd at least try to limit the scope of this to the currently running browser.
I can see how that would solve this problem if every site had a unique host. I'm just not aware of any ability to scope DNS to a tab, window or browser. I'd love to know if there is a way.
How do subdomains play a role in all this? Because it would be easier to minimize DNS issues if *.zeronet.xyz solved all your problems, because then you would wildcard that to 127.0.0.1, and 127.0.0.1/<site address> redirects to <site address>.zeronet.xyz.
If subdomains don't work, and you are hacking DNS anyway, then you might as well consider "zeronet" as a TLD.
Some of us use very complex DNS setups, and doing this at the OS level, among other things, prevents a user from running their own DNS or doing special configurations (e.g., other software that also extends DNS).
Of course, you can solve all this if you require ZN to run by itself in a VM, and Docker could end up being the way to go if you must use port 53. But that's a lot to ask of users today. What about mobile or lightweight mesh? It would be really nice if Docker ran on Android and you could run BIND in that. That would be a game changer!
I do see that FF has a new feature where you could plug in the URL of a DNS server. If it supports https, then it might work with HTTP and any port, avoiding conflicts with port 53:
https://www.ghacks.net/2018/04/02/configure-dns-over-https-in-firefox/
You could test this with netcat to validate it works with any port you give it. But, it may still require TLS. It gets more complicated as you add support for other browsers. But, since Tor Browser is built on FF, and I've seen many posts saying "use FF!", I suspect FF covers a lot of the user base.
Are you using a new wrapper template and tying that to a new site permission to provide backward compatibility? If so, then what I'd do for a site that required this DNS config in FF is have a fallback legacy-style site it redirects to if I could detect that the user doesn't meet the requirements, where I'd have instructions (use FF, configure about:config, etc.), then a link to the new site. You'd still have to throw in something like this.
This isn't ideal. But, if you're backed into a corner, hopefully this gives you options.
@github-zeronet Yes, configuring DNS and rewriting at the OS level could be a difficult problem. That's why I think that we should have multiple options for users: legacy 127.0.0.1, a proxy PAC file, a browser extension, OS rewrites, registry changes, network changes... But the first feature should be the possibility to use ZeroNet with a proxy PAC file, which is the simplest way for now.
imachug, how could you change the DNS server just for ZN without impacting other tabs, sites, browsers, apps, or the OS? I'd at least try to limit the scope of this to the currently running browser.
Yes, configuring DNS and rewriting at the OS level could be a difficult problem.
Uh, what? We could just set up a local DNS server that'd delegate clearnet requests to a real DNS, e.g. 8.8.8.8 or whatever the user chooses.
How do subdomains play a role in all this? Because it would be easier to minimize DNS issues if *.zeronet.xyz solved all your problems, because then you would wildcard that to 127.0.0.1.
Using subdomains was the first thing I considered, but Linux & macOS (not sure about Windows, though) don't understand wildcards in /etc/hosts.
The challenge here is that it needs to happen in the browser, which rules out URL rewriting like what Apache does, unless you get into redirects.
But why? Configuring a DNS server (which will be automatic) is easier than setting up extensions or something.
Some of us use very complex DNS setups, and doing this at the OS level, among other things, prevents a user from running their own DNS or doing special configurations (e.g., other software that also extends DNS).
Using several DNS servers and delegating requests between them is rather common, and I don't see how that would be a problem here.
Are you using a new wrapper template and tying that to a new site permission to provide backward compatibility?
Nope, the new prefix way must be the default one and should be the only one.
What we have now is not ideal either, but I think I'm convinced that it's better than what I'm proposing. Nevertheless, we need to move somewhere. I think that this proposal introduces many security issues until we switch from 127.0.0.1:43110 to domains or something similar. Thus, I'm closing this issue. We'll reopen it when the switch occurs. Please continue working on the solution, however.
The task is to make different sites use different origins. An origin is a protocol, hostname and port combination. We can't change the protocol, because it's always http or https. So we're left with the hostname and the port:
1. We can change the hostname to e.g. talk.zeronetwork.bit.
2. We can use :43110 for site 1, :43111 for site 2, etc. One con: :43110 may mean ZeroHello, ZeroTalk or whatever, depending on the device.
There's, however, another solution: we could handle talk.zeronetwork.bit in a browser extension, in a local DNS or somewhere else.
I don't see a good solution so far. Anyone with a fresh idea?
imachug, I understood when I responded that you were describing setting up a DNS server on port 53 at the OS level that would intercept and pass all other requests to another DNS server.
I'd never run it. Here is why:
That said, I'm OK with localizing DNS functionality to ZN, and if need be, a single browser instance, because I can always run other browsers for other functions knowing they are not vulnerable or restricted in any way.
DNS is growing in complications as people continue to innovate and solve new problems, as evidenced by the growth in BIND plug-ins to add functionality and solve interesting problems. You really don't want to be responsible for all the DNS on a person's machine.
The only way I'd consider running a dedicated ZN DNS service on port 53 is in Docker or a VM where ZN is inside the container. I'd never run it on any host where I do anything other than ZN.
Why can't the protocol be changed? It should be possible to add a protocol handler for custom ones like zhttp and zhttps; that seems to solve a lot of issues. (Full disclosure: I'm not too familiar with the current architecture yet, so if that's a stupid question due to the implementation, my apologies!)
Edit:
I proposed new protocols more fully in https://github.com/HelloZeroNet/ZeroNet/issues/2087 if anyone wants to take a look (no implementation proposals, just advantages if we're able to make it work).
How do subdomains play a role in all this? Because it would be easier to minimize DNS issues if *.zeronet.xyz solved all your problems, because then you would wildcard that to 127.0.0.1.
Using subdomains was the first thing I considered, but Linux & macOS (not sure about Windows, though) don't understand wildcards in /etc/hosts.
Setting aside IP resolution, does it solve the problems in the browser, such as local storage?
The challenge here is that it needs to happen in the browser, which rules out URL rewriting like what Apache does, unless you get into redirects.
Some of us use very complex DNS setups, and doing this at the OS level, among other things, prevents a user from running their own DNS or doing special configurations (e.g., other software that also extends DNS).
Are you using a new wrapper template and tying that to a new site permission to provide backward compatibility?
Nope, the new prefix way must be the default one and should be the only one.
Why? Maybe in the end it will be. Yet, why close doors that could be kept open with backward compatibility and a migration path that includes beta testing?
I don't see a good solution so far. Anyone with a fresh idea?
Did you look at this part of my post:
https://www.ghacks.net/2018/04/02/configure-dns-over-https-in-firefox/
You could test this with netcat to validate it works with any port you give it. But, it may still require TLS.
This is a solution I'd be comfortable with as an end-user. DNS would be inside ZN, on any port you gave it, but not 53. It would be local to only that browser instance, containing the security risk.
It would be easy to do a quick test with netcat to verify whether it requires SSL (by redirecting to a non-SSL DNS) and whether it can be pointed to any port (if it resolves successfully), since you'd use netcat to listen on a port like 43900 and point to that.
If this does work, you may be able to create a simple FF plug-in for users to make it easy for them.
Would you like me to test that or would you like to test it?
@imachug Are wildcard certificates really such a problem? The hostname solution currently seems the best one.
For a local ZeroNet instance, we could use talk.zeronetwork.bit.zeronet. For a proxy, we could use talk.zeronetwork.bit.zeronet.xyz (append the proxy URL). Rewriting could be done with a PAC file, an extension or other programs.
@KilowattJunkie This doesn't provide any benefit over the hostname solution. It just gets more complicated, because you have to handle all the protocol schemes, and browsers probably won't recognize them either.
@imachug Are wildcard certificates really such a problem? The hostname solution currently seems the best one.
For a local ZeroNet instance, we could use talk.zeronetwork.bit.zeronet. For a proxy, we could use talk.zeronetwork.bit.zeronet.xyz.
Personally, I haven't been a fan of .bit domains and their reliance on Namecoin. But I do respect others' opinions on it and wouldn't want to break compatibility for them.
I do prefer the idea of a .zeronet (or similar) TLD. If we are going to intercept all ZN DNS requests, we can implement that however we want. The sky is the limit.
@imachug I do like where you were going with DNS. I only oppose listening on port 53 of the host machine. I think having a ZN DNS opens a lot of doors, otherwise.
:github-zeronet hands @imachug a frosty beer: :)
Ok, if DNS over HTTPS works well enough and we can easily integrate it into existing browsers (it should be no more difficult than "open settings, enter this string and press OK"), we could use it. @HelloZeroNet Do you see any problems here?
Setting aside IP resolution, does it solve the problems in the browser, such as local storage?
It does, because we're now using the same-origin policy to our benefit instead of avoiding it.
:github-zeronet hands @imachug a frosty beer: :)
I don't drink beer.
@filips123 Sorry, I have to disagree pretty vehemently. The current proposals add a significant amount of complexity, and actually add a huge new attack surface due to the workarounds needed to get things working. You're also working against current known good systems (like DNS resolution, browser security extensions) by adding a layer on top of it, which is going to cause a lot of maintenance and security concerns in the long run (and also prevents end user customization).
Changing the protocol alone solves the following problems (just off the top of my head): it allows .bit sites to define the ZeroNet address, and it allows interesting things like CNAMEs and redirects to ZeroNet-defined domains.
I'm sure there are a lot more, but that's just what I can think of off the bat. Again, I have not contributed, so I don't mean to step on anyone's toes, but coming in with a fresh pair of eyes, it seems like there is a lot of concern mixing going on, which seems to be the cause of the complexity (versus doing one thing, and doing it well). When I initially heard about ZeroNet, I thought it was simply a way to access a site in a distributed manner, versus doing it in an anonymous or secure way. To me, that's more the realm of TOR, or security extensions to DNS and such. In my opinion, it probably shouldn't be a core concern of the base protocol.
@KilowattJunkie
I'm not sure we're all on the same page yet, because there is a lot to digest here. I like imachug's idea of a DNS layer not only because it helps solve some of the problems we're discussing; it has the greater purpose of allowing more portability of apps built with Angular, React and Vue. You have to keep in mind this reality today:
Local communications do not need SSL in the browser. The only really good browser use case for it today is from browser to relay.
The relays always have the option of using traditional HTTP servers such as Apache or nginx as a proxy, which can handle SSL and do any desired URL rewriting before passing to ZN.
So, what we're looking at is a very minimally intrusive DNS that ZN can own solely for handling the 127.0.0.1 calls. If we could scope it to 127.0.0.1:43110, we would. But, at least we found a way to limit the scope to a browser instance. So, you could configure your Tor Browser to use it, and all your other browsers and applications would still avoid the ZN DNS.
Because the only host we're handling is 127.0.0.1 (setting aside relays not using a reverse proxy), it makes sense to actually allow a DNS layer here, where everything resolves to 127.0.0.1 addresses, but the browser sees each site as a unique host; for the purposes of this thread, that secures ZeroNet while allowing it to run vastly more web apps.
And since they would all point to 127.0.0.1, the benefits of a regular DNS server don't really help anyone today, except, of course, the host name of the relay on the Internet.
Having a ZN DNS means we can actually do some creative things here. Whereas sites have no real ability to use CNAMEs or any other DNS functionality today, a site's content.json could include the equivalent of a ZN DNS zone record. Can you see where this can go?
I agree 100% with all your DNS security concerns, which is why this needs to be as localized as possible. For now, we found a way to limit it to a single browser instance that we hope can work. If we can find a way to narrow the scope further, then great. I'm all for it.
Just keep in mind our greater goal here is to enable a broader range of sites to be able to run on ZN that cannot run on it today without serious security concerns (by disabling the IFrame sandbox). This requires balancing competing ideals.
To be sure, even if we conclude this can work locally, we still have to evaluate the impact to relays.
I'm not sure if DNS over HTTPS will work over the plain HTTP protocol (as we can't issue an HTTPS cert for localhost), and Chrome does not have such a setting yet. So we have to wait before jumping into it.
Maybe the custom extension / PAC file is the easier solution (and probably has fewer side effects).
@github-zeronet Yeah we might not be on the same page due to my admittedly poor understanding of how the current system works (and if so my sincere apologies for creating noise). I just was commenting based solely on some issues that were raised due to custom DNS (like TLS concerns in https://github.com/HelloZeroNet/ZeroNet/issues/2087). I'll drop it for now and get a better understanding of the current system and see if my proposal still makes sense, and if so I'll comment and offer some specific implementation alternatives to the points you made. Again, sorry if I'm making unnecessary noise, I think this is a cool concept and would love to see it widely used!
@github-zeronet Yeah we might not be on the same page due to my admittedly poor understanding of how the current system works ...
Not noise at all. I love diverse perspectives in a complex conversation. Your understanding of DNS is good, and that does count.
So we're going to use an extension/PAC. Ok. That will probably work and won't be difficult to implement. curl will stop working, but that should still be easily fixable with curl -H "Host: talk.zeronetwork.bit" 127.0.0.1:43110 or whatever. Now, what do we do with proxies? Will the proxies have to ask the user to install the plugin? How will the plugin know what proxy to use? Will there be a configuration setting (if so, what if I need to use both a proxy and localhost at once to deploy a site)? If it'll be configurable with a URL (i.e. talk.zeronetwork.bit.zn.amorgan.xyz), how will we learn what part is the ZeroNet domain and which part is the clearnet one? Should we split at .bit. and thus disallow names such as bitco.no (or similar)?
If it'll be configurable with a URL (i.e. talk.zeronetwork.bit.zn.amorgan.xyz), how will we learn what part is the ZeroNet domain and which part is the clearnet one?
The proxy owner should specify which part is the real clearnet domain. Or, alternatively (preferably), the program should detect this automatically.
@KilowattJunkie Regarding custom URL schemes: how would the browser know to send an HTTP request to the zhttp scheme? Also, Android Chrome doesn't recognize .bit and instead opens a Google search.
@filips123
I like to look at what can be done before discussing what should be done. We didn't get far enough to talk about all the incredible things that can be done with a ZN DNS: site configuration, domain issuance, DNS-like functionality like CNAME and MX if we can ever introduce ZN SMTP, etc...
@imachug did a great job at looking at the possibilities! We may put ZN DNS on the back burner for now, but I'll be thinking about all the incredible possibilities in the meantime, thanks to imachug's imagination.
As for relays, I think ultimately we can package nginx or httpd in a Docker with ZN to make deployment very easy, providing any out-of-the-box reverse proxy and URL rewriting config needed. This also allows it to be deployed on more platforms.
At the end of the day, we need 1000 developers! lol
@github-zeronet I partially agree with you. However, the PAC file is something that should not be so hard, as there are only a few issues with it (though some of them are a bit harder to solve). This would also enable easier integrations with other incredible things in the future.
But I don't think a ZN-specific DNS would be useful. It would be better to just integrate with other existing DNS projects (#2049).
Also, requiring the user to use Docker isn't good. Docker can be hard to install, especially on Windows, where, if you don't have the Pro version, you have to install both VirtualBox and Docker Toolbox.
I don't have any particular comments to make other than: ideally the resolution to the problem should get everyone on the same page, and allow for "clean" URLs (no 127.0.0.1:port business in the nav bar).
As it stands, I and others use a .pac file to redirect http://zero/address/, http://domainname.bit/, and http://name.zeroid/ addresses to their proper counterparts. I've attached the .pac file I use below (renamed to add a .txt extension to allow upload). Something like http://address.zeronet/ would work fine IMO, and if it works for direct addresses (not just domains) as well, then that'd solve a lot of the linking problems. But if there's an extra step in setup, I'm not sure everyone would do it? That seems to be the problem right now.
I.e., advanced users have no problem setting up the .pac file and whatever else to allow links to work, but I always have to be careful to change links back to the native form to ensure everyone can use them. Any "optional" steps would still have that problem.
So it looks like we have two equivalent solutions here:
1. Proxy: it looks like a special .pac file is not even required; we can set localhost:43110 as the proxy address. This requires a minor modification in ZeroNet core, but it's simple.
2. DNS: we'd set localhost:53 (or an equivalent DoH port) as the DNS server.
They are basically the same with the feature set we currently have. There might be some good use cases for DNS in the future, but I don't see any now, @github-zeronet:
site configuration
How exactly would we utilize DNS? AFAIK the same is possible with a proxy.
domain issuance
Domain resolving is done by ZeroNet in both cases, so there's no difference.
DNS like functionality like CNAME
...but why?
and MX if we can ever introduce ZN SMTP, etc...
This is possible without DNS, see ZeroMailProxy.
If the solutions are about the same, I'd use the proxy: it looks like it's easier to install and can be easily set per-browser (instead of at the OS level).
@imachug Another possibility may be a browser extension. It isn't as powerful, but it may be easier for users to use.
I'd first support the PAC file and proxy, which are the easiest ways to implement (and the two are related to each other). But later, we should also implement other ways to do this, such as DNS, extensions and system programs. Of those possibilities, the extension should probably be implemented first.
Actually, there is already a browser extension, but it's also using a PAC file, so there is no difference:
https://github.com/goldenratio/zeronet-protocol-crx
@HelloZeroNet Its last update was 3 years ago and it's only available for Chrome. Not a good solution because of that, but something like this should be made official. There should be an extension like this, but for more browsers, regularly updated with the latest specs, and official. I will try to make and release a PoC of this, but I need to wait until we standardize the hostname (or other) solution. Here I would still like something like a .zeronet TLD or, alternatively, custom protocol schemes.
@filips123 ... requiring the user to use Docker isn't good. Docker can be hard to install, especially on Windows, where, if you don't have the Pro version, you have to install both VirtualBox and Docker Toolbox.
This was just to make installing relays easier. Not for normal users. Not a requirement for anyone. Who in their right mind would run a Relay on Windows, anyway? lol
If the solutions are about the same, I'd use the proxy: it looks like it's easier to install and can be easily set per-browser (instead of at the OS level).
That was all potential future state if we had ZN DNS. Not needed at all for our current feature.
If I understand ZeroMailProxy, its purpose is to allow a client, such as Thunderbird, to send/receive email for you through the equivalent of the ZeroMail site via the SMTP/POP3 protocols. Correct?
ZeroMail is a nice temporary way to solve a problem. It has limitations preventing it from being a long-term solution in its current state. I'm thinking more, from a blank slate, about how to send email in a decentralized way with ZeroNet as the backbone.
OFFTOPIC: POSSIBLE FUTURE ZERONET DNS/SMTP
So, let's say I own the "safemail.yu" ZN domain. I'd want a way for people to send email to its users, like "[email protected]". But the sender should not have to have a user account on that domain. So, I'd provide a ZN SMTP server at a site address they could use to route email to those users. The DNS record would provide an MX record pointing to that site. I'm basically pondering how this could be done on ZN, without clearnet.
Any domain owner would have the ability to set up an email server (a special type of site) that its users could receive email through (with its own storage and other rules). Anyone would be able to send email to these users, even if they did not create a user at that domain.
There's a lot more going on in my head, such as offline mesh networks with intermittent connectivity to the Internet, apps-to-people, apps-to-apps and people-to-apps communications in addition to people-to-people, and decentralized autonomous message storage (in this case, the owner of an SMTP relay would control storage, forwarding and other policies for their domain, and not rely on the limitations of ZeroMail).
@HelloZeroNet I have managed to make a browser extension to access ZeroNet via e.g. talk.zeronetwork.bit.zeronet / 0talkfrmwvbnsoof4iokay9euxtbtjiptsg699gc.zeronet. Everything works well, but the new address syntax (i.e. the .zeronet TLD instead of the zero/ / zero:// prefix, and .bit.zeronet instead of .bit) makes it incompatible with existing sites. Do you accept this syntax, and do you think we'll be able to modify existing (official) sites to support it?
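For the curious, the core of such an extension is small; a sketch using Chrome's proxy API (the actual extension may differ):
// Route *.zeronet through the local ZeroNet instance; everything else goes direct.
chrome.proxy.settings.set({
    value: {
        mode: "pac_script",
        pacScript: {
            data: 'function FindProxyForURL(url, host) {' +
                  '  return dnsDomainIs(host, ".zeronet") ? "PROXY 127.0.0.1:43110" : "DIRECT";' +
                  '}'
        }
    },
    scope: "regular"
});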
Is redirecting /zero/1anysite -> 0anysite.zeronet and talk.zeronetwork.bit -> talk.zeronetwork.bit.zeronet not possible with the extension, or what is the reason for the incompatibility?
It is possible, but I think both zero/ and .bit will result in incompatibilities. For example, the zero local hostname can be used by web developers (or just as a LAN domain in a big corporation), and .bit domains are not available for ZeroNet only.
Another possibility could be to use a zero:// protocol, as IPFS and Swarm do, but this won't make it possible to use ZeroNet with other protocols in the future, as the handler won't be able to distinguish between them. But with the .zeronet TLD, you could use http://domain.bit.zeronet, ftp://domain.bit.zeronet, gopher://domain.bit.zeronet...
Using zero:// could look cleaner, but with it, it won't be possible to handle more protocols. And most browsers won't let you directly type zero://domain.bit; it will just redirect to search.
The .zeronet postfix looks good to me, I'm just wondering what the reason for the incompatibility is.
I could support .bit and zero/ as well but I still hope we'll get rid of them soon. Is that ok?
I don't see a reason why you should support them. Even if we get rid of them later, we can also get rid of them now. Users would have to change anyway.
The problem is backward compatibility (oh well). Even if we get ZeroHello, ZeroTalk and such working correctly, it might be a pain to make other sites apply fixes.
Existing sites will automatically convert http://127.0.0.1:43110/1anysite to http://zero/1anysite, so if you don't redirect it in the plugin, then you will break links on all existing sites.
I can support zero/ and .bit, but such links would have to be redirected twice (http://zero/1anysite -> http://0anysite.zeronet -> http://127.0.0.1:43110), so this would be kind of inefficient. I'd still ask you to use .zeronet by default (say, in case the plugin is installed).
I meant not an internal proxy redirect, but an actual HTTP Location: header redirect. So you would have to redirect once, and not for every request.
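For illustration, the idea is roughly this (sketched in JS; ZeroNet's UiServer is Python, and toZeronetHost is a hypothetical helper):
// When a request arrives with a legacy host/path, answer with a permanent redirect
// instead of proxying, so the browser rewrites the link once and remembers it.
if (host === "zero" || host.slice(-4) === ".bit") {
    response.writeHead(301, { "Location": "http://" + toZeronetHost(host, path) });
    response.end();
}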
But sure, I'm up for turning the links into .zeronet TLD ones if someone visits the site on that URL.
So you would have to redirect once and not for every request.
Oh, that makes sense. I'll implement that soon (around tomorrow).
But sure, I'm up for turning the links into .zeronet TLD ones if someone visits the site on that URL.
Great!
Let me also add some noise :)
Different approach:
example.bit.zeronet.io:43110 points to 127.0.0.1 as a wildcard. ZeroNet gets the host from the header. The origin is unique per subdomain.
When a proxy, extension or wildcard hosts file is supported, it's handled with those. When not, it relies on clearnet DNS (I know that's a problem).
Ideally, both *.zeronet and *.zeronet.some.tld should be supported. Some other decentralized systems (onion, IPFS...) are also doing it in a similar way.
Just note that it should be possible to access sites on ports 80 and 443, not 43110. Using 43110 as the port number was probably meant to separate the ZeroNet UI and prevent conflicts with other servers that may run on port 80. But as hostnames will now be separated, it should be possible to access sites like normal clearnet sites, on ports 80 and 443 (and other ports for other services). This would also enable adding CNAME/A/AAAA and TXT records to a clearnet domain and accessing the site publicly with a normal domain. A solution (with some problems on macOS) for this would be to use 127.43.11.0:80 as the default host of the ZeroNet UI. See https://github.com/HelloZeroNet/ZeroNet/pull/2214#issuecomment-565189278 for more details.
And instead of *.zeronet.io, it might be better to use *.zeronet.link, because zeronet.io would be used as a normal website and (your) zeronet.link would be used for the proxy. For example, ENS and IPFS are also doing this in a similar way, where eth.link is just a proxy that resolves and handles ENS domains.
Then, a browser plugin or local program would set *.zeronet and *.zeronet.link as wildcards for the local ZeroNet instance. Additionally, DNS records should also be set to the local IP (127.43.11.0:80). So when ZeroNet is installed and the user accesses example.bit.zeronet or example.bit.zeronet.link, it will automatically be forwarded to the local ZeroNet instance. And when ZeroNet isn't installed, zeronet.link will have a public record pointing to some reliable public ZeroNet proxy(ies) instead.
To bind to ports below 1024, admin/root rights are needed, and on Android it's not even possible.