Sickchill: Default SSL, Hardware auth support (U2F, FIDO2, Yubikey), default password authentication

Created on 2 Nov 2018 · 55 comments · Source: SickChill/SickChill

Please thumbs-up this issue so I can gauge user interest!

I am saving for a Yubikey 5 NFC, so that I can implement (optional) hardware authentication and 2FA (using your own server as the auth, not ours or a 3rd party).

  • Automatically generate a self-signed SSL cert and enable SSL if it is not already enabled (a rough sketch of what the generation could look like follows this list)
  • Add settings buttons to generate Let's Encrypt SSL certs when you have a domain
  • Force login authentication
  • Yubikey/hardware auth support, including FIDO2, U2F, OTP, etc.
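For the self-signed fallback, generation can happen automatically at startup. Here is a rough sketch of the idea using the third-party `cryptography` package (illustrative only, not SC's actual code; paths and the common name are placeholders):

```python
# Sketch: create a self-signed cert/key pair if none exists yet,
# so SSL can be enabled by default without any user action.
import datetime
from pathlib import Path

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID


def ensure_self_signed_cert(cert_path: str, key_path: str) -> None:
    if Path(cert_path).exists() and Path(key_path).exists():
        return  # keep whatever the user already has

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "sickchill.local")])
    now = datetime.datetime.utcnow()
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # self-signed: issuer == subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=3650))
        .sign(key, hashes.SHA256())
    )

    Path(key_path).write_bytes(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))
    Path(cert_path).write_bytes(cert.public_bytes(serialization.Encoding.PEM))
```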

Labels: BACK-BURNER · Experiment · Feature · Feature Request

Most helpful comment

I'm in favor of TLS, 2FA auth, etc. on the assumption that it can be disabled easily. Like a lot of other users, I run SickChill behind a proxy that already handles all of this.

All 55 comments

I have a Solo and a Solo Tap as Kickstarter pledges; FIDO2 support is on the way and should be in by December.

I'm trying to hold any crypto I get at least until the markets spike, so I'm saving via other means. Solo looks cool though.

I am definitely in favor of:

  • forced SSL using Let's Encrypt, with fallback to self-signed
  • forced authentication when the peer IP is a public IP

To me MFA support is nice to have, but mostly fun to code :stuck_out_tongue_winking_eye:

I feel like hardware auth is going to be the common way of doing things soon; I want to get on board early.

I do not expect it to be 'the common way' for a while, since Microsoft (Windows) has a few steps to go.

Granted, with FIDO2/WebAuthn all the big players and all the big browsers are on board, but:

  • There is no support for FIDO2 in Azure AD yet, for either personal or business accounts
  • There is no support for FIDO2 in ADFS yet
  • There is no proper solution for WebAuthn in LDAP environments yet

But I agree, it's a good thing to lead the way 8)

It shouldn't be mandatory, but it would be good to have these things turned on by default while providing the option to turn them off (if that isn't the case already). Just my opinion.

Of course hardware multifactor won't be required, but SSL and authentication (at least having a username and password set) will be. This will stop tons of user installs from popping up all over the internet with no password, open to all sorts of attacks.

I'm in favor of TLS, 2FA auth, etc. on the assumption that it can be disabled easily. Like a lot of other users, I run SickChill behind a proxy that already handles all of this.

Everything can easily be set to not force this stuff in the ini, but by default it's going to ask you to set a password. Too many people leave their site web-facing without even a simple password.
TLS won't break your proxy; it will just encrypt the traffic between your proxy and SC.
Multifactor auth obviously isn't going to be forced, it's a feature.

As a relative Luddite using SickChill, my only concern is from an end-user perspective: what will be required from an end user? I host SickChill on a QNAP NAS, and while I only access it using a local IP, what would I need to do? Would I need to purchase an SSL certificate? I have no idea how to install and configure this stuff. I'm not against the idea, but I'd hate to lose access due to my limited technical abilities. I haven't voted, as I do not feel qualified one way or the other. As long as it is transparent from an end-user perspective, or documentation is provided to manage the change, I'm all for being secure!

While we'd have to work out the details on this, my guess at this moment would be:

  • if you use SC accessible from the internet, the default behavior would be to force SSL on and to request a free certificate using Let's Encrypt
  • if you use SC accessible from LAN only, force SSL on and generate a self-signed certificate, which the user will have to accept once in every browser they use

And this is of course configurable, so you can force SC to behave differently, possibly entering your own purchased certificate, or other fancy options.

And when accessible from the internet by default SC will require you to set a login, whether it is user/pass or hardware auth. You could manually disable this forced authentication by setting a config value.
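To make that concrete, here is a hypothetical sketch of the decision (illustrative only, not actual SC code; the function name and inputs are assumptions):

```python
# Hypothetical sketch of the proposed defaults: internet-facing access
# gets a Let's Encrypt cert (which needs a domain); LAN-only access
# falls back to a generated self-signed cert.
import ipaddress
from typing import Optional


def choose_cert_strategy(client_ip: str, domain: Optional[str] = None) -> str:
    addr = ipaddress.ip_address(client_ip)
    if addr.is_private or addr.is_loopback:
        # LAN-only: the user accepts the self-signed cert once per browser
        return "self-signed"
    # Internet-facing: use Let's Encrypt when a domain is configured
    return "letsencrypt" if domain else "self-signed"
```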

@DarkKman Let's Encrypt gives free SSL certs if you own a domain; you would only need to buy an SSL cert if you wanted to, and when using IP only you would be using a self-signed cert generated by SC. Nothing would change for the user except increased privacy and security.

All of these changes would be handled by code, not the user. That is, SC would generate the SSL cert, whether via Let's Encrypt or as a self-signed cert, and SC would pop up a form on the local network forcing the user to set a password. Even a true idiot could follow the directions that SC prompts them with.

The minor inconvenience this would cause is a one-time thing, and it would be as simple as typing a username and password or clicking a button. But it would protect you from phishing, MITM attacks, and snooping.

@miigotu Many thanks for the clarification. I'll make my vote shortly (very happy to keep things secure). And great work by the way... So glad to be back with you guys on this repo. I continue to be impressed with your application and development. 10/10!

Yubikey bought, thanks for the donations!
[screenshot: Yubikey purchase, 4 Nov 2018]

My thought would be to recommend the security approach during setup of SickChill for new users, based on how it will be accessed (i.e. LAN only, direct internet access, internet access via proxy).

Forcing security could break a few existing setups, for example if people are using reverse proxies.

Also, you would need to take third-party applications into account; it could take them days or weeks (if ever) to work with secure connections to SickChill.

My 2p :)

My personal feelings on this are that application servers should not be responsible for TLS concerns, _unless_ the end user is authenticating via a TLS client certificate (and only then in unusual circumstances). For years I've been using nginx to terminate TLS in front of applications like this, and it works quite well. Now that Docker is the main way to run web applications, the process is even easier - a dozen lines in docker-compose.yaml and sickchill, sab, and any related services are all protected and the app servers themselves needn't be extra complicated.

I'd rather see yubi support in an nginx or other reverse-proxy layer than native in an app server like sickchill. At the reverse proxy layer, such work is more easily reusable. Looks like someone may have done it already: https://github.com/sanderv32/ngx_http_auth_yubikey_module is the first hit on Google for "nginx yubikey github".

I'd be happy to share the setup using docker, nginx, and letsencrypt to help other users get the same benefits, without needing to modify app servers or spend developer effort on many applications, when the layers already exist :)

Configuring 4 different layers vs adding 3-5 lines to SC? Most people don't use a reverse proxy, and most people have no clue about running a secure web-facing server.
These changes will not affect the advanced users, but I cannot just sit and let hundreds of users sit unprotected, without so much as a password on their SR, open to the internet, just because they don't know what they are doing.

Seriously, the amount of push-back is strange on features that, if you don't want to use them, you just don't use.

A self-signed SSL cert can be used between nginx and SC without hurting anything.
FIDO2/U2F are just an added way to authenticate.
Not allowing public access without protection is a good thing.

No problem then - I didn't realize it was only a few lines of code. Perhaps I misunderstood the proposal, but the word "mandatory" sure made it sound like this would break my existing setup and not offer "just don't use them" capabilities.

I have the Feitian Multipass (BLE, NFC, USB) and the Feitian Key (NFC, USB; similar to the Yubikey NFC). These are the same two keys in Google's Titan Key pack; Feitian is the OEM who makes them. Anyway, I'd love to have this feature on my SickRage install and am more than willing to help test, etc.

Just let us know!

I don't understand why all of this has to be mandatory. Couldn't it be implemented as "on" by default, but have a setting where users could disable it?

I don't care too much about the SSL bit, as I can just ignore that in the browser if necessary. But I dislike the idea of having to log in (authenticate) every time.

So far the feedback has been twofold:

  1. LGTM, or
  2. (and this is mostly from power users) why make this mandatory?

Great feedback! I think that if we change 'mandatory' to 'enabled by default on new installs', there is no objection left?

"Enabled by default" sounds like a great path forward to me :+1:

It is mostly "enabled by default" ^
@Amadeus- you realize it is stored in a cookie? Currently it stays logged in for 30 days; you just have to log in once every 30 days for each browser. We can extend that cookie out for as long as we want, but we can't just let novice users (users are really playing the role of web administrator) make disastrous mistakes simply because they are not web admins. Hundreds of SR/SC installs have been found in the wild with no login, directly web-facing. If there are advanced administrators who really want to leave their web-facing install unprotected, then I am sure they can change a bool in their config.ini to disable it.
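For context, SickChill's web UI is built on Tornado, whose secure login cookie defaults to that 30-day lifetime; extending it is a one-argument change. A minimal sketch (handler and cookie names are illustrative, and the credential check is assumed):

```python
import tornado.web


class LoginHandler(tornado.web.RequestHandler):
    def post(self):
        # ... verify the submitted username/password here (assumed) ...
        username = self.get_argument("username")
        # expires_days controls how long the browser stays logged in;
        # Tornado defaults to 30 days, but it can be set to anything.
        self.set_secure_cookie("sickchill_user", username, expires_days=365)
        self.redirect("/")
```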

Quick question - I am looking for said bool to disable listening on the external interface and can't find anything. Can you just set web_host to 127.0.0.1?

Also, my manual config.ini changes never get saved after restarting my systemd service for SC. Any way to change this?

Sorry, I don't mean to hijack this thread/issue.

My 2 cents regarding enabled-by-default: good for SSL and password auth, but 2FA enabled by default may be too much for the novice users you're trying to help with this.

@ndom91 You have to shut SC down completely before you edit config.ini - change web_host from 0.0.0.0 to 127.0.0.1
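With SC fully stopped, the change would look something like this in config.ini (assuming the setting lives under the [General] section, as in other SickBeard-derived apps):

```ini
[General]
# listen on the loopback interface only (was 0.0.0.0)
web_host = 127.0.0.1
```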

@miigotu I don't have my SickRage open to the public, only inbound connections. So I also don't want this enabled by default. Will the update process enable it, or is config.ini not updated on an update?

You don't want what enabled by default? Setting a password and running over SSL will not adversely affect any install; it simply improves security and safety. 2FA is just an added feature: the plan is to SUPPORT 2FA.

I think that if TLS and password auth can both be disabled via the config, that would be a reasonable compromise. Then it doesn't matter too much if it's enabled by default.

For people who don't want/need TLS (e.g. internal hosts), self-signed certificates become a hassle UX wise. Similar reasoning for password auth. It's probably best to allow people to disable both these features.

@marvinpinto Exactly. However, even though SSL will be able to be disabled, it should still be left enabled even if you are running on 127.0.0.1/localhost with nginx or another server/proxy on top. They can redirect to a TLS server just as well as a non-TLS one, even when the cert is self-signed.

Contrary to common misconception, true end-to-end encryption is not merely machine-to-machine. An attacker with local access to the machine, even when that access is not privileged and the attacker has no permission for the server's files on the file system, can snoop on and manipulate unencrypted traffic on localhost. If it is running over a TLS connection (even on virtual interfaces) the security is greatly increased.

Honestly the only people who should be against any of these ideas are people who want to snoop on or exploit users who use bad practices. Who here works for the government or a media company? lol.

Let's be fair here: if a bad actor has local access to your machine, it's game over.

Encrypting comms on the wire won't make any difference, as you have much more to worry about than someone getting your SickChill data.

I understand your view, but I am very much of the KISS (Keep It Simple, Stupid) principle. If uneducated (from a security perspective) or unaware users run SickChill open on the net, they are leaving themselves exposed, and your approach is perfect for them; but people in the know, or who do not want to further complicate their configuration, should have the option to run in non-TLS/non-secure mode.

I suggest that during a fresh install, a wizard is launched asking how SickChill will be accessed remotely, and default security settings are set accordingly based on the response.

For existing installs, advise people of the new options (be it in the config file itself, or in the GUI).

That will alleviate the issue.

Also, one other point to consider: it is not just humans who access SickChill; other open-source apps may do so as well (e.g. sabnzbd, nzbget, nzbtosickrage, etc.).

It may take these a while to update their configs to support secure connections.

my 2p :)

All of those 3rd-party apps already support all of the features I am talking about, because we have had password authentication for years ^. The only new thing being added is the option to use 2FA; TLS and password auth have always been there.

I agree a wizard would be acceptable, but someone needs to code it =P
Prompting the user to enable or ignore TLS, and prompting the user to set a password or ignore it, is much easier.

Yeah, but we aren't talking about password support; we're talking about some sort of encryption, which they may not support at the moment. :)

I appreciate the wizard comments :) but I was thinking more of asking whether they will be accessing SickChill remotely over the internet, and configuring accordingly (if yes, enable TLS; if not, don't).

I will never, ever say no to increased security. I currently run a reverse proxy for my setup and already use Let's Encrypt for my main internet-based access portal, and use passwords for everything from accessing both my portal and SickChill. Let's go!

@chqshaitan If you think that SC and nzbtomedia and nzbget and sabnzbd do not support HTTPS, you are mistaken. ALL of these apps, including SC, have had support for password logins and SSL/TLS (this just means https instead of http) for YEARS. We simply want to make the defaults use the more secure settings that ARE ALREADY THERE in ALL of these apps. Absolutely nothing will break.

Those particular apps were just examples; I just wanted to make the point that by implementing TLS, you could break third-party apps/code/plugins (e.g. Chrome).

Don't get me wrong, I am all for security, but it has to be the (educated) end user's choice, not forced upon them.

my 2p :)

No, it absolutely should be forced. If you don't want it, then you can turn it off. But if it is left up to end users, it will never get done. They will always opt for the paper house over the brick house.

Edit: And funny that you mention Chrome breaking, because Google is intentionally making Chrome more difficult to use if you DON'T use SSL.

IMO, if a user is smart enough to install add-ons for SC, they will probably also be smart enough to figure out why something isn't working properly once TLS/auth is enabled.

I say let's not wait for every possible 3rd-party integration to support this; instead, leave a GitHub notification/issue for the big ones and give them a week to confirm compatibility while the feature is built in a branch, which then gets merged into dev and onto the release train.

3rd party already support these features. We have had these features for more than 5 years.

Just don't make any of this required. I run SC on a local server at home. No outside access at all. TLS/web auth/2FA are all pointless overkill for something on my own network.

It isn't going to make any changes to local access, and 2FA is just an optional feature that I plan to add support for, not a requirement.
Web auth from remote machines? Yes, I'm going to block those if there is no password set, or at least require a hidden setting. The people who REALLY WANT to run an insecure server will be able to do it, but it will be easier to be safe than to be unsafe from now on.

You guys should read the thread before posting, lol. Nobody has ever said 2FA was going to be required, and it's going to pop up warnings for HTTP connections and for missing web auth when connecting remotely, unless they are dismissed/explicitly disabled. You guys really don't take your freedom seriously?

After reading the comments I'm OK with it.
Putting in safe defaults and allowing them to be disabled is a lot better than having no protection or forcing it on with no option to turn it off.
I'd prefer TOTP 2FA personally.

@Solbot I plan to support several different types of 2FA. This is why I got the Yubikey 5 NFC: so that I could implement one and then continue on to implement all of the ones it supports. It supports more than any other key AFAIK, and I can work on using it over NFC on mobile as well.

I understand that most people will not use 2FA, or have a yubikey or other hardware auth key (software 2FA authenticators will work), which is why all of the 2FA features will be optional and DISABLED by default. But I believe that NOT supporting 2FA in 2018/19 sort of means we are behind the times. In order to use hardware keys, we have to be using https. This will bring our security into the 21st century.

Having an unprotected server in 2018 is the equivalent of Russian roulette.

It's great having certs if you're exposed to the internet, though my server is internal-only and I'm presently using only HTTP.
Will this be forced on me?

Self-signed certs are annoying, so it would ultimately force me to set up a domain name and potentially open it up to the internet (if only briefly, to get the cert).

Don't get me wrong, I love this idea and I switch 2FA on for everything I can, though I would like it to be optional.

Thanks to everyone helping out with the code. I love this service!

Please read the comments.

My personal feelings on this are that application servers should not be responsible for TLS concerns, _unless_ the end user is authenticating via a TLS client certificate (and only then in unusual circumstances). For years I've been using nginx to terminate TLS in front of applications like this, and it works quite well. Now that Docker is the main way to run web applications, the process is even easier - a dozen lines in docker-compose.yaml and sickchill, sab, and any related services are all protected and the app servers themselves needn't be extra complicated.

@aarontc While I can appreciate your position, good security is end-to-end encryption, from the client through to the application server, whether or not it is behind an encrypting reverse proxy.

Authentication should be performed by the application, especially 2FA, or else there is a risk of MITM attacks and authentication replay attacks.

I'd rather see yubi support in an nginx or other reverse-proxy layer than native in an app server like sickchill. At the reverse proxy layer, such work is more easily reusable. Looks like someone may have done it already: https://github.com/sanderv32/ngx_http_auth_yubikey_module is the first hit on Google for "nginx yubikey github".

These are good goals, however, 99+% of users will never implement a proxy, let alone add single or multi-factor authentication at that layer, and would instead use the application directly and accept the risk.

I'd be happy to share the setup using docker, nginx, and letsencrypt to help other users get the same benefits, without needing to modify app servers or spend developer effort on many applications, when the layers already exist :)

Please do — this will greatly assist those who want to add additional security in this manner.

IMHO, if you "protect" an application server behind an encrypting, authentication reverse proxy, there must be an encrypted connection between the proxy and application server and some form of client authentication by the proxy to the application server (e.g. mutual SSL authentication), and any 2FA designed to mitigate replay attacks should be implemented and validated at the application server, even if the principle (credential) is validated by the proxy.

I plan to support several different types of 2FA. This is why I got the Yubikey 5 NFC: so that I could implement one and then continue on to implement all of the ones it supports. It supports more than any other key AFAIK, and I can work on using it over NFC on mobile as well.

@miigotu I would first implement RFC 6238 (TOTP) and add hardware 2FA as a second stage. Everyone can use TOTP today, while hardware solutions require people to first procure a device and get it working with their various choices of client.

I understand that most people will not use 2FA, or have a yubikey or other hardware auth key (software 2FA authenticators will work), which is why all of the 2FA features will be optional and DISABLED by default. But I believe that NOT supporting 2FA in 2018/19 sort of means we are behind the times. In order to use hardware keys, we have to be using https. This will bring our security into the 21st century.

Agreed, but begin by implementing software 2FA first and add hardware 2FA later. More people benefit from the significant increase in security, and the additional gain from hardware over software tokens is minor in most situations.
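To give a sense of scale, RFC 6238 support is small. A minimal sketch using the third-party pyotp package (illustrative; a real implementation would persist the per-user secret in SC's config rather than a local variable):

```python
import pyotp

# One-time enrollment: generate and store a per-user secret, then show
# the provisioning URI (usually rendered as a QR code) so the user can
# add it to Google Authenticator, Authy, a Yubikey via OATH, etc.
secret = pyotp.random_base32()
uri = pyotp.TOTP(secret).provisioning_uri(name="user@sickchill", issuer_name="SickChill")
print(uri)

# At login: verify the 6-digit code typed alongside the password.
def verify_totp(stored_secret: str, code: str) -> bool:
    # valid_window=1 tolerates one 30-second step of clock drift
    return pyotp.TOTP(stored_secret).verify(code, valid_window=1)
```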

@marvinpinto Exactly. However, even though SSL will be able to be disabled, it should still be left enabled even if you are running on 127.0.0.1/localhost with nginx or another server/proxy on top. They can redirect to a TLS server just as well as a non-TLS one, even when the cert is self-signed.

Contrary to common misconception, true end-to-end encryption is not merely machine-to-machine. An attacker with local access to the machine, even when that access is not privileged and the attacker has no permission for the server's files on the file system, can snoop on and manipulate unencrypted traffic on localhost. If it is running over a TLS connection (even on virtual interfaces) the security is greatly increased.

@miigotu ABSOLUTELY AGREE — I wrote my comments above before seeing your response here!

As I have added, the 2FA really must be in the application by default, unless a sophisticated user decides to fully delegate this to a proxy. Even then, there should be some mechanism for the application server to validate and log the true client.

In an ideal world, the authentication process should be fully secured from the client (web browser) through to the authentication provider (application server) in a manner that precludes inspection and manipulation by any intervening proxy servers — this can be achieved today using the W3C Web Cryptography API — though, as I said before, I'd start with software 2FA first.

I use an nginx server to proxy to my SickChill server. Will I still be able to do this?
nginx handles SSL and passes to the SickChill server over HTTP; this use case should be considered.

It won't be passed over plain HTTP; you add 2 lines to your proxy_pass configuration and it will be SSL all the way from the client to the application.
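On the nginx side that just means pointing proxy_pass at https:// instead of http:// (and, for a self-signed cert, leaving proxy_ssl_verify off, its default). On the SC side, since the UI runs on Tornado, serving the app itself over TLS is a matter of passing ssl_options; a sketch with placeholder cert paths:

```python
import tornado.httpserver
import tornado.ioloop
import tornado.web

app = tornado.web.Application([])  # routes omitted for brevity

# Serve the app itself over TLS so the proxy->app hop is encrypted too.
server = tornado.httpserver.HTTPServer(
    app,
    ssl_options={"certfile": "/path/to/server.crt", "keyfile": "/path/to/server.key"},
)
server.listen(8081)
tornado.ioloop.IOLoop.current().start()
```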

Like ITJamie, I use nginx to proxy to SickChill.
So: "Automatically generate the self-signed SSL cert and enable SSL if it is not enabled; add settings buttons to generate Let's Encrypt SSL certs when you have a domain; force login authentication" is OK with me, since it's optional.

I don't like to be forced to do something ;)

I already use SSL client certificate authentication to my proxy; I don't need the extra headache of managing the proxy connection through to another SSL socket. I also don't need forced in-program authentication, because I handle that using client certificates at the proxy — more secure and very usable, IMO.

You can have the most secure connection through a proxy in the world; it is all useless if there is one unsecured connection between the proxy and SC itself.
