I would like to be able to use JWT instead of sessions when working with Keycloak.
Using JWT instead of sessions has many advantages, including stateless microservices. I don't think you guys need more convincing, since UAA is stateless.
There is a security problem with how OAuth2 is implemented today (as discussed here: https://github.com/jhipster/generator-jhipster/issues/6941#issuecomment-424038200), and while fixing it I would prefer to do things the right way.
It has been discussed in some comments on other issues, but I prefer having a new issue that either fixes the concern or explains the choices to other users.
We don't plan to do this, as there were specific reasons why it was decided to do stateful OAuth2 instead of stateless; see this Twitter thread: https://twitter.com/avdev4j/status/1087652807505780736
As for the sign-out problem you talk about, it should be fixed in the latest 5.8 release: https://github.com/jhipster/generator-jhipster/pull/8757
@PierreBesson I'm talking about security problems, not the logout ones. Also, if someone manages to access the token in localStorage (we can also use sessionStorage, by the way), he could also access the session cookie, so no difference there. Any other reason?
It seems you know what you are talking about so we should discuss this more. I'm reopening the issue. ping @mraible.
The session cookie is sent with an httpOnly flag (if it’s not, it should be) that prevents JavaScript from being able to read it.
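For illustration, here is a minimal sketch of what that flag looks like when a session cookie is issued (an Express-style example for brevity, not JHipster's actual Spring Security code; the cookie name and value are placeholders):

```typescript
import express from "express";

const app = express();

app.post("/api/authenticate", (_req, res) => {
  const sessionId = "opaque-session-id"; // hypothetical value; normally generated server-side
  res.cookie("JSESSIONID", sessionId, {
    httpOnly: true, // JavaScript (document.cookie) can never read this value
    secure: true,   // only sent over HTTPS
    sameSite: "lax" // limits cross-site sending of the cookie
  });
  res.sendStatus(204);
});
```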
The problem with moving to implicit flow (besides that it’s not as secure) is that we’d have to maintain a lot more code on the client apps. Implicit flow + PKCE offers a more secure solution, but will carry the same maintenance burden on the client code.
@mraible even if the httpOnly flag is used, the attacker could send any request through my browser when using sessions, which opens the door to attacks like CSRF.
Also, I'm not suggesting the implicit flow but the hybrid flow (which doesn't require a client secret and allows for refresh tokens on the front end). Implicit flow is such a hassle since you spend time redirecting to retrieve a new access token.
@yelhouti If you want to create a pull request, I'll be happy to review the code.
@mraible can I first create a project, we agree on what is generated, and then merge?
Also, I didn't mention that PKCE used to be a necessity for iOS mobile apps, since older iOS versions allowed multiple apps to claim the same URL scheme. Now (since iOS 10, I think) you can use https with domain verification, so it's not a problem anymore.
Here's the process I typically use when modifying JHipster:
My co-worker @aaronpk submitted a new draft for OAuth 2.0 browser-based apps that recommends authorization code flow + PKCE.
We don't support CORS on our /authorize endpoint just yet, so it's not possible to do PKCE in a browser with Okta today. We hope to fix that in the next couple of months. I'm not sure about Keycloak.
Keycloak supports CORS; they also have valid redirect URIs, which might be required.
What I will do is step one, validate it with you, then do the other steps.
I am very interested in this feature @yelhouti
Feel free to fork the generator and open a pull request (even with WIP status). We could probably help you if it's necessary.
👍
First I will create the POC with a monolith project (it's almost done), then I will contact you about working on the generator. I'll share my code with you @avdev4j very soon. Feel free to talk to me on Gitter so we can better organise.
Hi,
I agree with the fact that a long lived insecure session has more security implications compared to the short lived tokens.
As HTTP is stateless, the session was mainly used to identify the request. OpenID Connect enables identity, authentication, and authorization. If OIDC is already utilized for IAM, then there is no need for additional session handling. In fact, it may result in a more complicated architecture as well as possible security and operational issues. The session is not a good friend of the microservice architecture.
To implement session management securely - https://www.owasp.org/index.php/Session_Management_Cheat_Sheet
There are several implementations of SPAs with PKCE.
Here is my version with Angular and Keycloak: https://github.com/elhoutico/jhipster-keycloak-stateless
There are still a few things to do (check the TODOs) and help is appreciated.
@VinodAnandan @mraible or anyone, can you tell me if PKCE is needed in any other case than the one I mentioned (with iOS), to see how important it is to implement? Thanks.
@yelhouti Can you provide a brief summary of what you did to get things working? I see you're using keycloak-js. Does this store the access token in local or session storage? If so, I'm not in favor of this change as it reduces security.
@mraible keycloak-js doesn't use storage for the tokens. Also, I'm planning to use session storage for my implementation that replaces keycloak-js, to make it work with Okta and solve other problems. Can you tell me how an attacker who manages to get access to my machine/browser and steal the token from storage would not be able to use the cookie as well? (I can show how the opposite is true, meaning cookies are less secure.)
What I did is:
My colleague, @rdegges, wrote a post about what's wrong with using local storage, which I believe applies to session storage as well. The most common way that attackers can get access to your local storage is via 3rd party scripts. Of course, a developer probably won't add 3rd party scripts, but as soon as marketing gets involved, some might be added.
For cookies, you can use an httpOnly flag to prevent JavaScript from having access to them.
I think it's OK to add what you're implementing to JHipster, but I don't think it should be the default option. I think we should prompt the user to choose between authorization code flow (the current setup) and implicit flow. Then they can make the security decision themselves, and we can warn them that implicit flow is less secure.
If we do add support for implicit flow, we should also enable a CSP by default that says "only scripts from this app" are allowed.
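As a rough sketch of what such a policy could look like (shown as an Express-style middleware purely for illustration; in a JHipster app this would live in the server-side security configuration):

```typescript
import express from "express";

const app = express();

// Only scripts served by this app are allowed to execute, which limits what an
// injected 3rd-party script could do if one ever slipped in.
app.use((_req, res, next) => {
  res.setHeader(
    "Content-Security-Policy",
    "default-src 'self'; script-src 'self'; object-src 'none'"
  );
  next();
});
```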
@mraible, with all due respect to your colleague, if some JavaScript can be used to access storage, it can be used to send any request using the cookie without retrieving it; it might be less convenient for the attacker, but for me it's the same. Also, using cookies opens you up to more attacks like XSRF, where a script can execute GET requests using your cookie without even injecting malicious code, getting access to your browser, or attacking the server, only abusing the trust the server has in your browser (from his own website, the attacker can send GET requests to yours if both are open in the same browser, and use your cookies in the process).
I'm also open to discussing the matter with your colleague if he's interested.
By the way, I read the article. I agree with most points except the one about using sessionStorage for JWT, which, in fairness, he doesn't say to avoid at all costs.
Here is OWASP take on the subject: https://www.owasp.org/index.php/JSON_Web_Token_(JWT)_Cheat_Sheet_for_Java#Token_storage_on_client_side
And it is exactly what I'm planning to do.
That's good info @yelhouti, thanks for the link!
Another thought I had today is that this is already implemented in Ionic for JHipster. When you create an Ionic project, it sets up a resource server on the JHipster side and uses implicit flow. In the next version, I'll upgrade to Ionic 4 and try adding PKCE support. Since you can use Ionic for PWAs, and it supports Angular, React, and Vue in v4, that could be an option.
@yelhouti I think you are forgetting about the CORS and XSRF protection in JHipster
In the link you posted, it addresses this in the section notes:
Note:
It's also possible to implements the authentication service in a way that the token is issued within a hardened cookie, but in this case, a protection against Cross-Site Request Forgery attack must be implemented.
Note that I'm not trained in security so others will probably know better.
Hi,
In the implicit flow, the token exchange happens via GET requests, and it will result in the token getting logged on different servers, the client side, etc. An abuser who gets access to this token can easily impersonate the user.
Using PKCE, the client app only receives the authorization code via the GET request, and it is exchanged via a POST request with the token endpoint to obtain tokens. Even if the abuser gets the authorization code, it will be protected with PKCE (Proof Key for Code Exchange).
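To make the mechanism concrete, here is a minimal browser-side sketch of those two PKCE values (S256 method) using the standard Web Crypto API; the surrounding authorization request is omitted:

```typescript
function base64UrlEncode(bytes: Uint8Array): string {
  const binary = Array.from(bytes, (b) => String.fromCharCode(b)).join("");
  return btoa(binary).replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
}

// 1. The client generates a random code_verifier and keeps it locally.
function createCodeVerifier(): string {
  return base64UrlEncode(crypto.getRandomValues(new Uint8Array(32)));
}

// 2. Only the SHA-256 hash (the code_challenge) is sent with the /authorize request.
async function createCodeChallenge(verifier: string): Promise<string> {
  const digest = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(verifier));
  return base64UrlEncode(new Uint8Array(digest));
}

// 3. When exchanging the authorization code at the token endpoint, the client sends
//    the original code_verifier; without it the tokens are not issued.
```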
First of all guys, thank you all for your comments, I really appreciate it.
@mraible I will definitely check the code and see if we can reuse/improve it. Also, since Ionic 4, a simple blueprint might help generate the Ionic project if stateless OIDC support is added.
@ruddell I didn't forget about the protections; these are complementary:
If you follow the link you mentioned, it says this about protecting cookies against XSRF: "The following list assumes that you are not violating RFC 2616, section 9.1.1, by using GET requests for state changing operations." Which, I can tell you, many people do (example: marking a message as seen when user X GETs it). To summarize, XSRF tokens protect only POST/PUT and XHR requests when implemented correctly; GET requests can't be protected that way since (your second point) they can circumvent CORS protection, see the sketch below. If you want more information on how, or to discuss the matter in more depth, please DM me on Gitter, it would be a pleasure. (Edit: experience is often more valuable than training, and we are all humans who forget things, so thanks for helping :) ).
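To illustrate that scenario, assuming a hypothetical endpoint that marks a message as seen on GET, an attacker's page only needs something like this; the victim's session cookie is attached automatically and no XSRF token is ever checked:

```typescript
// Hypothetical attacker page script; the target URL is made up for illustration.
const img = document.createElement("img");
img.src = "https://victim-app.example.com/api/messages/42?markAsSeen=true";
document.body.appendChild(img); // the state-changing GET fires with the victim's cookie attached
```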
@VinodAnandan, as I tried to explain, I'm planning to use the hybrid flow instead of the implicit flow, which doesn't have the problem of GET params. As for PKCE, the RFC (https://tools.ietf.org/html/rfc7636#section-1) says it's used to avoid attacks "within a communication path not protected by Transport Layer Security (TLS), such as inter-application communication within the client's operating system". This is not possible with a browser, and the only thing I can think of that it resembles is "scheme squatting" on iOS, which is not a problem anymore if you use universal links/deep links (https://developer.apple.com/ios/universal-links/), available since iOS 9. Am I missing a point, or is there no need for PKCE anymore in our use case?
@yelhouti My colleague, @aaronpk, proposed a new spec for OAuth 2.0 that recommends PKCE for browser-based apps. See OAuth 2.0 for Browser-Based Apps for more information. From its overview section:
For authorizing users within a browser-based application, the best current practice is to
o Use the OAuth 2.0 authorization code flow with the PKCE extension
o Require the OAuth 2.0 state parameter
o Recommend exact matching of redirect URIs, and require the hostname of the redirect URI match the hostname of the URL the app was served from
o Do not return access tokens in the front channel
Previously it was recommended that browser-based applications use the OAuth 2.0 Implicit flow. That approach has several drawbacks, including the fact that access tokens are returned in the front-channel via the fragment part of the redirect URI, and as such are vulnerable to a variety of attacks where the access token can be intercepted or stolen. See Section 7.8 for a deeper analysis of these attacks and the drawbacks of using the Implicit flow in browsers, many of which are described by [oauth-security-topics].
Instead, browser-based apps can perform the OAuth 2.0 authorization code flow and make a POST request to the token endpoint to exchange an authorization code for an access token, just like other OAuth clients. This ensures that access tokens are not sent via the less secure front-channel, and are only returned over an HTTPS connection initiated from the application. Combined with PKCE, this enables the authorization server to ensure that authorization codes are useless even if intercepted in transport.
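A minimal sketch of that back-channel exchange (the token endpoint URL, client_id and redirect_uri below are placeholders, and codeVerifier is the PKCE value created before the redirect to /authorize):

```typescript
async function exchangeCodeForTokens(code: string, codeVerifier: string) {
  const body = new URLSearchParams({
    grant_type: "authorization_code",
    code,
    redirect_uri: "http://localhost:8080/",
    client_id: "web_app",
    code_verifier: codeVerifier
  });

  const response = await fetch("https://idp.example.com/oauth2/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body
  });
  if (!response.ok) {
    throw new Error(`Token exchange failed: ${response.status}`);
  }
  // Typically contains access_token, id_token, expires_in and (optionally) refresh_token.
  return response.json();
}
```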
EDIT: this comment doesn't serve any purpose anymore :)
Hi,
I will explain the other situations where the authorization code can be leaked and abused. Between a public client and an Identity Provider server, there can be many other servers and devices (load balancers, firewalls, web servers, app servers, etc.). Most of these will log the URL parameters (e.g. in an access_log), as will the public client (browser history). If an abuser gets the authorization_code from any of these sources, he can send it to the Identity Provider and obtain the tokens to impersonate the user.
For a PKCE flow, along with the initial request, the public client will send a code_challenge (a hash of a random string). And when the public client requests the tokens with an authorization code, it must provide the code_verifier (the random string), otherwise the tokens won't be issued.
The authorization code and refresh token are really sensitive and need more security. Silent renewal is a secure replacement for using refresh tokens in an SPA.
All the information mentioned above can be implemented in an Angular SPA and it's explained in the following link.
If you need further details, please let me know.
@VinodAnandan an authorization_code can't be used twice, and if you have SSL from your Keycloak server to the browser/app, there is no way it can be retrieved. I understand exactly how PKCE works; what I don't see is the need for it when you have HTTPS (even query parameters are encrypted when you use HTTPS). Anyway, if you guys insist on adding PKCE, even if I don't agree, I will oblige; it's 10 more lines of code, I will live :).
But thanks for confirming my suspicions.
Also I found this: https://github.com/Hitachi/contributions/wiki/Description-of-RFC7636-for-keycloak that says OIDC protects against the same attacks using a nonce...
Also, I found this lib https://github.com/IdentityModel/oidc-client-js that is certified by openid https://openid.net/developers/certified/
Cherry on the cake, they support PKCE...
There's also AppAuth for JS from the OpenID project itself. I've used it with Electron in the past.
@yelhouti the abuser may have access to the data in the URL from any source mentioned before (SSL/TLS can't prevent all data leakage). It's not a safe practice to share sensitive data via the URL.
The beauty of PKCE is that it will help to authenticate a public client without public client credentials. The code_challenge and code_verifier will indirectly authenticate the public client.
Also, silent renewal is very important. It will help to prevent storing/processing the very sensitive refresh token in the browser.
The following angular-auth-oidc-client is also a Certified OpenID Connect Implementation. They have implemented PKCE with silent renewal.
https://github.com/damienbod/angular-auth-oidc-client
More details about the implementation can be found on the below URL.
@VinodAnandan I agree about the URL part, even though I can't find any modern reverse proxy/server that logs query params by default. Stupid servers, by the way, can also log POST params.
Your lib sounds pretty cool; the keycloak-js adapter uses a similar iframe mechanism. I will definitely give it a chance since I will be starting with the Angular part.
Thanks for sharing
I'm pretty happy with the current version at https://github.com/elhoutico/jhipster-keycloak-stateless
Please @mraible give it a look so I can start working on the generator. For the anecdote, there was a bug blocking me in the library yesterday; after I understood what it was, it was fixed today. So it all worked great. By the way, great lib @VinodAnandan. My implementation can be much improved to show the time remaining before token expiration + a button to refresh..., all easy to do thanks to the lib.
@mraible are you OK with the changes, and can I start working on the generator, please?
Thanks
@yelhouti, I am happy that I could convince you about the benefits of PKCE as well as some security issues with the implicit flow. If you want to discuss more, please let me know
@yelhouti I took a quick look at your example project and figured out what you changed by looking at the commits. It looks like there are some Keycloak settings hardcoded in the client. With the current implementation, it's easy to switch OAuth providers using properties. With this setup, it looks like developers will need to modify app.module.ts to specify their settings. Is there any way to implement it so it's easier?
You'll need to implement things for the React implementation too.
@VinodAnandan to tell you the truth, I'm not convinced at all about PKCE, but since it's in the lib, why not. As for the implicit flow, it was clear that it has many security and practicality drawbacks, so it was a no-go. On the other hand, the Angular lib you pointed at was very helpful :)
@mraible Indeed, we can put this stuff in an app.constants.ts (which I forgot to clean, by the way), along the lines of the sketch below. For the React part, do you know anyone who could help :) ?
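Something like this (values are placeholders, and the exact shape would depend on the OIDC library we end up using):

```typescript
// src/main/webapp/app/app.constants.ts (sketch only; values are placeholders)
export const OIDC_CONFIG = {
  issuer: 'http://localhost:9080/auth/realms/jhipster', // Keycloak realm, or an Okta issuer URL
  clientId: 'web_app',
  redirectUri: window.location.origin + '/',
  scope: 'openid profile email',
  responseType: 'code' // authorization code flow (+ PKCE)
};
```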
I'd suggest waiting for @jdubois and @deepu105 before starting to implement it in the core, as it will add one additional option.
Another solution that could be done too is a blueprint: same options, except for authenticationType.
I'm not in favour of adding another auth option to the mix, unless @mraible says that the value outweighs the complexity, as it will complicate the templates. So I'd also suggest you try it as a blueprint.
Or, if this option is better than the current one, then replace it. I'm not in favour of providing 2 options for OAuth in the questions because:
So I suggest you guys come to an agreement about which is the better implementation and go with that, instead of providing options, or do it as a blueprint.
Totally agree with @deepu105. Also, IMHO, once we have proper support for OIDC, we should tweak JHipster's UAA implementation to use the same client code for authentication; this would fix some security issues that could happen with OAuth2. @mraible what do you think?
I'm not convinced this is a better solution than the current one. It seems to require more code on the client, which will cause a maintenance burden. If it's possible to fix the small security issue you mentioned, I'd rather go that route.
@mraible Here is why I think this solution is better:
I'm happy if you can change my mind, since this involves a lot of work for me (that I'm doing anyway for my customers) :p
Since @mraible is the stream lead of OIDC as well as the original contributor of this part I'll leave the final decision to him.
@yelhouti We currently have CSRF protection with both React and Angular, so I'm not sure what you mean in your first point. Do you have any code or a test that shows what we have is vulnerable?
Yes, it's stateless, but have you done performance tests to ensure it scales better? In theory, it might, but I'd like to see some hard numbers. You can always use Spring Session with Redis, which will likely solve performance issues.
For mobile, both Ignite JHipster and Ionic for JHipster set up a resource server. You can see the one for Ionic here.
I'm sorry for being stubborn about this. In my time at Okta, I've learned a lot about OAuth 2.0 and OIDC. From what all my co-workers tell me, authorization code flow on the server is the most secure implementation. I don't want to reduce security. I can see this as an add-on, but not a replacement.
If you feel really strongly about this, I'd suggest you implement it as a module, or volunteer to take over the stream lead for OAuth/OIDC in JHipster. I'd be happy to give up the maintenance burden. ;)
For the first point, you can't protect against CSRF for GET requests with side effects, since cookies are sent automatically by the browser even when the request comes from another domain (which I think is still the case for backward-compatibility reasons), as explained here: https://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet .
For the Redis solution, having a consensus between all the nodes has a cost that increases exponentially with each new node. This same bottleneck is present when you want to scale Keycloak/Okta or its DB for SSO reasons..., and it would just be a waste of resources to have to deal with the same problem twice.
Ionic is web, and I don't know about Ignite, but what I do know is that the HTTP client in native Android apps doesn't handle cookies OOTB.
@mraible no need to apologize, I'm stubborn myself :D. And you and your coworkers are right, authorization code flow on the server is the most secure solution, but:
It would be an honor for me to take the lead for OAuth/OIDC in JHipster if the core team agrees :).
@yelhouti Can you please confirm your current example works with Okta? You can signup for a developer account at https://developer.okta.com/signup/.
@mraible sure
@mraible I tried changing the CORS configs... I didn't manage to have the CORS header added to the response from the token endpoint:
Access to XMLHttpRequest at 'https://dev-190319.okta.com/oauth2/v1/token' from origin 'http://localhost:8080' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
Once this is fixed, it will work.
It seems you already know that: https://devforum.okta.com/t/cors-headers-for-oauth2-v1-token/355/6
:p
Also, implicit flow doesn't work well with Angular because of the # that is also used for URLs, but I'm sure I can make it work if I tweak it a bit...
Yep, can easily be solved using: https://github.com/damienbod/angular-auth-oidc-client/issues/132
@deepu105 are you sure you don't just want to add a new stateless Keycloak/Okta option at this point to keep backward compatibility? I'm sure people will complain if we replace an option we already had, even if it's for a better one.
@yelhouti Can you try using https://dev-190319.okta.com/oauth2/default/v1/token as your base URL instead? If that doesn't work, make sure http://localhost:8080 is listed as a Trusted Origin in API > Trusted Origins.
If it still doesn't work, I believe it's because we don't allow CORS to our token endpoint. It works in mobile and Electron apps because they don't send an origin header.
I can tell you for sure that it doesn't work because you "don't allow CORS to our token endpoint". I can see that it's a 200, but JavaScript can't read it because of the missing CORS header from the server. This should normally be fixed by Okta.
Making the /token endpoint accessible from a browser is on our roadmap, but could take 2-3 months to see in GA. AFAIK, you’ll only be able to hit it from a browser if you’re using PKCE too.
@mraible I'm using PKCE so no problem with that. How do you suggest we move forward then please?
From my standpoint:
@yelhouti : the problem is not having 4 choices, the main problem is to maintain the option once it is in the main generator-jhipster.
We have too many tickets open right now, and I'd prefer to try closing/solving them rather than adding a new option.
That's the reason I proposed to implement your solution inside a blueprint:
That's why I love the Vue.js blueprint; it's the perfect example of what is possible to do.
So, like Deepu, I think the current options are enough: JWT, session, OAuth2, UAA.
Personally I'm against a blueprint for authentication. IMO for structuring options like this one it adds more maintenance burden (always keeping up with the main generator) and a discoverability issue (people don't know about it). Also knowing the security code, it is actually only a handful of files and I think it's totally possible to maintain an additional choice.
For me this Stateless OIDC option is good to have and probably will have a larger audience than the UAA solution as you can use any IdP. It would be a shame to pass on the offer of @yelhouti's contribution.
So +1 for me.
@PierreBesson I'm definitely against adding it as an option, as I already said. We have to think of our end users, and the majority of them are not security experts and would be confused by seeing 2 options for OAuth. Many of them trust us to provide the right defaults, and most of them believe that if we provide an OAuth option it's a good one. So I'll be more considerate of those users rather than a minority who are security experts and will know what to choose. So I stand by my decision, as it will only add more maintenance (it's not just a few files; you need to take the Angular, React and now Vue clients into account as well), more confusion for end users, and will benefit maybe a few people. So for me, the benefit doesn't outweigh the complexity. Whereas building it as a blueprint or module means the maintenance burden is only on @yelhouti and not on the entire community and core team.
@deepu105, Very good points !
I have one last idea. @yelhouti do you think it might be possible for your option to "merge" with the UAA option? In my understanding they do a similar thing; it's just that in the case of UAA we provide our own JHipster-based IdP vs using Keycloak/Okta.
@PierreBesson Unfortunately, there would be too much useless code generated in the frontend and backend for user/role management. What I could do, on the other hand, is make the UAA code easier to switch to another IdP and have a script/blueprint/module that removes/changes all the unusable code.
@yelhouti This sounds like a plan as long as @xetys is OK with it. Note that we do already have a --skip-user-management flag but I'm not sure if it's applicable in this case.
@PierreBesson thanks for the info, didn't know about that, I will try it. @xetys are you ok with this approach?
Thank you guys for your help.
Heyo! I saw my name mentioned in this thread, in regards to a suggestion I made to not use JWTs for web authentication.
I've covered this extensively in talks I've given, but I just wanted to chime in with one aspect that I didn't see discussed in this thread, but one which I think is probably the most important: if you're using JWTs for web auth, you aren't going to get any stateless benefits.
Here's what I mean.
Let's say that you modify JHipster to allow for users to authenticate and store a token in the browser using localStorage or sessionStorage (both of which OWASP says not to do, btw: https://www.owasp.org/index.php/HTML5_Security_Cheat_Sheet#Local_Storage). In this scenario, you're now opening yourself up to XSS, which is probably the most difficult type of web attack to prevent today.
But ignore that for a moment, because attackers can XSS you even if you're authenticated using a server-side session cookie, and can just make requests to your backend to accomplish stuff (albeit, it's much more difficult for them, since they can't make requests impersonating a user remotely unless the victim's browser is active).
So anyway: you've got your JWT as your session token now.
The problem is this: your JWT will have a timeout on it. That timeout is a variable that a developer will set to some value. For "stateless" benefits, most developers set this value to some amount of time > a few seconds, otherwise, they'd have the same old issues as typical server-side session cookies: they would need to retrieve the user's data out of some central db/cache.
So you've got this JWT now, and let's say it has a 10 minute timeout. Cool. If an average user makes 10 page requests per minute, you've essentially saved yourself 100 database/cache lookups, right? Yey!
But going back to the way this works, what happens if you want to revoke or change a user's permissions? Or what happens if the user updates their information? Or what happens if the user overruns their account balance and doesn't have enough money in their account to fulfill some operation?
Do you have any way to log this user out? To stop them from accessing your service?
The answer is "no", and that's the problem.
JWTs are stateless by design, but web authentication isn't meant to be stateless. The entire point of web authentication (and web security, in general) is to guarantee consistency of sensitive data like user information.
If a user is logged into your app as an admin, but then their permissions are revoked, if your server-side code is only locally validating that JWT then you're now in a compromised state. You have opened up the ability for users to do things they should not be able to.
The solution, of course, is to build a centralized revocation list. Some sort of DB or cache that keeps track of all issued tokens, so that every request a user makes to your site is then validated against this central DB to make sure the token hasn't been revoked, or the user's data hasn't changed since the token was issued.
But if you do this, you're now right back to square 1: everything is centralized, and your app is not stateless. Plus, you're now making the client send a larger token over the network to identify itself on every single request (because JWTs have user data embedded inside of them for statelessness), which means network requests are slower from the client to your server.
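To make the trade-off explicit, here is a small sketch of what that centralized check adds on every request (the revokedJtiStore below stands in for whatever shared store, e.g. Redis or a DB table, holds the revoked token IDs; all names are illustrative):

```typescript
interface DecodedToken {
  jti: string; // token id
  sub: string; // user id
  exp: number; // expiry, seconds since epoch
}

// Even with a locally verifiable signature, honoring revocation means an extra
// lookup against a shared store on every request, which is the "back to square one" point above.
async function isTokenUsable(
  token: DecodedToken,
  revokedJtiStore: { has(jti: string): Promise<boolean> }
): Promise<boolean> {
  if (token.exp * 1000 < Date.now()) {
    return false; // expired
  }
  return !(await revokedJtiStore.has(token.jti));
}
```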
All in all, I'm not a huge fan of using JWTs for web authentication. I don't see the benefit. The risk of using them is:
Hope that makes sense.
@rdegges I see we have a fervent defender of cookies here :). So here is why I disagree, and I believe you get benefits when using JWT:
If your access tokens are short-lived and you work with microservices, then for each request a user makes there might be multiple requests to the IdP to check the token, and revoking a refresh token is centralized on the IdP anyway (with the microservices still being stateless). So my point is: yes, web authentication is not really stateless (the IdP, by the way, has a session to allow SSO...), but client-server communication should be, to allow scalability... and the few seconds the access token stays valid are (and I think you would agree) in most cases not a problem.
To take your points one by one:
So, no, we can agree that the solution is not to build a centralized system; therefore, I don't agree with the points you make after that.
Now how about the benefits: well I mentioned them in my previous comments and you can find them all over the web.
Also, I sincerely thank you for taking the time and giving your feedback, since it allows the team to get a global view of all the pros and cons of each solution.
Now this part is mainly for the Core team @deepu105 @mraible @PierreBesson @ruddell @jdubois (and sorry if I forget anyone interested):
--skip-user-management (by @PierreBesson)
Hi @rdegges,
Thank you for your comments. You are not comparing the correct things, please don't isolate the JWT from the OpenID Connect context.
Also, please consider a highly scalable application with microservices and multiple technology stacks, they need a common language to authenticate and authorize the user request.
XSS has far more abuse cases than just stealing session ID, Tokens, etc. Happy to discuss more about XSS if needed.
Also, OpenID Connect delivers more security and privacy controls. I think your company (Okta) is a great believer in and supporter of OpenID Connect.
Thanks and Regards,
Vinod
Heyo!
Thanks for all the responses.
@yelhouti:
- if the user updates his data, at most a few seconds later the data is updated, and how often does someone change their name...
This scenario was an example. This is more problematic when it relates to authorization (like what is common in OAuth/OIDC with scopes). It's more of a problem when a user authenticates and gets a token that allows them to perform some action, but then that permission is changed or removed.
Then you have a security issue on your hands for the remaining duration of the token.
- For the user's balance, double-spending problems are not solved by cookies anyway; there is a centralized database for account balances, and usually only one microservice (used by all the others) keeps the info cached and checks the requests...
In this case, you are saying the same thing as me, e.g. there must be some central service that checks the user account/profile data/whatever related user info there is to validate that there is enough $$$ (or whatever data is a precious quantity) to complete a request.
In all of these cases, you will have centralization and a local validation will be insufficient.
- If I manually remove a permission for a user, 10 seconds would almost never make a difference IMHO.
In your apps this might be true, but this is literally the exact opposite of what security means =)
Authentication and authorization, by definition, will be "broken" if you cannot guarantee the consistency of your requests.
To be clear: I don't think there's anything wrong with using JWTs the way you're explaining here (particularly in the context of OIDC), so long as the developers building the systems know that they are making an explicit speed vs security tradeoff.
When you choose to use local validation + JWTs (like done in OIDC) you are basically shifting the burden of security correctness onto the end developer, and expect them to know/understand these complex tradeoffs in their codebases.
I've worked with hundreds of companies who've built products this way and they often times have absolutely no idea that because they're using 6-hour tokens they've essentially crippled their application security model.
That's why I even bother mentioning this stuff: very few developers know enough (or care enough) about security to really understand these tradeoffs. I don't think it's fair to shift from a secure authn/authz system to an insecure one and put the burden on the end developer to understand :(
- The way you block a user from accessing a service is by revoking the refresh token, which is centralized anyway and is part of the OIDC spec, but I think you already know that.
Noooooooooooo. As a matter of fact, refresh tokens shouldn't be used in almost any circumstance.
Your IdP should be maintaining sessions instead, and you should be doing transparent redirects and token refreshes that way, vs. leaving a refresh token around which increases risk.
Doing things this way still leaves you with the same problems I mentioned in my initial post: you cannot revoke user permissions, log users out, etc., because you are only relying on local validation. If your app needs the ability to do anything where permissions are being changed or removed, etc., you have to have centralization, and there are more efficient ways to do that than via JWTs.
Here's the way I generally position this stuff (high level):
If you don't care much about security, and want to use JWTs, use OIDC's Auth Code flow w/ PKCE. If you're building a server-to-server API, use OAuth Client Credentials. Never use refresh tokens.
If you care about security, just use a traditional session cookie approach. Nothing wrong with that. It's been around forever, is well vetted, and security professionals recognize it as the best way to handle web sessions.
@VinodAnandan:
Thank you for your comments. You are not comparing the correct things, please don't isolate the JWT from the OpenID Connect context.
I wasn't separating them. I was just using those examples to simplify the context, but all of that remains true for OIDC as well. OIDC with local validation is NOT a good solution if you care about security. If you decide to use token introspection on every request, however, along with the auth code flow, then sure, do whatever you want. Ya, it is less efficient, etc., but go for it.
Also, please consider a highly scalable application with microservices and multiple technology stacks, they need a common language to authenticate and authorize the user request.
If you're talking about server-to-server APIs here (which most microservices would fall into), OIDC will not help you (neither will JWTs) since the spec doesn't address that. The only solution you have is:
In both of these cases, those standards are well defined, and you can use them to handle your microservice auth in a simple way.
If you're building high-scale services in private networks (so your services aren't publicly exposed), I don't personally see anything wrong with going the client credentials route, and knowing that you're making a security tradeoff since you will be locally validating your tokens.
XSS has far more abuse cases than just stealing session ID, Tokens, etc. Happy to discuss more about XSS if needed.
Yes! I was just highlighting the obvious things here, but I'm pretty experienced with XSS =)
Also, OpenID connect delivers more security and privacy controls. I think your company (Okta) is a great believer and support for OpenID Connect.
Nooooo. OIDC does not deliver more security or privacy controls for first party apps. The entire reason that OAuth/OIDC was created in the first place was to handle delegated authn/authz, which is an entirely different problem.
If the website you're building will be exposing user data to third parties, then by all means, use OIDC auth code flow wherever you can to make your app easier for other services to interconnect with. But if you're talking purely about building a website that handles authn/authz, there are far more secure ways to handle things.
@rdegges I think we can agree that storing the $$$ or anything of that nature in a token is just plain stupid; plus, this is often handled by an ACID transaction in the DB anyway...
Furthermore, when you retrieve an access token with a refresh token, your permissions are updated at each refresh, i.e. every 10 seconds.
Access tokens are 10 seconds by default, and revoking a refresh token blocks the IdP from generating an access token using it, so your Noooooo is a bit of a stretch. Also, refresh tokens normally have the same lifespan as a session (10 min), and if not refreshed you are logged out.
And, as I mentioned in earlier comments, redirects don't work with native mobile apps, and you need the refresh token for that case. I can also agree that the implicit flow (but with an iframe) would solve that problem.
OIDC doesn't only help with third-party auth; it allows a company to have one place where users authenticate once and have only one password to access everything + SSO...
We are talking here about applications with microservices, not a simple website to showcase a company. I'm not suggesting adding OIDC to a WordPress website; it's JHipster...
Hi @rdegges
I think we have a different point of view on many things. In some cases, I may be wrong, if you could answer my following questions, it would be helpful.
Hi @VinodAnandan. I'll answer your questions inline.
Do you think localized Identity & Access Management is more secure than Federated/Centralized Identity & Access Management? if yes, why?
No. There's nothing wrong with centralizing or federating identity/access management. The only insecure parts are the ones I've mentioned, e.g., using local validation only. You can do federated IAM without local validation.
Do you think maintaining multiple individual identity stores without a proper security protection and monitoring controls will have less chance to get hacked than a centralized Identity store with more protection and monitoring?
I'm not really sure what you mean here by individual identity stores without security protection... Obviously, if whatever you are doing doesn't have adequate security then it will be worse than another option that is secure.
To that end, centralization doesn't really matter. If the centralized solution is better, I say go with that.
Do you think SAML is a secure alternative to OIDC for federation? why? Or do you have any other preference?
No. SAML is a bad choice. There are countless ways to mess up SAML implementations purely due to the XML-based nature of the standard. As a matter of fact, I just did a write-up on this not too long ago: https://developer.okta.com/blog/2018/02/27/a-breakdown-of-the-new-saml-authentication-bypass-vulnerability This sort of thing has been pervasive over years and years.
What are the problems related to validating a short-lived token with a digital signature? What is the problem if a highly sensitive application is utilizing token introspection? What can be the problems related to classifying applications into groups based on their security sensitivity and deciding token expiration based on that?
The problems with short-lived tokens + digital signatures are:
The problem with highly sensitive apps using token introspection is purely spec related. There's nothing high-level wrong with doing token introspection on every request, except that it is less efficient than normal session cookie authentication since the end-user is going to be passing larger tokens over the network on every request, vs a small signed session ID only. This impacts user experience in low-bandwidth scenarios.
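For reference, here is a rough server-side sketch of such an introspection call (per RFC 7662); the endpoint URL and the resource server's client credentials are placeholders:

```typescript
async function isTokenActive(accessToken: string): Promise<boolean> {
  // Placeholder client credentials of the resource server, sent as HTTP Basic auth.
  const credentials = Buffer.from("resource-server:secret").toString("base64");
  const response = await fetch("https://idp.example.com/oauth2/introspect", {
    method: "POST",
    headers: {
      "Content-Type": "application/x-www-form-urlencoded",
      Authorization: `Basic ${credentials}`
    },
    body: new URLSearchParams({ token: accessToken })
  });
  const result = await response.json();
  return result.active === true; // false once the token is revoked or expired
}
```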
If you want to know more about the spec issues/risk, you can read this excellent article: https://paragonie.com/blog/2017/03/jwt-json-web-tokens-is-bad-standard-that-everyone-should-avoid
The problem with using something like JWTs in JHipster by default is that you're pushing more security burden onto end developers, and are expecting them to know a lot of new things in order to build a reasonably secure app.
Whereas with session cookies a developer doesn't need to know anything to have a secure app, the same is not true of token-based authentication. Developers now need to know:
I'm just not a huge fan of giving end-developers that much extra work to do, especially when it relates to something that could have a very real security impact on an application. Developers are trusting y'all to do the right thing for them =)
We are in a cloud era. If people start to rely on their network security, don't you think a cloud IAM provider is a very insecure option? You recommend basic auth and client credentials, but do you agree that if one microservice gets hacked or its credentials get leaked, the attacker can steal all users' data from all other microservices?
First part: do I think that cloud IAM providers are an insecure option? I don't necessarily think so. It's probably at least as secure as doing it yourself.
Let's say that you are deploying an enterprise app on AWS. Well, if you don't use a cloud-hosted IAM provider (like Okta) for IAM, and you roll it yourself, where do your users go? Into RDS? In that case, they're still sitting right there on AWS, except now you, as the developer, have a lot more work to do.
In regards to my recommendation of using basic auth or client credentials: these are basically the only options for machine-to-machine authentication today. What other standards are there? You could potentially use PKI, but that is going to be a one-off and not easily understandable by most people.
Regardless of whether you're using OAuth or Basic Auth for server-to-server API security, if you leak your credentials you are screwed no matter what :)
If an end user can have visibility of all his authorized clients' active sessions and has the privilege to revoke access, is that a bad privacy design?
Not at all! But that functionality is ONLY POSSIBLE if you centralize your session management anyway! That is true of session cookies, JWTs, or any form of token auth. So in that scenario, the best option is to use session cookie auth regardless, since it is more secure and provides less risk.
Do you think the option to obtain end-user consent in OpenID Connect is a bad privacy practice?
Not at all. None of the things I said above have anything against that.
@rdegges this is the second time you mention this point:
- If you want to trade security for speed, you can use local validation and short-lived tokens, but then if you want to have revocation functionality you have to re-centralize again anyway, so....
And I really don't understand why you are trying to make it; obviously you must know how revocation works, and it being centralized with OIDC doesn't impact performance. When a refresh token is revoked, it can't be used again to retrieve an access token, and at the same time you remove all communication between the IdP and all the SPs. So what are you talking about when you mention re-centralizing revocation like it's a bad/un-optimized thing?
Another point you are trying to make is:
Whereas with session cookies a developer doesn't need to know anything to have a secure app, the same is not true of token-based authentication. Developers now need to know:
- What endpoints they have, and what security guarantees they need for each endpoint?
- What is an acceptable token lifetime/duration for their app, given their security needs?
- How to handle token introspection/local validation, and what each of those things means, and what performance considerations they have?
Well, as a matter of fact, unless you use microservices (the monolith is the encouraged default), you need to manually say that you don't want cookies, and in that case you are making the choice to handle your security yourself.
Also, even if you use token validation through endpoints, you need to configure the validation endpoint the same way you would the IdP public key/certificate (by the way, you use the same configuration file/URL to configure both: .well-known/openid-configuration). For token duration and introspection we provide the defaults; if you decide to change them to a bad value, that is still your choice...
So suggesting that the team would be encouraging the use of JWTs by default for small apps, even after our modifications, would not be factual.
Sorry for the spam guys (@deepu105 @mraible @PierreBesson @ruddell @jdubois) : but what do you think of my last proposition (keeping both after fixing the security issues)?
@yelhouti I would prefer to see this as a module first. That way, you can just support the features you’d like to without needing to make it work for React and Vue too.
If your module becomes popular, we can merge it into the main generator. This also allows you to develop and release improvements at your own pace.
Hi @rdegges ,
I really appreciate your reply. I am definitely interested to continue the conversation. If you prefer a private communication, you may contact me at [email protected]
Do you agree that we use the session as a way to identify a user as HTTP/s is a stateless protocol?
Is there any centralized session management technology which takes care of integration with multiple technologies in a microservice world, which also can be configured with different session security properties ( Idle Timeout, Absolute Timeout, Renewal Timeout, Simultaneous Session Logons, Session Revocation, etc. ) based on the security sensitivity of the application?
In a microservice model, if you provide access to the data based on user identity, even if a user's access token gets compromised the attacker can only take particular user data from other microservices. A microservice doesn't have to trust any other microservices just because they are in the same network or they know some secret. When an Identity Provider issues an access token for a user, the main application/SPA/client application can carry forward the user's access token to all other microservices.
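In code, that token relay is as simple as forwarding the incoming bearer token (the downstream URL below is a placeholder; in JHipster this kind of relay is typically the gateway's job):

```typescript
// The microservice receiving this call validates the user's token itself, so it never
// has to blindly trust the caller just because it sits on the same network.
async function callDownstreamService(userAccessToken: string): Promise<Response> {
  return fetch("https://invoice-service.internal/api/invoices", {
    headers: { Authorization: `Bearer ${userAccessToken}` }
  });
}
```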
To implement an effective central Identity and Access Management system, the system should have end-to-end coverage. Consider a scenario where a user authenticated to an enterprise application via a central IAM solution, but the application was integrated in a way that it only utilizes the central IAM solution for the initial identity establishment (login), and after that the application handles its own local session management to identify the user's requests. If an attacker managed to steal the local session ID, it would be very difficult for the security team monitoring the central IAM, as well as the end user, to identify it and revoke the access.
I believe that the developer must know how his application's authentication and authorization work, at least at a high level. It doesn't mean that they need to learn the complete spec or implement the spec themselves. A lack of understanding may result in insecure apps with weak authentication and authorization.
In order to simplify this, the OpenID Foundation already provides different OpenID Connect implementation designs for various use cases and technologies (server-side application, single-page application, mobile app, etc.). The OpenID Foundation also provides certified and recommended implementations (https://openid.net/developers/certified/). Developers just need to utilize them.
@mraible , please check this library: https://github.com/IdentityModel/oidc-client-js/wiki
Hi @yelhouti:
And I really don't understand why you are trying to make it; obviously you must know how revocation works, and it being centralized with OIDC doesn't impact performance. When a refresh token is revoked, it can't be used again to retrieve an access token, and at the same time you remove all communication between the IdP and all the SPs. So what are you talking about when you mention re-centralizing revocation like it's a bad/un-optimized thing?
It definitely does impact performance. If you need to guarantee the integrity of the requestor, then you need to make a request to a centralized IdP at some point. There is no way to do that without centralization (unless you decide to broadcast your revocation list with cache invalidation to your nodes or something, which is a whole other discussion).
You're talking about refresh tokens here being revoked. But again: that doesn't matter, because the problem is with the access tokens that can't be trusted for the duration of their lifetimes.
If you want to configure your services such that on every request they make a token introspection request to the IdP to validate the user's identity/accuracy, then sure! Go ahead and do it =) You will not be trading speed for security in this case. But the question is: why do that? Now you've just increased the complexity and network burden of your clients for no apparent gain.
Well, as a matter of fact, unless you use microservices (the monolith is the encouraged default), you need to manually say that you don't want cookies, and in that case you are making the choice to handle your security yourself.
I'm not a Java developer and do not have any working knowledge of how this implementation would work. I was pulled into the thread for the security POV only, sorry.
Also, even if you use token validation through endpoints, you need to configure the validation endpoint the same way you would the IdP public key/certificate (by the way, you use the same configuration file/URL to configure both: .well-known/openid-configuration). For token duration and introspection we provide the defaults; if you decide to change them to a bad value, that is still your choice...
If you are providing default values for token introspection (any value) and are not doing introspection on every request by default, then you are potentially opening up holes in your appsec model for reasons I outlined previously.
Hi @VinodAnandan, I'll definitely shoot you an email separately. Thanks!
Hi @rdegges, thanks for your answers, I really appreciate it.
Indeed, I'm talking about refresh token invalidation and not access tokens, so I won't go into the details of your first paragraph. I'm not planning to send a request to the IdP on each request; this clearly goes against the performance I'm trying to gain.
So what we are discussing is: is it worth it to allow an attacker 10 more seconds with an access token (compared to a cookie, where the victim is "protected" once they close their browser), or should we try to defend against that?
If your data and actions are so sensitive that you can't afford that, having a cookie won't protect you, as the attacker can send requests through the victim's browser and you have no way to find out; these kinds of actions should in all cases be protected by something like a second factor... So am I trading security for speed? Not really.
For the implementation details, I was just answering your point about the defaults and how other developers trust us to make the right choices, and I'm saying: these are not the defaults...
Another good thing I see about not using sessions is that Kubernetes support will then be reachable ;-)
OK guys, this is not going anywhere, so I'm gonna make an executive decision as one of the project leads.
I'm in favour of not adding any new implementation to core and keeping the status quo. You can of course create a module/blueprint and demonstrate this so that people who are for the JWT option can use it, and as Matt said, if it becomes popular we can reconsider adding it to core. Hence I'm closing the issue, but of course feel free to continue the discussion, as the security insights are valuable to the community.
My decision is based on the facts below:
Let's revisit if it becomes an issue