Vault: Proposal: JWT Claim-based OIDC Auth Backend

Created on 23 Mar 2017 · 142 Comments · Source: hashicorp/vault

Kubernetes supports authentication (and group extraction for authorization) using OIDC (OpenID Connect) JWT id_tokens, see here for docs. Basically, JWT tokens are crypto-verifiable JSON key-value pairs called "claims".

For Kubernetes Auth, two such claims are used:

  • username (configurable) - indicating the subject of the token
  • groups (configurable) - indicating the list of groups the user belongs to (an illustrative decoded payload follows the list)
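
For illustration, a decoded id_token payload carrying both claims could look like the following (the claim names and values are made up here; each provider uses its own):

{
  "iss": "https://issuer.example.com",
  "aud": "vault-client",
  "exp": 1491857773,
  "email": "jane@example.com",
  "groups": ["team-infra", "team-payments"]
}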

Both KeyCloak and Dex are configurable OpenID Connect servers that can delegate to upstream identity providers (e.g. Azure or Google).

This proposal is about introducing an Auth Backend that is a configurable, generic OIDC backend that uses JWT token validation.

Contrary to what's been discussed previously in #465, OIDC doesn't require browser flows to be used, and as such is not an obstacle for Vault adoption. Tokens can be used in exactly the same fashion as GitHub personal tokens, by copy-pasting:

 vault auth -method=oidc token=<id_token>

In fact, this is exactly how K8S's kubectl is expected to be used, with the --token flag.

A couple of other considerations:

  • the TTL of the token is min(configured_max_ttl, expiration_of_id_token)
  • the configuration endpoint allows setting the upstream URL for verification and (optionally) a pinned CA cert for interacting with the OIDC provider
  • JWT claims enter the metadata of the token
  • there's a groups/ configuration endpoint that maps onto policies, similar to the GitHub backend

The K8S OIDC plugin seems fairly straightforward and could act as a basis for this work. We'd actually be willing to send in PRs for this if Vault maintainers would accept them :)
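
To make that concrete, here is a rough sketch of what the backend's login path could do, under the assumption that it uses github.com/coreos/go-oidc like the K8S plugin does; the claim names and the maxTTL parameter are illustrative, not a settled API:

package oidcauth

import (
	"context"
	"time"

	oidc "github.com/coreos/go-oidc"
)

type verifiedIdentity struct {
	Username string
	Groups   []string
	TTL      time.Duration
}

// login verifies a raw id_token against the configured issuer and maps its
// claims to what Vault needs. The claim names are hypothetical defaults; the
// real backend would make them configurable.
func login(ctx context.Context, issuerURL, clientID, rawIDToken string, maxTTL time.Duration) (*verifiedIdentity, error) {
	// Discovery: fetches <issuer>/.well-known/openid-configuration plus signing keys.
	provider, err := oidc.NewProvider(ctx, issuerURL)
	if err != nil {
		return nil, err
	}
	// Checks signature, issuer, expiry, and that aud matches our client ID.
	idToken, err := provider.Verifier(&oidc.Config{ClientID: clientID}).Verify(ctx, rawIDToken)
	if err != nil {
		return nil, err
	}
	var claims struct {
		Email  string   `json:"email"`
		Groups []string `json:"groups"`
	}
	if err := idToken.Claims(&claims); err != nil {
		return nil, err
	}
	// TTL of the issued Vault token = min(configured_max_ttl, expiration_of_id_token).
	ttl := time.Until(idToken.Expiry)
	if ttl > maxTTL {
		ttl = maxTTL
	}
	return &verifiedIdentity{Username: claims.Email, Groups: claims.Groups, TTL: ttl}, nil
}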

Most helpful comment

The plugin now has full tests for both OIDC discovery endpoint based and offline verification based workflows. It has been merged into the repo and is now included in vault master. As such, closing this.

All 142 comments

Hi there,

Consuming OIDC for auth is a totally fine endeavor IMHO. (What we're not targeting currently is Vault being a provider, but that's separate from this.) I'd suggest working up a doc with a proposed API and describing the overall structure and we can do a review before you commit to code writing. Thanks!

@mwitkow we are interested in this proposal too and happy to help where we can.

@jefferai here's a proposal design doc for the OIDC auth backend for Vault:
https://docs.google.com/document/d/1saxtZMuh3OYilpa0BvP5_Ien_6U5bqdqOUj5AATdTtc/edit?usp=sharing

The one thing where I'm lacking context is how Vault AcceptanceTests work, but a straw-man proposal for using them is outlined in the document.

@ericchiang, CoreOS Dex maintainer and K8S OIDC contributor for context.

@mwitkow Thanks for writing this up! I won't have time to get to it in depth until early next week, probably, but one thing I wanted to make you aware of up front: we do not allow the use of the PathMap/PolicyMap constructs in new backends, so you're not going to get an automatic API, and the proposal should sketch out what an API for the backend looks like.

@mwitkow hey yeah I'd be happy to provide code reviews and feedback on this! Overall content seems reasonable.

Please let us know if you need anything on the github.com/coreos/go-oidc side of things or if any explanations of OpenID Connect related things would be helpful.

Jeff, I understand why you guys don't want to support the path-to-policy mapping. Do you have an example of an API I could follow? As long as it's a mapping (JSON or path) from external names to Vault policy names we should be fine here 🙂

Hi @mwitkow ,

Generally these days we use config/ for config paths, roles/ for roles, users/ for users and groups/ for groups. Pretty straightforward. In your doc you are using teams, which probably would slot into the "roles" vernacular. It's pretty easy to change later on (at least until it's publicly released) but one of the reasons we don't use those maps anymore is that they make changing anything super not-straightforward :-)

Cool, would you rather have a separate config variable that matches JWT claims to Vault groups and roles then? (Can you link me to something that defines them more clearly in your vernacular?) That actually very much aligns with what we want to do 🙂

@mwitkow How familiar are you with Vault generally? Maybe it would be good to have a call at some point and discuss what you're trying to do, because who defines what, where, depends pretty strongly on what use cases you're trying to solve.

Sure, happy to grab a chat 🙂 anything that somewhat overlaps with the BST timezone would work. If you're on Gapps just stick something in my "Michal(weird symbol)improbable.io" calendar 🙂

@jefferai I've updated the proposal to follow the Okta Auth Backend, and remove the PolicyMap bit:

  • auth/oidc/groups - for mapping of a groups claim to policies - this is meant for teams of users
  • auth/oidc/roles - for mapping existing roles-based information from the token to Vault policies
  • auth/oidc/users - for additional mapping of users to groups and roles

Please see:
https://docs.google.com/document/d/1saxtZMuh3OYilpa0BvP5_Ien_6U5bqdqOUj5AATdTtc/edit#

Would be great if we could get this signed off before we start dabbling in code :)

@mwitkow: I've been talking to @jefferai regarding #2571 and I think it may be possible to incorporate your proposed changes into mine. There are some concerns that mine won't be very widely-usable because most Oauth2 providers don't make the password flow available to 3rd parties, however I believe it wouldn't be too difficult to add the ability to use an auth code or OIDC token as well. Currently, I'm using the golang/oauth2 library and also following the Okta backend's general pattern. I'm not as familiar with the OIDC flow you're looking to implement, so please take a look at what I've already written and see how tall an order you think it would be to implement in the same backend alongside my existing changes vs. starting from scratch.

@mwitkow For some more context around what @mikeokner said, Ping Identity doesn't return an OIDC token with their flow (just an OAuth2 bearer token), but it does implement the UserInfo endpoint that provides the same parameters. We've been backchanneling and I suggested that they take a look at this proposal and how it could be modified to fit this kind of workflow, where it can also perform the OAuth2 handshake, and use a UserInfo endpoint as the source of truth rather than only a token. It'd be nice to have a sort of full-service oauth/oidc backend rather than carving things out across multiple backends (potentially with a lot of duplicate code). They have some code that does a fairly standard OAuth2 resource owner flow that could serve as a start towards a combined proposal.

@jefferai @mikeokner OIDC is a superset of OAuth2. It basically adds:

  • an Identity Token that is returned alongside the Access Token
  • a discovery endpoint that is used to advertise where the necessary OAuth2 endpoints are located

As far as I can see Ping does support optional OpenID Connect 1.0, and it does return Identity Tokens, so you should be able to integrate it.

The doc https://github.com/coreos/dex/blob/master/Documentation/openid-connect.md by @ericchiang is probably a great introduction to OpenID Connect.
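
For context, discovery means the operator only configures the issuer URL; everything else is fetched from <issuer>/.well-known/openid-configuration. A minimal go-oidc illustration (the function name is invented for this sketch):

import (
	"context"

	oidc "github.com/coreos/go-oidc"
)

// discoveredEndpoints shows that only the issuer URL needs configuring;
// go-oidc fetches the discovery document for everything else.
func discoveredEndpoints(ctx context.Context, issuer string) error {
	provider, err := oidc.NewProvider(ctx, issuer)
	if err != nil {
		return err
	}
	// OAuth2 endpoints advertised by the discovery document.
	ep := provider.Endpoint() // ep.AuthURL, ep.TokenURL
	// Other discovery fields can be decoded as needed, e.g. the UserInfo URL.
	var disc struct {
		UserInfoEndpoint string `json:"userinfo_endpoint"`
	}
	if err := provider.Claims(&disc); err != nil {
		return err
	}
	_ = ep
	return nil
}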

Personally, I would prefer an OpenID Connect approach, as it is a standard that doesn't require users to configure things such as:

  • pointing at the UserInfo endpoint
  • specifying which JSON paths in the UserInfo response contain user data, etc.

@mwitkow We fully realize it's a superset of OAuth2. And since Ping (sort of) supports OIDC that's a reason I'd like to see them merging.

However:

  • Ping doesn't _always_ return an ID token depending on how you auth, even though in those cases it does support the UserInfo endpoint
  • Not requiring users to fetch OIDC tokens ahead of time (for not just Ping but other providers that support OIDC tokens being returned) can be nice UX.

Ping doesn't always return an ID token depending on how you auth, even though in those cases it does support the UserInfo endpoint

The id_token is mandatory in the response[0]. Is the issue that users will login to ping not using the openid scope? If Ping supports OpenID Connect, there surely must be a way to always get an ID token.

Not requiring users to fetch OIDC tokens ahead of time (for not just Ping but other providers that support OIDC tokens being returned) can be nice UX.

But they still need to fetch an access token which can hit the userinfo endpoint?

Are these concerns around Ping's OpenID Connect implementation or general OpenID Connect?

[0] https://openid.net/specs/openid-connect-core-1_0.html#TokenResponse

@jefferai

Not requiring users to fetch OIDC tokens ahead of time (for not just Ping but other providers that support OIDC tokens being returned) can be nice UX.

In that case, the vault CLI client would need to implement the OAuth2 flow. The reason I opted out of it in the proposal is that it is:

  • complicated to do: implementing an HTTP server in a CLI command with a browser AuthCode flow (e.g. popping out a browser)
  • tricky to get secure: you need to store the refresh token somewhere securely on disk, which means that vault needs to keep two secrets (Vault Token and Refresh Token)
  • not really standard: different providers expect different scopes, different endpoint paths etc., requiring extensive configuration (client IDs and secrets, scope configuration, endpoint configuration etc.)
    Having built one of those, it is hard to make it "generic".

The reason the design has a "ready-made" token passed in is to simplify all this and make it "pluggable" by some third party (such as a subcommand).

The id_token is mandatory in the response[0].

_If_ what's coming back is an OIDC TokenResponse.

Is the issue that users will login to ping not using the openid scope? If Ping supports OpenID Connect, there surely must be a way to always get an ID token.

According to the people currently running a Ping identity server who have tried this out, it doesn't, because it depends which OAuth2 flow you've used to authenticate to Ping. Some flows return only OAuth2 bearer tokens, some also return OIDC tokens. Maybe @mikeokner can get in touch with Ping support and find out if there is a way to enable it for all flows, but they have not yet found a way to do this. But OIDC does mandate a UserInfo endpoint, and Ping appears to always support that whether you're coming in with a bearer token or OIDC token.

And, generally speaking, there are lots of OIDC providers where the nicest user experience isn't to go log in through a web site and copy/paste a token but to simply provide the user/pass to Vault's login endpoint.

The id_token is mandatory in the response[0]. Is the concern that users will login to ping not using the openid scope? If Ping supports OpenID Connect, there surely must be a way to always get an ID token.

We haven't been able to get an ID token back when using the resource owner password flow. Other flows do return it. I don't know enough to say whether that's a failing on Ping's side or a config somewhere we're missing.

In that case, the vault CLI client would need to implement the OAuth2 flow.

That's exactly what I did :)

But they still need to fetch an access token which can hit the userinfo endpoint?

complicated to do: implementing an HTTP server in a CLI command with a browser AuthCode flow (e.g. popping out a browser)

Those two reasons are why we went with the resource owner password flow in my PR, not auth code. It's an available flow with our IDP and makes things easy for the end user.

I could see a combined implementation that allows auth with either username/password or token, and then reads group assignments from the userinfo URL or id token if present.

@mwitkow Honestly, I don't really see why passing in a bearer token and hitting up UserInfo vs. passing in OIDC only is really a problem.

Ah okay. I took a look at https://github.com/hashicorp/vault/pull/2571

We haven't been able to get an ID token back when using the resource owner password flow.

A lot of providers don't implement this flow, and I think they're right in avoiding it. It steps around any 2 factor auth that the provider might implement, for example.

@mwitkow Honestly, I don't really see why passing in a bearer token and hitting up UserInfo vs. passing in OIDC only is really a problem.

Because the id_token is signed and has things like the intended audience (client_id) so others can ensure that the user logged in through an authorized client.

A lot of providers don't implement this flow, and I think they're right in avoiding it. It steps around any 2 factor auth that the provider might implement, for example.

Depends, e.g. if the provider sends a push notification for 2nd factor before responding with authorization information (Ping supports this as do others).

Because the id_token is signed and has things like the intended audience (client_id) so others can ensure that the user logged in through an authorized client.

And yet, in cases where this isn't possible, getting the info from UserInfo still allows for significant code reuse and serves the needs of other users.

I looked at #2571, it only implements part of the OAuth2 spec, namely the Client Credentials Grant (https://tools.ietf.org/html/rfc6749#section-4.4). This is the least secure form of OAuth2 authentication, and most providers do not provide it. It basically prevents strong security by passing credentials (passwords) through a third party (in this case vault), and removes the possibility of using two-factor auth.

The most used, and most widely supported OAuth2 flows are:

  • Authorization Code Grant (https://tools.ietf.org/html/rfc6749#section-4.1), for interactive authentication (redirect flow through a browser), which means that the password and other creds (e.g. 2FA) are only exchanged with the identity provider
  • Refresh Token grant (https://tools.ietf.org/html/rfc6749#section-6) for refreshing a previous authentication.

The PR in #2571 doesn't support them, as it doesn't deal with Access Tokens at all.

@jefferai I would be ok with having a combined OIDC and OAuth2 provider, if both were only based on Tokens:

  • OIDC flow validates the token assuming it is an Identity Token
  • OAuth2 flow takes a token, treats it as an Access Token and hits up a UserInfo endpoint (a sketch of this lookup follows the list)
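
A minimal sketch of that second branch, assuming go-oidc's UserInfo helper (the function name is invented for illustration):

import (
	"context"

	oidc "github.com/coreos/go-oidc"
	"golang.org/x/oauth2"
)

// claimsFromAccessToken treats the submitted token as an OAuth2 access token
// and lets the provider's UserInfo endpoint validate it and return the claims.
func claimsFromAccessToken(ctx context.Context, provider *oidc.Provider, accessToken string) (map[string]interface{}, error) {
	ts := oauth2.StaticTokenSource(&oauth2.Token{AccessToken: accessToken})
	info, err := provider.UserInfo(ctx, ts) // a single HTTPS request
	if err != nil {
		return nil, err // invalid or expired tokens fail here
	}
	var claims map[string]interface{}
	if err := info.Claims(&claims); err != nil {
		return nil, err
	}
	return claims, nil
}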

Depends, e.g. if the provider sends a push notification for 2nd factor before responding with authorization information (Ping supports this as do others).

This is not part of any OAuth2 standard. Can you provide a list of providers other than Ping that support it?

That's exactly what I did :)

By auth flow, I meant the Authorization Code flow, which requires browser interaction and a callback URL to a localhost webserver. I don't see that in your PR, is it in another branch perhaps?

I looked at #2571, it only implements part of the OAuth2 spec, namely the Client Credentials Grant

It's actually the Resource Owner Password flow, but yes, most providers (especially public SaaS offerings like G-Suite) don't provide it. But as @jefferai mentioned, it doesn't totally preclude the use of MFA for providers like Ping that use a push notification and app before responding to the original request with an auth token.

For providers who do provide it, it's the most convenient option for Vault cli users as there's no need to pre-fetch a token or pop a browser. It's part of the official oauth2 spec, so I don't see why a generic oauth/oidc backend for Vault shouldn't implement it. The Okta backend already uses username/password, so it's not like Vault isn't already handling external credentials, either.

Edit:

This is not part of any OAuth2 standard.

MFA isn't anywhere in the spec. How the resource owner is actually authenticated, how many factors, etc., is solely up to the provider to implement.

And yet, in cases where this isn't possible, getting the info from UserInfo still allows for significant code reuse and serves the needs of other users.

@jefferai when developing these plugins for Kubernetes, we found the more common case is that the server (Kubernetes, Vault) doesn't want to have to trust _every_ client that the provider issues. Trusting every client eliminates the ability to use public providers (e.g. Google), and even in private provider deployments it means that you have to have close control over who can be issued client credentials.

For this alone I think accepting id_tokens would be a good idea, even if Vault chooses to accept access_tokens as well.

@ericchiang I'm not suggesting ignoring id_tokens, I'm suggesting that if what comes in isn't a JWT that it can fall back to a UserInfo lookup.

(Recognizing that this puts more configuration requirements (or possible .well-known support) into the backend!)

Despite the OIDC spec's claim that it's a superset of OAuth2, it really isn't. It drops two flows (client credentials and resource owner) that are part of the OAuth2 spec from its own definition, which leaves ambiguity about how a provider should support them in what I'll call "OIDC" mode. Ping has chosen not to. The specs are a complete mess, but that doesn't mean we can't make this work. For purposes of users accessing Vault, the Resource Owner flow provides -- by far -- the best experience.

While the Resource Owner flow is indeed generally discouraged, and not supported by some public implementations, it's perfectly fine for situations where you're running an IDP with your own organization's identities and you fully trust the resource server (in this case Vault). Both the Okta and LDAP backends are already collecting credentials on behalf of the user. This would be no different.

We (I'm with https://github.com/hashicorp/vault/pull/2571) are not proposing that we wouldn't also support the use you suggest. We're trying to come up with a common implementation so there doesn't need to be multiple flavors of an OAuth/OIDC backend. Your use case is valid, and so is ours.

As for MFA being part of a spec, it's not part of OAuth2, OIDC, or SAML. It's completely up to the provider if and how they support it. Ping does though. Others may too, but that's up to their own implementation.

Jeff, from a code reuse point of view, the only code shared between the two implementations would be the users, groups and roles policy mappings.

The OIDC flow doesn't require talking to an OAuth2 server at all; instead it depends on JWT parsing, discovery URL parsing and cert fetching (on initialization). All of this is done inside go-oidc.

I would love to see a full-spec OAuth2 implementation with AuthCode+UserInfo and the MFA flows. They all require talking the OAuth2 protocol per request.

So for practical purposes I'd recommend splitting them into OAuth2 and OIDC backends.

I'll do a strawman PR next week for a pure OIDC one to demonstrate.

As long as you don't foresee ever wanting to add anything to yours to obtain the OIDC token, then you are right that it wouldn't have a ton of code reuse. They're still in somewhat overlapping subject areas though. That said, we can keep them separate if Jeff's OK with that.

Jeff, from a code reuse point of view, the only code shared between the two implementations would be the users, groups and roles policy mappings.

Maybe @ericchiang can chime back in with some additional suggestions, but coreos/go-oidc appears to heavily leverage the golang/oauth2 library I'm already using.

So, you'd end up with a pretty substantial amount of shared code in my opinion:

  • backend.go: Branching to make proper auth request and handle parsing/validation of token. No changes to how groups are applied to users.
  • cli.go: Minimal addition to accept token.
  • path_config.go: Minimal addition of necessary OIDC config params. go-oidc still uses oauth2.Config so a good chunk is shared.
  • path_groups.go: No changes, 100% shared
  • path_login.go: Minimal addition to accept token
  • path_users.go: No changes, 100% shared

Maybe @ericchiang can chime back in with some additional suggestions, but coreos/go-oidc appears to heavily leverage the golang/oauth2 library I'm already using.

From the code @mwitkow is proposing, the CLI wouldn't have OAuth2 login logic, it'd only accept a bearer token that happens to be an ID token. All the OIDC code is server side. It actually doesn't even need the OAuth2 package, but will import it because we reference it in the package API.

Aha, so just to be sure I'm understanding this completely, when the user authenticates via Vault CLI by executing something like:

$ vault auth -method=oidc -token=<token>

Then <token> would be an OIDC ID token already acquired somehow from a token response? I was thinking it would be an auth code but now that I think about it, I don't really think either is a great solution. I'm unclear on how the user is expected to actually get that token in the first place. In my experience, GitHub making it easy to get a personal access token outside a browser-based redirect flow is the exception, not the rule.

@mwitkow, you initially stated that

Contrary to what's been discussed previously in #465, OIDC doesn't require browser flows to be used, and as such is not an obstacle for Vault adoption.

but I think that's only because half the flow is being ignored in your proposal. How is the user supposed to get a token in the first place? Is there a standard Oauth2 or OIDC process for acquiring tokens programmatically or manually or any other way outside a browser redirect flow?

Then <token> would be an OIDC ID token already acquired somehow from a token response?

Yes, that's what we do in Kubernetes. Companies build an OAuth2 client portal that returns the id_token to the user. We actually prefer it because different companies have different needs for those portals: branding, authz requirements (only people with this email domain can login), etc. Users log in through these, then are presented with commands to run which configure the command line tool (or download a configuration).

We have an extremely bare-bones implementation over in the dex repo of an example client[0].

How is the user supposed to get a token in the first place? Is there a standard Oauth2 or OIDC process for acquiring tokens programmatically or manually or any other way outside a browser redirect flow?

There aren't good standards for non-browser based flows, or at least no good standards that a lot of providers implement. kubectl for instance doesn't login, but instead consumes a pre-fetched id_token (with optional refresh token).

OIDC providers that do implement non-web flows can also let their own command line tool fetch the id_token; it's generally out of scope as far as kubectl is concerned.

[0] https://github.com/coreos/dex/blob/master/cmd/example-app/main.go

From my perspective I don't particularly care about where the token comes from; building the resource owner flow into a server-based backend is possible but seems like it's best as part of a larger oauth2/oidc effort to not duplicate work/code, and I've had too much experience with oauth2 to think a generic oauth2 backend can ever really work unless we add in a redirect_uri capability. Which is a separate conversation.

Having that be something that is a particular auth-method added to the CLI is possible -- several of the backends actually share a single CLI helper and how it's invoked triggers some minor behavioral differences, but that could be a way to work with different providers where the end result is to get an OIDC or OAuth2 token. The key downside with it being CLI specific is that you don't have a way to set any required API variables (IDs, secrets, etc.) in a way that hides it from the user, if needed, although since in the discussed flow they're providing a password it may be possible to simply spread those variables around to client machines without worrying too much (they'd still not be _public_). But keeping those variables actually secret requires a way to set them into the backend.

It doesn't seem like (unless I'm missing it) that anyone is insisting that fetching a token must be done by the backend itself. So if we can put that aside for the moment, it just comes down to: can a backend consume only OIDC ID tokens, or can it consume OAuth2 bearer tokens + userinfo endpoint. Either way the end data is the same (in theory). It doesn't seem all that onerous to do the latter, just a simple, single HTTP request.

So here are my questions to everyone:

1) Does anyone think that the token fetching flow happening outside the backend -- via a CLI helper, or a plain ol' shell script -- is a problem?

2) Does anyone have any reason other than ideological why the backend shouldn't accept OAuth2 token + userinfo URL in addition to OIDC?

Remember, the perfect is the enemy of the good. I understand (and am fully aware of, and have already been aware of) arguments about why OIDC tokens are better than OAuth2 + userinfo, but while Vault is opinionated it's not dogmatic. It tries to enable useful flows that meet varied needs. And from a maintainability perspective fewer backends is always better than more.

Personally I'd like to see token fetching stay inside the Vault CLI, and not require distributing and having users configure an OAuth endpoint, client ID, and secret before being able to start using Vault. That was the motivation for putting it in the backend. I've got hundreds of developers who will be using Vault. I'd like to just be able to say "download the vault binary, set VAULT_ADDR, run vault auth -method oauth2" and that's it.

@bkrodgers How would you avoid that if the token fetching was in the Vault CLI?

@bkrodgers Also do you consider the client ID/client secret to be secret from those users? Or just secret to your org, since they also are providing their own credentials?

In our implementation on the PR, the user/pass the CLI collects get passed to the backend, which calls Ping for us. So the backend config has that info. To do it client side, yes, we'd need to distribute that one way or the other, which is what I'd ideally like to avoid.

I don't necessarily consider it particularly secret. It wouldn't let them into Vault, though there's a (low) risk it can be used to create a phishing attack of some sort. Were you thinking the back end could have an unprotected endpoint to allow the CLI to fetch them? That'd work, but keeping it in the backend seems somewhat more secure.

In our implementation on the PR, the user/pass the CLI collects get passed to the backend, which calls Ping for us.

This is not what your org has been saying on this issue, which is that the CLI fetches the tokens.

I'm not sure where Mike or I have said that. The code in https://github.com/hashicorp/vault/pull/2571 is pretty clear that it doesn't. It works the same way Okta and LDAP backends work. Ask the user for their creds, pass them to the backend, backend calls auth provider.

I'm not sure where Mike or I have said that.

https://github.com/hashicorp/vault/issues/2525#issuecomment-292585336 for example.

OK, long thread. Mike was mistaken saying that. Regardless, no, it doesn't. Code has been consistent. :)

@bkrodgers Haven't looked at any code, since this is sort of a pre-code discussion. As a result, details in the discussion are important :-)

Jeff, how about this as a proposal:

We implement a single OAuth2 backend. It accepts OIDC Identity tokens and Access Tokens+UserInfo. The backend is simple because it doesn't do any OAuth2 flows; the only call is a simple UserInfo fetch configured via a URL.

We make the client simple, without any OAuth2 logic (a single token field). We document and rely on users to provide a separate bash script or program. This fits with what Eric said: a lot of clients have custom logic (e.g. supporting Google IAM service account flows for Access Tokens).

We can implement the Resource Owner flow in a separate client that gets an access token and passes it to the backend.

Thoughts?

Rather than distribute a helper client just to get the token, I'll probably end up forking Vault and distributing an integrated CLI with that included. I'm really trying to keep this straightforward for my users.

Sounds drastic. Give me some time to think, please.

It accepts OIDC Identity tokens and Access Tokens+UserInfo.

Want to reiterate that accepting an access token means you can't use this with a public provider because you can't limit the client it was issued through. E.g. this feature won't work with Google. edit: _"won't work" is a bit strong, see my comment below for an explanation._

That's fine but needs to be called out in documentation.

@ericchiang What do you mean by "won't work with Google"? The main point of the original proposal is that you don't make calls to a third party because you're relying on the JWT signature and simply matching claims. That's not going to change if the claim information is being fetched a different way in some cases. If Google already gives you a JWT you're not going to be calling back to Google for any reason are you? Or are you implying that you'll still need/want to call the userinfo endpoint?

Brian, why fork vault as a whole, instead of just starting a separate contrib project that does OAuth2 flows (including Resource Owner) on the client side and wraps the vault CLI (e.g. as a single binary)?

No need to fork at all. I have some ideas. Like I said, just give me some time to a) think on it and b) discuss with the team. But also trying to make sure I have all the context e.g. Eric's most recent comment.

@jefferai If you accept an access token to talk to the userinfo endpoint, you can't determine what client issued that access token. So as an attacker I could create an arbitrary client with Google, login to my own client, then send that access token to Vault and Vault will log me in.

Sorry, "won't work with Google" is maybe a bit strong. What I mean is anyone will be able to talk to Vault, and it's up to Vault to do the ACLs.

If you only accept ID tokens, you can ensure they went through a trusted OAuth2 client, and not any random client that Google issued.

Edit: that's what I was trying to point out in this comment https://github.com/hashicorp/vault/issues/2525#issuecomment-292586980

If you accept an access token to talk to the userinfo endpoint, you can't determine what client issued that access token. So as an attacker I could create an arbitrary client with Google, login to my own client, then send that access token to Vault and Vault will log me in.

Only if the claims from the userinfo endpoint match the claims configured on the role, and the info from the userinfo endpoint is dependent on your token.

This isn't to say that there aren't spoofing concerns on both sides, especially in cases where you're looking for user-defined information. e.g. if a user can set an arbitrary claim in their account that Google will sign when issuing an ID token, and that's all you're matching on, you are opening up yourself to spoofing; same if you only match that from the userinfo endpoint.

Only if the claims from the userinfo endpoint match the claims configured on the role, and the info from the userinfo endpoint is dependent on your token.

Correct. If Vault enforces the ACLs correctly then it _should_ be okay. We've shied away from doing this in Kubernetes, but I'm not as familiar with Vault's authz layer. Just felt it was important to call out.

With an access token, if that token is generated out of band (including directly from the CLI calling the IDP), Vault on the backend would need to validate the access token. That would give it the opportunity to see the client ID that created it and ensure it is what it expected. The problem is that validating reference access tokens is not currently in a finalized standard. There is one in draft, but right now each provider has its own custom grant extensions for that. Some providers also support JWT access tokens, which can be validated without calling the IDP, but that's not universal either.

By having the backend make the call to get the token, it receives the token directly from the IDP and can thus rely on TLS for integrity and skip that validation. But otherwise Vault would need to validate the access token and its client ID.

@ericchiang Sure, appreciate you making sure we're aware!

@bkrodgers A userinfo endpoint would validate the token implicitly. If the administrator configures the correct endpoint (and it's over TLS, of course) then if the endpoint returns correct data with correct claims for the token provided by the client it should be appropriately validated.

Just as a heads-up, we're also having some internal discussions about this and we'll get back to you all with some more concrete thoughts after we're done. (These discussions are not only "how should we do X", but like, if it's done way X or Y, what's the overall maintainability story, what are we wanting to take on, what lessons have we learned from prior work, etc.)

It's late on the east coast today and next week we have several teammates out on vacation and/or trips so we'll be a bit slower than usual. But we'll definitely keep an eye on this.

Cool, thanks. I appreciate the thought and input from everyone. Hopefully we'll come to a solution that will meet everyone's needs.

Userinfo endpoint would be validating the token, though it won't tell you what client ID created it, if you care to validate that. To check the client it'd either need to be a JWT access token or send it to the IDPs token introspection endpoint. Here's the draft standard I'm referring to for that: https://tools.ietf.org/html/rfc7662. Not sure if anyone's started implementing it yet.
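
For context, a hedged sketch of what such an introspection call looks like per RFC 7662 (the endpoint URL and client credentials are provider-specific; none of this is existing Vault code):

import (
	"context"
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"strings"
)

// introspect posts a token to an RFC 7662 introspection endpoint and reports
// whether it is active and which client it was issued through.
func introspect(ctx context.Context, endpoint, clientID, clientSecret, token string) (bool, string, error) {
	form := url.Values{"token": {token}}
	req, err := http.NewRequest("POST", endpoint, strings.NewReader(form.Encode()))
	if err != nil {
		return false, "", err
	}
	req = req.WithContext(ctx)
	req.SetBasicAuth(clientID, clientSecret) // the caller authenticates too
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return false, "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return false, "", fmt.Errorf("introspection failed: %s", resp.Status)
	}
	var out struct {
		Active   bool   `json:"active"`
		ClientID string `json:"client_id"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return false, "", err
	}
	return out.Active, out.ClientID, nil
}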

https://github.com/hashicorp/vault/issues/2525#issuecomment-292585336 for example.

Sorry for the confusion. I misread that as the Vault server implementing the flow, not the CLI specifically. In my PR, the CLI just collects the credentials from the user and passes them to the server. The server performs all interactions with the oauth provider.

Personally, I don't think it would be a serious hurdle to handle several possible configurations within one backend that would hopefully make everyone happy:

  • JWT Token: User gets token out of band and passes via CLI, Vault server skips oauth flow and parses/validates the token. I have no preference on whether a bearer token to hit a userinfo endpoint should be allowed or not. Seems like an edge case that doesn't need to be addressed immediately.
  • Resource Owner Password: My current PR. Users pass their username/password via CLI, Vault passes them to the oauth provider. If it gets back a JWT token, great, do the above. If not, hits userinfo endpoint if configured or just reads local group assignments.
  • Auth code flow: Somewhat off in the weeds, but it'd be possible to add another flag to the CLI like -browser that just prints a URL for the oauth provider with query params set and directs the user to paste it into a browser. Since Vault is a HTTP server, it could be the one to handle the redirect back from the provider and just display a basic page saying "Now copy/paste this token to the CLI to complete the auth". CloudFoundry does something similar if you want to use SSO with their CLI. This should make it compatible with a much wider variety of providers who don't support the password flow.

Which flows a particular Vault instance would support would just depend entirely on how an admin decides to configure the backend and their provider.

Userinfo endpoint would be validating the token, though it won't tell you what client ID created it, if you care to validate that. To check the client it'd either need to be a JWT access token or send it to the IDPs token introspection endpoint. Here's the draft standard I'm referring to for that: https://tools.ietf.org/html/rfc7662. Not sure if anyone's started implementing it yet.

In my mind this is outside of Vault's scope; the identity server introspects when you call the userinfo endpoint so it's really up to it to decide if the token came from a valid client. Of course, nothing stops you from having the identity server attach claims that indicate the client and use that for matching...

@mikeokner It's trickier than you might suspect given Vault's architecture and the way it isolates backends from the core, but I've got some plans coalescing in my mind and in discussion with the team. It might be mid-late next week before there's a concrete proposal, but something will come -- I promise!

Just realized another issue with using the access token I'd like to run past everyone.

If I configure Vault with Google, and declare that "(my account)@gmail.com" has access to certain secrets, then I can _never login to a non-trusted OAuth2 client using that account_, because that client would be given an access token it could use to talk to Vault on my behalf.

Any website that implements a "Login with Google" flow (e.g. nytimes.com) could use the access token it gets from Google to read secrets from Vault. I'd essentially have to have a dedicated Google account just for Vault.

I really think this proposal should consider having an option where it only accepts ID tokens with a specific audience claim, and not access tokens.

Interesting, Eric. I'm starting to think that it is better to separate OIDC and OAuth2, with the former being the recommended one.

I really think this proposal should consider having an option where it only accepts ID tokens with a specific audience claim, and not access tokens.

So you submit an oauth2 token, hit up the userinfo endpoint, and it returns claims including aud. You match the aud claim. What's the problem exactly?

Userinfo isn't required to include the aud claim.

http://openid.net/specs/openid-connect-core-1_0.html#UserInfoResponse

It may, and if the response is signed, it (according to spec) SHOULD. And either way if you are requiring an aud match and it's not part of the userinfo, just reject because there's no match. Easy-peasy.

At least with our current setup with Ping, the UserInfo endpoint doesn't send that data back signed, and thus doesn't include the aud claim. I'd need to look further into whether it can be configured to include that. However, a JWT access token will have client_id in it, or call the token introspection endpoint for a reference token. The issue with the latter is that it isn't a finalized standard yet. However, Ping has already implemented it, and perhaps others have too.

@bkrodgers I thought Ping didn't give you a JWT token after OAuth is complete? If so, what's in it?

Ping won't give me an ID Token in the resource owner password flow, but it can be configured to give me an access token in JWT format. We can configure what we want to be in it, but here's an example:

{
  "scope": [
    "openid",
    "profile",
    "email"
  ],
  "client_id": "<client-id>",
  "iss": "<our-ping-server>/as/token.oauth2",
  "sub": "bkrodgers",
  "user_id": "bkrodgers",
  "OrgName": "Monsanto",
  "group": [
    "group1",
    "group2"
  ],
  "exp": 1491857773
}

So for such a token, Vault could check the client ID by validating the JWT and looking at the client_id field. To make it work with reference tokens and still validate client ID, Vault would need to call out to the token introspection endpoint.
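
A hedged sketch of the JWT case, reusing go-oidc's verifier: the audience check is skipped because such access tokens carry client_id rather than aud, and it assumes the token's iss matches the provider's discovery issuer (which is provider-specific):

import (
	"context"
	"fmt"

	oidc "github.com/coreos/go-oidc"
)

// checkIssuingClient verifies signature, issuer, and expiry, then compares
// the client_id claim against the client Vault expects tokens to come from.
func checkIssuingClient(ctx context.Context, provider *oidc.Provider, rawToken, wantClientID string) error {
	verifier := provider.Verifier(&oidc.Config{SkipClientIDCheck: true})
	tok, err := verifier.Verify(ctx, rawToken)
	if err != nil {
		return err
	}
	var claims struct {
		ClientID string `json:"client_id"`
	}
	if err := tok.Claims(&claims); err != nil {
		return err
	}
	if claims.ClientID != wantClientID {
		return fmt.Errorf("token issued through unexpected client %q", claims.ClientID)
	}
	return nil
}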

Wait, so Ping refuses to give a standard OIDC token in that flow but will give a signed JWT containing groups and a client id and a user id?

If you tell it to, yes. They support either JWTs or reference tokens for access tokens, and you can configure what data you want included. But I haven't found a way to get it to give you an ID token. I did put a question in to Ping support to confirm that isn't possible, but I'm pretty sure you can't.

Mumble mumble mumble PITA mumble mumble

OK, so I'm trying to scope out some work, so here are my next two questions, applicable to everyone:

1) Realistically, how many providers are out there that support only oauth2+userinfo (or some in-between bastardization like Ping that support variants but in a way that can be secure)
2) How many providers are out there that support a) normal oauth2 flows (e.g. redirects), b) return OIDC tokens from these flows, and c) support groups and other mechanisms that make them realistic to use for an auth backend to Vault?

Just to clarify, Ping's in-between bastardization only applies to the resource owner password flow, since it isn't included in the OIDC spec, which leaves ambiguity to the implementation as to what to do with the OAuth2 flows not specified by OIDC. It handles OIDC according to the spec for the implicit, auth code, and hybrid flows; those just aren't CLI friendly.

I don't have much hands-on experience with other providers out there, but I'll see if I can find any info on how they handle things. I know Gluu is one option I can download and play around with for free, and I'll check out docs for some others.

@ericchiang @mwitkow just pinging you specifically in case you haven't been following the recent discussion

  1. Realistically, how many providers are out there that support only oauth2+userinfo (or some in-between bastardization like Ping that support variants but in a way that can be secure)

Ping is the first I've seen.

  1. How many providers are out there that support a) normal oauth2 flows (e.g. redirects), b) return OIDC tokens from these flows, and c) support groups and other mechanisms that make them realistic to use for an auth backend to Vault?

Including groups, off the top of my head:

  • Azure Active Directory v2
  • Cloudfoundry's UAA
  • Okta

I don't know if Keycloak supports groups.

Notable ones that support OpenID Connect but don't provide groups:

  • Google
  • Salesforce

Also, as mentioned above. Dex, which I work on, acts as an OpenID Connect proxy. Users login through upstream providers like GitHub, LDAP, SAML, etc. and dex mints ID tokens. It supports groups.

  1. Realistically, how many providers are out there that support only oauth2+userinfo (or some in-between bastardization like Ping that support variants but in a way that can be secure)

Yeah, same here, I haven't seen anyone not support OIDC if they have userinfo, because it is a relatively trivial addition that makes it easy to "tick a box".

  1. How many providers are out there that support a) normal oauth2 flows (e.g. redirects), b) return OIDC tokens from these flows, and c) support groups and other mechanisms that make them realistic to use for an auth backend to Vault?

To that I'll add (with groups support):

  • Auth0 - not with groups, but with role claims
  • Okta - does support a groups claim; interesting, maybe we could deprecate the old Okta backend
  • Keycloak - does support groups mapping from LDAP, popular among Red Hat users

Looking at the Okta docs, specifically here, it looks like Okta behaves the same way Ping does in regards to the password flow. It doesn't return an ID token for the password flow either. The key difference is that Okta has a (proprietary) extension to the OIDC spec to allow passing a session token to the OIDC authorization endpoint to do API/CLI friendly authentication, which can be obtained using their (also proprietary) authentication API. Via that method, you do get an ID token, it appears, but that's not 100% following the OIDC specs either. So strictly speaking, Okta would appear to behave the same way as Ping as far as not giving an ID token for password grant type, but provides an alternate way to get around using password flow in the first place.

I can't tell for sure if Okta supports calling the userinfo endpoint with an access token obtained via password flow, but nothing I can see in the docs suggests it doesn't allow that. But if it does, the implementation we've proposed in our PR would likely work as is with Okta as well. That's not to say we'd need to deprecate the Okta backend, only that you probably could.

Just as an update, we had to push back an internal discussion on this due to travel schedules this week, so we should be discussing early next week. The main thing we need to talk about is whether we want to target a backend that can handle the redirect workflow, enabling it to work with many more providers. From a technical level, the way to do this in Vault could dovetail nicely with a couple of other future enhancements we had planned, so it's sort of a two-pronged question right now -- do we want to take on this work, and if so, how to make sure that the design we use to implement that doesn't constrain future features.

If we decide to go down that path, someone on the Vault team would handle the inner core changes, and work with anyone interested on this ticket to handle the needs on the backend. It does push up the scope of the original backend a bit, but not a huge amount; mostly it would change the interfaces and design a bit, which is why I want to have this discussion before commenting back on the original design doc.

@jefferai what's the ETA on a decision?

We actually need an OIDC backend in the next two weeks and are willing to contribute to building it. However, if it takes a while, we probably will just internally fork it and build it as specified in the design doc, as it would become a blocker for our internal developer/operator credentials system.

From our point of view, the redirect flow is superfluous, but it makes sense if it brings wider adoption. Just to clarify, what do you mean by the redirect flow?

  • client-side callback URL for an AuthCode flow (where the client is a web browser and sees the OIDC provider's redirect)
  • server-side callback URL for AuthCode flow (where the Vault server implements a callback URL for an identity provider)
  • something else?

@mwitkow Next week some time.

I'm confused by what you're proposing. Why would you internally fork it according to the design doc instead of just building it to the design doc in a normal public way with a PR and eventual merge upstream?

client-side callback URL for an AuthCode flow (where the client is a web browser and sees the OIDC provider's redirect)
server-side callback URL for AuthCode flow (where the Vault server implements a callback URL for an identity provider)

Server side; generally outside of client applications the client-side callback isn't useful. Since Vault is a remote server it would need to implement that method.

We were able to have a meeting a bit earlier than planned, so I have some feedback.

Here's what we want to do. It helps to skim #2200 to understand what's being proposed.

1) Have an oidc backend that, at a high level, is able to look at claims and assign policies based on these claims.
2) Define an interface that is suitable for defining how to get the claims. This can take multiple forms, discussed in a moment.
3) Write support for these different providers as plugins against the interface.

Going into broader detail:

For any given provider, details differ, both from a simple OAuth2 perspective and from an OIDC perspective. For instance, whether groups are defined in a groups claim or a roles claim -- to Vault this is a variant that needs to be handled somehow. It would be nice to avoid hardcoding values for different providers right into the backend, and it would also be nice to not have to have the user be aware of this type of behavior.

One way to handle this is to create interfaces that hide the differences. So if at a base level we care about individual users/groups, but not how they're encoded in a given OIDC token, we can imagine submitting an OIDC token to login against a backend role with a specific type. Then the implementation of the interface knows how to parse out the unique user and group values for that given provider.

We can take this a step further with providers that don't return OIDC but do allow a method of getting signed claims and using authorized clients, such as the Ping resource owner flow. A given interface implementation can be configured with the client id/secret, and the login path can allow submitting the resource password. The implementation could then turn the returned OAuth2 token into claims by hitting Ping's UserInfo endpoint, and those claims can be returned to the backend for processing/matching to policies.
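
A speculative Go sketch of such an interface; every name below is invented for illustration and is not Vault's actual plugin API:

import "context"

// Claims is the normalized result every provider implementation returns.
type Claims struct {
	Username string
	Groups   []string
	Roles    []string
}

// Provider hides per-provider differences: a "generic" implementation would
// verify an OIDC id_token, while a Ping implementation could hold a client
// id/secret, run the resource owner flow, and call the UserInfo endpoint.
type Provider interface {
	// Login turns whatever credential the login path received (an id_token,
	// an access token, or a user/password pair) into verified claims.
	Login(ctx context.Context, credential map[string]string) (*Claims, error)
}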

Going one step further, we can even take this approach and with some work in Vault's core, we can enable a normal OAuth2 server-side redirect authentication flow; this way users don't need to fetch OIDC tokens on their own, but can be given a resource that will resolve to a valid Vault token once they log in on the web via normal redirect flow means.

So similar to what we're doing in #2200, this can be built as a core of common code, a well-defined interface to handle providers of various types, and then plugins to actually enable those providers.

As far as the change of scope for the original proposal, it's actually not much, more a reorganization than anything else. There still needs to be a way to map users/groups/roles/etc. to policies so those API endpoints are necessary (but! there are actually several backends that have similar needs and really need the code refactored a bit into a library, so reuse is potentially possible); but the token parsing and verification will happen behind an interface satisfied by a trivial plugin (possibly a "generic" plugin for OIDC tokens that don't need anything special).

However, an advantage of this plugin approach is that we don't need all of the various providers implemented at once to launch the backend, and because the interface is internal we can update it if/when needed without affecting the public API.

Overall we believe that this will coalesce a lot of user stories around authenticating via OIDC (or OAuth-using identity servers that provide user info) and make it easy to support a growing list of these providers going forward.

Thoughts appreciated!

@jefferai, as far as I understand, #2200 is about providing a plugin system for auth. That's great!

However, I have a few key concerns.

OAuth Plugin or Generic Plugin

The proposal you outlined was fairly clear until:

We can take this a step further with providers that don't return OIDC but do allow a method of getting signed claims and using authorized clients, such as the Ping resource owner flow. A given interface implementation can be configured with the client id/secret, and the login path can allow submitting the resource password. The implementation could then turn the returned OAuth2 token into claims by hitting Ping's UserInfo endpoint, and those claims can be returned to the backend for processing/matching to policies.

At this point it is unclear to me what the input to this "GenericBackend" is. Is it:

  • OIDC token
  • Token: Access Token or OIDC Token
  • "any challenge": OIDC Token, Access Token or Password

While I can see how "Token" is defined enough (you have a plugin that takes it and returns a set of username, roles and groups), "any challenge" becomes murky. Doesn't this basically boil down to "take input from the user and return a Token", which is what an Auth Backend does anyway? At this point, Vault is better off exposing a "Generic Auth Backend" plugin system, and we should leave this discussion about OAuth2 and OIDC to rest until that's ready, when both can be implemented inside of that.

The Per-Provider Plugin assumption

At the same time, I want to provide a scenario that would challenge the following assumption:

It would be nice to avoid hardcoding values for different providers right into the backend, and it would also be nice to not have to have the user be aware of this type of behavior.
Then the implementation of the interface knows how to parse out the unique user and group values for that given provider.

Let's imagine person Y with a more systems-admin background, without much proficiency in Go. They just bought an Acme Auth provider for their Company X. Acme Auth supports OIDC and says: to configure third-party software to OIDC authenticate with Acme Auth (see linked documentation above) do:

  • set the discovery url to https://something.acme/discovery
  • use sub claim for username
  • use acme_groups claim for groups discovery

The person Y would know how to configure an OIDC backend with Kubernetes, their config takes in values stated in the documentation, i.e. the ones that are also relevant in OIDC. Now person Y looks at Vault documentation and sees: OIDC Dex, OIDC Google, OIDC Keycloak. But no OIDC Acme. And they don't know Go to write a plugin themselves.

As such I propose to have a Generic OIDC backend with the knobs outlined in the proposal. It wouldn't be hard for any savvy administrator to configure, and it would provide much better adoption for lesser-known use cases (e.g. we have our own OIDC provider).

Code sharing

As far as the change of scope for the original proposal, it's actually not much, more a reorganization than anything else. There still needs to be a way to map users/groups/roles/etc. to policies so those API endpoints are necessary (but! there are actually several backends that have similar needs and really need the code refactored a bit into a library, so reuse is potentially possible); but the token parsing and verification will happen behind an interface satisfied by a trivial plugin (possibly a "generic" plugin for OIDC tokens that don't need anything special).

I am not sure what you mean here. Do you mean:
a) there will be a generic library (used e.g. in Github or Okta) for storing a mapping from a concept of a "user", "group" and "role" to a policy that we can reuse
b) the "OAuth2/OIDC hybrid backend" should have one that's common and consumed by all these "plugins"?

If it's a), that's great. Most of my hacky oidc plugin code I have locally deals with that. The rest boils down to ~40 lines. If it's b) I'm more skeptical, as outlined above.

Also by "interface" and "plugin", did you mean the high-level "Database Backend" discussed in #2200 or a "OAuth2/OIDC hybrid plugin"?

as far as I understand, #2200 is about providing a plugin system for auth. That's great!

Only in the sense of database authentication credentials. It's about adding a way for there to be a common core of a database backend whose actual implementations that connect to the DBs and toggle users there are separated into plugins.

At this point it is unclear to me what the input to this "GenericBackend" is

Plugin, not backend -- just want to be clear here that this is about having a plugin system for this backend. Forget what I said about a generic plugin, I think it muddied the waters too much. My point was only that many OIDC providers use spec-defined or common claim names so there is a potential for a single plugin to handle many cases, but other plugins could easily handle variations.

Let's imagine person Y with a more systems-admin background, without much proficiency in Go. They just bought an Acme Auth provider for their Company X. Acme Auth supports OIDC and says: to configure third-party software to OIDC authenticate with Acme Auth (see linked documentation above) do:

set the discovery url to this
use sub claim for username
use acme_groups claim for groups discovery
The person Y would know how to configure an OIDC backend with Kubernetes, their config takes in values stated in the documentation, i.e. the ones that are also relevant in OIDC. Now person Y looks at Vault documentation and sees: OIDC Dex, OIDC Google, OIDC Keycloak. But no OIDC Acme. And they don't know Go to write a plugin themselves.
As such I propose to have a Generic OIDC backend with the knobs outlined in the proposal.

I don't really understand the issue/problem here. If they are using standard values that can be handled by a "generic" style plugin, great. (Maybe part of the config of this plugin is the key of the claims to be used for user/groups, so long as the values are expected, e.g. string and []string.) If the ACME auth provider does something custom, they can create a custom plugin that they can maintain on their own that can be plugged into Vault without needing to wait for us to make a release.

I don't really understand the issue/problem here. If they are using standard values that can be handled by a "generic" style plugin, great. (Maybe part of the config of this plugin is the key of the claims to be used for user/groups, so long as the values are expected, e.g. string and []string.) If the ACME auth provider does something custom, they can create a custom plugin that they can maintain on their own that can be plugged into Vault without needing to wait for us to make a release.

This is exactly what the current design proposes: parsing JWT claims and allowing the administrator of a Vault instance to configure the claim name fields. And this is what every OIDC (not OAuth2 + proprietary stuff) provider I know does:

  • Dex - see docs
  • Auth0 - see docs
  • Okta - see docs
  • Keycloak (can't find docs now)
  • UAA - see docs
  • OpenUnison - (can't find docs now)

They all either encode scopes, roles or groups inside simple OIDC claims (either as string or []string). The proposal as outlined hits all of them.

The whole point of my discussion was the following: If you have an ACME Auth provider, and it does OIDC, you most likely will not need custom code and a plugin to work with it.

Yes, but how do you get that OIDC token in the first place? In many machine cases it might be given to you, but for user cases you're very often going through a web-based flow; otherwise you may be providing credentials which are used on your behalf.

I did say that the scope of the original proposal was not going to have to change much! All that really needs to be done for your needs is for straight up hey-Vault-here-is-this-OIDC-token-I-already-have workflows, the actual parsing and/or verification will happen inside a simple plugin rather than directly in the backend code. It's a shifting of where this happens, not a shifting of what happens.

But this has a number of benefits down the line -- it makes it easier to write plugins that allow going through the entire flow from the user perspective (e.g. the web redirection flow). It also means that anyone who needs to do something slightly-to-very different from anything an existing plugin can handle only needs to fork and maintain a small plugin that can be loaded dynamically rather than fork and maintain the entire Vault codebase. (And, if it's just a small quirk, that can likely be handled by a config flag to an existing plugin rather than a new one.)

So the additional bits to what you originally proposed are mostly a) adding plugin logic, which is mostly going to be from helper libraries or a bit of copypasta from #2200; and b) defining the plugin interface. I have faith that the various interested parties on this ticket have enough experience to help that take shape.

actual parsing and/or verification will happen inside a simple plugin rather than

My question above was about the scope of the "plugins":

  • OIDC token
  • Token: Access Token or OIDC Token
  • "any challenge": OIDC Token, Access Token or Password

If it's the second, I agree. If it is the latter, it feels like scope creep entering Generic Auth territory. Assuming the second, I'd expect an interface similar to:

type TokenVerifier interface {
    // Verify validates the supplied token and extracts identity data from it.
    Verify(ctx context.Context, token string) (username string, groups []string, roles []string, err error)
}

But at this point, wouldn't it just be better to have a "Generic Token" authenticator with a Webhook configuration similar to: https://kubernetes.io/docs/admin/authentication/#webhook-token-authentication?
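To make that concrete, here is a hedged sketch of how the TokenVerifier interface above could be satisfied by a webhook in that spirit. The endpoint URL and the JSON response shape are hypothetical -- neither Kubernetes nor Vault defines them:

package webhookauth

import (
    "bytes"
    "context"
    "encoding/json"
    "net/http"
)

// WebhookVerifier satisfies the TokenVerifier interface sketched above by
// delegating verification to a remote HTTP service.
type WebhookVerifier struct {
    Endpoint string // hypothetical, e.g. https://auth.internal/verify
}

func (w *WebhookVerifier) Verify(ctx context.Context, token string) (string, []string, []string, error) {
    body, err := json.Marshal(map[string]string{"token": token})
    if err != nil {
        return "", nil, nil, err
    }
    req, err := http.NewRequest("POST", w.Endpoint, bytes.NewReader(body))
    if err != nil {
        return "", nil, nil, err
    }
    resp, err := http.DefaultClient.Do(req.WithContext(ctx))
    if err != nil {
        return "", nil, nil, err
    }
    defer resp.Body.Close()

    // Hypothetical response shape: the webhook resolves the token to an identity.
    var out struct {
        Username string   `json:"username"`
        Groups   []string `json:"groups"`
        Roles    []string `json:"roles"`
    }
    if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
        return "", nil, nil, err
    }
    return out.Username, out.Groups, out.Roles, nil
}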

user cases you're very often going through a web-based flow

Why would you want to implement the web-based flow on the server side? Neither OAuth2+UserInfo, nor OIDC needs it. Doing web-based auth flows server side is complicated:

  • they are custom to specific backends
  • they are hard to make secure in the web world, e.g. you need to implement CSRF cookies throughout them, and make sure anyone implementing a plugin does so. Otherwise anyone who can display a website to users on an intranet could mount cross-site request forgery attacks.
  • they can be tricky to secure, e.g. we would recommend users use client-side certificates to talk to Vault, and they may not be present in a browser

In our implementations internally (for our CLI tool), and I presume in Kubernetes (see here, @ericchiang will know better), the OIDC AuthCode browser-based flow is done inside the client with a "local web server" for the duration of the callback. This makes it more secure (the CSRF is unlikely as the endpoint lives for a short period of time) and easier to customize for users, since they only need to "fork" a client and the interface is just the "Token" itself.

loaded dynamically rather than fork and maintain the entire Vault codebase

Is this really where you want to go with Vault? I mean, I thought it would be an extensible piece of software with well-defined, "composable" but "service-oriented" interfaces. Do you think users would trust a dynamically loadable module system?

The point I'm trying to make here is: I actually really like Vault as an auth system, but I would prefer it to do the least possible thing and make it easy to wrap your head around. Dynamic plugins, interfaces and preloaders are hard to reason about and to "judge" the sanity of. I'd much rather have a few standards-based or generic backends that people can integrate things with separately.

If it is the latter, it feels like scope creep entering Generic Auth territory

What is "Generic Auth territory"?

But at this point, wouldn't it just be better to have a "Generic Token" authenticator with a Webhook configuration similar to: https://kubernetes.io/docs/admin/authentication/#webhook-token-authentication?

A plugin could certainly behave in this way, but it doesn't require them to, e.g. if the submitted login information (say, a fully formed JWT OIDC token) has enough information in it to avoid a call to a third party.

Why would you want to implement the web-based flow on the server side? Neither OAuth2+UserInfo, nor OIDC needs it.

This assumes you already have an OIDC or OAuth2 token. But we've had plenty of requests (and some PRs that we have so far declined to merge because of their lack of genericism) for auth backends that can handle the OAuth2 flow, because they don't want users to have to dig through UIs or APIs to get a token for Vault, they want their users to simply authenticate Vault directly. This lets Vault send a redirect address, have the user click on it to authorize Vault, then have Vault return a token to the user.

In our implementations internally (for our CLI tool), and I presume in Kubernetes (see here, @ericchiang will know better), the OIDC AuthCode browser-based flow is done inside the client with a "local web server" for the duration of the callback.

Vault doesn't have a way to display these pages. If you're doing this via textual means I'd be interested in that mechanism. I know the OAuth2 flows well but not the OIDC flows as much.

Is this really where you want to go with Vault? I mean, I thought it would be an extensible piece of software with well-defined, "composable" but "service-oriented" interfaces. Do you think users would trust a dynamically loadable module system?

Doesn't a plugin system add extensibility and composability? Why should users not trust plugins that their administrator has authorized?

The point I'm trying to make here is: I actually really like Vault as an auth system, but I would prefer it to do the least possible thing and make it easy to wrap your head around.

The problem is that I need to look at it from the perspective of a thing that solves use cases for many customers. I don't really understand the consternation you have with what I proposed, as for your particular needs the complexity is not significantly greater.

Dynamic plugins, interfaces and preloaders are hard to reason about and to "judge" the sanity of.

I think this depends very strongly on the specifics: when are plugins loaded, what are the interfaces, etc. I don't know where "preloaders" is coming into the conversation though... :-)

I'd much rather have a few standards-based or generic backends that people can integrate things with separately.

You want a backend that consumes OIDC tokens. I want that backend, when used with different providers, to be able to consume OIDC tokens generated directly through Vault's actions. This is directly as a result of repeated user requests and needs. I don't think we are at odds here; I just want more functionality to be able to be supported, and am proposing that a plugin interface be used to keep this complexity out of the core backend code to the greatest degree possible.

You want a backend that consumes OIDC tokens. I want that backend, when used with different providers, to be able to consume OIDC tokens generated directly through Vault's actions. This is directly as a result of repeated user requests and needs. I don't think we are at odds here; I just want more functionality to be able to be supported, and am proposing that a plugin interface be used to keep this complexity out of the core backend code to the greatest degree possible.

I agree with this. While @bkrodgers and I certainly have the ability to stand up some obnoxious, one-off application that does nothing other than go through the browser flow so some poor user can copy/paste a long, encoded string onto the CLI, there are plenty of people who may want to use Vault + IDP and don't have the level of skill or resources required to do so in a reliable, secure manner.

@jefferai can you provide any additional detail at this time in terms of what level of detail would be abstracted out to plugins? I'm envisioning a plugin would handle two specific calls like:

  • Admin configures the backend to use a plugin that supports auth code browser flow, username/password, out-of-band pasted tokens, or something else.
  • When a user auths, call to handle is sent to plugin and returns token, err (This line in current PR) (would be a no-op for pasted tokens)
  • Upon successful auth, second call to get group assignments is sent to plugin which could parse JWT, hit userinfo, or something else (This line in current PR) (would be a no-op if IDP doesn't support groups)
  • At this point, we have groups for an authenticated user, proceed as usual with local group/policy assignments, etc.

Is that in line with what you're thinking?

@jefferai

If it is the latter, it feels like scope creep entering Generic Auth territory

What is "Generic Auth territory"?

By this I mean: "an auth mechanism that takes a value from a client and returns a token back". I probably made my point badly. What I meant was: what do we define as the inputs to this backend? Only Tokens (my preference, but that would make the password auth not work)? Or general "credentials input" (passwords, tokens, authcodes)? Because if it is the latter, then it is unclear how this "pluggable OAuth2 backend" differs from a generic notion of an Auth Backend.

Vault doesn't have a way to display these pages. If you're doing this via textual means I'd be interested in that mechanism. I know the OAuth2 flows well but not the OIDC flows as much.

We were actually doing it for a simple OAuth2 flow, and are now using it for OIDC. You can check out how Kubernetes does it; it's very similar to this example code.

Basically:

  • the CLI tool prints a URL (or optionally auto-opens the default browser through xdg-open); the URL is the AuthCode URL of the OAuth2/OIDC provider, with the "state", the client_id, and a redirect_url=http://localhost:9090/auth/callback pre-filled.
  • the browser visits that URL, the provider renders a login page/two factor auth etc. The dance continues.
  • at some point the browser redirects back to the "Redirect URL" of the client_id, which brings the browser to the http://localhost:9090/auth/callback which is served by the CLI tool itself. The page rendered can be a simple textual one: "please close this window".
  • thus, the CLI tool now has the AuthCode and can perform an AuthCode to Token (Access or ID) exchange.
  • the CLI tool then sends the Token to the server for validation. For ID Tokens, the aud field is set to the client_id of the CLI and the Vault server checks that.

Neither Vault nor the CLI plugin needs to display any web pages here. This flow has worked incredibly well for our spatial CLI tool, and I believe it is much simpler to implement than doing it server side.
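For illustration, a minimal sketch of that local-callback piece in Go; the provider URL, client ID, and port are hypothetical placeholders, and the AuthCode-to-Token exchange is omitted:

package main

import (
    "fmt"
    "net/http"
)

func main() {
    // In practice the state should be generated randomly per login attempt.
    state := "random-state"
    authURL := "https://provider.example.com/authorize" +
        "?response_type=code&client_id=my-cli&state=" + state +
        "&redirect_uri=http://localhost:9090/auth/callback"
    fmt.Println("Open this URL in your browser:", authURL)

    codeCh := make(chan string, 1)
    http.HandleFunc("/auth/callback", func(w http.ResponseWriter, r *http.Request) {
        if r.URL.Query().Get("state") != state {
            http.Error(w, "state mismatch", http.StatusBadRequest)
            return
        }
        fmt.Fprintln(w, "Login complete; please close this window.")
        codeCh <- r.URL.Query().Get("code")
    })
    srv := &http.Server{Addr: "localhost:9090"}
    go srv.ListenAndServe()

    // The CLI now exchanges this AuthCode for a Token and sends it to Vault.
    code := <-codeCh
    fmt.Println("got AuthCode:", code)
    srv.Close() // the callback endpoint only lives for the one exchange
}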

Is this really where you want to go with Vault? I mean, I thought it would be an extensible piece of software with well-defined, "composable" but "service-oriented" interfaces. Do you think users would trust a dynamically loadable module system?

Doesn't a plugin system add extensibility and composability? Why should users not trust plugins that their administrator has authorized?

Apologies, I should have been clearer here. It's not about trust in the administrator; it's about whether there is a boundary between a 3rd party plugin and Vault "secrets" or not. In the case of a "remote API" there is; in the case of a dynamically loadable library, there isn't. Even though version 0.1 of a 3rd party plugin may be ok, it's possible that version 0.3 has a vulnerability that compromises Vault as a whole. The administrator wouldn't review the code of each one every time they update a version.

Since Vault is meant to be the lynchpin of our security system, I would strongly prefer there to not be dynamically loadable 3rd party code in my system. This is an off-topic discussion though, and ultimately it is up to you as maintainers of Vault where you want to take it.

at some point the browser redirects back to the "Redirect URL" of the client_id, which brings the browser to the http://localhost:9090/auth/callback which is served by the CLI tool itself.

Right now, Vault's CLI is a very thin client wrapper around Vault's HTTP API. Obviously I'll defer to @jefferai's judgement, but it would seem to be quite a substantial change to the design of the project to start serving HTTP requests from that same CLI. Also, it would then _only_ work when using the bundled CLI and not any other client integrating with Vault's HTTP API unless every client then takes the time to re-implement the same functionality.

No one is suggesting you _can't_ use a client that handles the redirect and then passes the token to the Vault server, just that the server should have the capability to handle the redirect (or other valid oauth2 flows like password) as well.

Since Vault is meant to be the lynchpin of our security system, I would strongly prefer there to not be dynamically loadable 3rd party code in my system.

@mwitkow I suggest taking a look at #2200 and hashicorp/go-plugin as @jefferai suggested. The plugins expose RPC functions running in a completely separate process. There's no dynamic linking or memory sharing. Popular plugins can be shipped built-in with Vault, and external plugins must be whitelisted by checksum by an admin before Vault will load them. I believe your concerns are unfounded.

Hi @mwitkow ,

Thanks for detailing the flow. I'm kind of torn here, for exactly the reason that @mikeokner specified; putting logic in the CLI means that any other Vault API client can't take advantage of it.

In fact you can imagine that the CLI login helper for this backend could actually take care of the entire flow from the user perspective in just the way you described, except with the actual exchange happening behind the scenes! The CLI helper gets an API client and could call the appropriate Vault endpoint, print out the URL for the user to open (or open it directly if possible), then wait until the flow is complete, finally returning the user's token in the end. But it would still then be a thin wrapper and any other API client or tool could implement the same thin flow.

I do also want to help ease your fears about the plugins. As @mikeokner said, it's a separate process communicating via RPC, but it's actually more complicated than that. We specifically do not want to dlopen, which is one reason we're continuing to use HashiCorp's go-plugin system rather than what's coming in Go 1.9; we see avoiding cgo as a very good thing (and accommodating database libraries that require cgo is actually a reason we are building the updated database support as plugins in the first place).

Any such plugin that is enabled requires the SHA256 and path to be configured by the administrator; any built-in/included plugin (which is automatically whitelisted) shipped with Vault is actually run by Vault invoking its own binary with the SHA256 and path calculated by Vault when it was originally launched (the path and SHA cannot change during the lifetime of the process without throwing errors).

Once the process is invoked, it is handed a Vault wrapping token; this token contains a unique generated TLS certificate/private key for it to use to talk to the original Vault process. Since it's a wrapping token it's obviously single use. This secures the communication between the plugin and Vault in a method quite similar to how you'd give a wrapping token to e.g. a container to allow it to get its Vault-generated cert.

In the future we'd like to actually create a different mode in go-plugin that maintains the same interfaces but runs the plugin in a goroutine, skipping the need to spawn a process, which would make this flow even better.

So to sum up:

1) Vault remains statically compiled, plugins can be static or dynamic
2) Vault launches a plugin with an RPC interface, no memory is shared outside of IPC
3) Communication between Vault and the plugin is TLS encrypted
4) Plugins require known paths and SHA values
5) Like Vault, plugins support the use of mlock when available.

Hopefully this makes you feel less concerned about the plugin system.

Edit: see 5) above, thanks @briankassouf

@jefferai can you provide any additional detail at this time in terms of what level of detail would be abstracted out to plugins? I'm envisioning a plugin would handle two specific calls like:

Rather than specific calls, here's an idea of the structs that might be in play:

type OIDCPluginConfig struct {
    // Opaque config values for a specific provider; e.g. for a
    // "generic" OIDC plugin that consumes an already-existing
    // OIDC token being submitted, this may contain items like
    // which claim to use for a unique user ID and which claim
    // to use for group memberships. For a Ping server this may
    // contain items like the API endpoint and the client id/secret.
    Config map[string]interface{}
}

type OIDCPluginLogin struct {
    // Opaque values, e.g. "id_token" or "password"; if a redirect
    // was used, this may contain the returned parameters
    LoginParams map[string]interface{}

    // If a redirect was required, this identifies the state value
    // that can be used to look up original parameters of the
    // request; the presence of this value can be used to
    // trigger appropriate behavior
    RedirectState string
}

// This contains standard values that can be returned from
// the plugin so that all shared backend logic can operate on
// known data members
type OIDCPluginResponse struct {
    // The unique user ID value
    User string

    // The claim that was used for users, for record-keeping/audit
    UserClaim string

    // The groups that the user is a part of
    Groups []string

    // The claim that was used for groups
    GroupsClaim string

    // If a redirect URL is required, the constructed URL
    RedirectURL *url.URL // from the net/url package
}

In a situation where you already have an OIDC token, you simply pass it in to the plugin, it's validated/parsed, and the user/groups to use are returned.

In a situation where you need to redirect, the login function can instead return a RedirectURL in the reply, which can be returned to the user; when the redirect happens, the state can be sent back to the backend and associated plugin, which can then validate the information (if it's an OIDC token) or fetch UserInfo/an OIDC token (if it's just a bearer token). The client can then retrieve the token. (Ignore the "how does the client do this" part for now as that involves a whole lot of feature enhancements to Vault's core, but they need to be made at some point anyways.) As discussed in the previous post, if the client is a Vault CLI helper, it can do this transparently so that all the user sees is a returned token as normal.

I think the proposal makes sense. In the short term we may need to deploy the code we submitted as a PR, as it sounds like this will be more involved and could be a month or two (or more) away from making it into a release. We're really chomping at the bit to use our oauth2 submission. But my intention would be to go back to the mainline code once you've got something that handles the resource owner flow server side in a similar way to what we submitted. We'd dump the mappings out of our oauth2 backend, remount the new one at the same path, and then load mappings into the new one.

Ideally that transition will be fairly transparent to the users. In fact, we've realized that we don't even need to distribute a custom CLI for this to work. vault auth -method=ldap -path=oauth2 will work just fine. So would vault auth -method=okta -path=oauth2 -username=bkrodgers or vault auth -method=userpass -path=oauth2 -username=bkrodgers, but I like LDAP's implementation better, since it will pull from the USER env var.

I wonder if it'd make sense to have the CLI for auth decouple "method" from the backend in question, and instead have "method" be the mechanism used to gather credentials and pass them to a backend (at any given path, for any given backend that expects those parameters). A lot of the current "methods" don't actually have anything to do with their named backends, and are in fact very similar to other existing ones.

With that in mind, you might only need auth "methods" in the CLI for userpass (Okta, LDAP, userpass, Radius, and OAuth2), token (Github, OIDC, and token), and cert. It looks like the other auth methods don't have CLI helpers at all and instead the docs suggest using vault write to auth, but they could be added if there was a need (I get why there probably isn't for those). Of course you'd leave aliases to the existing ones for backwards compatibility for at least a while, if not indefinitely, but going forward just maintain "methods" that are truly distinct.

@mikeokner @jefferai, thanks for clarifying. It wasn't clear from the code at first glance that there was an RPC boundary. May I suggest starting a /docs/proposals or /docs/internal with a file documenting the plugin system concepts? I think this will massively help in saving time to not have to explain the concepts in details to ignorant users such as myself :)

@jefferai regarding the proposed interfaces, they look ok. Although two things:

  • RedirectState string: I'm not sure why the client would need to pass this. I don't see any LoginResponse message to the client that the plugin could drive this value through. Can you elaborate?
  • The OIDC prefix: I would be wary of calling this OIDC if it will support extensions such as UserInfo. How about GenericOAuth2 with an OIDC plugin?

From our side, I'll probably take some time later this week to clean up my hacky implementation and we'll try it out in our staging environment as a stand-alone Auth Backend. Similarly to @bkrodgers we need to get the ball rolling and unblock our internal processes. I'd be happy to clean it up (add tests etc.) and submit it as a PR upstream after that. However I don't think I'm well positioned to generalise it to the form of the GenericOAuth2, since I lack the context that you guys have about a) the Vault plugin system b) the other use cases that it should support apart from Tokens (I'm not familiar for example with the requirements of Ping and other proprietary providers).

@mwitkow You can see info at https://github.com/hashicorp/go-plugin and there's a good talk about it at https://www.youtube.com/watch?v=SRvm3zQQc1Q

@bkrodgers @mwitkow @mikeokner I think there are a couple of moving parts with different schedules here, which is tricky. I think the thing to do in the short term (e.g. ideally this week) is land a design doc that captures the plugin system and describes the interfaces/data structures for it. Revamping the one up above should work fine.

As for the plugins themselves, #2200 hasn't landed yet and I don't want to have a bunch of rewrites if things there need changing; on the other hand I want to make sure anything done for this backend reuses code to the max extent possible, plus has good interfaces, so I will likely be pulling in @briankassouf to help out here in an advisory role. I get the impression though that it should really not be difficult to add the plugins. So that's basically factoring out the Ping-specific and OIDC-specific code into these plugins and leaving most of the rest alone.

The rest of it, for e.g. server-side flows, doesn't have to be done right now, as neither of you need it, although it would be good if we can figure out a reasonable API for the plugins that allows this future capability without much churn in the plugin interface.

So my guess is that since you both already have some implementation work, you'll want to get that going in dev but I don't think it'll be too bad to get it upstreamed over to the united backend and my hope is that it won't be tooooo long before that can be done!

Hi all,

Traffic died down -- if someone picks this up and updates the doc, do please leave a comment here so that we know it happened and can check!

+1

any movement on this?

+1 also very interested in this

+1 (use case: using Keycloak as OpenID provider)

@jefferai I finally had the time to clean up and add basic tests to the OIDC provider plugin that I had hanging around on my machine.

The reason for the delay is that we found a workaround to avoid relying on OIDC in Vault just yet, so we managed to push this back by 1-2 months.

Unfortunately, @jefferai, as I said above: I don't think I will have the time (or the knowledge) to work on this to generalize it to the fully pluggable oauth2 provider you envisioned.

I do think, however, that based on feedback from @zdoherty @pidah and @panga, a simple (yet still generic) OIDC auth provider will hit most use cases. As such I wouldn't mind following up and making sure that #2796 is a quality PR.

@mwitkow Thanks for your valuable contribution; it fits my requirements exactly.

I really liked the OIDC approach (OpenID Connect) rather than an OAuth2 flow; the latter is far more complex to configure.

I'm using Keycloak SSO as OIDC provider and the following Vault config:

vault auth-enable oidc
vault write auth/oidc/config \
    issuer_url="http://localhost:8080/auth/realms/master" \
    client_ids="testclient" \
    username_claim="preferred_username" \
    groups_claim="groups"
vault policy-write secret-policy secret-policy.hcl
vault write auth/oidc/groups/users policies=secret-policy
vault auth -method=oidc token=<bearer_access_token>

For the folks that want to test this change, I've pushed a Vault 0.7.2 Docker image with PR #2796 merged to Dockerhub as panga/vault:0.7.2-oidc (https://hub.docker.com/r/panga/vault/)

EDIT:
I'm using a fork (https://github.com/panga/vault/tree/oidc) with OIDC backend provided by @mwitkow for now.

@jefferai I understand that the reasons for closing #2796 were, as you cited in #2571, "In fact, the proliferation of backends that have come up in the last couple of months around this are exactly why we want to take the approach we took with the combined database backend."

However, I wanted to follow up here on what you said in #2796 :

Someone that wants to put in some minor effort to make it work the way we need (ref #2525) should get in contact.

If you're looking for an external contributor to implement a generic, pluggable OAuth2 backend, may I suggest closing this centi-thread and filing a new issue marked "Help Needed", with a clear description of what exactly needs implementing, in what incremental stages, and what the minimum feature set for acceptance is? Or maybe phrase it in a separate design doc?

I've been following this discussion since the very beginning, and it is still unclear exactly what you guys are looking for. I think spending some time to clarify this, and adding pointers to "piggy-backable" code for plugins in Vault, would go a very long way if you're looking for an external contributor to build this generic pluggable backend.

For anyone else interested, @pidah @zdoherty @panga, we'll be trying out #2796 in testing, and if that proves successful we'll probably be maintaining a public fork of Vault with #2796 in it until the generic OAuth2 backend becomes a substitute.

@mwitkow Nobody has indicated to this point that what we are looking for is unclear. But I'll try to clarify:

There are a lot of relatively incompatible ways to get OIDC information. Sometimes you already have an OIDC token. Sometimes you have credentials that can be used to get an OIDC token; or, in other cases such as Ping, you won't get an OIDC token for resource owner password workflows but can fetch the equivalent UserInfo after the fact. Sometimes it requires a full OAuth flow first, at which point either an OIDC token can be fetched or similar user info can be gleaned.

It's also not clear from the examples I've seen that even given an OIDC token you can always rely on the same claims to specify group membership.

There are also potentially different ways to authenticate tokens, including ingesting the associated public keys.

If you look at this from a higher level perspective, what it really looks like is that you need the following:

  1. User/Group policy mappings
  2. A way to provide login information, which may be an OIDC token itself, and may be some other set of credentials
  3. A way to validate the result, e.g. signature verification
  4. A way to transfer the result -- e.g. UserInfo data -- to the user/group mappings to generate the final token

With the database backends early on people would just copy the code from one backend to the next and make the changes necessary to talk to e.g. mysql instead of postgresql. Bugs were copied too, and feature disparity grew over time. The way we solved it was a common backend that handled all non-specific functions and plugins conforming to an interface to handle the specific bits. The amount of copypasta has drastically decreased; it's easier to maintain, easier to keep feature parity, and easier not to have bugs proliferate.

In the above list, really only items 2 and 3 are likely to differ significantly between sources of the user information. If we adopt a plugin system similar to the database backend, we can have all of the code for items 1 and 4 be the same. For a normal OIDC case, the plugin would be configured with a public key (or the URL to fetch it from); it would accept the JWT, validate the JWT, parse it, and return user info -- things it needs to do _anyways_, just behind a plugin interface. For Ping, the plugin could take in the user credentials, perform whatever calls it needs, and return the user info. We already merged an Okta backend, but Okta also has a way to fetch user info, and it'd be great if Okta could be supported here too.
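For the normal OIDC case described above, here is a hedged sketch of what such a plugin's verification step could look like, using a recent version of github.com/coreos/go-oidc (mentioned earlier in this thread). The issuer URL, client ID, and "groups" claim name are assumptions, not anything the backend would fix:

package oidcplugin

import (
    "context"

    oidc "github.com/coreos/go-oidc"
)

// verifyIDToken covers the "configured with a URL to fetch the key from"
// case: discovery fetches the issuer's signing keys, Verify checks the
// signature, issuer, audience, and expiry, and the groups claim is read
// out of the validated token.
func verifyIDToken(ctx context.Context, rawIDToken string) (user string, groups []string, err error) {
    provider, err := oidc.NewProvider(ctx, "https://issuer.example.com")
    if err != nil {
        return "", nil, err
    }
    verifier := provider.Verifier(&oidc.Config{ClientID: "vault"})

    idToken, err := verifier.Verify(ctx, rawIDToken)
    if err != nil {
        return "", nil, err
    }

    // Pull the configured groups claim out of the validated token.
    var claims struct {
        Groups []string `json:"groups"`
    }
    if err := idToken.Claims(&claims); err != nil {
        return "", nil, err
    }
    return idToken.Subject, claims.Groups, nil
}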

There is a little extra work required to use a plugin interface, but common plugin functionality is already done (for the database backend).

I think the confusion with OAuth is that I was saying that such a mechanism could support OAuth eventually if other bits were worked on in Vault's core. But that's neither planned nor requested and I don't in any way see it as being necessary for this PR, or for #2796 or #2571.

However, I would like to see a plugin abstraction, even if the only plugin initially is a pure OIDC plugin. I think it has real value going forward and will be far easier to maintain by the Vault team.

If we adopt a plugin system similar to the database backend, we can have all of the code for items 1 and 4 be the same.

I think 4 would differ -- some plugins will need to call the userinfo endpoint, others can get it from an ID token, and still others might call some other arbitrary service to get that info. Or is that part of step 2 or 3? In other words, are you only referring to the step of a plugin saying "here's the user's info, wherever I got it from" and looking up the policies in the map from step 1?

One other question on the plugin implementation. Will plugins have a good way to be able to use Vault to store configuration info? In our original implementation as we submitted it for direct inclusion, we stored things in the mounted backend's config map, like the Ping URLs for both tokens and userinfo, a client ID, client secret, and the name of the attribute in userinfo to use for the groups. Will plugins have a way to store config data for each mounted instance of an auth plugin? Or if they have to deal with it themselves, will they at least be passed the mount info so they can key off that to support being mounted more than once?

I think the confusion with OAuth is that I was saying that such a mechanism could support OAuth eventually if other bits were worked on in Vault's core. But that's neither planned nor requested and I don't in any way see it as being necessary for this PR, or for #2796 or #2571.

Now I'm slightly confused...OAuth (resource owner flow) is definitely requested, but we're fine implementing it ourselves as a plugin once that's supported. We might even open source it. Other parts of your post make it sound like the plugin system will support us writing such a plugin though, so I'm not quite sure if this comment is anything I should be concerned with.

I think 4 would differ -- some plugins will need to call the userinfo endpoint, others can get it from an ID token, and still others might call some other arbitrary service to get that info. Or is that part of step 2 or 3? In other words, are you only referring to the step of a plugin saying "here's the user's info, wherever I got it from" and looking up the policies in the map from step 1?

Yes. My assertion is that the input to a plugin (into step 2) should be the client input (credentials, which may just simply be an OIDC token) and the output (after step 3) should be a defined data type that indicates what configured users/groups the authenticating user should inherit policies for. For OIDC or anything that can return an OIDC UserInfo-compatible struct this would likely be a fairly direct mapping.

One other question on the plugin implementation. Will plugins have a good way to be able to use Vault to store configuration info?

Yes. Normal storage semantics, just piped over a TLS-encrypted connection. (However, to be perfectly transparent, for built-in plugins they would actually be launched directly as objects in-memory. This is what the database backend does. But that's an implementation detail.)
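As a rough sketch of what those storage semantics look like from a plugin's side (the config struct and key name here are made up, and older framework versions did not take a context):

package oidcplugin

import (
    "context"
    "encoding/json"

    "github.com/hashicorp/vault/logical"
)

// Hypothetical per-mount configuration for an OIDC-style plugin.
type oidcConfig struct {
    IssuerURL   string `json:"issuer_url"`
    UserClaim   string `json:"user_claim"`
    GroupsClaim string `json:"groups_claim"`
}

// saveConfig persists the config through the storage the backend is handed.
// Each mount gets its own storage view, so the same plugin can be mounted
// more than once without entries colliding.
func saveConfig(ctx context.Context, s logical.Storage, cfg *oidcConfig) error {
    buf, err := json.Marshal(cfg)
    if err != nil {
        return err
    }
    return s.Put(ctx, &logical.StorageEntry{Key: "config", Value: buf})
}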

Now I'm slightly confused...OAuth (resource owner flow) is definitely requested, but we're fine implementing it ourselves as a plugin once that's supported.

Sorry, what I should have said (rather than sowing more confusion) was other OAuth mechanisms could be supported eventually, such as the three-legged server side flow, where the client is given a URL to paste in their browser, after which they are redirected to Vault. That would require other plumbing through Vault's core and http layers. But pure OIDC and what you need for Ping would not require any of that and I don't see any reason they would have to be deferred. That's just implementation against a slightly different interface than exists now.

OK. That is a really long thread, and my understanding of OIDC has increased as a result. I am looking at this for the same reason as a few others (I want to use KeyCloak as my auth provider, which defaults to OIDC).
While I doubt I can help with the coding on this, I am more than happy to test and/or provide documentation once we get round to a solution. BTW - I like the idea of taking the same approach as you have with databases.
Happy to contribute in any way possible.

@Hobbit71 with backend plugins coming out in 0.8, that might be a better way to go than the plugin system specific to OIDC/OAuth I mentioned above.

That is good to hear. Out of interest, how far away is 0.8? Not looking for anything firm here - is it weeks, months or quarters away?

Like I say, when we have that, happy to be involved. While I am not a Go coder, I now know some guys here (where I work) who are and we would love to contribute back to a great tool.

Are there any updates on this? I would love to be able to use OIDC as auth backend in vault.

There are several plugins (GCP, Kubernetes) using OIDC for auth.

@jefferai

There are several plugins (GCP, Kubernetes) using OIDC for auth.

Does this mean it's possible to use something like Dex? Or are you suggesting there are plugins to adapt from if someone wanted to implement generic OIDC?

@kamalmarhubi I would love to see "generic OIDC" but it may be relatively impossible given that most of the implementations we've seen rely heavily on custom metadata. But, those plugins are good starting points for someone wanting to make a plugin for their own needs.

The specs discussed in this thread are at times a bit obtuse, but it is sad to see this still unresolved nearly a year in. The plugin system seems like it could support this entirely outside the Vault "batteries included" set.

@jefferai I do not understand the statement that a generic OIDC plugin is "relatively impossible" if it is constrained to a well-formed JWT token. When you say "custom metadata", is this metadata that is put onto the Vault token for some later use, or are you saying something about custom claims in the JWT?

A point of confusion throughout this thread is that OAuth2 and OIDC are different in their intent. OAuth2 is about authorization grants, while OIDC is only about authentication. OIDC is a much more natural fit for Vault auth. If Vault secrets are considered resources, then the most natural fit of OAuth2 for Vault would be as a resource server, and that gets all entangled in client-ids and flows and really puts Vault in the middle of an OAuth2 system, not as a peripheral participant (and it starts to feel very awkward to have things like login flows in the CLI or backend).

But the original goal was supporting a simple flow: you have, at the Vault client, an OIDC token (Vault shouldn't care how you got it), and you want to log in to Vault with that. The Vault plugin supports roles that bind certain signers/issuers and JWT claims to Vault policy. This seems very straightforward and worth supporting, and it seems to be the direction of #2796 (thank you @mwitkow for the persistence).

Most of the imperative to optimally combine OIDC with any/all OAuth2 support from early in this thread seems to be dissolved by the flexibility of the plug-in system replacing builtin backends.

Is a generic OIDC plugin still of interest, or should this just be hosted outside of the Vault project now as just a Vault plugin?

Hi Preston,

I don't think it's so much a point of confusion around OAuth2/OIDC as the fact that there are some systems (I forget which) that won't grant OIDC tokens initially but will allow fetching such a token once you already have an OAuth2 token and people would like those to be supported too.

Anyways, OIDC support is totally straightforward in theory, but there are of course complicating factors. One is that many of the tokens people might want to use are JWTs but don't include many of the optional claims you'd really want for security, such as issue time or expiration time (I know this was an issue with the GCP and/or Kubernetes plugins). Could we just ignore that and let the user beware? Sure, but then that adds flags to control such a bypass.

Another is that most of the OIDC providers we've seen so far, in the plugins that have been written, put important information in custom claims. So you'd need a way to define which claims are important for matching against roles, and what types those are -- a string, a list, etc.

Sometimes you can issue a call to a server to perform further token validation or get important parameters needed for matching (like the Kubernetes plugin does), but then you have to define how the call is made, what the inputs and outputs are, and what to do with them.

So it's not really a lack of desire to have a generic OIDC auth mechanism, it's that when it comes down to it, we haven't seen, in the specific examples that have been brought to our attention from real-world systems, a complete workflow that is sufficient from purely the OIDC token itself. Instead for GCP and Kubernetes it's been more like OIDC++ in ways that end up being specific to any given system.

If there is sufficient demand for a pure, token-only OIDC mechanism that requires no external validation and has standard claims, I'm totally happy to have it. But I'd want to actually have an understanding of the real-world systems for which it's sufficient, because if it ends up being the case that we put an OIDC backend in and it's unusable for the majority of real-world use-cases then we end up either in a position where we need to keep frankencoding the backend to try to handle more and more special cases, or we're better off creating simple plugins specific to these systems.

@jefferai you mentioned a lack of possible use cases.

Mine is simple: I would like to be able to use Vault with either Dex or KeyCloak (preferably KeyCloak).

Workflow would be like this:

  1. From cli users executes 'vault login -method=AWESOME-OIDC-METHODE'
  2. Some url is displayed
  3. Users opens the url and login
  4. a) if the browser is on the same host as Vault, the app will redirect them to 127.0.0.1 with the appropriate tokens to confirm they were logged in successfully
    b) if not, the user will copy a PIN and enter it into the CLI

Besides KeyCloak/Dex support, GitLab.com support would be nice.

What are the claims used in each of those cases, what types are they, and how would they map to Vault identities and policies?

I think it would map to http://www.keycloak.org/docs/3.3/authorization_services/topics/policy/role-policy.html

Keycloak's client_id would be mapped to an identity ("role", I guess) in Vault. Adding policies to this client would enable policies in Vault.

Just had a talk with @chrishoffman about this today, as he's writing an auth backend that uses OIDC -- and of course, nothing useful is in non-proprietary extensions, and even worse, there's very little useful at all that doesn't then require taking a single identifying value and making further API calls.

However, we were talking about it in the context of Identity, which is increasingly how we're moving towards people assigning policy, via Identity groups, and realized that a dead simple approach could work well as a "generic" OIDC plugin. (If you don't know much about Vault's Identity system, take a look at https://www.vaultproject.io/docs/secrets/identity/index.html).

The configuration would consist only of:

  • the public key to use for verifying
  • the claim to use as an identity alias, which would need to be a string
  • (optional) the claim to use as identity groups, which would need to be an array of strings

Identity's API is designed to be scripted, so you can easily sync some set of servers or users into Identity and have them auth and be identified as that user every time they auth. So long as you have a known unique identifier that can be validated, you can just do all policy management via Identity, without having to deal with complex mappings of groups/roles/policies/claims (many of which would require further API calls) in the plugin. If you can't get group info out of the JWT, just sync group info into Identity from your source of truth and assign uniquely-identified users to them. Anything much more complicated is probably better served by a specific plugin anyways.

Interested in feedback, and if someone wanted to write such a plugin, I'd be interested in having it in an official repo and pulling it upstream.

Hi @jefferai , myself and members of my team would be interested in this very much.
I got questions, tho.

  1. It sounds like Chris Hoffman is working on an OIDC auth backend that you intend to live alongside the generic OIDC plugin that is the subject of this issue. Is that correct? If so, I don't get what the difference between the two is.

  2. Do you want the public key endpoint to be specific to this functionality? I was surprised to find there's no "generic" endpoint for giving Vault my public key for verification of whatever payload later on.

  3. Would you like to use all 3 - registered claim names, public claim names, and private claim names?
    Would you like any verification of the contents of the JWT at all, or is it basically: as long as the hash is unique and we can map it to a recorded Identity, we're good?

  4. What OIDC flow are you thinking about? Implicit would be the easiest to implement, in my opinion.

As I'm re-reading your comment, it occurs to me you just want to map a unique claim hash as an identifier of the user accessing Vault's endpoint.

Interactions then would be as follows.

  1. Upload public key to Vault.
  2. Register an identity, pointing to the key it has to use.
  3. Set identity on some endpoint (say /v1/aws/creds/) by means of policy. Set the identity entity-alias for that identity to match a claim, given https://tools.ietf.org/html/rfc7519#section-4.1 claims as parameters.
  4. Set /v1/aws/creds/ auth to oidc.
  5. Now, accessing /v1/aws/creds/ will require an Authorization: Bearer header carrying a JWT verifiable with the appropriate public key.
  6. After such a request is received, validation of the JWT has to take place.

Please let me know what you think.

@ror6ax I'm a bit confused by all the AWS stuff in your example, especially step 4. I think it looks more like this though:

  1. Upload validating key to mount config
  2. Ensure identities are created that match some unique claim in each JWT and whose identifier is configured in the mount
  3. Users log in by passing in their JWT to Vault via POST/PUT
  4. The mount validates the signature on the JWT, pulls out the configured claim value, and sends it back to Vault, which matches it to an entity and generates a Vault token
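A hedged sketch of step 4's verification piece, using the commonly used github.com/dgrijalva/jwt-go library; the claim name and the pre-loaded RSA public key are illustrative assumptions, not part of the proposal:

package jwtauth

import (
    "crypto/rsa"
    "fmt"

    jwt "github.com/dgrijalva/jwt-go"
)

// entityAlias verifies the JWT offline against a pre-configured RSA public
// key (step 1's validating key) and pulls out the claim configured as the
// entity alias (step 2), e.g. "sub".
func entityAlias(raw string, pub *rsa.PublicKey, claim string) (string, error) {
    token, err := jwt.Parse(raw, func(t *jwt.Token) (interface{}, error) {
        // Refuse anything but RSA signatures (guards against alg=none).
        if _, ok := t.Method.(*jwt.SigningMethodRSA); !ok {
            return nil, fmt.Errorf("unexpected signing method %v", t.Header["alg"])
        }
        return pub, nil
    })
    if err != nil {
        return "", err // covers bad signatures and expired tokens
    }
    claims, ok := token.Claims.(jwt.MapClaims)
    if !ok || !token.Valid {
        return "", fmt.Errorf("invalid token")
    }
    alias, ok := claims[claim].(string)
    if !ok {
        return "", fmt.Errorf("claim %q missing or not a string", claim)
    }
    return alias, nil
}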

What's the current state of the world here? I read through the whole thread but it's still not clear. Is this being actively worked on?

#2796 looks like exactly what we need as well. I saw there's a year-old fork, but I was wondering if this (or something like it) was ever implemented as a plugin?

I don't know of anyone actively working on the proposed solution.

This thread / issue saddens me a little. All I and I believe several others here want is to use Keycloak as a Vault auth method. Apparently that's not achievable for the foreseeable future.

@mvdkleijn I believe it's totally achievable, just requires someone to do it.

It's very strange to me: this is a highly demanded feature for the wider community, but at the same time I see that every small initiative got "killed", while in the same period we got Azure auth:

https://github.com/hashicorp/vault/pull/2796
https://github.com/hashicorp/vault/pull/2571
https://github.com/hashicorp/vault/issues/644
https://github.com/hashicorp/vault/issues/1986
https://github.com/hashicorp/vault/pull/3005

Nothing has been "killed", whatever that means. But nobody has written a generic enough provider as outlined at https://github.com/hashicorp/vault/issues/2525#issuecomment-369718031

The fact that Azure uses JWTs is beside the point. The majority of the logic in that plugin has nothing to do with JWT verification but rather with checks and binds against Azure-specific APIs.

In fact it's an excellent example of why a generic OIDC auth plugin has been hard to figure out. Most things you'd actually use to perform identification are behind custom claims or custom APIs that must be called and parsed. My guess is that that's why nobody has implemented a generic plugin as outlined in that comment.

Someone pointed me towards https://github.com/immutability-io/jwt-auth. Looks like it fits the mold quite closely to how I said a plugin would need to be designed for the generic case, so we may reach out to the author about pulling it in.

Hi all,

The above plugin, https://github.com/immutability-io/jwt-auth, is my handiwork. I am more than amenable to upstreaming it, but I recently added a feature that may complicate matters.

The feature is to facilitate (among other things) delegated authentication using a rather opinionated mechanism. Please take a look at this mechanism (authentication with a JWT via a trustee).

I can refactor the plugin to remove this feature, but I have a few real-world use cases that will use it, so I will have to figure out how to accommodate the functionality in a less tightly coupled fashion.

Meanwhile you can use https://github.com/AnchorFree/vault-plugin-oidc (we've been using it in production for some time already, and it works).

We use jwt-auth in production too, for what it's worth. We modeled it after the GitHub plugin, mapping claims (configurable) to policies.

jwt-auth also supports OAuth password grants and refreshes, so that the token can be used more effectively via renewal.

The trustee stuff can be used when a CI/CD pipeline wants to act on behalf of a user. IP constraints can be used alternatively.

@onorua Any interest in upstreaming?

@mwitkow I believe https://github.com/AnchorFree/vault-plugin-oidc is actually basically your code, just modernized. Are you still making use of it?

@jefferai sure, we can work on upstreaming it. I saw that the PR, along with this ticket, had been dead for more than a year, and decided to make it work for us.
I would love to cooperate with @mwitkow or do it on our own; I just want this problem to be solved :)

Initial implementation of an official plugin is at https://github.com/hashicorp/vault-plugin-auth-jwt/pull/1

It handles both OIDC discovery and offline JWT validation workflows.

This is completely untested (by which I mean, literally, I made it compile successfully and pushed it up), but if anyone would like to comment on features and/or provide testing before I write some tests for it, feel free!

I hope to get this officially in for 0.11.
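
For reference, those two workflows correspond to two ways of configuring the mount. A sketch using the config parameter names from the plugin repo (oidc_discovery_url and jwt_validation_pubkeys are mutually exclusive):

    # OIDC discovery: signing keys are fetched from the provider's
    # .well-known discovery endpoint
    vault write auth/jwt/config \
        oidc_discovery_url="https://example.auth0.com/" \
        bound_issuer="https://example.auth0.com/"

    # Offline validation: pin the signing public key(s) directly,
    # with no network calls to the provider
    vault write auth/jwt/config \
        jwt_validation_pubkeys=@signer-pubkey.pem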

The plugin now has full tests for both OIDC discovery endpoint based and offline verification based workflows. It has been merged into the repo and is now included in vault master. As such, closing this.

@jefferai I just wanted to report that the 0.10.4 OIDC flow works for me (I'm using Auth0).

It was very hard to follow the documentation at https://www.vaultproject.io/docs/auth/jwt.html
The examples given didn't map well to any of the Auth0 concepts, and I ended up having to read through https://www.vaultproject.io/api/auth/jwt/index.html thoroughly to work out what was being mapped to what.

I think it could benefit from better examples for specific services like Auth0.

That being said, I was disappointed to see that there was no Vault UI login flow for this. Do you know when the improvements to the Vault UI will land to allow logging in with a given JWT token?

Also, it would be nice if a mapping could be set up so that, instead of saying this JWT "role" gets X policy, it could parse the "groups_claim" and assign a set of policies, very similar to how the LDAP login flow works.

Thanks and would love to hear your thoughts.

All my OIDC testing was done with Auth0 so it should have just worked with something similar to the OIDC discovery URL that is in those docs. Feel free to PR examples into the docs.

I thought JWT made it into the UI. @meirish can you comment?

You can do group policy mapping with Identity (https://www.vaultproject.io/docs/secrets/identity/index.html). It's a bit confusing right now and we need to shore up the documentation, but basically: set up an external identity group with an alias mapping to the group from Auth0 (sketched below). You'll have to do this once for each of the Auth0 groups, but after that membership will happen automatically as people log in, so you can configure policies there.

Overall, having a central place to do this, rather than an implementation of user/group mapping in each plugin, makes every plugin significantly less complicated to write and maintain, and makes the centrally managed groups more useful for other features, since we can check user/group membership within Vault's core.
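
A rough sketch of that setup, assuming a jwt mount and an Auth0 group named "engineering"; the alias name must match the group name carried in the token's groups claim, and the mount accessor value is a placeholder you'd read from `vault auth list`:

    # External identity group carrying the policies
    vault write identity/group name="engineering" type="external" \
        policies="eng-policy"

    # Link the Auth0 group name to that identity group via the jwt
    # mount's accessor; canonical_id is the group id returned above
    vault write identity/group-alias name="engineering" \
        mount_accessor="auth_jwt_xxxxxxxx" \
        canonical_id="<group-id>"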

Oh, there's a PR of a guide: https://github.com/hashicorp/vault/pull/4968 -- still being reviewed, but might be useful.

I was able to authenticate against a Keycloak OIDC provider, but I'm having issues configuring the groups_claim parameter for a role:

https://github.com/hashicorp/vault-plugin-auth-jwt/issues/9

If I omit it (despite it being marked required in the documentation), then I can get a Vault token.
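
For context, a sketch of the role configuration in question; the claim names here are illustrative Keycloak-style defaults, not taken from that issue:

    vault write auth/jwt/role/demo \
        bound_audiences="vault" \
        user_claim="preferred_username" \
        groups_claim="groups" \
        policies="default"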

@avoidik Please don't ping on two tickets. One is enough.

@avoidik How did you manage to use Keycloak as an OIDC provider?

I'm trying to use Keycloak as an OIDC provider, but I'm getting lost in the documentation. Could you explain how you managed to do it?

Thank you very much.
