https://github.com/dotnet/corefx/issues/23177#issuecomment-332995338
```c#
namespace System.Net.Security
{
public partial class SslStream
{
public Task AuthenticateAsServerAsync(SslServerAuthenticationOptions options, CancellationToken cancellation) { }
public Task AuthenticateAsClientAsync(SslClientAuthenticationOptions options, CancellationToken cancellation) { }
public SslApplicationProtocol NegotiatedApplicationProtocol { get; }
}
public class SslServerAuthenticationOptions
{
public bool AllowRenegotiation { get; set; }
public X509Certificate ServerCertificate { get; set; }
public bool ClientCertificateRequired { get; set; }
public SslProtocols EnabledSslProtocols { get; set; }
public X509RevocationMode CheckCertificateRevocation { get; set; }
public IList<SslApplicationProtocol> ApplicationProtocols { get; set; }
public RemoteCertificateValidationCallback RemoteCertificateValidationCallback { get; set; }
public EncryptionPolicy EncryptionPolicy { get; set; }
}
public class SslClientAuthenticationOptions
{
public bool AllowRenegotiation { get; set; }
public string TargetHost { get; set; }
public X509CertificateCollection ClientCertificates { get; set; }
public LocalCertificateSelectionCallback LocalCertificateSelectionCallback { get; set; }
public SslProtocols EnabledSslProtocols { get; set; }
public X509RevocationMode CheckCertificateRevocation { get; set; }
public IList<SslApplicationProtocol> ApplicationProtocols { get; set; }
public RemoteCertificateValidationCallback RemoteCertificateValidationCallback { get; set; }
public EncryptionPolicy EncryptionPolicy { get; set; }
}
public struct SslApplicationProtocol : IEquatable<SslApplicationProtocol>
{
public static readonly SslApplicationProtocol Http2;
public static readonly SslApplicationProtocol Http11;
// Adding other IANA values, is left based on customer input.
public SslApplicationProtocol(byte[] protocol) { }
public SslApplicationProtocol(string protocol) { }
public ReadOnlyMemory<byte> Protocol { get; }
public bool Equals(SslApplicationProtocol other) { }
public override bool Equals(object obj) { }
public override int GetHashCode() { }
public override string ToString() { }
public static bool operator ==(SslApplicationProtocol left, SslApplicationProtocol right) { }
public static bool operator !=(SslApplicationProtocol left, SslApplicationProtocol right) { }
}
}
```
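A minimal usage sketch of the surface above (server side); `serverCert`, `innerStream`, and `token` are assumed to come from the hosting code:
```c#
var options = new SslServerAuthenticationOptions
{
    ServerCertificate = serverCert,               // assumed X509Certificate supplied by the host
    EnabledSslProtocols = SslProtocols.Tls12,
    ApplicationProtocols = new List<SslApplicationProtocol>
    {
        SslApplicationProtocol.Http2,
        SslApplicationProtocol.Http11
    }
};

using (var sslStream = new SslStream(innerStream))
{
    await sslStream.AuthenticateAsServerAsync(options, token);
    SslApplicationProtocol negotiated = sslStream.NegotiatedApplicationProtocol;
}
```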
# Previous Proposal
Change log:
* Updated = Removed results object to keep new objects down
* Updated = Meeting notes
* Updated = Removed string overload as there is no clear way in code to tell the user that it's utf-8 and only adds a potential trap
* Updated = Put that string overload back under protest, but decision was made in a review meeting
References dotnet/runtime#23107
## Rationale
Server Name Indication (SNI) and Application Layer Protocol Negotiation (ALPN) are two TLS extensions that are currently missing from SslStream. They are needed urgently to support HTTP/2 (ALPN) and the ability to run Kestrel as an edge server (SNI). There are also many other customer applications that require this, including clients being able to use HTTP/2 (the managed HttpClient has an open issue for HTTP/2 support).
These have been outstanding for a long time, for a number of reasons. One major reason is that any change increases the overloading of the Authenticate methods, which have become unwieldy and brittle when new options are added.
More options will come with other TLS extensions that are available now (max fragment size for memory-constrained clients, for instance), so having a mechanism to add them without major API changes seems like a sensible change.
## Proposed API Change
Originally I suggested overloading the Authenticate methods; however, the discussion in dotnet/runtime#23107 changed my mind. Thanks to @Tratcher for this concept.
``` csharp
public partial class SslStream
{
public Task AuthenticateAsServerAsync(SslServerAuthenticationOptions options, CancellationToken cancellation) { }
public Task AuthenticateAsClientAsync(SslClientAuthenticationOptions options, CancellationToken cancellation) { }
public SslApplicationProtocol NegotiatedApplicationProtocol {get;}
}
public class SslServerAuthenticationOptions
{
public bool AllowRenegotiation { get; set; }
public X509Certificate ServerCertificate { get; set; }
public bool ClientCertificateRequired { get; set; }
public SslProtocols EnabledSslProtocols { get; set; }
public X509RevocationMode CheckCertificateRevocation { get; set; }
public IList<SslApplicationProtocol> ApplicationProtocols { get; set; }
public RemoteCertificateValidationCallback RemoteCertificateValidationCallback { get; set; }
public EncryptionPolicy EncryptionPolicy { get; set; }
}
public class SslClientAuthenticationOptions
{
public bool AllowRenegotiation { get; set; }
public string TargetHost { get; set; }
public X509CertificateCollection ClientCertificates { get; set; }
public LocalCertificateSelectionCallback LocalCertificateSelectionCallback { get; set; }
public SslProtocols EnabledSslProtocols { get; set; }
public X509RevocationMode CheckCertificateRevocation { get; set; }
public IList<SslApplicationProtocol> ApplicationProtocols { get; set; }
public RemoteCertificateValidationCallback RemoteCertificateValidationCallback { get; set; }
public EncryptionPolicy EncryptionPolicy { get; set; }
}
public struct SslApplicationProtocol : IEquatable<SslApplicationProtocol>
{
public static readonly SslApplicationProtocol Http2;
public static readonly SslApplicationProtocol Http11;
// Adding other IANA values, is left based on customer input.
public SslApplicationProtocol(byte[] protocol) { }
public SslApplicationProtocol(string protocol) { } // assumes utf-8
public ReadOnlyMemory<byte> Protocol { get; }
public bool Equals(SslApplicationProtocol other) { }
public override bool Equals(object obj) { }
public override int GetHashCode() { }
public override string ToString() { } // For debugger. utf-8 or a string representation of the raw bytes. E.g. "0x68 0x74 0x74 0x70 0x2f 0x31 0x2e 0x31"
public static bool operator ==(SslApplicationProtocol left, SslApplicationProtocol right) { }
public static bool operator !=(SslApplicationProtocol left, SslApplicationProtocol right) { }
}
```
There are now a number of parameters that could be modified while a connection is being created. One of these is the certificate dictionary. If the consumer changes these by accidentally reusing the config options for a differently configured connection, you could run into a security issue. One option is to have a builder but this is frowned upon as an API pattern and instead mostly used in higher level libraries/frameworks (ASP.NET for instance). This is solvable via taking an internal snapshot of the options but will need to be considered in any threat modelling.
```c#
var options = new SslServerAuthenticationOptions()
{
    EnabledSslProtocols = SslProtocols.Tls12 | SslProtocols.Tls11,
    ApplicationProtocols = new List<SslApplicationProtocol>()
    {
        SslApplicationProtocol.Http2,
        SslApplicationProtocol.Http11
    }
};
var secureStream = new SslStream(innerStream);
await secureStream.AuthenticateAsServerAsync(options, token);
```
* dotnet/runtime#17677 SNI
* dotnet/runtime#15813 ALPN
* dotnet/runtime#23107 Previous API proposal
* RFC 7301 ALPN
* RFC 3546 SNI and Max fragment
* IANA ALPN Protocol Ids
cc @geoffkizer @Priya91 @wfurt
@Drawaes What is the motivation behind adding an options bag separately? This doesn't work well with the current API design in SslStream. Instead of introducing so many new types, I think exposing overloads of the AuthenticateAs[Client|Server]Async methods that take the application protocols callers would like to handshake for will be simple and get the job done.
Because you also need another overload (or overloads) for SNI, so now you have a larger matrix. The referenced previous issue dotnet/runtime#23107 originally went this route, but the number of overloads was becoming excessive. There are more extensions that have yet to manifest or be considered, both with TLS 1.2 and with TLS 1.3 around the corner.
Consider ZeroRtt data; that should be surfaced as an API, which will need yet another overload. I guess the question is: how many overloads is too many?
I would be happy for the return types to be dropped and maybe the client and server args to be merged (not the best, but hey), but just adding protocol ids doesn't cover SNI. It merely kicks the can a little down the road. It also makes things difficult for end users; a raw protocol list is a bit unfriendly without any helper methods.
Reading through the RFCs, this is the proposal that I have,
```c#
namespace System.Net.Security
{
public partial struct TlsExtension
{
public TlsExtension(TlsExtensionType extensionType, Span<byte> extensionData) { throw null; }
public TlsExtensionType ExtensionType { get => throw null; }
public Span<byte> ExtensionData { get => throw null; }
}
public enum TlsExtensionType
{
ServerName = 0,
Alpn = 16
}
public partial class SslStream : AuthenticatedStream
{
public virtual void AuthenticateAsClient(string targetHost, System.Security.Cryptography.X509Certificates.X509CertificateCollection clientCertificates, System.Security.Authentication.SslProtocols enabledSslProtocols, bool checkCertificateRevocation, IList<TlsExtension> tlsExtensions) { throw null; }
public virtual void AuthenticateAsServer(System.Security.Cryptography.X509Certificates.X509Certificate serverCertificate, bool clientCertificateRequired, System.Security.Authentication.SslProtocols enabledSslProtocols, bool checkCertificateRevocation, IList<TlsExtension> tlsExtensions) { throw null; }
public virtual Task AuthenticateAsClientAsync(string targetHost, System.Security.Cryptography.X509Certificates.X509CertificateCollection clientCertificates, System.Security.Authentication.SslProtocols enabledSslProtocols, bool checkCertificateRevocation, IList<TlsExtension> tlsExtensions) { throw null; }
public virtual Task AuthenticateAsServerAsync(System.Security.Cryptography.X509Certificates.X509Certificate serverCertificate, bool clientCertificateRequired, System.Security.Authentication.SslProtocols enabledSslProtocols, bool checkCertificateRevocation, IList<TlsExtension> tlsExtensions) { throw null; }
}
}
```
For the ExtensionData, we can provide structure to it by maybe having static methods on TlsExtension like,
```c#
public partial struct TlsExtension
{
public static Span<byte> GetAlpnExtensionData(IList<string> protocols) { throw null; }
public static Span<byte> GetServerNameExtensionData(IList<string> serverNames) { throw null; }
}
```
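For a sense of the wire format these helpers would produce: per RFC 7301, the ALPN extension_data is a ProtocolNameList, i.e. a 2-byte list length followed by 1-byte length-prefixed protocol names. A rough, illustrative sketch (returning byte[] rather than Span<byte> for brevity; not the proposed implementation):
```c#
using System.Collections.Generic;
using System.Text;

public static class AlpnWireFormatSketch
{
    public static byte[] BuildAlpnExtensionData(IList<string> protocols)
    {
        var body = new List<byte>();
        foreach (string protocol in protocols)
        {
            byte[] name = Encoding.ASCII.GetBytes(protocol);
            body.Add((byte)name.Length);       // 1-byte ProtocolName length
            body.AddRange(name);
        }

        var data = new byte[2 + body.Count];
        data[0] = (byte)(body.Count >> 8);     // 2-byte ProtocolNameList length (big-endian)
        data[1] = (byte)body.Count;
        body.CopyTo(data, 2);
        return data;
    }
}
```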
cc @geoffkizer @stephentoub @davidsh @caesar1995 @CIPop
@CIPop Since you've already looked at implementing Alpn, do you have any thoughts on how we should define the APIs here..
Wait, but can I only add a single extension here? What if I want ALPN and SNI (which is a very common case on the server side)? Also how do I provide multiple certificates? SNI's normal use is this:
Client says I am connecting to www.mycooldomain.com
Server says okay I expect either www.mycooldomain.com or www.mylesscooldomain.com, seeing as you picked the first I shall provide certificate A to you.
You would need a (xxxxxxxxxx, params TlsExtension[] extensions)
And if you do that you now preclude any future overloads, and get into trouble. And it still doesn't provide for the ServerName -> Certificate mapping.
Also this may become a thing, dotnet/runtime#22507 if there was an options bag it could be added easily, with an overload it will be messy.
I'm a little concerned that the options bag approach is turning a relatively constrained feature add (ALPN) into a bigger deal (fix the SslStream API).
Wait, but can I only add a single extension here?
The overloads can be made to use an IList
Also how do I provide multiple certificates?
I'm not sure how to address this with the current design; let me think about it more. I'll have to investigate how openssl/schannel provide support for TLS extensions, and see if we can emulate that design. I agree that with more extensions being added, we should design an API that's extensible.
Instead of bundling all the options in a new options bag, wouldn't it be better to identify the specific set of settings required for each extension supported by SslStream, and add a property for it on the TlsExtension struct? Then the TlsExtension would contain all the settings required for performing that extension in the handshake.
So you want an options bag but to leave the current options out of it. Sure, that's a possibility. If you want to understand SChannel and OpenSsl for SNI on the server side, there are two different approaches (of course they are different :P).
OpenSsl: You provide a callback. When SNI is detected it calls the callback. With your SSL object you can then read the SNI from the callback params and "switch" the SSL_CTX (context) that it is attached to (hence you normally make and pool the contexts with all the certs upfront, which by the way is something that should be done anyway, but that is a different issue for another day).
For SChannel, you supply a list of certificates when you make the authentication object, rather than one, and it picks based on the SNI.
For ALPN, you provide the extension to SChannel as an extra token buffer; you basically write it yourself. Then there is a property you can inspect to find the protocol that was agreed on.
For OpenSsl you once again set the protocols, and on the server side again you get a callback.
So I am not sure the implementation really gives us much help here, basically for every new extension it is hand coded in OpenSsl and they add specific methods/callbacks for them.
@geoffkizer in your response you have ignored SNI, which is pretty important for a number of reasons (I have internal ones), but I am sure @Tratcher will give you some as well.
Plus, as a final thing, SslStream doesn't take a cancellation token, which is a bit rubbish; handshakes can easily be stalled by a client, and today there is no way of stopping one other than killing the underlying stream, so the upper application needs access to the network stream.
Updated to drop results.
References to other frameworks:
* GO - Options Bag
* NodeJs - Options Bag
* Jetty - ContextFactory
So I am not sure the implementation really gives us much help here, basically for every new extension it is hand coded in OpenSsl and they add specific methods/callbacks for them.
SslStream uses openssl and schannel; for whatever APIs we expose, we need to be able to plug into these PAL layers. So it is worth thinking about the implementation here, and getting ideas from frameworks already in use.
Sure I agree, I was merely pointing out that they are both diametrically opposed unfortunately. Life would be too easy otherwise :)
FYI, on further investigation it seems you can provide a callback for SNI on SChannel. This means that if a callback is surfaced on SslStream, then the scenario of something like Let's Encrypt providing a dynamic certificate is possible on both SChannel and OpenSsl. So either a list or a callback would be ideal; if only one can be provided, then the callback would be preferable.
it seems you can provide a callback for SNI on SChannel
Can you provide a link to the API for this?
@Drawaes I'm not ignoring SNI or cancellation. I'm simply observing that the only must-have feature here for 2.1 is ALPN.
That said, @stephentoub and I chatted a bit today, and we agree that the options bag approach here seems like the right one, despite the additional work and risk associated with it.
I think it would be useful to break this proposal down into a couple different parts:
(1) New AuthenticateAsClient/Server APIs that take option bags and a cancellation token, as well as the definition of the option bags that match existing APIs. We should be clear about the principles used here. I believe we plan to take both constructor args and method args and put them on the options bag. Is that correct?
(2) Proposal to extend this model for ALPN
(3) Proposal to extend this model for SNI
edited to add:
(4) Proposal to extend this model for MaxFragment handling
public string ServerIndicationName {get;}
public string ProtocolId {get;}
I would suggest:
"ServerNameIndication" (or even just "ServerName")
"ApplicationProtocolName"
public X509Certificate ServerCertificate { get; set; }
public IDictionary<string, X509Certificate> ServerCertificates { get; set; }
Based on above discussion, it sounded like we were moving toward a callback model here. Is that correct? What does this look like?
public SslProtocols? EnabledSslProtocols { get; set; }
public EncryptionPolicy? EncryptionPolicy { get; set; }
These should not be nullable; they should just have appropriate defaults.
A static list of the Http 1.1, Http/2 and Http/2 over TLS should be provided to stop users having to look it up
We should define these explicitly as part of the ALPN proposal.
Do we actually need http2 without TLS? Are there any other defined protocol names we should include?
One option is to have a builder but this is frowned upon as an API pattern and instead mostly used in higher level libraries/frameworks (ASP.NET for instance). This is solvable via taking an internal snapshot of the options but will need to be considered in any threat modelling.
Here's my suggestion: Make these objects freeze-on-use.
For options like RemoteCertificateValidationCallback, which I can specify today in the constructor, what happens if I pass something to the constructor but then leave this null on the options bag? Do we use the value from the options bag (null), or do we use the value from the constructor? Similarly for other constructor args.
Okay, I am good with splitting SNI and ALPN out from the options bag change. I will open two sub-issues for those and answer the specific questions there, and discuss only the options bag concept here.
How does freeze on use manifest? Is there an internal flag and if you try to set something after that it throws? If so what's the exception appropriate here? What about if the certificates ends up being a list or dictionary? We have to freeze the list/dictionary as well.... how is that done? (I like the idea just not sure how it is implemented).
The other option is to have a hashcode or comparer that checks the settings plus the certs etc. (child objects) against an internal cache. If an entry doesn't exist, duplicate it and put it in there. There is already an internal credentials cache.
You could simply ref count to remove items. Then on a client it wouldn't be much burden, and on a server it would rarely drop to zero unless your server is not doing much, in which case a class allocation or two won't matter.
Thoughts? I would rather sort out some details before updating the proposal again.
Why is it a problem to have the options bag be mutable? It's no different from anything else: you give this to the API, and the API expects it to be in a consistent state for the duration of the operation... if you change it while the operation is in flight, that's a bug, no different from any other case where you mutate something while a called method depends on it (e.g. the buffers passed to read/write). If you want to treat it as immutable and just never change it after it's created, you can. If you want to treat it as mutable such that you can modify it between uses, you can.
How does freeze on use manifest? Is there an internal flag and if you try to set something after that it throws? If so what's the exception appropriate here?
Yep. InvalidOperationException. This is what HttpClient/HttpClientHandler do for their settings. See https://github.com/dotnet/corefx/blob/master/src/System.Net.Http/src/System/Net/Http/HttpClient.cs#L589
What about if the certificates ends up being a list or dictionary? We have to freeze the list/dictionary as well.... how is that done? (I like the idea just not sure how it is implemented).
We could make a freezable list/dictionary if necessary.
I'm fine with just letting it be mutable, too. Seems like the easiest approach.
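To make the freeze-on-use idea concrete, a rough sketch of what such a setter could look like (property and method names are illustrative, not a committed design):
```c#
using System;
using System.Collections.Generic;

public class SslServerAuthenticationOptions
{
    private bool _inUse;
    private IList<string> _applicationProtocols;

    public IList<string> ApplicationProtocols
    {
        get => _applicationProtocols;
        set
        {
            CheckDisallowedChange();
            _applicationProtocols = value;
        }
    }

    // SslStream would flip this when authentication starts.
    internal void MarkInUse() => _inUse = true;

    private void CheckDisallowedChange()
    {
        if (_inUse)
            throw new InvalidOperationException("Options cannot be modified after authentication has started.");
    }
}
```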
For the constructor vs options bag params: either an exception should be thrown if they are supplied in the constructor and the options are used, or throw only if they are supplied in the constructor and are different, or take the most restrictive (in your example, use the callback, not the null).
I am leaning towards throwing, though... purely on the "it's security" side. Less ambiguity and fewer permutations and rule guessing is usually safer. Again, what exception is appropriate here?
I am thinking hard about how you will need to use the options..... at the moment I can't see an immediate problem other than internal perf (you can make a single ctx and reuse over and over on openssl) but I think that can be handled internally. So for now let's leave the freeze on the backburner. Mutate it is :)
I presume just adding things like the negotiated protocol, server name, etc. as properties is fine, rather than a results object? (I prefer the properties but want to check.)
That's the pattern today, right? I'm fine with that.
An options bag sounds good, and having the options bag specific to TlsExtension makes the new API less confusing about what it's intending to do. For now, have the options bag support only those settings required for ALPN and SNI; as and when new TLS extensions need to be supported, this bag can be extended with new properties. If there's real user demand to extend the options bag to the old settings, then we can surely extend the APIs.
```c#
namespace System.Net.Security
{
public partial struct TlsExtensionConfig
{
public TlsExtensionConfig(TlsExtensionType extensionType) { throw null; }
public TlsExtensionType ExtensionType { get => throw null; }
// support sni from server side
public IDictionary<string, X509Certificate> ServerCertificates { get; set; }
// support sni from client side - is this required, since targetHost is already a parameter in
// authenticate APIs?
public string ServerName { get; set; }
// support alpn
public IList<string> ApplicationProtocols { get; set; }
}
[Flags] // need to maintain mapping to rfc values.
public enum TlsExtensionType
{
ServerName = 1,
Alpn = 2
}
public partial class SslStream : AuthenticatedStream
{
// alpn protocol chosen during handshake
public string ApplicationProtocol { get ; }
// On client - targetHost, on server - current servername requested by client.
public string ServerName { get; }
// Authenticate APIs - without cancellation (separate issue).
public virtual void AuthenticateAsClient(string targetHost, System.Security.Cryptography.X509Certificates.X509CertificateCollection clientCertificates, System.Security.Authentication.SslProtocols enabledSslProtocols, bool checkCertificateRevocation, TlsExtensionConfig tlsExtension) { throw null; }
public virtual void AuthenticateAsServer(System.Security.Cryptography.X509Certificates.X509Certificate serverCertificate, bool clientCertificateRequired, System.Security.Authentication.SslProtocols enabledSslProtocols, bool checkCertificateRevocation, TlsExtensionConfig tlsExtension) { throw null; }
public virtual Task AuthenticateAsClientAsync(string targetHost, System.Security.Cryptography.X509Certificates.X509CertificateCollection clientCertificates, System.Security.Authentication.SslProtocols enabledSslProtocols, bool checkCertificateRevocation, TlsExtensionConfig tlsExtension) { throw null; }
public virtual Task AuthenticateAsServerAsync(System.Security.Cryptography.X509Certificates.X509Certificate serverCertificate, bool clientCertificateRequired, System.Security.Authentication.SslProtocols enabledSslProtocols, bool checkCertificateRevocation, TlsExtensionConfig tlsExtension) { throw null; }
}
}
```
If I have learnt anything, it's that if you change concepts and make that jump, you had best clean up all the bits in one go, or you will forever have some options in one place and some elsewhere. When there are five or six more config options on SslStream and in the bag, you have half your config in one place and half somewhere else.
Also, if you make it TlsExtension specific, what happens when you add an option that isn't a TLS extension? Say we get the ability to specify the ciphers (an often asked for feature) and it's not an extension. Now we have the "do we overload it or add it to an options bag for extensions even though it's not an extension" discussion.
I really like the idea of all options in the options bag, the Authenticate methods for that ONLY supporting async, and all having cancellation. It's not confusing at all, because if you are using the "old way" and don't want or need the new stuff, you just won't get it; just keep doing what you always did.
After an offline discussion with @ericstj: we have framework design guidelines which recommend making only the longest overload virtual, and clearly the current SslStream overloads already violate that. It has, after all, been observed that adding more overloads would be a poor design. Also, the original design follows the create-set-call pattern, which is widely used in the framework. @Drawaes I'll be working on this, since there is an urgent need to get this done. I will be creating a prototype with the proposed APIs. I think we may hit a problem with the Unix implementation, where we will need these options during SslStream object creation, which is when the ssl context gets instantiated and we set the callbacks. It will become clear while creating the prototype whether we need these options during new SslStream or can defer them until later. I'll update the thread when I have more info.
I can answer your OpenSsl question right now. You don't create the context (and thus need any callbacks etc.) until you hit the AcquireServer/ClientCredentials private method on the SecureChannel. You don't hit this until you call the first token creation, and that isn't hit until you start the handshake. You can't make the SslCtx object without the certificate anyway, which is supplied on the AuthenticateAsClient/Server operation.
I am not sure how moving all of the settings to a settings object violates the create set call pattern; in fact, I would suggest that it reinforces it by moving the current settings out of the constructor in the "new" case. As for making overloads virtual, I don't think that was ever part of the design?
https://github.com/Drawaes/corefx/commit/de8fc6a997035067079b0938db6a292f50dc363a
Something like that for the options bag would be a good start... for ALPN I would use an enum for the protocols, with fixed buffers behind the scenes; the protocol ids are set by IANA and take a fair amount of haggling to change. An enum that is [Flags] will allow multiple to be supplied, and then the "magic" strings can be hard coded so mistakes aren't made.
Something like this
```c#
[Flags]
public enum ApplicationLayerProtocolType
{
None = 0,
Http1_1 = 1,
Spdy1 = 2,
Spdy2 = 4,
Spdy3 = 8,
Turn = 16,
Stun = 32,
Http2_Tls = 64,
Http2_Tcp = 128,
WebRtc = 256,
Confidential_WebRtc = 512,
Ftp = 1024
}
```
You probably want to trim the list for stuff that we don't need to support (SPDY for instance), but that gives an idea of what it would look like.
I am not sure how moving all of the settings to a settings object violates the create set call pattern
I didn't say that it violates, I in fact said the original design on this proposal(at the top of the issue) is a good starting point.
I misunderstood then :) so you doing a prototype so I shouldn't update the API proposal until that?
so you doing a prototype so I shouldn't update the API proposal until that?
I would like to implement and test the design first, before taking it to api-review. I think I have enough input to go on, once I publish my changes, you can dig at it to propose improvements. Sounds good?
Perfect ! Happy coding.
for ALPN I would use an enum
Why not just use strings? This has the advantage that if a new protocol is added, you can specify the string yourself.
BTW, where did you get that list from? I don't know where the "official" IANA list lives.
Also, let's not stop working on the API proposal. We should at least write up where we are currently and list the remaining open issues. We may be able to make progress on some of these independent of Priya's prototyping work.
https://www.iana.org/assignments/tls-extensiontype-values/tls-extensiontype-values.xhtml#alpn-protocol-ids
That is the official registry (RFC 7301 says so). Next, there is an efficiency to being able to hold the strings statically, as you need the ASCII (or maybe it's utf-8, but I am sure it is ASCII/ANSI) bytes and then add them to a list buffer. They are very rarely going to change from here, a flag is an easy change, and users will make mistakes.
Who is going to remember that HTTP/1.1 is the string "http/1.1" but http/2 is h2c and http/2 over tls is h2? For instance, you didn't know where to find the list; what hope does a dev sitting at their corporate desk who needs to connect to a server with HTTP/1.1 have?
Put it this way, I am good with strings for me, heck I would prefer to use bytes :) But I have read the specs forward and back and I am not sure others have.
I agree, I don't expect users to remember the strings. I'm just saying we should have a static list of strings instead of an enum.
Something like:
```c#
public static class ApplicationLayerProtocolNames
{
public const string Http11 = "http/1.1";
public const string Http2 = "h2";
public const string Http2Clear = "h2c";
// whatever else should go here
}
```
Yeah, do not use an enum for ALPN. This is an open dataset, and an enum would prevent supporting additional values without a framework upgrade, let alone custom values. The value also does not affect the SSL implementation, so there's no reason to restrict it. A similar mistake was made in the past for HTTP status codes, but at least there you could cast to work around it.
The spec also says list order matters and should be customizable. Using a flags enum prevents this.
Known constants are good though.
Fair enough, known constants is good for me.
I suppose if we really wanted to be on spec we'd use ReadOnlySpan<byte> as the alpn datatype. It would avoid issues like encoding and casing, and be more efficient to serialize to the wire. The known constants would make this usable.
We can't have a list of span.
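For context on why a list of span doesn't work: ReadOnlySpan<byte> is a stack-only ref struct and cannot be used as a generic type argument, whereas ReadOnlyMemory<byte> can, e.g.:
```c#
using System;
using System.Collections.Generic;
using System.Text;

// IList<ReadOnlySpan<byte>> protocols;   // does not compile: ref structs cannot be type arguments
IList<ReadOnlyMemory<byte>> protocols = new List<ReadOnlyMemory<byte>>
{
    Encoding.ASCII.GetBytes("h2"),         // byte[] converts implicitly to ReadOnlyMemory<byte>
    Encoding.ASCII.GetBytes("http/1.1")
};
```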
For static strings or buffers or whatever, I would say:
* Http1.1
* Http/2 over ssl
* WebRTC
* Confidential WebRTC
* Pop3
* IMAP
* FTP
should be enough for now. Http/2 without tls seems silly because you are already connecting to a TLS endpoint.
I would say the behaviour for ALPN should be...
If you provided protocols and the other side didn't participate at all (the client never sent, or the server didn't respond), the negotiated protocol is null. If there was no match but both sides sent, then it's an exception with authentication failed. And the rest is a match.
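A sketch of that decision logic, just to pin down the behaviour being described (a hypothetical helper, not proposed API; localProtocols is what we offered, peerSelection is what the other side answered, or null if it never participated):
```c#
using System.Collections.Generic;
using System.Security.Authentication;

public static class AlpnBehaviourSketch
{
    public static string ResolveAlpn(IList<string> localProtocols, string peerSelection)
    {
        if (localProtocols == null || localProtocols.Count == 0)
            return null;                   // we never asked for ALPN

        if (peerSelection == null)
            return null;                   // the other side did not participate: negotiated protocol is null

        if (!localProtocols.Contains(peerSelection))
            throw new AuthenticationException("ALPN: both sides sent protocols but there was no match.");

        return peerSelection;              // both sides participated and agreed
    }
}
```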
Based on separate discussions, I think our option bag will also need a flag to support disabling renegotiation, as this seems to be required for HTTP2.
FYI you mentioned above ALPN was the pressing need, but
The TLS implementation MUST support the Server Name Indication (SNI) [TLS-EXT] extension to TLS. HTTP/2 clients MUST indicate the target domain name when negotiating TLS.
Afraid SNI is a MUST as well...
We already support SNI on the client side, which is I think all that's necessary to be HTTP2 compliant.
Good point, there is a clear MUST, however the loophole is that you don't have to respond with anything even if it supports it... :) back to the backburner for SNI. (Presuming the client does send it, I haven't checked but will take your word on that :) )
Hmmm maybe we should investigate this... the assumption might be wrong that sni is working client side
HttpClient in .NET Core 2.0 doesn't use SslStream.
Really... so it uses curl and whatever magic api on windows... but the new managed one will right?
Really... so it uses curl and whatever magic api on windows... but the new managed one will right?
HttpClientHandler uses libcurl on Unix and WinHttp on Windows. The new managed HttpClientHandler implementation does use SslStream:
https://github.com/dotnet/corefx/blob/2efb83a3170ec14cafe6a14ea37edb037eb99836/src/System.Net.Http/src/System/Net/Http/Managed/HttpConnectionHandler.cs#L57
Cool and we know that SSLStream sends SNI for sure... or should I break out wireshark?
Cool and we know that SSLStream sends SNI for sure... or should I break out wireshark?
If you're volunteering, validation is always good. :)
On windows at least it does
Status: I have a prototype for Unix, but the ALPN APIs are supported only from OpenSSL 1.0.2. Currently working on patching my machine to that version to get my implementation to build.
What version is the "min" for .net core these days? (Excluding macos)
I'm confused by the mentions to h2c - when would it make sense to discuss it in the context of SslStream? A client and server should never negotiate h2c using ALPN, only h2.
Sorry if I missed something in the discussion (kinda skimmed through).
:) then we agree as I said
should be enough for now. Http/2 without tls seems silly because you are already connecting to a TLS endpoint.
So from experimenting with prototype, this is the API structure that I have,
SslAuthenticationOptions -> an options bag that contains settings common to client and server, set when calling the constructor. This ensures we are not stuck deciding which value to pick, by making these options available only through the constructor and not through any of the AuthenticateAs* methods.
SslClientAuthenticationOptions, SslServerAuthenticationOptions -> bags that contain options specific to either the client or the server, set through one of the AuthenticateAs* methods, at which point the identity of the SslStream is known, either as client or server.
```C#
public class SslAuthenticationOptions
{
public IList<string> ApplicationProtocols { get; set; }
public RemoteCertificateValidationCallback UserCertificateValidationCallback { get; set; }
public LocalCertificateSelectionCallback UserCertificateSelectionCallback { get; set; }
public EncryptionPolicy EncryptionOption { get; set; }
}
public class SslClientAuthenticationOptions
{
public string TargetHost { get; set; }
public X509CertificateCollection ClientCertificates { get; set; }
public bool CheckCertificateRevocation { get; set; }
public SslProtocols EnabledSslProtocols { get; set; }
}
public class SslServerAuthenticationOptions
{
public X509Certificate DefaultServerCertificate { get; set; }
public Dictionary<string, X509Certificate> ServerCertificates { get; set; }
public bool ClientCertificateRequired { get; set; }
public SslProtocols EnabledSslProtocols { get; set; }
public bool CheckCertificateRevocation { get; set; }
}
public class SslStream : AuthenticatedStream
{
public SslStream(Stream innerStream, SslAuthenticationOptions sslAuthenticationOptions) { }
public SslStream(Stream innerStream, bool leaveInnerStreamOpen, SslAuthenticationOptions sslAuthenticationOptions) { }
public virtual Task AuthenticateAsClientAsync(SslClientAuthenticationOptions sslClientAuthenticationOptions, CancellationToken cancellationToken) { }
public virtual Task AuthenticateAsServerAsync(SslServerAuthenticationOptions sslServerAuthenticationOptions, CancellationToken cancellationToken) { }
public string NegotiatedApplicationProtocol { get; }
}
```
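A rough usage sketch of that split (identifiers such as serverCert, innerStream, and token are assumed to come from the caller):
```c#
var commonOptions = new SslAuthenticationOptions
{
    ApplicationProtocols = new List<string> { "h2", "http/1.1" }
};

var serverOptions = new SslServerAuthenticationOptions
{
    DefaultServerCertificate = serverCert,
    EnabledSslProtocols = SslProtocols.Tls12
};

var sslStream = new SslStream(innerStream, commonOptions);
await sslStream.AuthenticateAsServerAsync(serverOptions, token);
string negotiated = sslStream.NegotiatedApplicationProtocol;
```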
Is there a property on SslStream to find out the hostname that was indicated? Other than that, it looks good. I will hopefully update the issue with the design tonight. I like the constructor/authenticate split idea.
Nice, so all the old constructors become wrappers for the new one? And the same for the AuthenticateAsClient/Server methods?
This also means ALPN works for all of the old AuthenticateAsClient/Server APIs, not just the new ones?
There's an implication that SslAuthenticationOptions is mutable between the call to the constructor and the call to Authenticate... If that's true then you don't need the new constructors, e.g.:
```c#
var sslStream = new SslStream(inner);
sslStream.AuthenticationOptions.ApplicationProtocols = new[] { HTTP2, HTTP11 };
await sslStream.AuthenticateAsServerAsync(...);
```
Can EnabledSslProtocols be made nullable so it can float with the system defaults? Or does None work for that?
Not so keen on that... that causes allocations; you want to cache that object and use it again and again. Also there might be some smarts we can do with the hashcode and a cache...?
If the options are going to be surfaced as a property then their i/mutability needs to be more explicit / enforced.
@Tratcher
The correct way will be,
```c#
SslStreamAuthenticationOptions options = new SslStreamAuthenticationOptions();
options.ApplicationProtocols = new [] {};
var sslStream = new SslStream(inner, options);
```
Options cannot be changed after the object's creation, as changes would not be reflected in the security context, which is created at first authentication. The security context cannot be changed once created; it needs to be disposed of first to initialize it with new values. So making this property mutable is confusing, when the changes will not have any significance after the security context is established.
And yes, all the old constructors and authenticateas methods will be wrappers for these new APIs.
If `sslStream.AuthenticationOptions.ApplicationProtocols = new[] { HTTP2, HTTP11 };` won't work, then there should be a compiler or runtime error when I try it.
The options object has only a getter, but the app protocol list itself is mutable in your design above.
Now that I think about it, maybe it's not that important to think about the scenario of changing options outside of the constructor, since SslStream doesn't allow you to do multiple Authenticates, and it is only during the handshake phase that these properties are required. So I think the above way is also acceptable: the options can be changed any number of times before the handshake, and after the handshake, even if the options are changed, it doesn't affect the encrypt/decrypt functions on write/read.
:) I like that option...
Have we considered allocations on the string -> ASCII conversion and the ALPN lookups? There might be some people upset if we throw more allocations into the mix. Is there some way we could adjust the API to help with that?
If you have the `public SslStream(Stream innerStream, SslAuthenticationOptions sslAuthenticationOptions) { }` overload, then you could leave it to the caller to decide if they want to re-use or mutate, so long as you were clear that it is the original instance and not some copy.
I like this approach. I think putting the options bag on the constructor makes a lot of sense.
I don't think we need to expose the AuthenticationOptions property at all. This doesn't seem to add anything, and it's a bit confusing. We don't expose existing constructor args on SslStream today (e.g. RemoteCertificateValidationCallback).
One thing I'm unsure of is: This proposal effectively has two different "options bags", one that's used for the constructor and one that's used for the AuthenticateAs methods. (Actually three, since there's one for AuthenticateAsClient and another for AuthenticateAsServer).
This means every consumer will have to construct two options bags (or use the existing APIs).
If we are moving to options bags, I wonder if we would be better off simply having one that's used on the constructor, with Client and Server flavors. This allows the consumer to just construct one options bag.
So I like that, because if you did that then it would just be AuthenticateAsync()?
Also, I have been scratching my head as to what use exposing the options bag on the SslStream would be, and I have come to the same conclusion: nothing good. I would say remove it for clarity.
I don't think we need to expose the AuthenticationOptions property at all. This doesn't seem to add anything, and it's a bit confusing. We don't expose existing constructor args on SslStream today (e.g. RemoteCertificateValidationCallback).
Even if we didn't expose this, the same problem exists: I can still change the value of the option. I'm more concerned about this kind of scenario:
```c#
SslAuthenticationOption options = new SslAuthenticationOptions();
options.ApplicationProtocols = something;
SslStream stream = new SslStream(inner, options);
Task t = stream.AuthenticateAsServerAsync(...);
options.ApplicationProtocols = somethingelse;
await t;
```
Is AuthenticateAsServer going to register something or somethingelse? Because of this, I'm now against the constructor options... The best option will be to have a client options bag and a server options bag exposed through the AuthenticateAs* methods, and then have these options bags not include the existing options made available through the current SslStream constructor. Thoughts?
How is that different to
```c#
Task t = stream.AuthenticateAsServerAsync(someOptions);
someOptions.ApplicationProtocols = somethingElse;
await t;
```
?
The issue you have with doing it that way (and I am not against it, I just need to think about it) is that we come back to the "what if the user sets something in the constructor and in the options bag?" question.
I think @Tratcher asked at one stage, and it might be worth a revisit: how different are the client and server options really? Is it worth merging them and just throwing if an invalid option has been set? Might not be; just worth a thought.
FYI the reason not to expose the options...
```c#
var newStream = new SslStream(options);
await newStream.Authenticate(blabla);
// pass the stream to other code... and it modifies your options you are using to make more connections.
```
I am less worried about a user modifying it between creation and auth, because that is normally controlled by the person/code making the connection.
pass the stream to other code... and it modifies your options you are using to make more connections.
Meaning? that'll be a new connection now, and it will perform handshake again with the specified options.
I mean this
A server makes a TCP connection, and then an SslStream. The SslStream is passed to user code to read/write from. But now the user code could change the settings of the options, which, if you reuse them to make new connections, it would modify.
But now the user code could change the settings of the options
These changes are ineffective, since it is not going to perform handshake again on the sslstream. Once a stream is authenticated the sslstream cannot be reused. It has to be disposed.
Right, but say (I know there are framebuffers etc. but let's assume):
Kestrel makes an options bag and wants to cache it and use it for the next 100 connections.
It makes a connection with the options bag and then passes the SslStream to user code.
Now it is true you can't change the current connection from user code (by accident or otherwise), but you could change the settings for all future connections.
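To make the concern concrete, a hypothetical illustration (it assumes the options would be reachable from the stream, which is exactly what is being argued against; acceptedStream, serverOptions, token, and HandOffToUserCode stand in for the hosting code):
```c#
// One cached options bag shared across many connections.
var cachedOptions = new SslAuthenticationOptions
{
    ApplicationProtocols = new List<string> { "h2", "http/1.1" }
};

// Accept loop: every connection is built from cachedOptions.
var sslStream = new SslStream(acceptedStream, cachedOptions);
await sslStream.AuthenticateAsServerAsync(serverOptions, token);
HandOffToUserCode(sslStream);   // user code now holds the stream

// If SslStream exposed an AuthenticationOptions property, user code could do
//     sslStream.AuthenticationOptions.ApplicationProtocols.Clear();
// silently changing the configuration for every future connection.
```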
@geoffkizer Does this proposal match your thinking?
```c#
public class SslAuthenticationOptions
{
public IList<string> ApplicationProtocols { get; set; }
public RemoteCertificateValidationCallback UserCertificateValidationCallback { get; set; }
public LocalCertificateSelectionCallback UserCertificateSelectionCallback { get; set; }
public EncryptionPolicy EncryptionOption { get; set; }
}
public class SslClientAuthenticationOptions : SslAuthenticationOptions
{
public string TargetHost { get; set; }
public X509CertificateCollection ClientCertificates { get; set; }
public bool CheckCertificateRevocation { get; set; }
public SslProtocols EnabledSslProtocols { get; set; }
}
public class SslServerAuthenticationOptions : SslAuthenticationOptions
{
public X509Certificate DefaultServerCertificate { get; set; }
public Dictionary<string, X509Certificate> ServerCertificates { get; set; }
public bool ClientCertificateRequired { get; set; }
public SslProtocols EnabledSslProtocols { get; set; }
public bool CheckCertificateRevocation { get; set; }
}
public class SslStream : AuthenticatedStream
{
public SslStream(Stream innerStream, SslAuthenticationOptions sslAuthenticationOptions) { }
public SslStream(Stream innerStream, bool leaveInnerStreamOpen, SslAuthenticationOptions sslAuthenticationOptions) { }
public virtual Task AuthenticateAsync(bool isServer, CancellationToken cancellationToken) { }
public string NegotiatedApplicationProtocol { get; }
}
```
:) I like it, not sure about everyone else... what do you think ?
Now it is true you can't change the current connection from user code (by accident or otherwise), but you could change the settings for all future connections.
I think that's the concern of the user, to develop code as they see fit; I'm not sure we should guard against that here. Either the options should be made immutable, or leave them mutable and ensure that the options object is not modified by SslStream internally to store any SslStream state info.
That model starts to hurt option discoverability, and makes it easier to leave out required parameters like DefaultServerCertificate.
At a minimum:
```c#
public class SslAuthenticationOptions
{
public IList<string> ApplicationProtocols { get; set; }
public RemoteCertificateValidationCallback UserCertificateValidationCallback { get; set; }
public LocalCertificateSelectionCallback UserCertificateSelectionCallback { get; set; }
public EncryptionPolicy EncryptionOption { get; set; }
}
public class SslClientAuthenticationOptions : SslAuthenticationOptions
{
public string TargetHost { get; set; }
public X509CertificateCollection ClientCertificates { get; set; }
public bool CheckCertificateRevocation { get; set; }
public SslProtocols EnabledSslProtocols { get; set; }
}
public class SslServerAuthenticationOptions : SslAuthenticationOptions
{
public X509Certificate DefaultServerCertificate { get; set; }
public Dictionary<string, X509Certificate> ServerCertificates { get; set; }
public bool ClientCertificateRequired { get; set; }
public SslProtocols EnabledSslProtocols { get; set; }
public bool CheckCertificateRevocation { get; set; }
}
public class SslStream : AuthenticatedStream
{
public SslStream(Stream innerStream, SslClientAuthenticationOptions sslAuthenticationOptions) { }
public SslStream(Stream innerStream, SslServerAuthenticationOptions sslAuthenticationOptions) { }
public SslStream(Stream innerStream, bool leaveInnerStreamOpen, SslClientAuthenticationOptions sslAuthenticationOptions) { }
public SslStream(Stream innerStream, bool leaveInnerStreamOpen, SslServerAuthenticationOptions sslAuthenticationOptions) { }
public virtual Task AuthenticateAsClientAsync(CancellationToken cancellationToken) { }
public virtual Task AuthenticateAsServerAsync(CancellationToken cancellationToken) { }
public string NegotiatedApplicationProtocol { get; }
}
```
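Usage of that shape would look roughly like this (serverCert, innerStream, and token assumed):
```c#
var serverOptions = new SslServerAuthenticationOptions
{
    DefaultServerCertificate = serverCert,
    ApplicationProtocols = new List<string> { "h2", "http/1.1" },
    EnabledSslProtocols = SslProtocols.Tls12
};

var sslStream = new SslStream(innerStream, serverOptions);
await sslStream.AuthenticateAsServerAsync(token);
string negotiated = sslStream.NegotiatedApplicationProtocol;
```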
@Priya91 and @Tratcher: Yes, those look good to me. A couple minor points:
Something a bit more radical to consider:
As far as I can see, the only reason we separate the Authenticate method from the ctor is that the Authenticate method is async.
Instead, we could consider moving the Authenticate method to the options bag itself and have it do both creation and authentication. This reduces the process to a single step and avoids the slightly weird "stream is created but unauthenticated" state.
It would look something like this:
```c#
var options = new SslClientOptions() { ... };
SslStream sslStream = await options.AuthenticateAsync(innerStream);
```
In this model, it might make sense to adjust the naming a bit -- SslClientOptions => SslClientFactory and AuthenticateAsync => CreateAsync, or something like that.
Thoughts?
One other minor thought:
We should add a bool like "AllowRenegotiation" to the base options bag, since it seems we need this for HTTP2 support. May as well include that here.
I will put this out there even though it has been discussed before, but seeing as we are almost heading in that direction anyway... (and @geoffkizer called it a factory, so I blame him)
Today the semantics of using at least OpenSsl are wrong: the SSL_CTX object is allocated for an SslStream, and then the SSL object is allocated. Why would you make both of those for a connection? Why didn't they make one? You don't (normally) make a new X509 certificate object for every connection; instead you take a ref to one (with ref counting).
The answer is simple: the use of an SSL_CTX per connection is wrong. It is designed so that you have a single SSL_CTX. This can make a significant difference, with OpenSSL able to share various resources, such as recycling buffers. But today this is near-on impossible because of the number of settings that could change between making two streams.
However, if we could have an "object" that effectively acts as a tracking or factory object (this is really why the mutable/immutable/Build() question is important to me), then we could make an SSL_CTX for it and reuse it. Also, there is a lot of locking to make an SSL_CTX, as it has a global lock (refer to my OpenSsl locking discussion), so many quick connections put pressure on those locks.
Anyway, just an idea: if that could be designed in, it could ease the way to a faster SslStream, at least for Linux. That would be nice, because in the current model there is a lot of hoop jumping since it was designed around SChannel, which is understandable, but this might be a small design nod that could be given.
I kept AuthenticateAsClient/Server for consistency/discoverability.
Moving AuthenticateAsync off SslStream to options moves you to a completely different usage pattern. If we were starting from scratch I would say yes, but I wouldn't want to fork the existing API like that; it would make adopting the new features in existing code more difficult. Is corefx up for that?
Moving AuthenticateAsync off SslStream to options moves you to a completely different usage pattern.
I agree. But I feel like we're already partway down the "different usage pattern" path already. The most recent proposal requires users to change both the constructor call and the Authenticate call in nontrivial ways. Once we're down this path, it's not clear to me where we should stop.
Alternatively, if we want to minimize the impact on the usage pattern, we could just add a couple new ctor overloads.
Here's the spectrum, as I see it:
(1) Just add new ctor overloads that take "IList<string>" for the application protocols (plus the AllowRenegotiation flag)
=> Smallest change, smallest impact on current usage model
(2) Add new ctor that takes an OptionBag, with existing ctor params plus the two new ones above.
=> Impacts current usage model somewhat, but only for constructor
=> A bit weird in that ctor is OptionBag-based, but Authenticate methods aren't
=> Could be extended in the future to look more like (3) or (4) -- but if we think we want to do these eventually, it probably makes sense to do them now when we're making usage model changes, rather than dribble out the usage model changes over time and end up with lots of different ways to accomplish the same thing
(3) Add new ctors that take ClientOptionsBag/ServerOptionsBag plus new Authenticate methods with minimal params
=> Bigger change to usage model
=> More consistent than (2), in that all options are via a single options bag
(4) Factory model
=> Large change to usage model
=> If we were starting from scratch, we'd probably do this
I'm not quite sure where to land here. (2) is my least favorite.
It does, and while that is the "ideal" scenario, I feel (as per a previous lib I hacked together for pipelines in the beginning) we could do something more like X509 does here, where it is basically immutable in nature and you just ref count it, so if the user disposes it you don't care.
FYI, my original design was more along the lines of what TCP sockets do, where they have a Listener that provides you a socket for each connect, and this was a "SecureListener" that takes a stream and gives you a secure stream.
It also means we can avoid allocations/work for, say, the ALPN list, because the way I see it above we are going to have to convert to ANSI strings, generate a buffer with a header, and then put each protocol in with its own header, whereas that could/should be done one time.
One other thought.
(1) and (4) are not mutually exclusive. That is, we could just do (1) now with the intention of adding a factory model (4) on top in the future. I think we could build (4) without any changes to SslStream proper.
If (4) was a strong possibility, then I would say do (1) right now, because if you partially make a big change then a really big change will be impossible later :) or at least that is what I have learnt working for super big companies ;)
if you partially make a big change then a really big change will be impossible later :)
Yep, agreed.
Naming/general style aside, I had something like:
```c#
using (var serverContext = new SslSecurityContext(isServer: true))
{
    // Set all your settings and things
    using (var sslStream = serverContext.CreateSecurePipeline(innerPipeline))
    {
        await sslStream.PerformHandshakeAsync();
    }
}
```
This allowed for a lot of optimisations around contexts, buffers, ALPN buffers, you name it. If a big redesign were happening, that is what I would push for. I also understand the HTTP/2 ALPN time constraints, so I would say slap them in as an overload... and the second that is done, start the discussion on a redesign :)
In a real server, of course, you would make multiple of the sslStreams from the "context/factory/[insert name]".
Yeah, and then that makes me want to generalize even further, beyond SslStream, and makes me wonder how this relates to the project bedrock stuff.
Please don't do (1), the constructors have already exceeded a sane number of parameters, and it gets us no closer to addressing other scenarios like SNI. (2) & (3) are largely aesthetic differences. (4) is a fundamental usage change that will be harder to consume. E.g. it combines the creation and connection code which may currently be taking place in different places in my application.
it combines the creation and connection code which may currently be taking place in different places in my application.
Doesn't this apply to (3) too?
@Tratcher my design above doesn't do that; the context just creates the stream. You still get to call the handshake, you just don't get to configure it.
One problem I see with (4) as proposed is that it changes the basic understood functionality of SslStream: you now do the handshake on the options, so then what's the purpose of AuthenticateAs* on SslStream? These APIs will be redundant now, and we will still have them exposed, and this results in multiple ways to do the same thing. I like @Drawaes code snippet above, where the handshake is still kept on the sslstream and have the options be a factory to produce sslstream objects with various input settings. I think we can go ahead with this model even now, put the minimal required knobs on this options factory for now, and extend later for other TLS extensions.
I like @Drawaes code snippet above, where the handshake is still kept on the sslstream and have the options be a factory to produce sslstream objects with various input settings
I missed this originally.
In a "factory" style approach, I'm not sure what the point of having a separate Authenticate method is (PerformHandshakeAsync in the @drawaes snippet). When would you ever do anything other than Create immediately followed by PerformHandshakeAsync? And the Authenticate* methods are still redundant.
In the pipelines case it was a matter of:
```c#
// Some part of the connect handler
var secureStream = Listen.OnConnect(stream => context.CreateSecureStream(stream));
SomeServerLoop(secureStream);

// The actual handling loop
public async Task SomeServerLoop(SslStream secureStream)
{
    await secureStream.Authenticate();
    do
    {
        // Some logic
    } while (true);
}
```
So moving the creation and the authenticate helped here. But it may or may not be useful. I think this was the pattern that @Tratcher was hinting at above.
The other thing is, if you return a Task from there (because the handshake is inherently async), then code might look something like this (which is ugly):
```c#
using (var mySslStream = await context.CreateSecureConnection(innerStream))
{
}
```
So you always end up having to get that object from the task... not the most pretty thing in the world.
I don't understand why that's ugly. There are lots of APIs that work like that today.
This whole discussion is pushing me toward a minimalist approach here. Doing a more involved change seems to open up a bunch of new issues, and I'd rather defer them and just make ALPN etc work.
Side issue: SNI. Is there any reason we can't just use the "targetHost" argument to LocalCertificateSelectionCallback to make this work on the server? On the server, this is always set to string.Empty today.
@geoffkizer I've been tempted to propose doing SNI like that, but it hasn't been confirmed if we can do callback based SNI or not on schannel. However, most of the other params wouldn't make sense.
I am happy to drop that bit either way. I just mean, I can't think of many examples that have an IDisposable inside the Task and it's "surprising" to most coders (maybe not in MS but outside :) ).
As for dropping it to get it done: as I said above, I would be fine with that, but then no options bag; just add a param and an overload and you are done for now, and it can be thought through properly later. I have a small advantage in that I have been thinking about this for a wee while :)
On SNI, is there a way to use a callback to switch certs on SChannel? I was struggling to find one, but the source is closed and the documentation is a few years old; you might have more resources and contacts to help with that.
Please don't go back to option 1... please...
The upside to option 1, it's soooo ugly that it will force a change quickly after :P
@Tratcher Yeah, the whole LocalCertificateSelectionCallback mechanism is a bit weird, and I don't quite understand how it's intended to work, or which stuff works on client vs server.
LocalCertificateSelectionCallback is only for clients. targetHost and localCertificates come from AuthenticateAsClientAsync, and remoteCertificate and acceptableIssuers come from the server. The client has to pick a cert that matches one of the acceptable issuers.
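To make that concrete, here is a minimal, hypothetical client-side selection callback; the matching logic is illustrative only and not how SslStream itself chooses a certificate:
```c#
using System;
using System.Security.Cryptography.X509Certificates;

// Hypothetical example: pick the first local cert whose issuer the server finds acceptable.
static X509Certificate SelectClientCertificate(
    object sender,
    string targetHost,
    X509CertificateCollection localCertificates,
    X509Certificate remoteCertificate,
    string[] acceptableIssuers)
{
    foreach (X509Certificate cert in localCertificates)
    {
        if (acceptableIssuers == null || acceptableIssuers.Length == 0 ||
            Array.IndexOf(acceptableIssuers, cert.Issuer) >= 0)
        {
            return cert;
        }
    }
    return null; // no client certificate sent
}
```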
For SNI the client sends a single hostname, and the server then picks its certificate. The problem, as far as I can see, is that in SChannel you have to supply that list upfront and SChannel will pick from those for you; you can't provide it from "outside". But there might be a way I am missing.
LocalCertificateSelectionCallback is only for clients.
That makes sense. However, it's actually called in the server case too. Apparently all the params are set to null/empty except for the cert collection -- we construct a temp collection with just the server cert in it. And then we use whatever cert you return from this callback.
Why we do this, I have no idea.
Legacy code is a wonderful and mysterious thing..
@Priya91 @geoffkizer Has a decision been made? The way I see it is
Personally with the time constraints for 2.1 Http/2 I would say go with 2, get ALPN out the door and start the bigger discussion with more time in the pocket and taking into account things like project bedrock and pipelines.
Anyway.. thoughts?
Go with the option bag. The current design is unsustainable, and I don't foresee a more dramatic redesign happening any time soon. The options bag is also no more time consuming to implement.
@Tratcher Which option bag proposal? (2) above? (3) above? Something else?
(2) with the options bag for the constructor is the least impactful while addressing the requirements. This can easily handle ALPN and the renegotiation option. Nothing here requires changes to the Authenticate methods.
Thinking ahead to SNI, that can still be done with (2) if you have client/server specific options & overloads. Or you introduce separate client/server options bags for the Authenticate methods. I hesitate to go all the way to (3) where everything moves to the constructor options as it would require more churn for the implementor and consumers, but I think that's a decision best evaluated by the implementor. I expect either of these options to meet our ASP.NET Core requirements.
Okay well, even though @Tratcher is thinking how I was at the start, I got my hopes up for a future actual fix and now I'm not sure this is the correct direction.
However I am going to depart from general internet behaviour and capitulate as there seems to be solid support for that design. So here is what I think to move things along
@Priya91, since it looks like you are doing the implementation/investigation: if you agree with @Tratcher, can we have a design around that put in here? If @geoffkizer agrees as well with the design, I will update the top and we can push on...
(A comment for posterity, we will be back here in the near future :P )
I have also gotten ALPN working for Linux, and used a basic options bag for the API surface area; I can change this after the API is approved. In the meantime, here's the commit with the ALPN changes on Linux. The ALPN test works, but some other tests are failing; investigating those.
Okay @Tratcher, if you are happy with the general API in that commit I can distill it out and update the first comment in the issue and we can move on?
By all means
I don't believe we should do (2). It's a half-step at best towards a full options bag approach.
I've been supportive of exploring the options bag approach, but it's been literally weeks that we've been going around on this and it hasn't converged. At this point I don't think it's worth continuing to pursue.
I think we should simply do 1, add a ctor overload.
@Tratcher, you've implied that (1) does not meet ASP.NET requirements. Please explain this.
Here's the full writeup for approach (1). If there's anything incorrect here, please let me know.
ALPN:
```c#
public class SslStream : AuthenticatedStream
{
    public SslStream(
        Stream innerStream,
        bool leaveInnerStreamOpen,
        RemoteCertificateValidationCallback userCertificateValidationCallback,
        LocalCertificateSelectionCallback userCertificateSelectionCallback,
        EncryptionPolicy encryptionPolicy,
        IList<string> applicationProtocols);

    public string NegotiatedApplicationProtocol { get; }
}
```
Support for disabling renegotiation:
```c#
public class SslStream : AuthenticatedStream
{
    public SslStream(
        Stream innerStream,
        bool leaveInnerStreamOpen,
        RemoteCertificateValidationCallback userCertificateValidationCallback,
        LocalCertificateSelectionCallback userCertificateSelectionCallback,
        EncryptionPolicy encryptionPolicy,
        IList<string> applicationProtocols,
        bool allowRenegotiation);
}
```
Let's finish this offline.
@geoffkizer @Tratcher Do we have a consensus on the design?
No consensus at this point.
I spoke to @terrajobst today, and we'll use the regular Tues 11am design slot to discuss this. Invite should go out shortly.
@Tratcher, please forward the invite as appropriate.
@Priya91 @karelz @stephentoub, FYI -- hopefully you can make this meeting.
@Drawaes would be great to have you too, but not sure how this fits your schedule...
Not related to the api proposal, but SecureTransport doesn't have any public APIs to support ALPN, so we'll be throwing PNSE on OSX.
Huh. That's unfortunate.
SecureTransport doesn't have any public APIs to support ALPN
That's also part of the reason the libcurl that ships with macOS doesn't support HTTP/2:
https://daniel.haxx.se/blog/2016/08/01/curl-and-h2-on-mac/
11am where? Eg timezone?
sorry -- PST (Seattle)
Do we support OpenSSL as an option on OSX?
@bartonjs can confirm, but I believe only for a few missing functions, not for TLS.
I found this open radar bug asking for ALPN support in SecureTransport. One of the comments mentions a new SSLSetALPNProtocols API to be released in macOS 10.13, which is currently in beta. Could we light up ALPN support on macOS if this API is available?
7pm ldn should be okay as long as my Pen Testing next week doesn't go badly :)
Could we light up ALPN support on macOS if this API is available?
I assume so. @Priya91?
Could we light up ALPN support on macOS if this API is available?
Not easily. Right now we build on 10.12 and only have 10.12 API available. We'd have to do tricky things to make any sort of lightup work with this shim, or just get buyoff on making 10.13 the minimum OS for "2.1".
It would probably still be tricky, but I'm wondering if installing a new Xcode and headers on 10.12 would be sufficient. The SecureTransport.h lives under /Applications/Xcode.app/Contents/Developer/Platforms/*
We should probably not count on it at this point and take it as bonus when possible.
actually:
```c
+/*
+ * Set the ALPN protocols to be passed in the ALPN negotiation.
+ * This is the list of supported application-layer protocols supported.
+ *
+ * The protocols parameter must be an array of CFStringRef values
+ * with ASCII-encoded reprensetations of the supported protocols, e.g., "http/1.1".
+ *
+ * See RFC 7301 for more information.
+ */
+OSStatus
+SSLSetALPNProtocols (SSLContextRef context,
+                     CFArrayRef protocols)
+    __OSX_AVAILABLE_STARTING(__MAC_10_12_4, __IPHONE_9_3);
+
+/*
+ * Get the ALPN protocols associated with this SSL context.
+ * This is the list of supported application-layer protocols supported.
+ *
+ * The resultant protocols array will contain CFStringRef values containing
+ * ASCII-encoded representations of the supported protocols, e.g., "http/1.1".
+ *
+ * See RFC 7301 for more information.
+ *
+ * Note: The `protocols` pointer must be NULL, otherwise the copy will fail.
+ * This function will allocate memory for the CFArrayRef container
+ * if there is data to provide. Otherwise, the pointer will remain NULL.
+ */
+OSStatus
+SSLCopyALPNProtocols (SSLContextRef context,
+                      CFArrayRef __nullable * __nonnull protocols) /* RETURNED */
+    __OSX_AVAILABLE_STARTING(__MAC_10_12_4, __IPHONE_9_3);
```
Oooh. That does make life somewhat easier. (10.12.4 seems like an easier minimum to declare than 10.13)
I can put it on TODO list to investigate. And yes, I agree that would be much more viable.
The open question would be behavior on older versions. It would be nice IMHO if everything works, just ALPN would throw exception for unsupported platform.
@wfurt how would the consumer of SslStream know if it could enable ALPN other than trying to do so and having it throw?
How about a graceful fallback mode where the negotiated ApplicationProtocol returns null and the app gets to decide whether to fall back to a default (HTTP/1.1) or fail? The app already has to deal with this scenario when the remote party did not support ALPN.
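A sketch of what that consumer-side fallback could look like, assuming the AuthenticateAsServerAsync overload, NegotiatedApplicationProtocol property and SslApplicationProtocol constants from the proposal in this issue; the handler methods are hypothetical:
```c#
// Hypothetical server loop: fall back to HTTP/1.1 when ALPN yields nothing.
async Task HandleConnectionAsync(SslStream sslStream, SslServerAuthenticationOptions serverOptions)
{
    await sslStream.AuthenticateAsServerAsync(serverOptions, CancellationToken.None);

    if (sslStream.NegotiatedApplicationProtocol == SslApplicationProtocol.Http2)
    {
        await HandleHttp2Async(sslStream);   // hypothetical HTTP/2 handler
    }
    else
    {
        // Either the peer didn't offer ALPN or the platform couldn't negotiate it;
        // fall back rather than throw.
        await HandleHttp11Async(sslStream);  // hypothetical HTTP/1.1 handler
    }
}
```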
I think the library loader will refuse to load the shim (since it couldn't resolve the import), meaning that anything touching crypto would cause the application to fail to load. (Unless we do a lot of scary work)
@bartonjs, remind me why in the shim we can't just use dlopen/dlsym to find the function we care about and use it if it exists?
In general, PlatformNotSupportedExceptions are consumer landmines, ASP.NET would rather have lightup/no-op for optional features like ALPN.
@stephentoub "Scary work" :). Though doing it for one pair of functions might be a lot more self-contained than if we were to do the whole thing like we did against OpenSSL for the portable effort.
I would tend to agree, @Tratcher. The downside is dragging in a new OS requirement for something perhaps marginal for most users. Requiring 10.12.4 instead of 10.12 does not seem too bad.
I would put this on hold for now and I can share more info when I get some tests done.
Of all the proposals, @Priya91's commit looks the simplest to both consume and implement. I'll summarize:
```c#
public partial class SslStream
{
    public SslStream(Stream innerStream, SslAuthenticationOptions options) { }
    public SslStream(Stream innerStream, bool leaveInnerStreamOpen, SslAuthenticationOptions options) { }
    public string ServerIndicationName { get; }
    public string ApplicationProtocol { get; }
}
public class SslAuthenticationOptions
{
    public bool AllowRenegotiation { get; set; }
    public IList<string> ApplicationProtocols { get; set; }
    public EncryptionPolicy? EncryptionPolicy { get; set; }
    public IDictionary<string, X509Certificate> ServerCertificates { get; set; }
    public LocalCertificateSelectionCallback UserCertificateSelectionCallback { get; set; }
    public RemoteCertificateValidationCallback RemoteCertificateValidationCallback { get; set; }
}
```
A callback would still be preferred over IDictionary<string, X509Certificate> for ServerCertificates, but I've gotten mixed reports on whether that's supported on Windows.
I like that the options class will give a place to stick a collection of certificates for doing root pinning (once we add that option to X509Chain).
It will work; there are a couple of things I would still like figured out:
I need to sit down and think about whether 1, 3, and 4 can be solved. As for 2, I would say hold off on SNI if it's not critical at this point, until someone can do some investigation on the SChannel side (OpenSSL supports it for sure).
@Drawaes 1,3,4 will not affect the api proposal issue here. 1 and 4 are perf improvements, 2 and 3 are different api requests. I wouldn't consider these blockers for the ALPN proposal.
Sure, but the API paints the implementation into a bit of a corner. I am trying to think of a way out of it; it might be possible, but it's worth consideration.
Performance and usability don't need to be mutually exclusive.
Let me put it another way. The API proposal doesn't make it significantly more difficult but it doesn't lend a hand either. The factory/builder/listener concept helped a lot.
One additional thought that's orthogonal to the overall API approach.
We need to define the ALPN protocol strings for general use. Something like:
```c#
class SslStream
{
    public const string Http11 = "http/1.1";
    public const string Http2 = "h2";
    // etc.
}
```
These strings could live:
(1) On SslStream itself.
(2) On some other top-level (presumably static) class, e.g. SslApplicationProtocols.
(3) On a nested (static) class inside SslStream. So you'd access them by doing something like "SslStream.ApplicationProtocols.Http11". Slightly more discoverable than (2), but more verbose.
(2) seems to be the way we expose stuff like HttpRequestHeaders. That said, I'm happy with any of these.
What's the story with tomorrow?
What do you mean? Meeting is on at 11am PST.
(2) with their own static class. The others cause API noise.
I've updated the proposal with the meeting notes and a tweaked API. Let me know if I missed anything.
You need the ability to disable renegotiation, and remove max fragment size for now.
Disabled renegotiation is an HTTP/2 requirement.
@Drawaes Added renegotiation.
You already removed max fragment size, right?
Nope it was on the client side but I have now :)
@stephentoub had a good point just after the meeting, and I think I will explore it some more. It might make @halter73 happy as well: with the options bag approach, a "factory" could be added afterwards, and the options bag might actually help, so it would be complementary. In order to make it work it would need some internal access to SslStream to directly supply cached values, so it might need to be in the assembly.
To be clear, this in no way affects any options bag work, or this API proposal.
@karelz asked for references to other frameworks using a "factory approach"; I found my refs (up there in the comments 23 days ago ;) ).
References to other frameworks:
* Go - options bag
* Node.js - options bag with a factory/provider
* Jetty - ContextFactory
The proposed API in the beginning looks good; I have a few questions,
Yeah same 2 I don't care. 3 Ssl class prefix is fine
One question, is the Protocol buffer just the content of opaque bytes for that protocol or the opaque bytes + the size prefix?
Also, I put the Ssl prefix on the classes at the top.
@Drawaes I would expect them to exactly match the IANA values, no prefix.
Cool I don't have a preference as long as it's clear :)
I would suggest removing the SNI APIs from here, since we are not planning to implement those for now, keeping only the ALPN and AllowRenegotiation values. For SNI, there can be a separate API proposal. Updated the top issue with this, and also implemented IEquatable on SslApplicationProtocol.
@Tratcher I don't get this,
EncryptionPolicy and SslProtocols should be nullable so they can float with the system default. In other words, you can tell if the user set them on purpose or not.
If the user doesn't set them, the default can be None; null is going to be None internally. I don't understand the advantage nullable offers over None.
There's a new SslProtocol every few years. We want to update the defaults, but we also want to honor developers explicit settings. If SslProtocol is a required parameter then it's hard to tell if the developer set it because that's what they wanted or just because that's what was available at the time. If it can be left null, then it's clear that the developer opted to use the system defaults, even if those defaults change.
Ok, let me put it this way: if I don't set the EncryptionPolicy property, it is going to use whatever default value the current state of SSL security requires. If the user explicitly changes the default, use the user's value.
@Tratcher SslProtocols.None has already been co-opted to mean that in the current model. While null is a better ("I don't know, you should do the right thing") value in general, I don't think we want two different ways of expressing it. So I'd argue for reusing the overloaded value of None.
I agree. If it was greenfield, null is much clearer on intent, but it is where it is. To be honest it's completely backwards from how you would want it anyway: rather than picking what protocols you want, it should be picking what protocols you disable.
So you would say "I don't want SSL3 or TLS1, but give me whatever else you can handle." That was my design from 12 months ago, but we are probably too far down this line to change that.
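A minimal sketch of the semantics being agreed here, assuming the EnabledSslProtocols property from the options proposal; the setting shown is illustrative, not a recommendation:
```c#
var options = new SslServerAuthenticationOptions
{
    // Leaving EnabledSslProtocols at SslProtocols.None (the overloaded "system default"
    // value) means the platform's current defaults are used, so they can improve over time.
    EnabledSslProtocols = SslProtocols.None,
};

// An explicit setting pins the protocols, even if the system defaults later change.
options.EnabledSslProtocols = SslProtocols.Tls12;
```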
Regarding the SslServerAuthenticationOptions, would it be possible to also add support for selecting the trusted issuers here?
See: https://github.com/dotnet/corefx/issues/23938
@ayende Please file separate issue for that.
He has and has linked to it here at my suggestion. It might be worth opening an SNI issue though as we dropped it from here.
SNI and which CAs you allow will need to be aware of each other.
```c#
public Task AuthenticateAsServerAsync(ServerAuthenticationOptions options, CancellationToken cancellation) { }
public Task AuthenticateAsClientAsync(ClientAuthenticationOptions options, CancellationToken cancellation) { }
```
These should be Ssl[Server|Client]AuthenticationOptions.
public bool ClientCertificateRequired { get; set; }
I still wonder if we should rename this to "ClientCertificateRequested" or similar. From MSDN, that seems like a more accurate description of what this does (unless I'm misunderstanding...)
```c#
public static readonly SslApplicationProtocol Http2;
public static readonly SslApplicationProtocol H2;
```
Do we really need two ways to specify Http2? I would just go with Http2 and drop H2.
public class SslApplicationProtocol : IEquatable<SslApplicationProtocol>
There was some discussion of making this a struct. Did we close on this?
I've removed the duplicate H2.
Changed it to struct; there was no disagreement about using struct. Are we going with ReadOnlyMemory? What advantage does that offer over byte[]?
Since we need to copy anyway, I think byte[] is fine.
Re: ClientCertificateRequ{ired|ested}: If set to true with no custom callback, it's required. If set to true with a custom callback, it's whatever the custom callback decides.
What it really boils down to (for RFC terminology) is "send [client] certificate request" https://tools.ietf.org/html/rfc5246#section-7.4.4
So RequestClientCertificate, maybe?
ReadOnlyMemory was more important before SslApplicationProtocol when we were directly adding IList
RE: client cert, do we really want to rename existing APIs? That will cause as much confusion as it saves.
Re renaming, I don't feel strongly either way. We have an opportunity here to clarify these things that we probably won't have again for a long time. But as you said, I'm not sure it's worth it.
RE: read-only memory... let's put it another way: why wouldn't you use it? It gives far more flexibility than an array. You can't compare by ref anyway because, as you say, you are copying it.
Changed readonlymemory to byte[]. I think all outstanding questions have been resolved. @karelz @KrzysztofCwalina @bartonjs @terrajobst @weshaggard Can you please do a final review and mark the API as approved. Thanks!
Wait, why is the ReadOnlyMemory being ditched? People are spending time actively adding the "memory" concept across the framework; byte[] is convertible to ReadOnlyMemory but not the other way around. We also agreed that the data is copied, so you can't do a ref equality anyway, which would be the only upside. Finally, if the memory needs to be pinned to transfer to the native API, you can pin it once and grab the pin for the cost of a reference count.
@Tratcher Just mentioned that we initially went with readonlymemory to make the static readonly http1.1, http2 fields immutable. Reverting it.
Update: We should make the checkCertificateRevocation setting on the options bag an enum, as the bool is not extensible for future design. Say we decide to add a new certificate revocation option later; the user could then set conflicting values through these different settings and we would have to obsolete the bool value. So it makes sense to add the enum on the options bag now.
Change this,
```c#
public bool CheckCertificateRevocation { get; set; }
```
to,
```c#
public System.Security.Cryptography.X509Certificates.X509RevocationMode CheckCertificateRevocation { get; set; }
```
Updated with
```c#
public SslApplicationProtocol(string protocol) { } // assumes utf-8
public override string ToString() { } // For debugger. utf-8 or a string representation of the raw bytes. E.g. "0x68 0x74 0x74 0x70 0x2f 0x31 0x2e 0x31"
```
Seems reasonable. Is it worth checking that there are no multibyte UTF-8 chars in the string and throwing if there are? You could still set those via the byte[] overload, and it could stop the hidden-character copy/paste issues (like the thumbprints copied from MMC that I have wasted hours of my life on).
@Drawaes any examples?
The example is: if you copy and paste a thumbprint from the certificate plugin for MMC, you end up with a hidden Unicode char. If you then try to use this to match a thumbprint in the certificate store in code, you can't find it. You then spend two hours checking everything, only to find that in VS the column number on the string jumps between the end of the thumbprint and the closing quote.
It's not critical, more a pit-of-success thing; it's very unlikely you will actually want a non-ASCII Unicode char encoded into UTF-8. If you do, you will know it and can use the byte[] overload.
Anyway not a deal breaker just something that when you hit it makes you never forget.
I meant an example char/byte sequence. Was space (0x20) the errant character?
It's at the start, sorry. It's 0x200E, the left-to-right mark. There is a trail of broken dreams on Stack Overflow ;)
So my suggestion is this: if anyone sends in a string that results in the most significant bit being set in any of the bytes, throw an exception. It will stop odd runtime matching problems early, and there is a way around it if you meant it.
It's partially also because there is no way to say "we will interpret this as utf8" almost like we need a utf8 string ;)
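A rough sketch of the guard being suggested, if the string overload were kept (hypothetical helper, not part of the proposal):
```c#
using System;
using System.Text;

static byte[] ToAsciiProtocolBytes(string protocol)
{
    byte[] bytes = Encoding.UTF8.GetBytes(protocol);
    foreach (byte b in bytes)
    {
        // Any byte with the most significant bit set means the input contained a
        // non-ASCII character (e.g. a stray U+200E copied from MMC); reject it early.
        if (b > 0x7F)
        {
            throw new ArgumentException(
                "Protocol name contains non-ASCII characters; use the byte[] overload instead.",
                nameof(protocol));
        }
    }
    return bytes;
}
```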
Ah, understood. That's more of a code editor issue rather than an API issue. Depending on your file encoding those extra bytes may just get dropped. Second-guessing the values is way beyond the purview of this API.
Right, I have thought about this more, the string overload should be ditched entirely. There are consts for the common case, for anything else it's "advanced" and the user should control the string that is entered.
They can convert to UTF8 if they want, but there is no utf8 string type, so it's not clear what sending a string in will do. There is nothing in the spec about UTF8 but just opaque bytes, the overload helps very little and just adds a potential trap.
A common trap would be a binary Redis key/value, which has an implicit conversion to string (without widening or narrowing).
So using a binary value from Redis would then get interpreted as UTF-8 and either have every other byte eaten or produce an invalid UTF-8 interpretation.
The spec does give utf-8 as a suggestion for defining spec values, and all of the current ones have used that encoding.
Identification Sequence: The precise set of octet values that
identifies the protocol. This could be the UTF-8 encoding
[RFC3629] of the protocol name.
Right and all of the current ones that are relevant have a const. That is also the IANA considerations for allocating them. The spec itself is
Protocols are named by IANA-registered, opaque, non-empty byte strings, as described further
in Section 6 ("IANA Considerations") of this document.
And the could be is about as weak as an IETF spec gets, it's not even a lower case "must" :P
I guess the real question is: what use case does passing in a string facilitate that warrants the potential issues?
The spec does give utf-8 as a suggestion for defining spec values, and all of the current ones have used that encoding.
That's different than automatically converting the input to the function from a 2-byte-per-character representation (string) to a 1-byte-per-character representation (UTF-8) so it can match the on-wire representation.
If, for example, it came from a Redis key, it would likely already be in the on-wire representation, so you'd be dropping every other byte in the automatic conversion from string to UTF-8. Or it might be in a string format.
The issue I have is with the automatic conversion from a 16-bit string to an 8-bit opaque binary format.
^^^ That. The unease I have is that there is nothing saying "it's going to be assumed to be UTF-8" in the API definition other than a comment. You could for instance have (ignoring shape and naming):
```c#
CreateUTF8ApplicationLayerProtocolName(string inputString);
```
Where it is clear what is going to happen; having an automatic overloaded constructor instead is problematic.
Some of your examples are copy paste and some are treating the input as dynamic. The copy paste seems far more likely, you don't add new protocols very often, and certainly not on the fly from redis.
The main scenario I see the string constructor being used for is when copy pasting new values from IANA list, which explicitly encourages utf-8 encodings.
I've been told several comparable platforms only expose the UTF-8 string API, no byte[] API at all. I'd be curious to see actual references though.
Sure go takes a "string" but a string in Go is opaque bytes anyway not utf16 like a c# string. The issue isn't the UTF8. If there was a utf8 string, no problem.
From GO
It's important to state right up front that a string holds arbitrary bytes. It is not required to hold Unicode text, UTF-8 text, or any other predefined format. As far as the content of a string is concerned, it is exactly equivalent to a slice of bytes.
Current usage without an overload for a string would be this?
```c#
var myProtocol = new SslApplicationProtocol(Encoding.UTF8.GetBytes("MyProtocol"));
```
@Drawaes Can you please revert the changes you did to the top issue comment? The last proposal update was done after an api-review meeting during which adding a string constructor was recommended, we don't want the top comment changed without an api-review.
cc @karelz
I only removed the string overload which wasn't discussed at the API Review meeting, and was only added 4 days ago, unless there was a meeting that I didn't know about?
Updated with public SslApplicationProtocol(string protocol) { } // assumes utf-8 and public override string ToString() { } // For debugger. utf-8 or a string representation of the raw bytes. E.g. "0x68 0x74 0x74 0x70 0x2f 0x31 0x2e 0x3
That was this comment
Comment where it was added
If I am mistaken then it could be added back, but this was added as far as I can tell 4 days ago and the review meeting only discussed the constructor with readonly memory and the consts for well known types?
@Drawaes Yes, there was another meeting to follow up on the open questions from the initial review, and the meeting notes from 22-Sept-2017 were the result of that.
Looking at it another way:
```c#
byte[] binaryProtocol; // = ...
var myProtocol = new SslApplicationProtocol(binaryProtocol);
var otherProtocol = new SslApplicationProtocol(myProtocol.ToString());
Console.WriteLine(myProtocol == otherProtocol);
```
Would potentially output false?
Whereas
```c#
var myProtocol = new SslApplicationProtocol(binaryProtocol);
var otherProtocol = new SslApplicationProtocol(myProtocol.Protocol);
Console.WriteLine(myProtocol == otherProtocol);
```
Would return true?
Except... it needs a ReadOnlyMemory<byte> .ctor, and you can't currently do that ☹️. Protocol is ReadOnlyMemory<byte>, and there is only a byte[] .ctor.
So the constructor <-> object symmetry for SslApplicationProtocol doesn't currently work. You cannot take the output properties of a SslApplicationProtocol and create another that is equal:
* ToString() output could be invalid when put into the string .ctor, as the .ctor interprets it as UTF-8 and ToString can output hex codes.
* Protocol, which won't do any interpretation, has no .ctor to give it to; so you'd need to allocate a new byte array, iterate over Protocol setting each byte individually, then give the newly allocated byte[] to the .ctor.
I agree that, as it stands, the API for the protocol is asymmetrical, which is problematic for any API.
@Priya91 One issue I have with the meeting was not that it took place; that is fine if you needed to sort stuff out. It's that the resulting API change wasn't mentioned other than by updating my top comment. Now, it's partially a failure of GH in that you don't get notified of changes to a comment, only of new comments, so I was blissfully unaware and could not understand why the API had been changed.
The point above does still stand, taking a byte[] in and surfacing a Memory seems asymmetrical. As well as the misgivings on the string.
You cannot take the output properties of a SslApplicationProtocol and create another that is equal.
Would the following not work?
```c#
var myProtocol = new SslApplicationProtocol(binaryProtocol);
var otherProtocol = new SslApplicationProtocol(myProtocol.Protocol.ToArray());
Console.WriteLine(myProtocol == otherProtocol)
```
I think the point was you had to copy it out to an array when it ends up being read-only memory in the end again. Why not just take read-only memory in?
It is a little weird to use different types for the constructor and property. In some version of the proposal the constructor took ReadOnlyMemory<byte>. I'm not sure where that got lost.
Why not just take read-only memory in?
```c#
byte[] bytes = ...
ReadOnlyMemory<byte> memory = bytes;
var myProtocol = new SslApplicationProtocol(memory);
bytes[0] = 20;
```
Point being we have to copy in the ctor no matter what we take. And then we return ReadOnlyMemory and we don't have to copy.
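A minimal sketch of the defensive copy being described; the type and field names here are hypothetical, not the actual implementation:
```c#
using System;

public readonly struct SslApplicationProtocolSketch
{
    private readonly ReadOnlyMemory<byte> _protocol;

    public SslApplicationProtocolSketch(byte[] protocol)
    {
        if (protocol == null || protocol.Length == 0)
            throw new ArgumentException("Protocol cannot be null or empty.", nameof(protocol));

        // Defensive copy: the caller still holds a mutable reference to the input array,
        // so snapshot it to keep this instance's value immutable.
        _protocol = new ReadOnlyMemory<byte>((byte[])protocol.Clone());
    }

    // Expose the private copy; ReadOnlyMemory<byte> gives no way to mutate it.
    public ReadOnlyMemory<byte> Protocol => _protocol;
}
```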
Okay I am obviously missing something (I will believe you on this). But for my own understanding why do you have to copy whatever you have in the constructor anyway?
From the way I understand it... again, I might be missing something: you have x number of ALPN structs with the opaque bytes.
When you come to send that to the client you need to prepare a list, so you do something like this (now, I wouldn't use LINQ, but it's example code; you could use an array equivalent or whatever):
```c#
var totalSize = listOfAlpn.Sum(i => i.Length + 1) + 2;
var outputBuffer = new byte[totalSize];
var span = new Span<byte>(outputBuffer);
span.WriteUshort(totalSize - 2);
span = span.Slice(2);
foreach (var alpn in listOfAlpn)
{
    span.WriteByte(alpn.Length);
    // slice again (this is why spans need a write FYI)
    alpn.ReadonlyMemory.Span.CopyTo(span);
    // one last slice
}
// done, you have an array with the buffer to put on the wire
```
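For reference, a runnable version of the sketch above using APIs that exist today (BinaryPrimitives for the length prefix); the helper name is made up and this is not how SslStream itself serializes the list:
```c#
using System;
using System.Buffers.Binary;
using System.Collections.Generic;
using System.Linq;

static class AlpnWireFormat
{
    // Serializes an ALPN protocol list as sent in the TLS extension (RFC 7301):
    // a 2-byte list length, then a 1-byte length prefix before each protocol's bytes.
    public static byte[] Serialize(IReadOnlyList<ReadOnlyMemory<byte>> protocols)
    {
        int listLength = protocols.Sum(p => p.Length + 1);
        byte[] buffer = new byte[listLength + 2];
        Span<byte> span = buffer;

        BinaryPrimitives.WriteUInt16BigEndian(span, (ushort)listLength);
        span = span.Slice(2);

        foreach (ReadOnlyMemory<byte> protocol in protocols)
        {
            span[0] = (byte)protocol.Length;
            protocol.Span.CopyTo(span.Slice(1));
            span = span.Slice(1 + protocol.Length);
        }

        return buffer;
    }
}
```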
I assumed we want SslApplicationProtocol to be immutable. Immutable types are better for representing logical constants.
Immutable types cannot store data that is mutable and shared. The byte array passed to the ctor is mutable and shared. To make it not shared, the ctor needs to make a private copy.
In your code example, the loop could produce different data in the result span every time the loop is run, even if the collection of alpn items is not ever changed. This is because the items in the collection could be changed through the byte array "back door". And we don't want that, do we?
And we don't want that, do we?
Might be fun :) Though more likely a cause of bugs.
Regardless, it should be a .ctor
```c#
public SslApplicationProtocol(ReadOnlyMemory<byte> protocol) { }
```
Or perhaps
```c#
public SslApplicationProtocol(ReadOnlySpan<byte> protocol) { }
```
As they correctly convey the intent for the use of the protocol parameter, as well as accepting other sources than just a BCL byte[].
@KrzysztofCwalina sure, we don't; however, I said really early on that mutability was an issue for all of the options. Here we are saying the ALPN value is special compared to, say, the list of ALPN values, the certificate, or anything else that could be changed in the options bag? Here is the quote from @stephentoub from when I said we should lock down the options bag (which, FYI, convinced me that it shouldn't be immutable, so I am not sure why ALPN now needs to be).
Why is it a problem to have the options bag be mutable? It's no different from anything else: you give this to the API, and the API expects it to be in a consistent state for the duration of the operation... if you change it while the operation is in flight, that's a bug, no different from any other case where you mutate something while a called method depends on it (e.g. the buffers passed to read/write). If you want to treat it as immutable and just never change it after it's created, you can. If you want to treat it as mutable such that you can modify it between uses, you can.
@Drawaes The options bag IList<SslApplicationProtocol> property can be changed, but the SslApplicationProtocol struct protocol value is immutable. For now, we are assuming SslApplicationProtocol use lies only in the options bag, but this type could be used outside of the options bag in other scenarios in user code.
@Priya91 I 100% understand the design, but it seems arbitrary that the ALPN value is protected by immutability using copies, but not the list, or anything else in SslStream. So then we have arbitrary immutability and inconsistency, and we have an asymmetrical API that exposes a ReadOnlyMemory on the Protocol property but no way to use this to make a new one?
Anyway, if it's the design you all want, so be it.
@Drawaes, we won't be exposing constant/pre-cooked options bags. We do want to have constant SslApplicationProtocols exposed.
@benaadams, I agree that ReadOnlySpan
Having said that, I am not pushing for any design here (in fact I think it's a bit hair splitting and we should just let the developer responsible for this feature, i.e. @Priya91, make the call). I was just commenting that the ctor must copy regardless of what we pass in, so the argument that we should use ReadOnlyMemory because it's readonly does not seem to be valid.
and we have an asymmetrical API that exposes a ReadOnlyMemory on the Protocol property but no way to use this to make a new one?
I don't understand this statement. The ReadOnlyMemory can be used to make a new SslApplicationProtocol object: new SslApplicationProtocol(readonlymemory.ToArray()).
The readonlymemory can be used to make a new SslApplicationProtocol object, new SslApplicationProtocol(readonlymemory.toarray()).
It can. However you are taking an immutable item ReadOnlyMemory<byte>, allocating to convert it into a mutable item byte[], and then passing the mutable item into the constructor so it can copy it again and turn it into an immutable ReadOnlyMemory<byte>.
What I'm suggesting is that taking the parameter as ReadOnlyMemory<byte> conveys the intent of the function correctly, in a new way that we couldn't do previously with arrays. It says the function that takes this does not intend to modify it, whereas a function taking an array suggests no such guarantee; the parameter may be changed out from under the callee after the call.
I'm not suggesting removing the byte[] .ctor, as that would probably just confuse people; just adding an additional .ctor that can directly take the property the type exposes and create another one, rather than having to convert it to something else or translate it first.
Also, you are probably never actually going to do
```c#
new SslApplicationProtocol(proto.Protocol);
```
or
```c#
new SslApplicationProtocol(proto.Protocol.ToArray());
```
As if you have the protocol, why would you create an identical one? So it's mostly that the API is not self-consistent that bothers me. However, it's a minor point.
However, I do think interpreting a string input as UTF-8 is problematic, and likewise without a string .ctor you could easily do
```c#
new SslApplicationProtocol(Encoding.ASCII.GetBytes("MyProtocol"));
new SslApplicationProtocol(Encoding.UTF8.GetBytes("MyProtocol"));
new SslApplicationProtocol(Encoding.BigEndianUnicode.GetBytes("MyProtocol"));
new SslApplicationProtocol(Encoding.Unicode.GetBytes("MyProtocol"));
new SslApplicationProtocol(Encoding.UTF7.GetBytes("MyProtocol"));
new SslApplicationProtocol(Encoding.UTF32.GetBytes("MyProtocol"));
new SslApplicationProtocol(Encoding.GetEncoding("iso-8859-5").GetBytes("MyProtocol"));
```
Where it is explicit what the encoding is, as string in .NET is a 16-bit format that can also quite happily store binary data, not a UTF-8 format.
I suppose in the same vein you could do
```c#
byte[] protocol = new byte[str.Length * 2];
int i = 0;
foreach (char ch in str)
{
    protocol[i] = (byte)(ch & 0xFF);
    protocol[i + 1] = (byte)((ch >> 8) & 0xFF);
    i += 2;
}
new SslApplicationProtocol(protocol);
```
To maintain string's natural format. I just think an API that takes a string and automatically assumes a different output encoding is asking for trouble; it's baking a conversion assumption into the API.
The issue with .NET APIs, as demonstrated by .NET Standard 2.0 (amazing as it is), is that they can never be changed once they are live.
A string overload can always be added later if there is demand; however, once there, it can never be taken away or have its behavior changed.
With a ReadOnlyMemory .ctor + https://github.com/dotnet/corefx/issues/24293 you'd be able to avoid the foreach char loop with the no-alloc
```c#
new SslApplicationProtocol(new ReadOnlyMemory<char>(str).AsBytes())
```
(ignoring the defensive copy in the .ctor)
@benaadams What are the use-cases for someone to do this?
I'm not suggesting to remove the byte[] .ctor; as that would probably just confuse people; just to add an additional .ctor that can directly take the property the type exposes and create another one, rather than having to convert it to something else; or translate it first.
Whether it is byte[] or Readonlymemory it has to be copied in the constructor, and a readonlymemory is just a byte[] that doesn't expose means to modify the underlying byte[]. So I don't see any extra value in readonlymemory in the constructor.
Regarding the string constructor overload, it's provided for the usability and convenience of using new ALPN values that are not currently constants, or custom pre-agreed values that clients and servers may use. It is not our goal to guarantee that an SslApplicationProtocol object created from another's ToString() will compare equal; that scenario is not a use case for this API. It is similar to how the Encoding APIs don't round-trip GetString and GetBytes for incompatible encoding values.
We should converge the discussion on this issue and get to consensus; it has been dragging on for quite a while :(.
@Tratcher
public SslApplicationProtocol(string protocol) { } // assumes utf-8
What does it mean "utf-8"?
We have UTF-16 encoding of strings, so that is what we're going to receive. We will have to convert it internally to UTF-8 string and store in "byte[]" that we are passing into.
ToString will interpret the underlying byte[] as a UTF-8 string. If that fails, it will be a string representation of the raw bytes, e.g. "0x68 0x74 0x74 0x70 0x2f 0x31 0x2e 0x31".
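A rough sketch of that ToString behaviour (not the actual implementation; the helper name is made up):
```c#
using System;
using System.Linq;
using System.Text;

static string ProtocolToString(ReadOnlyMemory<byte> protocol)
{
    byte[] bytes = protocol.ToArray();
    try
    {
        // Strict UTF-8 decode: throws instead of substituting replacement characters.
        return new UTF8Encoding(encoderShouldEmitUTF8Identifier: false, throwOnInvalidBytes: true)
            .GetString(bytes);
    }
    catch (DecoderFallbackException)
    {
        // Fall back to a hex dump of the raw bytes, e.g. "0x68 0x74 0x74 0x70".
        return string.Join(" ", bytes.Select(b => "0x" + b.ToString("x2")));
    }
}
```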
This is the latest ALPN proposal from the discussion on this thread. Posting that here for the api-review team.
```c#
namespace System.Net.Security
{
public partial class SslStream
{
public Task AuthenticateAsServerAsync(SslServerAuthenticationOptions options, CancellationToken cancellation) { }
public Task AuthenticateAsClientAsync(SslClientAuthenticationOptions options, CancellationToken cancellation) { }
public SslApplicationProtocol NegotiatedApplicationProtocol { get; }
}
public class SslServerAuthenticationOptions
{
public bool AllowRenegotiation { get; set; }
public X509Certificate ServerCertificate { get; set; }
public bool ClientCertificateRequired { get; set; }
public SslProtocols EnabledSslProtocols { get; set; }
public X509RevocationMode CheckCertificateRevocation { get; set; }
public IList<SslApplicationProtocol> ApplicationProtocols { get; set; }
public RemoteCertificateValidationCallback RemoteCertificateValidationCallback { get; set; }
public EncryptionPolicy EncryptionPolicy { get; set; }
}
public class SslClientAuthenticationOptions
{
public bool AllowRenegotiation { get; set; }
public string TargetHost { get; set; }
public X509CertificateCollection ClientCertificates { get; set; }
public LocalCertificateSelectionCallback LocalCertificateSelectionCallback { get; set; }
public SslProtocols EnabledSslProtocols { get; set; }
public X509RevocationMode CheckCertificateRevocation { get; set; }
public IList<SslApplicationProtocol> ApplicationProtocols { get; set; }
public RemoteCertificateValidationCallback RemoteCertificateValidationCallback { get; set; }
public EncryptionPolicy EncryptionPolicy { get; set; }
}
public struct SslApplicationProtocol : IEquatable<SslApplicationProtocol>
{
public static readonly SslApplicationProtocol Http2;
public static readonly SslApplicationProtocol Http11;
// Adding other IANA values, is left based on customer input.
public SslApplicationProtocol(byte[] protocol) { }
public SslApplicationProtocol(string protocol) { }
public ReadOnlyMemory<byte> Protocol { get; }
public bool Equals(SslApplicationProtocol other) { }
public override bool Equals(object obj) { }
public override int GetHashCode() { }
public override string ToString() { }
public static bool operator ==(SslApplicationProtocol left, SslApplicationProtocol right) { }
public static bool operator !=(SslApplicationProtocol left, SslApplicationProtocol right) { }
}
}
```
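As a consumption check on the shape above, a hedged usage sketch; the ApplicationProtocols property name is assumed from the discussion, the handler wrapper is hypothetical, and names could still change in review:
```c#
// Hypothetical client usage of the proposed surface.
async Task<bool> ConnectAsync(Stream networkStream)
{
    var options = new SslClientAuthenticationOptions
    {
        TargetHost = "example.com",
        ApplicationProtocols = new List<SslApplicationProtocol>
        {
            SslApplicationProtocol.Http2,
            SslApplicationProtocol.Http11,
        },
    };

    using (var ssl = new SslStream(networkStream, leaveInnerStreamOpen: false))
    {
        await ssl.AuthenticateAsClientAsync(options, CancellationToken.None);
        // True when the server picked h2 via ALPN; the default value means no ALPN result.
        return ssl.NegotiatedApplicationProtocol == SslApplicationProtocol.Http2;
    }
}
```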
We have UTF-16 encoding of strings, so that is what we're going to receive. We will have to convert it internally to UTF-8 string and store in "byte[]" that we are passing into.
And it's not conveyed in the API that this is a baked-in auto-conversion, e.g.
```c#
public SslApplicationProtocol(string protocol)
    : this(Encoding.UTF8.GetBytes(protocol))
{ }
```
At the moment it's at best a side comment in this issue.
The spec does not say it must be a UTF-8 string interpretation - it says it could be
Identification Sequence: The precise set of octet values that identifies the protocol. This could be the UTF-8 encoding [RFC3629] of the protocol name.
How common will it be that users will be making their own protocols and so need a convenience constructor that does an opinionated conversion from string to UTF-8 bytes?
What subset of those users will not understand the Encoding APIs and so need help to do
```c#
var protocol0 = new SslApplicationProtocol(Encoding.UTF8.GetBytes(protocol));
```
Equally, for anyone straying into non-ASCII with accents, will they go for the extra byte UTF-8 creates, for example
```c#
// 13 bytes in UTF-8, 12 in Windows-1250/ISO-8859-2
var protocol0 = new SslApplicationProtocol("můj protokol");
// 22 bytes in UTF-8, 21 in Windows-1252/ISO-8859-1
var protocol1 = new SslApplicationProtocol("doppelgängerprotokoll");
// 23 bytes in UTF-8, 12 in Windows-1251/ISO-8859-5
var protocol2 = new SslApplicationProtocol("мой протокол");
```
I don't think the string overload is a good one; it's trouble, especially with the pushback on using UTF-8 as a compact string representation from people using Latin character sets, as they use their localized 8-bit formats: https://github.com/dotnet/coreclr/issues/7083
All the current IANA ALPN Protocol Ids are ASCII, so there are no actual examples or precedent of UTF-8 in actual use:
"http/1.1"
"spdy/1"
"spdy/2"
"spdy/3"
"stun.turn"
"stun.nat-discovery"
"h2"
"h2c"
"webrtc"
"c-webrtc"
"ftp"
"imap"
"pop3"
"managesieve"
@karelz Yes I meant that the string would be converted to bytes using the utf-8 encoding.
This string overload is not a critical piece of the API and does not merit the amount of time already spent on it. It provides a slight improvement in usability over the other overload, that's all. Keep it or delete it, but either way it's time to move on.
Whether it is byte[] or Readonlymemory it has to be copied in the constructor, and a readonlymemory is just a byte[] that doesn't expose means to modify the underlying byte[]. So I don't see any extra value in readonlymemory in the constructor.
It's a good point. As you say, it needs to be copied anyway, so ReadOnlySpan<byte> would have more value than ReadOnlyMemory<byte>, as it can also be stack memory, and you can always get one from a ReadOnlyMemory<byte>.
Okay, it's up to the dev implementing it and MS, who has to support it in the end. Mostly I wanted an options bag and was moonshotting for a builder, so I am pretty happy I got what I wanted. I will make one last comment on the constructor, then someone at MS can make a decision, and they are welcome to update the top issue.
So my final word: as repeated often, API design is forever. Why take a string and UTF-8 it? UTF8 strings might be around the corner anyway. What's the likelihood of someone actually needing the overload right now? It can be added later if needed; you can always add, you can't take away.
Second, read-only memory: arrays implicitly cast to memory anyway, right? So make it read-only memory and add an array overload if you need it. If the concern is familiarity, well, if you are making your own ALPN protocol I would hope for a certain depth of knowledge anyway.
Anyway let me know what you want above.....
In case it's not clear: a single read-only memory constructor is my recommendation.
@benaadams The spec does not say it must be a UTF-8 string interpretation - it says it could be
Correct. The spec is intentionally unclear for unknown reasons; we believe that UTF-8 is strongly hinted. It is our interpretation of the spec.
BTW: We also discussed ASCII. If anyone needs UTF-16 or other encodings, they can use the byte[] overload directly.
Practically, we do not think anyone will use anything different than the IANA pre-defined ASCII/UTF-8 strings.
How common will it be that users will be making their own protocols
Users can use it for all the IANA protocols which we do not include as convenience constants. It is also the result of a long internal discussion about using only string as the underlying data instead of byte[] (which is more future proof). Adding the string overload is a convenience for usability.
This string overload is not a critical piece of the API and does not merit the amount of time already spent on it. ... either way it's time to move on.
Agreed 💯!
@Drawaes UTF8 strings might be around the corner anyway.
I don't think it is a good idea to make APIs dependent on each other. We don't know if/when UTF-8 strings get into the platform.
I view the convenience string API as way to pass ASCII (or UTF-8) string, as per spec guidance.
BTW: I will update the top post with a copy of, and link to, the latest version of the API from @Priya91.
The spec is intentionally unclear from unknown reasons, we believe that the UTF-8 is strongly hinted. It is our interpretation of the spec.
It isn't; it doesn't use an RFC 2119 term (which the spec also references):
Requirements Language
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in [RFC2119].
Its says "could" which is not one of these terms; rather than "RECOMMENDED" or even "MAY" - its more of an example usage - to clarify that even though its a binary field, text is probably a sensible option.
If you want to bake this conversion in, I would suggest being explicit in the API in some way, e.g.
```c#
new SslApplicationProtocol(string utf8Protocol)
```
I think the people specifying their own protocols can cope with the onerous extra step of using an explicit Encoder for the string, rather than having an ambiguous convenience string overload .ctor to make it easier.
Anyway; I've raised my objections, made suggestions - won't object to the final decision.
API review: Approved as latest proposal. We discussed the alternatives suggested above, but didn't feel we should take them.
Okay cool, looking forward to it being in corefx