VS2015, Update 3:
Steps to Reproduce:
public class Example
{
    public static void GetValue<TKey>(TKey key)
    {
        lock (key)
        {
            // some code...
        }
    }

    static void Main()
    {
        GetValue(1);
    }
}
Expected Behavior:
I would expect this to result in a compile-time error.
Actual Behavior:
The code compiles cleanly.
/cc @MadsTorgersen
Thanks for reporting this issue.
I just checked, and this bug repros on both the native compiler and Roslyn. I verified on the 5.0 compiler, but I'm guessing this bug goes back even further. Given how long it has been around, it seems likely that changing this to an error now would break existing customers. This looks like a good candidate for warning waves when we implement them.
@jaredpar I agree that changing behavior in a way that introduces errors on existing code is bad, and this should have a high bar. I do think this clears that bar. Any code that did lock on a value type in this manner just won't work.
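To make that concrete, here is a minimal sketch built on the same shape as the repro (the Increment method, thread count, and sleep are hypothetical, purely for illustration). Each lock statement boxes the int key into a fresh object, so the threads never actually exclude each other and the counter routinely comes up short:

using System;
using System.Threading;

public class LockOnValueTypeDemo
{
    static int counter;

    // Same shape as the repro: TKey has no reference type constraint,
    // so an int key is boxed at the lock statement.
    static void Increment<TKey>(TKey key)
    {
        lock (key)                  // boxes 'key' into a new object on every call
        {
            int read = counter;     // classic read-modify-write race
            Thread.Sleep(1);        // widen the race window for the demo
            counter = read + 1;
        }
    }

    static void Main()
    {
        var threads = new Thread[10];
        for (int i = 0; i < threads.Length; i++)
        {
            threads[i] = new Thread(() =>
            {
                for (int j = 0; j < 100; j++)
                    Increment(1);   // every caller "locks" on the same key value
            });
            threads[i].Start();
        }
        foreach (var t in threads) t.Join();

        // A real lock would always print 1000; this typically prints less,
        // because the lock never serializes the threads.
        Console.WriteLine(counter);
    }
}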
There are two cases here where customers could be affected:
TLDR: Compat is one of the most important goals of the compiler and at times it can force us to keep bad behavior around
Any code that did lock on a value type in this manner just won't work.
Agree. However this alone does not determine whether or not we fix a shipped bug. It certainly factors into the decision, quite heavily, but there are a range of items we consider here.
There is no formal list that we go down that I could point to for guidance. But offhand, I would say the major factors we consider when looking at compat issues are the following:
What is the impact for affected customers if we make the fix?
When customers run into this bug what is the effect on their code? Does it cause ...
Glancing at the list, it may be easy to look at (1) and say ...
Correct code doesn't compile? That's awful and breaks customer expectations!
While it's certainly awful and unfortunate that we didn't accept correct code, it's also one of the easiest bugs to fix. Fixing code which doesn't compile means that I have virtually no chance of breaking existing customers. It makes the fix / won't fix decision very easy; it just comes down to the risk of the actual code change.
On the other end of the spectrum in terms of impact is (4). In this case it's possible that many customers out there actively hit this bug but their programs are functioning correctly. Either because they only use the code with the correct types / values or they are simply getting lucky due to their specific circumstances. In either case though if we fix the bug then we run the risk of taking code which is running fine today (rightly or wrongly) and breaking it in some way.
This particular bug falls into category (4).
What are the mitigations for broken customers?
When we fix this bug what will the impact on affected code be for customers?
These are listed in roughly the order of pain to the customer (least to greatest). The more action the customer needs to take the harder it is to justify fixing the bug.
Category (1) has virtually no consequences. There is little chance customers depend on correct code not compiling. Anything under (2), though, has trade-offs to be considered.
There are times where (2.i) can be acceptable. For example, if we know that overload resolution will only change in a very small number of cases, then we can have confidence it won't impact customers. It's very likely their code will continue compiling and executing. Even if wrong, there is the possibility that it picks a method that has the same effect (overloads tend to have similar functionality). This makes it easier to justify a fix.
Anything in (2.ii) or (2.iii), though, is harder to accept. It means that code which customers were using, compiling and running before no longer compiles. This forces some action on their part. It could be as simple as suppressing a warning or a one-line fix, or as complicated as forcing a significant rethinking of their code base. This necessary action can be an adoption blocker for new versions of the compiler or Visual Studio. It's hard to upgrade if the code stops compiling 😞
This particular bug falls into category (4). It's possible that the fix forces a refactoring of the code in question.
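For reference, here is what that mitigation would typically look like for affected code. Both variants are illustrative sketches rather than a recommendation from the thread: either constrain the type parameter so only reference types can be locked on, or lock on a dedicated gate object keyed by the value (the Gates dictionary and GetValueGated name below are made up for the example).

using System.Collections.Concurrent;

public class Example
{
    // Option 1: constrain TKey so only reference types can be passed;
    // GetValue(1) then fails to compile and the caller must be updated.
    public static void GetValue<TKey>(TKey key) where TKey : class
    {
        lock (key)
        {
            // some code...
        }
    }

    // Option 2: keep accepting value types, but lock on a dedicated object per key
    // rather than on the key itself. Note the dictionary grows with distinct keys.
    static readonly ConcurrentDictionary<object, object> Gates =
        new ConcurrentDictionary<object, object>();

    public static void GetValueGated<TKey>(TKey key)
    {
        object gate = Gates.GetOrAdd(key, _ => new object());
        lock (gate)
        {
            // some code...
        }
    }
}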
How long has the bug existed?
Essentially, when was the bug introduced? The longer a bug has existed in the compiler, the higher the chance a customer has taken a dependency on its behavior (directly or indirectly).
This is probably one of the biggest factors that is weighed. The longer a bug has existed, the greater the chance a customer has taken a dependency on it, and hence the larger we have to assume the set of affected customers to be.
Even trivial problems with trivial workarounds become virtually unfixable if the bug is widespread and has existed since 1.0. The number of potentially impacted customers is simply too large and the potential for major fallout too great.
On the other hand, we're actually quite aggressive when it comes to fixing bugs that have existed for only one or two updates. It's easy to justify fixing a bug of this nature if it has only existed for, say, three months.
This particular bug, though, has existed for at least five years, very likely since the 2.0 version of the compiler (I haven't verified that yet). That makes it a virtual certainty that customers have taken a dependency on this behavior. That, combined with the mitigation (change your code), is why I feel we can't fix this issue. The potential negative impact is too great.
Note: I don't want this post coming off as being happy about the decisions we have to make here. It's actually downright maddening at times to have to make these sorts of trade offs. This frustration is in large part why we are so eager to implement warning waves. It's a path for allowing us to fix egregious errors like this in future releases in a safe way.
What if, in this specialized case where there is no type constraint, the compiler could produce code that boxes the variable (if it's a value type) to produce a reference suitable for synchronization?
And of course also produce a warning.
@dustinhorne That's what currently happens: the value type is boxed every time it's used in the lock. The problem is that this creates a separate object every time, so it's not actually suitable for synchronization.
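For anyone following along, this is roughly what the lock statement lowers to when TKey is unconstrained (a simplified sketch; the exact code the compiler emits varies by version):

using System.Threading;

public class Example
{
    public static void GetValue<TKey>(TKey key)
    {
        // Approximation of what 'lock (key)' compiles down to:
        object obj = (object)key;   // for a value-type TKey this boxes, creating a NEW object each time
        bool lockTaken = false;
        try
        {
            Monitor.Enter(obj, ref lockTaken);
            // some code...
        }
        finally
        {
            if (lockTaken) Monitor.Exit(obj);
        }
        // Since every execution locks on a fresh box, no two callers ever contend on the same object.
    }
}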
Any code that did lock on a value type in this manner just won't work.
Note that this isn't strictly correct. Because a freshly boxed object is used, the lock statement won't block, which means the code becomes roughly equivalent to the same code without a lock statement (ignoring the memory fence). It's entirely plausible that users could hit this scenario and end up with code which behaves entirely as expected in all cases, or close to it: a race condition which rarely (or never) leads to incorrect behavior in production.