I have opened this issue as a place to discuss how we should adapt the rules for implicit resolution.
The current spec says:
First, eligible are all identifiers x that can be accessed at the point of the method call without a prefix and that denote an implicit definition or an implicit parameter. An eligible identifier may thus be a local name, or a member of an enclosing template, or it may have been made accessible without a prefix through an import clause. If there are no eligible identifiers under this rule, then, second, eligible are also all implicit members of some object that belongs to the implicit scope of the implicit parameter's type, T.
This gives a priority: search the context first, and only if that fails, search the implicit scope of the type. Do we want to keep that? It prevents us from defining in a context a "fallback" rule that is less specific than rules in the implicit scope of their types, since the "fallback" would be selected before the other implicits are even considered.
The current spec includes the prefix p of a type reference p.T in the implicit scope of that type. This holds even for the package prefix, i.e. p could be a package with a top-level implicit in it. Referring to p.T would bring that implicit in scope. Do we want to keep that? Or is it too surprising / abusable as a mechanism?
Do we want to change how implicits are ranked for disambiguation? If yes, how?
/cc @milessabin
See #5887. This applies the following changes to implicit resolution:
(1) enables a simple solution to the local coherence / local consistency problem #4234 #2046 #4153.
Consider:
abstract class A {
  def show: String
}
class B extends A {
  def show = "B"
}
class C extends A {
  def show = "C"
}
object M {
  def f given B, C : String = {
    implied for A = the[B]
    the[A].show
  }
}
f has two givens for B and C which both extend A. So inferring an A inside f is ambiguous.
However, we can heal the ambiguity by defining a nested implied instance for A which takes precedence over the two parameters.
I believe this is quite workable in practice, and obviates the need for a more complex solution.
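For reference, here is a minimal sketch of the same healing pattern in current Scala 3 syntax, using `given`/`using`/`summon` in place of the older `implied`/`given`/`the` forms shown above (the names are illustrative):

```scala
abstract class A { def show: String }
class B extends A { def show = "B" }
class C extends A { def show = "C" }

object M {
  // Two using-parameters of types B and C both extend A, so summoning
  // an A directly would be ambiguous; the nested local given heals it
  // because more deeply nested givens take precedence.
  def f(using b: B, c: C): String = {
    given A = b
    summon[A].show
  }
}
```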
My tendency is to start with #5887 and refine it as follows:
- The scope corresponding to top-level imports(*) is special. If an implicit is searched for type T in this scope, we _also_ consider all implicits in the implied scope of type T.
- This replaces the previous sequence of contextual search, followed by type scope search.

That way, it would be possible to import low-priority fallback implicits that complement what is found in the type scope.

(*) Note that this is not necessarily the outermost scope: There could be implicits defined directly in enclosing packages.
What's the motivation for no longer searching package prefixes?
A (I believe) legitimate use case is in mixed (Java/Scala) projects, where for whatever reason rewriting a Java class to Scala is infeasible, but one nevertheless wishes to put values in its implicit scope. Especially with the demise of package objects and the rise of top-level definitions (yay!), this is a useful technique.
@hrhino The reason for no longer searching package prefixes is that implicits put into scope that way tend to be very surprising. A common use of a package-level implicit is as a global definition for a whole project, but it's generally intended to stay private to that project. Most people are very surprised to learn that the implicit leaks out with any type they define in the package.
The meta goal of many of the implicits related changes is to make it clearer to the user of a library where implicits are coming from.
Something which might be worth considering reg. local implicits being fallbacks to global ones: we already allow implicit (implied?) matches. If we added a similar mechanism that only searched the type scope, then it'd be possible to explicitly define a local fallback to a type-scope instance. That way, we also might end up with simpler resolution rules overall.
This hypothetical mechanism might look as follows:
inline implied localFallback: T = implied[type] match {
  case t: T => t
  case _    => default: T
}
Implicit ranking should consider the implicit arguments of implicit methods when using specificity to order candidate implicits.
The relative ordering of a pair of candidate implicits should be computed as follows,
This change makes combining implicits from multiple implicit scopes a great deal more straightforward, since we can now rely on more than just the result type of the implicit definition to guide implicit selection. With this change the following works as expected,
class Show[T](val i: Int)

object Show {
  def apply[T](implicit st: Show[T]): Int = st.i
  implicit val showInt: Show[Int] = new Show[Int](0)
  implicit def fallback[T]: Show[T] = new Show[T](1)
}

class Generic
object Generic {
  implicit val gen: Generic = new Generic
  implicit def showGen[T](implicit gen: Generic): Show[T] = new Show[T](2)
}

object Test extends App {
  assert(Show[Int] == 0)
  assert(Show[String] == 1)
  assert(Show[Generic] == 2) // showGen beats fallback due to longer argument list
}
In Scala 2 and Dotty the specificity of an implicit is computed only from its result type. This is partly responsible for a similar bug in each (Scala 2, Dotty) but, more importantly, is both against the spirit of the spec (which aspires to consistency with normal overload resolution) and contrary to most people's expectations: everyone I've asked expects that "more demanding" implicits should be preferred over "less demanding" implicits if they can be satisfied; many have been surprised that this isn't how scalac _already_ behaves.
There's a pull request making this change to Scala 2 here, which gives some additional examples. It should be fairly straightforward to port to Dotty. Notably, the only thing in the Scala 2 community build that broke was shapeless, and the corresponding fix was very simple.
@odersky
- The scope corresponding to top-level imports(*) is special. If an implicit is searched for type T in this scope, we _also_ consider all implicits in the implied scope of type T.
- This replaces the previous sequence of contextual search, followed by type scope search.

That way, it would be possible to import low-priority fallback implicits that complement what is found in the type scope.
I'm very much in favour of this.
However, I don't think it's quite enough to deal with the imported low-priority fallback problem. The issue I see is that the problem is only solved if the import is at the _top-level_ — any implicit imported at an inner scope will _still_ override more specific implicits in the implicit scope. For example, in this case,
trait Show[T] {
  def show(t: T): String
}

class Foo
object Foo {
  implicit def showFoo: Show[Foo] = _ => "Pick me!"
}

object ThirdParty {
  implicit def showFallback[T]: Show[T] = _ => "Only if necessary"
}
If ThirdParty is imported at the top level then things will work as expected,
import ThirdParty._

object Test {
  implicitly[Show[Foo]].show(new Foo) // == "Pick me!"
}
because both showFoo and showFallback are found when searching the top-level scope, and showFoo beats showFallback on specificity. However, if the import is in an inner scope then showFallback will be selected,
object Test {
  import ThirdParty._
  implicitly[Show[Foo]].show(new Foo) // == "Only if necessary"
}
I don't think it's quite good enough to say "move the import to the top-level" in this sort of case, because it's reasonable to want to limit the scope in which such fallback implicits are visible.
We can fix this with a variant of import (import implicit), which imports implicits with the priority they would have if they were imported at the top-level. We could then write,
object Test {
  import implicit ThirdParty._ // implicits imported with top-level priority
  implicitly[Show[Foo]].show(new Foo) // == "Pick me!" // showFoo wins again
}
and get the desired result.
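For comparison, here is a hedged sketch of how this prioritisation is commonly achieved today without any language change: the fallback is declared in a low-priority parent trait of the type class companion, so instances defined in the companion itself (or in the companion of the target type) win. The names `LowPriorityShow` and `Demo` are illustrative:

```scala
trait Show[T] { def show(t: T): String }

// Implicits inherited from a parent trait of the companion rank below
// implicits defined directly in the companion, so showFallback only
// applies when nothing more specific is available.
trait LowPriorityShow {
  implicit def showFallback[T]: Show[T] = _ => "Only if necessary"
}
object Show extends LowPriorityShow {
  implicit val showFoo: Show[Foo] = _ => "Pick me!"
}

class Foo

object Demo {
  def pick: String = implicitly[Show[Foo]].show(new Foo)
}
```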
There's a prototype of this against Scala 2 here. In addition to changing the priority of imported implicits, the prototype imports _only_ implicits. I think that's actually quite desirable as well, but it's orthogonal to this comment.
@odersky
- nested implicits always take precedence over outer ones
- no more shadowing checks
- package prefixes are not considered.
Big thumbs up to all of these.
Am I right thinking that this means that the sets of available implicits as you ascend from inner to outer scopes are nested? If that's the case then I think we have a pretty good story for local consistency _and_ scope level instance caching.
I often run into issues where I have to add in lots of extra typing or extra code to explicitly disambiguate implicits.
def withOrd[O] given(O: Ord[O]) = { ... }
Inside the body of withOrd, the only instance of Ord that will be found without extra help is O. To get other instances, you have to very carefully tell it what to do. This is compounded when there's an enclosing implicit-scoped Ord.
The behaviour I'd prefer (although I don't know if it is viable) is to first look at the things in the closest scope and attempt to typecheck with that instance. If the typechecking fails, back off and look in the wider scope.
@drdozer you might want to experiment with using path-dependent types if the typical resolution does not work for you. For an example:
class Ord {
  type T
}

def withOrd given (O: Ord): O.T = ???
This way, withOrd will always lock on the lexically closest Ord. You can use blocks with explicit instances of Ord to select a specific instance for some scope. Note that it's still possible to search for Ord with a specific T by using refinements:
def withIntOrd given (o: Ord { type T = Int }): Int = 0
val a = {
  implied for (Ord { type T = Int }) = new Ord { ... }
  {
    implied for (Ord { type T = String }) = new Ord { ... }
    (withOrd, withIntOrd) // (String, Int)
  }
}
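A hedged translation of this pattern into current Scala 3 syntax (the `given` forms and all names below are illustrative). withOrd locks onto the lexically closest given, while withIntOrd still finds the outer, Int-refined instance via its refinement:

```scala
abstract class Ord {
  type T
  def value: T
}

// Locks onto the lexically closest Ord in scope; the result type is
// path-dependent on the instance actually found.
def withOrd(using o: Ord): o.T = o.value

// Searches for an Ord with a specific T by using a refinement type.
def withIntOrd(using o: Ord { type T = Int }): Int = o.value

object Demo {
  val a: (String, Int) = {
    given outer: (Ord { type T = Int }) =
      new Ord { type T = Int; def value = 1 }
    {
      // Nested given: preferred by withOrd, but its refinement does not
      // match Ord { type T = Int }, so withIntOrd still selects outer.
      given inner: (Ord { type T = String }) =
        new Ord { type T = String; def value = "s" }
      (withOrd, withIntOrd)
    }
  }
}
```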
The behaviour I'd prefer (although I don't know if it is viable) is to first look at the things in the closest scope and _attempt to typecheck_ with that instance. If the typechecking fails, back off and look in the wider scope.
I believe that's what #5887 implements if you mean "typecheck the implicit candidate with the expected type".
Am I right thinking that this means that the sets of available implicits as you ascend from inner to outer scopes are nested? If that's the case then I think we have a pretty good story for local consistency _and_ scope level instance caching.
Yes, they are nested.
I'm very much in favour of this.
However, I don't think it's quite enough to deal with the imported low-priority fallback problem. The issue I see is that the problem is only solved if the import is at the _top-level_ — any implicit imported at an inner scope will _still_ override more specific implicits in the implicit scope. For example, in this case,
I agree that it is restrictive. However, if we change that, we lose the nice nesting properties of implicit contexts that help caching. So I'd like to wait and see for the moment whether we hit use cases that can't be worked around that way.
About specificity: It could be good to change the rules. However, we cannot appeal to analogy with overloading. In fact, if anything, overloading behaves in the opposite way to what is proposed.
Example:
def foo[T](implicit gen: Generic): Show[T] = new Show[T](2)
def foo[T]: Show[T] = new Show[T](1)
println(foo[Int])
Here, both scalac and Dotty will say "ambiguous". Here's Dotty's error message:
23 | println(foo[Int])
| ^^^
|Ambiguous overload. The overloaded alternatives of method foo in object Test with types
| [T] => Show[T]
| [T](implicit gen: Generic): Show[T]
|both match type arguments [Int] and expected type Any
If we change the implicit gen parameter to a normal parameter, both scalac and Dotty will pick the second version of foo. That means _shorter_ parameter list wins, not longer one.
@odersky
I agree that it is restrictive. However, if we change that we lose the nice nesting properties of implicit contexts that helps caching.
I've pretty much convinced myself that this isn't the case. Can you show me an example where an import implicit would break nesting?
@odersky
Yeah! It works now as I'd expect :) Wonderful! That will make it far easier to define instances by composing other instances.
I believe that's what #5887 implements if you mean "typecheck the implicit candidate with the expected type".
@milessabin
If an import implicit provides a fallback with a lower priority than implicits in the type scope, we can't simply go inside out to find the possible implicits: inside-out goes from highest to lowest priority. Or am I misunderstanding what you propose?
My proposal (ie. the one I implemented in the scalac PR) is that import implicit imports as if _at the outer level_ from the point of view of the scope being imported into.
My proposal (ie. the one I implemented in the scalac PR) is that import implicit imports as if _at the outer level_ from the point of view of the scope being imported into.
So the lexical scoping is not the same as the conceptual scoping here?
So the lexical scoping is not the same as the conceptual scoping here?
I'd say: the priority ordering is different from the lexical nesting. We have a precedent for that wrt specificity.