This code was working for me before but is broken in the latest:
```ts
class A {
  forEach(callback: (a: number) => void): void
  forEach(callback: (a: number, b: number) => void): void
  forEach(callback: (a: number, b?: number) => void): void {
  }
}
const a = new A();
a.forEach((a, b) => { // Parameter 'a' and 'b' implicitly has an 'any' type
})
```
I can confirm with 2.1.0-dev.20161029. It seems like it tries to match against only the first overload.
Removing the first overload fixes it. Changing the callback to (a) => {} also fixes it.
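One workaround under the current behavior is to put the longest overload first, so that contextual typing (which resolves against the first overload) sees both parameters. This is a sketch of the reordering, not an official fix:

```ts
class A {
  // Longest overload first: contextual typing resolves against the first
  // overload, so a two-parameter callback gets types for both parameters.
  forEach(callback: (a: number, b: number) => void): void;
  forEach(callback: (a: number) => void): void;
  forEach(callback: (a: number, b?: number) => void): void {
    callback(1, 2);
  }
}

const results: number[] = [];
new A().forEach((a, b) => { // a and b are now inferred as number
  results.push(a + b);
});
```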
Probably caused by https://github.com/Microsoft/TypeScript/pull/11905
This is indeed caused by the change in #11905. We should reconsider our fix there.
This is actually an improvement: b was (mostly) an unreported implicit any before. In 2.0 (or earlier, all the way back to 1.1), try compiling b.blah() inside the body of the lambda. You won't get an error because b is implicitly any.
Note that the language service still shows b as number, so there's an additional bug in the language service. The implicit anys are a limitation of the current compiler design, which fixes types while resolving the _first_ overload, which in this case is too short. Previously the types were incorrect; now they're not available at all. The correct fix is to reverse the order of the overloads or to get rid of the overloads entirely. They are technically redundant but are probably useful as documentation.
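The other suggested fix, dropping the redundant overloads entirely, might look like this (a sketch, assuming the overloads exist only as documentation):

```ts
class A {
  // One signature with an optional second parameter replaces the two
  // overloads; callbacks of arity 1 or 2 both match, and both
  // parameters get contextual types.
  forEach(callback: (a: number, b?: number) => void): void {
    callback(1, 2);
  }
}

const seen: Array<number | undefined> = [];
new A().forEach((a, b) => { // a: number, b: number | undefined
  seen.push(a, b);
});
```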
> which fixes types while resolving the first overload, which in this case is too short
@sandersn Why do you need to fix the type on the first overload? And not just choose the overload that "fits"? In my example, the second overload should clearly be used for 2 argument callbacks.
I'm pretty sure, though, that the previous types were not any, at least not in my real-world application. I'm also getting the error when defining a test with the Mocha testing framework (this one I'm pretty sure was not typed as any):
```ts
it(testName, (done) => { // error 'done' is implicitly typed as any
})
```
@andy-ms actually fixed jasmine.d.ts this morning to work with the updated compiler. The core limitation, since TypeScript 1.1, is that the compiler will only assign types to any expression _once_. This gives, unfortunately, extremely subtle semantics.
Normally this is fine, but overload resolution changes the contextual type as it tries each overload in order, so you can observe the effect of this limitation. I don't think there's a reasonable way to check overloads besides checking the arguments against each overload in turn. For example, these single-argument functions:
```ts
declare function f(cb: (x: number, y: number, z: number) => string): void;
declare function f(cb: (x: string, y: string) => string): void;
f(x => x);
```
Even when there's something that clearly makes an overload unsuitable for an argument, it may be nested deep in the argument or parameter type. So I think it's better to use normal type-checking machinery when possible: just ask whether all arguments are assignable to their respective parameters.
In the future, we might go back and rewrite overload resolution but right now we'd prefer to correctly report implicit-any errors from our limited algorithm instead of silently hiding them.
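The "check the arguments against each overload in turn" idea above can be sketched as a toy model, using callback arity as a stand-in for full assignability. This is a hypothetical illustration of the selection order, not actual compiler code:

```ts
// Toy model: each overload of `f` is described by the arity of the
// callback parameter it expects. A lambda declaring fewer parameters
// than the target callback type is still assignable in TypeScript, so
// the first overload that is "long enough" wins.
function pickOverload(expectedArities: number[], lambdaArity: number): number {
  return expectedArities.findIndex(expected => lambdaArity <= expected);
}

// For `f` above: overload 0 expects a 3-parameter callback, overload 1
// a 2-parameter callback. The 1-parameter lambda `x => x` matches
// overload 0 first, so `x` is contextually typed from that overload.
const chosen = pickOverload([3, 2], 1);
```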
Thank you for the detailed comments. They're really enlightening.
I'm thinking about doing overload inference better by choosing the overload by argument length.
The reasoning behind this is that at runtime we cannot tell two functions apart from their signatures other than by their length properties, as pointed out here.
For example:
```ts
interface Overload {
  (handler1: (req1: string) => void): void;
  (handler2: (req2: number, res2: number) => void): void;
}
declare var use: Overload;
use((req: string, res) => {});
```
the argument function will always be contextually inferred against handler2, which has a length of 2. And at runtime, an Overload implementation can only inspect the function through its length property, so even if the user manually annotates req as string, it gets a number at runtime.
Also, the parameter annotation of string is incompatible with number; it is desirable that the compiler detect this and report an error.
(Theoretically users can inspect a function's type by inspecting its source string; Angular 1 actually did this. But that isn't common in JS.)
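The runtime-inspection point above can be sketched like this. The paramNames helper is a hypothetical, simplified version of the kind of source-string parsing Angular 1 did, not a real API:

```ts
// At runtime, `length` is the only cheap signal of a function's shape:
const oneArg = (req: string) => {};
const twoArgs = (req: number, res: number) => {};
// oneArg.length === 1, twoArgs.length === 2

// Angular 1 went further and parsed parameter names out of the source
// string (hypothetical simplified sketch, not Angular's actual code):
function paramNames(fn: Function): string[] {
  const match = fn.toString().match(/\(([^)]*)\)/);
  if (!match) return [];
  return match[1]
    .split(",")
    .map(s => s.trim())
    .filter(s => s.length > 0);
}
```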
Well, the _top-level_ overload resolution does check function arity. But in your example, both signatures have arity 1, and the parameters themselves are functions that differ in arity. And this difference in arity could occur at any level, e.g.:
```ts
class Foo1 {
  foo: Bar1;
}
class Foo2 {
  foo: Bar2;
}
class Bar1 {
  nested: (req1: string) => void;
}
class Bar2 {
  nested: (req2: number, res2: number) => void;
}
interface Overload {
  (handler1: Foo1): void;
  (handler2: Foo2): void;
}
declare let use: Overload;
use({
  foo: {
    nested: (req2, res2) => {
      console.log("req2: string, oh no!");
    }
  }
})
```
An instance of Foo2 looks like it could match handler1 until we reach Bar2. That level of nesting is why the check should rely on normal assignability rules to check each overload. But checking assignability requires the compiler to know the types of each parameter and argument. And asking for the type of the argument will fix the types for later overloads.
So I think we need to remove the limitation that once checked, types are fixed, and let type checking operate in a non-permanent fashion.
@sandersn, why is it not merged to (req2: string | number, res2?: number)?
@ArtemAvramenko this merging is too permissive in many cases. If the two overload signatures are (s: string, t: string) and (n: number, m: number), then merging them into two string | number parameters incorrectly allows me to pass a string and a number.
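A small sketch of why the merge is unsound (hypothetical function names; the overloads mirror the signatures above):

```ts
// The overloads reject mixed argument types:
function pair(s: string, t: string): string;
function pair(n: number, m: number): string;
function pair(a: string | number, b: string | number): string {
  return `${typeof a},${typeof b}`;
}

// pair("a", 1); // compile error: no overload accepts (string, number)

// The merged signature cannot express the correlation between the two
// parameters, so it would silently accept the mixed call:
declare function merged(s: string | number, t: string | number): string;
// merged("a", 1); // type-checks, even though no overload allows it

const homogeneous = pair("a", "b");
```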