TypeScript: Optional (?) in function signature interferes with inference of completely unrelated type

Created on 2 Mar 2020  ·  2 comments  ·  Source: microsoft/TypeScript

TypeScript Version: 3.8.3


Search Terms:
wrong inferred generic type infer operator optional argument signature interference

Expected behavior:

In the example below, CType should be resolved to number.

Actual behavior:

CType was resolved to number | boolean | undefined.

Interestingly, if I remove the optional marker ? from the r parameter in the signature of either SomeAbstractClass.foo or SomeAbstractClass.bar, CType is inferred correctly, even though such a change seems like it should play no role at all in the inference of CType (see the sketch after the Code section below).


Related Issues:

Code

declare class SomeBaseClass {
  set<K extends keyof this>(key: K, value: this[K]): this[K];
}

abstract class SomeAbstractClass<C, M, R> extends SomeBaseClass {
  foo!: (r?: R) => void;
  bar!: (r?: any) => void;
  abstract baz(c: C): Promise<M>;
}

class SomeClass extends SomeAbstractClass<number, string, boolean> {
  async baz(context: number): Promise<string> {
    return `${context}`;
  }
}

type CType<T> = T extends SomeAbstractClass<infer C, any, any> ? C : never;
type MType<T> = T extends SomeAbstractClass<any, infer M, any> ? M : never;
type RType<T> = T extends SomeAbstractClass<any, any, infer R> ? R : never;

type SomeClassC = CType<SomeClass>; // = number | boolean | undefined βœ— (expected number)
type SomeClassM = MType<SomeClass>; // = string βœ“
type SomeClassR = RType<SomeClass>; // = boolean βœ“
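
As a sketch of the workaround described above (the Fixed names are hypothetical, added here for illustration): removing the optional marker from foo restores the expected inference, and per the report the same holds for bar.

abstract class SomeAbstractClassFixed<C, M, R> extends SomeBaseClass {
  foo!: (r: R) => void; // optional marker '?' removed
  bar!: (r?: any) => void;
  abstract baz(c: C): Promise<M>;
}

class SomeClassFixed extends SomeAbstractClassFixed<number, string, boolean> {
  async baz(context: number): Promise<string> {
    return `${context}`;
  }
}

type CTypeFixed<T> = T extends SomeAbstractClassFixed<infer C, any, any> ? C : never;
type SomeClassCFixed = CTypeFixed<SomeClassFixed>; // = number ✓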

Output

"use strict";
class SomeAbstractClass extends SomeBaseClass {
}
class SomeClass extends SomeAbstractClass {
    async baz(context) {
        return `${context}`;
    }
}

Compiler Options

{
  "compilerOptions": {
    "noImplicitAny": true,
    "strictNullChecks": true,
    "strictFunctionTypes": true,
    "strictPropertyInitialization": true,
    "strictBindCallApply": true,
    "noImplicitThis": true,
    "noImplicitReturns": true,
    "useDefineForClassFields": false,
    "alwaysStrict": true,
    "allowUnreachableCode": false,
    "allowUnusedLabels": false,
    "downlevelIteration": false,
    "noEmitHelpers": false,
    "noLib": false,
    "noStrictGenericChecks": false,
    "noUnusedLocals": false,
    "noUnusedParameters": false,
    "esModuleInterop": true,
    "preserveConstEnums": false,
    "removeComments": false,
    "skipLibCheck": false,
    "checkJs": false,
    "allowJs": false,
    "declaration": true,
    "experimentalDecorators": false,
    "emitDecoratorMetadata": false,
    "target": "ES2017",
    "module": "ESNext"
  }
}

Playground Link: Provided

Bug Fix Available

All 2 comments

What is even happening here...

interface BaseType<T1, T2>  {
  set<K extends keyof this>(key: K, value: this[K]): this[K];

  useT1(c: T1): void;
  useT2(r?: T2): void;
  unrelatedButSomehowRelevant(r?: any): void;
}

interface InheritedType extends BaseType<number, boolean> {
  // This declaration shouldn't do anything...
  useT1(_: number): void
}

// Structural expansion of InheritedType
interface StructuralVersion  {
  set<K extends keyof this>(key: K, value: this[K]): this[K];

  useT1(c: number): void;
  useT2(r?: boolean): void;
  unrelatedButSomehowRelevant(r?: any): void;
}

type GetT1<T> = T extends BaseType<infer U, any> ? U : never;

type T1_of_InheritedType = GetT1<InheritedType>; // = number | boolean | undefined βœ— (expected number)

// ✗ expected number
type S2 = GetT1<StructuralVersion>; // = number | boolean | "useT1" | "useT2" | "unrelatedButSomehowRelevant" | undefined
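
A further sketch, untested but consistent with the explanation in the next comment: if the set member (the only signature mentioning this[K]) is dropped, there is nothing to expand into a union of member types, and inference should only see the intended targets. The NoSet names are hypothetical.

interface BaseTypeNoSet<T1, T2> {
  useT1(c: T1): void;
  useT2(r?: T2): void;
  unrelatedButSomehowRelevant(r?: any): void;
}

interface InheritedTypeNoSet extends BaseTypeNoSet<number, boolean> {
  useT1(_: number): void;
}

type GetT1NoSet<T> = T extends BaseTypeNoSet<infer U, any> ? U : never;
type T1_NoSet = GetT1NoSet<InheritedTypeNoSet>; // expected: number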

This is definitely a strange one, but it's an easy fix.

Here's what's happening: During inference we obtain base signatures (using getBaseSignature) in which we substitute constraints for type parameters declared in the signatures. This is a fine thing to do for an inference source signature, but not so much for an inference target signature because it may, in rare cases, create unintended new inference targets. That's what's happening in the examples above. Specifically, in an inference target, the this[K] type in the set method becomes BaseType<T1, T2>[keyof BaseType<T1, T2>] which becomes a union of the function types of the methods in the class. We then proceed to infer from each method type in the source to each method type in the target, and we get meaningless results.

The simple fix is to use the erased signature in the target. In an erased signature we substitute any for type parameters declared in the signature. That causes this[K] to become any, meaning we're not creating unintended new inference targets.
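
To make the mechanism concrete, here is a rough sketch of the target's set signature under the two strategies, written against the BaseType interface from the previous comment. These interfaces are illustrative only, not the checker's internal representation.

// Constraint substitution (getBaseSignature): K is replaced by its
// constraint, keyof BaseType<T1, T2>, so this[K] becomes an indexed
// access over every member -- a union that includes the method types.
// Each source method is then matched against each member of that union,
// which is where the spurious inferences for T1 and T2 come from.
interface TargetWithConstraintSubstitution<T1, T2> {
  set(
    key: keyof BaseType<T1, T2>,
    value: BaseType<T1, T2>[keyof BaseType<T1, T2>]
  ): BaseType<T1, T2>[keyof BaseType<T1, T2>];
}

// Erased signature (the fix): K is replaced by any, so this[K]
// collapses to any and no new inference targets are created.
interface TargetWithErasedSignature {
  set(key: any, value: any): any;
}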

