Mypy: Cannot assign to field of Callable type

Created on 6 Jun 2015 · 40 comments · Source: python/mypy

Consider this example:

from typing import Callable

class A:
    f = None  # type: Callable[[int], int]

    def __init__(self, g: Callable[[int], int]) -> None:
        self.f = g

mypy reports:

trial.py: In member "__init__" of class "A":
trial.py, line 7: Cannot assign to a method
trial.py, line 7: Invalid method type
trial.py, line 7: Incompatible types in assignment (expression has type Callable[[int], int], variable has type Callable[[], int])

while I'd expect this to work.

Note that there is a second error in the reporting of the type of the variable.

Also, this passes type checking (correctly):

f = None  # type: Callable[[int], int]

def g(x: int) -> int: return 0

f = g
Labels: bug, false-positive, priority-0-high, size-large


All 40 comments

The type mismatch is probably because mypy thinks f is a method and therefore strips off the first argument (self).

Hmm... actually I'm not sure what the right thing to do here is. Based on a quick experiment, the desired behavior seems to depend on what kind of a callable g is in your original example.

Consider this:

class A:
    f = None
def m(x):
    return x + 1
print(m(1))  # 2
A.f = m
print(A().f(1))  # Error (2 args given, expects 1)

But now look at this:

class A:
    f = None
class M:
    def __call__(self, x):
        return x + 1
m = M()
print(m(1))  # 2
A.f = m
print(A().f(1))  # Okay => 2
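The difference between the two snippets comes down to the descriptor protocol; a minimal runtime check (m_func and m_obj are stand-in names) makes it visible:

```python
# Plain functions are descriptors (they define __get__), so attribute
# access on an instance binds them as methods; instances of a class
# that only defines __call__ are not descriptors and are returned as-is.
def m_func(x):
    return x + 1

class M:
    def __call__(self, x):
        return x + 1

m_obj = M()

print(hasattr(m_func, '__get__'))  # True: would be bound as a method
print(hasattr(m_obj, '__get__'))   # False: stays a plain callable
```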

Taking off priority label because the fix doesn't seem obvious.

Before reporting it I also thought for a while that I could fix it quickly :smile:.

Is there any update? The issue is still present. For some real world example, it is used in the ctypes module (in the examples).

from typing import Any, Callable

class A():
    a = lambda: None  # type: Callable[..., None]

def f() -> None: return None

A().a = f

gives

/tmp/assign_callable.py:8: error: Cannot assign to a method
/tmp/assign_callable.py:8: error: Incompatible types in assignment (expression has type Callable[[], None], variable has type Callable[..., None])

Using Callable[..., None] or Callable[[], None] does not resolve it; using Callable[..., Any] removes the incompatible-types line.

I'm sorry, I don't have any further updates.

Thanks for bumping this with the new repro!

It's possible these two errors are different issues. The second looks like it may be a case of the Void/NoneTyp distinction (recently eliminated in our --strict-optional future: #1975). Would you try reproducing it on master and with --strict-optional?

The first error looks like the same thing as the original issue.

@tharvik, would you also link more specifically to the bit in ctypes that refers to this? It's good to know about that use case, and it'd be interesting to see it more concretely.

Seems that I'm unable to reproduce 'Incompatible types in assignment' on master with or without --strict-optional.

@gnprice in ctypes, when declaring a foreign function, one can redefine errcheck, which should be a callable. It allows processing of the return value (and raising on failure).

Example from the documentation

def errcheck(result, func, args):
    if not result:
        raise WinError()
    return args

GetWindowRect.errcheck = errcheck

This way, we can load a library, in which, every attribute is a foreign function, then assign to some an errcheck for freeing some pointer or other.

For another example, here goes libLAS, where the library is loaded then it is assigned some checks.
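A runnable sketch of the same errcheck pattern, assuming a Unix-like system where the process's libc can be loaded via CDLL(None); the pass-through errcheck here is illustrative only:

```python
import ctypes

# Load libc (Unix-specific assumption; on Windows you would use WinDLL).
libc = ctypes.CDLL(None)

def errcheck(result, func, args):
    # Post-process the foreign function's return value; a real check
    # would raise on failure here. This one just passes it through.
    return result

libc.abs.restype = ctypes.c_int
libc.abs.argtypes = [ctypes.c_int]
libc.abs.errcheck = errcheck  # the kind of assignment mypy complains about
print(libc.abs(-5))  # 5
```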

To be clear, the example still produces "Cannot assign to a method".

The problem seems to be that mypy disallows assigning to a method (which I
think is reasonable) but thinks that a class variable declared with a
Callable type (and initialized with a lambda) is also a method. The latter
is wrong, I think.

Just for the completeness of cases: same error with Python 3.6 variable annotations:

MyFunction = Callable[[int], int]                     

class Thing:                                          
    function: MyFunction                              

    def __init__(self, function: MyFunction) -> None: 
        self.function = function                      

This affects basically every Django app that uses django.contrib.admin. The following is an often-used pattern there:

class FooAdmin(admin.ModelAdmin):
    list_display = ("id", "custom")

    def custom(self, obj):
        return obj.bar
    custom.short_description = "My Custom Field"  # error in mypy

I guess mypy needs to introduce a notion of a function object which is a
subclass of Callable that also supports other operations like __getattr__
and __setattr__ (though we probably won't be able to type-check such
attributes).

Just an example I ran into today:

from collections import defaultdict
from typing import DefaultDict

d: DefaultDict[int, int] = defaultdict(int)

d.default_factory = None

fails with --strict-optional with 3 errors, and fails without --strict-optional with 2 errors.

When I changed typeshed to make default_factory an Optional[Callable...], --strict-optional type check passed without errors, and --no-strict-optional was unaffected.
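At runtime the assignment is perfectly legal, which is what makes the error a false positive; a quick demonstration:

```python
from collections import defaultdict
from typing import DefaultDict

d: DefaultDict[int, int] = defaultdict(int)
print(d[0])  # 0: key is missing, so the factory int() supplies a default

d.default_factory = None  # the assignment that triggered the errors above
try:
    d[1]
except KeyError:
    print("KeyError: factory disabled")  # without a factory, lookup raises
```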

To reiterate, assigning a callable to a class variable is fraught with problems in Python itself too. The reason is descriptors (things with a __get__ method) -- if the object is a descriptor then its __get__ method will be called when you use e.g. C.f so what you get back is a bound method. (Regular function objects defined with def or lambda are descriptors that work this way, but builtin functions and other callables typically don't).

We should really teach mypy about all this (after all it now does know about descriptors!) but that would be a complicated change.

I'm a bit of a Python beginner and also ran into this, which might or might not have more implications than I realise.

My workaround (since all I wanted was a "selfless" function pointer as a member variable) was to put the Callable in a List like: _get_time_ms: List[Callable[[], int]]. It means an extra list lookup every use, but is in my case negligible and the type check is worth more.

That's not a mypy problem per se -- you can't initialize a class variable
to a "selfless" function pointer at runtime either (or rather, it will be
interpreted as a method and the first argument will be passed the
instance). Your workaround is valid.
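The list workaround described above can be sketched like this (the Clock class and the time-returning lambda are hypothetical, purely for illustration):

```python
import time
from typing import Callable, List

class Clock:
    # Wrapping the callable in a one-element list prevents mypy (and
    # Python's descriptor machinery) from treating it as a method.
    _get_time_ms: List[Callable[[], int]] = [lambda: int(time.time() * 1000)]

    def now(self) -> int:
        # One extra list indexing per call; type checking is preserved.
        return self._get_time_ms[0]()

print(isinstance(Clock().now(), int))  # True
```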

Bumping priority to "high" since this is affecting many users, even though we don't have a solution yet.

Hello everybody from July'18

apps/main/admin.py:120: error: "Callable[[Any, Any], Any]" has no attribute "short_description"

Django dev love mypy :D

I realize that this is complex to fix, but in the meanwhile: is there any suggested workaround for this, other than "wrap it in a List"? I'm not keen on distorting the interface of my types, or manually writing dozens or hundreds of manual __init__s that save things away into a private list attribute. Is there any if TYPE_CHECKING hack that will let me just have some callable attributes and then fix up the annotations later?

Reading back this issue I'm a little bit confused. The original description (from 2015!) is about assigning to an instance variable that is backed by a class variable with a Callable type: (1)

class A:
    f = None  # type: Callable[[int], int]

    # The original example had __init__ but it doesn't matter what method it is
    def foo(self, g: Callable[[int], int]) -> None:
        self.f = g  # Three errors here (line 12)

The three errors are:

_.py:12: error: Cannot assign to a method
_.py:12: error: Invalid self argument "A" to attribute function "f" with type "Callable[[int], int]"
_.py:12: error: Incompatible types in assignment (expression has type "Callable[[int], int]", variable has type "Callable[[], int]")

There's a variation where the assignment target is A().f which elicits essentially the same errors. (2)

But then someone else added a "me too" comment that's actually about something completely different (assigning to a function attribute). And that got 8 thumbs up. Those people should really open their own issue. (3)

So excuse me for asking clarification -- which of the above three do you (@glyph) need help finding a workaround for? Assuming it's (1), I was going to suggest putting # type: ignore on the assignment that gets the error, but then I realized that the problem is just as much using the f attribute! This is also an error:

A().f(42)

The errors:

_.py:16: error: Invalid self argument "A" to attribute function "f" with type "Callable[[int], int]"
_.py:16: error: Too many arguments

This was also shown in Jukka's comment but perhaps not emphasized enough (and he confused matters by considering a callable that was an instance of a class with a __call__ method).

Now what to do about it? The best I can come up with is this:

class A:
    if TYPE_CHECKING:
        def f(self, arg: int) -> int:
            ...
    else:
        f = None

    def foo(self, g: Callable[[int], int]) -> None:
        self.f = g  # type: ignore

This makes calling A().f(42) pass without errors (and return the right type).
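At runtime the workaround behaves as expected, since TYPE_CHECKING is False when the module actually executes:

```python
from typing import TYPE_CHECKING, Callable

class A:
    if TYPE_CHECKING:
        # Only the type checker sees this method-shaped declaration.
        def f(self, arg: int) -> int: ...
    else:
        # At runtime f is just a class attribute holding None.
        f = None

    def foo(self, g: Callable[[int], int]) -> None:
        self.f = g  # type: ignore

a = A()
a.foo(lambda x: x + 1)
print(a.f(42))  # 43: the instance attribute shadows the class-level None
```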

So excuse me for asking clarification -- which of the above three do you (@glyph) need help finding a workaround for?

No worries, my request wasn't entirely clear; I'm talking exclusively about (1).

The suggested workaround does make sense, but it seems unfortunate, in that I lose type-checking on the attribute setter. And it's not much better than explicitly making a "wrong" Callable signature; at least then I'm explicitly making it a variable.

@gvanrossum - your hint upthread ("(after all it now does know about descriptors!)") gave me this idea for a workaround, which appears to work perfectly with no downside I can immediately derive. But I'm wondering if I'm going to regret some aspect of it:

from typing import Callable, Any, Generic, TypeVar

T = TypeVar("T")

class FunctionProperty(Generic[T]):
    def __get__(self, oself: Any, owner: Any) -> T:
        ...
    def __set__(self, oself: Any, value: T) -> None:
        ...

OperatorType = Callable[[int, int], int]

class A(object):
    x: FunctionProperty[OperatorType]

    def __init__(self, x: OperatorType) -> None:
        self.x = x

Correct usages look correct:

def add(x: int, y: int) -> int:
    return x + y

def subtract(x: int, y: int) -> int:
    return x - y

A(add).x(3, 4)
A(subtract).x(4, 3)

incorrect usages seem to produce errors as they should:

def tripladd(x: int, y: int, z: int) -> int:
    return x + y + z

A(tripladd).x(7, 9) # error: Argument 1 to "A" has incompatible type "Callable[[int, int, int], int]"; expected "Callable[[int, int], int]"

The same with assignments:

A(add).x = subtract  # no error
A(add).x = tripladd # error: Argument 2 to "__set__" of "FunctionProperty" has incompatible type "Callable[[int, int, int], int]"; expected "Callable[[int, int], int]"

When I initially tried to do this I was a little confused because I thought I'd have to make it a ClassVar, but that doesn't seem to be how descriptors are specified. (How does one say "this is a descriptor, but it's an instance variable"?)

Oh, clever trick. I guess you're using a loophole where the class variable x is not assigned, so at runtime x does not act like a descriptor -- it's just an instance variable initialized by __init__. From mypy's POV the code would mean exactly the same thing if you wrote x = FunctionProperty[OperatorType]() (i.e. assign it an instance) because that would give A.x exactly the same type. But at runtime it then would act as a descriptor so you would have to put actual working code in the bodies of __get__ and __set__. And what code would you put there exactly? It would have to store the value on oself, but under what name? Your current hack bypasses that question and that's fine, at least until this mess in mypy is fixed.

Currently mypy often does not keep track of whether attributes are initialized, only of their types. It also often does not distinguish between instance variables and class variables -- that's the source of the original problem (can we dub that "OP" as well? :-). I guess we should make sure not to break your hack before we've fixed the OP.

And what code would you put there exactly? It would have to store the value on oself, but under what name? Your current hack bypasses that question and that's fine, at least until this mess in mypy is fixed.

I was hoping to avoid having real code there, because in my more fully fleshed-out examples the actual class-level value of the attribute is an attr.ib().

However, I have, unfortunately, had to write this code a few times. If it becomes necessary, it looks like this:

from sys import _getframe as oh_no

class NamedDescriptor:

    def __init__(self):
        self._scope = oh_no(1).f_locals
        self._found_name = None

    def __get__(self, oself, owner=None):
        return getattr(oself, "_" + self.__name__)

    def __set__(self, oself, value):
        setattr(oself, "_" + self.__name__, value)

    @property
    def __name__(self):
        if self._found_name is None:
            for k, v in self._scope.items():
                if v is self:
                    self._found_name = k
        return self._found_name


class User:
    use1 = NamedDescriptor()
    use2 = NamedDescriptor()

print(User.use1.__name__)
print(User.use2.__name__)

I think we're all happier if I don't have to write this, though :).

I guess we should make sure not to break your hack before we've fixed the OP.

This would be appreciated! But if this is a "hack" though, would the right way to do this "for real" be? (Since there are descriptors in the wild which do actually do things like this.) Just make it a ClassVar as I originally suspected? Is it a bug that that doesn't work today?

I encountered this and I also thought the fix for this would be simple – I was in for a ride.

I fixed the thing I actually wanted to fix but broke some existing tests and I have no patience nor knowledge to continue the fight. Here's the difference between the master branch and my current working directory – maybe it'll be useful in one way or another:

```patch
diff --git a/mypy/semanal.py b/mypy/semanal.py
index cf5224419b37..0aaf69dcb765 100644
--- a/mypy/semanal.py
+++ b/mypy/semanal.py
@@ -1775,10 +1775,12 @@ class SemanticAnalyzerPass2(NodeVisitor[None],
         self.process__all__(s)
 
     def analyze_lvalues(self, s: AssignmentStmt) -> None:
+        no_rhs = isinstance(s.rvalue, TempNode) and s.rvalue.no_rhs
         for lval in s.lvalues:
             self.analyze_lvalue(lval,
                                 explicit_type=s.type is not None,
-                                is_final=s.is_final_def)
+                                is_final=s.is_final_def,
+                                is_uninitialized=no_rhs)
 
     def apply_dynamic_class_hook(self, s: AssignmentStmt) -> None:
         if len(s.lvalues) > 1:
@@ -2072,7 +2074,8 @@ class SemanticAnalyzerPass2(NodeVisitor[None],
     def analyze_lvalue(self, lval: Lvalue, nested: bool = False,
                        add_global: bool = False,
                        explicit_type: bool = False,
-                       is_final: bool = False) -> None:
+                       is_final: bool = False,
+                       is_uninitialized: bool = False) -> None:
         """Analyze an lvalue or assignment target.
 
         Note that this is used in both pass 1 and 2.
@@ -2082,9 +2085,10 @@ class SemanticAnalyzerPass2(NodeVisitor[None],
             nested: If true, the lvalue is within a tuple or list lvalue expression
             add_global: Add name to globals table only if this is true (used in first pass)
             explicit_type: Assignment has type annotation
+            is_uninitialized: Assignment has no rhs
         """
         if isinstance(lval, NameExpr):
-            self.analyze_name_lvalue(lval, add_global, explicit_type, is_final)
+            self.analyze_name_lvalue(lval, add_global, explicit_type, is_final, is_uninitialized)
         elif isinstance(lval, MemberExpr):
             if not add_global:
                 self.analyze_member_lvalue(lval, explicit_type, is_final)
@@ -2113,7 +2117,8 @@ class SemanticAnalyzerPass2(NodeVisitor[None],
                             lval: NameExpr,
                             add_global: bool,
                             explicit_type: bool,
-                            is_final: bool) -> None:
+                            is_final: bool,
+                            is_uninitialized: bool) -> None:
         """Analyze an lvalue that targets a name expression.
 
         Arguments are similar to "analyze_lvalue".
@@ -2161,6 +2166,8 @@ class SemanticAnalyzerPass2(NodeVisitor[None],
             if is_final and unmangle(lval.name) + "'" in self.type.names:
                 self.fail("Cannot redefine an existing name as final", lval)
             v = self.make_name_lvalue_var(lval, MDEF, not explicit_type)
+            if not is_uninitialized:
+                v.is_initialized_in_class = True
             self.type.names[lval.name] = SymbolTableNode(MDEF, v)
         else:
             self.make_name_lvalue_point_to_existing_def(lval, explicit_type, is_final)
@@ -2203,7 +2210,6 @@ class SemanticAnalyzerPass2(NodeVisitor[None],
         if kind == MDEF:
             assert self.type is not None
             v.info = self.type
-            v.is_initialized_in_class = True
         if kind != LDEF:
             v._fullname = self.qualified_name(lvalue.name)
         if kind == GDEF:
diff --git a/test-data/unit/check-newsyntax.test b/test-data/unit/check-newsyntax.test
index 61a012b49f82..068a4e7219e2 100644
--- a/test-data/unit/check-newsyntax.test
+++ b/test-data/unit/check-newsyntax.test
@@ -151,3 +151,17 @@ v = 1
 reveal_type(f'{v}') # E: Revealed type is 'builtins.str'
 reveal_type(f'{1}') # E: Revealed type is 'builtins.str'
 [builtins fixtures/f_string.pyi]
+
+[case testNewSyntaxWithCallableInstanceVars]
+# flags: --python-version 3.6
+from typing import Callable
+def same(s: str) -> str:
+    return s
+
+class TstInstance:
+    a: Callable[[str], str]
+    def __init__(self) -> None:
+        self.a = same
+
+TstInstance().a('wat')
+[builtins fixtures/callable.pyi]
diff --git a/test-data/unit/check-selftype.test b/test-data/unit/check-selftype.test
index 0371e99546a4..e5e6c0cb9a26 100644
--- a/test-data/unit/check-selftype.test
+++ b/test-data/unit/check-selftype.test
@@ -382,12 +382,12 @@ class B(A):
 
 reveal_type(A().g) # E: Revealed type is 'builtins.int'
 reveal_type(A().gt) # E: Revealed type is '__main__.A'
-reveal_type(A().f()) # E: Revealed type is 'builtins.int'
-reveal_type(A().ft()) # E: Revealed type is '__main__.A'
+reveal_type(A().f(1)) # E: Revealed type is 'builtins.int'
+reveal_type(A().ft(1)) # E: Revealed type is 'builtins.int'
 reveal_type(B().g) # E: Revealed type is 'builtins.int'
 reveal_type(B().gt) # E: Revealed type is '__main__.B'
-reveal_type(B().f()) # E: Revealed type is 'builtins.int'
-reveal_type(B().ft()) # E: Revealed type is '__main__.B'
+reveal_type(B().f(1)) # E: Revealed type is 'builtins.int'
+reveal_type(B().ft(1)) # E: Revealed type is 'builtins.int'
 
 [builtins fixtures/property.pyi]
 
@@ -407,12 +407,12 @@ class B(A):
 
 reveal_type(A().g) # E: Revealed type is 'builtins.int'
 reveal_type(A().gt) # E: Revealed type is 'Tuple[builtins.int, builtins.int, fallback=__main__.A]'
-reveal_type(A().f()) # E: Revealed type is 'builtins.int'
-reveal_type(A().ft()) # E: Revealed type is 'Tuple[builtins.int, builtins.int, fallback=__main__.A]'
+reveal_type(A().f(1)) # E: Revealed type is 'builtins.int'
+reveal_type(A().ft(1)) # E: Revealed type is 'builtins.int'
 reveal_type(B().g) # E: Revealed type is 'builtins.int'
 reveal_type(B().gt) # E: Revealed type is 'Tuple[builtins.int, builtins.int, fallback=__main__.B]'
-reveal_type(B().f()) # E: Revealed type is 'builtins.int'
-reveal_type(B().ft()) # E: Revealed type is 'Tuple[builtins.int, builtins.int, fallback=__main__.B]'
+reveal_type(B().f(1)) # E: Revealed type is 'builtins.int'
+reveal_type(B().ft(1)) # E: Revealed type is 'builtins.int'
 
 [builtins fixtures/property.pyi]
 
@@ -436,16 +436,16 @@ class Y(X): pass
 X1: Type[X]
 reveal_type(X.g) # E: Revealed type is 'builtins.int'
 reveal_type(X.gt) # E: Revealed type is 'def (x: builtins.int) -> __main__.X'
-reveal_type(X.f()) # E: Revealed type is 'builtins.int'
-reveal_type(X.ft()) # E: Revealed type is 'def (x: builtins.int) -> __main__.X'
+reveal_type(X.f(1)) # E: Revealed type is 'builtins.int'
+reveal_type(X.ft(1)) # E: Revealed type is 'builtins.int'
 reveal_type(Y.g) # E: Revealed type is 'builtins.int'
 reveal_type(Y.gt) # E: Revealed type is 'def (x: builtins.int) -> __main__.Y'
-reveal_type(Y.f()) # E: Revealed type is 'builtins.int'
-reveal_type(Y.ft()) # E: Revealed type is 'def (x: builtins.int) -> __main__.Y'
+reveal_type(Y.f(1)) # E: Revealed type is 'builtins.int'
+reveal_type(Y.ft(1)) # E: Revealed type is 'builtins.int'
 reveal_type(X1.g) # E: Revealed type is 'builtins.int'
 reveal_type(X1.gt) # E: Revealed type is 'Type[__main__.X]'
-reveal_type(X1.f()) # E: Revealed type is 'builtins.int'
-reveal_type(X1.ft()) # E: Revealed type is 'Type[__main__.X]'
+reveal_type(X1.f(1)) # E: Revealed type is 'builtins.int'
+reveal_type(X1.ft(1)) # E: Revealed type is 'builtins.int*'
 
 [builtins fixtures/property.pyi]
 
@@ -469,12 +469,12 @@ a: A[int]
 b: B[str]
 reveal_type(a.g) # E: Revealed type is 'builtins.int'
 reveal_type(a.gt) # E: Revealed type is 'builtins.int'
-reveal_type(a.f()) # E: Revealed type is 'builtins.int'
-reveal_type(a.ft()) # E: Revealed type is '__main__.A[builtins.int]'
+reveal_type(a.f(1)) # E: Revealed type is 'builtins.int'
+reveal_type(a.ft(1)) # E: Revealed type is 'builtins.int'
 reveal_type(b.g) # E: Revealed type is 'builtins.int'
 reveal_type(b.gt) # E: Revealed type is '__main__.B[builtins.str]'
-reveal_type(b.f()) # E: Revealed type is 'builtins.int'
-reveal_type(b.ft()) # E: Revealed type is '__main__.B[builtins.str]'
+reveal_type(b.f(1)) # E: Revealed type is 'builtins.int'
+reveal_type(b.ft(1)) # E: Revealed type is 'builtins.int*'
 
 [builtins fixtures/property.pyi]
```

As noted in the discussion in https://github.com/python/mypy/issues/8058, using callback protocols may be a decent workaround for this issue.

It would be super nice to just have a syntax (a decorator?) that lets you use a regular def and get a callback protocol out rather than the awkward protocol-with-a-__call__-method.

It seems like a clean workaround is to just not bother setting the instance variable type at the class level (which seems incongruous to me, anyway).

For example, if I take the original code posted in this issue and comment out the class-level type hint, mypy has no problems with the code. See below, where I've also assigned the callable and used it, just for good measure:

from typing import Callable

class A:
    #f = None  # type: Callable[[int], int]   # <-- commenting out from the original code
    def __init__(self, g: Callable[[int], int]) -> None:
        self.f = g

# Test the usage for good measure...
def plus1(x: int) -> int:
    return x + 1
a = A(plus1)
print(a.f(1.1))  # <-- mypy's only complaint is this float (which is what we want)
print(a.f(1))    # <-- no complaint from mypy

As commented, the only line mypy has an issue with is the attempt to pass a float to the int arg, so type checking is working as expected.

But... what side effects are there to working around the problem like this? I don't understand the reason for defining instance-level types at class level in the first place, so I'm unclear what the impact is of skipping the class-level hinting entirely.

I actually only ran into this problem (and found this issue) after I learned (from PEP526) that I was supposed to be defining my instance-level property types at class level. I then converted some formerly mypy-error-free code to define instance property types at class level... and started getting these "cannot assign to method" errors.

I'm not sure if this comparison will be helpful, but I just checked pyright to see what it does in this situation (since I've noticed that it handles class-level vs instance level type hinting for instance properties in a different way). It turns out that pyright is 100% ok with class-level hints of callable instance-level properties.

To demonstrate that, here is the same code as above, tuned up very slightly to avoid the None assignment at class level:

from typing import Callable

class A:
    f: Callable[[int], int]  # <-- instance property type included at class-level (w/o None)
    def __init__(self, g: Callable[[int], int]) -> None:
        self.f = g

# Test the usage for good measure...
def plus1(x: int) -> int:
    return x + 1
a = A(plus1)
print(a.f(1.1))  # <-- pyright's only complaint is this float (which is what we want)
print(a.f(1))

And here are the outputs for mypy and pyright:

$ python --version
Python 3.8.2
$ mypy --version
mypy 0.770
$ mypy --strict ./hmm.py 
hmm.py:6: error: Cannot assign to a method
hmm.py:6: error: Invalid self argument "A" to attribute function "f" with type "Callable[[int], int]"
hmm.py:6: error: Incompatible types in assignment (expression has type "Callable[[int], int]", variable has type "Callable[[], int]")
hmm.py:12: error: Invalid self argument "A" to attribute function "f" with type "Callable[[int], int]"
hmm.py:12: error: Too many arguments
hmm.py:13: error: Invalid self argument "A" to attribute function "f" with type "Callable[[int], int]"
hmm.py:13: error: Too many arguments
Found 7 errors in 1 file (checked 1 source file)
$ 
$ 
$ pyright --version
pyright 1.1.29
$ pyright ./hmm.py 
typingsPath /home/russ/hmm/typings is not a valid directory.
Searching for source files
Found 1 source file
/home/russ/hmm/hmm.py
  12:11 - error: Argument of type 'Literal[1.1]' cannot be assigned to parameter 'p0' of type 'int'
  'float' is incompatible with 'int'
1 error, 0 warnings 
Completed in 1.474sec

mypy has 7 complaints about the code (including the one that originated this issue), while pyright only complains about the legitimately bad float usage.

I still hesitantly prefer the workaround (?) in my previous post where you skip the hint at class level entirely, since A) it isn't a class level property, and B) you don't need to type out the exact same hint for the __init__ arg. However, I say "hesitantly", because (per previous) I'm not sure what the impact is of violating PEP526's recommendation to define instance property types at class level.

I managed to find a pretty clean workaround for this: declare the attribute optional, and wrap it in a normal method. The result is clear code that satisfies mypy without any more egregious hacks:

from typing import Callable, Optional

class A:
    _f: Optional[Callable[[], int]]

    def f(self) -> int:
        assert self._f is not None
        return self._f()

    def __init__(self, f: Callable[[], int] = lambda: 42) -> None:
        self._f = f

A().f()

@jakevdp - your example can be trimmed to below, retaining both functionality and mypy happiness:

from typing import Callable

class A:
    def __init__(self, f: Callable[[], int] = lambda: 42) -> None:
        self.f = f

A().f()

Significantly fewer lines. No need for the wrapper layer. No need to define the type hint for f twice.

The only "penalty" is that the f instance property isn't defined at the class level (which seems better to me, anyway).

Is it no longer recommended to forward-declare attribute types in classes?

I've just had a re-read of the relevant PEP526 section, and see that it says this at the bottom:

As a matter of convenience (and convention), instance variables can be annotated in __init__ or other methods, rather than in the class:

With this example included:

from typing import Generic, TypeVar
T = TypeVar('T')

class Box(Generic[T]):
    def __init__(self, content):
        self.content: T = content

Unfortunately, that example has a mypy error due to the missing type annotation on the content arg, but if the annotation is moved from the assignment (self.content: T = content) up to the function definition everything is fine (and it matches my example).

Can a PEP be corrected?

A quick workaround is to disable the error with a comment:

thumbnail.short_description = _("Thumbnail") # type: ignore[attr-defined]

Just to make it easier for those arriving here after a search for solutions, here is the proper workaround using a Protocol, as pointed out by Ivan:

from typing import Protocol

class FooCallback(Protocol):
    def __call__(self, arg1: int, arg2: bool) -> str:
        pass

class Bar:
    callback: FooCallback

    def __init__(self, callback: FooCallback) -> None:
        self.callback = callback

An additional advantage is that a Protocol with __call__ is much more expressive than Callable[...] is; you can use keyword arguments, for example.

Note that in a callable protocol, the argument names are part of the interface, while in a Callable annotation they are not.

For example, see this extension of @mjpieters' example:

from typing import Protocol

class FooCallback(Protocol):
    def __call__(self, arg1: int, arg2: bool) -> str: ...

class Bar:
    callback: FooCallback

    def __init__(self, callback: FooCallback) -> None:
        self.callback = callback

def f(x: int, y: bool) -> str:
    return 'called!'

b = Bar(f)
print(b.callback(2, True))

Here mypy doesn't consider f to be compatible with the callable protocol:

error: Argument 1 to "Bar" has incompatible type "Callable[[int, bool], str]"; expected "FooCallback"

One solution is to define the protocol with position-only arguments: (requires Python 3.8+)

class FooCallback(Protocol):
    def __call__(self, arg1: int, arg2: bool, /) -> str: ...

...or by prefixing the argument names with double underscores: (as suggested by @JelleZijlstra below)

class FooCallback(Protocol):
    def __call__(self, __arg1: int, __arg2: bool) -> str: ...

Another solution is to make sure the argument names match the protocol's:

def f(arg1: int, arg2: bool) -> str:
    return 'called!'

Note that even before 3.8 you can simulate positional-only arguments by prefixing the name with a double underscore (__arg1). This is documented in PEP 484 (https://www.python.org/dev/peps/pep-0484/#positional-only-arguments).

Note that even before 3.8 you can simulate positional-only arguments by prefixing the name with a double underscore (__arg1).

That's useful to know. Thanks!
