mypy
Use case for typing.Type with abstract types
In https://github.com/python/mypy/issues/2169 and https://github.com/python/mypy/issues/1843 there was discussion about using Type[some_abc] and how it should be allowed in a function signature, but that the call-site was expected to pass a concrete subclass of some_abc. There is an implicit assumption that the objective of a function taking such a type is, ultimately, to instantiate the type.
@gvanrossum said, in https://github.com/python/mypy/issues/2169#issuecomment-249001710:
But maybe we need a check that you call this thing with a concrete subclass; and for that we would need some additional notation (unless maybe we could just say that whenever there's an argument of Type[A] where A is abstract, that the argument must be a concrete subclass. But for that we'd need some experiment to see if there's much real-world code that passes abstract classes around. (If there is, we'd need to have a way to indicate the need in the signature.)
I have such a use case.
I have a sequence of observers supplied by clients of my library, to which I want to dispatch events according to the abstract base class(es) that each implements. I tried to do this as follows:
import abc
import typing


class FooObserver(metaclass=abc.ABCMeta):
    """Receives Foo events."""

    @abc.abstractmethod
    def on_foo(self, count: int) -> None:
        raise NotImplementedError()


class BarObserver(metaclass=abc.ABCMeta):
    """Receives Bar events."""

    @abc.abstractmethod
    def on_bar(self, status: str) -> None:
        raise NotImplementedError()


class Engine:
    def __init__(self, observers: typing.Sequence[typing.Any]) -> None:
        self.__all_observers = observers

    def do_bar(self, succeed: bool) -> None:
        status = 'ok' if succeed else 'problem'
        for bar_observer in self.__observers(BarObserver):
            bar_observer.on_bar(status)

    def do_foo(self, elements: typing.Sequence[typing.Any]) -> None:
        count = len(elements)
        for foo_observer in self.__observers(FooObserver):
            foo_observer.on_foo(count)

    __OBSERVER_TYPE = typing.TypeVar('__OBSERVER_TYPE')

    def __observers(
            self,
            observer_type: typing.Type['__OBSERVER_TYPE']
    ) -> typing.Sequence['__OBSERVER_TYPE']:
        return [observer for observer in self.__all_observers
                if isinstance(observer, observer_type)]
Unfortunately, MyPy complains about this as follows:
/Users/erikwright/abc_typing.py:24: error: Only concrete class can be given where "Type[BarObserver]" is expected
/Users/erikwright/abc_typing.py:29: error: Only concrete class can be given where "Type[FooObserver]" is expected
Given that (AFAICT) the decision was made not to attempt to verify that the runtime type supports any specific constructor signature, I'm wondering why there is nonetheless an expectation that the runtime type is constructible at all. In my case, the entire purpose of typing here is:
- Require you to actually pass a type, which means I can use it in isinstance.
- Allow me to specify the return type of the method in terms of the supplied type.
The point is that it is too hard to track which classes are instantiated in the body of a given function and which are not. If we had such tracking, we could allow calling functions with abstract classes at particular positions.
Taking into account that such tracking would take some effort, and that in the year this behaviour has existed this is the first such request, I would recommend just using # type: ignore. Even if this is implemented at some point, it is quite low priority.
As far as I can tell, the same problem applies to Protocol as well. Is there any way to have a TypeVar that references anything abstract?
@glyph Perhaps you could use # type: ignore to silence the error as suggested above? It's clearly not optimal, though. Can you give more information about your use case?
I wonder if we could enable Type[x] with abstract and protocol types but disallow creating an instance (i.e., they wouldn't be callable). They could still be used for things like isinstance checks.
What I’m trying to do is to write a class decorator, @should_implement(SomeProtocol) which type-checks the decorated class to ensure it complies with the given protocol, so ignoring the error would obviate the whole point ;-).
Makes sense, though I'm not sure if lifting the restriction would be sufficient to allow the decorator to be checked statically. Right now a runtime check is probably the best you can do with that syntax, at least without a plugin. For a static check you could use a dummy assignment (which is not very pretty).
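The "dummy assignment" check mentioned above could look something like this (a sketch with hypothetical names; the assignment exists only so the checker verifies the class against the protocol):

```python
from typing import Protocol  # typing.Protocol since Python 3.8


class HasX(Protocol):
    x: int


class Good:
    x: int = 0


# Dummy assignment: mypy checks that Good() satisfies HasX here.
# A class missing `x` would be flagged on this line.
_check: HasX = Good()
```

It works statically, but it pollutes the module namespace and sits far from the class definition, which is why it is "not very pretty".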
Increasing priority to normal since I think that the current approach is too restrictive. I'm not sure what's the best way to move forward, though.
@JukkaL 🙏
@JukkaL I think I found a workaround, leveraging the little white lie that Protocols without constructors are callables that return themselves:
from typing import Callable, Type, TypeVar

from typing_extensions import Protocol


class AProtocol(Protocol):
    x: int


protocol = TypeVar("protocol")


def adherent(c: Callable[[], protocol]) -> Callable[[Type[protocol]], Type[protocol]]:
    def decor(input: Type[protocol]) -> Type[protocol]:
        return input
    return decor


@adherent(AProtocol)  # No error; Yes is the expected shape
class Yes(object):
    x: int
    other: str


y = Yes()
y.x
y.other


@adherent(AProtocol)  # We get an error here, as desired
class No(object):
    y: int
I should note that there's a big problem with my workaround; you can only apply the decorator once, and then it breaks down. There's a variant here where you can abuse a Generic instead, and then write
Adherent[AProtocol](Yes)
Adherent[AProtocol](No)
but these have to come after the class body and look somewhat uglier.
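The Generic-based variant could be sketched like this (my guess at the shape, not the author's actual code): Adherent[SomeProtocol](SomeClass) asks the checker to accept SomeClass where a zero-argument callable returning the protocol is expected.

```python
from typing import Callable, Generic, Protocol, TypeVar

P = TypeVar("P")


class Adherent(Generic[P]):
    # Adherent[AProtocol](Yes) type-checks Yes() against AProtocol,
    # because Yes must be accepted where Callable[[], P] is expected.
    def __init__(self, c: Callable[[], P]) -> None:
        self.c = c


class AProtocol(Protocol):
    x: int


class Yes:
    x: int = 0


Adherent[AProtocol](Yes)  # accepted: Yes satisfies the protocol
```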
I'm not sure what's the best way to move forward, though.
A possible ad-hoc solution (which may not be so bad) is to just remove this check for class decorators, because it also causes trouble for dataclasses; see https://github.com/python/mypy/issues/5374, which has 12 upvotes.
This "Only concrete class can be given" error also seems impossible to overcome for code that is supposed to accept an abstract class and return some instance of that class, even if it would have to create the class right then and there using some fancy mechanism like type(a, b, c). Think unittest.Mock and similar dummy object factories.
I have also just realized that this breaks (mypy raises this error) even when you write a function that acts like isinstance(...) with the provided class. Makes you wonder how isinstance is actually typed in typeshed (gonna check that now).
from abc import ABC
from typing import TypeVar, Type

T = TypeVar('T')


class Abstract(ABC):
    pass


def isthisaninstance(this, type_: Type[T]) -> bool:
    return isinstance(this, type_)


isthisaninstance("", Abstract)  # type: ignore :(
Is there any way to overcome this (other than to # type: ignore all function/method calls)?
Is there any way to overcome this
In some cases you may use if TYPE_CHECKING: to conditionally define the base class so that mypy sees the base class as object while at runtime it will be ABC:
from abc import ABC
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    Base = object
else:
    Base = ABC


class Abstract(Base):
    pass
You'll lose any ABC checking by mypy, however.
@JukkaL ,
Thanks. That's not enough, though. Mypy treats classes as abstract (and frankly, it should) even when they don't specify ABC as their parent. It's enough for them to have @abstractmethods on them, for example.
Maybe it would be possible to use a similar trick with TYPE_CHECKING to override even @abstractmethod (set it to some no-op decorator), but that's really pushing it :D.
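That "really pushing it" trick could be sketched as follows (a sketch, not a recommendation): under TYPE_CHECKING, replace abstractmethod with a no-op decorator so mypy no longer considers the class abstract, while runtime behaviour is unchanged.

```python
from typing import TYPE_CHECKING, Callable, TypeVar

F = TypeVar("F", bound=Callable[..., object])

if TYPE_CHECKING:
    def abstractmethod(f: F) -> F:
        # No-op for the type checker: classes using it look concrete to mypy.
        return f
else:
    from abc import abstractmethod  # real behaviour at runtime
```

At runtime the real abstractmethod is in effect, so instantiating an incomplete subclass still raises TypeError; only mypy's view of the class changes, and you lose all static abstractness checking.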
Today, we have dealt with this issue in our code by using @overload to allow our code to be called with an abstract class, and possibly even return its instance; however, the user has to specify the return type for the variable where they are storing it. Which isn't that bad.
from typing import Type, TypeVar, overload

T = TypeVar('T')


@overload
def do(type_: Type[T], ...) -> T:
    pass


@overload
def do(type_: type, ...) -> T:
    pass


def do(type_: Type[T], ...) -> T:
    """
    Do ...
    """


from abc import ABC


class Abstract(ABC):
    pass


var: Abstract = do(Abstract, ...)
I've had to redact some of the bits, but this works, unless somebody takes out the type annotation for var. It's a bit wonky, because of course, if you lie to your face and mistype the type annotation for var, do will likely give you something different. But I think it's better than if TYPE_CHECKING or # type: ignore :D. Especially since we actually use this particular function with concrete classes.
Just to add another data point here:
This "Only concrete class can be given" error also seems impossible to overcome for code that is supposed to accept an abstract class and return some instance of that class, even if it would have to create the class right then and there using some fancy mechanism like type(a, b, c). Think unittest.Mock and similar dummy object factories.
This is what happens in Injector (a dependency injection framework, I imagine most similar frameworks will have this pattern somewhere). The Injector class' get() method is typed like this:
get(interface: Type[T], ...) -> T
The behavior is if T is abstract an instance of something non-abstract will be provided (if bound earlier) or a runtime error occurs, and from Injector's point of view it'd be cool if this type checked: https://github.com/alecthomas/injector/issues/143 :)
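A minimal sketch of that pattern (hypothetical names, not Injector's actual implementation): get() accepts an abstract interface and returns an instance of whatever concrete class was bound to it earlier.

```python
from abc import ABC, abstractmethod
from typing import Dict, Type, TypeVar

T = TypeVar("T")


class Interface(ABC):
    @abstractmethod
    def run(self) -> str: ...


class Impl(Interface):
    def run(self) -> str:
        return "ok"


class Container:
    def __init__(self) -> None:
        self._bindings: Dict[type, type] = {}

    def bind(self, iface: Type[T], impl: Type[T]) -> None:
        self._bindings[iface] = impl

    def get(self, iface: Type[T]) -> T:
        # Only ever instantiates the bound concrete class, yet mypy flags
        # get(Interface) with the "only concrete class" error at the call site.
        return self._bindings[iface]()


c = Container()
c.bind(Interface, Impl)
assert c.get(Interface).run() == "ok"
```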
I've run into this problem too: I have a function which takes a (possibly abstract) type and returns an instance of that type; but in my case it does so by calling a class method on the type. What I'd ideally like to be able to do is declare a protocol that inherits from Type[_T], e.g.
class _Factory(Protocol[_T], Type[_T]):
    def from_file(cls, file: io.IOBase) -> _T: ...
to indicate that the argument must be a class which provides a from_file class method returning an instance of itself (_T being covariant). Unfortunately, when I try this I get 'error: Invalid base class "Type"'; and if I take the Type base class out, I get lots of other errors referring to the "_T" used in the protocol.
If something like this was possible (and I don't understand type theory nearly well enough to say whether it makes sense), it would make it practical to express concepts like "subclass of X which is instantiatable", "subclass of X which has this __init__ signature" or "subclass for X for which this particular abstract class method has an implementation" (my use case).
Maybe try:
class _Factory(Protocol[_T]):
    def from_file(cls: Type[_T], file: io.IOBase) -> _T: ...
@JelleZijlstra thanks. I tried something like that and couldn't get it to work. Here's an example with your suggestion:
#!/usr/bin/env python3
from abc import ABC, abstractmethod
from typing import Type, TypeVar

from typing_extensions import Protocol

_T_co = TypeVar('_T_co', covariant=True, bound='Base')
_T = TypeVar('_T', bound='Base')


class Base(ABC):
    @abstractmethod
    def foo(self) -> None: ...

    @classmethod
    def from_file(cls: Type[_T]) -> _T: ...


class Derived(Base):
    def foo(self) -> None: ...


class _Factory(Protocol[_T_co]):
    def from_file(cls: Type[_T_co]) -> _T_co: ...


def make_thing(cls: _Factory[_T]) -> _T: ...


make_thing(Base)
It gives these errors (mypy 0.780):
type_proto.py:21: error: The erased type of self "Type[type_proto.Base]" is not a supertype of its class "type_proto._Factory[_T_co`1]"
type_proto.py:25: error: Argument 1 to "make_thing" has incompatible type "Type[Base]"; expected "_Factory[<nothing>]"
I also tried using _T_co throughout instead of just _T since I don't think I really understand how variance interacts with generic-self, but it didn't work any better.
I'm also hitting up against this issue. I have a similar case where I have a plugin system that utilizes sub-type filtering. The following fails with the same error when invoked where interface_type is of an abstract class:
import abc
from typing import Collection, Set, Type, TypeVar

T = TypeVar("T")


def filter_plugin_types(interface_type: Type[T],
                        candidate_pool: Collection[Type]) -> Set[Type[T]]:
    """Return a subset of candidate_pool based on the given interface_type."""
    ...


class Base(abc.ABC):
    @abc.abstractmethod
    def foo(self) -> None: ...


class Derived(Base):
    def foo(self) -> None: ...


type_set = filter_plugin_types(Base, [object, Derived])
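A plausible body for filter_plugin_types (my sketch; the original elides it): keep the candidates that subclass the interface and are themselves concrete.

```python
import abc
import inspect
from typing import Collection, Set, Type, TypeVar

T = TypeVar("T")


def filter_plugin_types(interface_type: Type[T],
                        candidate_pool: Collection[type]) -> Set[Type[T]]:
    """Return the concrete subclasses of interface_type in candidate_pool."""
    return {c for c in candidate_pool
            if issubclass(c, interface_type) and not inspect.isabstract(c)}


class Base(abc.ABC):
    @abc.abstractmethod
    def foo(self) -> None: ...


class Derived(Base):
    def foo(self) -> None:
        pass


assert filter_plugin_types(Base, [object, Derived]) == {Derived}
```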
@Purg As I'm also working on a plugin system, would be interested in hearing more about what you're doing.
@Purg As I'm also working on a plugin system, would be interested in hearing more about what you're doing.
Hi @pauleveritt, thanks for your interest! I sent a message to your email you have listed.
Could the message be assigned its own error code as a stopgap? That way it would be possible to silence it globally in codebases where a lot of # type: ignores would be needed otherwise.
Any updates on this issue?
I think assigning this a dedicated error code is a good idea. Finding a good compromise for everyone may be hard, and with e.g. --disable-error-code=typevar-abstract people will be able decide the safety/convenience balance themselves.
I think assigning this a dedicated error code is a good idea. Finding a good compromise for everyone may be hard, and with e.g. --disable-error-code=typevar-abstract people will be able decide the safety/convenience balance themselves.
That's a good solution for the time being. In the long run, it might be nice to have a separate type, like Interface, which is a superclass of Type that does everything Type can do except support instantiation (similar to @bmerry's suggestion). Then, the above examples could simply be written with Interface. What do you think?
Yeah, I was thinking about this, but this will require a PEP probably, and all other type-checkers will need to agree on exact semantics. This may take a lot of time and effort, so I am not sure it is worth it.
Would also an implementation over TypeGuard be possible? So that you can tell TypeGuard somehow that only concrete classes can be returned
Would also an implementation over TypeGuard be possible? So that you can tell TypeGuard somehow that only concrete classes can be returned
No, this will not help.
The dedicated error code for this landed and will be available in the next release. You can opt-out from this check using --disable-error-code=type-abstract (or using your config file).
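With the dedicated code, the check can be silenced project-wide; for example, in a mypy config file:

```ini
[mypy]
disable_error_code = type-abstract
```

or at a single call site with `# type: ignore[type-abstract]`.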
I ran into this issue today, and was surprised to do so. I expected Type[T] with an unbounded T to match any class, even non-instantiable ones. After all, an abstract class is still a type, right? That's what I learnt at university at least. Just because people often mean an instantiable type when they say Type[T] doesn't mean that that necessarily makes sense from the perspective of type theory.
I would like to suggest a potential solution beyond --disable-error-code=type-abstract, which is a bit coarse (thanks nevertheless, it does help!). I have a function in my library API which should be able to take an abstract base class as an argument, and I'd rather not make all the users disable checking in general, or have to put # type: ignore every time they use my library.
It seems to me that whether a class can be instantiated or not isn't so different from it having a particular class method. (__call__ or __new__, and yes, I realise that that's not exactly how it is, I'm just claiming it to be a useful analogy.) If you give this to mypy
from typing import Type, TypeVar


class A:
    @classmethod
    def fn(cls) -> None:
        pass


class C:
    pass


T = TypeVar('T')


def fn(cls: Type[T]) -> None:
    cls.fn()
it will tell you test1.py:14: error: "Type[T]" has no attribute "fn", which is entirely reasonable because this will raise if you passed C, even though it would work for A.
To fix this, of course, you need to constrain T to types that have the required function:
from typing import Type, TypeVar


class A:
    @classmethod
    def fn(cls) -> None:
        pass


class B(A):
    pass


class C:
    pass


T = TypeVar('T', bound=A)


def fn(cls: Type[T]) -> None:
    cls.fn()


fn(A)
fn(B)
fn(C)
The above will only return an error for the last line: test2.py:21: error: Value of type variable "T" of "fn" cannot be "C" because C is outside of T's bound, and that's exactly what we want.
So maybe if you want to pass a type to a function and instantiate it, you should have to do something like this:
from typing import InstantiableType, Type, TypeVar

T = TypeVar('T', bound=InstantiableType)


def fn(cls: Type[T]) -> T:
    return cls()
and if you forget the bound it would error on the last line with error: Type[T] cannot be instantiated.
But okay, Type[T] is assumed to be bound to a virtual supertype of all instantiable types. Take that as a given. What if we could provide an explicit bound to allow abstract base classes?
from abc import ABC
from typing import Type, TypeVar

T = TypeVar('T', bound=ABC)


def fn(cls: Type[T]) -> T:
    return cls()
Here mypy would tell you error: Type[T] cannot be instantiated on the last line, but now
from abc import ABC, abstractmethod
from typing import Type, TypeVar


class A(ABC):
    @abstractmethod
    def absmeth(self) -> None:
        pass


class B:
    pass


T = TypeVar('T', bound=ABC)


def fn(cls: Type[T]) -> None:
    print(cls.__name__)


fn(A)
fn(B)
would happily pass.
I don't know how easy this would be to implement, and it would be a special case for sure, because T would also match classes not derived from ABC or having ABCMeta as their metaclass (perhaps typing.Any could be used instead, or a new special type?). But unconstrained Type[T] not allowing non-instantiable types is also a special case; this is backwards compatible, and it feels natural, at least to me.
One possible solution would be for Python to support both a typing.Type and a typing.AbstractType where AbstractType doesn't have these same constraints on the class needing to be instantiated. This could also allow for better error checking inside the functions that receive an abstract type as a parameter.
I just ran into this too, and was surprised. The assumption (from the mypy docs) that the type would inevitably be used to construct an instance is ludicrous. Even then, AnyIO has a number of abstract classes whose __new__() methods return instances of known subclasses, so even that argument goes right out of the window.