mcstatus
Move async methods to their own classes
The end date for deprecations was set to 2023-02, six months after opening this PR. I will push it back if this PR stays open for too long.
There is no `MCServer.status` actually, so the only place where I'm breaking LSP here is `MCServer.lookup`, which exists only for `BedrockServer`, because `JavaServer` redefines it.
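To make the LSP concern concrete, here is a toy sketch (not actual mcstatus code, names are made up) of what breaks when a subclass turns a sync method async:

```python
import inspect

class Base:
    def lookup(self) -> str:
        return "base"

class AsyncChild(Base):
    async def lookup(self) -> str:  # type: ignore[override]
        # Same method name, but callers now get a coroutine instead of a str.
        return "child"

assert Base().lookup() == "base"   # a plain str, as callers expect

maybe = AsyncChild().lookup()      # substituting the subclass breaks that contract
assert inspect.iscoroutine(maybe)
maybe.close()                      # close the never-awaited coroutine quietly
```

Any caller typed against `Base` that does `server.lookup().upper()` would crash when handed an `AsyncChild`, which is exactly the substitutability that LSP demands.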
After some time, I came up with this hill of abstractions:

(Methods prefixed with `+` are abstract, and with `a` are async. Blue classes are abstract, and yellow ones are the end implementations.)
Trying to sit on all chairs at once gives a really complicated system, so maybe we should try to remove some abstractions?
This looks much better.
I think going this route would be quite complex for little benefit. Even though your 2nd image example would be somewhat acceptable, another idea might be doing something like `BaseAsyncMcServer` and `BaseSyncMcServer`, which would cover both bedrock and java variants, with shared logic for sync or async behaviors.
Since it's likely that some people would want to use this kind of abstraction, but others the kind you proposed above (with shared logic for both sync and async, but only covering a single game variant), it's hard to choose which way to proceed here. We could even go with both, but that would make the resulting code an utter mess, with the concrete classes inheriting from both (for example the `AsyncJavaServer` would need to inherit `BaseJavaServer` and `BaseAsyncMcServer`, both of which are ABCs that themselves inherit from `McServer`).
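For illustration, a minimal sketch of that double inheritance (class names are taken from this discussion, not from the actual codebase, and the bodies are placeholders):

```python
import asyncio
from abc import ABC, abstractmethod

class McServer(ABC):
    """Root abstraction shared by every variant."""

class BaseJavaServer(McServer, ABC):
    """Shared logic for the Java variant, both sync and async."""

class BaseAsyncMcServer(McServer, ABC):
    """Shared logic for all async variants, both Java and Bedrock."""
    @abstractmethod
    async def status(self) -> str: ...

class AsyncJavaServer(BaseJavaServer, BaseAsyncMcServer):
    # The concrete class has to inherit from both ABC branches at once.
    async def status(self) -> str:
        return "status"  # placeholder body

assert asyncio.run(AsyncJavaServer().status()) == "status"
```

Even in this stripped-down form, every concrete class sits at the bottom of a diamond, which is the "utter mess" being described.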
An alternative would be to only use one of these approaches and implement the other one as a pure typing protocol, which avoids some of the inheritance mess, but it's still not very practical.
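A rough sketch of what that typing-protocol alternative could look like (the protocol and class names here are made up for illustration):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class SyncStatusLike(Protocol):
    """Hypothetical structural type for any server with a sync status()."""
    def status(self) -> object: ...

class FakeJavaServer:
    # No inheritance from the protocol needed; matching the shape is enough.
    def status(self) -> object:
        return {"players": 0}

server: SyncStatusLike = FakeJavaServer()
# runtime_checkable only verifies that the method exists, not its signature:
assert isinstance(server, SyncStatusLike)
```

This sidesteps the diamond inheritance, at the cost of the protocol being purely structural: nothing enforces that implementations actually exist until something is type-checked against them.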
Honestly, I don't think either of these solutions is very good, and while it is a bit weird that we use `async_`-prefixed methods in the server classes while using full async/sync variant classes in other places, it seems like a way less complex solution than any of these alternatives.
I tried to visualize your idea... Yes, it's not breaking LSP, but it's so hard and confusing...
Or, if we remove the useless last abstract classes:

Better, but still too confusing.
That's my point exactly, an approach like this is just too complex and I don't want to see it implemented. I was just using it to showcase what it would take to fully implement it. That is, without making assumptions about what's better (java and bedrock ABCs as base for both sync/async variants, or sync and async mcserver ABCs as base for both bedrock/java variants).
As I've already said, we could in theory just pick one of these options and only implement that (such as the approach you described in your first comment), but we shouldn't make assumptions for our users about which variant they'll need when using our code. And sure, we could just provide pure typing protocols for the other option, but at that point we're almost back at the original level of complexity, except the concrete classes won't need double inheritance (as typing protocols don't need to be inherited).
I still do want Kevin's opinion before closing as wontfix, but I firmly believe that this approach would just be way too cumbersome if done properly, and I'm against doing things improperly. Especially when all this would just be for the sake of making the server classes match the rest of the codebase.
Sidenote - I don't really understand why you're even considering creating abstract classes that just merge some other abstract classes and are then directly inherited by just a single concrete class in the diagrams; it's just extra code and makes things even bigger than they already would be. You made these in the first diagram images of both the first comment and this recent comment above.
> You made these in the first diagram images of both the first comment and this recent comment above.
In the first it was for rewriting sync ABCs, and in the second for creating abstractions for all actual methods (which is useless in Python, because abstract classes != interfaces).
Also, how about trying overloads?
```python
import typing
from abc import ABC, abstractmethod

class MCStatus(ABC):
    @typing.overload
    @abstractmethod
    def status(self): ...

    @typing.overload
    @abstractmethod
    async def status(self): ...

    @abstractmethod
    def status(self):
        raise NotImplementedError
```
Looks like this is exactly what we want, without the hill of abstractions. I will test it with a type checker later, when I have time, and if it works, I will implement it here.
The issue with this is that overloads don't mean "one of these will work"; they mean "all of these can be used, and should all be handled in the single implementation method". So for a concrete class to implement that abc/protocol with these overloads, it would need to satisfy both overloads in the concrete class, not just one.
However, providing both overloads has its problems. Consider for example the `AsyncJavaServer` class, which would now have to have both synchronous and asynchronous typing overloads, and handle them differently in the final implementation method (like raising `NotImplementedError` on sync, and doing the proper thing on async).
This may still sound ok conceptually, but a problem quickly arises here: we can't actually detect whether a method was called as sync or async inside the single implementation method. Especially when we consider the fact that async methods can be called like `coro = my_async_method()` and you can then do `await coro` later. So there is no good way to check whether a method was called synchronously or asynchronously, making handling both sync and async in a single implementation method impossible.
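This limitation is easy to demonstrate; inside the method body there is simply no signal about how the caller will consume the result (a standalone sketch, names made up):

```python
import asyncio

class Demo:
    async def status(self) -> str:
        # Nothing in here can tell whether the caller awaits immediately,
        # stores the coroutine for later, or never awaits it at all.
        return "ok"

async def main() -> None:
    demo = Demo()
    immediate = await demo.status()  # awaited right away
    coro = demo.status()             # just creates a coroutine object
    deferred = await coro            # awaited later; looks identical inside
    assert immediate == deferred == "ok"

asyncio.run(main())
```

Both call styles run exactly the same method body, so "dispatch on how I was called" is not something the implementation can do.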
`pyright` doesn't support overloads for abstract methods; it just ignores that it's an abstract method and reports `Overloaded implementation is not consistent with signature of overload 1`.
Also, I know that it's impossible to support async and sync implementations in one function, because a call of an async function actually returns an `Awaitable` object, which will be handled by the `await` near the call of the function.
> `pyright` doesn't support overloads for abstract methods; it just ignores that it's an abstract method and reports `Overloaded implementation is not consistent with signature of overload 1`.
Pyright has nothing to do with this; python doesn't support it. `typing.overload` doesn't provide any real functionality, and unlike overloads in other languages, it doesn't trigger the overloaded function that matches the given arguments; instead it's just there as a placeholder, to be overwritten by the implementation method. Pyright just picks up on these overloads.
Using `@abstractmethod` on them obviously doesn't make sense, because again, they're just placeholder values. The way python functions work makes it essentially equivalent to this:
```python
import typing
from typing import Literal

def _foo_1(x: Literal[5]) -> None:
    ...

def _foo_2(x: Literal[3], y: int) -> None:
    ...

def foo(x: int, y: int = 1) -> None:
    return None

# Each "overload" definition simply rebinds the same name:
my_func = typing.overload(_foo_1)
my_func = typing.overload(_foo_2)
my_func = foo
```
As can be seen here, the `my_func` variable holding the function simply gets overridden each time a new overload is defined; it doesn't matter if `my_func` also had the `abc.abstractmethod` decorator applied to it, as it just gets overridden later and the last overload will hold the actual implementation.
> Also, I know that it's impossible to support async and sync implementations in one function, because a call of an async function actually returns an `Awaitable` object, which will be handled by the `await` near the call of the function.
Yeah, meaning overloads are just inappropriate for this kind of thing.
What if things were split into sync and async submodules? Would that be any better, or does the added maintenance make it not a good option?
> What if things were split into sync and async submodules? Would that be any better, or does the added maintenance make it not a good option?
In the public API (`server.py`) it's quite unnecessary; there are only two classes plus one base for them.
How about we make abstract classes with abstract sync methods, but not add them to `__all__` (this will not be a breaking change, because `MCStatus` also is not in `__all__` now)? That way the `abc` package logic will be used just to validate the existence of the required methods.
This anyway breaks SOLID, however we can fully control the uses of these classes. All methods should just return `typing.Self`, and not the abstract classes. Nothing (and no one) can use these classes in type hints or `isinstance` checks.
We can also create type aliases for users, like:

```python
SyncMCServer: TypeAlias = Union[JavaServer, BedrockServer]
AsyncMCServer: TypeAlias = Union[AsyncJavaServer, AsyncBedrockServer]
```
And those can and should be used in type hints.
Type checkers still won't like this, as it still breaks LSP. A class with some async method can't inherit from a class with that method being sync. This would therefore require type ignores on all overridden methods, and I don't think that's justified.
Especially with the only benefit being that the `ABCMeta` logic will prevent initialization if all abstract methods weren't implemented; we can test that these methods were implemented in unit-tests on our own, we don't need ABCs just to ensure it. ABCs are only justified if they're there for some shared logic that relies on these abstract methods, which clearly isn't the case here, considering the abstract methods won't even be followed signature-wise in this example.
> Type checkers still won't like this
No errors by `pyright` with:
```python
from abc import ABC, abstractmethod

class Base(ABC):
    @abstractmethod
    def redef_me(self):
        ...

class Children(Base):
    async def redef_me(self):
        ...
```
> we can test that these methods were implemented in unit-tests on our own
That's an idea; we could use the structure that I showed in the "no-abstractions" variant, and check the existence of all methods inside tests. However, the `abc` package automates it, and type checkers also additionally check the signature.
> ABCs are only justified if they're here for some shared logic that relies on these abstract methods, which clearly isn't the case here considering the abstract methods won't even be followed signature-wise in this example.
This is the case; implementations do follow the signature, the only exclusion is the `async` prefix before `def`. Consider also that this difference is ignored by type checkers and by the `abc` package itself.
> No errors by `pyright` with:
>
> ```python
> from abc import ABC, abstractmethod
>
> class Base(ABC):
>     @abstractmethod
>     def redef_me(self):
>         ...
>
> class Children(Base):
>     async def redef_me(self):
>         ...
> ```
Indeed, however that doesn't mean it's correct. Consider this example, which also passes pyright, but is clearly not correct:
```python
class Base(ABC):
    @abstractmethod
    def redef_me(self, y: int) -> str:
        ...

class Children(Base):
    async def redef_me(self, x: str) -> int:
        ...
```
However, with cases like these, the signature wouldn't be correct anyway. Say we expect this abstract method to generally return some type `T`; with the sync implementation this is met, but with the async implementation we instead get a return type of `Coroutine[Any, Any, T]`, hence making the original signature in the ABC not match the overridden one.
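This mismatch is observable at runtime too; calling the async variant yields a coroutine rather than the annotated `T` (standalone functions, names made up):

```python
import asyncio
import inspect

def sync_fetch() -> str:
    return "pong"

async def async_fetch() -> str:
    return "pong"

assert sync_fetch() == "pong"       # really returns a str

coro = async_fetch()                # actually Coroutine[Any, Any, str]
assert inspect.iscoroutine(coro)    # not a str at all
assert asyncio.run(coro) == "pong"  # the str only appears after awaiting
```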
> > we can test that these methods were implemented in unit-tests on our own
>
> That's an idea; we could use the structure that I showed in the "no-abstractions" variant, and check the existence of all methods inside tests. However, the `abc` package automates it, and type checkers also additionally check the signature.
ABCs don't actually automate this. There are (very intentionally) no runtime checks that a class inheriting from an ABC implements all of its abstract methods on class definition. ABCs only prevent initialization. That means we wouldn't see this issue without testing this initialization anyway. And as can be seen in my example above, type checkers don't really care that much about the signature when overriding abstractmethods, so that's not actually an additional check.
> > ABCs are only justified if they're here for some shared logic that relies on these abstract methods, which clearly isn't the case here considering the abstract methods won't even be followed signature-wise in this example.
>
> This is the case; implementations do follow the signature, the only exclusion is the `async` prefix before `def`. Consider also that this difference is ignored by type checkers and by the `abc` package itself.
Again, they don't actually follow the signature, because async methods return coroutines. You are right that this difference is indeed ignored by pyright, but so are any other signature differences, as can be seen above. This may be an issue in the type checker (at least with pyright 1.1.270 it didn't seem like the signature was getting enforced at all). If this really is a bug, though, I suspect that if it were fixed, the async overrides would also be typing errors.
The reason I can say this with pretty good confidence is because in mypy, this actually would be an error, reporting: `Return type "Coroutine[Any, Any, str]" of "redef_me" incompatible with return type "str" in supertype "Base"`
It's also true that the `ABCMeta` class doesn't enforce it, as you said. However, you probably aren't very familiar with how ABCs work internally, as if you were, you'd know that ABCs don't have any signature checks here whatsoever. The way they can prevent initialization without all abstract methods being implemented is just by going over all attributes in the class and checking if they have `__isabstractmethod__` set on them (which is what gets added by the `@abstractmethod` decorator). When we then override the abstractmethod, this attribute obviously doesn't get set on the new function object, and so it's no longer detected. For this reason, ABCs would even allow you to override the abstractmethod with a string object, for example, without it being an issue. So yeah, ABCs do allow it, but it isn't really relevant here.
> Indeed, however that doesn't mean it's correct. Consider this example, which also passes pyright, but is clearly not correct:
I found that this is additional behavior in pyright; it's activated with the `reportIncompatibleMethodOverride` option.
> However, with cases like these, the signature wouldn't be correct anyway. Say we expect this abstract method to generally return some type `T`; with the sync implementation this is met, but with the async implementation we instead get a return type of `Coroutine[Any, Any, T]`, hence making the original signature in the ABC not match the overridden one.
Then how about using `Union`s in the abstract classes?
```python
from abc import ABC, abstractmethod
from typing import Any, Coroutine

class Base(ABC):
    @abstractmethod
    def redef_me(self) -> str | Coroutine[Any, Any, str]:
        ...

class Children(Base):
    def redef_me(self) -> str:
        ...

class AsyncChildren(Base):
    async def redef_me(self) -> str:
        ...
```
No errors by pyright with `reportIncompatibleMethodOverride` on.
Also, this will not break LSP, because client code (code that uses this code) must have handling for both returned types, and since at runtime there will always be only one of those types, there is no actual type error.
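To illustrate what that means for client code, a sketch of a caller handling the `Union` return (standalone functions stand in for the server classes; names are made up):

```python
import asyncio
import inspect
from typing import Any, Coroutine, Union

def sync_status() -> str:
    return "ok"

async def async_status() -> str:
    return "ok"

def handle(value: Union[str, Coroutine[Any, Any, str]]) -> str:
    # Code typed against the Union must be prepared for either shape.
    if inspect.isawaitable(value):
        return asyncio.run(value)
    return value

assert handle(sync_status()) == "ok"
assert handle(async_status()) == "ok"
```

Note the cost: every caller typed against `Base` has to carry this branch, even though any given concrete class only ever produces one of the two shapes.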
> ABCs don't actually automate this. There are (very intentionally) no runtime checks that a class inheriting from an ABC implements all of its abstract methods on class definition. ABCs only prevent initialization. That means we wouldn't see this issue without testing this initialization anyway.
In tests, we initialize these classes many times, and if we use `abc`, every one of those tests will fail with one clear error. But if we test signatures manually, there will be only a few tests failing with a clear error, and many fragile tests failing with something like `object has no attribute` or `takes ... positional arguments but ... were given`.
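For reference, the kind of single clear error the `abc` route produces at initialization (a minimal standalone example):

```python
from abc import ABC, abstractmethod

class Base(ABC):
    @abstractmethod
    def lookup(self): ...

class Incomplete(Base):
    pass  # forgot to implement lookup

try:
    Incomplete()
except TypeError as exc:
    # The message names the class and the missing abstract method(s).
    assert "Incomplete" in str(exc) and "lookup" in str(exc)
```

(The exact wording of the message varies between Python versions, but it always identifies the class and the unimplemented abstract methods.)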
I also would like to mention that we should use automation (pyright) instead of manually writing tests for things that a type checker covers.
> however, you probably aren't very familiar with how ABCs work internally, as if you were, you'd know that ABCs don't have any signature checks here whatsoever
Yeah, sorry. I forgot that `abc` doesn't check signatures at all.
Closing this, as it's not in a state to be a PR. If further discussion on this is desired, open an issue instead, where you can reference this PR for the existing discussion done here. PRs shouldn't be used for discussions like these when there's no code in them that's expected to be merged; that's the whole point behind issues.