
[Typing] Functions (and Protocols) as dependencies - and dependees?

Open JosXa opened this issue 3 years ago • 6 comments

The type annotation for Binder.bind is as follows:

def bind(
    self,
    interface: Type[T],
    to: Union[None, T, Callable[..., T], Provider[T]] = None,
    scope: Union[None, Type['Scope'], 'ScopeDecorator'] = None,
) -> None:
    ...

The following test case however proves that Callables are also a valid dependency:

from typing import Protocol

from injector import Binder, Injector, InstanceProvider


def test_injector_can_inject_a_protocol():
    class Func(Protocol):
        def __call__(self, a: int) -> int:
            ...

    def func_matching_protocol(a: int) -> int:
        return a + 1

    def configure(binder: Binder):
        binder.bind(Func, to=InstanceProvider(func_matching_protocol))

    inj = Injector([configure])
    func = inj.get(Func)
    assert func(1) == 2

[ The Protocol doesn't really have a purpose here other than to show that it's also allowed ]

➡️ Two situations:

  1. This is a bug. You should not be able to declare callables or coroutines as dependencies (though I have no idea why that should be the case; it's very useful).
  2. It's an intended feature. Then the type annotation should not say interface: Type[T], but rather something else that also includes Callables and Coroutines, and probably also a few other cases.

Continuing 2., Pylance (the new VSCode type checker based on pyright) reports the Func protocol being unassignable to a generic Type:

[screenshot: Pylance reporting that the Func protocol is not assignable to Type[T]]

Consequently, inj.get(Func) and func(1) are also inferred as T, which is a "class instance-like type" (not sure what the right term is).

A similar issue arises with @inject, though there it's more subtle. As far as I understand, the InitializerOrClass helper should be a ClassVar instead of a TypeVar..? I'd have to dig deeper.


Aaand last but not least, a related but different question:

It is abundantly clear that support for @inject on callables has been removed, and I guess you must have had good reasons for that. Assuming I want to shoot myself in the foot, is there a simple workaround to get this feature back with the existing code base? Is it as simple as injector.call_with_injection(...), or is there more to it, and are there things that might get removed as well?

I'm thinking that for more performance-critical things that get called a ton, I'd rather not succumb to the DI framework's iron chains and create per-request scoped classes where a bunch of functions would totally suffice.

For context, here is what I'm trying to do:

from abc import abstractmethod
from typing import Any, Coroutine, Protocol, TypeVar, Union

TContext = TypeVar("TContext")
# NextDelegate is defined elsewhere (not shown here).


class Middleware(Protocol[TContext]):
    """ 
    A protocol type is easier to document and has slightly better tooling support than declaring `Callable`s inline. 
    It can also host an `@abstractmethod` decorator. 
    """
    @abstractmethod
    def __call__(
        self, context: TContext, call_next: NextDelegate[TContext]
    ) -> Union[Any, Coroutine[Any, Any, Any]]:
        ...

Library users should be able to bind their own Middleware functions using injector, but also be able to receive additional request-scoped instances. These then get chained together like this:

[diagram: the bound Middleware functions chained together]

This is going to be a realtime chatbot, so performance matters. I've seen how nicely FastAPI integrates dependencies nested arbitrarily deep into functions, which is what I'm aiming for, but without the clutter that the arg: Type = Depends(XYZ) syntax brings with it: every parameter should simply come from DI or be annotated as NoInject. That pattern is nicer.

JosXa avatar Dec 07 '20 23:12 JosXa

"The following test case however proves that Callables are also a valid dependency"

No, it doesn't, but it may give you this impression and start you on the wrong track.

The static type hints tell you what works and is supported (Types). The runtime doesn't prevent you from shooting yourself in the foot if you want to violate the type hints, binder.bind(str, to=3.14159) will give you a float when you request a str – this doesn't have much to do with functions or protocols, just with there not being runtime type verification.
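
For illustration, a minimal runnable version of that example (the configure function name is just for this sketch):

from injector import Binder, Injector


def configure(binder: Binder) -> None:
    binder.bind(str, to=3.14159)  # violates the type hints on purpose


inj = Injector([configure])
value = inj.get(str)
print(type(value))  # <class 'float'> at runtime; a static checker should flag the bind() call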

The support for injecting partially applied functions has been removed. You can still declare dependencies in a free function (as in, not a class constructor) and call it with Injector.call_with_injection() to provide those.
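
A minimal sketch of that pattern, assuming @inject can mark the dependencies of a free function and that call_with_injection() merges explicitly passed kwargs with injected arguments; the Database class and handle_message function are made up for illustration:

from injector import Injector, inject


class Database:
    def lookup(self, key: str) -> str:
        return f"value for {key}"


@inject
def handle_message(db: Database, text: str) -> str:
    # `db` is resolved by the injector; `text` is supplied explicitly below.
    return db.lookup(text)


injector = Injector()
# Parameters not present in kwargs are injected; the rest are passed through.
result = injector.call_with_injection(handle_message, kwargs={"text": "hello"})
assert result == "value for hello"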

jstasiak avatar Dec 08 '20 10:12 jstasiak

Sounds like I should make a PR for the specific issues I found with the typings

JosXa avatar Dec 13 '20 06:12 JosXa

That depends on what you mean by specific issues in this context as there are no issues demonstrated in this thread in my opinion.

jstasiak avatar Dec 13 '20 10:12 jstasiak

I understand there may be some issues with allowing injection of function protocols, but they seem like a reasonable use-case when backed by a Protocol. (I understand a generic Callable definition would lose its typing at runtime; that would appear not to be the case for a Protocol with __call__, as it keeps its runtime typing.)

On top of this, talking more generally about the Protocol type, the other use case for Protocol is about as close as I've found in Python to a Java interface, so it also feels like a pretty desirable use case to support in an injection framework. In fact, if you ignore the mypy warnings, this use case seems to work with the injector code on master.

From my perspective the use-cases seem sensible, but there's probably a good chunk of knowledge about the issues with injecting partially applied functions that I'm not aware of, and that feels like the pain point here.

With usage of @runtime_checkable on the Protocol class, you can use isinstance() on it, which might ease some of the implementation pain?
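
For reference, a minimal sketch of that check, reusing the Func protocol from the test case at the top of the thread:

from typing import Protocol, runtime_checkable


@runtime_checkable
class Func(Protocol):
    def __call__(self, a: int) -> int:
        ...


def func_matching_protocol(a: int) -> int:
    return a + 1


# isinstance() against a runtime-checkable protocol only verifies that the
# members exist (here: __call__); it does not check signatures or return types.
assert isinstance(func_matching_protocol, Func)
assert not isinstance("not a callable", Func)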

I think with some sensible use of @overload on the binder.bind method, it might be possible to isolate these use cases and allow some type checking that things aren't entirely wrong, if your concerns are around pushing a load of checking to run-time?
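
Purely as a hypothetical sketch of what that could look like (this is not the actual injector signature; the scope parameter is omitted and the second overload for callable interfaces is an assumption that would need checking against mypy/Pylance):

from typing import Any, Callable, Optional, Type, TypeVar, Union, overload

T = TypeVar("T")
CallableT = TypeVar("CallableT", bound=Callable[..., Any])


class Binder:
    @overload
    def bind(self, interface: Type[T], to: Union[None, T, Callable[..., T]] = None) -> None:
        ...

    @overload
    def bind(self, interface: CallableT, to: Optional[CallableT] = None) -> None:
        ...

    def bind(self, interface: Any, to: Any = None) -> None:
        # Runtime behaviour is unchanged; the overloads only narrow what the
        # static checker accepts for the interface/to pairing.
        ...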

At the very least, it would seem simple enough to support the second Protocol use case for now, and fail at runtime for any Protocol passed to bind that has a __call__ method defined, to avoid the injection of partially applied functions. It feels like that route would warrant a separate issue from this one - which I'll happily raise and think about a PR for if it seems reasonable?

davebirch avatar Feb 01 '21 12:02 davebirch

What you're saying is not unreasonable. Feel free to raise an issue and flesh out the details there, I'm interested.

jstasiak avatar Feb 02 '21 07:02 jstasiak

@JosXa just having another read of your requirements, I have an idea you might want to try.

Given that the thing you want to inject is a function, which stores no state and is presumably defined once for the binding, I'm not sure the scoping issues you mention exist: you could potentially just wrap it in a class and bind that at singleton scope.

As a quick pseudocode sketch, using your Middleware protocol:

class MiddlewareWrapper:
    def __init__(self, function: Middleware) -> None:
        self.function = function

    def callback(self, context, call_next):
        # Assume this follows the Middleware protocol's __call__ signature.
        return self.function(context, call_next)


def concrete_function(context, call_next):
    # Assume this follows the Middleware protocol definition.
    ...


def bindings(binder: Binder) -> None:
    binder.bind(MiddlewareWrapper, to=MiddlewareWrapper(concrete_function), scope=singleton)

You then also have extra control in the wrapper class to handle binding multiple middleware functions, or whatever other bindings you might want to support.

For users binding chained middleware, for example, you could have a ChainedMiddlewareWrapper, which takes a List[Middleware] on construction and performs your chaining functionality in the callback method.
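
A rough sketch of that idea, assuming the chain bottoms out in some final handler (the final_handler argument and the functools.partial chaining are illustrative assumptions, reusing Middleware, NextDelegate and TContext from the snippet above):

from functools import partial
from typing import Generic, List


class ChainedMiddlewareWrapper(Generic[TContext]):
    def __init__(self, middlewares: List[Middleware[TContext]]) -> None:
        self.middlewares = list(middlewares)

    def callback(self, context: TContext, final_handler: NextDelegate[TContext]):
        # Fold from the inside out so the first middleware in the list runs first;
        # each middleware receives the next step in the chain as call_next.
        call_next = final_handler
        for middleware in reversed(self.middlewares):
            call_next = partial(middleware, call_next=call_next)
        return call_next(context)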

As a disclaimer, I've not tried it, but it would seem to work logically based on what you've suggested.

davebirch avatar Feb 02 '21 16:02 davebirch