litellm
[Bug]: Static type checking issues with completion and acompletion methods.
What happened?
Issue:
When using litellm with pyright, the type checker reports errors on the return types of completion and acompletion.
Examples:
Dictionary access linting error:
import litellm

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"content": "Hello, how are you?", "role": "user"}],
    max_tokens=10,
)
print(response["choices"][0]["message"]["content"])
# Error: "__getitem__" method not defined on type "CustomStreamWrapper"
# (variable) response: ModelResponse | CustomStreamWrapper
Attribute access linting error:
print(response.choices[0].message.content)
# Cannot access attribute "choices" for class "CustomStreamWrapper"
# Error:
# - Attribute "choices" is unknown
# (variable) choices: List[Choices | StreamingChoices] | Unknown
# The list of completion choices the model generated for the input prompt.
# - Cannot access attribute "message" for class "StreamingChoices"
# Attribute "message" is unknown
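Until the signatures are fixed, a common workaround is to narrow the union with isinstance before accessing attributes. A minimal sketch of the pattern, using hypothetical stand-in classes in place of litellm's real ModelResponse and CustomStreamWrapper:

```python
from typing import Union

# Hypothetical stand-ins for litellm's return types, for illustration only.
class ModelResponse:
    def __init__(self, content: str) -> None:
        self.content = content

class CustomStreamWrapper:
    """Placeholder for the streaming wrapper."""

def fake_completion(stream: bool = False) -> Union[ModelResponse, CustomStreamWrapper]:
    # Mimics completion()'s union return type.
    return CustomStreamWrapper() if stream else ModelResponse("hi there")

response = fake_completion()
# isinstance narrows ModelResponse | CustomStreamWrapper to ModelResponse,
# so pyright accepts the attribute access below.
if isinstance(response, ModelResponse):
    print(response.content)
else:
    raise TypeError("got a streaming response for a non-streaming call")
```

The runtime check is redundant when stream is False, which is exactly why callers find it annoying, but it satisfies the checker without # type: ignore.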
Streaming linting error:
import asyncio
import os
import traceback
from litellm import acompletion
async def completion_call():
try:
print("test completion + streaming")
response = await acompletion(
model="gpt-3.5-turbo",
messages=[{"content": "Hello, how are you?", "role": "user"}],
stream=True,
)
async for chunk in response:
# Error: "ModelResponse" is not iterable
# "__aiter__" method not defined
# (variable) response: ModelResponse | CustomStreamWrapper
print(chunk)
except:
print(f"error occurred: {traceback.format_exc()}")
pass
asyncio.run(completion_call())
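The same narrowing works for the streaming branch: check for the stream wrapper before iterating. A self-contained sketch, with a hypothetical minimal async-iterable wrapper standing in for litellm's CustomStreamWrapper:

```python
import asyncio
from typing import List, Union

class ModelResponse:
    """Stand-in for the non-streaming response type."""

class CustomStreamWrapper:
    # Hypothetical minimal async iterator mimicking litellm's wrapper.
    def __init__(self, chunks: List[str]) -> None:
        self._chunks = chunks

    def __aiter__(self):
        self._it = iter(self._chunks)
        return self

    async def __anext__(self) -> str:
        try:
            return next(self._it)
        except StopIteration:
            raise StopAsyncIteration

async def fake_acompletion(*, stream: bool) -> Union[ModelResponse, CustomStreamWrapper]:
    # Mimics acompletion()'s union return type.
    return CustomStreamWrapper(["a", "b"]) if stream else ModelResponse()

async def main() -> List[str]:
    response = await fake_acompletion(stream=True)
    # Narrow the union before iterating so `async for` type-checks.
    if not isinstance(response, CustomStreamWrapper):
        raise TypeError("expected a stream when stream=True")
    return [chunk async for chunk in response]

print(asyncio.run(main()))
```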
Related Issues: https://github.com/BerriAI/litellm/issues/2006
Are you a ML Ops Team?
No
What LiteLLM version are you on ?
v1.60.5
Twitter / LinkedIn details
No response
I recently started using this package. I'm interested in this issue.
If @overload is added to completion, it would eliminate the need for isinstance checks to achieve type safety, which would greatly simplify my project's codebase.
I also think introducing generics might be effective.
If I have the resources available, I'd like to open a PR for this project to replace #8456 (which has been stalled for nearly two months).
@junkmd have you had any progress on that?
Hi, @NickSherrow
Lately I've been busy, and with @miraclebakelaser not having signed the new CLA, I'm unsure if it's okay to cherry-pick, so I haven't been able to start yet.
However, I'm planning to add @overloads to completion and acompletion like the following in #8456.
https://github.com/BerriAI/litellm/blob/5b0309e6f9f81b9870aaf1277ed1df7febad598b/litellm/main.py#L866-L872
@overload
def completion( # type: ignore # noqa: PLR0915
model: str,
*,
- stream: bool,
+ stream: Literal[False],
**kwargs: Unpack[CompletionParams],
-) -> Union[ModelResponse, CustomStreamWrapper]: ...
+) -> ModelResponse: ...
+
+
+@overload
+def completion( # type: ignore # noqa: PLR0915
+ model: str,
+ *,
+ stream: Literal[True],
+ **kwargs: Unpack[CompletionParams],
+) -> CustomStreamWrapper: ...
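For reference, the shape of that overload pattern can be sketched standalone. This uses hypothetical stand-in classes and drops the **kwargs: Unpack[CompletionParams] detail from #8456; it only demonstrates how Literal-typed stream selects the return type:

```python
from typing import Literal, Union, overload

# Hypothetical stand-ins for litellm's return types.
class ModelResponse: ...
class CustomStreamWrapper: ...

@overload
def completion(model: str, *, stream: Literal[False] = ...) -> ModelResponse: ...
@overload
def completion(model: str, *, stream: Literal[True]) -> CustomStreamWrapper: ...

def completion(
    model: str, *, stream: bool = False
) -> Union[ModelResponse, CustomStreamWrapper]:
    # Runtime behaviour: the return type follows the `stream` flag.
    return CustomStreamWrapper() if stream else ModelResponse()

# With the overloads, pyright infers ModelResponse here, no isinstance needed.
resp = completion("gpt-3.5-turbo")
print(type(resp).__name__)
```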
If you have the resources to take over the PR, I'll cooperate as much as possible.
Hi all,
Just wanted to add that I'm also experiencing this static type checking issue with Pyright.
For non-streaming calls to both litellm.completion and litellm.acompletion, Pyright frequently flags errors like Cannot access attribute "choices" for class "CustomStreamWrapper" (and similar for .message or .usage). It seems to struggle with the ModelResponse | CustomStreamWrapper union type, defaulting to CustomStreamWrapper even when a ModelResponse is expected and returned at runtime.
I've encountered this across a few recent versions (including around v1.68.0 - v1.70.0) and have ended up using # type: ignore on those lines as a temporary workaround.
Looking forward to a potential fix in the type hints!
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.
Not stale, and quite frankly surprising that this still hasn't been fixed for a project of this size... Are most people really not using type checkers? We are currently getting around this by using a thin wrapper around litellm to fix the incorrect typing.
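A thin wrapper like the one mentioned can be sketched as follows. The names here are hypothetical (typed_completion is not part of litellm, and stand-in classes replace the real imports); the idea is to pin stream=False and narrow the union once, so call sites see a plain ModelResponse:

```python
from typing import Any, Union

# Hypothetical stand-ins for litellm's types.
class ModelResponse: ...
class CustomStreamWrapper: ...

def completion(**kwargs: Any) -> Union[ModelResponse, CustomStreamWrapper]:
    # Stand-in for litellm.completion with its union return type.
    return CustomStreamWrapper() if kwargs.get("stream") else ModelResponse()

def typed_completion(**kwargs: Any) -> ModelResponse:
    # Force a non-streaming call and narrow the union with one runtime
    # check, so downstream code needs no isinstance or # type: ignore.
    resp = completion(stream=False, **kwargs)
    if not isinstance(resp, ModelResponse):
        raise TypeError("expected ModelResponse when stream=False")
    return resp
```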
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.
I'm with @JoongWonSeo on this one. This should not be stale.