refactor: consider data types for logically-grouped variables
https://github.com/stoyan-stoyanov/llmflows/blob/6ce79a8cc3b06e799d2b6eb6f6e8e78354f68dcd/llmflows/llms/openai.py#L80
I would always consider whether the code can be made more informative and cleaner by, e.g., turning a tuple into a dedicated data type. Consider, for example:

```python
from dataclasses import dataclass


@dataclass
class OpenAIResult:
    generated_text: str
    call_data: CallData
    model_config: ModelConfig
```

where `CallData` and `ModelConfig` are data types as well: everything becomes much more self-documenting. Furthermore, you don't have to build maps (i.e. `dict`s) on the fly inside the methods.
You could also define a common interface, e.g. `BaseLLMResult`, and have `OpenAIResult` extend it (see the sketch below).
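For illustration, a minimal, self-contained sketch of what that hierarchy could look like (the field names in `CallData` and `ModelConfig` are hypothetical, not taken from the current codebase):

```python
from dataclasses import dataclass


@dataclass
class CallData:
    """Hypothetical: metadata about a single API call."""
    raw_output: dict
    retries: int


@dataclass
class ModelConfig:
    """Hypothetical: the configuration the call was made with."""
    model_name: str
    temperature: float
    max_tokens: int


@dataclass
class BaseLLMResult:
    """Fields shared by every provider's result."""
    generated_text: str
    call_data: CallData
    model_config: ModelConfig


@dataclass
class OpenAIResult(BaseLLMResult):
    """OpenAI-specific fields can be added here later."""


# named, typed access instead of unpacking a tuple or building a dict on the fly
result = OpenAIResult(
    generated_text="Hello!",
    call_data=CallData(raw_output={}, retries=0),
    model_config=ModelConfig(model_name="gpt-3.5-turbo", temperature=0.7, max_tokens=256),
)
print(result.generated_text)
```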
I am trying to avoid defining these until I start adding LLMs from other providers. Any idea on how we can make this generic enough beforehand?
> I am trying to avoid defining these until I start adding LLMs from other providers. Any idea on how we can make this generic enough beforehand?
Yes: you can define abstract data types and extend them whenever you add a different provider, e.g. `OpenAIResult` could just extend `BaseLLMResult` - if we assume that you will get a result most of the time. 😀
The beauty of having abstract interfaces is that you do not have to think about all the possible providers at once.
Reevaluating this since I added support for another LLM and the need for this has become more obvious. My worry is that dataclasses are not serializable by default.
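For what it's worth, `dataclasses.asdict()` recursively converts a dataclass (including nested ones) into a plain `dict`, so a small helper on the base class usually covers JSON serialization. A minimal sketch, assuming a `to_dict`/`to_json` helper would be an acceptable approach here:

```python
import json
from dataclasses import asdict, dataclass


@dataclass
class BaseLLMResult:
    generated_text: str

    def to_dict(self) -> dict:
        # asdict recursively converts nested dataclasses as well
        return asdict(self)

    def to_json(self) -> str:
        return json.dumps(self.to_dict())


print(BaseLLMResult(generated_text="Hello!").to_json())
# {"generated_text": "Hello!"}
```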