lin.yunfan
If the response mode is default, everything is fine.

llm setting:

```
llm = ChatOpenAI(temperature=0.3, model_name=MODEL)
llm_predictor = LLMPredictor(llm=llm)
```

prompt helper setting:

```
max_input_size = 4096
num_output = 1500
max_chunk_overlap...
```
### System Info

0.0.166

### Who can help?

@agola11

### Information

- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [X] LLMs/Chat...
### How to reproduce this bug?

```python
import weaviate
from weaviate.collections.classes.config import Property, DataType, ReferenceProperty
from weaviate.collections.classes.filters import Filter
from base_dependencies import load_config

client = weaviate.connect_to_custom(
    http_host=load_config("weaviate")["host"],
    http_port=load_config("weaviate")["http_port"],
    http_secure=False,
    grpc_host="localhost",
    ...
```
I am trying to install them following http://docs.ceph.com/docs/master/rbd/iscsi-target-cli-manual-install/
How to reproduce the problem:

```python
@Memoize(10000, timedelta(seconds=100))
async def foo_a(a: int) -> int:
    raise Exception("test")
    return a

@foo_a.key
def _(a: int) -> str:
    return f"a:{a}"

async def main():
    await foo_a(1)

if...
```
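The `Memoize` decorator used in the repro isn't shown. For context, here is a minimal sketch of what an async memoizing decorator with a `.key` registration hook could look like; every name and behavior here is an assumption for illustration, not the actual library's implementation:

```python
# Hypothetical sketch of an async memoize decorator with a pluggable
# key function and TTL-based expiry. Not the library's real code.
import asyncio
import time
from datetime import timedelta


class Memoize:
    def __init__(self, maxsize: int, ttl: timedelta):
        self.maxsize = maxsize
        self.ttl = ttl.total_seconds()
        self._cache = {}  # key -> (expiry_timestamp, value)
        # Default key: stringify positional and sorted keyword args.
        self._key_fn = lambda *a, **kw: str((a, tuple(sorted(kw.items()))))

    def __call__(self, fn):
        async def wrapper(*args, **kwargs):
            key = self._key_fn(*args, **kwargs)
            now = time.monotonic()
            hit = self._cache.get(key)
            if hit is not None and hit[0] > now:
                return hit[1]  # fresh cache hit
            value = await fn(*args, **kwargs)
            if len(self._cache) >= self.maxsize:
                # Evict the oldest inserted entry (dicts keep insertion order).
                self._cache.pop(next(iter(self._cache)))
            self._cache[key] = (now + self.ttl, value)
            return value

        # Expose .key so callers can register a custom key function,
        # matching the @foo_a.key usage in the repro above.
        wrapper.key = self.key
        return wrapper

    def key(self, key_fn):
        self._key_fn = key_fn
        return key_fn
```

Under this sketch, a second call with the same arguments inside the TTL returns the cached value without re-running the wrapped coroutine; whether the real decorator also caches raised exceptions (the behavior the repro exercises) would depend on its actual implementation.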