Dao Trung Thanh
If you use openai version > 1.0, set OPENAI_BASE_URL = "your base url". Else (openai version == 0.27.7), set OPENAI_API_BASE = "your base url" # http://127.0.0.1:8000 in my case, using litellm as...
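A minimal sketch of that setup, assuming openai >= 1.0 and a litellm proxy listening on http://127.0.0.1:8000 (the model name and the dummy key are placeholders):

```
import os

# openai >= 1.0 reads OPENAI_BASE_URL from the environment
os.environ["OPENAI_BASE_URL"] = "http://127.0.0.1:8000"
# openai == 0.27.7 reads OPENAI_API_BASE instead:
# os.environ["OPENAI_API_BASE"] = "http://127.0.0.1:8000"
os.environ["OPENAI_API_KEY"] = "sk-anything"  # placeholder; a local proxy may not validate it

from openai import OpenAI

client = OpenAI()  # picks up OPENAI_BASE_URL and OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder; the proxy maps this to its configured backend model
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```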
Is there any updated information?
I used "BAAI/bge-base-en" embedding and created succesfully a Chroma Database. ``` # Supplying a persist_directory will store the embeddings on disk persist_directory = '/content/drive/MyDrive/db' ## Here is the new embeddings...
I have set it up to 20 seconds in openai.py. #### def _create_retry_decorator(self) -> Callable[[Any], Any]: import openai min_seconds = 20 max_seconds = 60 # Wait 2^x * 1 second between...
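A sketch of what that modified decorator looks like, following the tenacity-based pattern LangChain's openai.py uses with the pre-1.0 openai SDK; max_retries comes from the surrounding class, and the list of retried error types here is abbreviated:

```
from typing import Any, Callable

from tenacity import (
    retry,
    retry_if_exception_type,
    stop_after_attempt,
    wait_exponential,
)


def _create_retry_decorator(self) -> Callable[[Any], Any]:
    import openai

    min_seconds = 20  # raised from the default so each retry backs off for longer
    max_seconds = 60
    # Wait 2^x * 1 second between each retry, starting at min_seconds and
    # capping at max_seconds, then stop after self.max_retries attempts.
    return retry(
        reraise=True,
        stop=stop_after_attempt(self.max_retries),
        wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),
        retry=retry_if_exception_type(
            (openai.error.Timeout, openai.error.APIError,
             openai.error.APIConnectionError, openai.error.RateLimitError)
        ),
    )
```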
USER_NAME = "Agent 007" # The name you want to use when interviewing the agent. LLM = ChatOpenAI(max_tokens=1500, request_timeout=120) # Can be any LLM you want. But I did not...
I use Llama 2 # Use a pipeline for later from transformers import pipeline, TextStreamer streamer = TextStreamer(tokenizer, skip_prompt=True) pipe = pipeline("text-generation", model=model, tokenizer= tokenizer, torch_dtype=torch.bfloat16, device_map="auto", max_new_tokens = 512,...
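A self-contained version of that pipeline, assuming a Llama 2 chat checkpoint on the Hugging Face Hub (the model ID is a placeholder for a gated checkpoint, and the pipeline here loads the model by ID instead of reusing a preloaded model object as in the original snippet):

```
import torch
from transformers import AutoTokenizer, TextStreamer, pipeline

model_id = "meta-llama/Llama-2-7b-chat-hf"  # placeholder; substitute the checkpoint you actually use

tokenizer = AutoTokenizer.from_pretrained(model_id)

# Stream generated tokens to stdout without re-printing the prompt
streamer = TextStreamer(tokenizer, skip_prompt=True)

# Use a pipeline for later (e.g. to wrap in a LangChain HuggingFacePipeline)
pipe = pipeline(
    "text-generation",
    model=model_id,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    max_new_tokens=512,
    streamer=streamer,
)

print(pipe("Explain retrieval-augmented generation in one sentence.")[0]["generated_text"])
```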
import os
os.environ["OPENAI_API_KEY"] = 'YOUR_OPENAI_API_KEY'
Open c:\Python311\Lib\site-packages\pyvis\network.py, find line 507, and change `with open(name, "w+") as out:` to `with open(name, "w+", encoding="utf-8") as out:`.
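For context, a sketch of the before/after around that line; only the open() call changes, and the write body is assumed for illustration:

```
# Before (pyvis/network.py, around line 507): opens the file with the platform
# default encoding, which can raise UnicodeEncodeError on non-ASCII labels (e.g. on Windows)
with open(name, "w+") as out:
    out.write(self.html)  # body assumed for illustration

# After: force UTF-8 so the generated HTML is always written correctly
with open(name, "w+", encoding="utf-8") as out:
    out.write(self.html)
```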
@RidiculousRonZzz Why don't I see the image that Visual Chat generated? https://drive.google.com/file/d/1Na3VTKgoKSMpa2FOe6Rojk8QCPHnmMyX/view?usp=sharing
I can use it with liteLLM (an OpenAI-compatible API proxy) and use it as an evaluator.