RyanChen
me too
Hi @FoobarProtocol, thanks for your clear reply. After reading it, I reviewed the code, but I couldn't find a function named `GPTBigCodeOnnxConfig` in `model_configs.py`, so I went to the optimum GitHub...
```python
def generate(self, batch_data):
    # Build one prompt per item when given a list, otherwise a single prompt.
    if isinstance(batch_data, list):
        prompts = []
        for data in batch_data:
            prompts.append(self._generate_prompt(data))
    else:
        prompts = self._generate_prompt(batch_data)
    # Tokenize with padding and truncation so the batch stacks into tensors.
    inputs = self.tokenizer(
        prompts,
        return_tensors="pt",
        max_length=256,
        truncation=True,
        padding=True,
    )
    input_ids = ...
```
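The snippet cuts off at `input_ids`. For context, the rest of such a `generate` method typically moves the tensors to the model's device, calls `generate`, and decodes the new tokens. A minimal sketch of that continuation (not the original code; `self.model` and the generation arguments are assumptions):

```python
    # Sketch of a typical continuation (assumed, not the original code).
    input_ids = inputs["input_ids"].to(self.model.device)
    attention_mask = inputs["attention_mask"].to(self.model.device)

    output_ids = self.model.generate(
        input_ids,
        attention_mask=attention_mask,
        max_new_tokens=512,   # illustrative value
        do_sample=False,
    )

    # Decode only the newly generated tokens, dropping the prompt.
    completions = output_ids[:, input_ids.shape[1]:]
    return self.tokenizer.batch_decode(completions, skip_special_tokens=True)
```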
@jaideep11061982 Just copy the function named `generate_prompt` from https://github.com/nlpxucan/WizardLM/blob/main/WizardCoder/src/inference_wizardcoder.py
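For anyone who doesn't want to dig through the file: that helper just wraps the instruction in WizardCoder's Alpaca-style template, roughly like this (paraphrased sketch; see the linked `inference_wizardcoder.py` for the exact wording):

```python
def generate_prompt(instruction):
    # Alpaca-style instruction template (paraphrased from the linked script).
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:"
    )
```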
Thanks @sonichi for your reply! Following your answer, I set `default_auto_reply` on the `UserProxyAgent`:
```python
# create a UserProxyAgent instance named "user_proxy"
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
    default_auto_reply="no code...
```
> The local model you are using may not support empty messages in the list of messages. The UserProxyAgent sends a default empty message when no code is detected. In...
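So the workaround is simply to make sure the auto-reply is never an empty string. A minimal sketch under that assumption (the reply text is illustrative, not a required value):

```python
import autogen

# A non-empty default_auto_reply means the local model never receives an
# empty message when no executable code is detected in the assistant's reply.
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
    default_auto_reply="Please continue. Reply TERMINATE when the task is done.",  # illustrative
)
```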