
Keeps responding back with tokens

Open cmosguy opened this issue 2 years ago • 0 comments

I keep getting FIM tokens in the responses. Am I supposed to strip these out directly in the code, or is there a setting in the llm-vscode extension that handles this?

<fim_prefix>
import debugpy


# create a class called car
class Car:
    # create a method called drive
    def drive(self):
        print("driving")


# create an object called my_car
my_car =    <fim_suffix>
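For context, `<fim_prefix>`, `<fim_suffix>`, and `<fim_middle>` are StarCoder-style fill-in-the-middle control tokens that should be consumed by the model, not echoed back to the editor. As a minimal server-side workaround (a sketch, not the extension's built-in fix; the exact token set is an assumption and may differ for your model), you could strip them from the generated text before returning it:

```python
# Assumed StarCoder-style FIM special tokens; adjust to match your model's tokenizer.
FIM_TOKENS = ["<fim_prefix>", "<fim_middle>", "<fim_suffix>", "<|endoftext|>"]


def strip_fim_tokens(text: str) -> str:
    """Remove FIM control tokens that leaked into a completion."""
    for tok in FIM_TOKENS:
        text = text.replace(tok, "")
    return text


# Example: a raw completion with leaked control tokens
raw = "<fim_prefix>my_car = <fim_suffix><fim_middle>Car()"
print(strip_fim_tokens(raw))  # → my_car = Car()
```

A cleaner long-term fix is to register these strings as special tokens in the tokenizer and pass `skip_special_tokens=True` when decoding, so they never appear in the output at all.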

cmosguy avatar Oct 10 '23 21:10 cmosguy