
Feature Request: Github Copilot code completions

pedrosimao opened this issue on Sep 01 '23 · 17 comments

Is your feature request related to a problem? Please describe. I am searching for a GitHub Copilot alternative. Continue is great, but it lacks Copilot's inline code completions, which work without having to call the /edit command all the time. Also, with GitHub Copilot we can see multiple completion suggestions, so we can compare them.

Describe the solution you'd like Continue would become a total GitHub Copilot killer if it added inline code completion with multiple suggestions.

Describe alternatives you've considered I tried multiple VS Code plugins, but Continue is the best in terms of allowing us to connect to any API, and also being hacker-friendly and open-source.

pedrosimao avatar Sep 01 '23 14:09 pedrosimao

Hi @pedrosimao! Love this request. We decided not to focus on tab autocomplete at first because the features Continue provides are mostly complementary to tab autocomplete. At some point we do plan to add this, so it would be useful to hear more about what makes you want the alternative. Open to re-evaluating this priority!

Feel free to message me on the Discord as well about this if it's easier

sestinj avatar Sep 01 '23 19:09 sestinj

Hey, thanks for the super fast reply. I have just joined the Discord server.

Regarding your question:

so it would be useful to hear more about what makes you want the alternative

I just love being able to run everything locally on my machine (with the help of Ollama). Even though I trust GitHub not to mess with my data, I still prefer to be 100% safe. Imagine if some national intelligence agency managed to hijack the GitHub Copilot data stream and got their hands on my secret keys.

Finally, GitHub Copilot does not work offline. Think about working on an airplane or on a train with bad Wi-Fi... Local LLM models are the way to go!
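
For example, a quick way to test a local model is to hit Ollama's HTTP API directly. A minimal sketch, assuming Ollama is listening on its default port 11434 and that a code model such as "codellama" has been pulled (adjust the model name as needed):

// Minimal sketch: request a completion from a local Ollama server.
async function localComplete(prompt: string): Promise<string> {
    const res = await fetch("http://localhost:11434/api/generate", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ model: "codellama", prompt, stream: false }),
    });
    const data = await res.json() as { response: string };
    return data.response; // Ollama returns the generated text in "response"
}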

pedrosimao avatar Sep 02 '23 10:09 pedrosimao

Btw, I found a nice article on how to build code completion extensions: https://tomassetti.me/integrating-code-completion-in-visual-studio-code/
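
The core idea there is just registering a provider with the VS Code API. A rough sketch using the newer InlineCompletionItemProvider API, with a hard-coded suggestion purely for illustration:

// Sketch: register an inline (ghost text) completion provider.
import * as vscode from 'vscode';

export function activate(context: vscode.ExtensionContext) {
    const provider: vscode.InlineCompletionItemProvider = {
        provideInlineCompletionItems(document, position) {
            // Purely illustrative: always suggest a fixed snippet.
            const item = new vscode.InlineCompletionItem('console.log("hello");');
            item.range = new vscode.Range(position, position);
            return [item];
        },
    };
    context.subscriptions.push(
        vscode.languages.registerInlineCompletionItemProvider({ pattern: '**' }, provider)
    );
}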

pedrosimao avatar Sep 02 '23 10:09 pedrosimao

Another interesting code implementation: https://github.com/huggingface/huggingface-vscode

pedrosimao avatar Sep 02 '23 11:09 pedrosimao

Thank you! This is above and beyond : )

What I think I'll at least do in the coming days is take a quick attempt at adding this feature. If it turns out to be possible to do quickly, then we can release it in beta for you to try.

I want to be careful about shipping half-working features, but I can see why you would want this, and you aren't the first to ask.

sestinj avatar Sep 02 '23 18:09 sestinj

This is a short code snippet extracted from the Fauxpilot extension, from my fork of Venthe/vscode-fauxpilot.

// extension.ts
import { ExtensionContext, languages, window, workspace } from 'vscode';
// fauxpilotClient, statusBar, and FauxpilotCompletionProvider are defined
// elsewhere in the extension and omitted here.

export function activate(context: ExtensionContext) {
	console.debug("Registering Fauxpilot provider", new Date());

	const outputChannel = window.createOutputChannel("Fauxpilot");
	const extConfig = workspace.getConfiguration("fauxpilot");
	const version = context.extension.packageJSON.version;

	fauxpilotClient.version = version;
	fauxpilotClient.init(extConfig, outputChannel);
	fauxpilotClient.log("Fauxpilot start. version: " + version);

	// Document selector: which files the inline completion provider applies to.
	const fileFilter = extConfig.get("fileFilter", [{ pattern: "**" }]);
	fauxpilotClient.log('fileFilter: ' + JSON.stringify(fileFilter));

	context.subscriptions.push(
		languages.registerInlineCompletionItemProvider(
			fileFilter, new FauxpilotCompletionProvider(statusBar, extConfig)
		),
	);
}


// FauxpilotCompletionProvider.ts
import OpenAI from 'openai';
import {
    CancellationToken, InlineCompletionContext, InlineCompletionItem, InlineCompletionItemProvider, InlineCompletionList, Position, Range,
    TextDocument, StatusBarItem, WorkspaceConfiguration
} from 'vscode';
import { fetch } from './AccessBackend';
// fauxpilotClient (the shared config/logging client) and LEADING_LINES_PROP
// are defined elsewhere in the extension and omitted here.

export class FauxpilotCompletionProvider implements InlineCompletionItemProvider {

    constructor(private statusBar: StatusBarItem, private extConfig: WorkspaceConfiguration) { }

    // An async method must return a Promise, so use the Promise arm of
    // ProviderResult directly rather than suppressing the mismatch with @ts-ignore.
    public async provideInlineCompletionItems(document: TextDocument, position: Position, context: InlineCompletionContext, token: CancellationToken): Promise<InlineCompletionItem[] | InlineCompletionList> {

        const prompt = this.getPrompt(document, position);

        // fauxpilotClient.log(`Requesting completion for prompt: ${prompt}`);
        fauxpilotClient.log(`Requesting completion for prompt, length: ${prompt?.length ?? 0}`);

        if (this.isNil(prompt)) {
            fauxpilotClient.log("Prompt is empty, skipping");
            return Promise.resolve(([] as InlineCompletionItem[]));
        }

        if (token.isCancellationRequested) {
            fauxpilotClient.log('request cancelled.');
            return [];
        }

        fauxpilotClient.log("Calling OpenAi, prompt length: " + prompt?.length);
        const promptStr = prompt?.toString();
        if (!promptStr) {
            return [];
        }

        // fetch: get a completion from an OpenAI-API-compatible backend.
        return fetch(promptStr).then((response) => {
            // Drop results that arrive after the request was cancelled.
            if (token.isCancellationRequested) {
                fauxpilotClient.log('request cancelled.');
                return [];
            }
            const result = this.toInlineCompletions(response, position);
            fauxpilotClient.log("inline completions array length: " + result.length);
            return result;
        }).finally(() => {
            fauxpilotClient.log("Finished calling OpenAI");
        });

    }

    private getPrompt(document: TextDocument, position: Position): string | undefined {
        // e.g. with maxLines = 100 and LEADING_LINES_PROP = 0.1 (illustrative
        // values), the prompt is 10 lines from the top of the file plus the
        // 90 lines just above the cursor.
        const promptLinesCount = this.extConfig.get("maxLines") as number;

        /* 
        Put entire file in prompt if it's small enough, otherwise only
        take lines above the cursor and from the beginning of the file.
        */
        if (position.line <= promptLinesCount) {
            const range = new Range(0, 0, position.line, position.character);
            return document.getText(range);
        } else {
            const leadingLinesCount = Math.floor(LEADING_LINES_PROP * promptLinesCount);
            const prefixLinesCount = promptLinesCount - leadingLinesCount;
            const firstPrefixLine = position.line - prefixLinesCount;
            const prefix = document.getText(new Range(firstPrefixLine, 0, position.line, position.character));
            const leading = document.getText(new Range(0, 0, leadingLinesCount, 0));
            return leading + prefix;
        }
    }

    private isNil(value: string | undefined | null): boolean {
        return value === undefined || value === null || value.length === 0;
    }

    private toInlineCompletions(value: OpenAI.Completion, position: Position): InlineCompletionItem[] {
        if (!value.choices) {
            return [];
        }

        // The backend seems to always return a single choice.
        const choice1Text = value.choices[0].text;
        if (!choice1Text) {
            return [];
        }

        fauxpilotClient.log('Get choice text: ' + choice1Text);
        fauxpilotClient.log('---------END-OF-CHOICE-TEXT-----------');
        if (choice1Text.trim().length <= 0) {
            return [];
        }

        return [new InlineCompletionItem(choice1Text, new Range(position, position.translate(0, choice1Text.length)))];
    }

}
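
The fetch helper imported from ./AccessBackend isn't shown above. A minimal sketch of what such a call could look like, assuming the openai npm package pointed at an OpenAI-compatible backend (the base URL, model name, and parameters here are illustrative, not the fork's actual values):

// AccessBackend.ts (illustrative sketch, not the fork's actual code)
import OpenAI from 'openai';

// In practice these would come from the extension configuration.
const client = new OpenAI({
    baseURL: 'http://localhost:5000/v1', // any OpenAI-compatible server
    apiKey: 'dummy',                     // local servers typically ignore the key
});

export async function fetch(prompt: string): Promise<OpenAI.Completion> {
    return client.completions.create({
        model: 'codegen',   // backend-specific model name
        prompt,
        max_tokens: 64,
        temperature: 0.2,
        stop: ['\n\n'],     // keep inline suggestions short
    });
}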

And the effect is:

(two GIFs demonstrating the inline ghost-text completions)

The backend serving the GIFs above is codegen2 with Fauxpilot.

There is a question about performance, though: if the completion speed is not fast enough, it is very uncomfortable to use.

For me, on a 3060 12G with Code Llama 7B Q6_K, it's fine for chat and for generating a batch of code, but bad for inline completion: one request takes almost 1.5s to finish, sometimes 2.5s. That's too slow.

Still, it's worth a try.
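
One common mitigation for a slow backend (just a sketch, not code from the fork; the function name and 300ms delay are made up for illustration) is to debounce requests and honor the cancellation token, so the model is only called once typing pauses:

// Sketch: debounce completion requests and drop stale results.
import { CancellationToken } from 'vscode';
import { fetch } from './AccessBackend';

function delay(ms: number): Promise<void> {
    return new Promise(resolve => setTimeout(resolve, ms));
}

async function debouncedComplete(prompt: string, token: CancellationToken): Promise<string | undefined> {
    await delay(300); // wait for a pause in typing before hitting the backend
    if (token.isCancellationRequested) {
        return undefined; // the user kept typing; this request is stale
    }
    const response = await fetch(prompt);
    if (token.isCancellationRequested) {
        return undefined; // the result arrived too late to be useful
    }
    return response.choices[0]?.text;
}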

Aincvy avatar Sep 03 '23 02:09 Aincvy

@Aincvy as you mentioned 7b Q6_K, does the Fauxpilot server support ggml/gguf?

nikhil1raghav avatar Sep 03 '23 13:09 nikhil1raghav

@nikhil1raghav I don't think so.

It supports 2 backend types:

  • fastertransformer
    • GPTJ models can be used, after some conversion steps.
  • python
    • I didn't try this one.

text-generation-webui supports llama models, and it has an openai extension that provides an OpenAI-compatible API.

Aincvy avatar Sep 04 '23 02:09 Aincvy

I'd love to see this feature too. It would definitely make "continue" the best LLM IDE integration ever! Thanks for the great work!

alfredwallace7 avatar Nov 29 '23 15:11 alfredwallace7

+1 for this feature

BrianGilbert avatar Dec 01 '23 03:12 BrianGilbert

+1

deanrie avatar Dec 03 '23 17:12 deanrie

+1

flaviodelgrosso avatar Dec 15 '23 14:12 flaviodelgrosso

Appreciate all the +1's! This feature is now on our roadmap for the next month. I'll keep this thread updated as alpha and beta versions become available for testing.

sestinj avatar Dec 15 '23 23:12 sestinj

+1

lainswork avatar Jan 05 '24 10:01 lainswork

+1

TristanSchreiber avatar Jan 17 '24 13:01 TristanSchreiber

Progress is underway: https://github.com/continuedev/continue/pull/758

sestinj avatar Jan 17 '24 16:01 sestinj

Looks like the main ticket (#758) has already been merged, which is great news!

However, that ticket only covers VS Code. Is there anywhere we can follow the work on porting this to the IntelliJ plugin?

aaronfc avatar Feb 27 '24 21:02 aaronfc

This is also supported now in JetBrains! https://docs.continue.dev/walkthroughs/tab-autocomplete

sestinj avatar Jul 03 '24 22:07 sestinj