Feature Request: GitHub Copilot code completions
Is your feature request related to a problem? Please describe.
I am searching for a GitHub Copilot alternative. Continue
is great, but it lacks Copilot's inline code completions, which work without having to call the /edit
command all the time. Also, with GitHub Copilot we can see multiple completion suggestions, so we can compare them.
Describe the solution you'd like
Continue would become a total GitHub Copilot killer if it added code completion with multiple suggestions.
Describe alternatives you've considered
I tried multiple VS Code plugins, but Continue is the best in terms of allowing us to connect to any API, and also being hacker-friendly and open-source.
Hi @pedrosimao! Love this request. We decided not to focus on tab autocomplete at first because the features Continue provides are mostly complementary to tab autocomplete. At some point we do plan to add this, so it would be useful to hear more about what makes you want the alternative. Open to re-evaluating this priority!
Feel free to message me on the Discord as well about this if it's easier
Hey, thanks for the super fast reply. I have just joined the Discord server.
Regarding your question:
"so it would be useful to hear more about what makes you want the alternative"
I just love being able to run everything locally on my machine (with the help of Ollama). Even though I trust GitHub not to mess with my data, I still prefer to be 100% safe. Imagine if some national intelligence agency managed to hijack the GitHub Copilot data stream and got their hands on my secret keys?
Finally, GitHub Copilot does not work offline. Think about working on an airplane or on a train with bad Wi-Fi... Local LLM models are the way to go!
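For context, here is a minimal sketch (mine, not from any existing extension) of what asking a local Ollama server for a completion looks like. It assumes Ollama is running on its default port 11434 and that a code model has been pulled; the model name "codellama:7b-code" and the option values are assumptions, not requirements.

// ollama-completion.ts (hypothetical file name)
async function completeLocally(prompt: string): Promise<string> {
    // POST to Ollama's generate endpoint; stream: false returns a single JSON object.
    const res = await fetch('http://localhost:11434/api/generate', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
            model: 'codellama:7b-code', // assumed model name; use whatever you have pulled
            prompt,
            stream: false,
            options: { num_predict: 64, temperature: 0.2 }, // short, low-temperature completions
        }),
    });
    const data = (await res.json()) as { response: string };
    return data.response; // the generated completion text
}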
Btw, I found a nice article on how to do code completion extensions. https://tomassetti.me/integrating-code-completion-in-visual-studio-code/
Another interesting code implementation: https://github.com/huggingface/huggingface-vscode
Thank you! This is above and beyond : )
What I think I'll at least do in the coming days is take a quick attempt at adding this feature; if it's possible to do quickly, then we can release it in beta for you to try
I want to be careful about shipping half-working features, but I can see why you would want this, and you aren't the first to ask
This is a short code snippet extracted from the Fauxpilot extension, from my fork of Venthe/vscode-fauxpilot.
// extension.ts
import { ExtensionContext, languages, window, workspace } from 'vscode';
// fauxpilotClient, statusBar, and FauxpilotCompletionProvider are defined
// elsewhere in the fork and imported here in the full source.

export function activate(context: ExtensionContext) {
    console.debug("Registering Fauxpilot provider", new Date());
    let outputChannel = window.createOutputChannel("Fauxpilot");
    let extConfig = workspace.getConfiguration("fauxpilot");

    // Record the extension version and wire up logging before registering the provider.
    const version = context.extension.packageJSON.version;
    fauxpilotClient.version = version;
    fauxpilotClient.init(extConfig, outputChannel);
    fauxpilotClient.log("Fauxpilot start. version: " + version);

    // Which files to offer completions in; defaults to all files.
    const fileFilter = extConfig.get("fileFilter", [{ pattern: "**" }]);
    fauxpilotClient.log('fileFilter: ' + JSON.stringify(fileFilter));

    // Register the inline (ghost-text) completion provider with VS Code.
    context.subscriptions.push(
        languages.registerInlineCompletionItemProvider(
            fileFilter, new FauxpilotCompletionProvider(statusBar, extConfig)
        ),
    );
}
// FauxpilotCompletionProvider.ts
import OpenAI from 'openai';
import {
    CancellationToken, InlineCompletionContext, InlineCompletionItem, InlineCompletionItemProvider,
    InlineCompletionList, Position, Range, TextDocument, StatusBarItem, WorkspaceConfiguration
} from 'vscode';
import { fetch } from './AccessBackend';
// fauxpilotClient is imported from elsewhere in the fork (not shown here).

// Fraction of the prompt budget taken from the very top of the file;
// defined in the fork's constants (value here is an assumption).
const LEADING_LINES_PROP = 0.15;

export class FauxpilotCompletionProvider implements InlineCompletionItemProvider {

    constructor(private statusBar: StatusBarItem, private extConfig: WorkspaceConfiguration) { }

    // Declaring the return type as Promise (rather than ProviderResult) lets the
    // method be async without the @ts-ignore the original snippet needed.
    public async provideInlineCompletionItems(
        document: TextDocument,
        position: Position,
        context: InlineCompletionContext,
        token: CancellationToken
    ): Promise<InlineCompletionItem[] | InlineCompletionList> {
        const prompt = this.getPrompt(document, position);
        // fauxpilotClient.log(`Requesting completion for prompt: ${prompt}`);
        fauxpilotClient.log(`Requesting completion for prompt, length: ${prompt?.length ?? 0}`);

        if (this.isNil(prompt)) {
            fauxpilotClient.log("Prompt is empty, skipping");
            return [];
        }

        if (token.isCancellationRequested) {
            fauxpilotClient.log('request cancelled.');
            return [];
        }

        fauxpilotClient.log("Calling OpenAi, prompt length: " + prompt?.length);
        const promptStr = prompt?.toString();
        if (!promptStr) {
            return [];
        }

        // fetch: fetch data from an OpenAI-API-compatible backend.
        return fetch(promptStr).then((response) => {
            // if (token.isCancellationRequested) {
            //     fauxpilotClient.log('request cancelled.');
            //     return [];
            // }
            const result = this.toInlineCompletions(response, position);
            fauxpilotClient.log("inline completions array length: " + result.length);
            return result;
        }).finally(() => {
            fauxpilotClient.log("Finished calling OpenAi");
        });
    }

    private getPrompt(document: TextDocument, position: Position): String | undefined {
        const promptLinesCount = this.extConfig.get("maxLines") as number;

        /*
          Put the entire file in the prompt if it's small enough; otherwise
          take lines just above the cursor plus lines from the beginning of the file.
        */
        if (position.line <= promptLinesCount) {
            const range = new Range(0, 0, position.line, position.character);
            return document.getText(range);
        } else {
            const leadingLinesCount = Math.floor(LEADING_LINES_PROP * promptLinesCount);
            const prefixLinesCount = promptLinesCount - leadingLinesCount;
            const firstPrefixLine = position.line - prefixLinesCount;
            const prefix = document.getText(new Range(firstPrefixLine, 0, position.line, position.character));
            const leading = document.getText(new Range(0, 0, leadingLinesCount, 0));
            return leading + prefix;
        }
    }

    private isNil(value: String | undefined | null): boolean {
        return value === undefined || value === null || value.length === 0;
    }

    private toInlineCompletions(value: OpenAI.Completion, position: Position): InlineCompletionItem[] {
        if (!value.choices) {
            return [];
        }

        // It seems the backend always returns a single choice.
        const choice1Text = value.choices[0].text;
        if (!choice1Text) {
            return [];
        }

        fauxpilotClient.log('Get choice text: ' + choice1Text);
        fauxpilotClient.log('---------END-OF-CHOICE-TEXT-----------');
        if (choice1Text.trim().length <= 0) {
            return [];
        }

        // Insert the completion at the cursor as ghost text.
        return [new InlineCompletionItem(choice1Text, new Range(position, position.translate(0, choice1Text.length)))];
    }
}
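The AccessBackend module itself isn't shown in the snippet, so here is a hedged sketch of what the imported fetch helper might look like, judging only from how it is used above. The baseURL, apiKey, and model name are assumptions; any OpenAI-API-compatible server (FauxPilot, text-generation-webui's openai extension, etc.) should fit this shape.

// AccessBackend.ts (hypothetical reconstruction, not the fork's actual code)
import OpenAI from 'openai';

const client = new OpenAI({
    baseURL: 'http://localhost:5000/v1', // assumed address of the local backend
    apiKey: 'dummy',                     // local servers typically ignore the key
});

export function fetch(prompt: string): Promise<OpenAI.Completion> {
    return client.completions.create({
        model: 'fauxpilot', // assumed; many local backends ignore the model name
        prompt,
        max_tokens: 64,     // keep inline suggestions short and fast
        temperature: 0.2,
        n: 1,               // raising n would request the multiple suggestions
                            // asked for above, if the backend supports it
    });
}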
And the effect is: [GIF demos]
The backend server behind the GIFs above is codegen2 with fauxpilot.
But there is a question about performance: if completion isn't fast enough, the experience is very uncomfortable.
For me, on a 3060 12G with Code Llama 7B Q6_K, it's fine for chat and for generating a batch of code, but it's bad for inline completion: one request takes almost 1.5 s to finish, sometimes 2.5 s, which is just too slow.
But it's still worth a try.
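One common way to soften that latency (a hedged sketch, not something the fork does): debounce the request and honor VS Code's cancellation token, so the backend is only hit once typing pauses and stale requests are dropped.

// debounce.ts (hypothetical helper)
import { CancellationToken } from 'vscode';

function debounced<T>(
    run: () => Promise<T>,
    token: CancellationToken,
    delayMs = 300,
): Promise<T | undefined> {
    return new Promise((resolve) => {
        const timer = setTimeout(() => {
            // The user kept typing and VS Code cancelled this request: skip the call.
            if (token.isCancellationRequested) {
                resolve(undefined);
                return;
            }
            run().then(resolve);
        }, delayMs);
        token.onCancellationRequested(() => {
            clearTimeout(timer);
            resolve(undefined);
        });
    });
}

Inside provideInlineCompletionItems one could then write const response = await debounced(() => fetch(promptStr), token); and return [] when the result is undefined.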
@Aincvy as you mentioned 7b Q6_K, does the fauxpilot server support ggml/gguf?
@nikhil1raghav I don't think so.
It supports two types:
- fastertransformers
  - GPTJ models can be used, after some conversion operations.
- python
  - I didn't try this one.
text-generation-webui supports llama models, and it has an openai extension that provides an OpenAI-compatible API.
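If you go the text-generation-webui route, here is a hedged sketch (the URL and port are assumptions for its openai extension; adjust to your setup) of verifying the endpoint is up before pointing the extension at it, using the standard /v1/models route that OpenAI-compatible servers expose.

// check-backend.ts (hypothetical)
async function listModels(baseUrl = 'http://localhost:5000/v1'): Promise<void> {
    const res = await fetch(`${baseUrl}/models`);
    // Prints the model list the server exposes; an error here means the
    // backend isn't reachable or the openai extension isn't enabled.
    console.log('Available models:', JSON.stringify(await res.json()));
}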
I'd love to see this feature too. It would definitely make "continue" the best LLM IDE integration ever! Thanks for the great work!
+1 for this feature
+1
+1
Appreciate all the +1's! This feature is now on our roadmap for the next month. I'll keep this thread updated as alpha and beta versions become available for testing
+1
+1
Progress is underway: https://github.com/continuedev/continue/pull/758
Looks like the main ticket (#758) has already been merged, which is great news!
However, the ticket only covers VS Code. Is there any place we can follow the port to the IntelliJ plugin?
This is also supported now in JetBrains! https://docs.continue.dev/walkthroughs/tab-autocomplete