Prompt to enable asking LLMs about Cursorless
I want to send my Cursorless cheat sheet to ChatGPT with a prompt. Can you please provide a machine-readable version of the cheat sheet for me to do so? I found `cursorless_cheat_sheet_get_json`, which can be used for that, but I've heard it may go away.
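For reference, a minimal sketch of what that could look like as a Talon user action today, assuming `cursorless_cheat_sheet_get_json` is still exposed (and bearing in mind it may be removed):

```python
# Minimal sketch: dump the Cursorless cheat sheet to a JSON file so it
# can be pasted into a ChatGPT prompt. Assumes the (possibly deprecated)
# cursorless_cheat_sheet_get_json action still exists and returns a
# JSON-serializable structure.
import json

from talon import Module, actions

mod = Module()


@mod.action_class
class Actions:
    def cursorless_dump_cheat_sheet(path: str):
        """Write the Cursorless cheat sheet to a JSON file"""
        cheat_sheet = actions.user.cursorless_cheat_sheet_get_json()
        with open(path, "w") as f:
            json.dump(cheat_sheet, f, indent=2)
```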
I think we probably want a prompt that includes the cheat sheet, some example commands, and a bit of description of what Cursorless is. I also wonder whether it's better to provide examples to ChatGPT as spoken forms or as Cursorless API payloads. Then we could use our spoken form generator to generate the custom spoken forms.
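As a rough illustration of that prompt structure (the wording and the `build_prompt` helper below are hypothetical, and the example commands are shown as spoken forms rather than API payloads):

```python
# Hypothetical sketch of the prompt described above: a short description
# of Cursorless, the cheat sheet as JSON, and a few example commands as
# spoken forms. Everything here is illustrative, not a settled design.
import json


def build_prompt(cheat_sheet: dict) -> str:
    examples = [
        '"take air" selects the token marked with a hat over the letter a',
        '"chuck line bat" deletes the line containing the token marked b',
        '"pour air" starts a new line below the line containing the token marked a',
    ]
    return "\n\n".join(
        [
            "Cursorless is a spoken language for structural code editing.",
            "Here is its cheat sheet as JSON:",
            json.dumps(cheat_sheet),
            "Example commands (spoken forms):",
            "\n".join(examples),
        ]
    )
```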
It would be cool to add an LLM connection used by a special command, which could map the user's request directly to a Cursorless command. It would also be a good way to learn those complex commands, but I honestly think it could be a productivity feature :)
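Something like the following on the Talon side, perhaps (the `llm_complete` helper is a placeholder for whatever LLM client gets chosen, and replaying the result via Talon's `mimic` is just one possible wiring):

```python
# Hypothetical sketch: ask an LLM to translate a natural-language request
# into a Cursorless spoken form, then replay it as if the user had spoken
# it. llm_complete() is a placeholder, not a real API.
from talon import Module, actions

mod = Module()


def llm_complete(prompt: str) -> str:
    """Placeholder: call your LLM of choice and return its reply"""
    raise NotImplementedError


@mod.action_class
class Actions:
    def cursorless_ai_command(request: str):
        """Translate a natural-language request into a Cursorless command"""
        spoken_form = llm_complete(
            "Translate this editing request into a single Cursorless "
            f"spoken command, and reply with only the command: {request}"
        )
        # Replay the generated command as if the user had spoken it
        actions.mimic(spoken_form)
```

A `.talon` rule along the lines of `model command <user.text>: user.cursorless_ai_command(text)` could then expose it by voice.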
And to the OP: yes, it would be good to have a command that gives chatty help. I'd consider that more educational.
I’m capable of adding something that talks to an LLM to translate natural English into a complex Cursorless command or commands. It might even be good to confirm the command with the user.
My main question is: where should such a thing go?
- In cursorless-talon?
- In the extension?
- Somewhere else?
Thank you!
That's a good question. There are tradeoffs to putting it in either place. The advantage of extension-side is that it's easier to push updates, and we have a lot more sophisticated machinery there around testing, devops, etc. The advantage of talon-side is that it's easier to enable user customization.
Based on those tradeoffs, do you have a gut instinct? Also happy to chat about it in a meet-up (see calendar). Let me know if you're planning to make it to a meet-up, though, so we can find a time when we'll both be there.
Thanks Pokey!
My gut says the harder way is the better way here, for the most good for most folks: on the Talon side.
- But that doesn’t mean the app couldn’t maintain its own customizations.
- If this is accomplished on the Talon side, then how would a VS Code user configure their LLM connection? Then again, how many people use the VS Code extension and Cursorless without also being aware of how to configure the Talon backend? (A sketch of what that configuration might look like follows.)
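For what it's worth, Talon-side configuration could plausibly be a couple of ordinary Talon settings (the setting names here are invented for illustration):

```python
# Hypothetical sketch: LLM configuration as ordinary Talon settings,
# overridable from any user .talon file. Setting names are made up.
from talon import Module

mod = Module()

mod.setting(
    "cursorless_llm_endpoint",
    type=str,
    default="https://api.openai.com/v1/chat/completions",
    desc="Endpoint for the LLM used by Cursorless AI commands",
)
mod.setting(
    "cursorless_llm_model",
    type=str,
    default="gpt-4o-mini",
    desc="Model name to request from the LLM endpoint",
)
```

A user would then override these in any `.talon` file under a `settings():` block, which keeps customization in familiar Talon territory.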
Next questions:
- It’d be a better user experience to stream the LLM output. Is the Talon backend capable of doing that dance with Cursorless, or would it basically have built-in latency from waiting for the response to complete? (See the transport sketch after this list.)
- Then there's the question of what happens with sounds spoken while the response is being generated, and how those influence the handling of the response.
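On the streaming question: Talon scripts are ordinary Python, so reading an HTTP response chunk by chunk is possible in principle; the open question is how Cursorless would consume partial output. A minimal sketch of the transport side, using `requests` (the endpoint, payload, and OpenAI-style SSE wire format are assumptions):

```python
# Minimal sketch: stream an LLM chat completion and yield text fragments
# as they arrive, rather than waiting for the full response. Assumes an
# OpenAI-style server-sent-events wire format.
import json

import requests


def stream_completion(url: str, api_key: str, prompt: str):
    response = requests.post(
        url,
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "model": "gpt-4o-mini",
            "stream": True,
            "messages": [{"role": "user", "content": prompt}],
        },
        stream=True,
        timeout=60,
    )
    for line in response.iter_lines():
        if not line.startswith(b"data: "):
            continue
        data = line[len(b"data: "):]
        if data == b"[DONE]":
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            yield delta  # hand each fragment to the caller as it arrives
```

Whether a generator like this could feed Cursorless incrementally, without blocking Talon's main loop, is exactly the part that would need prototyping.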
By the way, check these out:
https://github.com/C-Loftus/talon-ai-tools
https://youtu.be/FctiTs6D2tM?si=QvjBmAAinngk9NrY