
Use AI instead of pattern recognition

Open · g1thubhack3r opened this issue 7 months ago · 11 comments

As we know, thefuck uses pattern recognition.
But pattern recognition lacks extensibility and support for new commands. So why don't we train an LLM that corrects commands? Or we could use an existing LLM and construct prompts that let it correct commands.

g1thubhack3r avatar Sep 14 '25 05:09 g1thubhack3r

If you trust a next-word-prediction machine to run rampant on your system, that's great for you, and there are plenty of other tools you can use to do that. Pattern recognition is testable and can be validated for correctness. The output of an LLM cannot. As such, I do not trust LLMs to run arbitrary commands on my machine.

Just don't come crying when an AI tries to rm -rf ~ or something. Don't run that command by the way.

MaddyGuthridge avatar Sep 15 '25 11:09 MaddyGuthridge

so why confirm exists?

g1thubhack3r avatar Sep 17 '25 10:09 g1thubhack3r

I support the use of AI+confirmation

Jerry-Terrasse avatar Sep 21 '25 07:09 Jerry-Terrasse

> I support the use of AI+confirmation

Thanks for your support

g1thubhack3r avatar Sep 22 '25 09:09 g1thubhack3r

> If you trust a next-word-prediction machine to run rampant on your system, that's great for you, and there are plenty of other tools you can use to do that. Pattern recognition is testable and can be validated for correctness. The output of an LLM cannot. As such, I do not trust LLMs to run arbitrary commands on my machine.
>
> Just don't come crying when an AI tries to rm -rf ~ or something. Don't run that command by the way.

Come on, everything has two sides; you can't ignore the good side because of the bad side. Plus, it's not as if there's nothing we can do to reduce risk. We could use an existing general-purpose LLM and construct prompts that steer it away from dangerous commands.

g1thubhack3r avatar Sep 22 '25 10:09 g1thubhack3r

It is nice having something that isn't using an LLM, so I don't need to worry about it doing something dangerous.

JoshuaWeissTBS avatar Oct 08 '25 02:10 JoshuaWeissTBS

A tool that you're supposed to use to fix your typos should have zero chance of breaking anything. LLMs, which are effectively black boxes subject to hallucinations, do not meet this criterion.

edit: Also, running an LLM just to watch your history and suggest typo fixes would considerably slow down both your machine and the program itself.

AetherSky-arch avatar Oct 19 '25 00:10 AetherSky-arch

You're right. What about making this an experimental option?

g1thubhack3r avatar Oct 19 '25 07:10 g1thubhack3r

What for? This would improve neither performance nor quality of results; in fact, it would probably worsen both, in addition to significantly increasing the storage space required by thefuck.

If you're unhappy with either of those, an actual possible improvement would be the implementation of fuzzy finding.

This would address the extensibility problem you mentioned in your original comment (though I've yet to see concrete examples of that problem).
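The fuzzy-finding idea above could be sketched with nothing but the standard library; this is a minimal illustration, not thefuck's actual matching code, and the command list here is a made-up example:

```python
import difflib

# Hypothetical set of commands the corrector knows about.
KNOWN_COMMANDS = ["git status", "git stash", "git push", "ls", "grep"]

def suggest(mistyped: str, cutoff: float = 0.6) -> list[str]:
    """Return known commands ranked by similarity to the mistyped input.

    difflib.get_close_matches uses SequenceMatcher ratios, so the result
    is deterministic and testable, unlike LLM output.
    """
    return difflib.get_close_matches(mistyped, KNOWN_COMMANDS, n=3, cutoff=cutoff)

print(suggest("git statsu"))  # → ['git status', 'git stash']
```

Because the similarity scores are deterministic, a rule like this can be unit-tested for correctness, which is the property the maintainers are defending in this thread.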

AetherSky-arch avatar Oct 19 '25 10:10 AetherSky-arch

Best of both worlds would be to make a plugin that enables LLM support. That way it is optional.

adriandarian avatar Oct 19 '25 21:10 adriandarian

> best of both worlds would be to make a plugin that enables LLM support. that way it is optional

Go for it…you can have ai help you write an ai plugin! 😆

pgifford avatar Oct 26 '25 06:10 pgifford