Use AI instead of pattern recognition
As we know, thefuck uses pattern recognition.
But pattern recognition lacks extensibility and support for new commands.
So why don't we train an LLM that corrects commands?
Or we could use an existing LLM and construct prompts to have it correct commands.
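For context, thefuck's pattern-recognition rules are small Python functions: `match()` decides whether a rule applies to a failed command, and `get_new_command()` produces the fix. A minimal sketch (simplified from the real rule API; the `Command` stand-in here is my own, the real one comes from `thefuck.types`):

```python
from collections import namedtuple

# Minimal stand-in for thefuck's Command type: the script the user typed
# plus the output the failed command produced.
Command = namedtuple('Command', ['script', 'output'])

def match(command):
    # Rule applies when the typo'd word appears in the command line.
    return 'brnch' in command.script

def get_new_command(command):
    # Deterministic, testable correction -- no model involved.
    return command.script.replace('brnch', 'branch')
```

Because rules are plain functions like this, each one can be unit-tested and validated for correctness, which is the property the discussion below hinges on.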
If you trust a next-word-prediction machine to run rampant on your system, that's great for you, and there are plenty of other tools you can use to do that. Pattern recognition is testable and can be validated for correctness. The output of an LLM cannot. As such, I do not trust LLMs to run arbitrary commands on my machine.
Just don't come crying when an AI tries to rm -rf ~ or something. Don't run that command by the way.
So why does the confirmation prompt exist, then?
I support the use of AI+confirmation
Thanks for your support
Come on, everything has two sides; you can't ignore the good side because of the bad side. Plus, it isn't as if there's nothing we can do to reduce risk. We could use an existing general-purpose LLM and construct prompts that steer it away from dangerous commands.
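To make that concrete, here is one hedged sketch of what "reduce risk" could look like: a blocklist check applied to the model's suggestion before it is ever shown for confirmation. The `ask_model` callable is hypothetical (a wrapper around whatever LLM you use), and the blocklist is illustrative only, not exhaustive:

```python
import re

# Patterns for obviously destructive commands; a suggestion matching any of
# these is rejected outright. (Illustrative blocklist -- a real one would
# need to be far broader, and can never be complete.)
DANGEROUS = [
    r'\brm\s+(-[a-zA-Z]*r[a-zA-Z]*f|-[a-zA-Z]*f[a-zA-Z]*r)\b',  # rm -rf and friends
    r'\bmkfs\b',
    r'\bdd\s+.*\bof=/dev/',
    r'>\s*/dev/sd',
]

def is_dangerous(suggestion):
    return any(re.search(p, suggestion) for p in DANGEROUS)

def correct_with_llm(broken_command, ask_model):
    """ask_model is a hypothetical callable wrapping an LLM API."""
    prompt = ("Fix this shell command. Reply with the corrected command only, "
              "and never suggest destructive operations:\n" + broken_command)
    suggestion = ask_model(prompt).strip()
    if is_dangerous(suggestion):
        return None  # refuse to surface it, even before user confirmation
    return suggestion
```

Of course, this only filters what the blocklist anticipates, which is exactly the objection raised above: the filter is testable, but the model's output is not.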
It's nice having a tool that doesn't use an LLM, so I don't need to worry about it doing something dangerous.
A tool that you're supposed to use to fix your typos should have zero chance of breaking anything. LLMs, which are effectively black boxes subject to hallucinations, do not meet this criterion.
edit: Also, running an LLM just to watch your history and suggest typo fixes would considerably slow down both your machine and the program itself.
You're right. What about making this an experimental option?
What for? This would improve neither performance nor quality of results; in fact, it would probably worsen both, in addition to significantly increasing the storage space thefuck requires.
If you're unhappy with either of those, an actual possible improvement would be the implementation of fuzzy finding.
This would address the extensibility problem you mentioned in your original comment (though I've yet to see examples of said problem).
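For what fuzzy finding might look like: Python's standard library already has `difflib.get_close_matches`, so a sketch needs no model at all (the candidate list here is a placeholder; a real implementation would pull commands from `$PATH` and shell history):

```python
import difflib

# Placeholder candidate set; in practice this would come from $PATH
# and the user's shell history.
KNOWN_COMMANDS = ['git', 'grep', 'cargo', 'python', 'docker']

def fuzzy_fix(typo, candidates=KNOWN_COMMANDS):
    """Return the closest known command, or None if nothing is close enough."""
    matches = difflib.get_close_matches(typo, candidates, n=1, cutoff=0.6)
    return matches[0] if matches else None
```

Unlike an LLM, this is fast, deterministic, and trivially testable, and it generalizes to commands no hand-written rule anticipated.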
The best of both worlds would be a plugin that enables LLM support; that way it stays optional.
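A sketch of what an opt-in plugin could look like, reusing the rule shape from earlier in the thread. If I remember the rule API correctly, thefuck rules can set `enabled_by_default = False`, meaning the rule never runs unless the user explicitly enables it in their settings (the `Command` stand-in and the echo placeholder below are mine, not real plugin code):

```python
from collections import namedtuple

# Stand-in for thefuck's Command type (script + output of the failed command).
Command = namedtuple('Command', ['script', 'output'])

# Opt-in: the user must explicitly enable this rule before it ever fires.
enabled_by_default = False

def match(command):
    # Only fire on errors the built-in pattern rules have no answer for,
    # approximated here by a generic "command not found" check.
    return 'command not found' in command.output

def get_new_command(command):
    # Placeholder for the LLM call; a real plugin would query a model here
    # and should still run the result past a safety filter.
    return 'echo "LLM suggestion for: {}"'.format(command.script)
```

Keeping it behind an explicit opt-in flag means everyone who doesn't trust LLM output simply never loads it.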
Go for it…you can have ai help you write an ai plugin! 😆