Duplicate entries in suggestions list
I'm seeing an issue where there are duplicate entries in the suggestions list. I can replicate this with a minimal example: with

```
51.9615242269 /Users/foo/bar
```

in autojump.txt, running

```
autojump --complete bar
```

returns

```
1__/Users/foo/bar
2__/Users/foo/bar
3__/Users/foo/bar
```
I added some debug statements in `query`:

```
Entries [Entry { path: "/Users/foo/bar", weight: 51.9615242269 }]
Query Result ["/Users/foo/bar", "/Users/foo/bar", "/Users/foo/bar"]
```

and isolated the issue to somewhere in the matcher (src/matcher/).
Oh wait, I see in the unit test that this is expected behavior for the matcher, because we have three different match types?
```rust
let actual: Vec<_> = matcher.execute(&haystack).collect();
let expected = vec![
    // consecutive matcher
    path::Path::new("/moo/foo/baz"),
    path::Path::new("/foo/baz"),
    // fuzzy matcher
    path::Path::new("/foo/bar/baz"),
    path::Path::new("/moo/foo/baz"),
    path::Path::new("/baz/foo/bar"),
    path::Path::new("/foo/baz"),
    // anywhere matcher
    path::Path::new("/foo/bar/baz"),
    path::Path::new("/moo/foo/baz"),
    path::Path::new("/foo/baz"),
];
```
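For context, here's a minimal sketch of why a single entry can show up three times, assuming the matcher just chains the three passes together (a hypothetical simplification of src/matcher/, not the actual code):

```rust
// Minimal sketch: if execute() chains the consecutive, fuzzy, and anywhere
// passes, any entry matching all three passes appears three times in the
// output. (Hypothetical simplification; contains()/ends_with() stand in for
// the real matching logic.)
fn execute<'a>(haystack: &[&'a str], needle: &str) -> Vec<&'a str> {
    let consecutive = haystack.iter().filter(|p| p.ends_with(needle));
    let fuzzy = haystack.iter().filter(|p| p.contains(needle));
    let anywhere = haystack.iter().filter(|p| p.contains(needle));
    consecutive.chain(fuzzy).chain(anywhere).copied().collect()
}

fn main() {
    let haystack = ["/Users/foo/bar"];
    // The single entry matches all three passes, so it appears three times.
    let results = execute(&haystack, "bar");
    assert_eq!(results, ["/Users/foo/bar"; 3]);
}
```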
I think we would want to add a uniqueness pass to make sure we don't double-count the matches, though. I think doing

```rust
results.sort();
results.dedup();
```

in query.rs is sufficient.
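Alternatively, since `sort()` would reorder the results lexicographically and lose the weight-based ranking, an order-preserving dedup might be safer. A minimal sketch (hypothetical helper name, not existing query.rs code):

```rust
use std::collections::HashSet;

// Hypothetical helper: drop duplicate paths while keeping the first
// occurrence, so the weight-based ordering of the results survives.
fn dedup_preserving_order(results: Vec<String>) -> Vec<String> {
    let mut seen = HashSet::new();
    results
        .into_iter()
        // insert() returns false for paths we've already seen, filtering them out.
        .filter(|path| seen.insert(path.clone()))
        .collect()
}

fn main() {
    let results = vec![
        "/Users/foo/bar".to_string(),
        "/Users/foo/bar".to_string(),
        "/Users/foo/bar".to_string(),
    ];
    assert_eq!(dedup_preserving_order(results), ["/Users/foo/bar"]);
}
```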
Also, unrelated, but I think the order should be changed to consecutive, then anywhere, then fuzzy.
Much time has passed since the initial port and I can't recall the fine details, but I thought this behavior was consistent with the original autojump, which is why I wrote the test case that way in the first place.

Now that it's evident the "upstream" is dead and not reviving, improving this behavior is okay with me. It's instead a problem of finding the free time and motivation to revitalize this project, though...
> I thought this behavior is consistent with original autojump
Yes, I think you're right: https://github.com/wting/autojump/issues/348

For some reason I can't replicate it there myself, but it seems it's still present. I don't think fixing it will break compatibility.
Actually, no, it doesn't occur in upstream. They use `set` to dedup before printing:

```python
for i, entry in enumerate(set(tab_entries)):
```
Edit: No wait, that's apparently only in my local copy? 🤔 Maybe I added this myself a long time back.