
Improve Searching Algorithm

mintbomb27 opened this issue · 8 comments

Improve the searching algorithm present in perform_search().

Currently, the search works in an O(n) time complexity by comparing the input given by the -t flag with the tags array in the submodules.json.
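For reference, a minimal sketch of that kind of O(n) tag comparison (the `Submodule` struct, field names, and function here are illustrative stand-ins, not the actual types or `perform_search()` code in templa-rs):

```rust
#[derive(Clone, Debug)]
struct Submodule {
    name: String,
    tags: Vec<String>,
}

// Linear scan: keep every submodule whose tag list contains the tag passed via -t.
fn linear_tag_search(submodules: &[Submodule], tag: &str) -> Vec<Submodule> {
    submodules
        .iter()
        .filter(|sm| sm.tags.iter().any(|t| t == tag))
        .cloned()
        .collect()
}
```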

mintbomb27 · Sep 30 '21 19:09

Hello! Are you assigning issues or should we just create a pull request once we've developed a solution?

If you are assigning could you please assign this to me 😄

Dinoosawruss · Oct 08 '21 07:10

Hey sure! Assigning.

mintbomb27 · Oct 08 '21 07:10

Ok brilliant

I have two main solutions currently

Solution 1 - Parallel Search

The first solution I can think of is searching the array in parallel: find the greatest factor of the length of Submodules and use it to split the array into equal chunks that are searched at the same time. This would not require the array to be sorted. While this is still O(n), it should in theory find the items faster (a rough sketch of the chunked approach follows the pros and cons below).

Issues

  • Still O(n)
  • May not see much time benefit

Pros

  • Easy to implement
  • Better when searching the entire array every time
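A rough sketch of what that chunked scan could look like in current Rust, reusing the illustrative `Submodule` struct and tag matching from the sketch above (`std::thread::scope` and the `threads` parameter are my assumptions, not code from the repo):

```rust
use std::thread;

// Still O(n) work in total, but the equal-sized chunks are scanned concurrently.
// `threads` would be derived from the array length (e.g. its greatest factor).
fn parallel_tag_search(submodules: &[Submodule], tag: &str, threads: usize) -> Vec<Submodule> {
    let threads = threads.max(1);
    let chunk_size = ((submodules.len() + threads - 1) / threads).max(1);

    thread::scope(|scope| {
        // Spawn one thread per chunk; each returns its local matches.
        let handles: Vec<_> = submodules
            .chunks(chunk_size)
            .map(|chunk| {
                scope.spawn(move || {
                    chunk
                        .iter()
                        .filter(|sm| sm.tags.iter().any(|t| t == tag))
                        .cloned()
                        .collect::<Vec<Submodule>>()
                })
            })
            .collect();

        // Join the threads and merge the per-chunk results.
        handles
            .into_iter()
            .flat_map(|handle| handle.join().unwrap())
            .collect()
    })
}
```

No sorting is needed, but the thread spawn/join overhead means the benefit would only show up on fairly large arrays.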

Solution 2 - Binary Search

The second solution would be to sort the array and then use a binary search. The main issue with this is that the array would need to be sorted, and as the tags we're searching for are also arrays this complicates things a little bit. This would be O(log(n)) (a sketch follows the pros and cons below).

Issues

  • As we are searching the entire array every time this may not be the best option
  • The array will need to be sorted which will add time

Pros

  • Low time complexity
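For completeness, a minimal sketch of the binary-search idea, again reusing the illustrative `Submodule` struct from the first sketch and assuming we sort and look up by name only (the tag arrays would still need a separate index, which is the complication mentioned above):

```rust
// Sort once up front (O(n log n)):
//     submodules.sort_by(|a, b| a.name.cmp(&b.name));
// then each exact-name lookup is O(log n).
fn binary_search_by_name<'a>(sorted: &'a [Submodule], name: &str) -> Option<&'a Submodule> {
    sorted
        .binary_search_by(|sm| sm.name.as_str().cmp(name))
        .ok()
        .map(|i| &sorted[i])
}
```

Note that this only helps for exact lookups; a substring or multi-tag query would still have to fall back to scanning.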

Checklist

Below is a checklist I'll just use to track what I need to do. For each of the solutions I'll code it up and test it out on a large data set to see what the time benefits of each are.

  • [x] Test current implementation on small data set (10 items)
  • [x] Test current implementation on medium data set (100 items)
  • [x] Test current implementation on large data set (1000 items)
  • [x] Code up parallel search
  • [x] Test parallel search on small data set
  • [x] Test parallel search on medium
  • [x] Test parallel search on large data set
  • [x] Code up binary search
  • [x] Test binary search on small data set
  • [x] Test binary search on medium data set
  • [x] Test binary search on large data set
  • [x] Consider results
  • [x] Complete implementation of most efficient solution on average
  • [x] PR

I will use this comment throughout the process to keep everything updated. If you have any comments or questions let me know - or if you have a better solution don't hesitate to suggest it

Dinoosawruss · Oct 08 '21 09:10

Current System

While the time complexity may not be ideal, thanks to the speed of Rust the current system is already extremely fast. My collected data is shown below:

| Items | Time (nanoseconds) | Time per item (nanoseconds) |
| --- | --- | --- |
| 10 | 23400 (0.0234 milliseconds) | 2340 |
| 100 | 145500 (0.1455 milliseconds) | 1455 |
| 1000 | 704400 (0.7044 milliseconds) | 704.4 |

Dinoosawruss · Oct 09 '21 00:10

Alternative Linear Search

While testing I also thought I'd consider an alternative linear search to the one used in the existing code. The code for this linear search is below:

let x = submodules.len();
let mut i = 0;

// Walk the submodule list once, keeping every entry that matches both the
// tag filter and the (lower-cased) name query.
while i < x {
    if (tags.is_empty() || submodules[i].has_one_of_tags(tags))
        && (name_query.is_empty() || submodules[i].name.to_lowercase().contains(&lowercase_query))
    {
        filtered_sm.push(submodules[i].clone());
    }

    i += 1;
}

Time Collections

| Items | Time (nanoseconds) | Time per item (nanoseconds) |
| --- | --- | --- |
| 10 | 26600 (0.0266 milliseconds) | 2660 |
| 100 | 82400 (0.0824 milliseconds) | 824 |
| 1000 | 506600 (0.5066 milliseconds) | 506.6 |

This alone is a significant time improvement and may even be better than the other algorithms.

Dinoosawruss · Oct 09 '21 00:10

@Dinoosawruss Great research! Loved the analysis. If the improved linear search algo is the best we can have as of now, then let's go for it! Really looking forward to your contribution :))

mintbomb27 · Oct 09 '21 08:10

@Dinoosawruss can I see how you tested? I tried to reproduce the same benchmarks, but my results were quite different; my times scaled much more linearly as the number of items increased.

running 6 tests
test alternate_linear_search_10   ... bench:       1,246 ns/iter (+/- 66)
test alternate_linear_search_100  ... bench:      12,016 ns/iter (+/- 941)
test alternate_linear_search_1000 ... bench:     120,654 ns/iter (+/- 9,336)
test perform_search_10            ... bench:       1,239 ns/iter (+/- 182)
test perform_search_100           ... bench:      12,307 ns/iter (+/- 3,357)
test perform_search_1000          ... bench:     122,835 ns/iter (+/- 8,086)

Here's the bench.rs I used

I ran the file with `cargo bench` on rustc 1.57.0-nightly (11491938f 2021-09-29).
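For readers who want the general shape of such a benchmark (the actual bench.rs linked above isn't reproduced here), a nightly `#[bench]` file might look roughly like this, reusing the hypothetical `Submodule` and `linear_tag_search` from the earlier sketches rather than the real `perform_search` signature:

```rust
#![feature(test)]
extern crate test;

use test::Bencher;

// Hypothetical helper: build a data set of n submodules to search through.
fn make_submodules(n: usize) -> Vec<Submodule> {
    (0..n)
        .map(|i| Submodule {
            name: format!("module-{}", i),
            tags: vec![format!("tag-{}", i % 10)],
        })
        .collect()
}

#[bench]
fn linear_tag_search_1000(b: &mut Bencher) {
    let submodules = make_submodules(1000);
    // cargo bench times the closure passed to iter() over many iterations.
    b.iter(|| linear_tag_search(&submodules, "tag-3"));
}
```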

kugiyasan · Oct 10 '21 03:10


I simply timed how long each operation took, did this 50 times, and took the average. While this is a somewhat naive approach, I was not at the time aware of `cargo bench`.
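A minimal sketch of that manual timing approach (the closure stands in for whichever search call is being measured):

```rust
use std::time::Instant;

// Run `f` `runs` times and return the average elapsed time in nanoseconds.
fn average_time_ns<F: FnMut()>(mut f: F, runs: u32) -> f64 {
    let mut total_ns: u128 = 0;
    for _ in 0..runs {
        let start = Instant::now();
        f();
        total_ns += start.elapsed().as_nanos();
    }
    total_ns as f64 / runs as f64
}

// e.g. average_time_ns(|| { linear_tag_search(&submodules, "cli"); }, 50);
```

Unlike `cargo bench`, this doesn't warm up, interleave runs, or guard against the optimizer removing the call, which could explain some of the difference in the numbers.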

Dinoosawruss · Oct 11 '21 16:10