PriorityQueue
This crate implements a Priority Queue with a function to change the priority of an object.
Items and priorities are stored in an IndexMap, and the queue itself is implemented as a heap of indexes into that map.
Please read the API documentation on docs.rs.
Usage
To use this crate, add the following line to the dependencies section of your Cargo.toml:
priority-queue = "1.2.3"
Version numbers follow the semver convention.
Then use the data structure inside your Rust source code as in the following Example.
Remember that, if you need serde support, you should compile using --features serde.
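For instance, with the serde feature enabled, the whole queue can be serialized and deserialized. The following is only a minimal sketch, assuming serde_json is also added as an illustrative dependency:
use priority_queue::PriorityQueue;

fn main() {
    let mut pq = PriorityQueue::new();
    pq.push("Apples".to_string(), 5);
    pq.push("Bananas".to_string(), 8);

    // Round-trip the queue through JSON (requires the serde feature).
    let json = serde_json::to_string(&pq).expect("serialization failed");
    let restored: PriorityQueue<String, i32> =
        serde_json::from_str(&json).expect("deserialization failed");

    assert_eq!(restored.peek(), Some((&"Bananas".to_string(), &8)));
}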
Example
extern crate priority_queue; // not necessary in Rust edition 2018

use priority_queue::PriorityQueue;

fn main() {
    let mut pq = PriorityQueue::new();

    assert!(pq.is_empty());
    pq.push("Apples", 5);
    pq.push("Bananas", 8);
    pq.push("Strawberries", 23);

    assert_eq!(pq.peek(), Some((&"Strawberries", &23)));

    for (item, _) in pq.into_sorted_iter() {
        println!("{}", item);
    }
}
Note: in recent versions of Rust (edition 2018 and later), the extern crate priority_queue line is no longer necessary!
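The crate's distinctive feature is changing the priority of an item that is already in the queue. Here is a minimal sketch of that usage, based on the change_priority method referenced in the changelog below (treat the exact return value as an assumption):
use priority_queue::PriorityQueue;

fn main() {
    let mut pq = PriorityQueue::new();
    pq.push("Apples", 5);
    pq.push("Bananas", 8);

    // Update the priority of an existing item; the previous
    // priority is returned when the item was already present.
    let old = pq.change_priority("Apples", 50);
    assert_eq!(old, Some(5));
    assert_eq!(pq.peek(), Some((&"Apples", &50)));
}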
Speeding up
You can use a custom BuildHasher for the underlying IndexMap and thereby achieve better performance. For example, you can create the queue with the speedy FxHash hasher:
use hashbrown::hash_map::DefaultHashBuilder;
let mut pq = PriorityQueue::<_, _, DefaultHashBuilder>::with_default_hasher();
Attention: FxHash does not offer any protection against DoS attacks. This means that some pathological inputs can make the operations on the hashmap O(n^2). Use the standard hasher if you cannot control the inputs.
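As a concrete sketch of the idea (assuming the rustc-hash crate, one common FxHash implementation, is added as a dependency), any BuildHasher can be plugged in through with_default_hasher:
use std::hash::BuildHasherDefault;
use priority_queue::PriorityQueue;
use rustc_hash::FxHasher; // illustrative choice of FxHash implementation

// Spell out the hasher parameter once with a type alias.
type FxPriorityQueue<I, P> = PriorityQueue<I, P, BuildHasherDefault<FxHasher>>;

fn main() {
    let mut pq: FxPriorityQueue<&str, i32> = FxPriorityQueue::with_default_hasher();
    pq.push("Apples", 5);
    pq.push("Bananas", 8);
    assert_eq!(pq.peek(), Some((&"Bananas", &8)));
}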
Benchmarks
Some benchmarks have been run to compare the performance of this priority queue to that of the standard BinaryHeap, also using the FxHash hasher. On a Ryzen 9 3900X, the benchmarks produced the following results:
test benchmarks::priority_change_on_large_double_queue ... bench: 55 ns/iter (+/- 0)
test benchmarks::priority_change_on_large_double_queue_fx ... bench: 51 ns/iter (+/- 0)
test benchmarks::priority_change_on_large_queue ... bench: 16 ns/iter (+/- 0)
test benchmarks::priority_change_on_large_queue_fx ... bench: 9 ns/iter (+/- 0)
test benchmarks::priority_change_on_large_queue_std ... bench: 160,648 ns/iter (+/- 1,999)
test benchmarks::priority_change_on_small_double_queue ... bench: 56 ns/iter (+/- 0)
test benchmarks::priority_change_on_small_double_queue_fx ... bench: 51 ns/iter (+/- 0)
test benchmarks::priority_change_on_small_queue ... bench: 16 ns/iter (+/- 0)
test benchmarks::priority_change_on_small_queue_fx ... bench: 9 ns/iter (+/- 0)
test benchmarks::priority_change_on_small_queue_std ... bench: 1,619 ns/iter (+/- 14)
test benchmarks::push_and_pop ... bench: 30 ns/iter (+/- 0)
test benchmarks::push_and_pop_double ... bench: 29 ns/iter (+/- 0)
test benchmarks::push_and_pop_double_fx ... bench: 26 ns/iter (+/- 0)
test benchmarks::push_and_pop_fx ... bench: 25 ns/iter (+/- 0)
test benchmarks::push_and_pop_min_on_large_double_queue ... bench: 388 ns/iter (+/- 7)
test benchmarks::push_and_pop_min_on_large_double_queue_fx ... bench: 387 ns/iter (+/- 3)
test benchmarks::push_and_pop_on_large_double_queue ... bench: 396 ns/iter (+/- 2)
test benchmarks::push_and_pop_on_large_double_queue_fx ... bench: 397 ns/iter (+/- 4)
test benchmarks::push_and_pop_on_large_queue ... bench: 84 ns/iter (+/- 1)
test benchmarks::push_and_pop_on_large_queue_fx ... bench: 74 ns/iter (+/- 1)
test benchmarks::push_and_pop_on_large_queue_std ... bench: 70 ns/iter (+/- 1)
test benchmarks::push_and_pop_std ... bench: 4 ns/iter (+/- 0)
The priority change on the standard queue was obtained with the following:
pq = pq.drain().map(|Entry(i, p)| {
    if i == 50_000 {
        Entry(i, p / 2)
    } else {
        Entry(i, p)
    }
}).collect();
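Since that snippet is not self-contained (the Entry type is not shown), here is an illustrative, runnable version of the same rebuild trick; the Entry definition and the element count are assumptions, not the actual benchmark harness:
use std::cmp::Ordering;
use std::collections::BinaryHeap;

// Illustrative entry type: an item id paired with its priority,
// compared by priority only so BinaryHeap pops the highest priority first.
struct Entry(usize, i64);

impl PartialEq for Entry {
    fn eq(&self, other: &Self) -> bool {
        self.1 == other.1
    }
}
impl Eq for Entry {}

impl Ord for Entry {
    fn cmp(&self, other: &Self) -> Ordering {
        self.1.cmp(&other.1)
    }
}
impl PartialOrd for Entry {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

fn main() {
    let mut pq: BinaryHeap<Entry> = (0..100_000usize).map(|i| Entry(i, i as i64)).collect();

    // BinaryHeap has no change_priority, so the whole heap is drained
    // and rebuilt with a single entry's priority halved.
    let rebuilt: BinaryHeap<Entry> = pq
        .drain()
        .map(|Entry(i, p)| if i == 50_000 { Entry(i, p / 2) } else { Entry(i, p) })
        .collect();
    pq = rebuilt;

    assert_eq!(pq.len(), 100_000);
}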
The interpretation of the benchmarks is that the data structures provided by this crate are generally slightly slower than the standard BinaryHeap.
On small queues (<10000 elements), changing a priority on the standard BinaryHeap with the code above is far slower than the change_priority function provided by PriorityQueue and DoublePriorityQueue.
As the queue grows, the operation keeps taking roughly the same amount of time on PriorityQueue and DoublePriorityQueue, while it takes more and more time on the standard queue.
It also emerges that the ability to pop either the minimum or the maximum element comes with a cost: all the operations on DoublePriorityQueue are slower than the corresponding operations on PriorityQueue.
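For reference, this is roughly what that extra capability looks like in code; a minimal sketch that assumes the pop_min and pop_max methods of DoublePriorityQueue:
use priority_queue::DoublePriorityQueue;

fn main() {
    let mut pq = DoublePriorityQueue::new();
    pq.push("Apples", 5);
    pq.push("Bananas", 8);
    pq.push("Strawberries", 23);

    // Either end of the priority range can be popped.
    assert_eq!(pq.pop_min(), Some(("Apples", 5)));
    assert_eq!(pq.pop_max(), Some(("Strawberries", 23)));
}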
Contributing
Feel free to contribute to this project with pull requests and/or issues. All contributions should be under a license compatible with the GNU LGPL and with the MPL.
Changes
- 1.2.3 Further performance optimizations (mainly on DoublePriorityQueue)
- 1.2.2 Performance optimizations
- 1.2.1 Bug fix: #34
- 1.2.0 Implement DoublePriorityQueue data structure
- 1.1.1 Convert documentation to Markdown
- 1.1.0 Smooth Q: Sized requirement on some methods (fix #32)
- 1.0.5 Bug fix: #28
- 1.0.4 Bug fix: #28
- 1.0.3 Bug fix: #26
- 1.0.2 Added documentation link to Cargo.toml so the link is shown in the results page of crates.io
- 1.0.1 Documentation
- 1.0.0 This release contains breaking changes!
  - From and FromIterator now accept custom hashers. Breaking: every usage of from and from_iter must specify some type to help the type inference. To use the default hasher (RandomState), often it will be enough to add something like let pq: PriorityQueue<_, _> = PriorityQueue::from..., or you can add a type definition like type Pq<I, P> = PriorityQueue<I, P> and then use Pq::from() or Pq::from_iter()
  - Support no-std architectures
  - Add a method to remove elements at arbitrary positions
  - Remove the take_mut dependency. Breaking: the change_priority_by signature has changed; it now takes a priority setter F: FnOnce(&mut P). If you want, you can use the unsafe take_mut yourself or use std::mem::replace
- 0.7.0 Implement the push_increase and push_decrease convenience methods
- 0.6.0 Allow the usage of custom hashers
- 0.5.4 Prevent panic on extending an empty queue
- 0.5.3 New implementation of the Default trait avoids the requirement that P: Default
- 0.5.2 Fix documentation formatting
- 0.5.1 Add some documentation for iter_mut()
- 0.5.0 Fix #7 implementing the iter_mut features
- 0.4.5 Fix #6 for change_priority and change_priority_by
- 0.4.4 Fix #6
- 0.4.3 Fix #4 changing the way PriorityQueue serializes. Note that old serialized PriorityQueues may be incompatible with the new version; the API itself is unchanged
- 0.4.2 Improved performance using some unsafe code in the implementation
- 0.4.1 Support for serde when compiled with --features serde. serde is marked as optional and serde-test as a dev-dependency, so compiling the crate no longer downloads and compiles serde-test, nor serde if not needed
- 0.4.0 Support for serde when compiled with cfg(serde)
- 0.3.1 Fix #3
- 0.3.0 Implement PartialEq and Eq traits