bitvec
Feature request: Optional feature for rayon support
Currently, code like the following doesn't work:
```rust
use bitvec::prelude::*;
use rayon::prelude::*;
use std::sync::atomic::AtomicUsize;

fn main() {
    let mut list = bitvec![Lsb0, AtomicUsize; 0; 100];
    list.par_iter_mut().step_by(2).for_each(|mut bit| {
        *bit = true;
    });
    println!("{:?}", list);
}
```
But the above works if `vec![false; 100]` is used instead of `bitvec![Lsb0, AtomicUsize; 0; 100]` (or `bitvec![0; 100]`), or if `.par_iter_mut()` is replaced with `.iter_mut()`.
It would be great if bitvec had support for rayon, allowing its data structures to be iterated in parallel in a convenient manner (i.e., with rayon's `par_iter()` or `par_iter_mut()` methods). As far as I understand, this would require rayon's `ParallelIterator` and/or `IndexedParallelIterator` traits to be implemented for bitvec's types.
Fwiw, here is part of a message from @myrrlyn about one way support might be added:
rayon methods require that i directly implement its extension traits. they don't have a blanket `impl Iterator -> impl ParallelIterator` transform. you can fake it with a much uglier boilerplate by using `rayon::scope` to drive a serial iterator, then parallelize it by making the loop body spawn a thread containing the rest of the work onto the rayon scope
Sample program that currently works:
```rust
use bitvec::prelude::*;

#[test]
fn parallelism() {
    let bits = bits![mut Lsb0, usize; 0; 800];
    rayon::scope(|s| {
        for (idx, chunk) in bits.chunks_mut(20).enumerate() {
            s.spawn(move |_| {
                chunk.store(!0u32);
                println!("{:02}", idx);
            });
        }
    });
    assert!(bits.all());
}
```
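For reference, the same scoped-spawn pattern can be sketched with std-only types: split a buffer into disjoint mutable chunks and hand each to its own scoped task. This is a simplified analogue using a plain `Vec<bool>` and `std::thread::scope` (stable since Rust 1.63) rather than `BitSlice` and `rayon::scope`; the borrow-splitting shape is the same.

```rust
use std::thread;

fn main() {
    // A plain boolean buffer standing in for a `BitSlice`.
    let mut bits = vec![false; 800];
    thread::scope(|s| {
        // `chunks_mut` hands out disjoint `&mut [bool]` slices, so each
        // scoped task can write its chunk without any synchronization.
        for (idx, chunk) in bits.chunks_mut(20).enumerate() {
            s.spawn(move || {
                chunk.fill(true);
                println!("{:02}", idx);
            });
        }
        // All tasks join before the scope returns.
    });
    assert!(bits.iter().all(|&b| b));
}
```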
Implementation notes from doing a quick sketch:
- Rayon is a good way to test that I have correctly managed `impl {Send, Sync}` in `bitvec`; should've used that much sooner.
- Rayon defines new slice par-iter adapters. I also will need to do this, because implementing Rayon's iteration traits directly on my seq-iter adapters causes method resolution failures. Bless up for `macro_rules!`.
- My implementation should pretty much just copy directly from `rayon/src/iter/plumbing/mod.rs` and `rayon/src/slice/mod.rs`. Bridging to the thread-pool is Rayon's problem, not mine, and they've solved it. The only real logic is in `<MyType as Producer>::split_at`; everything else appears to be boilerplate for gluing types together.
- This appears to be fairly straightforward, but it's going to be a lot of tedious work putting everything together. Certainly doable, equally certainly not something I can afford to front-load unless requested.
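To illustrate the "only real logic" point above, here is a std-only sketch of what a chunk producer's `split_at` has to do: convert a split index measured in chunks into a split index measured in elements, then hand each half to a separate job. The `ChunksProducer` name is hypothetical; a real implementation would implement rayon's `rayon::iter::plumbing::Producer` trait and let rayon's `bridge` function drive the recursion.

```rust
/// Hypothetical stand-in for a chunked par-iter producer.
/// A real version would `impl Producer` and borrow a `BitSlice`.
struct ChunksProducer<'a, T> {
    slice: &'a mut [T],
    chunk_size: usize,
}

impl<'a, T> ChunksProducer<'a, T> {
    /// Split after `index` chunks, mirroring `Producer::split_at`.
    /// Each half can then be processed by a different worker thread.
    fn split_at(self, index: usize) -> (Self, Self) {
        // Translate a chunk count into an element offset, clamped to the
        // slice length so the final (possibly short) chunk is handled.
        let elem = (index * self.chunk_size).min(self.slice.len());
        let (left, right) = self.slice.split_at_mut(elem);
        (
            ChunksProducer { slice: left, chunk_size: self.chunk_size },
            ChunksProducer { slice: right, chunk_size: self.chunk_size },
        )
    }
}

fn main() {
    let mut data = [0u8; 10];
    let producer = ChunksProducer { slice: &mut data, chunk_size: 3 };
    // 10 elements in chunks of 3 => splitting after 2 chunks yields 6 + 4.
    let (left, right) = producer.split_at(2);
    assert_eq!(left.slice.len(), 6);
    assert_eq!(right.slice.len(), 4);
    println!("split into {} + {}", left.slice.len(), right.slice.len());
}
```

Everything else in the bridge (driving the recursion, deciding where to split, feeding folders) is boilerplate that rayon's plumbing supplies.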
Questions:
- Rayon only defines these parallel slice iterators. Should we define par-iters for everything in `bitvec/src/slice/iter.rs`, or just the Rayon ones? Reasons against: it's extra work that they didn't do, so by definition, `[bool].rchunks(W).par_iter()` doesn't work and therefore nobody can expect `BitSlice.rchunks(W).par_iter()` to work. Reasons for: `bitvec` is already a superset of `core`, so there's precedent for going above and beyond the source.
I probably won't get back to this until, oh, February, so this should serve to restore everything I learned today when I do pick it back up.
It seems rayon has added implementations for `RChunks`, although there hasn't yet been a version published on crates.io containing these changes.