Michael Spector
The idea is to use the `decQuadReduce()` function, which eliminates insignificant zeroes from a number before hashing.
Here's a reproduction (the second hasher block was truncated; it mirrors the first, completed here with the symmetric calls for `d2`):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::str::FromStr;

let d1 = d128::from_str("0.12342342000000000").unwrap();
let d2 = d128::from_str("0.12342342").unwrap();
assert_eq!(d1, d2); // Succeeds

let mut hasher1 = DefaultHasher::new();
d1.hash(&mut hasher1);
let h1 = hasher1.finish();

let mut hasher2 = DefaultHasher::new();
d2.hash(&mut hasher2);
let h2 = hasher2.finish();
assert_eq!(h1, h2); // Fails: equal values produce different hashes
```
Proto file:

```protobuf
message Settings {
  repeated string currencies = 1;
  repeated string timezones = 2;
}
```

When running `lein jar`, I'm getting these errors:

```
/home/michael/Dev/projects/af-protocol/target/protosrc/af_protocol/protobuf/Events.java:6377: error:...
```
1. Added a method for concatenating S3 buckets.
2. Fixed `UnsupportedOperationException` when closing a multipart upload.
The idea is to pass batches of tuples instead of one tuple at a time through the query pipeline, which will allow the compiler to generate SIMD instructions in some cases.
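A minimal sketch of the contrast, assuming a simple sum aggregation over one column (the function names and batch size are hypothetical, not taken from ViyaDB):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Tuple-at-a-time style: every row individually crosses the operator
// boundary, which makes it hard for the compiler to vectorize the work.
int64_t sum_one_at_a_time(const std::vector<int64_t>& col) {
  int64_t acc = 0;
  for (size_t i = 0; i < col.size(); ++i) {
    acc += col[i];
  }
  return acc;
}

// Batch-at-a-time style: the operator receives a contiguous slice and
// aggregates it in a tight inner loop, which the compiler can unroll
// and, in some cases, turn into SIMD adds.
int64_t sum_batched(const std::vector<int64_t>& col, size_t batch_size) {
  int64_t acc = 0;
  for (size_t start = 0; start < col.size(); start += batch_size) {
    size_t end = std::min(start + batch_size, col.size());
    for (size_t i = start; i < end; ++i) {
      acc += col[i];
    }
  }
  return acc;
}
```

Both produce the same result; the point is only that the batched shape keeps the hot loop free of per-tuple control flow.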
As a side note, we might consider having a cache for common queries alongside the dictionary.
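One possible shape for such a cache, a sketch only: a map keyed by the query text, holding the serialized result (class and method names here are hypothetical; a real version would also need invalidation on new data).

```cpp
#include <cassert>
#include <optional>
#include <string>
#include <unordered_map>

// Hypothetical result cache for common queries, keyed by query text.
class QueryCache {
 public:
  // Returns the cached result, or nullopt on a miss.
  std::optional<std::string> Lookup(const std::string& query) const {
    auto it = cache_.find(query);
    if (it == cache_.end()) return std::nullopt;
    return it->second;
  }

  void Store(const std::string& query, const std::string& result) {
    cache_[query] = result;
  }

 private:
  std::unordered_map<std::string, std::string> cache_;
};
```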
It's advisable to set the limit to some sane number, like 10K. The downside is that there will be an additional IF branch during aggregation; we'll have to check the performance impact.
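To make the trade-off concrete, here is a sketch of a grouped aggregator with such a cap; the extra `if` on the group count is exactly the branch the note worries about (the class and its API are hypothetical):

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <unordered_map>

// Hypothetical aggregator that caps the number of distinct groups.
class CappedAggregator {
 public:
  explicit CappedAggregator(size_t max_groups) : max_groups_(max_groups) {}

  // Returns false when a new group is dropped because the cap was hit.
  bool Add(const std::string& key, int64_t value) {
    auto it = groups_.find(key);
    if (it != groups_.end()) {
      it->second += value;  // existing groups always update
      return true;
    }
    if (groups_.size() >= max_groups_) {  // the additional IF branch
      return false;
    }
    groups_.emplace(key, value);
    return true;
  }

  size_t size() const { return groups_.size(); }
  int64_t get(const std::string& key) const { return groups_.at(key); }

 private:
  size_t max_groups_;
  std::unordered_map<std::string, int64_t> groups_;
};
```

Note the branch is only taken on the miss path, so for low-cardinality workloads its cost should be small; that is what the benchmark would need to confirm.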
GCC-based compilation is quite a bottleneck right now, especially when frequent ad-hoc queries are the common workload. We should use LLVM code generation instead, which compiles much faster.
Is it possible to make a ViyaDB instance behave like a simple aggregator? There shouldn't be much work to do, and this would allow for multilevel aggregation trees.
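The aggregator role amounts to merging partial results from downstream nodes. A minimal sketch, assuming partial aggregates are maps from a group key to a summed metric (the type alias and function name are hypothetical):

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

// A partial aggregate as produced by one downstream node:
// group key -> summed metric.
using PartialAgg = std::unordered_map<std::string, int64_t>;

// Merges partial aggregates group-wise, so an instance can act as a
// pure aggregator at any level of a multilevel aggregation tree.
PartialAgg MergePartials(const std::vector<PartialAgg>& partials) {
  PartialAgg merged;
  for (const auto& partial : partials) {
    for (const auto& [key, value] : partial) {
      merged[key] += value;  // sums compose; avg/count-distinct would not
    }
  }
  return merged;
}
```

Because the merge is the same operation at every level, intermediate nodes need no storage of their own, which is what makes the tree shape cheap to add.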