Implement DeepCCompress node
I see this is planned in the README \o/
We have one internally and it has proven very useful - it has two modes:
- Merge by z-depth threshold - the obvious case
- A dumb "sample reduction percentage" mode, which merges samples to approximately reduce the count by a given percentage - e.g. 50% removes every other sample, 10% removes every ~10th sample (rough sketch after the lists below). It can be prone to visual artifacts with certain data, but has proven quite useful, particularly for debugging and performance diagnostics.
We had also discussed some other features which would have been useful at times:
- Option to merge samples with similar values (e.g. merge all samples where the R/G/B values are identical) - most obviously useful for things like ID channels.
- Allow combinations of these merge options - e.g. so you can "merge 50% of the samples which are within 0.1 units", or "merge the samples within 0.00001 in rgba.red and within 1 unit".
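In rough C++ terms, the percentage mode boils down to something like the sketch below. The `Sample` struct, `mergeOver`, and `reduceByPercent` are hypothetical stand-ins, not DeepC's actual code - the real plugin would read samples out of Nuke's deep API - and `mergeOver` here is just plain premultiplied over-compositing:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical stand-in for a deep sample; a real plugin would read these
// out of Nuke's deep pixel data per channel.
struct Sample {
    float front, back;   // z extent of the sample
    float r, g, b, a;    // premultiplied colour + alpha
};

// Composite the nearer sample over the farther one (premultiplied over),
// producing a single "thick" sample spanning both z ranges.
Sample mergeOver(const Sample& nearer, const Sample& farther) {
    const float k = 1.0f - nearer.a;
    return { std::min(nearer.front, farther.front), std::max(nearer.back, farther.back),
             nearer.r + farther.r * k, nearer.g + farther.g * k,
             nearer.b + farther.b * k, nearer.a + farther.a * k };
}

// Reduce the per-pixel sample count by roughly `percent` (0-100) by folding
// every n-th sample into its predecessor; assumes samples sorted front-to-back.
std::vector<Sample> reduceByPercent(const std::vector<Sample>& in, float percent) {
    if (in.size() < 2 || percent <= 0.0f)
        return in;
    // 50% -> fold every 2nd sample, 10% -> fold every 10th, and so on.
    const std::size_t stride =
        std::max<std::size_t>(2, static_cast<std::size_t>(100.0f / percent));
    std::vector<Sample> out;
    out.reserve(in.size());
    for (std::size_t i = 0; i < in.size(); ++i) {
        if (!out.empty() && i % stride == stride - 1)
            out.back() = mergeOver(out.back(), in[i]);  // merge into previous
        else
            out.push_back(in[i]);
    }
    return out;
}
```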
Good thoughts, thanks. I hadn't considered percentage reduction, but I was considering an option for clamping the sample count, so you could specify no more than 20 samples, for example. Percentage reduction over a threshold might be good, too - reduce by 50% any pixel with more than 20 samples, say.
I started to look into the logic a bit.
The total sample count is doable and all good, so defining a target number of samples is doable. The issue I am having is that some samples come out empty in all channels - the new total sample count is correct, just with wrong/empty values.
So far I have three options in mind to reduce samples:
- Percentage: reduce by a given amount, i.e. 10 would reduce the samples on each pixel to 90%.
- Remove every n-th sample: similar to percentage, but offers a different input method.
- Cap: limit the total number of samples to a user-defined threshold.
However, one thing I want to keep is the original very front and very back sample, so the entire "depth" range is preserved. Only the samples in between those should be removed, IMHO (see the sketch below).
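To make the cap idea concrete, here is roughly what I have in mind, reusing the hypothetical `Sample` struct and `mergeOver` helper from the sketch above (`capSamples` is likewise just an illustrative name). The interior samples get merged into evenly distributed buckets while the first and last samples pass through untouched:

```cpp
#include <cstddef>
#include <vector>

// Cap the per-pixel sample count at `maxSamples`, always keeping the original
// front-most and back-most samples so the pixel's full depth range survives.
std::vector<Sample> capSamples(const std::vector<Sample>& in, std::size_t maxSamples) {
    // Nothing to do, or too few slots to keep front + back + merged interior.
    if (maxSamples < 3 || in.size() <= maxSamples)
        return in;
    std::vector<Sample> out;
    out.reserve(maxSamples);
    out.push_back(in.front());                 // keep the very front sample
    const std::size_t interior = in.size() - 2;
    const std::size_t buckets  = maxSamples - 2;
    for (std::size_t b = 0; b < buckets; ++b) {
        // Evenly distribute the interior samples into `buckets` merged samples.
        const std::size_t lo = 1 + b * interior / buckets;
        const std::size_t hi = 1 + (b + 1) * interior / buckets;  // exclusive
        Sample merged = in[lo];
        for (std::size_t i = lo + 1; i < hi; ++i)
            merged = mergeOver(merged, in[i]);
        out.push_back(merged);
    }
    out.push_back(in.back());                  // keep the very back sample
    return out;
}
```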
What I don't understand, TBH, is "Merge by z-depth threshold - the obvious case".
"Merge by z-depth threshold" is something like, "merge samples which are closer together than .1 units". So if you had samples like:
```
1
1.1
1.2
1.5
1.6
1.7
2
```
With merge threshold set to .1, you'd get something like:
```
1-1.2
1.5-1.7
2
```
Basically, in pseudo-code:
```
if sample2.front - sample1.back < threshold:
    mergeOver(sample1, sample2)
```
And in my conception, you would get "thick" samples out, where the new sample's front is the first sample's front, and its back is the furthest sample's back.
But that may or may not be the ideal way to do it.
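If it helps, that whole merge loop is only a few lines. A minimal sketch, again assuming the hypothetical `Sample` struct and premultiplied-over `mergeOver` from the earlier sketches, and samples sorted front-to-back:

```cpp
#include <vector>

// Merge any run of samples whose gaps are smaller than `threshold` into one
// "thick" sample: it keeps the first sample's front, extends to the furthest
// sample's back, and over-composites the colours.
std::vector<Sample> mergeByZThreshold(const std::vector<Sample>& in, float threshold) {
    std::vector<Sample> out;
    out.reserve(in.size());
    for (const Sample& s : in) {
        if (!out.empty() && s.front - out.back().back < threshold)
            out.back() = mergeOver(out.back(), s);  // fold into the current cluster
        else
            out.push_back(s);                       // gap >= threshold: new cluster
    }
    return out;
}
```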
The reason I say that's the "obvious" case is 1) that's how renderers handle aggregating deep samples (Mantra I know for sure does it like this, and I'm pretty sure others do too) and 2) it will keep the spatial distribution of the samples in a sensible way. If there are three "clusters" of samples, there will still be three "clusters" out, so long as the clusters are further apart than the threshold.
It solves cases where the threshold wasn't set correctly in the renderer, so your hard-surface render has something like 600 samples per pixel, all within .0001 units of each other.
That makes a lot of sense, I have to admit.
Is there any code related to this plugin? I need to implement similar functionality to speed up my Deep nodes.
If not, I was thinking of just creating a set of helper functions in a separate file that can be shared between plugins.