What should happen when applying opacity less than 1 to a node with descendants?
Hello Javi,
If we have a node (SceneNode), and it has many descendants (a sub tree), what should happen when we apply opacity with a value less than 1 to it?
1. It, and all of its children, should become transparent, with opacities multiplying down the scene graph.
2. Its content should become transparent, but not that of its descendants.
3. Same as (1), but the whole object should be flattened into a plane.
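To make option (1) concrete, here is a minimal sketch of a render traversal that multiplies opacity down the graph. The class and method names (`SceneNode`, `render`) are illustrative, not from any particular engine:

```javascript
// Sketch of option (1): each node is drawn at its own opacity multiplied
// by the effective opacity of every ancestor above it.
class SceneNode {
  constructor(opacity = 1) {
    this.opacity = opacity;
    this.children = [];
  }

  add(child) {
    this.children.push(child);
    return child;
  }

  // Walk the subtree, accumulating opacity. `out` collects the effective
  // opacity of each node, standing in for actually drawing its content.
  render(parentOpacity = 1, out = []) {
    const effective = this.opacity * parentOpacity;
    out.push(effective); // draw this node's own content at `effective`
    for (const child of this.children) child.render(effective, out);
    return out;
  }
}
```

A root at opacity 0.5 with a child also at 0.5 would render the child at 0.25 — the multiplication propagates, which is exactly what options (2) and (3) would not do.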
The reason I want your opinion is because the new specs for CSS 3D (css-transforms) state here that the answer is (3).
To see what I mean, view these two examples in Chrome 53 or higher:
- https://jsfiddle.net/trusktr/ymonmo70/16
- https://jsfiddle.net/trusktr/ymonmo70/17
The second example has opacity applied to the node that contains all the sub-nodes which compose the car. The opacity, according to spec, flattens the car into a plane (turns it into paper!).
I'm asking various 3D engine authors for their opinions on what the correct or most expected behavior should be.
All the best, ~ Joe
Hi Joe:
It's a question I have stumbled upon many times in the past.
I usually go with the first answer, mostly because it is the fastest one, and the one that gives the most freedom. What if I only want the first object, and not the descendants, to reduce opacity? With answers 2 and 3 I wouldn't be able to do that; I would have to create an empty root node and add the children to it.
In my case (option 1), if the user wants the second behavior, they can crawl the descendants manually (or you can supply a helper function for that). That's because I like to design in an imperative mode: you perform the actions in code, not by declaration.
But I understand that you prefer a more declarative approach (like XML3D), where you set properties on branches of the tree and the system knows what to do.
On the other hand, the third option is the most hardware-consuming, so I never considered doing it.
My answer is that you must stick to one clear conceptual design; in my case that is freedom and performance, in your case it could be simplicity and ease of use.
Good luck with your engine, seems very promising.
Hi Javi,
> Good luck with your engine, seems very promising.
Thanks! :]
> I usually go with the first answer, mostly because it is the fastest one, and the one that gives the most freedom. What if I only want the first object, and not the descendants, to reduce opacity? With answers 2 and 3 I wouldn't be able to do that; I would have to create an empty root node and add the children to it.
I think that option 2 is a subset of option 1. In option 2, opacity is applied to the object(s) at the target node, and the algorithm simply does not continue to the child nodes. In option 1, the algorithm applies opacity to the target node just like in option 2, then traverses to the children. So option 2 might actually give more freedom: the user can make just the target node transparent, and can optionally traverse the tree manually to apply opacity to the children. With option 1, if opacity multiplies by default and there is no way to stop it, the user can't achieve the behavior of option 2.
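Sketching that point (the node shape and function names here are hypothetical, just for illustration): with option (2) as the primitive, the multiplying behavior of option (1) is something the user can opt into by walking the subtree themselves.

```javascript
// Minimal node shape for the sketch.
const node = (opacity = 1, children = []) => ({ opacity, children });

// Option (2) as the primitive: opacity affects only the target node's
// own content; descendants are untouched.
function setOpacity(n, value) {
  n.opacity = value;
}

// Recovering option (1) on demand: walk the subtree and multiply each
// descendant's opacity by the given value.
function setOpacityDeep(n, value) {
  n.opacity *= value;
  for (const child of n.children) setOpacityDeep(child, value);
}
```

So `setOpacityDeep(root, 0.5)` reproduces the cascading effect, while plain `setOpacity` keeps it scoped to one node — which is the extra freedom described above.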
Either way, would you agree that option 3 is the least desirable behavior because it is modifying the observed 3D structure by moving its vertices onto a flat plane?
Sorry, I meant option 2, not option 1.
About option 3, there are ways to do it without flattening. For instance, in WebGL you could first render the node and its descendants writing only to the depth buffer, and then render again with alpha. But in your case, using CSS, I don't know if you could achieve the effect.
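A rough sketch of that two-pass idea: the `gl` calls below are standard WebGL state calls, but the function and its wiring (`drawGroupWithOpacity`, the `drawSubtree` callback) are made-up names for illustration, not any engine's API.

```javascript
// Two-pass "group opacity" without flattening: pass 1 fills the depth
// buffer so interior surfaces are occluded; pass 2 blends only the
// front-most surface at each pixel, so the group fades as one unit.
function drawGroupWithOpacity(gl, drawSubtree, opacity) {
  // Pass 1: depth only — no color writes.
  gl.colorMask(false, false, false, false);
  gl.depthMask(true);
  drawSubtree({ opacity: 1 });

  // Pass 2: color only — gl.EQUAL passes just the fragments that won
  // the depth test in pass 1, drawn with alpha blending.
  gl.colorMask(true, true, true, true);
  gl.depthMask(false);
  gl.depthFunc(gl.EQUAL);
  gl.enable(gl.BLEND);
  gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
  drawSubtree({ opacity });

  // Restore common defaults.
  gl.depthMask(true);
  gl.depthFunc(gl.LESS);
  gl.disable(gl.BLEND);
}
```

The key point is that the geometry never moves — unlike the spec's option (3), the car stays 3D; only its back faces and interior surfaces stop showing through.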