
Fix line threshold on line raycast.

Open whucj opened this issue 2 years ago • 22 comments

Description

The raycast function of Line does not take matrixWorld into account when computing the local threshold, so the ray intersection result and the threshold are not compared at the same scale.

This can be fixed by decomposing matrixWorld to get the proper scale, then computing the local threshold at that same scale.
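As a rough sketch of the idea (hypothetical helper functions, not the actual PR code), the world scale can be read off the basis columns of matrixWorld and used to bring the threshold into local space:

```javascript
// Derive the effective world scale from a column-major 4x4 matrix
// (the layout three.js uses for Matrix4.elements), then convert the
// world-space threshold into local space. Exact only for uniform scale.
function getWorldScale(e) {
  return [
    Math.hypot(e[0], e[1], e[2]),   // length of the x basis column
    Math.hypot(e[4], e[5], e[6]),   // length of the y basis column
    Math.hypot(e[8], e[9], e[10]),  // length of the z basis column
  ];
}

function localThreshold(worldThreshold, matrixWorldElements) {
  const [sx, sy, sz] = getWorldScale(matrixWorldElements);
  return worldThreshold / ((sx + sy + sz) / 3);
}

// A matrix that scales uniformly by 2: a world threshold of 1
// corresponds to a local threshold of 0.5.
const m = [2, 0, 0, 0,  0, 2, 0, 0,  0, 0, 2, 0,  0, 0, 0, 1];
console.log(localThreshold(1, m)); // → 0.5
```

Unlike a `this.scale`-based formula, this picks up scaling applied anywhere in the parent hierarchy, since matrixWorld already composes it.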

whucj avatar Jul 04 '22 07:07 whucj

I suggest raycasting points and lines in world space. That is easy, and it produces the correct result.

FWIW I see how the current behavior is inconvenient for mouse-over use cases but I don't think scaling the threshold is the right solution. With a world-scale threshold lines that are far away are more difficult to select while those that are close to the camera may be impossible to avoid. The same kind of disconnect can happen when zooming in close to or out from a large dataset of lines or points. Ultimately I think a screenspace, pixel-perfect selection mechanism like is implemented for fat lines is better for that use case but has its own limitations, as well.

If the current world-scale-threshold implementation is going to be retained, though, I agree the scale factor should be removed entirely.
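For illustration, the world-space approach can be sketched in plain JavaScript (helper names are made up for this sketch; three.js provides equivalents such as `Vector3.applyMatrix4`): transform each vertex into world space, then compare its distance to the ray against the world-space threshold directly, with no threshold rescaling.

```javascript
// Transform a point by a column-major 4x4 matrix (implicit w = 1).
function applyMatrix4(p, e) {
  return [
    e[0] * p[0] + e[4] * p[1] + e[8] * p[2] + e[12],
    e[1] * p[0] + e[5] * p[1] + e[9] * p[2] + e[13],
    e[2] * p[0] + e[6] * p[1] + e[10] * p[2] + e[14],
  ];
}

// Distance from a point to a ray (origin + t * dir, t >= 0, dir normalized).
function distanceToRay(point, origin, dir) {
  const v = [point[0] - origin[0], point[1] - origin[1], point[2] - origin[2]];
  const t = Math.max(0, v[0] * dir[0] + v[1] * dir[1] + v[2] * dir[2]);
  const c = [origin[0] + t * dir[0], origin[1] + t * dir[1], origin[2] + t * dir[2]];
  return Math.hypot(point[0] - c[0], point[1] - c[1], point[2] - c[2]);
}

// Non-uniform scale (2, 1, 1): the local point (1, 0.5, 0) lands at
// (2, 0.5, 0) in world space, 0.5 units from a ray along +x, so it
// misses a 0.4 threshold but hits a 0.6 one.
const matrixWorld = [2, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  0, 0, 0, 1];
const worldPoint = applyMatrix4([1, 0.5, 0], matrixWorld);
const d = distanceToRay(worldPoint, [0, 0, 0], [1, 0, 0]);
console.log(d <= 0.4, d <= 0.6); // → false true
```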

gkjohnson avatar Jul 07 '22 16:07 gkjohnson

Just for my understanding: Why is the ray intersection test now done in a different coordinate system?

Mugen87 avatar Jul 08 '22 11:07 Mugen87

> Why is the ray intersection test now done in a different coordinate system?

My understanding is it was originally done that way for meshes because it was faster to transform one ray to local than to transform each vertex to world. Whether doing so here is correct is questionable.

As I said above, write the code to produce the correct answer first -- then speed it up if it matters.

WestLangley avatar Jul 08 '22 13:07 WestLangley

Hi, can this modification be merged in version r144?

whucj avatar Jul 29 '22 00:07 whucj

I can't approve this PR since changing the coordinate system so raycasting happens in world space means each vertex is now converted to world space, which ends up in way more vector/matrix multiplications. Without a performance evaluation that compares the performance for complex point clouds and lines, I'm not confident with this change.

I think it's better if raycasting happens in local space.

Mugen87 avatar Jul 29 '22 08:07 Mugen87

It appears to me that the cost of two additional matrix multiplications per rendered line is not measurable -- especially relative to the cost of rendering the scene.

WestLangley avatar Jul 29 '22 13:07 WestLangley

> I can't approve this PR since changing the coordinate system so raycasting happens in world space means each vertex is now converted to world space, which ends up in way more vector/matrix multiplications. Without a performance evaluation that compares the performance for complex point clouds and lines, I'm not confident with this change.
>
> I think it's better if raycasting happens in local space.

The performance results using the webgl_interactive_raycasting_points example are as follows.

| Point count | Local space (before) | World space (after) |
| --- | --- | --- |
| 100 million | 1.4 s - 2.1 s | 2.2 s - 3.9 s |
| 10 million | 0.12 s - 0.21 s | 0.21 s - 0.39 s |
| 1 million | 0.014 s - 0.036 s | 0.023 s - 0.041 s |

whucj avatar Aug 01 '22 09:08 whucj

> 10 million points raycast time-consuming local space before modification: 0.21s - 0.39s world space after modification: 0.12s - 0.21s

I do not believe that is possible. Hence, your test does not appear to produce meaningful results.

Besides, if you are raycasting against millions of points while rendering, you should be using a different technique.

WestLangley avatar Aug 01 '22 10:08 WestLangley

> 10 million points raycast time-consuming local space before modification: 0.21s - 0.39s world space after modification: 0.12s - 0.21s
>
> I do not believe that is possible. Hence, your test does not appear to produce meaningful results.
>
> Besides, if you are raycasting against millions of points while rendering, you should be using a different technique.

I'm very sorry, but the timings for 10 million points were written backwards. Can you test the performance before and after the modification on your computer? Thanks.

whucj avatar Aug 01 '22 12:08 whucj

I think this can be merged as long as @gkjohnson agrees it produces the correct answer.

If someone can demonstrate a modification that is measurably more performant, then that can be addressed in a later PR.

WestLangley avatar Aug 01 '22 22:08 WestLangley

Okay, so revisiting this and taking a look at the code again, I think I and others had misinterpreted the original intent of the logic in master. The current implementation in master scales the line threshold like so:

```js
const localThreshold = threshold / ( ( this.scale.x + this.scale.y + this.scale.z ) / 3 );
```

This is to get the world space threshold in local coordinates to avoid performing matrix operations per vertex in the loop. The current implementation in master has two issues:

  • it does not account for any parent hierarchy scaling
  • it does not work with non-uniform scale factors (or potentially negative scale factors, though maybe this can be fixed by taking the absolute value)
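To make the first issue concrete, here is a hypothetical numeric example:

```javascript
// The master formula only looks at the object's own scale, so a parent's
// scale silently changes the effective selection radius.
function masterLocalThreshold(threshold, ownScale) {
  return threshold / ((ownScale[0] + ownScale[1] + ownScale[2]) / 3);
}

// A child with unit scale sitting under a parent scaled by 10: the world
// threshold should shrink by 10x in local space, but the formula leaves
// it unchanged because the parent's scale never enters the calculation.
console.log(masterLocalThreshold(1, [1, 1, 1])); // → 1 (should be 0.1)
```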

The latest implementation as of 4c68353 fixes both of the above issues. And the original implementation in caafafc just fixes the first.

I'm not sure if this clarifies things for anyone else but against that backdrop I'm wondering if supporting the non-uniform scale case is really worth it. And whether the original implementation in caafafc is "good enough". Thanks for bearing with us @whucj!

gkjohnson avatar Aug 01 '22 23:08 gkjohnson

> Okay so revisiting this and taking a look at the code again I think I and others had misinterpreted the original intent of the logic in master. The current implementation of scaling the line threshold in master is like so:
>
> ```js
> const localThreshold = threshold / ( ( this.scale.x + this.scale.y + this.scale.z ) / 3 );
> ```
>
> This is to get the world space threshold in local coordinates to avoid performing matrix operations per vertex in the loop. The current implementation in master has two issues:
>
> • it does not account for any parent hierarchy scaling
> • it does not work with non-uniform scale factors (or potentially negative scale factors, though maybe this can be fixed by taking the absolute value)
>
> The latest implementation as of 4c68353 fixes both of the above issues. And the original implementation in caafafc just fixes the first.
>
> I'm not sure if this clarifies things for anyone else but against that backdrop I'm wondering if supporting the non-uniform scale case is really worth it. And whether the original implementation in caafafc is "good enough". Thanks for bearing with us @whucj!

Many thanks to everyone who participated in the discussion. I agree with the two problems you identified in the original implementation. In particular, non-uniform scale is not easy to handle in local space. Although computing in world space may cost some performance in large scenes, at least we can get more accurate results. If someone has a better suggestion, I can also make the change in this PR.

whucj avatar Aug 02 '22 00:08 whucj

I have discussed the issue with @WestLangley and I believe we both agree that the original implementation and this new implementation are both correct.

Where they differ is that the latest implementation is slower but provides a more accurate result when a non-uniform scale is involved, by transforming all vertices into world space in the loop before performing the check.

I don't have strong opinions on which should get merged, other than I think we've punted on non-uniform scale support for other features before until someone complained, which may be an argument for the initial, faster implementation. Are there any other strong opinions here? Otherwise I'm happy to see this merged as-is, and we can optimize it further in another PR if needed.

gkjohnson avatar Aug 02 '22 16:08 gkjohnson

I vote for the initial version.

Mugen87 avatar Aug 02 '22 19:08 Mugen87

I'm fine with that and then we can reconsider if non-uniform scaling becomes a problem. @WestLangley is that okay with you? I think it's the simplest solution that doesn't impact performance and gives correct results.

gkjohnson avatar Aug 03 '22 21:08 gkjohnson

After further discussions off-line with @gkjohnson, here is what I think:

  1. Doing the calculations in local space is not correct with non-uniform scale.

  2. three.js supports non-uniform scale (object.scale is a Vector3, not a float), so support for non-uniform scale here is expected.

  3. The cost of the extra matrix multiplications is not significant when rendering -- at least in my tests of frame rate.

  4. In cases where performance is an issue, the user should use a spatial-indexing method instead.

  5. I do not support a faster, but incorrect, calculation.

  6. I support doing the calculations in world space.

If someone can devise a local-space calculation that gets the same answer, then another PR can be filed later.

WestLangley avatar Aug 04 '22 22:08 WestLangley

I can appreciate @WestLangley's push for a correct result in the general case, which will inevitably lead to less confusing, inexplicable behavior for users. Given that, here's what I propose:

  • Merge the current PR as-is, which returns the correct answer for non-uniform scaling at the cost of a small performance difference.
  • If the performance difference is considered a problem, we can make a subsequent PR that introduces a "fast path" that uses the original implementation approach if and only if all scale values are equal.

What are the thoughts on this?
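The fast-path idea could look roughly like this (a sketch with made-up names, not actual three.js code):

```javascript
const EPS = 1e-10;

// Uniform scale means the cheap local-space threshold is exact,
// so the original (faster) code path is safe to use.
function isUniformScale(sx, sy, sz) {
  return Math.abs(sx - sy) < EPS && Math.abs(sy - sz) < EPS;
}

function pickRaycastPath(worldScale) {
  const [sx, sy, sz] = worldScale;
  return isUniformScale(sx, sy, sz) ? "local-space" : "world-space";
}

console.log(pickRaycastPath([2, 2, 2])); // → local-space
console.log(pickRaycastPath([2, 1, 1])); // → world-space
```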

gkjohnson avatar Aug 05 '22 00:08 gkjohnson

> three.js supports non-uniform scale (object.scale is a vector3, not a float), so support for non-uniform scale here is expected.

I don't think this statement is entirely correct, see https://github.com/mrdoob/three.js/issues/15079#issuecomment-431232639. There is partial support. Not all routines like Matrix4.decompose() produce correct results with non-uniform scaling. So I don't think it's right to prioritize correctness over performance in this context.

I would rather suggest stopping, at some point, the support for non-uniform scaling in the object (world) matrix.

Mugen87 avatar Aug 05 '22 07:08 Mugen87

I suppose I see that more as not fully supporting shear transformations, which can be generated by a composition of rotations and non-uniform scales. The reality is that a user can apply a non-uniform scale to a single object with no parents, and the original fix in the PR would not work. The raycasting mechanism here fails to produce correct results when non-uniform scale is used in a way that three.js does consistently support.

But that aside I've proposed a solution that will produce correct results in all cases and will not sacrifice performance in the cases that already work correctly. Is this not something you'd support?

> I would rather suggest to stop at some point the support for non-uniform scaling in the object (world) matrix.

Per-axis scaling is something that every 3D engine I've used supports. There are common 3D widgets and gizmos in three.js and other applications that are designed around adjusting non-uniform scale. It's an expected feature in a 3D engine. Likewise, as far as I remember, nearly every engine I've used also does not cleanly address this shear matrix issue. I think this is a much larger discussion that has implications for diverging three.js' capabilities away from expected 3D transformation features.
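A small numeric sketch of the shear point (plain JavaScript, 2D for brevity): applying a non-uniform scale after a rotation yields a matrix whose basis columns are no longer orthogonal, which is exactly the shear that decompose-style routines cannot represent.

```javascript
// Multiply two 2x2 matrices stored row-major as [a00, a01, a10, a11].
function mul2x2(a, b) {
  return [
    a[0] * b[0] + a[1] * b[2], a[0] * b[1] + a[1] * b[3],
    a[2] * b[0] + a[3] * b[2], a[2] * b[1] + a[3] * b[3],
  ];
}

const c = Math.SQRT1_2;          // cos 45° = sin 45°
const R = [c, -c, c, c];         // rotation by 45°
const S = [2, 0, 0, 1];          // non-uniform scale (2, 1)
const M = mul2x2(S, R);          // parent scale applied after child rotation

// Dot product of the two basis columns of M: a nonzero value means the
// transformed axes are no longer perpendicular, i.e. the result shears.
const dot = M[0] * M[1] + M[2] * M[3];
console.log(dot); // nonzero (≈ -1.5), so the composition is a shear
```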

gkjohnson avatar Aug 05 '22 21:08 gkjohnson

I still vote for the initial version of the PR. If we ever get a report about failing raycasting with non-uniform scale, we can reconsider what you have suggested here:

> If the performance difference is considered a problem we can make a subsequent PR that introduces a "fast-path" that uses the original implementation approach if and only if all scale values are equal.

I've expressed my dislike of non-uniform scaling multiple times, and I think it's an anti-pattern. For all use cases where non-uniform scaling is used, alternative solutions exist. Granted, some of these are more inconvenient; however, you avoid complicated edge cases in the engine that are hard to catch (e.g. via different code paths or explicit checks). Unity, for example, explicitly states to avoid non-uniform scaling whenever possible because it has a negative impact on rendering performance and only limited support in various engine components.

Considering this, I really question adding additional (slower) code paths just to support non-uniform scaling, a technique that should be avoided in the first place.

Mugen87 avatar Aug 06 '22 07:08 Mugen87

@Mugen87 If you want to remove a three.js feature -- Object3D.scale as a Vector3 in this case -- then file a PR that removes it.

In the meantime, we honor the scale value, just like we do everywhere else.

WestLangley avatar Aug 06 '22 15:08 WestLangley

The existing PR is correct, and it is supported by myself and @gkjohnson. Plus, as @gkjohnson explained, it can be further improved if there is a demonstrated performance issue, and still return the correct answer.

WestLangley avatar Aug 06 '22 15:08 WestLangley