
apply_to_vector to lists of vectors raises ValueError("Vector size unsupported")

Open antoniotejada opened this issue 4 years ago • 6 comments

The documentation states that apply_to_vector can be used to transform a list of vectors (instead of calling apply_to_vector on each one in a Python loop):

"""
    :param numpy.array vec: The vector to modify.
        Can be a list of vectors.
"""

https://github.com/adamlwgriffiths/Pyrr/blob/34802ba0393a6e7752cf55fadecd0d7824042dc0/pyrr/matrix44.py#L196

Unfortunately, the vec.size check in both the matrix33 and matrix44 implementations above prevents that and raises ValueError("Vector size unsupported") instead.
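Something like this reproduces it with the version linked above (the matrix and vector values are arbitrary):

```python
import numpy as np
from pyrr import matrix44

mat = matrix44.create_from_translation([1.0, 2.0, 3.0])
vecs = np.random.rand(1000, 3)  # a "list" of 1000 3D vectors

# Transforming a single vector works:
one = matrix44.apply_to_vector(mat, vecs[0])

# Passing the whole array raises ValueError("Vector size unsupported"),
# because vecs.size is 3000 here, not 3 or 4:
batch = matrix44.apply_to_vector(mat, vecs)
```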

Calling apply_to_vector in a loop is 12x slower than using numpy natively:

```
Numpy              1000000 vectors in 0.53 seconds, 1893938.98 vectors/second
Pyrr no conversion 1000000 vectors in 6.20 seconds,  161238.31 vectors/second
```
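For reference, the numpy-native path is presumably just one matrix multiply over homogeneous coordinates (a sketch using pyrr's row-vector convention, where a point transforms as vec @ mat; not the exact benchmark code):

```python
import numpy as np
from pyrr import matrix44

mat = matrix44.create_from_translation([1.0, 2.0, 3.0])
vecs = np.random.rand(1_000_000, 3)

# Promote to homogeneous coordinates and transform every vector at once
# (pyrr matrices are row-major and use row vectors, hence vec @ mat).
ones = np.ones((len(vecs), 1), dtype=vecs.dtype)
transformed = (np.hstack([vecs, ones]) @ mat)[:, :3]
```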

antoniotejada · Sep 04 '21

Some changes were applied in #108; can you test whether the latest changes fix this for you?

adamlwgriffiths · Oct 03 '21

Thanks @adamlwgriffiths. Unfortunately it's going to take me some time to get back to this, although I assume that change would fix it.

If you are curious about performance, I gathered a bunch of comparisons for my epycc project.

antoniotejada · Oct 03 '21

No rush. The project looks very cool, thanks for sharing.

adamlwgriffiths · Oct 04 '21

Thanks, and likewise! I used pyrr heavily in some weird STL / editor / 3D graphics playground I'm writing; it made things a lot easier than climbing straight numpy's learning curve.

The slowness of smoothing normals in Python, even with numpy-optimized algorithms, is what made me create epycc. With numpy you are SOL if your algorithm doesn't fit it, and numba is cool, but from the little I've used it I've found it to be a lot of guesswork and trial and error to appease it (also the codegen sucks donkey balls, as you can see in epycc's performance study).

Hopefully one day I'll get my lazy ass around to pushing my playground to github too.

[image] (STLs from mz4250)

antoniotejada · Oct 04 '21

With numpy you really need to vectorise your data so you can do mass transforms. Depending on how the data is already laid out that can be expensive, but it is the better way to do things en masse. You'd probably be better off doing it at either:

  • run time, via a shader, or
  • load time, with caching (and potential re-export), like shader caches; most mesh formats are really not suitable for direct rendering (a rough sketch of the caching idea is below).

Although I'm sure you already figured these out. Looks very cool!
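By load-time caching I mean something like this (a rough sketch; `build` stands in for whatever parsing/smoothing step produces your render-ready buffers, and the cache directory/key scheme is made up):

```python
import hashlib
import os

import numpy as np

def load_cached(path, build, cache_dir=".mesh_cache"):
    """Return build(path)'s dict of numpy arrays, cached in an .npz keyed on
    the source file's hash, so the expensive preprocessing only reruns when
    the file actually changes."""
    os.makedirs(cache_dir, exist_ok=True)
    with open(path, "rb") as f:
        key = hashlib.sha1(f.read()).hexdigest()
    cache_path = os.path.join(cache_dir, key + ".npz")
    if os.path.exists(cache_path):
        with np.load(cache_path) as cached:
            return {name: cached[name] for name in cached.files}
    arrays = build(path)  # e.g. {"vertices": ..., "normals": ...}
    np.savez(cache_path, **arrays)
    return arrays
```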

adamlwgriffiths · Oct 04 '21

Right, so STL is actually pretty friendly: it's just a vertex buffer and a normal buffer of independent tris, with no indices and therefore no shared vertices across faces. This means there's a lot of vertex duplication (~12x in my stats), and normals are per face, not per vertex, so the model looks very faceted.
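Something like this collapses that triangle soup into an indexed mesh and shows the duplication factor (a sketch; `tris` stands in for the (N, 3, 3) float array your STL loader hands you):

```python
import numpy as np

def index_triangle_soup(tris):
    """tris: (N, 3, 3) float array of per-face vertices, as STL stores them.
    Collapses bit-identical positions into a unique vertex buffer plus an
    (N, 3) index buffer and reports the duplication factor."""
    flat = tris.reshape(-1, 3)
    unique_verts, inverse = np.unique(flat, axis=0, return_inverse=True)
    indices = inverse.reshape(-1, 3)
    print("duplication: %.1fx" % (len(flat) / len(unique_verts)))
    return unique_verts, indices
```

(np.unique only merges bit-identical positions, which is fine for STL since the duplicated vertices are exact copies.)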


I wrote several numpy-vectorized algos to do normal sharing/smoothing across faces with angle thresholding, so only normals that are "close enough" are smoothed out and then shared across faces. That not only makes the model look better, it also reduces the vertex count, since now you can share vertices (and normals) across most faces:


The fastest numpy-vectorized algo I wrote does an xyz vertex sort to put the repeated vertices together, then uses predicate masking to sum all the "close enough" normals depending on the angle between them, and that way gets the shared normal value. Still, that takes around 3-5 s for that simple 200K-vertex model on my laptop.
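In case it helps picture it, the thresholding idea looks roughly like this (a readable but unoptimized sketch, not the sorted/predicate-masked version described above; the 30° default is arbitrary):

```python
import numpy as np

def smooth_normals(tris, angle_deg=30.0):
    """Per-corner normals for an (N, 3, 3) triangle soup: average the face
    normals of faces sharing a vertex position, but only those within
    angle_deg of the corner's own face normal, so hard edges stay faceted."""
    # Per-face normals from the triangle edges.
    fn = np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0])
    fn /= np.linalg.norm(fn, axis=1, keepdims=True)

    # Group corners that sit on the same position.
    flat = tris.reshape(-1, 3)
    _, inverse = np.unique(flat, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    face_of_corner = np.repeat(np.arange(len(tris)), 3)

    cos_thresh = np.cos(np.radians(angle_deg))
    corner_normals = np.empty_like(flat)
    for corner, vert in enumerate(inverse):
        own = fn[face_of_corner[corner]]
        # Face normals of every corner sharing this vertex position.
        neighbours = fn[face_of_corner[inverse == vert]]
        # Only average the ones "close enough" to this corner's face normal.
        close = neighbours[neighbours @ own >= cos_thresh]
        n = close.sum(axis=0)
        corner_normals[corner] = n / np.linalg.norm(n)
    return corner_normals.reshape(tris.shape)
```

The per-corner Python loop is exactly the slow part the sort + masking version vectorizes away; this is just to show the angle test.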

antoniotejada · Oct 04 '21