Jake Vanderplas


Thanks for the report - I suspect this may be related to a known bug in dot_general with preferred_element_type on GPU. Note that currently, we skip tests that exercise this...
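As a rough illustration of the kind of call that exercises this code path (a sketch, not from the original report; the float16/float32 combination is illustrative):

```python
import jax.numpy as jnp
from jax import lax

# Low-precision inputs accumulated in a wider dtype via preferred_element_type.
x = jnp.ones((4, 8), dtype=jnp.float16)
y = jnp.ones((8, 4), dtype=jnp.float16)

out = lax.dot_general(
    x, y,
    dimension_numbers=(((1,), (0,)), ((), ())),  # contract x's axis 1 with y's axis 0
    preferred_element_type=jnp.float32,
)
print(out.dtype)  # float32
```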

Thanks for the report – XLA's floating-point power computation tends to be inaccurate, particularly on GPU (note that your test case passes on CPU). This is a known issue,...
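A minimal sketch of the kind of comparison involved (the specific values are illustrative, not taken from the original test case):

```python
import numpy as np
import jax.numpy as jnp

# Compare XLA's pow against NumPy's; on GPU the results can differ in the
# low-order bits, while on CPU they typically agree.
x = np.float32(1.7)
y = np.float32(9.3)
print(jnp.power(x, y))
print(np.power(x, y))
```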

I don't know of any reason it hasn't been implemented - feel free to open a PR if you'd like to contribute!

Thanks for the report! The behavior does not arise on CPU, so I suspect this is an XLA:GPU issue. GPU hardware in general does not have good support for 64-bit...
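For context, 64-bit values only arise in JAX when x64 mode is explicitly enabled; a minimal sketch of that configuration (not taken from the original report):

```python
import jax
import jax.numpy as jnp

# By default JAX uses 32-bit types everywhere; enabling x64 mode opts in
# to float64/int64, which is where GPU behavior can diverge from CPU since
# most GPUs have limited double-precision support.
jax.config.update("jax_enable_x64", True)

x = jnp.arange(4.0)  # float64 once x64 mode is enabled
print(x.dtype, jax.default_backend())
```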

I wonder if it would be possible to isolate the complex128 multiplication issue with a simpler XLA program? If we could do that, we would have a better chance of a...
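One way to extract such a program, sketched here under the assumption of a recent JAX version where ``jit(...).lower(...).as_text()`` is available:

```python
import jax
import jax.numpy as jnp

jax.config.update("jax_enable_x64", True)  # complex128 requires x64 mode

def f(a, b):
    return a * b

a = jnp.array([1.0 + 2.0j, 3.0 - 4.0j], dtype=jnp.complex128)
b = jnp.array([0.5 - 1.5j, 2.0 + 0.5j], dtype=jnp.complex128)

# Dump the HLO for a bare complex128 multiply; this text can then be run
# or inspected with XLA tooling, independent of the larger program.
print(jax.jit(f).lower(a, b).as_text())
```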

Colab will give you one of several GPU types depending on availability; you can run ``!cat /var/colab/hostname`` to quickly see the type of GPU backend you were assigned (P100 is...
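From inside the notebook, a quick sketch of how to see the same information via JAX itself (these are standard JAX device attributes, not something from the original comment):

```python
import jax

# List the accelerators JAX sees; device_kind is typically the GPU model
# name Colab assigned (e.g. a T4, P100, or V100).
for d in jax.devices():
    print(d.platform, d.device_kind)
```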

Hi - thanks for the report. The code you linked to is in the GPU translation rule. Just to confirm: are you running this on a GPU?
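A quick, generic check for this (not taken from the thread itself):

```python
import jax

# Prints "gpu" when running on a GPU backend, "cpu" otherwise; the GPU
# translation rule is only exercised in the former case.
print(jax.default_backend())
```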

Now that I look at it, there is a similar pattern for CPU. Can you say more about what issue this difference in implementation between batched and unbatched results is...

Here's a more to-the-point demonstration of the difference in behavior between batched & unbatched inverse (run on colab CPU):

```python
import jax.numpy as jnp
x = jnp.array([[[1, 2, 3], [4,...
```
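For reference, a self-contained sketch of that comparison; everything past the truncation is assumed, including the use of the standard singular matrix [[1, 2, 3], [4, 5, 6], [7, 8, 9]]:

```python
import jax.numpy as jnp

# Assumed reconstruction: a singular 3x3 matrix (rows are linearly dependent).
a = jnp.array([[1., 2., 3.],
               [4., 5., 6.],
               [7., 8., 9.]])

# Unbatched inverse: goes through the single-matrix code path.
print(jnp.linalg.inv(a))

# Batched inverse of the same matrix: goes through the batched code path,
# which can use a different algorithm and handle singular inputs differently.
print(jnp.linalg.inv(a[None])[0])
```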

Yes, it's singular. My interpretation of the issue is that it boils down to batched vs non-batched inverses using different algorithms that handle ill-posed inputs differently; my example was meant...