George Necula

Results: 16 comments by George Necula

Allowing re-entrant calls is a larger project, but I will look into providing a better message. The rule is that when a callback executes on a device it blocks the...

Inside a vmap computation you will see a single call to the host, with the entire batch. You'd have to write your host function to split the data into batches...
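
For illustration, here is a minimal sketch of that pattern using `jax.pure_callback` (the specific callback API and the `vectorized` keyword are assumptions about the JAX version, not necessarily what the original thread used): under `vmap` the host function is invoked once with the whole batch and has to deal with the leading batch axis itself.

```python
import jax
import jax.numpy as jnp
import numpy as np

def host_fn(x):
    # Runs on the host with a NumPy array. Under vmap it receives the
    # entire batch at once, so it must handle the leading batch axis
    # itself (split it into smaller chunks here if the host code needs that).
    return np.sort(np.asarray(x), axis=-1)

def f(x):
    # The result shape describes one per-example output; vectorized=True
    # declares that host_fn already knows how to process a whole batch.
    return jax.pure_callback(
        host_fn,
        jax.ShapeDtypeStruct(x.shape, x.dtype),
        x,
        vectorized=True,
    )

xs = jnp.arange(12.0).reshape(4, 3)[:, ::-1]
print(jax.vmap(f)(xs))  # host_fn is called once, with an array of shape (4, 3)
```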

In general, it is not possible to add a proper batching rule for `call_tf` because the called function can in principle be an arbitrary TF function for which one cannot...
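
One workaround consistent with this constraint, sketched below under the assumption that you control the TF function: write the TF side so it handles a leading batch dimension itself, and call it on the full batch directly instead of relying on `vmap` to derive a batching rule (the function names here are illustrative).

```python
import tensorflow as tf
import jax.numpy as jnp
from jax.experimental import jax2tf

# A TF function written to accept a whole batch, so no JAX batching rule
# for call_tf is needed.
def tf_batched_fn(x):
    return tf.sin(x) * tf.cast(tf.shape(x)[-1], x.dtype)

jax_fn = jax2tf.call_tf(tf_batched_fn)

xs = jnp.linspace(0.0, 1.0, 8).reshape(4, 2)
# Call on the full batch directly rather than jax.vmap(jax_fn)(xs).
print(jax_fn(xs))
```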

Your solution should work in principle (I have not checked all the details), but I do not feel that it is a solution that we want to upstream to...

It seems that this is only for the case `enable_xla=False`. @marcvanzee PTAL
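
For context, `enable_xla=False` refers to the jax2tf conversion mode that avoids XLA-specific TF ops. A minimal sketch of how the flag is passed, assuming a jax2tf version that still accepts it:

```python
import jax.numpy as jnp
from jax.experimental import jax2tf

def f(x):
    return jnp.sin(x) * 2.0

# enable_xla=False asks jax2tf to emit only standard TF ops (no XLA custom
# ops), which restricts the set of JAX primitives that can be converted.
tf_f = jax2tf.convert(f, enable_xla=False)
print(tf_f(jnp.arange(3.0)))
```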

This may have been fixed incidentally by a very recent change #11816. Can you please try again at HEAD?

I think that this is not specific to multi-GPU, but can happen even with one GPU (randomly). I think it is related to #4374. There are two fixes possible: fix...

There are two updates. It turns out that the infeed/outfeed in XLA:GPU is not so easy to fix for multi-GPU. So that hope has gotten dimmer. The second update is...