albanD

263 comments by albanD

Note that broadcasting will happen automatically, so you don't need to call `broadcast_to` and can just do `a.where(a != 0, torch.tensor(not_zero))`. This happens because we don't allow float...
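
As a minimal sketch of that call (the tensor values and `not_zero` here are made up for illustration), the scalar replacement is broadcast to `a`'s shape automatically:

```python
import torch

# The 0-d replacement tensor is broadcast to a's shape; no explicit broadcast_to call.
a = torch.tensor([[0.0, 1.0], [2.0, 0.0]])
not_zero = 5.0
out = a.where(a != 0, torch.tensor(not_zero))
print(out)  # tensor([[5., 1.], [2., 5.]])
```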

Sure! The first thing to try would be to write a test for this case and modify the entry in native_functions.yaml to add a method variant and see if that...
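
For illustration, a hypothetical `native_functions.yaml` entry for an op named `foo` (the real entry for the op in question will have different fields); the `variants` line is what controls whether a `Tensor` method binding is generated in addition to the free function:

```yaml
# Hypothetical entry: listing "method" under variants generates Tensor.foo()
# alongside torch.foo().
- func: foo(Tensor self, Tensor other) -> Tensor
  variants: function, method
```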

Hey! You can check the contributing doc for details of the compile flag. But in this case you can build with `USE_DISTRIBUTED=0` to disable distributed (and thus gloo) altogether. That...
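
For example, assuming a local source checkout set up as described in the contributing doc, the rebuild might look like this:

```bash
# Sketch: disable distributed support (and therefore gloo) for this build.
USE_DISTRIBUTED=0 python setup.py develop
```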

I think the simplest steps are:
- On the GitHub UI, fork PyTorch
- Add your fork as a new remote in your local git (`git remote add name same_url_as_the_one_you_give_to_git_clone`)...
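
A rough sketch of those commands, with the remote name and fork URL as placeholders:

```bash
# Fork PyTorch on the GitHub UI first, then:
git clone https://github.com/pytorch/pytorch.git
cd pytorch
# Add your fork as a second remote (the name "myfork" and the username are placeholders).
git remote add myfork https://github.com/<your-username>/pytorch.git
git fetch myfork
```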

> Is Torch Script going to be deprecated soon?

@naveenthangudu this is being looked into. Also given that there are no maintainers from the core team working on it, I...

cc @SherlockNoMad @ezyang for questions about torch.compile integrations

Skipping sanity check because these are generated files.

Hi, could you share the code that leads to this, please?

> regarding inplace ops, I locally confirmed the version seems to be bumped appropriately

I think this is only because you're on CPU: the slow code calls the single-Tensor op...
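
A small sketch of what "the version is bumped" means here, using the internal `_version` counter purely for illustration:

```python
import torch

# Every in-place op bumps the tensor's version counter, which autograd
# uses to detect that a saved tensor was modified in place.
t = torch.zeros(3)
print(t._version)  # 0
t.add_(1.0)        # in-place op
print(t._version)  # 1
```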

There is a `@pytorchbot label` command you can use if you need to add a label (like the `ciflow/trunk` one) but are not allowed to add it yourself.
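
For instance, a PR comment along these lines should ask the bot to apply the label (assumed syntax; check the pytorchbot help for the exact form):

```
@pytorchbot label "ciflow/trunk"
```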