Jason Ansel

199 comments of Jason Ansel

conceptual.stmt:

```
let in_ptr0 = (void *)_halide_buffer_get_host((struct halide_buffer_t *)in_ptr0.buffer)
let out_ptr0 = (void *)_halide_buffer_get_host((struct halide_buffer_t *)out_ptr0.buffer)
let out_ptr0.min.0 = _halide_buffer_get_min((struct halide_buffer_t *)out_ptr0.buffer, 0)
let out_ptr0.extent.0 = _halide_buffer_get_extent((struct halide_buffer_t *)out_ptr0.buffer, 0)
...
```

I can try the 0D version; what is the syntax for that in Python? (`a[] = ...` is not valid Python syntax.) I think there might be something wrong with...
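
A minimal sketch of how a 0-d assignment can be spelled with NumPy-style indexing, assuming PyTorch's empty-tuple and Ellipsis indexing are the intended stand-ins for the invalid `a[] = ...`:

```python
import torch

a = torch.tensor(0.0)  # 0-dimensional tensor
a[()] = 1.0            # empty-tuple index addresses the single element
a[...] = 2.0           # Ellipsis indexing works as well
print(a)               # tensor(2.)
```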

I think the `0x8a73800` pointer is a red herring coming from how the PyTorch CUDA caching allocator works. There is a fast path for scalar constants where PyTorch will stash...
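
As an aside, a minimal sketch of the caching-allocator behavior that makes a repeated pointer value unremarkable (pointer reuse is common here, but not guaranteed):

```python
import torch

a = torch.empty(4, device="cuda")
p1 = a.data_ptr()
del a  # the block returns to the caching allocator, not to cudaFree
b = torch.empty(4, device="cuda")
p2 = b.data_ptr()
print(hex(p1), hex(p2), p1 == p2)  # frequently the same address
```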

Maybe we should also give `stance` a default argument of `"default"`, so people can choose to set only `skip_guard_eval_unsafe`/`force_backend`.
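
A sketch of the resulting call sites, assuming `torch.compiler.set_stance` keeps its context-manager form and these keyword-only arguments:

```python
import torch

@torch.compile
def fn(x):
    return x * 2

fn(torch.randn(4))  # compile once up front

# With stance defaulting to "default", callers could set only the
# keyword-only options they care about:
with torch.compiler.set_stance(skip_guard_eval_unsafe=True):
    fn(torch.randn(4))

with torch.compiler.set_stance(force_backend="eager"):
    fn(torch.randn(4))
```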

I'll write a pass to convert to dimensions where possible. It is often possible, but not always. A good example to illustrate the issue is [torch.as_strided()](https://pytorch.org/docs/stable/generated/torch.as_strided.html), which takes the memory...
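
A small illustration of why raw `(size, stride)` pairs don't always decompose into independent dimensions: `as_strided` can make multiple output elements alias the same storage.

```python
import torch

x = torch.arange(6.0)
# Overlapping sliding windows: stride 1 in both dims means neighboring
# rows share storage, so no equivalent "plain dimensions" view exists.
y = torch.as_strided(x, size=(4, 3), stride=(1, 1))
print(y)
# tensor([[0., 1., 2.],
#         [1., 2., 3.],
#         [2., 3., 4.],
#         [3., 4., 5.]])
```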

Why isn't pytorchbot flagging these as preexisting failures? Can you rebase to viable/strict?