bhack
This is also why I suggest unifying the base image on the nightly one for [`.devcontainer`](https://github.com/pytorch/pytorch/blob/main/.devcontainer/Dockerfile) as well. /cc @drisspg It is hard to maintain coherence across all these developer entry points...
Ok, thanks for the clarifying details. I suppose we could still use this as a feature request, right?
On their official Docker Hub images I see the cudnn layer: https://hub.docker.com/layers/nvidia/cuda/12.4.1-cudnn-runtime-ubuntu20.04/images/sha256-de9acc3e4d3aace101541899b95f6a4af897994124713f7131dffaf9967cb514?context=explore
@atalman Do we just need to wait for https://github.com/pytorch/pytorch/issues/119400?
This is also why I suppose we have a lot of opportunities for misalignment between CI, the nightly devel images and `.devcontainer`: in the CI we "manually" install cudnn, so we have diverged...
But we release both `-devel` and `-runtime` nightly images every day, and I've always built extensions against the latest PyTorch code using `nightly-devel` as a build-stage image so that...
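To make that workflow concrete, here is a minimal multi-stage sketch of the idea (the image tags, the `my_extension` path and the `/opt/conda` prefix are illustrative placeholders, not the exact published names):

```dockerfile
# Build stage: compile the extension against the latest nightly PyTorch
# with the full CUDA toolchain available in the -devel image.
# Image tags and paths below are placeholders for illustration only.
FROM ghcr.io/pytorch/pytorch-nightly:latest-devel AS build
WORKDIR /workspace
COPY my_extension/ ./my_extension/
RUN pip install --no-build-isolation ./my_extension

# Runtime stage: start from the slimmer -runtime image and copy over
# the environment that was built in the previous stage.
FROM ghcr.io/pytorch/pytorch-nightly:latest-runtime
COPY --from=build /opt/conda /opt/conda
```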
> Also, can you please clarify: are you using cudnn in your extension? Or is it just a quirk of the cmake system that searches for one even if the extension does...
I think we can discuss this in a new ticket if you want. But just to make a few quick points without derailing this ticket:
- Because of https://github.com/pytorch/pytorch/issues/125297, contributing C++/CUDA code...
> We are building PyTorch using pytorch/manylinux-builder:cuda12.1-main to build official PyTorch wheels, so we should probably encourage users to use it. (For example it contains magma binaries, which were never part...
Thanks. Is this enough to isolate a failing compiled function in a minimal-repro format? E.g. if we consider a recent reported stacktrace, https://github.com/pytorch/pytorch/issues/126614#issuecomment-2122567229, that failure could be generated by...