cusignal
[FEA] Add Option to Install PyTorch with Conda Jetson Env
Is your feature request related to a problem? Please describe. The end-to-end example notebook uses PyTorch, but PyTorch isn't present in the conda environment when building on a Jetson.
Describe the solution you'd like Can you make fetching and installing the PyTorch wheel an option in the Jetson build script? I've got a script that does this, but it would be nice if it were integrated into the cuSignal tooling and tested there.
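To illustrate what "an option in the build script" could mean, here is a minimal sketch of opt-in flag handling; the `--pytorch` flag name is hypothetical and the actual fetch/install is left as a placeholder echo rather than a real download:

```shell
#!/usr/bin/env bash
# Sketch only: a hypothetical --pytorch opt-in for the Jetson build script.
# The real script would replace the echo with fetching and pip-installing
# the NVIDIA-hosted Jetson PyTorch wheel.

install_pytorch_if_requested() {
  local want=0 arg
  for arg in "$@"; do
    [ "$arg" = "--pytorch" ] && want=1
  done
  if [ "$want" -eq 1 ]; then
    echo "installing pytorch wheel"
  else
    echo "skipping pytorch"
  fi
}

# Forward the build script's arguments to the helper.
install_pytorch_if_requested "$@"
```

This keeps the default build unchanged and makes the PyTorch step purely opt-in.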
Good idea, @znmeb! I'll get this actioned in the next couple of days.
Hey @awthomp, I spent some time scouring the conda documentation and couldn't find anything cut-and-dried on how optional dependencies would be specified in a single environment. Would the advised solution be to create a cusignal_jetson_full.yml file, similar to cusignal_full.yml, with the additional dependencies like pytorch?
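For concreteness, a cusignal_jetson_full.yml along those lines might look roughly like this; the package list and channels below are illustrative guesses at what the file would contain, not a verified environment:

```yaml
# Hypothetical cusignal_jetson_full.yml -- illustrative sketch only.
# The real file would mirror cusignal_full.yml plus the Jetson extras.
name: cusignal-jetson-dev
channels:
  - rapidsai
  - conda-forge
dependencies:
  - numpy
  - scipy
  - numba
  # Jetson-specific extra discussed in this issue; this only works if an
  # aarch64/Jetson build of pytorch actually exists on a conda channel:
  - pytorch
```

The open question in this thread is precisely whether the `pytorch` line can be satisfied from a conda channel on Jetson, or whether a separate wheel-install step is unavoidable.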
As an aside, would this be a good place to specify dependencies like SoapySDR and rtlsdr that are used in the other notebooks as referenced in https://github.com/rapidsai/cusignal/issues/431#issue-1028941606? Or are those such specific use cases that it doesn't make sense to include them?
@cmpadden I like this approach. However, IIRC the PyTorch binary is in a pip or conda repo on x86_64 but is in an NVIDIA-specific repo for the Jetsons. I'm not doing software-defined radio so I have no idea about the others.
I see, thanks for the clarification.
I may have to defer to someone with more experience using NVIDIA-specific dependencies, as I'm not sure if there's an easy way to incorporate the pre-built PyTorch wheel file for the Jetson platform found here into our conda environment.
Our options might be to either have that package released to a conda repository, or to have a script that does something similar to the installation instructions found in the forum post.
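As a dry-run sketch of the script route: emit the fetch-and-install commands rather than run them, so the steps are visible without network access. The wheel URL below is a deliberate placeholder; the real one lives behind NVIDIA's developer downloads and varies by JetPack version:

```shell
#!/usr/bin/env bash
# Dry-run sketch of a forum-post-style install: fetch the NVIDIA-hosted
# Jetson PyTorch wheel and pip-install it. The URL is a placeholder.

build_install_cmds() {
  local url="$1"
  local file
  file="$(basename "$url")"
  # Print the commands instead of executing them.
  printf '%s\n' \
    "wget ${url} -O ${file}" \
    "python3 -m pip install ${file}"
}

build_install_cmds "https://example.invalid/jetson/torch-placeholder.whl"
```

A real version would also need the apt-level prerequisites (e.g. a BLAS and Python dev headers) that the forum post walks through before the `pip install`.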
@cmpadden NVIDIA do have repositories on conda-forge, I think. The build process is straightforward, although the conda-forge folks do have a set of names for things drawn from blacksmiths. :-) One does, of course, need a Jetson (AGX Xavier preferably) to do the actual builds. To get a true CI/CD workflow requires a significant effort.
@cmpadden and @znmeb I'm still trying to figure out the Jetson vs x86 packaging, particularly with CuPy. One of the complications is the CUDA toolkit and the flavors of each of our dependencies, i.e. which CUDA versions they ship pre-built wheels for. I'll spend more time on this over the upcoming weeks, but it's most likely going to be a change on our CI/CD side, AFAIK.
@awthomp Gotcha! Thanks for the insight.
@awthomp I do know that the NGC Jetson ML Docker image has CuPy on it and NVIDIA builds it from source using pip, so there's probably no pre-built wheel to use. See https://github.com/dusty-nv/jetson-containers/blob/master/Dockerfile.ml.
This issue has been labeled inactive-30d due to no recent activity in the past 30 days. Please close this issue if no further response or action is needed. Otherwise, please respond with a comment indicating any updates or changes to the original issue and/or confirm this issue still needs to be addressed. This issue will be labeled inactive-90d if there is no activity in the next 60 days.
This issue has been labeled inactive-90d due to no recent activity in the past 90 days. Please close this issue if no further response or action is needed. Otherwise, please respond with a comment indicating any updates or changes to the original issue and/or confirm this issue still needs to be addressed.