Geeta Chauhan
@smk2007 Thanks for this PR. Can you please share the binaries for the DirectML plugin and add steps for CI integration and basic end-to-end tests with the binaries?
@lgvaz Please use eager-mode deployment for now. TorchScript support for Detectron models is coming as part of the PyTorch 1.8 release next week.
@KhyatiMehta3 HTTPS is not required to access the Inference API from outside; you can access it over plain HTTP. Are you facing any issues when using HTTP?
@lxning Please check the CodeBuild errors; the build is failing for all the cases.
@byungguan-yoon This type of problem generally happens due to an NVIDIA driver version mismatch. Please share the output of this command from your RTX 3080: `python -m torch.utils.collect_env`
@amathews-amd I tried compiling the ROCm version of BnB from the rocm_enabled branch, but it is failing with [errors](https://gist.github.com/chauhang/518cbdd437ea6f7c32ab469a75ae8777) on AMD MI250x. Do you have any suggestions for how to resolve...
@pnunna93 I am already using ROCm 6.0 and have added details of the PyTorch environment [here](https://gist.github.com/chauhang/518cbdd437ea6f7c32ab469a75ae8777?permalink_comment_id=5005525#gistcomment-5005525).
Thanks for the suggestions; we welcome contributions from the community. I also suggest testing the training script from chinese-llama on the Llama 2 model. Based on the description, it could...
It would be good to have a debug flag for both CPU and GPU, to sanity-test that the setup is working for the different types of configs.
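A minimal sketch of what such a flag could look like, assuming an argparse-based launcher (the CLI names `--debug` and `--device` and the function names here are hypothetical, not from the actual repo):

```python
import argparse


def build_parser():
    # Hypothetical launcher CLI: --debug switches to a quick sanity pass,
    # --device selects which config (CPU or GPU) to exercise.
    parser = argparse.ArgumentParser(description="Sanity-check launcher (sketch)")
    parser.add_argument("--debug", action="store_true",
                        help="run a quick sanity pass instead of the full run")
    parser.add_argument("--device", choices=["cpu", "gpu"], default="cpu",
                        help="which config to test")
    return parser


def run(argv=None):
    args = build_parser().parse_args(argv)
    if args.debug:
        # Debug mode: only confirm the chosen config is wired up correctly.
        return f"debug: sanity-checking {args.device} setup"
    return f"running full workload on {args.device}"
```

With this shape, `run(["--debug", "--device", "gpu"])` exercises the GPU sanity path and `run([])` falls through to the normal CPU run, so one flag covers both configs.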
@namannandan What is the work remaining for this PR? cc: @mreso