nos
⚡️ A fast and flexible PyTorch inference server that runs locally, on any cloud, or on dedicated AI hardware.
```bash
faceswap-nos-server-1 | Traceback (most recent call last):
faceswap-nos-server-1 |   File "/opt/agi-pack/conda/envs/nos-py310/lib/python3.10/site-packages/grpc/_server.py", line 552, in _call_behavior
faceswap-nos-server-1 |     response_or_iterator = behavior(argument, context)
faceswap-nos-server-1 |   File "/opt/agi-pack/conda/envs/nos-py310/lib/python3.10/site-packages/nos/server/_service.py", line 169, in GetModelInfo
...
```
When registering custom models with the new `yaml` flow, `module_exec` fails to find the module attribute corresponding to the class name. This appears to be due to the `__pycache__` storing...
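As a workaround sketch (not part of nos itself), stale bytecode caches under the custom-model source tree can be cleared before registration so the class is re-imported from the freshly edited source; the path below is illustrative.

```python
# Hypothetical workaround: remove stale __pycache__ directories under the
# custom model's source tree before registering it, so the module is
# recompiled from the current source. The path below is illustrative.
import shutil
from pathlib import Path


def clear_pycache(root: str) -> None:
    for cache_dir in Path(root).rglob("__pycache__"):
        shutil.rmtree(cache_dir, ignore_errors=True)


clear_pycache("models/my_custom_model")
```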
- rolling average CPU / GPU memory utilization (see the sketch after this list)
- input shapes, batch size, shm region, etc.
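A minimal sketch of what tracking one of these metrics could look like, assuming a fixed-size rolling window over PyTorch's CUDA memory counters; none of the names below are nos APIs.

```python
# Sketch: rolling-average GPU memory utilization over a fixed window.
# CPU utilization could be sampled similarly (e.g. via psutil).
from collections import deque

import torch


class RollingGPUMemory:
    def __init__(self, window: int = 100):
        self.samples = deque(maxlen=window)

    def sample(self) -> None:
        if torch.cuda.is_available():
            # torch.cuda.memory_allocated() returns bytes; convert to GB.
            self.samples.append(torch.cuda.memory_allocated() / 1e9)

    @property
    def average(self) -> float:
        return sum(self.samples) / len(self.samples) if self.samples else 0.0
```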
Based on https://github.com/ray-project/ray/issues/19425

See:

- https://conda.github.io/conda-pack/
- https://github.com/triton-inference-server/python_backend#creating-custom-execution-environments
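For reference, conda-pack exposes a Python API in addition to its CLI; a minimal sketch of packaging a named environment into a relocatable tarball (the environment name and output path here are illustrative, not tied to nos):

```python
# Sketch: package a named conda environment with conda-pack's Python API.
# "nos-py310" and the output filename are illustrative.
import conda_pack

conda_pack.pack(name="nos-py310", output="nos-py310.tar.gz")
```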
```
E   predictions = self.model(images)
E   File "/home/spillai/miniconda3/envs/nos-py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
E     return forward_call(*args, **kwargs)
E   File "/home/spillai/.cache/torch/hub/Megvii-BaseDetection_YOLOX_main/yolox/models/yolox.py", line 30, in forward
E     fpn_outs = self.backbone(x)
E   File "/home/spillai/miniconda3/envs/nos-py38/lib/python3.8/site-packages/torch/nn/modules/module.py",
...
```