
5 issues by IlyaMescheryakov1402

How can I add a hyperparameter section like "custom_field" in the screenshot below, but for a pipeline built with decorators? ![image](https://user-images.githubusercontent.com/58298387/161090497-8217c46f-8683-4b30-b0a0-bdbb2d8e1e21.png) I have too many hyperparameters in the pipeline and all of...
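
A minimal sketch of one way to get a named section: the ClearML SDK's `Task.connect(mutable, name=...)` groups a dict of parameters under that section in the web UI. The section name "custom_field" matches the screenshot; the parameter names and the `register_params` helper are illustrative, and the task lookup (e.g. `Task.current_task()` inside a `@PipelineDecorator.component`) is left as a comment so the sketch runs without clearml installed.

```python
# Sketch: group many pipeline hyperparameters under one named UI section.
# Assumes the clearml SDK's Task.connect(mutable_dict, name=...) call,
# which groups the dict under that section instead of the default "Args".

custom_params = {
    "learning_rate": 0.01,  # illustrative names and values
    "batch_size": 32,
    "warmup_steps": 500,
}

def register_params(task, params, section="custom_field"):
    """Attach `params` to a ClearML task under a named section.

    `task` is expected to be a clearml.Task (e.g. Task.current_task()
    inside a @PipelineDecorator.component); passing None makes this a
    no-op so the sketch runs without clearml installed.
    """
    if task is not None:
        task.connect(params, name=section)
    return params

# Inside a pipeline component you would call:
#   from clearml import Task
#   register_params(Task.current_task(), custom_params)
params = register_params(None, custom_params)  # no-op without a live task
```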

Restarting `clearml-serving-inference` on `torch.cuda.OutOfMemoryError: CUDA out of memory.` lets the inference container clear GPU memory. This would be useful for LLM inference in the `clearml-serving-inference` container (requires https://github.com/allegroai/clearml-helm-charts/blob/main/charts/clearml-serving/templates/clearml-serving-inference-deployment.yaml#L74 to be set to 1)...
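
The restart-on-OOM idea can be sketched as a small request wrapper: on CUDA OOM the process exits non-zero, so the orchestrator (docker-compose or a k8s deployment, with the replica setting above) brings the container back up with a clean GPU. `CudaOutOfMemoryError` here is a stand-in for `torch.cuda.OutOfMemoryError` so the sketch runs without torch; `serve_request` is a hypothetical helper, not part of clearml-serving.

```python
import sys

class CudaOutOfMemoryError(RuntimeError):
    """Stand-in for torch.cuda.OutOfMemoryError, so this sketch runs
    without torch installed."""

def serve_request(handler, payload):
    """Run one inference call; on CUDA OOM, exit the process so the
    container orchestrator restarts it with freed GPU memory."""
    try:
        return handler(payload)
    except CudaOutOfMemoryError:
        # A non-zero exit makes docker-compose / k8s restart the
        # container, which reliably reclaims GPU memory that the
        # process itself could not free after the OOM.
        sys.exit(1)
```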

Hello! I use the free ClearML tier (the one without the configuration vault) together with the clearml-serving module. When I spun up _docker-compose_ and tried to pull a model from our S3, I got an error...
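
Without the configuration vault, S3 credentials typically have to be supplied locally via `clearml.conf`. A minimal sketch of the relevant fragment, with placeholder values (the exact region/endpoint settings depend on your S3 setup):

```
sdk {
    aws {
        s3 {
            # Placeholder credentials; replace with your own.
            key: "ACCESS_KEY"
            secret: "SECRET_KEY"
            region: "us-east-1"
        }
    }
}
```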

Add parser arguments for `k8s_glue_example.py`: `k8s_pending_queue_name`, `container_bash_script`, `pod_name_prefix`, `limit_pod_label`, `force_system_packages`, and `debug` (all of them already exist in https://github.com/allegroai/clearml-agent/blob/master/clearml_agent/glue/k8s.py). Also add `pod_name_prefix`, `limit_pod_label`, and `force_system_packages` to the docstring in `k8s.py`.
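
The proposed flags could be wired up with `argparse` roughly as follows. The flag spellings, types, and defaults are illustrative assumptions; the authoritative parameter names and semantics come from `clearml_agent/glue/k8s.py`.

```python
import argparse

# Sketch of the extra CLI flags the issue asks for in k8s_glue_example.py.
parser = argparse.ArgumentParser(description="ClearML k8s glue example (sketch)")
parser.add_argument("--k8s-pending-queue-name", type=str, default=None,
                    help="Queue holding tasks pending k8s scheduling")
parser.add_argument("--container-bash-script", type=str, default=None,
                    help="Bash script to run inside the task container")
parser.add_argument("--pod-name-prefix", type=str, default=None,
                    help="Prefix for the names of created pods")
parser.add_argument("--limit-pod-label", action="append", default=None,
                    help="Label selector limiting which pods are counted")
parser.add_argument("--force-system-packages", action="store_true",
                    help="Force use of system site-packages in the pod")
parser.add_argument("--debug", action="store_true",
                    help="Enable debug logging")

# Example invocation (argparse maps --pod-name-prefix to pod_name_prefix):
args = parser.parse_args(["--pod-name-prefix", "clearml-agent", "--debug"])
```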

https://github.com/allegroai/clearml-serving/pull/75 mentioned that `CLEARML_SERVING_NUM_PROCESS` has to be 1 (for a k8s instance it can be set in https://github.com/allegroai/clearml-helm-charts/blob/main/charts/clearml-serving/templates/clearml-serving-inference-deployment.yaml#L77C21-L77C48); this PR sets the same variable for the docker-compose instance. It also...
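
In docker-compose terms, the change amounts to pinning the variable in the inference service's environment. A sketch of the relevant fragment (the service name is illustrative; the variable name and value come from the PR discussion above):

```
services:
  clearml-serving-inference:
    environment:
      CLEARML_SERVING_NUM_PROCESS: "1"
```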