                        curl 404 ResourceNotFoundException
Hello,
I am stuck with an error and I am not sure what it means.
When I run curl "http://localhost:8080/models" I get:
{ "code": 404, "type": "ResourceNotFoundException", "message": "Requested resource is not found, please refer to API document." }
I made the .mar file for my model with:
torch-model-archiver -f \
    --model-name=classifier \
    --version=1.0 \
    --serialized-file=pytorch_model.bin \
    --handler=custom_handler.py \
    --extra-files "config.json,index_to_name.json,special_tokens_map.json,tokenizer_config.json,tokenizer.json,training_args.bin,vocab.txt" \
    --export-path=model_store
All of those files are stored in the same directory.
When I start the server with torchserve --start --model-store model_store --models classifier=classifier.mar I don't get any error. Normally when I do curl "http://localhost:8080/models" I would expect to see my classifier, but instead I get that message.
Is there anything that I am missing here, or should I add something?
I want to mention that I am using a handler (custom_handler.py) from GoogleCloudPlatform. Also, curl localhost:8080/ping gives me Healthy.
Thanks!
I'm not sure from your description whether you got the behavior you're looking for with all TorchServe models or just the custom one you tried. If it's the custom one, I'd suggest you take a look at logs/model_log.log in the folder where you started TorchServe, as it may indicate the issue.
Thank you for your response :) but I don't see anything else besides my files :/
@ma-batita I think you need to use the KFServing port.
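As a note for anyone hitting the same 404: with a default configuration, TorchServe splits its REST APIs across ports, and GET /models belongs to the management API on port 8081, while port 8080 is the inference API (which serves /ping and /predictions/...). The snippet below is a config.properties sketch reflecting TorchServe's documented default bindings, not the poster's actual config.

```properties
# Default TorchServe endpoint bindings (config.properties);
# /models is served by the management address, not the inference one.
inference_address=http://127.0.0.1:8080
management_address=http://127.0.0.1:8081
metrics_address=http://127.0.0.1:8082
```

With those defaults, curl "http://localhost:8081/models" should list the registered classifier.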
I'm getting the same issue. I checked logs/model_log.log and I'm getting these entries:
2023-01-31T15:48:31,175 [INFO ] W-9005-mnist_1.0-stdout MODEL_LOG -  for CPU please run pip3 install --pre torch --extra-index-url https://download.pytorch.org/whl/nightly/cpu
2023-01-31T15:48:31,241 [INFO ] W-9014-mnist_1.0-stdout MODEL_LOG - /home/model-server/tmp/models/23f9575c3b024073932b98e2c9301e8e/compile.json is missing. PT 2.0 will not be used
2023-01-31T15:48:31,242 [INFO ] W-9014-mnist_1.0-stdout MODEL_LOG - dynamo/inductor are not installed.  
2023-01-31T15:48:31,243 [INFO ] W-9014-mnist_1.0-stdout MODEL_LOG -  For GPU please run pip3 install numpy --pre torch[dynamo] --force-reinstall --extra-index-url https://download.pytorch.org/whl/nightly/cu117
2023-01-31T15:48:31,243 [INFO ] W-9014-mnist_1.0-stdout MODEL_LOG -  for CPU please run pip3 install --pre torch --extra-index-url https://download.pytorch.org/whl/nightly/cpu
2023-01-31T15:48:31,289 [INFO ] W-9009-mnist_1.0-stdout MODEL_LOG - /home/model-server/tmp/models/23f9575c3b024073932b98e2c9301e8e/compile.json is missing. PT 2.0 will not be used
2023-01-31T15:48:31,290 [INFO ] W-9009-mnist_1.0-stdout MODEL_LOG - dynamo/inductor are not installed.  
2023-01-31T15:48:31,290 [INFO ] W-9009-mnist_1.0-stdout MODEL_LOG -  For GPU please run pip3 install numpy --pre torch[dynamo] --force-reinstall --extra-index-url https://download.pytorch.org/whl/nightly/cu117
2023-01-31T15:48:31,291 [INFO ] W-9009-mnist_1.0-stdout MODEL_LOG -  for CPU please run pip3 install --pre torch --extra-index-url https://download.pytorch.org/whl/nightly/cpu
2023-01-31T15:48:31,347 [INFO ] W-9011-mnist_1.0-stdout MODEL_LOG - /home/model-server/tmp/models/23f9575c3b024073932b98e2c9301e8e/compile.json is missing. PT 2.0 will not be used
2023-01-31T15:48:31,348 [INFO ] W-9011-mnist_1.0-stdout MODEL_LOG - dynamo/inductor are not installed.  
2023-01-31T15:48:31,349 [INFO ] W-9011-mnist_1.0-stdout MODEL_LOG -  For GPU please run pip3 install numpy --pre torch[dynamo] --force-reinstall --extra-index-url https://download.pytorch.org/whl/nightly/cu117
2023-01-31T15:48:31,350 [INFO ] W-9011-mnist_1.0-stdout MODEL_LOG -  for CPU please run pip3 install --pre torch --extra-index-url https://download.pytorch.org/whl/nightly/cpu
I think it is not supporting torch 2.0, but I checked the torch version in my environment and it's 1.13+cpu.