Torchserve 0.4.0 has a dependency on Torch
Context
As of torchserve 0.4.0 there is a dependency on torch. I noticed this when moving from 0.3.1 to 0.4.0: simply updating the torchserve version in my project and then launching the Docker container that loads a .mar threw the following error:

```
ModuleNotFoundError: No module named 'torch'
```
Details
Now `otf_message_handler.py` imports torch:

```python
"""
OTF Codec
"""
import json
import logging
import struct
import sys
import os
import io
from builtins import bytearray
from builtins import bytes
import torch
```
But torch is not listed among torchserve's dependencies. Looking at `serve/setup.py`, torch does not appear in the requirements list:

```python
requirements = ['Pillow', 'psutil', 'future', 'packaging']
```
Is this expected?
Your Environment
- Installed using source? [yes/no]: no
- Are you planning to deploy it using docker container? [yes/no]: yes
- Is it a CPU or GPU environment?: CPU
- Using a default/custom handler? [If possible upload/share custom handler/model]: custom but it is not related directly
- What kind of model is it e.g. vision, text, audio?: several
- Are you planning to use local models from model-store or public url being used e.g. from S3 bucket etc.? [If public url then provide link.]: no
- Provide config.properties, logs [ts.log] and parameters used for model registration/update APIs:
- Link to your project [if any]:
Expected Behavior
Torchserve should be installable as a standalone framework without a dependency on Torch
Current Behavior
Unable to install torchserve without installing torch as well
Possible Solution
Remove dependency on torch
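One way to remove the hard dependency (a hypothetical sketch, not code from the project) is to probe for torch with `importlib.util.find_spec` before importing it, so the module still loads in environments where torch is absent:

```python
import importlib.util

# Probe for torch without importing it; find_spec only locates the
# module on disk and does not execute any of its code.
HAS_TORCH = importlib.util.find_spec("torch") is not None

if HAS_TORCH:
    import torch  # safe: we know it is installed
```

Code paths that actually need tensors could then branch on `HAS_TORCH` instead of failing at import time.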
Steps to Reproduce
- Update `torchserve` to version `0.4.0`
- Make sure `torch` is not installed in the same environment
- Load a .mar into your application ...
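The failure mode in the last step can be illustrated generically (a sketch; `import_or_error` is a hypothetical helper, not part of torchserve): any module that unconditionally imports a missing package raises `ModuleNotFoundError` the moment it is loaded, which matches the traceback in the Failure Logs section.

```python
import importlib

def import_or_error(name):
    """Try to import a module and report how it fails, if it does."""
    try:
        importlib.import_module(name)
        return "ok"
    except ModuleNotFoundError as exc:
        return "ModuleNotFoundError: No module named '%s'" % exc.name
```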
Failure Logs [if any]
```
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/ts/model_service_worker.py", line 17, in <module>
    from ts.model_loader import ModelLoaderFactory
  File "/usr/local/lib/python3.7/site-packages/ts/model_loader.py", line 16, in <module>
    from ts.service import Service
  File "/usr/local/lib/python3.7/site-packages/ts/service.py", line 14, in <module>
    from ts.protocol.otf_message_handler import create_predict_response
  File "/usr/local/lib/python3.7/site-packages/ts/protocol/otf_message_handler.py", line 14, in <module>
    import torch
ModuleNotFoundError: No module named 'torch'
```
Hi @lgarciaCoveo, I'll check which PR introduced this, but regardless, the reasoning there is that we don't want to overwrite whatever version of torch you're already using.
If this is a fresh dev machine then you can run this script https://github.com/pytorch/serve/blob/master/ts_scripts/install_dependencies.py
Hello! Did you have a chance to look into this? @msaroufim
The solution you propose did not help us since we use torchserve inside our own serving library. Our library is in turn used by several teams to launch their models.
Hi @lxning it seems like a regression was introduced in this PR https://github.com/pytorch/serve/commit/8779c5d752196099bebdae036426cededda31359#diff-7e5ecbe5cca38cc12404eb53257e51113407e11d26e30f29d64fb94779500853R14
Is there a valid reason why a message would be decoded as a tensor? https://github.com/pytorch/serve/commit/8779c5d752196099bebdae036426cededda31359#diff-7e5ecbe5cca38cc12404eb53257e51113407e11d26e30f29d64fb94779500853R118
OK, so the reason this dependency exists is to support our workflows feature, where intermediate nodes can output tensors. We are substantially revamping this feature in a coming release, but for now we will need to keep it.
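To sketch how that coupling could be loosened (hypothetical; `encode_prediction` is an illustrative function, not the project's actual API), the tensor-handling branch can defer the torch import so non-tensor payloads still work in environments without torch installed:

```python
import io
import json

def encode_prediction(val):
    """Serialize a prediction value, treating torch as an optional dependency."""
    try:
        import torch  # deferred: only needed when the value is a tensor
        if isinstance(val, torch.Tensor):
            buf = io.BytesIO()
            torch.save(val, buf)
            return buf.getvalue()
    except ImportError:
        pass  # torch not installed; tensors cannot appear, use generic paths
    if isinstance(val, (bytes, bytearray)):
        return bytes(val)
    return json.dumps(val).encode("utf-8")
```

With torch absent, the `ImportError` branch simply skips tensor handling, so plain bytes and JSON-serializable results still round-trip.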
I see that this was closed. Is there a PR that solves this?
So the reason this dependency exists is that we need it to support intermediate values in workflows. We closed the issue because we had no plans to fix this.
Some more recent discussions have happened, though, in which we decided to refactor workflows to be more Pythonic, which will no longer require the torch dependency. But we would first need to spend some time on a soft deprecation; we're looking into that, but it is not an urgent priority.
Cc @maaquib