feat(lang): Support TensorBoard
Description
def build():
    config.tensorboard(port="8888", log_dir="./logs")
Message from the maintainers:
Love this enhancement proposal? Give it a 👍. We prioritise the proposals with the most 👍.
I am not sure if we should support framework-specific functions. Perhaps we should make it generic to expose any services instead.
> Perhaps we should make it generic to expose any services instead.
Yep, I think so. We should have a mechanism to expose services.
I think there are some questions to discuss:
- Should users care about the port? If we listen on the host's interface, user A may want port 8888 but user B has already taken it.
- Which interface should be listening, the container's or the host's?
- Which address? Loopback or an external IP?

We should answer the above questions to design the service function. In my opinion, we should separate the service and the service provider (such as Service and Deployment in K8s), for example:
class JupyterSvc(Service):
    port = 8000
    interfaceLoc = "container"
    ip = "0.0.0.0"

def build():
    config.jupyter(service=JupyterSvc)
We could also provide some basic service classes for common out-of-the-box (OOTB) use cases (a rough sketch follows this list):
- HostSvc: assigns a free port on localhost on the host
- ContainerSvc
- PublicSvc
- K8SSvc
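
For illustration, a rough sketch of what these classes could look like, following the JupyterSvc shape above (all names and fields here are hypothetical; none of them exist in envd today):

class Service:
    port = None          # port the service listens on
    interfaceLoc = None  # where the listener lives: "container" or "host"
    ip = None            # bind address

class HostSvc(Service):
    # envd assigns a free port and binds it to localhost on the host
    interfaceLoc = "host"
    ip = "127.0.0.1"

class ContainerSvc(Service):
    # listens only inside the container; nothing is forwarded to the host
    interfaceLoc = "container"
    ip = "127.0.0.1"

class PublicSvc(Service):
    # reachable from other machines through an external interface of the host
    interfaceLoc = "host"
    ip = "0.0.0.0"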
> Should users care about the port? If we listen on the host's interface, user A may want port 8888 but user B has already taken it.

I think in the best case, users do not need to care about the port. They can use the endpoint provided by envd to visit these services.
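
As a plain-Python illustration (not envd code) of how envd could pick the port itself and hand the user a ready-made endpoint:

import socket

def pick_free_port():
    # Binding to port 0 makes the OS choose an unused port for us.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

port = pick_free_port()
print(f"tensorboard available at http://localhost:{port}")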
> Which interface should be listening, the container's or the host's?

Currently, we keep the port open in the container and use Docker port forwarding to forward container ports to the host.
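
Roughly what that forwarding looks like with the Docker Python SDK; this is only an illustration (the image name is hypothetical, and envd does not use this exact code):

import docker

client = docker.from_env()
container = client.containers.run(
    "tensorflow/tensorflow:latest",  # hypothetical image with tensorboard installed
    ["tensorboard", "--logdir", "/var/log/tensorboard", "--port", "8888", "--bind_all"],
    ports={"8888/tcp": None},  # None lets Docker pick a free port on the host
    detach=True,
)
container.reload()  # refresh attrs so the assigned host port is visible
host_port = container.attrs["NetworkSettings"]["Ports"]["8888/tcp"][0]["HostPort"]
print("forwarded to http://localhost:" + host_port)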
> Which address? Loopback or an external IP?

Since we forward the container port to the host, localhost can be used here. I am not sure if it works for WSL @VoVAllen
WSL will automatically make ports available to the host OS (Windows).
We definitely need a way for users to easily extend their use cases. However, the syntax may need more discussion.
Since envd already supports expose, daemon, and mount, I think it can be done in the following way:
def tensorboard():
    io.mount(src="/var/log/tensorboard", dest="./logs")
    runtime.daemon(commands=[
        ["tensorboard", "--logdir", "./logs", "--port", "8888"],
    ])
    runtime.expose(envd_port=8888, host_port=8888, service="tensorboard")
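
For example, a build.envd could then use it like this (the base image and package list are only illustrative, assuming envd's current v0-style syntax):

def build():
    base(os="ubuntu20.04", language="python3")
    install.python_packages(name=["tensorboard"])
    tensorboard()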
Shall we close this issue? @gaocegege
All we need is a code snippet.
Ref #91
I think we can close it after the doc is updated. WDYT?
In the future, I think we should have a mechanism to make these Starlark funcs built into envd.
> Since envd already supports expose, daemon, mount, I think it can be done in the following way: …
I got this error:
error: invalid mount config for type "bind": invalid mount path: './logs' mount path must be absolute
It is caused by dest="./logs"; Docker doesn't allow relative mount paths.
Should we expand the relative path to an absolute one?
Maybe we could have a default cwd and allow users to override it. WDYT?
Of course. Just as Docker lets you specify a build directory, it should be the cwd.
The dest arg specifies the location inside the container, so the default cwd may not work for this.
I think we can define the working directory in envd, then expand the relative paths to absolute paths.
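
A minimal sketch of that expansion step (a hypothetical helper, not actual envd code):

import os

def resolve_mount_path(path, workdir):
    # Docker bind mounts require absolute paths, so expand relative ones
    # against the envd working directory.
    if os.path.isabs(path):
        return path
    return os.path.normpath(os.path.join(workdir, path))

# resolve_mount_path("./logs", "/home/envd/project") -> "/home/envd/project/logs"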