Felix Abecassis

41 comments by Felix Abecassis

@cyphar Thanks for the detailed explanation; I didn't have a clear picture of the full process, especially how you were planning to assemble the rootfs, but now I understand....

FWIW, I wanted to quantify the difference between block-level and file-level deduplication on real data, so I wrote a few simple scripts here: https://github.com/flx42/layer-dedup-test It pulls all the tags from...
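The core measurement behind such a comparison can be sketched in a few lines. This is not the code from the linked repo; it is a minimal Python illustration of the two deduplication granularities: whole-file hashing (a file is shared only if it is byte-identical) versus fixed-size block hashing (any repeated 4 KiB block is counted once). The function names and the 4096-byte block size are my choices for the example.

```python
import hashlib


def file_level_unique_bytes(paths):
    """Bytes that survive file-level dedup: each distinct
    whole-file hash is stored exactly once."""
    seen = {}
    for path in paths:
        with open(path, "rb") as f:
            data = f.read()
        seen.setdefault(hashlib.sha256(data).hexdigest(), len(data))
    return sum(seen.values())


def block_level_unique_bytes(paths, block_size=4096):
    """Bytes that survive block-level dedup: each distinct
    fixed-size block is stored exactly once, across all files."""
    seen = {}
    for path in paths:
        with open(path, "rb") as f:
            while True:
                block = f.read(block_size)
                if not block:
                    break
                seen.setdefault(hashlib.sha256(block).hexdigest(), len(block))
    return sum(seen.values())
```

Two files that share a common prefix but differ in their tail dedupe at the block level but not at the file level, which is exactly the gap the scripts try to quantify on real image layers.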

@cyphar yes it does: https://docs.docker.com/storage/storagedriver/overlayfs-driver/#overlayfs-and-docker-performance

> Page Caching. OverlayFS supports page cache sharing. Multiple containers accessing the same file share a single page cache entry for that file. This makes...

> Ideally, we'd like to be able to schedule over devices, as well.

This question was raised here: https://github.com/docker/docker/issues/24750 But the discussion was redirected to https://github.com/docker/docker/issues/23917, in order to have...

@stevvooe I quickly hacked together a solution; it's not too difficult: https://github.com/flx42/swarmkit/commit/a82b9fb2b1f3387baa1e4d4447ba9af4f3e05f16 This is not a PR yet; would you be interested if I opened one? Or are the swarmkit features...

Forgot to mention that I can now run GPU containers by mimicking what [nvidia-docker](https://github.com/nvidia/nvidia-docker) does:

```
./bin/swarmctl service create --device /dev/nvidia-uvm --device /dev/nvidiactl --device /dev/nvidia0 --bind /var/lib/nvidia-docker/volumes/nvidia_driver/367.35:/usr/local/nvidia --image nvidia/digits:4.0 --name...
```

@stevvooe Yeah, that's the biggest discussion point for sure. In engine-api, devices are resources: https://github.com/docker/engine-api/blob/master/types/container/host_config.go#L249 But in swarmkit, resources have so far been "fungible" objects like CPU shares and memory, with...
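The distinction can be made concrete with a small sketch. This is not swarmkit's actual scheduler; the `Node` class and `allocate` method are hypothetical, purely to show how a fungible resource (any amount of CPU shares up to the total) differs from a discrete one (a specific device path that must be granted whole, to one container at a time):

```python
from dataclasses import dataclass, field


@dataclass
class Node:
    # Fungible: any quantity up to the remaining total can be granted.
    cpu_shares: int
    # Discrete: each device path is an identifiable unit, granted whole.
    free_devices: set = field(default_factory=set)

    def allocate(self, cpu, devices_needed):
        """Grant `cpu` shares and `devices_needed` specific device
        paths, or return None if the node cannot satisfy the request."""
        if cpu > self.cpu_shares or devices_needed > len(self.free_devices):
            return None
        granted = {self.free_devices.pop() for _ in range(devices_needed)}
        self.cpu_shares -= cpu
        return granted
```

A fungible allocation only needs arithmetic on a counter, while a discrete allocation must decide *which* unit each task gets and track it until release, which is why devices don't fit cleanly into a shares/bytes resource model.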

Thanks @aluzzardi, PR created, it's quite basic.

@sudharkrish No, it isn't ported AFAIK.

Hello @mkh-github. You should ask them directly on their GitHub. TRTIS is open source too: https://github.com/NVIDIA/tensorrt-inference-server They will be in a better position to answer you.