Handle images created on Windows machines
When building new images with build.sh on a Windows machine, the line endings of files within the new image are Windows-style (\r\n) instead of Linux-style (\n). Below is an example of the /dltk/bootstrap_fast.sh file after its creation on a Windows machine:
```
cat -v /dltk/bootstrap_fast.sh
#!/bin/sh^M
export LC_ALL=C.UTF-8^M
export LANG=C.UTF-8^M
...
```
Containers can then not be spawned from this image on a Linux machine (e.g. a Splunk instance running DSDL):
```
/dltk/bootstrap_fast.sh
bash: /dltk/bootstrap_fast.sh: cannot execute: required file not found
```
By merging the proposed changes, the line endings of the bootstrap_*.sh scripts within the /dltk/ directory are converted to Linux-style (\n).
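For illustration, the conversion step could look roughly like the following in the Debian-based Dockerfiles, assuming the /dltk directory has already been copied into the image at that point (this is only a sketch; the exact lines in the proposed changes may differ):

```dockerfile
# Sketch only: install dos2unix, convert the bootstrap scripts to \n
# line endings, then remove the tool again to keep the image small.
RUN apt-get update && apt-get install -y dos2unix \
    && dos2unix /dltk/bootstrap_*.sh \
    && apt-get remove -y dos2unix \
    && rm -rf /var/lib/apt/lists/*
```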
I suspect that this issue (creating an image on a Windows machine and then spawning containers on a Linux server) might arise frequently for users like me, i.e. a data scientist/Splunk architect who works on a company workstation running Windows and maintains Splunk instances on Linux servers.
Regards from Switzerland, Schiggy
Thanks for looking into this and finding a solution @Schiggy-3000 - I wonder whether your modification would also work properly when the build script is used on Linux?
The dos2unix command that is run on /dltk/bootstrap_*.sh reads each file and saves it again with \n line endings.
- If the image is built on Windows, this replaces \r\n with \n.
- If the image is built on Linux, this replaces \n with \n, i.e. the files are left unchanged.

The script that is run is shown here: dos2unix repo
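To illustrate this idempotent behaviour locally (a minimal example, assuming dos2unix is installed on the machine):

```sh
# Create a file with Windows line endings, then convert it twice.
printf 'echo hi\r\n' > demo.sh
dos2unix demo.sh    # \r\n is replaced with \n
dos2unix demo.sh    # the file already uses \n and is left unchanged
cat -v demo.sh      # prints "echo hi" without a trailing ^M
```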
I will do some more extensive testing to check whether images behave as expected. It seems sensible to me to build at least one image for each respective base_image.
Note: I made a 2nd commit since the package manager for redhat uses yum rather than apt-get.
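For Dockerfile.redhat, the equivalent step would look roughly like this (again only a sketch of the idea, not necessarily the exact lines of the commit):

```dockerfile
# Sketch only: RedHat UBI images use yum/dnf instead of apt-get.
RUN yum install -y dos2unix \
    && dos2unix /dltk/bootstrap_*.sh \
    && yum remove -y dos2unix \
    && yum clean all
```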
Images that I built on Linux and Windows machines and subsequently spawned containers from:
- minimal-cpu
- minimal-gpu
- golden-gpu
- ubi-functional-cpu
- golden-cpu-transformers
- golden-gpu-rapids ***
- escu-cpu
- spark
These Dockerfiles were used to build the images above:
- Dockerfile.debian
- Dockerfile.debian.escu
- Dockerfile.debian.rapids
- Dockerfile.debian.spark
- Dockerfile.debian.transformers
- Dockerfile.redhat
These base images were used for the builds above:
- python:3.9
- nvidia/cuda:11.3.1-cudnn8-runtime-ubuntu20.04
- nvidia/cuda:12.2.2-cudnn8-runtime-ubuntu20.04
- redhat/ubi9
- python:3.9.13-bullseye
- rapidsai/base:23.08-cuda11.2-py3.9
- jupyter/all-spark-notebook:spark-3.5.0
All tests went fine: the containers started, and I briefly checked that the API and the Jupyter notebook can be reached.
*** Side note: At first, no containers could be spawned from golden-gpu-rapids. This was caused by a dependency error that was identical whether I used the current Dockerfile or the updated one with the code changes (ImportError: /lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.29' not found). You might want to look into this. Once I resolved that issue, it worked just fine. Perhaps not ideal, but these are the changes I made in Dockerfile.debian.rapids:
```dockerfile
# install nodejs
RUN curl -fsSL https://deb.nodesource.com/setup_current.x | bash -
RUN apt-get update --fix-missing && apt-get install -y wget bzip2 git ca-certificates nodejs>=18.0.0 build-essential

# Add the Ubuntu Toolchain PPA and update
RUN apt-get install -y software-properties-common
RUN add-apt-repository ppa:ubuntu-toolchain-r/test -y
RUN apt-get update

# Install the specific version of libstdc++6
RUN apt-get install -y libstdc++6

# update everything
RUN apt-get update && apt-get upgrade -y
```
Amazing, those are all really valuable findings and updates! Thank you so much @Schiggy-3000! I'm currently in the process of rebuilding all images and I'm happy to incorporate your improvements. Thanks for the pointer on the rapids image; I also noticed recently that something was going wrong there. I'll check with the latest rapids base image version whether this persists or whether your proposed solution resolves it!
@pdrieger let me know if you would benefit from a helping hand in case something comes up on the topic. Always glad to help!