
Instructions on how to install on a node that doesn't have access to the internet


Some computers won't let us wget and pip install our way to victory when certain dependencies don't exist (e.g., systems with classified or sensitive information). We should have instructions on how, at least roughly, to build MFC when these dependencies don't already live on the node.

sbryngelson avatar Oct 17 '24 19:10 sbryngelson

A Singularity container seems like a good idea, though there would have to be containers for CPU cases, GPU cases, and a case-optimized build of the benchmark case for each (in case the container is being used for benchmarking purposes). That's four separate containers.

sbryngelson avatar Oct 18 '24 13:10 sbryngelson

Here is a Dockerfile that works on my Mac:

# syntax=docker/dockerfile:1

############################################################
# MFC (Multiphase Flow Code) — portable CPU/GPU Dockerfile #
############################################################
#   • Ubuntu 24.04 (Noble) multi‑stage image
#   • Stage 1 compiles MFC with full tool‑chain
#   • Stage 2 is a slim runtime layer (yet able to re‑compile
#     tiny "syscheck" tests thanks to a minimal build‑essentials)
#   • Works on x86‑64 & Apple Silicon (qemu‑emulated) hosts
#   • GPU build supported via --build‑arg GPU=1 (example code)
############################################################

############################
# ---- Stage 1 : builder --
############################
ARG GPU=0
FROM ubuntu:24.04 AS builder
ARG DEBIAN_FRONTEND=noninteractive

# Core tool‑chain + dev headers
RUN apt update -y && apt install -y build-essential git tar \
                 wget make cmake gcc g++ \
                 python3 python3-dev python3-venv python3-pip \
                 openmpi-bin libopenmpi-dev \
                 libfftw3-dev

# (Optional) NVIDIA HPC SDK example — comment in if you need it
# ARG NVHPC_VER=24.5
# RUN if [ "${GPU}" = "1" ]; then \
#         wget -qO- https://developer.download.nvidia.com/hpc-sdk/nvhpc_${NVHPC_VER}_linux_x86_64_cuda_12.4.tar.gz | \
#             tar xz -C /opt && \
#         echo "export PATH=/opt/nvidia/hpc_sdk/Linux_x86_64/${NVHPC_VER}/compilers/bin:\$PATH" >> /etc/profile.d/nvhpc.sh; \
#     fi

# Grab source and compile
WORKDIR /opt
RUN git clone --depth 1 https://github.com/mflowcode/mfc.git MFC
WORKDIR /opt/MFC
ENV OMPI_ALLOW_RUN_AS_ROOT=1
ENV OMPI_ALLOW_RUN_AS_ROOT_CONFIRM=1
RUN ./mfc.sh build -j $(nproc) -t pre_process
RUN ./mfc.sh test --dry-run -j $(nproc)

############################
# ---- Stage 2 : runtime --
############################
FROM ubuntu:24.04 AS runtime
ARG DEBIAN_FRONTEND=noninteractive

# Core tool‑chain + dev headers
RUN apt update -y && apt install -y build-essential git tar \
                 wget make cmake gcc g++ \
                 python3 python3-dev python3-venv python3-pip \
                 openmpi-bin libopenmpi-dev \
                 libfftw3-dev

# Copy pre‑built MFC tree
COPY --from=builder /opt/MFC /opt/MFC

# MFC expects to be launched from its repo root
WORKDIR /opt/MFC
ENV OMPI_ALLOW_RUN_AS_ROOT=1
ENV OMPI_ALLOW_RUN_AS_ROOT_CONFIRM=1


# Expose helper script
ENV PATH="/opt/MFC:${PATH}"
ENTRYPOINT ["./mfc.sh"]
CMD ["--help"]

############################################################
# Usage examples
############################################################
# 1) Build CPU image
#       docker build -t mfc:latest .
#
# 2) Run an in‑tree example (no volume mount)
#       docker run --rm -it mfc:latest run ./examples/2D_shockbubble/case.py -n 2
#
# 3) Run a host case directory (mount under /opt/MFC/workspace)
#       docker run --rm -it -v "$PWD":/opt/MFC/workspace mfc:latest \
#           run workspace/my_case/case.py -n 4
#
# 4) GPU build (NVIDIA example — hardware + drivers required)
#       docker build --build-arg GPU=1 -t mfc:gpu .
#       docker run --gpus all --rm -it mfc:gpu run \
#           ./examples/3d_shockdroplet/case.py -n 4 --gpu
############################################################

Build with: docker build --no-cache -t mfc:latest .

Run with: docker run --rm -it -v "$(pwd)":/workspace mfc:latest run ./examples/2D_shockbubble/case.py -n 2 -t pre_process
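
For a truly air-gapped node, the built image can also be moved without any registry. A rough sketch, assuming Docker is available on both the connected machine and the offline node (the tarball name mfc-latest.tar is arbitrary):

# On a machine with internet access: build, then export the image to a tarball
docker build -t mfc:latest .
docker save -o mfc-latest.tar mfc:latest

# Copy mfc-latest.tar to the offline node (scp, physical media, etc.), then load and run it
docker load -i mfc-latest.tar
docker run --rm -it mfc:latest run ./examples/2D_shockbubble/case.py -n 2 -t pre_process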

sbryngelson avatar May 14 '25 09:05 sbryngelson

Seems doable and an easy one. I will play with it tonight.

Malmahrouqi3 avatar Jun 13 '25 19:06 Malmahrouqi3

I will pass along another update by the morning; I am mainly following the official documentation at apptainer.org/admin-docs/master.

I looked into Singularity containerization, and it is a good call over Docker since it is better supported on clusters, though you can containerize with either. I do not expect it to slow down our CI/CD that much, to be honest. I can look into a YAML workflow file that generates four images (cpu, gpu, cpu-bench, gpu-bench) to be part of the weekly releases. Here is a generated workflow implementing https://github.com/marketplace/actions/setup-singularity:

name: Build Singularity Images

on:
  push:

jobs:
  build-singularity-images:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Set up Singularity
        uses: eWaterCycle/setup-singularity@v7

      - name: Build CPU image
        run: |
          singularity build mfc_cpu.sif Singularity.cpu

      - name: Build GPU image
        run: |
          singularity build mfc_gpu.sif Singularity.gpu

      - name: Build CPU Benchmark image
        run: |
          singularity build mfc_cpu_bench.sif Singularity.cpu_bench

      - name: Build GPU Benchmark image
        run: |
          singularity build mfc_gpu_bench.sif Singularity.gpu_bench

      - name: Upload images as artifacts
        uses: actions/upload-artifact@v4
        with:
          name: singularity-images
          path: |
            mfc_cpu.sif
            mfc_gpu.sif
            mfc_cpu_bench.sif
            mfc_gpu_bench.sif

Malmahrouqi3 avatar Jun 15 '25 00:06 Malmahrouqi3

Yes, Singularity is kind of the standard for use on clusters, so this seems like a good approach. I would prefer a container that already builds MFC and holds the dependencies so it can be deployed somewhere without internet access.

sbryngelson avatar Jun 15 '25 02:06 sbryngelson

(github.com/mohdsaid497566/MFC-mo2/actions/workflows/container-image.yml) We definitely need a registry for the long run. Four images with standard build instructions come out to 5.3 GB. For now, we can just manually download the artifacts from the most recently merged branch using the link provided there, for either online or offline clusters.

I am still debugging to make sure all images are fully functional. Also, all Singularity definition files are identical right now; the current one is below. Can I get more instructions on which packages/compilers to include for each case?

Bootstrap: docker
From: ubuntu:24.04

%environment
    export OMPI_ALLOW_RUN_AS_ROOT=1
    export OMPI_ALLOW_RUN_AS_ROOT_CONFIRM=1
    export PATH="/opt/MFC:$PATH"

%post
    export DEBIAN_FRONTEND=noninteractive
    apt update -y && apt install -y \
        build-essential git tar wget make cmake gcc g++ \
        python3 python3-dev python3-venv python3-pip \
        openmpi-bin libopenmpi-dev libfftw3-dev
    cd /opt
    git clone --depth 1 https://github.com/mflowcode/mfc.git MFC
    cd /opt/MFC
    ./mfc.sh build -j $(nproc)
    ./mfc.sh test --dry-run -j $(nproc)

%runscript
    cd /opt/MFC
    exec ./mfc.sh "$@"
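
To get this onto a cluster without internet access, a rough sketch of the workflow (assuming Apptainer/Singularity is installed on both machines; Singularity.cpu is the definition file above, and the paths/hostnames are placeholders):

# On a connected machine: build the image (root or --fakeroot is typically required)
apptainer build --fakeroot mfc_cpu.sif Singularity.cpu

# Copy the .sif to the offline cluster (scp, physical media, etc.); no network is needed to run it there
scp mfc_cpu.sif user@cluster:/scratch/containers/

# On the offline cluster: the %runscript forwards arguments to ./mfc.sh inside the container
apptainer run /scratch/containers/mfc_cpu.sif run ./examples/2D_shockbubble/case.py -n 2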

Malmahrouqi3 avatar Jun 15 '25 06:06 Malmahrouqi3

https://sylabs.io/docs/ does not seem bad at all for the free-tier quotas (11 GB of storage, 500 build minutes per month). We can host one full release at a time, and since each build runs for 30 minutes at most, we can release four times a month.

I will pursue this approach to give us more flexibility in retrieving artifacts, unless there is a better alternative.
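
For reference, hosting on the Sylabs library would look roughly like this (a sketch; the library:// path and tag are placeholders, and a free access token from cloud.sylabs.io is required):

# One-time: authenticate against the Sylabs cloud library with an access token
singularity remote login

# Push an unsigned image (-U) from CI or a workstation
singularity push -U mfc_cpu.sif library://<user>/mfc/mfc_cpu:latest

# On any machine with library access, pull it back down
singularity pull mfc_cpu.sif library://<user>/mfc/mfc_cpu:latest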

Malmahrouqi3 avatar Jun 15 '25 06:06 Malmahrouqi3

I'm curious to see how it works. What about that academic-only free one I sent you? That seemed somehow more promising (no limitations, at least).

sbryngelson avatar Jun 15 '25 14:06 sbryngelson

Can I get more instructions for packages/compilers to be included for each case?

@mohdsaid497566 I think for CPU cases just GNU is fine. For GPU cases we will have an NVIDIA-only release with the NVHPC compilers (you'll need to install those yourself; they aren't in apt or anything).

sbryngelson avatar Jun 15 '25 14:06 sbryngelson

I will request access to OSPool ASAP, and if it works out we can migrate to a dedicated access point for MFC.

Malmahrouqi3 avatar Jun 15 '25 14:06 Malmahrouqi3

Alright, just to follow up: I have an access point to OSPool, and the intended use for it is as part of our CI, making sure the code does not break when containerizing, running, and testing MFC images. In a similar fashion to the SLURM workflows (the Frontier & Phoenix tests), the workflow file should be able to build, run, and perform dedicated image tests fully remotely on the shared resource.

A few obstacles to expect are a host OS with a bare minimum of packages, limited OpenMP/MPI support, and a limited number of cores. Regardless, there are ways around these, and we can use the service as temporary storage for our images as well. As far as I remember, the storage quota is around 500 GB, which is not bad at all.

Malmahrouqi3 avatar Jun 17 '25 23:06 Malmahrouqi3

I have been using the Apptainer MFC image recipes and found that they work offline just fine, as long as all necessary dependencies are pre-installed.

Malmahrouqi3 avatar Jun 17 '25 23:06 Malmahrouqi3

Great. Yeah, you will have to apt install everything in the OS images, I suppose. That said, if you are running a container, then all you need is for OSPool to support containers?

sbryngelson avatar Jun 18 '25 03:06 sbryngelson

Yup, just that, and the system already supports containers. Also, in the base image 'From:' line you can specify compilers/packages for hardware-specific support, and it will download and set up everything by itself, which is neat. For example:

From: intel/oneapi-hpckit:2025.0.0-0-devel-ubuntu24.04
From: nvcr.io/nvidia/nvhpc:25.5-devel-cuda_multi-ubuntu22.04

https://portal.osg-htc.org/documentation/htc_workloads/using_software/containers-singularity/
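
For the GPU variant, a rough sketch of running it on a node with NVIDIA GPUs (assuming an image mfc_gpu.sif built from the nvhpc base above with the same %runscript; the --nv flag binds the host NVIDIA driver stack into the container, and the case path and flags mirror the Dockerfile usage comments earlier in this thread):

# Run the GPU image with host GPU access via --nv
apptainer run --nv mfc_gpu.sif run ./examples/3d_shockdroplet/case.py -n 4 --gpu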

Malmahrouqi3 avatar Jun 18 '25 08:06 Malmahrouqi3

Resolved via #971 #956

sbryngelson avatar Nov 30 '25 02:11 sbryngelson