
Creating a Singularity Container for a Linux Machine with GPU Support on an Apple Mac with Apple Silicon



Some HPC machines today use Singularity containers for their machine learning workflows. Once configured, Singularity containers can be very helpful in managing a machine learning environment, but setting one up is not straightforward. This is especially true if you are using an Apple laptop with Apple silicon:

  1. Singularity only works on Linux machines.

  2. On an Apple laptop, you can use a VM to virtualize a Linux OS, but compiling Singularity inside the VM runs into issues because the host uses ARM-based chips.

  3. Even if you have access to a cluster that has Singularity, you can't build a container there, as building requires sudo privileges, which most users will not have.

  4. On the cluster, you can try the --fakeroot option, but it usually runs into problems.

  5. On the cluster, you can try the --remote option, but if the build step takes more than one hour (which it will if you have a long list of packages to install), it will time out.


This article makes a few assumptions: you are using a laptop that doesn't support Singularity (Apple silicon), and you have tried to set up a Linux virtual machine (VM) through tools like VirtualBox or UTM but run into trouble such as software crashes, no internet access, or failures when building Singularity. If that is not the case, there are easier ways to get started with Singularity containers, and you should explore those options. The main assumption I am making is that you have access to an HPC machine/server with Singularity installed, but without sudo privileges.


The workflow we will use to build the Singularity container is as follows:

  1. Build a Docker container on your Apple system (I use an Apple MacBook Air with an M2 chip and macOS 14.5).

  2. Convert the Docker container to a Singularity container.

  3. Run the code in the Singularity container while mounting data from the host file system and using the GPUs.
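
Put together, the end-to-end flow looks like this (a sketch using the image and directory names introduced below; adjust hostnames and paths to your setup):

# 1. Build a linux/amd64 image on the Mac
docker build --platform linux/amd64 -t pytorch_docker_con .

# 2. Export the image and copy the tarball to the Linux server
docker save pytorch_docker_con -o pytorch_docker_con.tar
scp pytorch_docker_con.tar whale:/home/joseph/docker

# 3. On the server, convert the tarball into a Singularity sandbox
singularity build --sandbox pytorchSingularityDir docker-archive://pytorch_docker_con.tar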


Dockerfile


A Dockerfile is a set of instructions detailing how to build a custom container. It specifies the base operating system, software, environment variables, files to be added from the host system, and container metadata.


Create a directory and then create a file named Dockerfile inside that directory.

mkdir pytorchDockerDir
cd pytorchDockerDir
touch Dockerfile

The Dockerfile will contain the following information:

FROM ubuntu:latest

LABEL Author="Joseph John"
LABEL Version="v1.0"
LABEL MyLabel="PyTorchCuda"

ENV  LC_ALL=C

RUN apt-get -y update
RUN apt-get install -y automake build-essential bzip2 wget git default-jre unzip

RUN mkdir -p /miniconda3
RUN mkdir -p /App
RUN mkdir -p /App/Code
RUN mkdir -p /App/Data

RUN wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
RUN bash Miniconda3-latest-Linux-x86_64.sh -b -f -p /miniconda3/
RUN rm Miniconda3-latest-Linux-x86_64.sh

ENV PATH=/miniconda3/bin:$PATH
ENV PYTHONPATH=/miniconda3/lib/python3.9/:$PYTHONPATH

RUN conda init
RUN conda create -y --name pytorch_env python=3.9
RUN /bin/bash -c "source activate pytorch_env && conda install -y pytorch==1.10.0 torchvision==0.11.0 torchaudio==0.10.0 cudatoolkit=10.2 -c pytorch"

Base Image: The FROM instruction specifies the base image to use for the Docker image. This is the starting point for the image.

FROM ubuntu:latest

Metadata: The LABEL instruction is used to add metadata to the image, such as the maintainer's name, version, and description.

LABEL Author="Joseph John"
LABEL Version="v1.0"
LABEL MyLabel="PyTorchCuda"

Environment Variables: The ENV instruction sets environment variables, which can be used throughout the Dockerfile and within the container.

ENV  LC_ALL=C

If these variables are needed during the build process, they must be set with ENV before the RUN instructions that use them.
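
For example, in the Dockerfile above the PATH update comes before the conda commands that rely on it:

ENV PATH=/miniconda3/bin:$PATH

# Later RUN steps can now find the miniconda binaries via PATH
RUN conda init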


Dependencies Installation: The RUN instruction is used to execute commands in the shell. This is typically where you install any necessary dependencies.


RUN wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
RUN bash Miniconda3-latest-Linux-x86_64.sh -b -f -p /miniconda3/
RUN rm Miniconda3-latest-Linux-x86_64.sh

Building Docker Container


We are building a Docker container for a 64-bit Linux machine from an Apple machine with an ARM processor (Apple silicon). This is a cross-platform build and can be done using the command:

cd pytorchDockerDir
docker build --platform linux/amd64 -t pytorch_docker_con .

This will build the container image, and you can verify it using the command:

docker image ls
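
You can also do a quick sanity check that the conda environment exists inside the image (assuming the build was tagged pytorch_docker_con as above):

docker run --rm --platform linux/amd64 pytorch_docker_con conda env list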

Tarball of Docker Images


Now that the container image is created, we save it as a tarball.

docker save 0f2efc9d8e1f -o pytorch_docker_con.tar

where 0f2efc9d8e1f is the image ID found from the docker image ls command. (You can also pass the image name, pytorch_docker_con, instead of the ID.)


The Linux server I have access to is called Whale, and I will be moving this tarball to a directory (/home/joseph/docker) on Whale (this might take a while since the tarball is quite large).

scp pytorch_docker_con.tar whale:/home/joseph/docker

Host Side Commands


From this point on, all commands will be run on Whale. Now we can run the following command to build a Singularity sandbox image from the tarball:

cd /home/joseph/docker

singularity build --sandbox pytorchSingularityDir docker-archive://pytorch_docker_con.tar

Where pytorchSingularityDir is the directory name of the Singularity image. Now we can access the shell in the container:
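
If you later want a single, read-only image file instead of a sandbox directory, Singularity can convert the sandbox into the SIF format (file name here is just an example):

singularity build pytorch_container.sif pytorchSingularityDir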

singularity shell pytorchSingularityDir

We can also execute run commands on the container from the host:

singularity exec pytorchSingularityDir ls /App

Finally, we are going to mount a Python file from the host system, and we are going to execute it to check that PyTorch in the container can access the GPUs. I have a file testProg.py and a file runScript.sh in a host directory /home/joseph/docker/hostFiles. testProg.py contains the following code:

import torch
print(torch.cuda.is_available())

and runScript.sh contains the following code:


. activate base
conda activate pytorch_env
python3 /App/Code/testProg.py
conda deactivate

Now we can run the python program using the command:

singularity exec --nv --bind /home/joseph/docker/hostFiles:/App/Code pytorchSingularityDir sh /App/Code/runScript.sh

Where

  1. --nv is the flag to use NVIDIA GPUs (--rocm for AMD)

  2. --bind mounts the host directory /home/joseph/docker/hostFiles to the container directory /App/Code

  3. We can then run the script in the container using sh /App/Code/runScript.sh


If everything goes well you should be able to see True printed in the terminal.


Note: I have used ChatGPT to generate some parts of the Dockerfile as well as to polish some of the text.
