
CUDA and cuDNN (Install)

Pop!_OS 22.04 LTS

Basic CUDA runtime functionality is installed automatically with the NVIDIA driver (in the libnvidia-compute-* and nvidia-compute-utils-* packages). The maximum CUDA version supported by the libraries included with the driver can be seen using the nvidia-smi command.
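The version string can be pulled out of the nvidia-smi banner with a little text processing. A sketch using a sample header line matching the output shown later in this article (the real command requires an NVIDIA GPU):

```shell
# Sample nvidia-smi header line; on a real system, use: nvidia-smi | grep 'CUDA Version'
smi_header='| NVIDIA-SMI 525.89.02    Driver Version: 525.89.02    CUDA Version: 12.1     |'

# Extract the maximum CUDA version supported by the installed driver
cuda_max=$(printf '%s\n' "$smi_header" | sed -n 's/.*CUDA Version: \([0-9.]*\).*/\1/p')
echo "$cuda_max"
```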

Additional tools for using and developing with CUDA can be installed with the nvidia-cuda-toolkit package:

sudo apt install nvidia-cuda-toolkit

The nvidia-cuda-toolkit package is maintained by Ubuntu, and may contain an older version of CUDA than what the driver supports.

Other Versions of CUDA

The nvidia-container-toolkit package uses Docker containers to allow alternate versions of the CUDA libraries to be installed alongside the version included with the NVIDIA driver. NVIDIA publishes the available Docker images in the nvidia/cuda repository on Docker Hub.

This example installs a development environment with CUDA version 12.1.

Install Software

After making sure the system is up to date, install the NVIDIA Container Toolkit. In this example, installing the package also pulls in Docker as a dependency.

sudo apt update
sudo apt full-upgrade
sudo apt install nvidia-container-toolkit

The user account working with the Container Toolkit must be added to the docker group if that hasn't been done already:

sudo usermod -aG docker $USER
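The group change only takes effect after logging out and back in. A quick hedged check for whether the current session already has it:

```shell
# Check whether the current login session is in the docker group;
# id -nG lists the group names of the current user.
if id -nG | grep -qw docker; then
  echo "current session is in the docker group"
else
  echo "not in the docker group yet; log out and back in"
fi
```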

The last step is to add a kernel parameter:

sudo kernelstub --add-options "systemd.unified_cgroup_hierarchy=0"

...and reboot.
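After the reboot, the option can be verified against the kernel command line. A sketch using a sample string (on a real system, substitute the contents of /proc/cmdline):

```shell
# Sample kernel command line; on a real system: cmdline=$(cat /proc/cmdline)
cmdline='quiet loglevel=0 systemd.unified_cgroup_hierarchy=0 splash'

# Pad with spaces so the option matches only as a whole word
case " $cmdline " in
  *" systemd.unified_cgroup_hierarchy=0 "*) echo "option is set" ;;
  *) echo "option is missing" ;;
esac
```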

Configure the Docker daemon for the NVIDIA Container Runtime

Use the NVIDIA Container Toolkit CLI to configure Docker to use the NVIDIA libraries, then restart Docker:

sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
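For reference, nvidia-ctk records the runtime in /etc/docker/daemon.json. The resulting entry looks roughly like this (an illustrative sketch, not the verbatim file):

```json
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
```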

Test Configuration

Run this command to check the Docker configuration for CUDA:

docker run --rm --runtime=nvidia --gpus all nvidia/cuda:12.1.0-devel-ubuntu22.04 nvidia-smi

The output displays the CUDA version supported by the container:

Thu Mar 23 14:43:51 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.89.02    Driver Version: 525.89.02    CUDA Version: 12.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:01:00.0  On |                  N/A |
| 30%   37C    P5    N/A /  75W |    789MiB /  4096MiB |     16%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
+-----------------------------------------------------------------------------+

Run the Container

Start a shell within the container:

docker run -it --rm --runtime=nvidia --gpus all nvidia/cuda:12.1.0-devel-ubuntu22.04 bash

Commands can then be run with CUDA support:

root@5397e7ea7f57:/# nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Feb__7_19:32:13_PST_2023
Cuda compilation tools, release 12.1, V12.1.66
Build cuda_12.1.r12.1/compiler.32415258_0

The container can be viewed and managed using docker ps in another terminal or tab:

system76@pop-os:~$ docker ps
CONTAINER ID   IMAGE                                  COMMAND                  CREATED         STATUS         PORTS     NAMES
5397e7ea7f57   nvidia/cuda:12.1.0-devel-ubuntu22.04   "/opt/nvidia/nvidia_…"   2 minutes ago   Up 2 minutes             boring_tesla

The container ID can be referenced to copy files into and out of the container:

system76@pop-os:~$ git clone https://github.com/NVIDIA/cuda-samples.git
system76@pop-os:~$ docker cp cuda-samples/ 5397e7ea7f57:/root/cuda-samples/

Now, from within the container, an example project can be built:

root@5397e7ea7f57:/# cd /root/cuda-samples/Samples/0_Introduction/c++11_cuda/
root@5397e7ea7f57:~/cuda-samples/Samples/0_Introduction/c++11_cuda# make

The binary (c++11_cuda) is built:

root@5397e7ea7f57:~/cuda-samples/Samples/0_Introduction/c++11_cuda# ls -l
total 6108
-rw-rw-r-- 1 1000 1000   13679 Mar 24 16:45 Makefile
-rw-rw-r-- 1 1000 1000    2090 Mar 24 16:45 NsightEclipse.xml
-rw-rw-r-- 1 1000 1000    3556 Mar 24 16:45
-rwxr-xr-x 1 root root 1881448 Mar 24 16:48 c++11_cuda

Pop!_OS 20.04 LTS

Install the Latest NVIDIA CUDA Toolkit

To install the CUDA toolkit, run this command:

sudo apt install system76-cuda-latest

To install the cuDNN library, run this command:

sudo apt install system76-cudnn-11.2

To verify installation, run this command after a reboot:

nvcc -V

Versions in Pop!_OS 20.04 LTS

To install CUDA 10.0:

sudo apt install system76-cuda-10.0

For the respective cuDNN library:

sudo apt install system76-cudnn-10.0

To install CUDA 10.1:

sudo apt install system76-cuda-10.1

For the respective cuDNN library:

sudo apt install system76-cudnn-10.1

To install CUDA 10.2:

sudo apt install system76-cuda-10.2

For the respective cuDNN library:

sudo apt install system76-cudnn-10.2

Switch Between CUDA Versions

You can switch between each CUDA version with the following command:

sudo update-alternatives --config cuda

To verify installation, run this command to see the current version of the NVIDIA CUDA compiler:

nvcc -V

You can check the version of cuDNN with this command:

cat /usr/lib/cuda/include/cudnn_version.h | grep CUDNN_MAJOR -A 2       

Not Running Pop!_OS?

The previous instructions work with Pop!_OS out of the box; Ubuntu and other Debian derivatives require additional commands.

ℹ️ These packages have only been tested with the System76 NVIDIA driver.

Ubuntu 20.04 LTS

echo "deb http://apt.pop-os.org/proprietary focal main" | sudo tee -a /etc/apt/sources.list.d/pop-proprietary.list
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-key 204DD8AEC33A7AFF
sudo apt update

A separate article covers installing the System76 NVIDIA driver.