Installing TensorFlow with GPU Support using Docker

  1. Install the NVIDIA driver and CUDA.
  2. Install nvidia-docker.
  3. Check that a GPU is available:
    lspci | grep -i nvidia
    
  4. Verify the nvidia-docker installation:
    docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi
    
     Sun Mar  3 13:20:01 2019       
     +-----------------------------------------------------------------------------+
     | NVIDIA-SMI 418.39       Driver Version: 418.39       CUDA Version: 10.1     |
     |-------------------------------+----------------------+----------------------+
     | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
     | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
     |===============================+======================+======================|
     |   0  GeForce 940MX       On   | 00000000:01:00.0 Off |                  N/A |
     | N/A   49C    P8    N/A /  N/A |    380MiB /  4046MiB |      0%      Default |
     +-------------------------------+----------------------+----------------------+
                                                                                    
     +-----------------------------------------------------------------------------+
     | Processes:                                                       GPU Memory |
     |  GPU       PID   Type   Process name                             Usage      |
     |=============================================================================|
     +-----------------------------------------------------------------------------+
    

Note: nvidia-docker v1 uses the nvidia-docker alias, whereas v2 uses docker --runtime=nvidia.
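Since both invocation styles are in the wild, a small wrapper can hide the difference. This is only a sketch: gpu_docker is a hypothetical helper name, not part of nvidia-docker itself.

```shell
# Hypothetical helper (an assumption, not shipped with nvidia-docker):
# dispatch to whichever invocation style the host supports.
gpu_docker() {
  if command -v nvidia-docker >/dev/null 2>&1; then
    nvidia-docker "$@"              # v1: dedicated wrapper binary
  else
    docker --runtime=nvidia "$@"    # v2: plain docker with the nvidia runtime
  fi
}
```

With this in place, gpu_docker run --rm nvidia/cuda nvidia-smi behaves the same on either setup.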

Build from source

The following example downloads the tensorflow/tensorflow:nightly-devel-gpu-py3 image and uses nvidia-docker to run the GPU-enabled container. This development image is configured to build a Python 3 pip package with GPU support:

docker pull tensorflow/tensorflow:nightly-devel-gpu-py3

docker run --runtime=nvidia -it -w /tensorflow -v $PWD:/mnt -e HOST_PERMS="$(id -u):$(id -g)" \
    tensorflow/tensorflow:nightly-devel-gpu-py3 bash

Then, within the container’s virtual environment, build the TensorFlow package with GPU support:

./configure  # answer prompts or use defaults

bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package

./bazel-bin/tensorflow/tools/pip_package/build_pip_package /mnt  # create package

chown $HOST_PERMS /mnt/tensorflow-version-tags.whl
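The wheel's actual filename embeds version and tag strings, so it is easier to glob for it than to type it out. A minimal sketch, assuming the build wrote at least one .whl to the output directory (latest_wheel is a hypothetical helper, not part of the build tooling):

```shell
# Hypothetical helper: print the newest TensorFlow wheel in a directory.
# Assumes build_pip_package wrote at least one .whl there.
latest_wheel() {
  ls -t "$1"/tensorflow-*.whl 2>/dev/null | head -n1
}

# e.g. chown "$HOST_PERMS" "$(latest_wheel /mnt)"
```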

Install and verify the package within the container and check for a GPU:

pip uninstall tensorflow  # remove current version

pip install /mnt/tensorflow-version-tags.whl
cd /tmp  # don't import from source directory
python -c "import tensorflow as tf; print(tf.contrib.eager.num_gpus())"
