PyTorch

PyTorch: Tensors and Dynamic neural networks in Python with strong GPU acceleration. PyTorch is a deep learning framework that puts Python first.
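
As a minimal illustration of the "tensors and dynamic neural networks" core (a generic sketch, not specific to the HPC2N installation), the following Python snippet builds a small computation graph and backpropagates through it:

import torch

# Create a tensor that records operations for automatic differentiation
x = torch.randn(3, requires_grad=True)

# Build a dynamic computation graph: y = sum(x^2)
y = (x ** 2).sum()

# Backpropagate; the analytic gradient is dy/dx = 2x
y.backward()
print(x.grad)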

Policy

PyTorch is available to users at HPC2N.

Citations

Citation in APA style

Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., … Chintala, S. (2019). PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Advances in Neural Information Processing Systems 32 (pp. 8024–8035). Curran Associates, Inc. Retrieved from http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf

Citation in Vancouver style

1. Paszke A, Gross S, Massa F, Lerer A, Bradbury J, Chanan G, et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In: Advances in Neural Information Processing Systems 32 [Internet]. Curran Associates, Inc.; 2019. p. 8024–35. Available from: http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf

Citation in Harvard style

Paszke, A. et al., 2019. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Advances in Neural Information Processing Systems 32. Curran Associates, Inc., pp. 8024–8035. Available at: http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf.

Citation in Bibtex format

 
@incollection{NEURIPS2019_9015,
title = {PyTorch: An Imperative Style, High-Performance Deep Learning Library},
author = {Paszke, Adam and Gross, Sam and Massa, Francisco and Lerer, Adam and Bradbury, James and Chanan, Gregory and Killeen, Trevor and Lin, Zeming and Gimelshein, Natalia and Antiga, Luca and Desmaison, Alban and Kopf, Andreas and Yang, Edward and DeVito, Zachary and Raison, Martin and Tejani, Alykhan and Chilamkurthy, Sasank and Steiner, Benoit and Fang, Lu and Bai, Junjie and Chintala, Soumith},
booktitle = {Advances in Neural Information Processing Systems 32},
pages = {8024--8035},
year = {2019},
publisher = {Curran Associates, Inc.},
url = {http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf}
}

Overview

  • Production Ready
    • Transition seamlessly between eager and graph modes with TorchScript, and accelerate the path to production with TorchServe (a short TorchScript sketch follows this list).
  • Distributed Training
    • Scalable distributed training and performance optimization in research and production is enabled by the torch.distributed backend.
  • Robust Ecosystem
    • A rich ecosystem of tools and libraries extends PyTorch and supports development in computer vision, NLP and more.
  • Cloud Support
    • PyTorch is well supported on major cloud platforms, providing frictionless development and easy scaling.
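
As a minimal sketch of the TorchScript point above (the function name and shapes are illustrative only), a regular Python function can be compiled into a serializable graph:

import torch

@torch.jit.script
def scaled_relu(x: torch.Tensor, alpha: float) -> torch.Tensor:
    # TorchScript compiles this function into a static, serializable graph
    return torch.relu(x) * alpha

print(scaled_relu(torch.randn(3), 0.5))
print(scaled_relu.code)  # inspect the generated TorchScript code

The scripted function can also be saved with torch.jit.save and loaded later (e.g. for serving) without the original Python source.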

Usage at HPC2N

On HPC2N we have PyTorch available as a module.

Loading

To use the PyTorch module, add it to your environment. You can find versions with

module spider PyTorch 

and you can then see how to load a specific version (including its prerequisites) with

module spider PyTorch/<VERSION> 

Note

  • Some versions of PyTorch are available as GPU builds. These can usually be recognized by -CUDA- in the version name (as in PyTorch/2.1.2-CUDA-12.1.1).

Loading PyTorch without CUDA

Example: PyTorch/1.13.1

module load GCC/12.2.0 OpenMPI/4.1.4 PyTorch/1.13.1 

Loading PyTorch with CUDA

Example: PyTorch/1.13.1-CUDA-11.7.0-Python-3.8.6

module load GCC/11.3.0 OpenMPI/4.1.4 PyTorch/1.13.1-CUDA-11.7.0-Python-3.8.6 

Running

Here are two batch script examples for running a Python script that uses PyTorch:

CPU example

#!/bin/bash
#SBATCH -A hpc2nXXXX-YYY # Change to your own project ID 
#SBATCH --time=00:10:00 # Asking for 10 minutes - adjust as suitable 
#SBATCH -n 1 # Asking for 1 core

# Load any modules you need, here for PyTorch/1.13.1
module load GCC/12.2.0 OpenMPI/4.1.4 PyTorch/1.13.1 

# Run your Python script
python <my-pytorch-code>.py
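
A minimal, hypothetical example of what <my-pytorch-code>.py could contain (the tiny model and random data are illustrative only, just to verify that the module works on a CPU node):

import torch
import torch.nn as nn

# A tiny linear regression on random data
model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

x = torch.randn(64, 10)
y = torch.randn(64, 1)

for step in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.4f}")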

GPU example

#!/bin/bash
#SBATCH -A hpc2nXXXX-YYY # Change to your own project ID 
#SBATCH --time=00:10:00 # Asking for 10 minutes - adjust as suitable
#SBATCH -n 1 # Asking for 1 core
#SBATCH --gpus=1 # Asking for 1 GPU 
#SBATCH -C nvidia_gpu # We are happy with any Nvidia GPU 

# Load any modules you need, here for PyTorch/2.1.2-CUDA-12.1.1  
module load GCC/12.3.0 OpenMPI/4.1.5 PyTorch/2.1.2-CUDA-12.1.1   

# Run your Python script
python <my-gpu-pytorch-code>.py 
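
Correspondingly, a minimal hypothetical <my-gpu-pytorch-code>.py that checks that the allocated GPU is visible to PyTorch and runs a computation on it (the matrix sizes are illustrative only):

import torch

# Fall back to the CPU if no GPU is available, so the script also runs interactively
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Using device:", device)
if device.type == "cuda":
    print("GPU:", torch.cuda.get_device_name(0))

# Run a matrix multiplication on the selected device
a = torch.randn(1000, 1000, device=device)
b = torch.randn(1000, 1000, device=device)
print("Result norm:", (a @ b).norm().item())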

Additional info

More information can be found on the official PyTorch website: https://pytorch.org/