Salloc job submission overview

salloc is a Slurm command for allocating resources. It lets you request resources on a compute node in real time, without writing and submitting a batch script.

When you run salloc, Slurm allocates resources based on your request and provides you with a shell environment on a compute node. You can then execute commands interactively, run tests, debug programs, or perform any other tasks that require access to compute resources.

salloc and sbatch suit different scenarios, depending on your requirements and workflow in a Slurm-managed environment.

When to use salloc

Use salloc when:

  • You need interactive access to compute resources for debugging, testing, or exploratory computation (a short example follows this list).
  • You need on-demand access to a compute node without writing and submitting a job script to the Slurm scheduler.
  • You have short-lived tasks that need immediate attention and don’t warrant the overhead of a separate job script.
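
For example, a minimal interactive debugging session might look like this (the cpu partition matches the examples later on this page; the script name is a placeholder):

salloc -c 2 --mem=4G -p cpu
# once the allocation is granted, a shell opens on the compute node
python my_script.py   # run and debug interactively (placeholder script)
exit                  # end the job and release the allocation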

When to use sbatch

sbatch lets you submit scripts containing multiple commands, job dependencies, and other specifications for execution on compute nodes. If you need to run a single application multiple times, or you are running a non-interactive application, you should usually use sbatch instead of salloc: sbatch lets you specify parameters in the script itself and is non-blocking.

Use sbatch when:

  • You have batch jobs that you want to submit to the Slurm scheduler for execution.
  • You need automated or scheduled execution of tasks; sbatch runs batch scripts without manual intervention.
  • You have long-running jobs, simulations, data processing, or other tasks that can run without interactive supervision (a minimal script sketch follows this list).
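
For comparison, here is a minimal batch script sketch; the job name, resource amounts, and program are placeholders, not values prescribed by this page:

#!/bin/bash
#SBATCH -J myjob         # job name (placeholder)
#SBATCH -p cpu           # partition to submit to
#SBATCH -c 4             # CPU cores per task
#SBATCH --mem=8G         # memory for the job
#SBATCH -t 01:00:00      # time limit (HH:MM:SS)

./my_program             # placeholder for your application

Submit it with sbatch, for example sbatch myjob.sh. See the Batch Jobs page for details.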

Submit jobs using salloc

salloc is a blocking command, which means you can’t execute other commands in the shell where you ran it until salloc itself finishes, that is, until the allocation on the compute node is released (not necessarily when the work running there completes).

When you run salloc, it creates a new shell environment that you can interact with (not to be confused with the original shell in which you ran the command). This new shell lets you work on the allocated node.

For example, if you run salloc srun /bin/hostname and resources are available, the hostname of the allocated node is printed right away. If resources are not available, the command blocks while your request is pending in the queue; press Ctrl+C once to cancel the pending request. To cancel the job once it has already started, press Ctrl+C twice within one second.
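
A successful run might look like the following; the job ID and node name are illustrative, and the exact messages vary by Slurm version:

salloc srun /bin/hostname
salloc: Granted job allocation 12345
cpu001
salloc: Relinquishing job allocation 12345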

Note: Like sbatch, you can use salloc to run a batch file, but it runs interactively and the #SBATCH directives in the file are ignored.

The command syntax is salloc <options> srun [executable] <args>.

You can use options to specify the resources you want for the executable. The following are some of the available options:

  • -c <num> Number of CPUs (cores) to allocate to the job per task
  • -n <num> The number of tasks to allocate (for MPI)
  • -G <num> Number of GPUs to allocate to the job
  • --mem <num>[M|G|T] Memory to allocate to the job (in MB by default)
  • -p <partition> Partition to submit the job to

Tip: To see all available parameters, run man salloc or see the online version.
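
For example, to request two tasks with four cores each, one GPU, and 8 GB of memory, and to launch a program on the allocation (the gpu partition name and the program are placeholders; check the Partition List for the partitions available to you):

salloc -n 2 -c 4 -G 1 --mem=8G -p gpu srun ./my_program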

To run an interactive job with your default shell, pass the resources required to salloc and do not specify a command:

salloc -c 6 -p cpu
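
Once the allocation is granted, you are placed in a shell on the allocated compute node. Run your commands there, then end the job to release the resources:

hostname   # confirm you are on the compute node
exit       # end the job and release the allocation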

To run a GUI application on the cluster, you must use an interactive job with the --x11 flag:

salloc -c 6 -p cpu --x11 xclock
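
For --x11 to work, X11 forwarding must also be enabled in your SSH session (for example, by connecting with ssh -X); see the SSH page for details.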

Warning: You cannot run an interactive or GUI job using the sbatch command; you must use salloc.