The Allocation Portal

Visit the Allocation Portal at https://allocation.unityhpc.org/.

This document provides an overview of the Allocation Portal, including its primary features, common use cases, and key concepts such as resources, projects, and allocations. It also explains how the Allocation Portal relates to the Unity HPC Account Portal.

The Allocation Portal is powered by an open-source product called ColdFront. For more detailed technical information, refer to the ColdFront documentation.

Features and Use Cases

The Allocation Portal is a resource allocation management system. At Unity HPC, we use it to keep track of agreements such as:

  • “We rent 100 TB of fast storage to Alice”
  • “We rent 200 TB of slow storage to Bob”
  • “One of our compute nodes was purchased by Charlie, so he gets priority access to it”

A resource allocation management system stores this information in a database and provides tools to manage it efficiently.

Instead of contacting Unity staff via Slack or email when you need a new allocation, you can submit a request directly through the Allocation Portal. Once your request is approved and the allocation becomes active, you can manage that allocation yourself at any time. You can take actions such as:

  • Add or remove users
  • Promote another user to manager so they can make changes on your behalf

These actions do not require Unity staff intervention.

Projects, Resources, and Allocations

To understand how projects, resources, and allocations fit together, consider this example:

You are a professor in the UMass Amherst College of Information and Computer Sciences researching novel machine learning models. You’d like to take advantage of large datasets available online, but your personal computer doesn’t have enough storage. You contact the Unity HPC staff and request space on the RStor Research Storage System. Unity staff approve 100 terabytes of storage at an agreed annual price.

In this scenario:

  • Project: Your machine learning research
  • Resource: RStor
  • Allocation: 100 TB of RStor assigned to your project
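
To make these pieces concrete, here is a minimal sketch of how a resource allocation management system might record the relationships between a PI, a project, a resource, and an allocation. This is purely illustrative Python: the class and field names are hypothetical and do not reflect ColdFront’s actual data model or any Allocation Portal API.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative only: hypothetical names, not ColdFront's real schema.

@dataclass
class PI:
    name: str

@dataclass
class Project:
    title: str
    pi: PI  # a project is owned by exactly one PI

@dataclass
class Resource:
    name: str  # e.g., "RStor"

@dataclass
class Allocation:
    project: Project  # each allocation belongs to exactly one project
    resource: Resource
    quantity_tb: int
    end_date: date  # every allocation has a defined expiration date

# The scenario above, expressed as records:
pi = PI("CICS professor")
project = Project("Novel machine learning models", pi=pi)
rstor = Resource("RStor")
allocation = Allocation(project, rstor, quantity_tb=100,
                        end_date=date(2026, 1, 1))  # hypothetical date
```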

Projects

A project is owned by exactly one PI, but one PI can have multiple projects.

For most users, we recommend having just one project for all of your research activities. However, there are some cases where it might make sense to create multiple projects, such as:

  • Providing access to students in a course you are teaching
  • Collaborating with PIs from other institutions
  • Compartmentalizing between different funding sources, so you don’t accidentally spend money from grant A when doing work for grant B

Resources

As of the Allocation Portal’s initial launch, the only available resource is the RStor Research Storage System. We plan to include other resources in the future, such as:

  • VAST and NESE storage systems
  • Compute clusters like Unity and Harmony
  • Priority access to specific nodes within those compute clusters

Allocations

An allocation is a specific quantity of a resource (storage, compute access, etc.) assigned to a project, with a defined expiration date. Each allocation belongs to exactly one project, though one project can have multiple allocations.
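
As a minimal, self-contained illustration of these rules (all names, quantities, and dates below are made up), one project can hold several allocations, each tied to that one project and each carrying its own expiration date:

```python
from datetime import date

# Hypothetical allocations attached to a single project,
# recorded as (resource, quantity in TB, expiration date).
project = "Machine learning research"
allocations = [
    ("RStor", 100, date(2026, 6, 30)),
    ("RStor", 50, date(2025, 1, 15)),  # a second, shorter-lived allocation
]

# Each entry belongs to exactly this one project; the project holds many.
for resource, quantity_tb, end_date in allocations:
    status = "active" if end_date >= date.today() else "expired"
    print(f"{project}: {quantity_tb} TB of {resource} until {end_date} ({status})")
```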

Relationship to the Account Portal

The Unity HPC Platform Account Portal (formerly the “Unity web portal”) currently overlaps in functionality with the Allocation Portal, which may cause some confusion. Both portals allow a PI to manage a list of associated users who have access to some Unity HPC Platform service.

  • In the Account Portal, this access applies specifically to the Unity batch cluster.
  • In the Allocation Portal, this access applies to a given resource, which could be a storage system, a compute system, or something else.

Over time, access to the Unity batch cluster will be managed as a resource within the Allocation Portal just like RStor, and the user-group management will be removed from the Account Portal.
