RStor Research Storage System

RStor is a mid-performance, low-cost storage system launched in 2025 for storing research data that doesn't have unusual security requirements. RStor is mounted on the Unity batch cluster via NFS and is mountable on UMass Amherst desktop computers via SMB. Shares are managed via the new Unity HPC Platform Allocation Portal. RStor is currently available only to qualified PIs at UMass Amherst.
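
Once a share is mounted on Unity, jobs read and write it like any other POSIX filesystem. A minimal Python sketch, assuming a hypothetical mount path /nfs/rstor/mylab (the real path is assigned when your share is created):

    from pathlib import Path

    # Hypothetical mount point; substitute the path assigned to your share.
    share = Path("/nfs/rstor/mylab")

    # Fail fast if the NFS mount is absent before a job depends on it.
    if not share.is_dir():
        raise SystemExit(f"RStor share not mounted at {share}")

    # Write a result file into the share, then read it back.
    out = share / "results" / "run01.txt"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text("experiment complete\n")
    print(out.read_text())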

Note: RStor is hosted at the MGHPCC and undergoes:

  • A yearly downtime of 1-3 days every spring, usually in late May or early June
  • A monthly maintenance period on the first Tuesday of every month, 6-9 AM

Data Security

RStor is currently approved for data levels 1 and 2 only, according to UMass Amherst’s Data Categorization Levels.

Warning: RStor is not approved for any data subject to restrictions under the Export Administration Regulations (15 CFR Parts 730-774) or the International Traffic in Arms Regulations (22 CFR Parts 120-130), nor for data subject to Health Insurance Portability and Accountability Act (HIPAA) or Controlled Unclassified Information (CUI) safeguarding requirements.

Purchasing & Related Policies

  • Minimum purchase period is 1 year.
  • Backed-up storage pricing is TBD.
  • Non-backed-up storage pricing is TBD.
  • Backups (if applicable) are retained for 90 days and performed daily from snapshots, which are taken every 4 hours with the 18 most recent retained. Snapshots can be accessed at /.zfs/snapshot (see the sketch after this list), but user self-restore from that location hasn't been extensively tested yet. Backups are not currently offsite or immutable.
  • Snapshots are available for both backed-up and non-backed-up shares.
  • We will do our best to accommodate share size increase requests but cannot guarantee them in all cases. Please keep share size change requests to a maximum of twice per year.
  • Space increase requests are pro-rated and co-termed with the original purchase.
  • You have 30 days after the end of a purchase agreement to move your data off RStor before it is permanently deleted.
  • UMass Amherst Research Computing & Data (RCD) commits to keeping RStor in service for at least 3 years from share purchase time, or to offering an analogous service to migrate to. Pricing will stay the same for the duration of the initial agreement, regardless of platform.
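
Since snapshots are exposed under each share's /.zfs/snapshot directory, a self-restore is just a copy out of a snapshot back into the live tree. A minimal Python sketch, assuming a hypothetical share root /nfs/rstor/mylab and file results/run01.txt; as the backup bullet above notes, self-restore from this location hasn't been extensively tested, so verify any restored file:

    import shutil
    from pathlib import Path

    # Hypothetical share root; substitute your share's mount path.
    share = Path("/nfs/rstor/mylab")
    snapdir = share / ".zfs" / "snapshot"

    # Snapshots are taken every 4 hours and the 18 newest are kept,
    # i.e. roughly 3 days of point-in-time history.
    snapshots = sorted(snapdir.iterdir(), key=lambda p: p.stat().st_mtime)
    print(f"{len(snapshots)} snapshots; newest: {snapshots[-1].name}")

    # Copy a deleted or overwritten file out of the newest snapshot.
    lost = "results/run01.txt"  # path relative to the share root
    target = share / lost
    target.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(snapshots[-1] / lost, target)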

Beta-Testing Users

  • By the end of February 2026, beta-testing users will need to submit payment or remove their data from RStor.
  • To continue access, email drparker@umass.edu and werikson@umass.edu with:
    • Share name
    • Accounts that should have access
    • Storage size

How to Request a Share

Shares are requested through the Unity HPC Platform Allocation Portal. You will need to provide the following:

  1. Name of share
    • Alphanumeric characters only, no spaces, 32-character limit (see the validation sketch after this list)
  2. Size of share
    • 1 TB increments
    • Requests over 200 TB might require extra time
    • Maximum of ~700 TB
  3. Length of time you expect to have the share (in years)
  4. Backed-up or non-backed-up storage
  5. Speedtype to be charged
  6. Contact information for who can set up payment
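
The name and size constraints above are straightforward to check before filling out the form. A minimal Python sketch of the stated rules (the function name and structure are illustrative, not part of any Unity tooling):

    def validate_share_request(name: str, size_tb: int) -> list[str]:
        """Check a proposed RStor share request against the stated rules."""
        problems = []
        # Name: alphanumeric only (no spaces), at most 32 characters.
        if not name.isalnum():
            problems.append("name must be alphanumeric with no spaces")
        if len(name) > 32:
            problems.append("name must be at most 32 characters")
        # Size: whole terabytes, up to ~700 TB.
        if size_tb < 1:
            problems.append("size must be at least 1 TB, in 1 TB increments")
        elif size_tb > 700:
            problems.append("size exceeds the ~700 TB maximum")
        elif size_tb > 200:
            problems.append("note: requests over 200 TB may need extra time")
        return problems

    print(validate_share_request("mylab2025", 50))  # [] -> request looks valid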

Articles in RStor Research Storage System

  • Managing RStor Shares: Documentation for Managing RStor Shares.
  • RStor Usage: Documentation for RStor Usage.
  • The Allocation Portal: Documentation for The Allocation Portal.
Last modified: Thursday, March 5, 2026 at 9:46 AM.