slurm partition limit for big memory job

by Martin Forde   Last Updated March 14, 2019 21:00 PM

I have two pools of nodes, one for default compute and the other for big memory applications.

default <- nodes[1-40]
bigmem <- mem[1-8]
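For context, the two pools could be expressed in slurm.conf roughly like this; the node and partition names match the question, but the RealMemory figures and exact limits are illustrative assumptions:

```
# slurm.conf sketch -- names from the question, memory figures illustrative
NodeName=nodes[1-40] RealMemory=64000
NodeName=mem[1-8]    RealMemory=512000

# Cap the default partition at ~30 GB per node; default 1 GB per task
PartitionName=default Nodes=nodes[1-40] Default=YES MaxMemPerNode=30720 DefMemPerCPU=1024
PartitionName=bigmem  Nodes=mem[1-8]    MaxMemPerNode=UNLIMITED
```

Note that MaxMemPerNode on its own makes Slurm reject an oversized job rather than re-route it; diverting it to another partition takes an extra step.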

How do I set up a limit on the default partition so that a job requesting more than, say, 30 GB of memory is diverted to the bigmem partition?

$ srun -N1 -n1 --mem=50G script.sh      # rejected from default, queued on bigmem instead
$ srun -N1 -n1 script.sh                # accepted on default; the default memory allocation is 1 GB per task
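The usual way to get the diversion behavior described above is a job_submit Lua plugin. A rough sketch follows; the threshold and partition names come from the question, but field names such as pn_min_memory vary between Slurm versions (and encode per-CPU memory with a high flag bit when --mem-per-cpu is used), so treat this as an outline rather than a drop-in:

```lua
-- job_submit.lua sketch: divert large-memory jobs to the bigmem partition.
-- Assumes job_desc.pn_min_memory holds requested memory per node in MB;
-- check your Slurm version's job_submit documentation for the exact field.
function slurm_job_submit(job_desc, part_list, submit_uid)
    local threshold_mb = 30 * 1024  -- 30 GB, per the question
    if job_desc.pn_min_memory ~= nil and
       job_desc.pn_min_memory ~= slurm.NO_VAL64 and
       job_desc.pn_min_memory > threshold_mb then
        job_desc.partition = "bigmem"
    end
    return slurm.SUCCESS
end

function slurm_job_modify(job_desc, job_rec, part_list, modify_uid)
    return slurm.SUCCESS
end
```

The plugin is enabled with JobSubmitPlugins=lua in slurm.conf; Slurm looks for job_submit.lua in the same directory as slurm.conf.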

thanks,

m

Tags : slurm
