slurm partition limit for big memory job

by Martin Forde   Last Updated March 14, 2019 21:00 PM

I have two pools of nodes, one for default compute and the other for big memory applications.

default <- nodes[1-40]
bigmem <- mem[1-8]

How do I set up a partition limit on default, so that if a job requests more than, say, 30G of memory it is diverted to the bigmem partition?
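Slurm does not move jobs between partitions on its own: a job that exceeds a partition limit is rejected (or left pending), not diverted. The first step is therefore to cap memory on the default partition with MaxMemPerNode. A minimal slurm.conf sketch, assuming the node names above and illustrative RealMemory values (memory sizes in slurm.conf are in MB, so 30G = 30720):

```
# slurm.conf (fragment) -- node names from the question, memory values illustrative
NodeName=nodes[1-40] RealMemory=64000
NodeName=mem[1-8]    RealMemory=512000

# Cap jobs on the default partition at 30 GB per node; bigmem is uncapped.
PartitionName=default Nodes=nodes[1-40] Default=YES MaxMemPerNode=30720 DefMemPerCPU=1024
PartitionName=bigmem  Nodes=mem[1-8]    MaxMemPerNode=UNLIMITED

# Reject over-limit jobs at submit time instead of leaving them pending forever:
EnforcePartLimits=ALL
```

With this in place, a 50G request against default is refused at submission. Users can then target bigmem explicitly (`srun -p bigmem --mem=50G`), or list both partitions (`srun -p default,bigmem --mem=50G`) and let Slurm start the job in the first partition able to run it.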

$ srun -N1 -n1 --mem=50G      # job is rejected from default and queued to bigmem
$ srun -N1 -n1                # job is accepted to default, since the default memory allocation is 1 GB per task
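For the fully automatic behaviour shown above, a job_submit plugin is needed to rewrite the partition at submit time. A sketch of a job_submit.lua (enabled with `JobSubmitPlugins=lua` in slurm.conf; the 30G threshold and partition names come from the question, everything else is an assumption about your setup). Note that `pn_min_memory` is expressed in MB and carries a high flag bit when the job used `--mem-per-cpu` rather than `--mem`:

```lua
-- job_submit.lua (sketch): route large-memory jobs from default to bigmem.
-- Assumes JobSubmitPlugins=lua is set in slurm.conf.

local LIMIT_MB = 30 * 1024                    -- 30 GB threshold from the question
local MEM_PER_CPU_FLAG = 0x8000000000000000   -- set when --mem-per-cpu was used

function slurm_job_submit(job_desc, part_list, submit_uid)
    local mem = job_desc.pn_min_memory
    -- Only handle plain --mem requests; --mem-per-cpu has the flag bit set.
    if mem ~= nil and mem < MEM_PER_CPU_FLAG and mem > LIMIT_MB then
        if job_desc.partition == nil or job_desc.partition == "default" then
            job_desc.partition = "bigmem"
        end
    end
    return slurm.SUCCESS
end

function slurm_job_modify(job_desc, job_rec, part_list, submit_uid)
    return slurm.SUCCESS
end
```

The plugin runs in slurmctld on every submission, so jobs requesting more than 30G land in bigmem without the user doing anything; jobs with no explicit memory request keep the 1 GB-per-task default and stay on default.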



Tags : slurm
