Dear list,

I am looking to run large numbers of jobs as an array job using SLURM. I created a job file "jobs.txt" containing one configuration per line; in the batch script I use SLURM_ARRAY_TASK_ID to select the appropriate line for the current task and execute the corresponding configuration (a sketch of the script is included below my signature).

My test file currently holds 1308 configurations, which I was unable to submit with sbatch, as the MaxArraySize limit seems to be set to 1001.

What is the optimal/proposed way of scheduling such large numbers of configurations? Each configuration needs one core (or thread?), i.e. #SBATCH --cpus-per-task=1 and #SBATCH --ntasks-per-core=1 (are both necessary?), and 20G of memory (#SBATCH --mem-per-cpu=20480M).

My hope was to build these large configuration lists and then submit them to our own batch system, so that it uses all available nodes to work through all configurations block by block, in series, as fast as possible. Is that not what SLURM array jobs are supposed to do?

Kind regards,
Philipp

--
Philipp Berger
https://moves.rwth-aachen.de/people/berger/

Software Modeling and Verification Group
RWTH Aachen University
Phone +49/241/80-21206
Ahornstraße 55, 52056 Aachen, Germany
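
P.S. For reference, here is a rough sketch of the batch script I am using. The job name and "run_config" are placeholders for the actual tool, and the --array line shows the range that gets rejected:

#!/bin/bash
#SBATCH --job-name=config-array       # placeholder name
#SBATCH --cpus-per-task=1             # one core per configuration
#SBATCH --ntasks-per-core=1           # unsure whether this is needed as well
#SBATCH --mem-per-cpu=20480M          # 20G per configuration
#SBATCH --array=1-1308                # rejected, exceeds the 1001 limit

# Select the configuration for this array task (one configuration per line in jobs.txt).
CONFIG=$(sed -n "${SLURM_ARRAY_TASK_ID}p" jobs.txt)

# ${CONFIG} is left unquoted so a configuration may expand to several arguments.
srun ./run_config ${CONFIG}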