There are different ways to submit jobs to a SLURM cluster, and the main one is submitting a script with the
sbatch command:
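For instance, a submission looks like this (job.sh is a placeholder name for the batch script; the job ID in the response will of course differ):

```shell
$ sbatch job.sh
Submitted batch job 12345
```

sbatch reads the #SBATCH directives at the top of the script, queues the job, and prints the job ID it assigned.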
In a recent attempt to develop a web application with SLURM as the backend scheduler for job management, I referred to the SLURM documentation to redirect STDOUT and STDERR to files named after the job ID, as shown below:
#!/bin/bash
#SBATCH --job-name=qsim
#SBATCH --partition=standard-low
#SBATCH -o $SLURM_JOB_ID.output
#SBATCH -e $SLURM_JOB_ID.error
python UWVr6QCFKLGgx6sRtsnRZyRrajJdbPF4CsKGUqd7S4r.py
Unfortunately, the variable didn’t get substituted in the output file names:
$ ls -lrt
-rw-rw-r-- 1 vivekn vivekn 737 Sep 17 12:43 $SLURM_JOB_ID.output
-rw-rw-r-- 1 vivekn vivekn 580 Sep 17 12:43 $SLURM_JOB_ID.error
After quite extensive research, a guide from Harvard showed me that the way I was referring to the job ID was wrong, and I had to change the SLURM job script as below:
#!/bin/bash
#SBATCH --job-name=qsim
#SBATCH --partition=standard-low
#SBATCH -o %j.output
#SBATCH -e %j.error
python UWVr6QCFKLGgx6sRtsnRZyRrajJdbPF4CsKGUqd7S4r.py
The %j in the filename will be substituted by the JobID at runtime.
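%j is not the only replacement symbol sbatch accepts in filenames. A couple of others I have found useful (check the filename-pattern section of the sbatch man page for the full list) are %x for the job name and %N for the short hostname of the node that runs the job:

```shell
#!/bin/bash
#SBATCH --job-name=qsim
#SBATCH --partition=standard-low
# %x -> job name, %j -> job ID, %N -> short hostname of the first node
#SBATCH -o %x-%j-%N.output
#SBATCH -e %x-%j-%N.error
python UWVr6QCFKLGgx6sRtsnRZyRrajJdbPF4CsKGUqd7S4r.py
```

With the directives above, a run of job 12345 named qsim on node node01 would produce files like qsim-12345-node01.output.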
However, I am yet to find out why the SLURM environment variable $SLURM_JOB_ID didn’t work. I will update once I have an explanation. If you have any suggestions or answers, please write in the comment section below.
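One plausible explanation, which I still need to verify against the sbatch man page: #SBATCH lines are shell comments, parsed by sbatch at submission time, so the shell never performs variable expansion on them (and the job ID doesn’t even exist until after submission). The variable is, however, exported into the job’s environment at runtime, so if you prefer the variable over %j, a workaround along these lines should work:

```shell
#!/bin/bash
#SBATCH --job-name=qsim
#SBATCH --partition=standard-low
# No -o/-e directives here; redirect explicitly in the script body,
# where $SLURM_JOB_ID is set by SLURM at runtime.
python UWVr6QCFKLGgx6sRtsnRZyRrajJdbPF4CsKGUqd7S4r.py \
    > "${SLURM_JOB_ID}.output" 2> "${SLURM_JOB_ID}.error"
```

Note that without -o/-e, SLURM still writes its own default slurm-%j.out file for anything printed before the redirection takes effect.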