This tutorial assumes you have already worked through the Execute a Job tutorial. The instructions here are therefore abbreviated, but they follow the same format so you can easily consult the extended tutorial.
📝 Note: Do not execute jobs on the login nodes; only use the login nodes to access your compute nodes. Processor-intensive, memory-intensive, or otherwise disruptive processes running on login nodes will be killed without warning.
Open a Bash terminal (or MobaXterm for Windows users).
ssh username@<Onyx login node address>
When prompted, enter your password.
Here is an example sbatch script for running a batch job on an HPC like Onyx.
#!/bin/bash
#SBATCH -n 16
#SBATCH -o test_%A.out
#SBATCH --error test_%A.err
#SBATCH --mail-user $CHANGE_TO_YOUR_EMAIL
#SBATCH --mail-type ALL

module purge
module load gnu/5.4.0
module load openmpi
module list

mpirun hello_world_f
Use nano or Vim (we use Vim here) to create and edit your sbatch script.
Create your sbatch script within Vim: press i to enter insert mode, then type or paste the contents of your sbatch script.
Save your file and return to the Bash shell by typing
:wq!
program helloworld
use mpi
integer ierr, numprocs, procid
call MPI_INIT(ierr)
call MPI_COMM_RANK(MPI_COMM_WORLD, procid, ierr)
call MPI_COMM_SIZE(MPI_COMM_WORLD, numprocs, ierr)
print *, "Hello world! I am process ", procid, "out of", numprocs, "!"
call MPI_FINALIZE(ierr)
stop
end
Use Vim (vim hello_world.f90) to create your Fortran source file.
Save your file and return to the Bash shell.
Load the MPI compiler using the openmpi module.
module load openmpi
Compile the Fortran source into a binary executable file.
mpifort -o hello_world_f hello_world.f90
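If the compile step fails with "command not found", the MPI wrapper is probably not on your PATH. The sketch below, which assumes nothing beyond a POSIX shell, checks for the wrapper before compiling; mpifort is Open MPI's modern wrapper name, and some older installs provide only mpif90.

```shell
# Sketch: confirm an MPI Fortran wrapper is on PATH before compiling.
# Try the modern name first, then the legacy one.
fc=""
for candidate in mpifort mpif90; do
    if command -v "$candidate" >/dev/null 2>&1; then
        fc="$candidate"
        break
    fi
done

if [ -n "$fc" ]; then
    echo "MPI wrapper found: $fc (compile with: $fc -o hello_world_f hello_world.f90)"
else
    echo "no MPI wrapper on PATH; run 'module load openmpi' first"
fi
```

Either message tells you how to proceed: compile with the wrapper that was found, or load the openmpi module and try again.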
Use ls -al to verify the presence of the hello_world_f binary in your working directory.
Before proceeding, ensure that you are still in your working directory (using
pwd) and that you still have the gnu and openmpi modules loaded (using
module list). We need to be in the same directory as our sbatch script and our Fortran binary; use
ls -al to confirm their presence.
Use sbatch followed by the name of your sbatch script to schedule your batch job in the queue.
This command automatically queues your job using Slurm and prints a job number. You can check the status of your job at any time with the squeue command:
squeue --job <jobnumber>
You can also stop your job at any time with the scancel command:
scancel --job <jobnumber>
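Rather than copying the job number by hand, you can capture it from the message sbatch prints ("Submitted batch job <N>"). The sketch below parses a sample line instead of submitting a real job; 123456 is a made-up job ID used only for illustration.

```shell
# Hypothetical sketch: extract the job ID from sbatch's output line.
# In practice you would use: line=$(sbatch your_script.sbatch)
line="Submitted batch job 123456"

# Keep the last whitespace-separated word, i.e. the job number.
jobid="${line##* }"

echo "check status with: squeue --job $jobid"
echo "cancel with:       scancel --job $jobid"
```

The same jobid variable can then be reused for both squeue and scancel, which avoids typos when juggling several jobs.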
View your results. Slurm writes them to the test_<jobnumber>.out and test_<jobnumber>.err files named in your sbatch script; you can view the contents of these files using the
less command followed by the file name.
Your output should look something like this (output truncated):
Hello world! I am process 3 out of 20 !
Hello world! I am process 0 out of 20 !
Hello world! I am process 1 out of 20 !
Hello world! I am process 7 out of 20 !
Hello world! I am process 8 out of 20 !
Hello world! I am process 2 out of 20 !
Hello world! I am process 6 out of 20 !
Hello world! I am process 11 out of 20 !
...
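Notice that the ranks do not print in order: each MPI process writes whenever it runs, so the order varies between runs. If you want a deterministic view of the log, you can sort on the rank number, which is the sixth whitespace-separated field of the line format above. A small sketch using sample lines:

```shell
# Sketch: sort "Hello world" lines numerically by rank (field 6).
# In practice you would feed the real log: sort -k6,6n test_<jobnumber>.out
printf '%s\n' \
  "Hello world! I am process 3 out of 20 !" \
  "Hello world! I am process 0 out of 20 !" \
  "Hello world! I am process 11 out of 20 !" |
sort -k6,6n
```

The -k6,6n option tells sort to compare only field 6 and to compare it numerically, so rank 11 sorts after rank 3 rather than before it.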
Download your results (using the
scp command or an SFTP client) or move them to persistent storage. See our moving data section for help.
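The scp command is run from your local machine, not from the cluster. The sketch below only composes the command as a string; jsmith, onyx-login, and 123456 are placeholder values, so substitute your own username, login-node address, and job number.

```shell
# Hypothetical sketch: build the scp command for pulling a result file
# home. All three values are placeholders, not real account details.
user="jsmith"
host="onyx-login"
jobid="123456"

echo "scp ${user}@${host}:~/test_${jobid}.out ."
```

The trailing dot is the destination: the current directory on your local machine.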