Working with Python
This tutorial assumes you have already worked through the Execute a Job tutorial. The instructions here are therefore abbreviated, but they follow the same format so you can easily consult the extended tutorial.
📝 Note: Do not execute jobs on the login nodes; only use the login nodes to access your compute nodes. Processor-intensive, memory-intensive, or otherwise disruptive processes running on login nodes will be killed without warning.
Step 1: Access the Onyx HPC
Open a Bash terminal (or MobaXterm for Windows users).
Execute
ssh doaneusername@onyx.doane.edu
When prompted, enter your password.
Step 2: Create an sbatch Script
Example sbatch Script
Here is an example sbatch script for running a batch job on an HPC like Onyx.
#!/bin/bash
#SBATCH -n 16                              # request 16 tasks (MPI processes)
#SBATCH -o test_%A.out                     # standard output file; %A expands to the job number
#SBATCH --error test_%A.err                # standard error file
#SBATCH --mail-user $CHANGE_TO_YOUR_EMAIL # replace with your email address
#SBATCH --mail-type ALL                    # email notifications for all job events

module purge                               # start from a clean module environment
module load gnu/5.4.0
module load spack-apps
module load openmpi-3.0.0-gcc-5.4.0-clbdgmf
module load python-3.6.3-gcc-5.4.0-ctlzpuv
module load py-mpi4py-3.0.0-gcc-5.4.0-wh6rtv7
module list                                # record the loaded modules in the job output

mpirun python hello_world.py               # launch the Python script across all 16 tasks
sbatch Procedure
Use nano or Vim (we use Vim here) to create and edit your sbatch script.
vim slurm_py_example.job
Create your sbatch script within Vim by typing i for insert mode, or paste the contents of your sbatch script into Vim.
Save your file by typing :wq! and return to the Bash shell.
Step 3: Create a Python Program
MPI Hello World Code
#!/usr/bin/env python
from mpi4py import MPI  # importing mpi4py initializes MPI

size = MPI.COMM_WORLD.Get_size()  # total number of MPI processes
rank = MPI.COMM_WORLD.Get_rank()  # this process's ID, from 0 to size - 1
name = MPI.Get_processor_name()   # name of the node running this process

print("Hello, World! I am process", rank, "of", size, "on", name)
Python Procedure
Use Vim (vim) to create your Python source file within your working directory.
vim hello_world.py
Paste the hello world Python code into Vim.
Save your file and return to the Bash shell.
Python is interpreted, so it does not need to be compiled before the job runs.
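Since the cluster already provides mpi4py, a natural next step after hello world is point-to-point messaging, where one rank sends data directly to another. Below is a minimal sketch; the file name send_recv_demo.py and the message contents are inventions for this example, not part of the tutorial's files.

#!/usr/bin/env python
# Minimal mpi4py point-to-point sketch (illustrative only). Rank 0 sends a
# small Python object to every other rank; each receiver prints what it got.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

if rank == 0:
    # The lowercase send() pickles arbitrary Python objects automatically.
    for dest in range(1, size):
        comm.send({"greeting": "hello", "to": dest}, dest=dest, tag=0)
else:
    msg = comm.recv(source=0, tag=0)
    print("Rank", rank, "received:", msg)

To try it, create the file the same way you created hello_world.py and point the mpirun line of your sbatch script at it instead.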
Step 4: Run the Job
Before proceeding, ensure that you are still in your working directory (using pwd) and that you still have the openmpi module loaded (using module list). You need to be in the same directory as your sbatch script and your Python script; use ls -al to confirm their presence.
Use sbatch to schedule your batch job in the queue.
sbatch slurm_py_example.job
This command will automatically queue your job using Slurm and produce a job number. You can check the status of your job at any time with the squeue command (the sketch after this step shows one way to automate this check from Python).
squeue --job <jobnumber>
You can also stop your job at any time with the scancel command.
scancel --job <jobnumber>
View your results. You can view the contents of the output files using the less command followed by the file name.
less test_<jobnumber>.out
Your output should look something like this (the output is truncated):
Hello, World! I am process 11 of 20 on compute_node
Hello, World! I am process 13 of 20 on compute_node
Hello, World! I am process 12 of 20 on compute_node
Hello, World! I am process 0 of 20 on compute_node
Hello, World! I am process 15 of 20 on compute_node
Hello, World! I am process 19 of 20 on compute_node
Hello, World! I am process 7 of 20 on compute_node
Hello, World! I am process 16 of 20 on compute_node
Hello, World! I am process 3 of 20 on compute_node
Hello, World! I am process 9 of 20 on compute_node
...
Because the MPI processes run concurrently, the ranks report in no particular order, and the order will differ from run to run.
Download your results (using the scp command or an SFTP client) or move them to persistent storage. See our moving data section for help.
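If you would rather not re-run squeue by hand, you can poll it from Python. The following is a minimal sketch under a few assumptions: the file name check_job.py and the 30-second polling interval are made up for this example, and it uses only the squeue options shown above plus the standard --noheader flag.

#!/usr/bin/env python
# Illustrative sketch: wait for a Slurm job to leave the queue, then print
# the first lines of its output file. Not part of the tutorial's files.
import subprocess
import sys
import time

job_id = sys.argv[1]  # the job number reported by sbatch

while True:
    # Once the job finishes, squeue returns no rows for it (it may also
    # report an error for a job that has aged out of the queue).
    result = subprocess.run(
        ["squeue", "--job", job_id, "--noheader"],
        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
        universal_newlines=True,
    )
    if not result.stdout.strip():
        break
    time.sleep(30)  # pause between checks

# Show the first ten lines of the job's standard output file.
with open("test_{}.out".format(job_id)) as f:
    for line, _ in zip(f, range(10)):
        print(line, end="")

Run it as python check_job.py <jobnumber> after submitting with sbatch. A periodic squeue check is lightweight, but keep the polling interval generous so the loop does not burden the login node.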
Additional Examples