MSI systems are primarily Linux-based compute clusters running the CentOS operating system. Home directories are unified across all Linux machines.
Software is managed via a "module" environment. Before running a particular piece of software, the appropriate software module must first be loaded.
Most MSI systems use the Portable Batch System (PBS) job scheduler to schedule computational jobs in queues. To execute a job, users submit a PBS script containing a request for computational resources as well as the commands to begin the computation.
Computation time on the high-performance systems is managed using "service units" (SUs). Service units correspond to CPU time, with the conversion factor differing between systems. Research groups receive service unit quotas through an allocation proposal process.
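As a rough illustration, the SU cost of a job is simple arithmetic: cores × walltime × conversion factor. The sketch below assumes a hypothetical factor of 1 SU per core-hour; the actual factor varies by system, so check the rate for the system you use.

```shell
# Hypothetical SU estimate for a 24-core, 8-hour job.
# The 1 SU per core-hour factor is an assumption, not a real system's rate.
cores=24
walltime_hours=8
su_per_core_hour=1
sus=$(( cores * walltime_hours * su_per_core_hour ))
echo "$sus SUs"
```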
Storage and Directory Structure
MSI home and group directories are unified across all Linux systems. When users log in to any MSI Linux system they will find the same home and group directories, with their files available.
Most principal investigators (PIs) working with MSI are given a group directory. Inside each group directory are the home directories belonging to the members of the group.
Temporary file storage is available on most systems. Temporary storage directories are different on every system, and are not unified. Files kept in temporary storage do not count toward the group storage limit. Most systems allow temporary storage within the /scratch directory; Itasca additionally has the high performance /lustre filesystem for temporary file storage.
Further information regarding storage is available on the Storage webpage.
MSI maintains hundreds of software packages in support of MSI researchers. Software is managed via a module system. Before running a particular piece of software, the appropriate module must first be loaded. The command to load a software module is:
module load modulename
In this example modulename should be replaced by the appropriate name of the software module to be loaded. To view a list of available software modules use the command:
module avail
When a software module is loaded it adds executable and library paths to the user environment, and might make other changes in preparation for software execution. Some useful module commands are summarized in the table below.
| Command | Action |
| --- | --- |
| module load modulename | Loads the specified software module. |
| module avail | Lists all available software modules. |
| module list | Lists software modules which are currently loaded. |
| module unload modulename | Unloads the specified software module. |
| module show modulename | Shows the actions taken by the specified software module. |
To efficiently and fairly manage MSI resources, computation scheduling is managed by the Portable Batch System (PBS) environment. Jobs are submitted to queues using job scripts that specify the resources a job requires, and the commands to begin executing a calculation. Queued jobs wait in line until the required resources are available. The priority of a job is affected by many factors as described here.
Below is a brief description of the job scheduling process; for more in-depth information please see Job Submission and Scheduling (PBS Scripts). For troubleshooting jobs please see Job Problem Solving.
A job is submitted to the queue using the command:
qsub jobscript
Here jobscript is the name of the PBS script containing the job information.
To submit to a non-default queue use the command:
qsub -q queuename jobscript
A job script contains information on the required job resources and execution commands.
A sample PBS job script is shown below:
#!/bin/bash -l
#PBS -l walltime=8:00:00,nodes=3:ppn=8,pmem=1000mb
#PBS -m abe
#PBS -M email@example.com
cd ~/program_directory
module load intel
module load ompi/intel
mpirun -np 24 program_name < inputfile > outputfile
In this PBS script the first line states that system commands should be read by the bash shell. The second line specifies the resource request, including walltime (hours:minutes:seconds), number of nodes, processor cores per node (ppn), and memory per processor core (pmem). The line reading #PBS -m abe specifies that emails should be sent to the user when the calculation aborts, begins, or ends, and the next line specifies the email address to use. The rest of the PBS script contains the commands to actually execute the calculation.
A PBS job script should contain the appropriate change directory (cd) commands, as the script will begin from the user home directory. The job script will also need to contain the necessary module load commands, as the calculation will begin with only the default login modules loaded. The last line is often an execution command. In this example the execution command launches a 24-core calculation (3 nodes × 8 cores per node) that uses MPI communication.
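For contrast, a minimal single-core job script might look like the sketch below. The directory, module, and program names are placeholders, and the resource values are illustrative only; adjust them for the actual calculation.

```shell
#!/bin/bash -l
#PBS -l walltime=1:00:00,nodes=1:ppn=1,pmem=1000mb
#PBS -m abe
#PBS -M email@example.com

# Jobs start in the home directory, so change to the working directory first.
cd ~/program_directory
# Load any modules the program needs (placeholder module name).
module load intel
# Run a serial (single-core) program.
./program_name < inputfile > outputfile
```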
It is possible to request resources for interactive use. An interactive request waits in the queue like any other job until the requested resources become available. When execution begins, the command prompt returns and the user may use the assigned hardware interactively, in regular Linux fashion. An example command to request nodes for interactive use is:
qsub -I -l walltime=2:00:00,nodes=2:ppn=8,pmem=2000mb
This command is entered on one line from the command prompt. The -I specification indicates that the job will be interactive. The rest of the command contains the resource request. After the command is entered the requested job will wait in the queue in the same way as all other jobs, and when the hardware becomes available the command prompt will return allowing interactive use.