Abaqus On Quest#
Warning
Multi-node Abaqus no longer works for versions 2020 or older
Tip
If you plan to run Abaqus with multi-node, please use Abaqus 2022 or newer.
How to Access Abaqus on Quest#
The Abaqus Network License is restricted to authorized groups only. Please fill out the form linked in the Abaqus section of the McCormick Site Licensed Software page and write “Quest” when asked for the Device Hostname(s) on which Abaqus will be installed. After McCormick IT confirms that you are eligible to use Abaqus on Quest, Northwestern IT will provide you with access. As a condition of use, users agree to comply with University policy on Appropriate Use of Electronic Resources, Use and Copying of Computer Software, and Best Practices for Securing Devices.
Using the Abaqus GUI with Quest OnDemand#
If you would like to use the Abaqus GUI with the resources available on Quest, we recommend using Quest OnDemand. Abaqus interactive jobs can be requested using the “Interactive Apps” dropdown, or from the “Abaqus” button on the right side of the landing page.
Running Abaqus with Slurm#
Instead of waiting for resources so that you can work with Abaqus interactively, you can also submit your Abaqus job as a batch job. The job will wait in the queue until the resources are available and will then start without the need for any user interaction.
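For example, once you have saved one of the submission scripts described below (the file name abaqus_job.sh here is just a placeholder), you can submit and monitor it with the standard Slurm commands:
# Submit the job script to the scheduler
sbatch abaqus_job.sh
# Check the status of your queued and running jobs
squeue -u $USER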
There are multiple parallelization options for Abaqus in batch jobs; you control which one is used through the argument mp_mode. The options for mp_mode are threads and mpi. Below we go over the differences between the options and when to use each.
Abaqus multiprocessing with argument mp_mode=threads#
The option threads can only be used when Abaqus runs on a single node. With threads, Abaqus parallelizes the simulation such that all threads/processes share the same memory. This type of parallelization is helpful when working with a large amount of data: the overhead of copying the data to each process, as happens with mp_mode=mpi (see the section below), can cause the job to run out of memory.
If using the threads option, you will want to change the SBATCH flags to the following:
#SBATCH --nodes=1 # Number of nodes to request (threads are single-node)
#SBATCH --ntasks-per-node=1 # Number of tasks per node (set to 1 for Abaqus threaded)
#SBATCH --cpus-per-task=16 # Number of CPUs (threads) per task
And make sure to set mp_mode to threads in this command:
abaqus interactive analysis job=<inp-file> user=<user-script> cpus=${SLURM_CPUS_PER_TASK} mp_mode=threads double=both memory="3 gb" scratch=${SLURM_SUBMIT_DIR}
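Putting these pieces together, a complete single-node submission script for threads mode might look like the sketch below. The allocation, queue, walltime, and memory values are placeholders carried over from the MPI example further down; adjust them to your own job.
#!/bin/bash
#SBATCH -A pXXXXX # Allocation
#SBATCH -p short # Queue
#SBATCH -t 04:00:00 # Walltime/duration of the job
#SBATCH --nodes=1 # Threads mode is single-node
#SBATCH --ntasks-per-node=1 # One task; Abaqus spawns the threads
#SBATCH --cpus-per-task=16 # Number of CPUs (threads) for the task
#SBATCH --mem-per-cpu=3G # Memory per cpu
#SBATCH --job-name=example_job # Name of job
# unload any modules that carried over from your command line session
module purge
# load the Abaqus module
module load abaqus/2022
# Run Abaqus on this node only, using every CPU assigned to the task
abaqus interactive analysis job=<inp-file> user=<user-script> cpus=${SLURM_CPUS_PER_TASK} mp_mode=threads double=both memory="3 gb" scratch=${SLURM_SUBMIT_DIR}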
Abaqus multiprocessing with argument mp_mode=mpi#
The option mpi can be used when the Abaqus job is running on one or multiple nodes. You would use the mpi option when you want to run multiple small or medium memory size tasks at the same time. Using the mpi option will allow the scheduler to find pockets of compute on the cluster to distribute these tasks to. In many cases, this allows your workflow to be completed faster than if the job needed to wait for all these resources to be available on just one node.
When running Abaqus with mp_mode=mpi, you must use the submission script provided below to correctly set up Abaqus to run over multiple nodes. The script contains a block of code that creates an Abaqus environment file (abaqus_v6.env), which tells Abaqus which compute nodes, and how many cores on each of them, are available for the simulation.
Please find an example script below.
#!/bin/bash
#SBATCH -A pXXXXX # Allocation
#SBATCH -p short # Queue
#SBATCH -t 04:00:00 # Walltime/duration of the job
#SBATCH --ntasks=20 # Number of CPUs (no need to specify how many nodes as it is MPI)
#SBATCH --mem-per-cpu=3G # Memory per cpu
#SBATCH --job-name=example_job # Name of job
# unload any modules that carried over from your command line session
module purge
# load modules you need to use
# MUST BE Abaqus 2022 OR NEWER
# module load abaqus/2025
module load abaqus/2022
### Create an Abaqus environment file for the current job; you can set/add your own options (Python syntax)
env_file=abaqus_v6.env
cat << EOF > ${env_file}
verbose = 1
ask_delete = OFF
mp_file_system = (DETECT, DETECT)
EOF
# SLURM_JOB_CPUS_PER_NODE is a compressed list such as "20", "12,8", or "10(x2)".
# Expand it into one CPU count per node and pair each count with its node name,
# so that Abaqus knows how many cores it may use on each host.
if [[ $SLURM_JOB_CPUS_PER_NODE =~ "," ]]; then
    cpus_per_node_list=()
    node_list=($(scontrol show hostname ${SLURM_NODELIST} | sort -u))
    IFS=',' read -r -a cpus_per_node <<< "$SLURM_JOB_CPUS_PER_NODE"
    for i in "${!cpus_per_node[@]}"; do
        if [[ ${cpus_per_node[i]} =~ "x" ]]; then
            # Expand Slurm's "N(xM)" notation (N CPUs on each of M nodes)
            # into M separate entries of N
            stripped=$(echo "${cpus_per_node[i]}" | sed "s/(//;s/)//;s/x/,/")
            IFS=',' read -r -a repeated_cpus_per_node <<< "$stripped"
            for (( ii=0; ii<${repeated_cpus_per_node[1]}; ii++ )); do
                cpus_per_node_list+=(${repeated_cpus_per_node[0]})
            done
        else
            cpus_per_node_list+=(${cpus_per_node[i]})
        fi
    done
    # Build the Python-syntax host list, e.g. [['node1', 12],['node2', 8]]
    mp_host_list="["
    for i in "${!node_list[@]}"; do
        mp_host_list="${mp_host_list}['${node_list[i]}', ${cpus_per_node_list[i]}],"
    done
else
    # Every node in the allocation has the same CPU count
    node_list=$(scontrol show hostname ${SLURM_NODELIST} | sort -u)
    mp_host_list="["
    for host in ${node_list}; do
        mp_host_list="${mp_host_list}['$host', ${SLURM_CPUS_ON_NODE}],"
    done
fi
# Replace the trailing comma with a closing bracket and append to the env file
mp_host_list=$(echo ${mp_host_list} | sed -e "s/,$/]/")
echo "mp_host_list=${mp_host_list}" >> ${env_file}
# Tell Intel MPI to launch its processes through Slurm
export I_MPI_HYDRA_BOOTSTRAP=slurm
export I_MPI_HYDRA_TOPOLIB=ipl
# Run the Abaqus analysis across the hosts listed in abaqus_v6.env
abaqus interactive analysis job=<inp-file> user=<user-script> cpus=${SLURM_NPROCS} mp_mode=mpi double=both memory="3 gb" scratch=${SLURM_SUBMIT_DIR}
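For reference, on a hypothetical two-node allocation with 10 cores per node (the node names below are made up for illustration), the generated abaqus_v6.env would look something like this:
verbose = 1
ask_delete = OFF
mp_file_system = (DETECT, DETECT)
mp_host_list=[['qnode0001', 10],['qnode0002', 10]]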