Quest Storage and Compute Resources#
Northwestern regularly invests in Quest to refresh and expand computing resources to meet the needs of the research community. This means that there are multiple types of nodes available on Quest, as older nodes are routinely replaced by newer ones.
Storage#
Quest has an IBM GPFS parallel file system with ESS storage totaling approximately 12 petabytes.
Quest has three storage locations:
- Home: /home/<netid> - 80 GB
  - Secured to your ID only
  - Files are backed up
- Projects: /projects/<account-name> - 1 or 2 TB
  - Available for all members of the allocation
  - Files are not backed up
- Scratch: /scratch/<netid> - 5 TB (flash SSD)
  - Secured to your ID, but can be shared with others if needed
  - Good for intermediate files to help speed and performance
  - Files are deleted automatically after a period of time
Additional information can be found on the Quest file system page.
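As a rough illustration of a common pattern, the sketch below writes intermediate files to scratch for speed and copies only the final results back to a project directory. The directory layout, file names, and job structure are placeholders, not prescribed values.

```bash
#!/bin/bash
# Illustrative sketch only: replace <netid> and <account-name> with your
# own values. Intended to run inside a batch job, where $SLURM_JOB_ID is set.
WORKDIR=/scratch/<netid>/myjob_$SLURM_JOB_ID   # fast SSD space for intermediate files
OUTDIR=/projects/<account-name>/results        # project storage shared with your allocation

mkdir -p "$WORKDIR" "$OUTDIR"
cd "$WORKDIR"

# ... run the analysis here, writing intermediate files to $WORKDIR ...

cp results.out "$OUTDIR/"   # keep only the final output in /projects
rm -rf "$WORKDIR"           # clean up scratch when finished
```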
Compute Resources#
Quest comprises seven login nodes that users connect to directly and 1,231 compute nodes with a total of 80,398 cores used for scheduled jobs. These nodes include 91 GPU nodes and 22 high-memory nodes. Both the login and compute nodes are running the Red Hat Enterprise Linux 8.10 operating system.
Tip
Login nodes are where most users land when connecting to Quest via applications like ssh or FastX. They are used to do things like launch jobs, but shouldn’t be used for heavy computing tasks.
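As a concrete example, a typical session connects to a login node and hands work off to the scheduler rather than running it on the login node itself. The hostname below is the commonly documented Quest address and the script name is a placeholder; check the Quest login documentation for the exact details.

```bash
# Connect to a Quest login node (confirm the exact hostname in the
# Quest login documentation).
ssh <netid>@quest.northwestern.edu

# From the login node, submit work to the scheduler instead of running
# it in place; job_script.sh is a placeholder batch script.
sbatch job_script.sh

# Check the status of your queued and running jobs.
squeue -u <netid>
```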
Regular Compute Nodes#
A portion of the Quest compute nodes is retired and replaced regularly with new nodes built on current technology. This allows Quest to grow in response to researcher needs, but it also means that Quest nodes have different architectures. Quest currently includes the following architectures:
Quest 13 - Intel Emerald Rapids
- Number of Nodes: 140 nodes with 17920 cores total, 128 cores per node 
- Processor: Intel(R) Xeon(R) Platinum 8592+ CPU @ 1.9GHz 
- Memory: Per node (Per Core) 512 GB (4 GB), Type: DDR5 5600 MHz 
- Interconnect: Infiniband HDR 
- Expected retirement: Spring 2030 
Quest 12 - Intel Ice Lake
- Number of Nodes: 214 nodes with 13696 cores total, 64 cores per node 
- Processor: Intel(R) Xeon(R) Gold 6338 CPU @ 2.0GHz 
- Memory: Per node (Per Core) 256 GB (4 GB), Type: DDR4 3200 MHz 
- Interconnect: Infiniband HDR 
- Expected retirement: Fall 2028 
Quest 11 - Intel Ice Lake
- Number of Nodes: 209 nodes with 13376 cores total, 64 cores per node 
- Processor: Intel(R) Xeon(R) Gold 6338 CPU @ 2.0GHz 
- Memory: Per node (Per Core) 256 GB (4 GB), Type: DDR4 3200 MHz 
- Interconnect: Infiniband HDR Compatible 
- Expected retirement: Fall 2027 
Quest 10 - Intel Cascade Lake
- Number of Nodes: 555 nodes with 28860 cores total, 52 cores per node 
- Processor: Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz 
- Memory: Per node (Per Core) 192 GB (3.7 GB), Type: DDR4 2933MHz 
- Interconnect: Infiniband EDR 
- Expected retirement: Fall 2026 
GPU Nodes#
Quest has a total of 308 GPU cards on 100 GPU nodes across General Access and Priority Access allocations. For more information on how to run on a GPU, see GPUs on Quest.
GPU resources available to general access allocations:
Quest 13 - Intel Emerald Rapids with NVIDIA H100 GPUs
- Number of Nodes: 24 nodes with 1536 cores total, 64 cores per node 
- Processor: Intel(R) Xeon(R) Platinum 8562Y+ CPU @ 2.8GHz (Emerald Rapids) 
- Memory: Per node (Per Core) 1TB (16 GB), Type: DDR5 5600 MHz 
- GPU cards per node: 4 x 80GB NVIDIA H100 (Connected with SXM5 and HBM3) 
- Interconnect: Infiniband NDR 
- Expected retirement: Fall 2029 
Quest 12 - Intel Ice Lake with NVIDIA A100 GPUs
- Number of Nodes: 18 nodes with 1152 cores total, 64 cores per node 
- Processor: Intel(R) Xeon(R) Gold 6338 CPU @ 2.0GHz (Ice Lake) 
- Memory: Per node (Per Core) 512 GB (8 GB), Type: DDR4 3200 MHz 
- GPU cards per node: 4 x 80GB NVIDIA A100 (Connected with SXM4 and HBM2) 
- Interconnect: Infiniband HDR 
- Expected retirement: Fall 2028 
Quest 10 - Intel Cascade Lake with NVIDIA A100 GPUs
- Number of Nodes: 16 nodes with 832 cores total, 52 cores per node 
- Processor: Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz (Cascade Lake) 
- Memory: Per node (Per Core) 192 GB (3.7 GB), Type: DDR4 2933 MHz 
- GPU cards per node: 2 x 40GB NVIDIA A100 (Connected with PCIe) 
- Expected retirement: Fall 2026 
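As a minimal sketch of what a GPU batch request looks like (the GPUs on Quest page has the authoritative details), the script below asks the scheduler for a single GPU on one of the node types above. The allocation name, partition, GPU type, and resource values are placeholders to be adjusted to your own allocation.

```bash
#!/bin/bash
#SBATCH --account=<account-name>   # your allocation (placeholder)
#SBATCH --partition=gengpu         # GPU partition name shown as an example; confirm in GPUs on Quest
#SBATCH --gres=gpu:a100:1          # one A100 card; type and count are illustrative
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=32G
#SBATCH --time=04:00:00

module load cuda        # load a CUDA toolkit module (module name may differ)
nvidia-smi              # print the GPU assigned to this job
# ./my_gpu_program      # placeholder for the actual GPU application
```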
High-Memory Nodes#
Quest has a total of 22 high-memory nodes with 0.5 – 2 TB of memory per node for scheduled jobs. Four of these nodes, each with 1.5 TB of memory, support General Access; the remaining nodes support Priority Access allocations. For more information on how to run on a high-memory node, see Quest Partitions/Queues.
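As a rough illustration, the batch header below requests more memory than a regular node can provide, so the scheduler must place the job on a high-memory node. The partition name and resource values are placeholders; see Quest Partitions/Queues for the actual partition names and limits.

```bash
#!/bin/bash
#SBATCH --account=<account-name>   # your allocation (placeholder)
#SBATCH --partition=genhimem       # high-memory partition name shown as an example; confirm in Quest Partitions/Queues
#SBATCH --ntasks=1
#SBATCH --mem=1000G                # a request this large can only be satisfied by a high-memory node
#SBATCH --time=12:00:00

# ./my_memory_intensive_program    # placeholder for the actual workload
```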
Batch Job Limits#
A significant amount of computing is available to everyone through the General Access proposal process. Currently, 535 regular nodes, 58 GPU nodes, and 4 high-memory nodes are available exclusively for General Access use. In addition, General Access jobs can run on the majority of Priority Access Quest nodes for up to 4 hours. Researchers using General Access allocations can request the appropriate partitions/queues for their computational needs. For instance, the short/normal/long partitions provide access to the regular nodes. The short queue has access to the majority of Quest nodes and all regular node architectures.
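To see how these partitions map to walltime limits and node counts before submitting, something like the following can be used; the partition names follow the short/normal/long naming above, and the time request is illustrative.

```bash
# List the walltime limit and node count for the General Access partitions.
sinfo --partition=short,normal,long --format="%P %l %D"

# In a batch script, pick the partition whose limit fits the job, e.g.:
#   #SBATCH --partition=short
#   #SBATCH --time=03:00:00
```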
