Quest Specifications

This page contains technical information about the Quest cluster.

Northwestern regularly invests in Quest to refresh and expand computing resources to meet the needs of the research community. This includes hundreds of nodes that are available free of charge through the General Access proposal process (see below). For more information on purchasing Buy-In nodes, please see Purchasing Resources on Quest.  

Quest Architecture

Quest has an IBM GPFS parallel filesystem with ESS storage totaling approximately 8.0 petabytes. Users have access to a small (80 GB) home directory, as well as a project directory optimized for high-performance computing operations.
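
As a rough illustration of the two locations, standard shell commands can report current usage from a login node. The project path below is a placeholder, not an authoritative layout; substitute your actual allocation ID.

    # Usage of the 80 GB home directory
    du -sh "$HOME"

    # Usage of a project directory; replace <allocationID> with your
    # allocation's ID (the path shown is illustrative, not authoritative)
    du -sh /projects/<allocationID>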

Quest comprises four login nodes, which users connect to directly, and 1,136 compute nodes with a total of 62,960 cores for scheduled jobs. These include 66 GPU nodes and 25 high-memory nodes. Both the login and compute nodes run the Red Hat Enterprise Linux 7.9 operating system.
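
For example, a session typically begins by connecting to a login node over SSH; compute nodes are reached only through the job scheduler. The hostname below is an assumption based on common usage, and <netid> is a placeholder for your Northwestern NetID.

    # Connect to one of the four login nodes (hostname assumed)
    ssh <netid>@quest.northwestern.edu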

Regular Compute Nodes

Quest 9 - Interconnect: InfiniBand EDR (Expected Retirement in Winter 2024)
  • Number of Nodes: 96 nodes with 3,840 cores total, 40 cores per node
  • Processor: Intel(R) Xeon(R) Gold 6230 CPU @ 2.10 GHz
  • Memory: 192 GB per node (4.8 GB per core), DDR4 2933 MHz
Quest 10 - Interconnect: InfiniBand EDR (Expected Retirement in Fall 2026)
  • Number of Nodes: 532 nodes with 27,664 cores total, 52 cores per node
  • Processor: Intel(R) Xeon(R) Gold 6230 CPU @ 2.10 GHz
  • Memory: 192 GB per node (3.7 GB per core), DDR4 2933 MHz
Quest 11 - Interconnect: InfiniBand HDR-compatible (Expected Retirement in Fall 2027)
  • Number of Nodes: 208 nodes with 13,312 cores total, 64 cores per node
  • Processor: Intel(R) Xeon(R) Gold 6338 CPU @ 2.0 GHz
  • Memory: 256 GB per node (4 GB per core), DDR4 3200 MHz
Quest 12 - Interconnect: InfiniBand HDR (Expected Retirement in Fall 2028)
  • Number of Nodes: 212 nodes with 13,568 cores total, 64 cores per node
  • Processor: Intel(R) Xeon(R) Gold 6338 CPU @ 2.0 GHz
  • Memory: 256 GB per node (4 GB per core), DDR4 3200 MHz

Quest 13 - Interconnect: InfiniBand HDR (Expected Availability in Spring 2025)
  • Number of Nodes: 144 nodes with 18,432 cores total, 128 cores per node
  • Processor: Intel(R) Xeon(R) Platinum 8592+ CPU @ 1.9 GHz (Emerald Rapids)
  • Memory: 512 GB per node (4 GB per core), DDR5 5600 MHz
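
If the cluster defines Slurm node features matching these generations (an assumption; actual feature names are set by the administrators), a constraint can pin a job to one architecture. A minimal batch-script sketch, with placeholder names throughout:

    #!/bin/bash
    #SBATCH --account=<allocationID>    # placeholder; use your allocation ID
    #SBATCH --partition=normal          # one of the partitions named on this page
    #SBATCH --constraint=quest12        # hypothetical feature tag for Quest 12 nodes
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=64        # a full 64-core Quest 12 node
    #SBATCH --time=08:00:00

    ./my_application                    # placeholder executable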

GPU Nodes

Quest has a total of 171 GPU cards on 66 GPU nodes across General Access and Buy-In allocations. We provide details on the General Access GPU nodes below, followed by a sketch of a GPU job script. For more information on how to use the GPUs on Quest, see GPUs on Quest.

Quest 10 - Interconnect: InfiniBand EDR (Expected Retirement in Fall 2026)

  • Number of Nodes: 16 nodes with 832 cores total, 52 cores per node
  • Processor: Intel(R) Xeon(R) Gold 6230 CPU @ 2.10 GHz
  • Memory: 192 GB per node (3.7 GB per core), DDR4 2933 MHz
  • GPU Cards: 2 x 40 GB NVIDIA A100 (PCIe)

Quest 12 - Interconnect: InfiniBand HDR (Expected Retirement in Fall 2028)

  • Number of Nodes: 18 nodes with 1,152 cores total, 64 cores per node
  • Processor: Intel(R) Xeon(R) Gold 6338 CPU @ 2.0 GHz
  • Memory: 512 GB per node (8 GB per core), DDR4 3200 MHz
  • GPU Cards: 4 x 80 GB NVIDIA A100 (SXM4 form factor, HBM2 memory)

Quest 13 - Interconnect: InfiniBand HDR (Expected Availability in Spring 2025)

  • Number of Nodes: 24 nodes with 1,536 cores total, 64 cores per node
  • Processor: Intel(R) Xeon(R) Platinum 8562Y+ CPU @ 2.8 GHz (Emerald Rapids)
  • Memory: 1 TB per node (16 GB per core), DDR5 5600 MHz
  • GPU Cards: 4 x 80 GB NVIDIA H100 (SXM5 form factor, HBM3 memory)
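
As a minimal sketch of requesting one of these cards, assuming a General Access GPU partition named gengpu and an a100 gres type (both are placeholders; the GPUs on Quest page is authoritative):

    #!/bin/bash
    #SBATCH --account=<allocationID>    # placeholder; use your allocation ID
    #SBATCH --partition=gengpu          # assumed General Access GPU partition name
    #SBATCH --gres=gpu:a100:1           # one A100; the type tag is an assumption
    #SBATCH --ntasks=1
    #SBATCH --mem=40G
    #SBATCH --time=04:00:00

    nvidia-smi                          # confirm the allocated GPU is visible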

High-Memory Nodes

Quest has a total of 25 high-memory nodes offering 0.5 – 2 TB of memory per node for scheduled jobs. One node with 1.5 TB of memory supports General Access; the remaining nodes support Buy-In allocations. For more information on how to run on a high-memory node, see Quest Partitions/Queues; a sketch of a high-memory job script appears below.

In addition, 3 nodes with 1.5 TB memory support Quest Analytics services.
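
As a minimal sketch, assuming that requesting more memory than a regular node provides is what steers the scheduler toward a high-memory node (the partition and allocation names are placeholders; Quest Partitions/Queues is authoritative):

    #!/bin/bash
    #SBATCH --account=<allocationID>    # placeholder; use your allocation ID
    #SBATCH --partition=normal          # see Quest Partitions/Queues for the right queue
    #SBATCH --ntasks=1
    #SBATCH --mem=500G                  # ~0.5 TB exceeds regular node memory, so
                                        # only a high-memory node can satisfy it
    #SBATCH --time=12:00:00

    ./memory_intensive_application      # placeholder executable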

Job Limits

Researchers using Quest may submit up to 5,000 jobs at a time. General Access jobs with a wall time of four hours or less can run on most of Quest's compute nodes and experience the shortest wait times.
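
A Slurm job array is one way to submit many related tasks while staying within these limits; in this sketch the allocation ID and per-task command are placeholders:

    #!/bin/bash
    #SBATCH --account=<allocationID>    # placeholder; use your allocation ID
    #SBATCH --partition=short           # short jobs reach the most nodes
    #SBATCH --time=04:00:00             # four hours or less: shortest queue waits
    #SBATCH --array=1-1000              # 1,000 tasks, well under the 5,000-job cap
    #SBATCH --ntasks=1

    ./process_case "$SLURM_ARRAY_TASK_ID"   # placeholder per-task command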

General Access Resources and Architectures

A significant amount of computing capacity is available to everyone through the General Access proposal process. Currently, there are 541 regular nodes, 34 GPU nodes, and 1 high-memory node available exclusively for General Access use. Furthermore, General Access jobs can run on the majority of dedicated (Full Access) Quest nodes for up to four hours. Researchers using General Access allocations can request appropriate partitions/queues depending on their computational needs; for instance, the short, normal, and long partitions provide access to the regular nodes. The "short" queue has access to the majority of Quest nodes and all regular node architectures.
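
To see which partitions an allocation can actually reach, along with their time limits, a standard Slurm query can be run from a login node (exact partition names depend on the allocation):

    # Partition name, time limit, node count, and CPUs per node
    sinfo --format="%P %l %D %c"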

Genomics Compute Cluster

A large number of nodes and a substantial amount of storage are available for genomics research. For more details, please see the Genomics Compute Cluster on Quest.