Quest Specifications

This page contains technical information about the Quest cluster.

Northwestern regularly invests in Quest to refresh and expand computing resources to meet the needs of the research community. This includes hundreds of nodes that are available free of charge through the General Access proposal process (see below). For more information on purchasing Buy-In nodes, please see Purchasing Resources on Quest.  

Quest Architecture

Quest has an IBM GPFS parallel filesystem with ESS storage totaling approximately 8.0 petabytes. Users have access to a small (80 GB) home directory, as well as a project directory optimized for high-performance computing operations.
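
As a rough illustration of checking your usage of these spaces from a login node (the project path below, /projects/p12345, is a placeholder for your actual allocation directory):

    # Show how much of the 80 GB home directory you are using
    du -sh $HOME

    # Show usage of a project directory (replace p12345 with your allocation ID)
    du -sh /projects/p12345
    df -h /projects/p12345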

Quest comprises four login nodes that users connect to directly and 1,136 compute nodes with a total of 62,960 cores used for scheduled jobs. These nodes include 68 GPU nodes and 20 high-memory nodes. Both the login and compute nodes run the Red Hat Enterprise Linux 7.9 operating system.
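
Connections to the login nodes are made over SSH with a Northwestern NetID; the hostname below is a sketch of the usual form and should be confirmed against the Quest user guide:

    # Connect to a Quest login node (replace <netid> with your NetID)
    ssh <netid>@quest.northwestern.edu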

Regular Compute Nodes

Quest 9 - Interconnect: Infiniband EDR
  • Number of Nodes: 96 nodes with 3840 cores total, 40 cores per node 
  • Processor: Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz 
  • Memory: Per node (Per Core) 192 GB (4.8 GB), Type: DDR4 2666 MHz
Quest 10 - Interconnect: Infiniband EDR
  • Number of Nodes: 532 nodes with 27664 cores total, 52 cores per node 
  • Processor: Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz 
  • Memory: Per node (Per Core) 192 GB (3.7 GB), Type: DDR4 2666 MHz   
Quest 11 - Interconnect: Infiniband HDR-compatible
  • Number of Nodes: 208 nodes with 13312 cores total, 64 cores per node
  • Processor: Intel(R) Xeon(R) Gold 6338 CPU @ 2.0GHz
  • Memory: Per node (Per Core) 256 GB (4 GB), Type: DDR4 2666 MHz
Quest 12 - Interconnect: Infiniband HDR
  • Number of Nodes: 212 nodes with 13568 cores total, 64 cores per node
  • Processor: Intel(R) Xeon(R) Gold 6338 CPU @ 2.0GHz
  • Memory: Per node (Per Core) 256 GB (4 GB), Type: DDR4 2666 MHz
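
Quest schedules jobs with Slurm, so the node, core, and memory figures above can be checked directly from a login node. The commands below are a minimal sketch using standard Slurm tools; the node name is a placeholder:

    # List partitions with node counts, cores per node, memory per node, and time limits
    sinfo -o "%P %D %c %m %l"

    # Show the detailed hardware configuration of one node
    # (substitute a real node name, e.g. from 'sinfo -N')
    scontrol show node <nodename>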

High-Memory Nodes

Quest has a total of 20 high-memory nodes with 0.5–2 TB of memory per node for scheduled jobs. One node with 1.5 TB of memory supports General Access; the remaining nodes support Buy-In allocations. For more information on how to run on a high-memory node, see Quest Partitions/Queues.

In addition, 3 nodes with 1.5 TB memory support Quest Analytics services.
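
As a hedged sketch of requesting a high-memory node under Slurm (the allocation ID and partition name are placeholders; see Quest Partitions/Queues for the actual partition to use):

    #!/bin/bash
    #SBATCH --account=p12345                 # placeholder allocation ID
    #SBATCH --partition=<highmem-partition>  # see Quest Partitions/Queues
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --mem=500G                       # ask for 500 GB on a high-memory node
    #SBATCH --time=04:00:00

    ./my_memory_intensive_program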

GPU Nodes

Quest has a total of 68 GPU nodes across General Access and Buy-In allocations. The GPU nodes currently consist of NVIDIA V100, A100, and H100 GPUs. For more information on how to use the GPUs on Quest, see GPUs on Quest.
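
For illustration, a GPU job request in Slurm typically uses the --gres directive; the allocation ID, partition name, and GPU type string below are assumptions, so check GPUs on Quest for the exact values:

    #!/bin/bash
    #SBATCH --account=p12345             # placeholder allocation ID
    #SBATCH --partition=<gpu-partition>  # see GPUs on Quest
    #SBATCH --gres=gpu:a100:1            # one A100; the type string may differ on Quest
    #SBATCH --ntasks=1
    #SBATCH --mem=32G
    #SBATCH --time=02:00:00

    module load cuda                     # load a CUDA toolkit module if your code needs one
    ./my_gpu_program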

Job Limits

Researchers using Quest may submit up to 5,000 jobs at one time. General Access jobs with a wall time of four hours or less can be run on most of Quest’s compute nodes and will experience the shortest wait times.
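
A minimal Slurm batch script for such a short job might look like the sketch below (the allocation ID is a placeholder); keeping --time at four hours or less allows the scheduler to place the job on the widest set of nodes:

    #!/bin/bash
    #SBATCH --account=p12345     # placeholder allocation ID
    #SBATCH --partition=short    # short jobs (<= 4 hours) can use most compute nodes
    #SBATCH --ntasks=1
    #SBATCH --mem=4G
    #SBATCH --time=04:00:00      # wall time of four hours or less

    ./my_program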

General Access Resources and Architectures

A significant amount of computing capacity is available to everyone through the General Access proposal process. Currently, there are 381 regular nodes, 34 GPU nodes, and 4 high-memory nodes available exclusively for General Access use. Furthermore, General Access jobs can run on the majority of dedicated (Full Access) Quest nodes for up to 4 hours. Researchers using General Access allocations can request the appropriate partitions/queues depending on their computational needs. For instance, the short/normal/long partitions can be used to access the regular nodes, and the "short" queue has access to the majority of Quest nodes and all regular node architectures.
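
For instance, the partition can be chosen at submission time; the sketch below uses standard Slurm commands, with a placeholder allocation ID and example wall times (check the actual per-partition limits in Quest Partitions/Queues):

    # Submit the same script to different General Access partitions
    sbatch --account=p12345 --partition=short  --time=04:00:00 job.sh
    sbatch --account=p12345 --partition=normal --time=24:00:00 job.sh
    sbatch --account=p12345 --partition=long   --time=96:00:00 job.sh

    # Check the status of your queued and running jobs
    squeue -u $USER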

Genomics Compute Cluster

A large number of nodes and a substantial amount of storage are available for genomics research. For more details, please see the Genomics Compute Cluster on Quest.