
Quest Technical Specifications

This page contains technical information about the Quest cluster. For an overview of the system, see Quest Overview.

Quest Architecture

Quest has an IBM GPFS parallel filesystem with ESS storage totaling approximately 8.0 petabytes. Users have access to a small (80 GB) home directory as well as a project directory with storage optimized for I/O operations.

Quest comprises four login nodes, to which users connect directly, and 820 compute nodes with a total of 26,348 cores used for scheduled jobs. These compute nodes include 56 GPU nodes and 18 high-memory nodes. Both the login and compute nodes run the Red Hat Enterprise Linux 7.5 operating system.

A significant amount of computing capacity is available to everyone through the General Access proposal process. Currently, 270 regular nodes, 12 GPU nodes, and 4 high-memory nodes are reserved exclusively for General Access use. Furthermore, General Access jobs can run on the majority of dedicated (Full Access) Quest nodes for up to 4 hours. Northwestern regularly invests in Quest to refresh and expand computing resources to meet the needs of the research community.

Quest 5* - Interconnect: InfiniBand FDR

Quest 6* - Interconnect: InfiniBand FDR

Quest 7 - Interconnect: InfiniBand FDR

Quest 8 - Interconnect: InfiniBand EDR

Quest 9 - Interconnect: InfiniBand EDR

Quest 10 - Interconnect: InfiniBand EDR

High-Memory Nodes

Quest has a total of 18 high-memory nodes with 0.5–1.5 TB of memory per node. Of this pool, 3 nodes with 0.5 TB of memory each support the Quest Analytics services, 3 nodes with 0.5 TB each support General Access, and the remaining nodes support buy-in allocations.

GPU Nodes

Quest has a total of 56 GPU nodes across general access and buy-in allocations. The GPU nodes currently consist of Tesla K40, K80, P100, V100, and A100 GPUs.
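As a sketch of how one of these GPU nodes might be requested, here is a minimal batch-script fragment. It assumes a Slurm scheduler; the partition name (gengpu), module name, and allocation placeholder are illustrative and not confirmed by this page, so check local documentation before using them.

```shell
#!/bin/bash
#SBATCH --account=<allocation>   # your General Access or buy-in allocation ID (placeholder)
#SBATCH --partition=gengpu       # illustrative GPU partition name; verify with `sinfo`
#SBATCH --gres=gpu:a100:1        # request one A100 GPU; K40/K80/P100/V100 types also exist on Quest
#SBATCH --ntasks=1
#SBATCH --time=04:00:00          # four hours or less keeps the job eligible for the widest node pool
#SBATCH --mem=16G

module load cuda                 # module name is illustrative; check `module avail`
nvidia-smi                       # confirm the allocated GPU is visible to the job
```

The `--gres=gpu:<type>:<count>` syntax is standard Slurm; the available type strings depend on how the site has configured its GPU nodes.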

Job Limits

Researchers using Quest may submit up to 5,000 jobs at one time. General access jobs with a wall time of four hours or less can be run on most of Quest’s compute nodes and will experience the shortest wait times. Longer general access jobs are restricted to approximately 20 percent of Quest’s compute nodes and will experience longer wait times.

General Access Resources and Architectures

Researchers using General Access allocations can request the partition (queue) that fits their computational needs. For instance, the short, normal, and long partitions provide access to the regular nodes. The "short" queue has access to the vast majority of Quest nodes and all regular node architectures. The "normal" and "long" queues have access to a smaller pool of nodes under the Quest 5-8 and Quest 10 architectures.
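A minimal batch script illustrating the queue choice described above, assuming a Slurm scheduler (the allocation ID and program name are placeholders, not details confirmed by this page):

```shell
#!/bin/bash
#SBATCH --account=<allocation>   # General Access allocation ID (placeholder)
#SBATCH --partition=short        # "short": wall time of 4 hours or less, widest pool of nodes
#SBATCH --time=03:59:00          # staying under 4 hours keeps the job in the short queue
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --mem=8G

# Jobs needing more than 4 hours would use --partition=normal or
# --partition=long instead, trading a smaller node pool (and longer
# queue waits) for a longer wall-time limit.
./my_program                     # placeholder for the actual workload
```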

* An additional 304 Quest 10 nodes will replace the Quest 5 and Quest 6 nodes during the June 14 – August 31 timeframe. Furthermore, 7 high-memory nodes, each with 1.5 TB of memory, and 16 GPU nodes, each with 2 x A100 GPUs, will replace the existing general access high-memory and GPU nodes (K40 and K80 GPUs), respectively.



Last Updated: 6 July 2021
