GPU-Based Computational Instrument

Named after Philip Marlowe, the film noir detective, Marlowe is Stanford's first large-scale GPU computational instrument, designed to give faculty the infrastructure to train foundation models, run large-scale simulations, and pursue computational work at scales previously available only to industry.

A team of Research Data Scientists partners directly with faculty to optimize code, scale training across multiple nodes, and maximize the scientific return from every GPU-hour allocated.

  • Partner with faculty to design and execute GPU-accelerated research
  • Optimize training pipelines for multi-node scaling
  • Integrate open science practices into computational research
  • Provide technical consulting on model architecture and distributed training
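Multi-node scaling of a training pipeline on a SLURM-managed cluster like this one is typically launched with a batch script that starts one `torchrun` rendezvous per node. The following is a minimal sketch under that assumption; the job name, script name (`train.py`), and port are hypothetical, and Marlowe's actual partitions, modules, and launch conventions may differ.

```shell
#!/bin/bash
#SBATCH --job-name=train          # hypothetical job name
#SBATCH --nodes=2                 # two of the cluster's nodes
#SBATCH --gpus-per-node=8         # all 8 GPUs on each node
#SBATCH --ntasks-per-node=1       # one torchrun launcher per node

# Rendezvous on the first allocated node (port is an assumption).
head_node=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1)

srun torchrun \
  --nnodes="$SLURM_NNODES" \
  --nproc_per_node=8 \
  --rdzv_backend=c10d \
  --rdzv_endpoint="${head_node}:29500" \
  train.py
```

With `--rdzv_backend=c10d`, all nodes coordinate through the head node's endpoint, so the same script works unchanged as `--nodes` grows.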

Read the Stanford Report feature →

Technical Specifications
GPU: 248x NVIDIA H100 80GB SXM5
Nodes: 31 (8 GPUs per node)
CPU: 2x Intel Xeon Platinum 8480+ (112 cores/node)
Memory: 2 TB RAM per node
Interconnect: NVIDIA InfiniBand NDR (400 Gb/s)
Storage: 2.5 PB parallel filesystem
GPU Interconnect: NVLink + NVSwitch (intra-node)
Data Classification: Low & Moderate Risk
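The per-node figures above imply the cluster's aggregate capacity; a short sketch of the arithmetic (using only numbers from the table):

```python
# Aggregate capacity implied by the per-node specifications.
NODES = 31
GPUS_PER_NODE = 8
CORES_PER_NODE = 112   # 2x Xeon Platinum 8480+ per node
RAM_TB_PER_NODE = 2
HBM_GB_PER_GPU = 80    # H100 80GB SXM5

total_gpus = NODES * GPUS_PER_NODE               # 248, matching the GPU line
total_cores = NODES * CORES_PER_NODE             # 3,472 CPU cores
total_ram_tb = NODES * RAM_TB_PER_NODE           # 62 TB of system RAM
total_hbm_tb = total_gpus * HBM_GB_PER_GPU / 1000  # 19.84 TB of GPU memory

print(total_gpus, total_cores, total_ram_tb, total_hbm_tb)
```

So alongside the 248 GPUs, the instrument aggregates 3,472 CPU cores, 62 TB of system RAM, and nearly 20 TB of GPU memory.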

Inside Marlowe

Stanford Data Science Presentation