Stanford University
Stanford Data Science
Marlowe
Documentation
  • Getting Started
    • Get Access
    • Connecting
    • Filesystems
    • Globus
  • SLURM and Open OnDemand
    • SLURM on Marlowe
    • Open OnDemand
  • Software
    • Overview
    • Apptainer
    • Conda
    • Conan
    • cuDNN
    • CUDA Toolkit
    • Java
    • NVIDIA HPC SDK
    • NGC NIMs
  • Support
    • Help & Support
    • FAQ
    • Tech Specs
    • Usage Violations

Marlowe Documentation

Welcome to the Marlowe technical documentation. Marlowe is a High Performance Computing (HPC) cluster managed by Stanford Research Computing, composed of an NVIDIA DGX H100 SuperPOD, 2.5 PB of DDN ExaScaler Lustre storage, and 3 PB of DDN IntelliFlash storage.

With 11.1 PFlops of compute power, Marlowe would place 87th in the May 2024 TOP500 rankings.

For in-depth tech specs, see the Tech Specs page.

Getting Started

Apply for access, connect via SSH, understand filesystems, and transfer data with Globus.
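Once access is approved, SSH is the usual entry point. A `~/.ssh/config` entry saves retyping; the hostname and username below are placeholders, not Marlowe's actual login address — see the Connecting page for that:

```
# ~/.ssh/config -- HostName and User are placeholders; copy the real
# login address from the Connecting page.
Host marlowe
    HostName login.marlowe.stanford.edu
    User <SUNetID>
```

With this in place, `ssh marlowe` opens a session, and `scp`/`sftp` can target the same alias for small transfers (Globus is the better choice for large datasets).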

SLURM & Open OnDemand

Job submission, partitions, allocation policies, and the Open OnDemand web interface.
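A minimal batch job illustrates the workflow. This is a sketch: the partition and account names below are placeholders, not Marlowe's real ones — check `sinfo` and your allocation before submitting:

```shell
# Write a minimal GPU job script. Partition and account are placeholders;
# the SLURM on Marlowe page documents the real values.
cat > hello_gpu.sbatch <<'EOF'
#!/bin/bash
#SBATCH --job-name=hello-gpu
#SBATCH --partition=<partition>   # list real partitions with `sinfo`
#SBATCH --account=<project>       # your allocation account
#SBATCH --gpus=1                  # request one H100
#SBATCH --time=00:10:00
#SBATCH --output=%x-%j.out        # job-name and job-id in the log name

nvidia-smi                        # confirm the GPU is visible
EOF

# On a Marlowe login node you would then run:
#   sbatch hello_gpu.sbatch   # submit
#   squeue --me               # watch the queue
```

Open OnDemand offers the same submission path through a browser, which is convenient for interactive sessions and notebooks.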

Software

Modules, Apptainer containers, Conda, CUDA toolkit, cuDNN, and more.
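Of these, Conda is often the quickest route to a reproducible per-user environment. A hypothetical `environment.yml` for GPU work — package names and versions here are illustrative, not a tested recommendation; match the CUDA version to one offered via `module avail`:

```yaml
# environment.yml -- illustrative only; versions are assumptions.
name: h100-torch
channels:
  - pytorch
  - nvidia
  - conda-forge
dependencies:
  - python=3.11
  - pytorch
  - pytorch-cuda=12.1   # align with a CUDA toolkit module on the cluster
```

Create and activate it with `conda env create -f environment.yml` and `conda activate h100-torch`; Apptainer is the alternative when a workload ships as a container image.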

NGC NIMs

Using NVIDIA NIM containers on Marlowe, with Llama and Evo examples.
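As a sketch of the general shape — the image path, tag, and flags below are assumptions, and the NGC NIMs page is authoritative — a NIM can be pulled and served through Apptainer, which HPC clusters generally favor over Docker:

```shell
# Sketch: write a launcher for a Llama NIM via Apptainer. The nvcr.io
# image path and tag are assumptions; check NGC for the real ones, and
# note that pulling NIM images requires NGC registry authentication.
cat > run_nim.sh <<'EOF'
#!/bin/bash
export NGC_API_KEY=...   # your NGC key (left elided)
apptainer pull llama-nim.sif docker://nvcr.io/nim/meta/llama3-8b-instruct:latest
apptainer run --nv --env NGC_API_KEY="$NGC_API_KEY" llama-nim.sif
EOF
chmod +x run_nim.sh
```

The launcher would be run from a job with a GPU allocated (`--nv` passes the GPU through to the container); the served model then answers OpenAI-style HTTP requests on the node.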

FAQ

Common questions about login issues, quotas, Docker, CUDA tools, and more.

Help & Support

Contact Stanford Research Computing for support, consultations, and community channels.

Marlowe

Stanford's GPU-based computational instrument

Quick Links

  • Apply for Access
  • Application Guide
  • Documentation

Resources

  • Recharge Rates
  • Cite Marlowe

© Stanford University. Stanford, California 94305.