Enabling Data-Driven Discovery at Scale

Marlowe is Stanford's first GPU-based computational instrument: 248 NVIDIA H100 GPUs, managed by Stanford Data Science, powering frontier AI research across all seven schools.

248 H100 GPUs
31 Compute Nodes
190+ Research Groups
7 Schools
2.1M+ GPU-Hours Delivered

GPU-Based Computational Instrument

Named after Philip Marlowe, Raymond Chandler's film noir detective, Marlowe is designed to give faculty the infrastructure to train foundation models, run large-scale simulations, and pursue computational work at scales previously available only to industry.

A team of Research Data Scientists partners directly with faculty to optimize code, scale training across multiple nodes, and maximize the scientific return from every GPU-hour allocated.

  • Partner with faculty to design and execute GPU-accelerated research
  • Optimize training pipelines for multi-node scaling (see the sketch after this list)
  • Integrate open science practices into computational research
  • Provide technical consulting on model architecture and distributed training
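
For a sense of what multi-node scaling involves in practice, here is a minimal, hypothetical sketch of a PyTorch DistributedDataParallel training loop. It is a generic illustration rather than Marlowe-specific code: the model, batch size, and step count are placeholders, and it assumes a launcher such as torchrun (or srun on a Slurm-managed cluster) sets the usual RANK, LOCAL_RANK, and WORLD_SIZE environment variables.

```python
# Minimal PyTorch DDP sketch (illustrative only; not Marlowe-specific code).
# Assumes launch via e.g. `torchrun --nnodes=N --nproc-per-node=G train.py`,
# which sets RANK, LOCAL_RANK, and WORLD_SIZE in the environment.
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # NCCL is the standard backend for multi-GPU, multi-node training.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model; a real run would build the research model here.
    model = torch.nn.Linear(1024, 1024).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(100):  # placeholder loop over synthetic batches
        x = torch.randn(32, 1024, device=local_rank)
        loss = model(x).square().mean()
        optimizer.zero_grad()
        loss.backward()  # DDP all-reduces gradients across all ranks here
        optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Each process drives one GPU, and DDP synchronizes gradients automatically during backward(), so the same script scales from a single node to many without code changes.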

Read the Stanford Report feature →

New to Marlowe?

New PIs get 5,000 free GPU-hours.

Apply for Access

Latest Updates

View all →
  • Apr 22, 2026 · Talks & Seminars · NVIDIA & Marlowe: Training Robots with World Foundation Models
  • Mar 11, 2026 · Talks & Seminars · NVIDIA New AI Models: Open Models — Open to Build
  • Feb 20, 2026 · Maintenance · Planned Maintenance Window: February 24-25
  • Feb 18, 2026 · Talks & Seminars · NVIDIA & Marlowe: Post-training Language Agents with NeMo RL and NeMo Gym
  • Feb 7, 2026 · Research Milestone · Milestone: Training POC for 80B-Parameter Brain Model on Marlowe Completed
  • Jan 28, 2026 · Talks & Seminars · Compute Resources @ Stanford and Beyond
  • Nov 12, 2025 · Talks & Seminars · NVIDIA PhysicsNeMo: Community Models and Dataset Integrations
  • Oct 29, 2025 · Talks & Seminars · Building Scalable, End-to-End Generative AI with NVIDIA NeMo Framework on Marlowe
  • Sep 24, 2025 · Talks & Seminars · Graph Neural Networks & LLMs in PyG on Marlowe
  • Aug 13, 2025 · Talks & Seminars · Using CuPyNumeric and the Legate Ecosystem for Multi-GPU Scaling on Marlowe
  • Aug 7, 2025 · Education · DataSci 211: Accelerating Research with Marlowe
  • Jul 16, 2025 · Talks & Seminars · NVIDIA Clara for AI-Enabled Healthcare and Life Sciences on Marlowe
  • Jun 25, 2025 · Talks & Seminars · NVIDIA Nsight Systems for Profiling Code on Marlowe
  • May 21, 2025 · Talks & Seminars · Distributed Training & Marlowe Multi-GPU Best Practices
  • Apr 29, 2025 · Talks & Seminars · Marlowe Featured at 2025 Stanford Data Science Conference
  • Apr 23, 2025 · Talks & Seminars · Marlowe GPU Computing Foundations: Architecture, Applications, and Acceleration
  • Dec 15, 2024 · Press · Stanford Welcomes First GPU-Based Supercomputer

Research Spotlights

View all spotlights →

Faculty from across Stanford are using Marlowe to train foundation models and to run computations and simulations at scales previously available only to industry.

Dan Yamins

Associate Professor of Psychology and Computer Science

School of Humanities & Sciences

Counterfactual World Modeling: Training Brain-Scale Neural Networks

Validated 80B-parameter brain-inspired neural network training on Marlowe in a February 2026 proof-of-concept. Now training PSI2-30B — a 30-billion-parameter counterfactual world model that learns to predict how the physical world changes in response to actions — in a dedicated 30-day campaign across 24 nodes, the first hero run on Marlowe.

Computational Neuroscience / AI
Read full story →

Andreas Tolias

Professor of Ophthalmology and of Electrical Engineering

School of Medicine

The Enigma Project: First Foundation Model and Digital Twin of the Brain

Trained the first foundation model of mammalian visual cortex on Marlowe — a 2B-parameter multimodal transformer on recordings from 3 million neurons across 330 mice, establishing the first-ever scaling laws for neuroscience. Now scaling to build a digital twin of the primate brain with up to 1 trillion tokens of neural data across 128 GPUs. One of Marlowe's earliest and most active research groups.

Neuroscience / AI
Read full story →

Jure Leskovec

Professor of Computer Science

School of Engineering

AI Virtual Cell: Genomic Foundation Models at Frontier Scale

Building the molecular foundation of the AI Virtual Cell — novel architectures designed for biological sequences (DNA, RNA, proteins) with biologically informed inductive biases, not adaptations of existing language models. Currently at 770M parameters with demonstrated scaling laws, targeting 3B-15B for open-source release comparable to ESM-3 (Science) and EVO-2 (Nature).

Computational Biology / Genomics

Ruijiang Li

Associate Professor of Radiation Oncology

School of Medicine

Virtual Cell World Models for Cancer Biology

Published a vision-language foundation model in Nature (January 2025) for cancer diagnosis and therapeutic-response prediction. Now building a generative world model for virtual cells on Marlowe — multi-scale AI that simulates from molecular interactions to tumor microenvironment evolution — integrating histopathology images, clinical notes, spatial transcriptomics, and over 400 million medical images at billion-parameter scale.

Medical AI / Oncology
View All Research Groups

In the Press

Stanford welcomes first GPU-based supercomputer (Stanford Report, December 2024)