Enabling Data-Driven Discovery at Scale
Marlowe is Stanford's first GPU-based computational instrument: 248 NVIDIA H100 GPUs powering frontier AI research across all seven schools, managed by Stanford Data Science.
GPU-Based Computational Instrument
Named after Philip Marlowe, the film noir detective, Marlowe is designed to give faculty the infrastructure to train foundation models, run large-scale simulations, and pursue computational work at scales previously available only to industry.
A team of Research Data Scientists partners directly with faculty to optimize code, scale training across multiple nodes, and maximize the scientific return from every GPU-hour allocated.
- Partner with faculty to design and execute GPU-accelerated research
- Optimize training pipelines for multi-node scaling
- Integrate open science practices into computational research
- Provide technical consulting on model architecture and distributed training
New to Marlowe?
New principal investigators (PIs) receive 5,000 free GPU-hours.
Research Spotlights
Faculty from across Stanford are using Marlowe to train foundation models and to run computations and simulations at scales previously available only to industry.
Dan Yamins
Associate Professor of Psychology and Computer Science
School of Humanities & Sciences
Counterfactual World Modeling: Training Brain-Scale Neural Networks
Training an 80-billion-parameter brain-inspired neural network to study how biological neural circuits give rise to cognitive function. The model learns to predict how the world changes in response to actions, a core building block of biological intelligence.
Andreas Tolias
Professor of Ophthalmology and of Electrical Engineering
School of Medicine
The Enigma Project: Building a Foundation Model of the Brain
Training large multimodal transformers on time-resolved neural recordings, visual stimulation, and behavioral data to build a foundation model of the primate brain, decoding how the brain processes and integrates information across sensory modalities.
Jure Leskovec
Professor of Computer Science
School of Engineering
AI Virtual Cell: Genomic Foundation Models at Frontier Scale
Building the molecular-scale layer of the AI Virtual Cell with novel foundation models for genomic sequences (DNA, RNA, and proteins), scaling from 770M to 15B parameters. The goal is open-source models comparable to ESM-3 and EVO-2, trained at frontier LLM scale.
Ruijiang Li
Associate Professor of Radiation Oncology
School of Medicine
Multimodal AI Foundation Models for Cancer Biology and Personalized Oncology
Building multimodal foundation models that integrate histopathology images, clinical notes, and spatial transcriptomics to transform cancer diagnosis and treatment. The work, published in Nature (January 2025), involves training on over 400 million medical images and billions of text tokens.
In the Press
CoDa marks new era for computing and data science at Stanford (Stanford Report, February 2025)
Stanford welcomes first GPU-based supercomputer (Stanford Report, December 2024)