Research Focus
Marlowe enables frontier-scale AI research across Stanford, from 80-billion-parameter brain models to genomic foundation models to autonomous driving systems. Meet the researchers pushing the boundaries of what's possible with GPU computing.
Dan Yamins
Associate Professor of Psychology and Computer Science
School of Humanities & Sciences
Counterfactual World Modeling: Training Brain-Scale Neural Networks
Training an 80-billion-parameter brain-inspired neural network to study how biological neural circuits give rise to cognitive function. The model learns to predict how the world changes in response to actions, a core building block of biological intelligence.
Andreas Tolias
Professor of Ophthalmology and of Electrical Engineering
School of Medicine
The Enigma Project: Building a Foundation Model of the Brain
Training large multimodal transformers on time-resolved neural recordings, visual stimulation, and behavioral data to build a foundation model of the primate brain, decoding how the brain processes and integrates information across sensory modalities.
Jure Leskovec
Professor of Computer Science
School of Engineering
AI Virtual Cell: Genomic Foundation Models at Frontier Scale
Building the molecular-scale layer of the AI Virtual Cell with novel foundation models for genomic sequences (DNA, RNA, proteins), scaling from 770M to 15B parameters. The goal is open-source models comparable to ESM-3 and EVO-2, trained at frontier LLM scale.
Ruijiang Li
Associate Professor of Radiation Oncology
School of Medicine
Multimodal AI Foundation Models for Cancer Biology and Personalized Oncology
Building multimodal foundation models that integrate histopathology images, clinical notes, and spatial transcriptomics to transform cancer diagnosis and treatment. Published in Nature (January 2025), with models trained on over 400 million medical images and billions of text tokens.
Curtis Langlotz
Professor of Radiology and of Biomedical Data Science
School of Medicine
AI for Radiology: Large-Scale Medical Imaging Models
Developing AI systems that interpret medical images and clinical text to improve diagnostic accuracy and clinical decision-making. Professor Langlotz directs the Center for Artificial Intelligence in Medicine and Imaging (AIMI) at Stanford, one of the leading medical AI research programs in the world.
Gordon Wetzstein
Associate Professor of Electrical Engineering
School of Engineering
Video World Models for Real-Time Scene Synthesis and Autonomous Driving
Building scalable video world models that generate photorealistic novel views of dynamic scenes in real time by fine-tuning 1.3B-parameter video diffusion models conditioned on sparse 3D geometry from LiDAR, with publications at SIGGRAPH 2024 and CVPR 2025.
Brian Hie
Assistant Professor of Chemical Engineering
School of Engineering
Learning the Language of Biology: Protein and DNA Language Models
Training language models on DNA and protein sequences at multi-node scale — successfully scaled to 128 H100 GPUs with 50% model FLOPS utilization. Marlowe enables large-scale biological AI training that drives discovery in protein engineering and genomics.
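The 50% model FLOPS utilization (MFU) figure can be estimated with the standard approximation that dense transformer training costs about 6 FLOPs per parameter per token. A minimal sketch, where the 7B parameter count, token throughput, and H100 bf16 peak of 989 TFLOPS are illustrative assumptions, not figures from the project:

```python
# Sketch: estimating model FLOPS utilization (MFU) for dense transformer
# training, using the common 6 * params * tokens/sec approximation for
# achieved training FLOPs. All specific numbers below are illustrative.

def mfu(params: float, tokens_per_sec: float,
        n_gpus: int, peak_flops_per_gpu: float) -> float:
    """Fraction of peak hardware FLOPS achieved during training."""
    achieved = 6.0 * params * tokens_per_sec  # fwd + bwd FLOPs per second
    peak = n_gpus * peak_flops_per_gpu
    return achieved / peak

# Hypothetical run: a 7B-parameter model on 128 H100s
# (~989 TFLOPS dense bf16 peak per GPU)
util = mfu(params=7e9, tokens_per_sec=1.5e6,
           n_gpus=128, peak_flops_per_gpu=989e12)
print(f"MFU ~ {util:.1%}")
```

At these assumed numbers the formula lands near the 50% utilization the text reports, which is why MFU is the usual headline metric for multi-node scaling efficiency.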
Iro Armeni
Assistant Professor of Civil and Environmental Engineering
School of Engineering
4D Scene Understanding: AI for Dynamic Real-World Environments
Developing GPU-accelerated systems for understanding how physical spaces change over time, from construction sites to indoor environments. Transformer architectures for 4D instance segmentation and monocular video reconstruction require sustained multi-GPU training exceeding 72 hours per run.
James Zou
Associate Professor of Biomedical Data Science and, by courtesy, of Computer Science and of Electrical Engineering
School of Medicine
AI Agents for Biomedical Discovery: Self-Improving LLMs with Scientific Tools
Developing algorithms that enable large language models to self-improve and learn to use scientific tools like AlphaFold and biomedical databases to build deeper expertise. Fine-tuning and evaluating LLMs (7B-70B parameters) for high-impact applications in healthcare, biology, chemistry, and medicine.
Tengyu Ma
Assistant Professor of Computer Science
School of Engineering
LLM Reasoning at Scale: Reinforcement Learning for Theorem Proving and Parallel Chain-of-Thought
Using reinforcement learning to push the frontiers of LLM mathematical reasoning — training self-play theorem provers that tackle open mathematical conjectures through multi-agent collaboration, and teaching reasoning models to parallelize their long chains of thought to dramatically reduce inference latency. Demonstrated 1.8x strong scaling across 3 Marlowe nodes. Targeting ICML 2026.
Thierry Tambe
Assistant Professor of Electrical Engineering and, by courtesy, of Computer Science
School of Engineering
Efficient AI Computing: From Model Compression to Edge-Deployable Video Generation
Developing algorithms, hardware architectures, and tools to make AI computing more energy-efficient. Flagship projects include block-wise quantization of video diffusion transformers for real-time edge video generation and 2D Gaussian splatting as compact visual encoders. BlockDialect (ICML 2025) introduces fine-grained mixed-format quantization for energy-efficient LLM inference. One of Marlowe's most active groups, with 41,000+ GPU-hours across five medium project allocations.
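The core idea behind block-wise quantization is to store one scale per small block of values rather than one per tensor, so outliers only degrade their own block. A minimal symmetric int8 sketch (not the group's actual method; block size and shapes are assumptions):

```python
import numpy as np

def quantize_blockwise(x: np.ndarray, block: int = 64):
    """Symmetric int8 quantization with one fp scale per block of values."""
    xb = x.reshape(-1, block)
    scale = np.abs(xb).max(axis=1, keepdims=True) / 127.0  # per-block scale
    scale = np.where(scale == 0, 1.0, scale)               # avoid div by zero
    q = np.clip(np.round(xb / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_blockwise(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return (q.astype(np.float32) * scale).reshape(-1)

np.random.seed(0)
x = np.random.randn(1024).astype(np.float32)
q, s = quantize_blockwise(x)
err = np.abs(dequantize_blockwise(q, s) - x).max()
```

Shrinking the block trades extra scale storage for lower quantization error; mixed-format schemes like the one named above go further by varying the number format per block.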
Jeffrey Glenn
Joseph D. Grant Professor and Professor of Microbiology and Immunology
School of Medicine
PiFold: Reinforcement Learning for High-Fidelity Molecular Structure Prediction
Training AlphaFold3-family models with reinforcement learning and physics-based Rosetta scoring to produce physically realistic molecular structures for drug discovery. On CASP16 targets, PiFold achieves Rosetta energy of -25.3 where competing models exceed 4,000 — and improves binding affinity by 2-3 kcal/mol on protein-ligand benchmarks. Targeting Nature Machine Intelligence and ICML 2026.
Kay Giesecke
Professor of Management Science and Engineering
School of Engineering
Time Machine: Time-Aware Pretrained LLMs for Finance and Economics
Pioneering time-aware LLMs that eliminate look-ahead bias, a fundamental flaw that makes current models unreliable for finance, economics, and policy analysis. Training models from 7B to 14B parameters on trillions of time-ordered tokens with novel temporal architectures. The aim is an "ImageNet of finance": open-source benchmarks and models. Marlowe's largest allocation request, at 70,000 GPU-hours.
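Look-ahead bias arises when a model evaluated "as of" some date was trained on text published after that date. The simplest defense is a hard knowledge cutoff on the training corpus, sketched below with hypothetical documents and dates (the project's actual temporal architectures go well beyond filtering):

```python
from datetime import date

# Hypothetical documents with publication timestamps
docs = [
    {"date": date(2019, 3, 1), "text": "Q1 earnings beat estimates."},
    {"date": date(2021, 6, 9), "text": "Fed signals rate hikes."},
    {"date": date(2023, 1, 5), "text": "Inflation cools to 6.5%."},
]

def training_corpus(docs, knowledge_cutoff: date):
    """Keep only documents published strictly before the cutoff, so a
    model evaluated 'as of' that date cannot peek at future events."""
    return [d["text"] for d in docs if d["date"] < knowledge_cutoff]

corpus = training_corpus(docs, date(2022, 1, 1))
# keeps only the 2019 and 2021 documents
```

A backtest built this way stays honest: every prediction for a date uses only text that existed before it, which is exactly the guarantee pretrained LLMs with mixed-date corpora cannot make.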