
Dan Yamins

Associate Professor of Psychology and Computer Science

School of Humanities & Sciences

Counterfactual World Modeling: Training Brain-Scale Neural Networks

Training an 80-billion-parameter brain-inspired neural network to study how biological neural circuits give rise to cognitive function. The model learns to predict how the world changes in response to actions, a core building block of biological intelligence.

Computational Neuroscience / AI

Andreas Tolias

Professor of Ophthalmology and of Electrical Engineering

School of Medicine

The Enigma Project: Building a Foundation Model of the Brain

Training large multimodal transformers on time-resolved neural recordings, visual stimulation, and behavioral data to build a foundation model of the primate brain, decoding how the brain processes and integrates information across sensory modalities.

Neuroscience / AI

Jure Leskovec

Professor of Computer Science

School of Engineering

AI Virtual Cell: Genomic Foundation Models at Frontier Scale

Building the molecular scale of the AI Virtual Cell with novel foundation models for genomic sequences (DNA, RNA, proteins) scaling from 770M to 15B parameters. The goal is open-source models comparable to ESM-3 and EVO-2, trained at frontier LLM scale.

Computational Biology / Genomics

Ruijiang Li

Associate Professor of Radiation Oncology

School of Medicine

Multimodal AI Foundation Models for Cancer Biology and Personalized Oncology

Building multimodal foundation models integrating histopathology images, clinical notes, and spatial transcriptomics to transform cancer diagnosis and treatment. Published in Nature (January 2025), with training on over 400 million medical images and billions of text tokens.

Medical AI / Oncology

Curtis Langlotz

Professor of Radiology and of Biomedical Data Science

School of Medicine

AI for Radiology: Large-Scale Medical Imaging Models

Developing AI systems that interpret medical images and clinical text to improve diagnostic accuracy and clinical decision-making. Professor Langlotz directs the Center for Artificial Intelligence in Medicine and Imaging (AIMI) at Stanford, one of the leading medical AI research programs in the world.

Medical AI / Radiology

Gordon Wetzstein

Associate Professor of Electrical Engineering

School of Engineering

Video World Models for Real-Time Scene Synthesis and Autonomous Driving

Building scalable video world models that generate photorealistic novel views of dynamic scenes in real time by fine-tuning 1.3B-parameter video diffusion models conditioned on sparse 3D geometry from LiDAR, with publications at SIGGRAPH 2024 and CVPR 2025.

Computer Vision / Generative AI / Autonomous Systems

Brian Hie

Assistant Professor of Chemical Engineering

School of Engineering

Learning the Language of Biology: Protein and DNA Language Models

Training language models on DNA and protein sequences at multi-node scale, reaching 128 H100 GPUs with 50% model FLOPS utilization. Marlowe enables large-scale biological AI training that drives discovery in protein engineering and genomics.

Computational Biology / Protein Engineering
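The 50% model-FLOPS-utilization figure above can be sanity-checked with the standard back-of-the-envelope MFU formula. This is a minimal sketch: the ~6N FLOPs-per-token rule and the H100 BF16 peak are standard approximations, while the model size and throughput numbers are hypothetical placeholders, not figures from the project.

```python
def model_flops_utilization(tokens_per_sec, n_params, n_gpus, peak_flops_per_gpu):
    """Estimate model FLOPS utilization (MFU) for dense transformer
    training, using the common ~6*N FLOPs-per-token approximation
    (forward + backward pass) for an N-parameter model."""
    achieved_flops = 6 * n_params * tokens_per_sec
    peak_flops = n_gpus * peak_flops_per_gpu
    return achieved_flops / peak_flops

# Hypothetical numbers chosen to land near the reported ~50% MFU;
# the 10.4B parameter count and 1.0e6 tokens/s are assumptions.
# H100 BF16 dense peak is roughly 989 TFLOP/s.
mfu = model_flops_utilization(
    tokens_per_sec=1.0e6,
    n_params=10.4e9,
    n_gpus=128,
    peak_flops_per_gpu=989e12,
)
```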

Iro Armeni

Assistant Professor of Civil and Environmental Engineering

School of Engineering

4D Scene Understanding: AI for Dynamic Real-World Environments

Developing GPU-accelerated systems for understanding how physical spaces change over time, from construction sites to indoor environments. Transformer architectures for 4D instance segmentation and monocular video reconstruction require sustained multi-GPU training exceeding 72 hours per run.

Computer Vision / 3D Scene Understanding

James Zou

Associate Professor of Biomedical Data Science and, by courtesy, of Computer Science and of Electrical Engineering

School of Medicine

AI Agents for Biomedical Discovery: Self-Improving LLMs with Scientific Tools

Developing algorithms that enable large language models to self-improve and learn to use scientific tools like AlphaFold and biomedical databases to build deeper expertise. Fine-tuning and evaluating LLMs (7B-70B parameters) for high-impact applications in healthcare, biology, chemistry, and medicine.

Biomedical AI / Generative Agents

Tengyu Ma

Assistant Professor of Computer Science

School of Engineering

LLM Reasoning at Scale: Reinforcement Learning for Theorem Proving and Parallel Chain-of-Thought

Using reinforcement learning to push the frontiers of LLM mathematical reasoning — training self-play theorem provers that tackle open mathematical conjectures through multi-agent collaboration, and teaching reasoning models to parallelize their long chains of thought to dramatically reduce inference latency. Demonstrated 1.8x strong scaling across 3 Marlowe nodes. Targeting ICML 2026.

Machine Learning / Mathematical Reasoning
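The "1.8x strong scaling across 3 nodes" claim maps directly onto the textbook definitions of speedup and parallel efficiency for a fixed-size workload. A minimal sketch, with illustrative wall-clock times chosen only to reproduce the stated ratio:

```python
def strong_scaling(t_one_node, t_n_nodes, n_nodes):
    """Speedup and parallel efficiency for a strong-scaling run:
    the same fixed workload timed on one node vs. n_nodes."""
    speedup = t_one_node / t_n_nodes
    efficiency = speedup / n_nodes
    return speedup, efficiency

# A 1.8x speedup on 3 nodes corresponds to 60% parallel efficiency.
# The 90s / 50s timings are made-up values that realize that ratio.
speedup, efficiency = strong_scaling(t_one_node=90.0, t_n_nodes=50.0, n_nodes=3)
```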

Thierry Tambe

Assistant Professor of Electrical Engineering and, by courtesy, of Computer Science

School of Engineering

Efficient AI Computing: From Model Compression to Edge-Deployable Video Generation

Developing algorithms, hardware architectures, and tools to make AI computing more energy-efficient. Flagship projects include block-wise quantization of video diffusion transformers for real-time edge video generation and 2D Gaussian splatting as compact visual encoders. BlockDialect (ICML 2025) introduces fine-grained mixed-format quantization for energy-efficient LLM inference. One of Marlowe's most active groups with 41,000+ GPU-hours across five medium project allocations.

Efficient AI / Hardware-Software Co-Design
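The core idea behind block-wise quantization is that each small block of values shares its own scale factor, so a single outlier only degrades precision within its block rather than across the whole tensor. A minimal NumPy sketch of generic symmetric block-wise int8 quantization; this illustrates the general technique only, not the group's specific mixed-format method:

```python
import numpy as np

def blockwise_quantize(x, block_size=32, n_bits=8):
    """Symmetric block-wise integer quantization: each block of
    `block_size` values shares one scale, limiting the blast radius
    of outliers compared with a single per-tensor scale."""
    pad = (-len(x)) % block_size
    blocks = np.pad(x, (0, pad)).reshape(-1, block_size)
    qmax = 2 ** (n_bits - 1) - 1
    scales = np.abs(blocks).max(axis=1, keepdims=True) / qmax
    scales[scales == 0] = 1.0  # avoid divide-by-zero for all-zero blocks
    q = np.clip(np.round(blocks / scales), -qmax - 1, qmax).astype(np.int8)
    return q, scales

def blockwise_dequantize(q, scales, orig_len):
    """Reconstruct the float tensor from int8 codes and per-block scales."""
    return (q.astype(np.float32) * scales).reshape(-1)[:orig_len]

x = np.random.default_rng(0).normal(size=1000).astype(np.float32)
q, s = blockwise_quantize(x)
x_hat = blockwise_dequantize(q, s, len(x))
```

Smaller blocks mean tighter scales (lower quantization error) at the cost of more scale metadata per tensor, which is the central trade-off such schemes tune.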

Jeffrey Glenn

Joseph D. Grant Professor and Professor of Microbiology and Immunology

School of Medicine

PiFold: Reinforcement Learning for High-Fidelity Molecular Structure Prediction

Training AlphaFold3-family models with reinforcement learning and physics-based Rosetta scoring to produce physically realistic molecular structures for drug discovery. On CASP16 targets, PiFold achieves Rosetta energy of -25.3 where competing models exceed 4,000 — and improves binding affinity by 2-3 kcal/mol on protein-ligand benchmarks. Targeting Nature Machine Intelligence and ICML 2026.

Computational Biology / Drug Discovery
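To put the "2-3 kcal/mol" binding-affinity improvement in perspective, the thermodynamic relation ΔΔG = RT·ln(Kd,old/Kd,new) converts a free-energy gain into a fold change in dissociation constant. A minimal sketch using only the standard gas constant and room temperature; the function name is ours, not from the project:

```python
import math

def kd_fold_change(ddg_kcal_per_mol, temp_k=298.15):
    """Fold improvement in dissociation constant implied by a binding
    free-energy gain, from ddG = R*T*ln(Kd_old / Kd_new)."""
    R = 1.987204e-3  # gas constant in kcal/(mol*K)
    return math.exp(ddg_kcal_per_mol / (R * temp_k))

# At 25 C, a 2-3 kcal/mol improvement is roughly a 30x-160x tighter Kd:
low = kd_fold_change(2.0)
high = kd_fold_change(3.0)
```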

Kay Giesecke

Professor of Management Science and Engineering

School of Engineering

Time Machine: Time-Aware Pretrained LLMs for Finance and Economics

Pioneering time-aware LLMs that eliminate look-ahead bias — a fundamental flaw making current models unreliable for finance, economics, and policy analysis. Training from 7B to 14B parameter models on trillions of time-ordered tokens with novel temporal architectures. Aims to create the "ImageNet of finance" with open-source benchmarks and models. Marlowe's largest allocation request at 70,000 GPU-hours.

AI for Finance / Time-Aware LLMs
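The data-side step in removing look-ahead bias is a strict temporal cutoff: a model may only train on documents published before the dates it will later be evaluated on. A minimal sketch of that filtering step, assuming a made-up document schema with a `published` date field; the project's actual temporal architectures go well beyond this:

```python
from datetime import date

def temporal_filter(documents, cutoff):
    """Keep only documents published strictly before `cutoff`, so a model
    pretrained on the result has provably never seen post-cutoff events."""
    return [d for d in documents if d["published"] < cutoff]

# Illustrative corpus; the records and dates are invented.
docs = [
    {"text": "2019 Q1 earnings call transcript", "published": date(2019, 4, 15)},
    {"text": "2021 merger announcement", "published": date(2021, 6, 1)},
]
train_corpus = temporal_filter(docs, cutoff=date(2020, 1, 1))
```

Evaluating the resulting model on 2020+ events then measures genuine forecasting ability rather than memorized hindsight.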

Interested in using Marlowe for your research?

Principal Investigators new to Marlowe are eligible for 5,000 free GPU-hours. Get started with your first allocation!

Apply for Access