Alejandro Lozano

I am a 4th-year PhD candidate at the Stanford Artificial Intelligence Laboratory (SAIL), where I work on vision-language foundation models. I am fortunate to be advised by Serena Yeung-Levy and supported by the Arc Institute. I am deeply grateful to NVIDIA, Amazon, and HAI for generously funding my research. I also work part-time at Microsoft Research under the supervision of Hoifung Poon (hosted by Jeya Maria Jose). Additionally, I am an editor of the multimodal AI for scientific research chapter of the AI Index.

My work focuses on multimodal learning, retrieval-augmented multimodal systems, and agent-based reasoning, especially as they apply to real-world precision medicine. Lately, I’ve been particularly interested in reasoning-intensive multimodal retrieval. Outside of research, I like to cook, work out, hike around the Bay Area, play guitar, and meditate.


Recent News

  • [January 2026] 1 paper accepted to ICLR 2026, see you in Brazil.
  • [December 2025] 1 paper accepted to EurIPS 2025, see you in Copenhagen.
  • [June 2025] I am interning at Microsoft Research.
  • [March 2025] Awarded an NVIDIA grant.
  • [February 2025] 3 papers accepted to CVPR 2025.
  • [January 2025] 2 papers accepted to ICLR 2025.
  • [December 2024] 1 paper accepted to NEJM AI.
  • [September 2024] 1 paper accepted to NeurIPS 2024.

Selected Publications

(*) denotes co-first authorship. For a full list of publications, please see my Google Scholar profile.

Teaching

  • Stanford AI4ALL Medical AI Lead Mentor
    Stanford, 2024 and 2025
  • Head Teaching Assistant, CS 235: Computational Methods for Biomedical Image Analysis and Interpretation
    Stanford, 2025
  • Teaching Assistant, CS 235: Computational Methods for Biomedical Image Analysis and Interpretation
    Stanford, 2022

I stole this website template from Jon Barron, who published his source code here.
