Medical AI Bootcamp

A program for closely mentored research at the intersection of AI and Medicine. Open to students at Harvard & Stanford, and to medical doctors around the world.

Rolling Applications, reviewed at the end of every month.

What is the Medical AI Bootcamp?
An educational program for closely mentored research at the intersection of AI and Medicine, hosted virtually at the Rajpurkar lab. Prof. Pranav Rajpurkar is the director of the Medical AI Bootcamp, and directly mentors each project in the bootcamp.

Who can participate in the Medical AI Bootcamp?
The bootcamp is open to students at Harvard & Stanford, and to medical doctors around the world.

What are the two specializations in the Medical AI Bootcamp?
The two specializations are the AI Specialization, open to students at Stanford/Harvard, and the Medicine Specialization, open to medical doctors globally.

What are the requirements for the AI Specialization?
Candidates for the AI Specialization should have practical machine learning knowledge and the ability to code in Python. Undergraduates and graduate students are expected to sign up for research credits for two semesters or quarters. A grade of A or better in these courses is typically expected. The interview will test the candidate's ability to comfortably write Python programs.

Who is eligible for the Medicine Specialization?
The Medicine Specialization is open to medical doctors who have already received an MD or equivalent. In the upcoming round, there is a particular interest in specialists in radiology. Candidates from around the world are welcome in the medicine specialization.

How long does the Medical AI Bootcamp last?
The typical duration of the research projects in the bootcamp is 6-9 months.

What do you learn in the Medical AI Bootcamp?
You learn how to tackle an impact-driven research project in medical AI, from conception to co-authoring a manuscript.

What is the time commitment for the Medical AI Bootcamp?
Candidates are expected to dedicate 15-20 hours per week to the bootcamp over the course of the research project.

Can students participate in the Medical AI Bootcamp while doing a summer internship?
Yes, students can do the bootcamp even when it overlaps with a summer internship, as long as they are able to dedicate the required 15-20 hours per week.

Can students get research units for participating in the Medical AI Bootcamp?
Yes, Stanford/Harvard students can get research units. At Stanford, they get research units with Professor Andrew Ng, and at Harvard, they get research units with Professor Pranav Rajpurkar.

How are the research projects in the Medical AI Bootcamp structured?
Students lead their own projects given a general direction. There are structured assignments that guide them through various aspects of the research project.

What is the typical outcome of a project in the Medical AI Bootcamp?
Participants typically end up with their own manuscript, often as first authors.

How often are group meetings held for the Medical AI Bootcamp?
There is one group meeting per week on Zoom, held on a weekday from 11 AM to 12 PM ET.

How do students interact with each other during the Medical AI Bootcamp?
Students across specializations often meet with each other to discuss their research assignments and provide peer feedback.

How are projects in the Medical AI Bootcamp mentored?
Each project receives direct mentorship from Prof. Pranav Rajpurkar. Members typically also get mentorship from collaborating faculty, research scientists, PhD students, and postdocs when applicable.

Apply

Takes 10-15 minutes. You need your CV, Transcript (if any), and written motivation for applying.

Our selection process operates on a rolling basis, so apply early. If you are chosen to proceed to the interview stage, we will reach out to you directly. Due to the high volume of applications we receive, we regret that we can only respond to candidates who are selected for the interview round.

Member Experiences

Vignav Ramesh, Undergraduate at Harvard University

Participating in the Medical AI Bootcamp was an extraordinary experience. Working within a cohort of experienced mentors and brilliant peers, I had the opportunity to leverage and build upon my past ML knowledge to tackle important problems at the forefront of medicine, and take projects all the way from conception to publication. As part of the Bootcamp, I've presented at two conferences, developed and nurtured meaningful relationships with fellow researchers, and learned a tremendous amount. I heartily recommend the Bootcamp to anyone eager to dive deeper into biomedical AI.

Jaehwan Jeong, Undergraduate at Stanford University

The Medical AI Bootcamp exceeded all my expectations. It was an incredible journey delving into the forefront of AI and Medicine. Working in dynamic interdisciplinary teams, we tackled impactful research problems with guidance from esteemed mentors. Throughout the program, I gained invaluable skills in developing cutting-edge medical AI solutions. From conceptualization to manuscript co-authorship, we honed our abilities to transform ideas into tangible outcomes. The fast-paced and collaborative environment challenged me to push boundaries and reach new heights.

Dr. Subathra Adinathan, Radiologist at JIPMER, India

The Medical AI Bootcamp was a transformative journey, as I worked from idea formation and problem formulation to manuscript writing and submission in a dynamic interdisciplinary team. Dedicated mentors instilled in us passion and a strong work ethic, while fostering a nurturing and inclusive environment. This empowered me to push beyond my boundaries. Working collaboratively, I had plenty of opportunities to develop and hone my programming skills. In addition, I developed a deeper insight into image analysis and interpretation as a whole. The experience supplied me with fresh perspectives that are useful when I look at images in my practice as a radiologist as well.

Julie Chen, Undergraduate at Stanford University

Before starting the medical AI bootcamp, I didn't have any experience working on AI research, much less in medical AI research. My only previous research experience was working in a biochemistry/cell biology lab, so I initially felt very uncertain about stepping into a new domain. Thankfully, the medical AI bootcamp provided an amazing mentorship network and resources that Pranav and the team have been building up and developing for many years now. This really allowed me to dive headfirst into a new research discipline, learn a ton both about the specific research topic and general project management and coding skills, all while working on a problem with real-world applications in healthcare.

Benjamin Yan, Undergraduate at Stanford University

The Medical AI Bootcamp was a foundational experience to my ongoing and nascent research trajectory. When I joined, I had very little exposure to university AI research; I did not know how to closely read journal articles, interface with a collaborative codebase, or lead a team effort toward a peer-reviewed publication. So these were skills and faculties I learned as a result of the bootcamp, and continue to nurture today. This would not be possible without my mentors, Prof. Rajpurkar and Dr. Michael Moor—who were incredibly knowledgeable, kind, supportive, and erudite with their shining pearls of research and career insight. I’m also grateful to my wonderful teammates; our affinity was crucial in bringing the project all the way from an early idea to a fully-fledged paper. With a hand on my heart, I recommend the bootcamp to anyone searching for an opportunity in healthcare research—especially including those with no prior experience!

Highlighted Projects

Predicting patient decompensation from continuous physiologic monitoring in the emergency department.

Published in NPJ Digital Medicine. [link]. Advised by David Kim MD PhD and Pranav Rajpurkar PhD. At the time of joining the bootcamp, Sameer Sundrani and Julie Chen were both undergraduates at Stanford. Image: Julie Chen presenting at SAEM 23.

Sameer Sundrani and Julie Chen first authored a research study on a framework called VitalML, which utilizes advanced machine learning techniques to predict the development of critical conditions in initially stable emergency department patients. By integrating traditional triage data with features derived from continuous physiologic monitoring, their models achieved exceptional accuracy in anticipating vital sign abnormalities such as tachycardia, hypotension, and hypoxia within a 90-minute timeframe. These predictions surpassed the performance of models relying solely on standard triage data. This approach not only enhances triage processes but also enables real-time monitoring of patients at risk of clinical deterioration. Sameer and Julie's research has the potential to transform emergency and critical care by significantly improving patient outcomes.
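The paper's actual features and models are detailed in the publication; as a rough illustration of the core idea of deriving predictive features from continuous monitoring, here is a minimal, purely hypothetical sketch. The linear extrapolation rule, the function names, and the example data are stand-ins, not the paper's method:

```python
TACHYCARDIA_BPM = 100  # standard adult threshold for tachycardia

def trend_features(times_min, heart_rate):
    """Summarize a continuously monitored vital sign with simple statistics."""
    slope = (heart_rate[-1] - heart_rate[0]) / (times_min[-1] - times_min[0])
    return {
        "mean": sum(heart_rate) / len(heart_rate),
        "last": heart_rate[-1],
        "slope": slope,  # bpm per minute
    }

def predict_tachycardia_90min(feats):
    """Toy rule: extrapolate the observed trend 90 minutes ahead."""
    projected = feats["last"] + 90 * feats["slope"]
    return projected >= TACHYCARDIA_BPM

# A patient who looks stable at triage but whose heart rate drifts upward
t = list(range(30))                  # 30 minutes of monitoring
hr = [80 + 0.3 * m for m in t]      # rising ~0.3 bpm per minute
feats = trend_features(t, hr)
print(predict_tachycardia_90min(feats))  # trend projects ~116 bpm -> True
```

A triage-only model would see a single normal heart rate reading here; the point of monitoring-derived features is that the trend carries signal the snapshot does not.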

Improving Radiology Report Generation Systems by Removing Hallucinated References to Non-existent Priors

Published in the Machine Learning for Health 2022 Conference [link]. Advised by Pranav Rajpurkar PhD. At the time of joining the bootcamp, Vignav Ramesh was a first-year undergraduate at Harvard and Nathan Chi was a first-year undergraduate at Stanford. Image: Vignav Ramesh presenting, with Pranav Rajpurkar, at SAIL 23.

Vignav Ramesh and Nathan Chi first authored a project tackling the issue of hallucinated references to non-existent prior reports in deep learning models for radiology report generation. First, they utilize a powerful language model called GPT-3 to rewrite medical reports without references to prior reports, ensuring that the generated reports are free from hallucinated references. Second, they employ a specialized language model called BioBERT for token classification, directly removing words that refer to prior reports. By applying these methods, Vignav and Nathan modify the MIMIC-CXR dataset, a collection of chest X-rays and their associated radiology reports, and retrain the CXR-RePaiR system. The resulting model, called CXR-ReDonE, outperforms previous methods on clinical metrics, achieving higher accuracy and producing radiology reports with significantly fewer hallucinatory references to prior reports. Their work marks a crucial step forward in the evolution of radiology report generation systems, bringing us closer to a future where advanced technology seamlessly supports medical professionals in providing optimal care to patients.
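The paper's pipeline relies on GPT-3 rewriting and BioBERT token classification; as a far simpler stand-in that conveys the goal, here is a hypothetical rule-based filter that drops sentences referencing a prior study. The phrase list, function name, and example report are illustrative, not taken from the paper:

```python
import re

# Phrases that often signal a comparison to a prior study
# (a small, hypothetical list; the paper instead learns to detect
# such references with GPT-3 rewriting and BioBERT token classification)
PRIOR_PATTERNS = [
    r"compared (?:to|with) (?:the )?prior",
    r"(?:since|from) the previous (?:study|exam|radiograph)",
    r"\bunchanged\b",
    r"\binterval change\b",
]
PRIOR_RE = re.compile("|".join(PRIOR_PATTERNS), re.IGNORECASE)

def remove_prior_references(report: str) -> str:
    """Drop sentences that reference a (possibly non-existent) prior report."""
    sentences = [s.strip() for s in report.split(".") if s.strip()]
    kept = [s for s in sentences if not PRIOR_RE.search(s)]
    return ". ".join(kept) + ("." if kept else "")

report = ("Heart size is normal. Lungs are clear. "
          "No change compared to the prior radiograph.")
print(remove_prior_references(report))
# -> "Heart size is normal. Lungs are clear."
```

Training a report generator on reports cleaned this way removes the incentive to hallucinate comparisons to studies the model has never seen.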

Multimodal Image-Text Matching Improves Retrieval-based Chest X-Ray Report Generation

Published in the Medical Imaging with Deep Learning 2023 Conference [link]. At the time of joining the bootcamp, Jaehwan Jeong was a second-year undergraduate at Stanford, Katherine Tian was an undergraduate at Harvard, Sina Hartung was a master's student in Biomedical Informatics at Harvard, and Andrew Li MD was a Gastroenterology fellow at Stanford. Image: Jaehwan Jeong giving NEJM award presentation at SAIL 23.

Jaehwan Jeong, Katherine Tian, Sina Hartung, and Dr. Andrew Li authored a research paper proposing an innovative approach to radiology report generation called X-REM (Contrastive X-Ray REport Match). The goal is to improve the retrieval-based generation of radiology reports by enhancing the coherence and accuracy of the generated text. Their approach, X-REM, utilizes a novel image-text matching score to measure the similarity between a chest X-ray image and a radiology report for report retrieval. By employing a language-image model, the fine-grained interaction between the image and text is effectively captured, surpassing the limitations of the cosine similarity often used in similar methods. X-REM outperforms multiple prior radiology report generation models on both natural language and clinical metrics. Their work highlights the potential for significant advancements in radiology report generation, bridging the gap between AI-generated reports and those produced by human experts. With further development and evaluation, their research can contribute to the improvement of radiology practices, ultimately enhancing patient care.
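Retrieval-based report generation can be pictured as scoring every candidate report against the image and returning the best match. The sketch below is a hypothetical illustration with made-up embeddings: the pluggable `match_score` argument is where X-REM's learned image-text matcher would go, and plain cosine similarity stands in for the baseline the paper improves upon:

```python
import math

def cosine(u, v):
    """Baseline scorer: cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def retrieve_report(image_emb, corpus, match_score):
    """Return the corpus report whose score against the image is highest.

    `match_score` is where a learned image-text matcher (as in X-REM)
    would be plugged in; cosine similarity is used here for illustration.
    """
    return max(corpus, key=lambda item: match_score(image_emb, item["embedding"]))

# Hypothetical report corpus with toy 3-dimensional embeddings
corpus = [
    {"report": "Clear lungs, no acute findings.", "embedding": [0.9, 0.1, 0.0]},
    {"report": "Right lower lobe consolidation.", "embedding": [0.1, 0.9, 0.2]},
]
image_emb = [0.2, 0.8, 0.1]  # hypothetical image-encoder output
best = retrieve_report(image_emb, corpus, cosine)
print(best["report"])  # -> "Right lower lobe consolidation."
```

The design point is that the retrieval loop is agnostic to the scorer: swapping cosine similarity for a matcher that attends jointly over image and text, as X-REM does, changes only the `match_score` argument.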

Adapt Segment Anything Models to Medical Imaging via Fine-Tuning without Domain Pretraining

Accepted for oral presentation at AAAI 2024 Spring Symposium for Clinical Foundation Models [link]. At the time of joining the bootcamp, Kevin Li was a third year undergraduate at Stanford. Image: Kevin Li giving an oral presentation at AAAI 2024 Spring Symposium series.

Kevin Li first authored a research paper examining the value of medical pre-training for foundation models in the context of parameter-efficient fine-tuning. In particular, the goal is to examine whether an image segmentation foundation model with medical pre-training (MedSAM) necessarily outperforms the original model without medical pre-training (SAM) when both models undergo fine-tuning. The conclusion is that for certain fine-tuning approaches, the advantage of medical pre-training is either negligible or nonexistent. One explanation is that the medical corpus MedSAM is pre-trained on is small relative to the general corpus SAM is trained on; the advantage MedSAM derives in the zero-shot setting on medical benchmarks therefore becomes negligible in the fine-tuning setting, where task-specific supervision is paramount.
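The flavor of parameter-efficient fine-tuning in this comparison can be illustrated with a toy example: a "pretrained" weight is frozen and only a small additive adapter is trained on the downstream task. Everything below is a made-up, one-parameter illustration of the general idea, not the paper's actual setup or models:

```python
# Toy parameter-efficient fine-tuning: freeze a "pretrained" weight and
# train only a small additive adapter, mirroring how both SAM and MedSAM
# can be adapted with lightweight fine-tuning. All numbers are illustrative.

pretrained_w = 0.5          # frozen base weight (stands in for the base model)
adapter = 0.0               # trainable adapter parameter (starts at zero)

# Downstream task: y = 2.0 * x (stands in for the medical segmentation task)
data = [(x, 2.0 * x) for x in [1.0, 2.0, 3.0]]

lr = 0.05
for _ in range(200):
    grad = 0.0
    for x, y in data:
        pred = (pretrained_w + adapter) * x   # base + adapter; base stays frozen
        grad += 2 * (pred - y) * x            # d(squared error)/d(adapter)
    adapter -= lr * grad / len(data)

print(round(pretrained_w + adapter, 3))  # effective weight converges to ~2.0
```

Because the adapter absorbs the task-specific signal regardless of where the frozen base weight starts, the base model's initialization matters less once task supervision is plentiful, which is the intuition behind the paper's finding.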