This transcript has been edited for clarity.
Eric J. Topol, MD: Hello. This is Eric Topol, for "Medicine and the Machine" on Medscape. I'm so delighted to have a chance to have an extended conversation with Fei-Fei Li, who is a professor at Stanford University. She runs the Stanford Institute for Human-Centered Artificial Intelligence (HAI). She's had an enormous influence on the field of artificial intelligence (AI) over the years and is a hero of mine and, no less, a real friend. Welcome, Fei-Fei.
Fei-Fei Li, PhD: Thank you, Eric. That was really kind. The feeling is mutual; you're also a hero of mine in digital medicine.
Topol: You're very kind. I know we're going to have a fun discussion, because you've put a lot of thought into the human side of AI. Earlier this year, you started a new institute at Stanford University. Could you go into the background about why you did that and where it stands?
Li: You're referring to the Stanford Institute for Human-Centered AI. It's new in many ways, but it's also not new, because at Stanford, and elsewhere in the country, there is already a lot of research going on in the cross-disciplinary areas of not only AI technology but also the human side of understanding AI's social and humanistic impact, as well as interdisciplinary collaboration with medicine, education, and many other fields.
The institute brought all of this together at what I would call a critical moment in history, when, for the first time, in this decade, we are seeing this niche field of computer science called "artificial intelligence" making its way into real life. We are also seeing its impact on human lives and society exploding through the tremendous potential of its applications, products, and services in every sector of industry.
It's really important that we understand how to guide this technology forward, both in terms of its science and in terms of its applications and impact on human lives.
Topol: Well, there's no question that the issues of AI influencing our lives are bigger, of course, in medicine. That's why it's so appropriate that you're leading this charge with this institute.
You started something many years ago, called ImageNet, which has changed the field over time. Many folks in medicine are not aware that ImageNet was kind of the precursor to the big things that are going on in medicine today, with respect to the different types of scans, whether it's images of the retina, electrocardiograms, or anything that's an image in medicine. But in many ways, it started back when you began ImageNet.
Can you tell us about what was going on in your mind back then and what that led to in the field of deep learning?
Li: My subarea of specialty in AI is at the intersection between computer vision and machine learning. Back in 2006, I was working on a very important, holy grail problem of vision, which is what we call "object recognition."
If you think about intelligent animals, like humans, we see the world in very rich ways. But a building block of our visual intelligence is recognizing hundreds of thousands of different kinds of objects that surround us, be they cats, trees, chairs, microwaves, cars, or pedestrians. Enabling machine intelligence with that capability was a holy grail—it still is.
Our field was working on it. I was a young assistant professor; it was my first year in a faculty job. I was looking at this problem, and it dawned on me that while we were working on all these machine-learning algorithms of that era, we were, as a field, playing with very small datasets covering a couple of dozen object classes. Each dataset had about a hundred, or at most a few hundred, pictures per class, which is in sharp contrast to the real world that humans and animals experience.
We were inspired by human development and recognized that there is a huge need for big data to drive learning. It drives the diversity of the different patterns, but more mathematically important, it helps any learning system to learn to generalize better instead of overfitting on a much narrower set of data patterns that do not generalize to the big world.
With that recognition, we thought, let's do something crazy, which was to map out the objects of the entire world around us. How did we go about it? We were inspired by the biggest English vocabulary taxonomy, WordNet, which was created in the 1980s by the psychologist George Miller. WordNet has more than 80,000 nouns that depict the world of objects.
In the end, we took 22,000 object classes, downloaded candidate images from the Internet through different search engines, and launched a massive crowdsourcing project through Amazon Mechanical Turk, which was then in its first couple of years of business. We got more than 50,000 online workers from 160-plus countries to help us clean up and label almost a billion images, and we ended up with a carefully curated dataset of 15 million images across 22,000 object classes. That became ImageNet.
We immediately open-sourced it to the research community. Starting in 2010, we held an annual international competition called ImageNet Challenge to invite researchers worldwide to participate in solving this holy grail problem of computer vision.
A couple of years later, in 2012, machine-learning researchers in Canada used a classic model called the "convolutional neural network" to win the ImageNet Challenge. That was Professor Geoff Hinton's group.
I think many people consider that work on ImageNet Challenge to be the milestone work for the start of the deep-learning era.
Topol: Right. Earlier this year, they were awarded the Turing Award, often called the Nobel Prize of computer science. You set them up, though, Fei-Fei.
Li: Thank you for the recognition.
Topol: About 4 years ago, you gave a TED Talk about how we're teaching computers to understand pictures. I think a few million people watched that. It's just extraordinary. I remember asking you about it and I think that you showed a statue of a man on a horse and how computers can only get so good.
There is obviously a big gap of context. Miscues still occur. I remember asking you about that. We've only gone so far, right? Is that a fair statement about how we can train machines to interpret images?
Li: Yes. Despite the fact that our field has been steadily making a lot of progress, you're totally right. There is a lot of nuance, context, rich knowledge, common sense, and reasoning that still escapes today's machine intelligence. And in the field of vision, a statue of a man on a horse might still escape machine recognition.
Even if we overtrain a system to recognize sculpture, let's say, it still does not recognize it in the same way that human intelligence does. Seeing it as an art piece, understanding the context, recognizing its material: all those rich details are still missing.
Topol: Right. Now, another part of the story in medicine is that there's a limited number of these annotated datasets. There is no ImageNet with 15 million carefully annotated images, so the same datasets just keep getting used again and again.
Do we need to have a dedicated effort to develop these datasets, or are we going to move to some kind of self-supervised, unsupervised learning?
Li: Great question, Eric. I think the answer is, as you know, a multipronged approach. In many cases, the lack of a good aggregated dataset is a common frustration of researchers and developers in medicine. But there are good reasons, right? Medical data need to be much more carefully curated for reasons of patient privacy and safety, and we also have much more awareness of the issues of bias that come from data.
It's okay if a dataset of cat images is biased. But it's really not okay when it comes to human lives, health, and well-being. These reasons and regulatory constraints make it much harder for medical data to be aggregated in a massive way. I do think good-hearted efforts need to be made, and I know researchers around the world are contributing different efforts. Some are joining forces.
In the meantime, the technology itself can contribute in different ways as well. In addition to the need for good datasets to train supervised learning algorithms, like you said, the machine-learning field has been making a lot of progress on very interesting newer techniques, such as self-supervision, transfer learning, federated learning, and unsupervised learning.
I think we're going to see a mixture of approaches. In some cases, we still need good datasets; in other cases, more multimodal and mixed datasets will help us make headway.
Topol: The first big wave of work in medicine has certainly been on the image side, but we're starting to see synthetic notes generated from a conversation between patient and doctor, potentially eliminating the keyboard. That's exciting, because the keyboard is kind of the common enemy of both patients and clinicians. So you'd be optimistic that eventually we wouldn't have to use clinicians as data entry clerks to do that work?
Li: I 100% agree with you. I'm not only optimistic; I really hope that, through the work we do in healthcare AI, we will take as much of the mechanical charting burden off our clinicians as possible.
As I was taking care of my aging parents, I spent a lot of time in hospitals and I observed work by our nurses and doctors. I think their number-one wish from people like me in my field of AI is to give back more time to attend to patients rather than watching the screen and charting. I really hope, whether it's for the sake of patients, clinicians, or even from an economic point of view, that we'll see progress being made in that area.
Topol: I'd really be interested to learn about your direct experiences. I know you've cared for your parents and they've had medical needs. What has been your experience as a family member of patients undergoing care as it exists today, even in a first-rate medical center?
Li: First of all, as a family member of patients, the human experience of anxiety, fear, and hope is still predominant. One thing that really helped me to believe in what I do is having that personal experience. I'm such a firm believer that technology is here to augment and enhance human work, not to replace it. This is a big theme in my own research and in Stanford's HAI Institute, especially in the field of medicine.
We've heard many talks about replacing doctors because there are machines doing better diagnostics. But having experienced the full set of medical experiences, from surgery to the intensive care unit (ICU) and extended hospital stays with my aging parents, I cannot even begin to imagine a world without nurses and doctors for our patients.
Healthcare, at the end of the day, is humans caring for humans. What made me feel so strongly convinced is that if technology can play that supportive role to help charting, improve triaging by faster early diagnosis, and serve as an extra pair of eyes to ensure patient safety, all of this would be so useful. That's what I'm working on, and I feel very passionate about it.
Topol: You've collaborated a lot with Arnie Milstein there, and you've been doing a lot of work, for example, in machine vision in the ICU. Please tell us about some of the great work that you've been tackling.
Li: Arnie is Dr Arnold Milstein, a professor of medicine at Stanford and also a national thought leader on improving healthcare quality and curtailing healthcare costs from different angles, from research to policy and business practice.
About 7-8 years ago, Arnie and I met serendipitously and recognized a very interesting moment. As an AI professor, I was seeing the deep-learning age coming, especially through the lens of self-driving cars, where advances in smart sensors and AI algorithms, along with their lowering cost, were enabling a much more pervasive technology, one that can change transportation in a way that's very different from today's cars, which mostly rely on human drivers.
Arnie and I started discussing that technology, comparing notes, and looking at the healthcare delivery system: the moments of error, the big gaps where there isn't enough human attention to ensure patient safety, and the need to optimize more effectively so that we can give clinicians quality time back to spend with patients and help them do their work. All of these opportunities converged with this new technology of smart sensors and AI algorithms.
We started talking about critical scenarios in our healthcare delivery system and how we can prototype smart sensors to help. One area we identified is ICUs. There are many things that make the ICU critical, right? Our patients are fighting for their lives; our clinicians are working intensely every single minute. Any error made—negligence or honest mistakes—can cost lives.
We started talking to Intermountain Hospital in Utah and Stanford Hospital to see whether, given the busy nature of clinical work, we could pilot a project on patient care in ICUs to help clinicians document whether proper patient care (eg, oral care, mobility care) is being delivered according to protocol.
In this particular project, we installed cheap depth sensors that can collect human behavior data on patients and clinicians without infringing on their privacy, because these are not photo grabs of people's faces and identities. With that information, we can observe longitudinally, 24/7, whether proper care is being given to our patients and provide feedback to the healthcare delivery system.
The same thing is happening at Stanford Children's Hospital, where we are piloting a hand hygiene project, because we know proper hand hygiene prevents hospital-acquired infections that cost thousands of lives every year in America. Again, using these cheap sensors and a deep-learning algorithm, we can map out clinicians' hand hygiene behavior and begin to send feedback signals reminding them to perform proper hand hygiene, following the protocol established by the World Health Organization (WHO).
Topol: It's actually far-reaching, how many things you can tackle with these sensors and machine vision to promote safety and better outcomes for patients in the hospital. I noticed that Stanford University Medical Center has dedicated beds for AI, isn't that right?
Li: Yes. We are definitely very early. We are collaborating with the hospital to scale up this sensor project. It's very early, so we have no results to discuss, but it's supported by clinicians, hospital leaders, and AI researchers.
One thing that also excites me is that we are looping in ethicists, law scholars, and bioethicists to talk about the frontier challenges that come with this technology research. We want to stay on top of that and be very mindful to make sure our patients, clinicians, families, and stakeholders feel safe.
Topol: One of the things that, of course, is important for implementation of AI in medicine is for doctors and clinicians to have a comfort level and to understand the nuances of some of the issues that we've already touched on.
One group that I know, led by Pearse Keane in the United Kingdom, published a paper in which they took doctors who had never written a line of code and had no background in computer science and had them work with image sets, learning about image interpretation accuracy. Even though, obviously, we're not going to get doctors to develop the algorithms, do you think it's a good idea for doctors to have more familiarity with this?
Li: I'm not familiar with that project, but this general issue about how to work in a very interdisciplinary group setting and how to bring doctors and computer scientists on board is something I've been experiencing for the past 7-8 years. I think it's a fascinating journey.
I'm still learning, but one of the most important things I've learned is to spend time with each other and to understand the nature of each other's work, their concerns, their value proposition, and to have that patience and open-mindedness to embrace each other's field. It's not going to be a linear process.
I remember the early days when the computer scientists and the clinicians in our research team talked past each other. Even today, when new students or new members are brought on, there are going to be multiple meetings where we talk past each other.
As a computer scientist, one requirement I have for my computer science students joining our AI healthcare work is that they absolutely have to shadow doctors and nurses before they even talk about code and algorithms. They need to go into the ICU, the ward, the operating room, or the senior home to understand the human working conditions, the patients, and their families. Only then do we begin talking about the computer science problem.
Topol: You just nailed it. The other thing that you do, which is among your pluripotent contributions, is AI4ALL. You're not only trying to get the computer scientists to work with the clinicians, but also trying to develop the next generation of computer scientists. Tell us about AI4ALL.
Li: Thank you so much, Eric, for bringing that up. AI4ALL started more than 5 years ago now. That was around 2014, in the very early days of the deep-learning revolution, and the whole world, especially tech and Silicon Valley, was just lighting up with excitement and debates and concerns about this technology.
I literally just woke up one day and realized that there is a disconnect. On the one hand, people are worrying about Terminators coming next door, the machine overlords and all that. On the other hand, I personally live, every day, in a world of no diversity. My professional world of AI has very few women. Most of our technical conferences have fewer than 15% women attendees. And if you look at underrepresented minorities, we don't even have good stats, because the numbers were, and still are, so small.
I made a connection that these two things are deeply, profoundly connected. If we, as a human species, care about the future of our society because of this technology, we've got to care about who is shaping this future. If a very narrow slice of humanity is the only representation in shaping this technology and at the steering wheel, we're really going to run into a danger of this technology not representing all of us.
I had a wonderful former PhD student who, at that time, was still in the last year of her PhD study: Olga Russakovsky. She's now a professor in AI at Princeton. She and I talked about our concerns, and we completely hit it off and we said we needed to do something.
For the first couple of years, 2015-2016, we piloted a program at Stanford where we invited high-school women to come study and research AI with us for a few weeks in the summer in the AI lab. That program became so wildly successful that in 2017, with the encouragement of Melinda Gates and Jensen Huang, we launched the national nonprofit called AI4ALL.
Our mission is to educate and inspire the next generation of AI technologists and thought leaders from all walks of life. AI4ALL is now 3 years old. In summer 2019, we expanded to 11 universities in North America for summer programs serving underrepresented minority students and underserved communities, such as racial minority students, students from low-income families, rural students, and women.
We're still expanding, and our goal is really to move the needle in 10-15 years, when these students come out of their studies and make a difference in the field of technology and AI. We're seeing some early examples already.
Topol: The reason you became my hero when I was doing the research for my book, Deep Medicine, is that I learned from the leaders of the field (obviously, you're one of them) that there is a whole range of how things are presented, from the contrarians to the masters of hyperbole. What I've always adored about you, Fei-Fei, is that you strike the right balance. You call it as it is, you're a seeker of truth, and you really lay out the shortcomings.
I want to get your perspective, because this is a field that's gone through its winters and every week there's some big thing coming out, whether it's AI in medicine or even beyond that, of course. Where are we in this cycle about the hyperbole and reality for AI, especially as it pertains to medicine?
Li: This is a great question. Some people call this a bubble, but other people think it is entirely real. I think we're in bubble wrap, meaning that there is a solid core: I see this technology as having really solid potential to affect healthcare and medicine in deep, deep ways.
But there is a bubble wrap around it, which includes the hyperbole, the hype, and all this excited talk. My hope as a scientist is that the bubble wrap will burst, and we'll focus on making sure the solid core grows to the benefit of all people, especially in such areas as healthcare and medicine.
We should not use technology to create injustice, to create bias, or to amplify existing inequity. I would love to see this technology increase access and fairness and reduce all of these issues. There is opportunity if we do it right, but we really need to attend to both the human part of this technology and the technical core itself.
Topol: Fei-Fei, really, thank you so much. This has been so enjoyable and a great opportunity to get your read on this whole thing. I really appreciate it. We'll continue to follow you for all the things that you and your team are doing.
Li: Well, I continue to follow you and hear all about digital medicine news and thought leadership from you. Thank you, Eric.
Medscape © 2020 WebMD, LLC. Any views expressed above are the author's own and do not necessarily reflect the views of WebMD or Medscape. Cite this: Clinicians' 'Number-One Wish' for Artificial Intelligence - Medscape - Feb 06, 2020.