In case you missed it, there is a fascinating (and I don’t use the term lightly) post on the WSJ CIO blog by Thomas H. Davenport, a Research Fellow at the MIT Center for Digital Business. The piece recaps early impressions gathered from teams working with IBM’s Watson in various health-care settings. Davenport notes that while these groups have different working models with Watson, they are all early adopters, “in the sense of providing critical domain knowledge to early Watson implementations.”
So what are these teams on the front line of machine cognition reporting back to us? Here are some highlights…
1: The experience has left them convinced that Watson is part of the future of their work. While the amount of effort has been much higher than they expected, all agree that health care will be much different in the future because of machines such as Watson. As Dr. Mark Kris, an oncologist at Memorial Sloan Kettering Cancer Center, commented: “It’s been a lot more complex, and taken a lot more time, than we had thought. But this is the way medicine is going to be practiced.” Jeff Margolis, chairman and CEO of Welltok (developer of the CaféWell Health Optimization Platform), noted that “Watson learns quickly from the corpus [body of knowledge that informs its recommendations] and doesn’t forget. It has spatial awareness and temporal understanding. It’s an amazing technology.”
2: It takes more effort than they expected. As Davenport notes:
Several of the early Watson implementations were “moon shots”—highly ambitious and complex projects that would be difficult to accomplish using any technology. MSKCC’s attempt to train Watson to know how to treat lung cancer, and MDACC’s work to help improve the quality of care for cancer patients with no access to cancer specialists (starting with a solution for acute leukemia), certainly qualify for this term. MDACC actually referred to its project as a moon shot, as its goal was to build a virtual expert called MD Anderson Oncology Expert Advisor (or OEA) that is trained to not only support guideline-based and expert-recommended therapy decisions, but also share MD Anderson specialist’s experience in managing a specific type of cancer patient to maximize treatment benefits and improve outcome. In other words, it aims to share both the clinical evidence as well as the “art” of cancer care.
It’s not surprising that these difficult projects have taken a while to implement. MSKCC started working on its project in February of 2012, and it’s still going (although IBM has already made the Watson Oncology Advisor based on MSKCC’s training available to hospitals in Thailand and India). In a way, such projects will never be finished, as new knowledge about cancer is always being published. Watson can continue to learn over time, which makes it a good technology for this purpose.
3: Watson forces one to rethink work. Dr. Steve Alberts, who is leading a Watson project at the Mayo Clinic that matches patients with clinical trials, explained how his team had to go back to square one in some aspects of their interaction with Watson: “Our researchers had an intuitive understanding of clinical trials. But when the IBM engineers were putting the knowledge into Watson, they had to ask for many clarifications and clearer wording…We realized our eligibility criteria were sometimes a bit ambiguous.”
4: Watson needs to be trained. Davenport notes that in some ways, the doctors had to approach Watson almost as they would a medical student. Both Dr. Kris at MSKCC and Dr. Lynda Chin at the University of Texas MD Anderson Cancer Center make this same point. “It’s an apprenticeship form,” says Dr. Kris of his experience, “that takes years — there are lots of subtleties that Watson has to learn.” A team at LifeLearn, which was working with Watson on veterinary issues, echoed those words: “We had to start the process with teaching Watson what a dog was.” But the LifeLearn team soon learned that there is one big difference when teaching Watson: “In about five months, we’ve taught Watson the approximate curriculum equivalent of a four-year veterinary medical degree. But we’re not stopping there.”
5: Not everything Watson needs to work is always available. As Davenport notes:
…the problem comes when the needed knowledge isn’t in the corpus. Dr. Kris at MSKCC comments: “We had three drugs approved in lung cancer this year. None of them are in the literature yet. And definitions of cancer and its variations are being redefined all the time as we understand the biological characteristics of each one. The science is changing more rapidly than the published literature.”
So it’s not just a matter of feeding Watson some text. MSKCC and several other early adopters have had to employ human experts to create many question/answer pairs. At LifeLearn, for example, they’ve involved 70 veterinarians and vet technicians to create over 55,000 question/answer pairs. At MDACC, the oncology solution is designed to incorporate “expert recommendations” to address the gap between the speed at which knowledge advances and the time it takes for that knowledge to be codified into online guidelines.
6: Watson itself is evolving. Davenport notes that when these early projects started there was only one type of Watson — the machine that famously won on Jeopardy. As of today, however, there are approximately 32 different cognitive APIs offered under the Watson banner, with more being developed every day.
7. Automation can’t be the goal — at least not in 2016. Davenport notes that none of the teams he spoke with believed that Watson would “put anybody out of work. The human cancer applications at MSKCC and MDACC will advise, but not replace, oncologists.”
In addition to the points above, the early adopters noted that, as with any new technology, organizational and process changes have to occur in order to realize value — even from a system as smart as Watson.
All in all, the impression I got from Davenport’s piece was that Watson as a technology is still in its infancy — it’s working as a kind of super-aggregator tasked with improving search and pattern recognition in support of existing work models. Indeed, after finishing the piece I went over to the Watson home page and spent some time looking through some of the APIs and “apps” posted there.
Interestingly, what I found matched my impressions from the article. There is an app for finding a school for your kids in NYC, one for picking stocks, and one for predicting election results. In general, it looks as if Watson is being asked to crunch a lot of data and then give it back to us in filtered, relevant ways. That’s all well and good, but I wonder how this will change as Watson’s power increases and, more importantly, as work is designed around Watson and not vice versa. This second point is the more intriguing, as Mike Rhodin’s response to the question of training length implies:
“People ask me why it takes Watson a few years to learn oncology,” said Mike Rhodin, the head of IBM’s Watson business unit. “But I ask them how long does it take a human to learn it? The oncology leaders we are working with have spent decades learning what they know, so a few years for Watson seems reasonable.”
Rhodin’s point is valid and suggestive at the same time. What he’s saying in a roundabout way is that Watson is just a student at this point — not a doctor or engineer. Give it time to “graduate” and start putting its power to work, and we will see what it can really do and change. If he’s right — and I think he is — then in the not too distant future, I can imagine new organizations being drawn up that are built not to be “supported” by Watson but quite the opposite: to support Watson in its way of working. It may sound far-fetched, but this already happens today. Radiology in its broadest sense is as much about improving and harnessing the power of scanning computers as it is about training doctors to make diagnoses. Likewise, pilots and captains today are trained as much to understand and support their ship’s computer systems as to operate the machines themselves. We ourselves will soon cede driving to our cars, admitting at that moment that they will probably be better at it than anyone not named Schumacher or Senna. I think this evolution will get here sooner than we imagine. Watson may be a student at this point, but he, and the other machines that will follow his lead, are going to learn fast.
When I brought up the preceding point to a good friend of mine, who happens to be a neuroradiologist, she reminded me of the recently announced findings on the AirAsia crash. In that case, it seems that the pilots were unable to understand the signals the aircraft was sending them. Her point, of course, was that one of the risks of having Watson take point in medicine would be something like the AirAsia scenario, in which Watson’s human counterparts would simply be unable to override its conclusions from lack of practice. Furthermore, one can easily imagine the lawsuits that would emerge should a human doctor override Watson incorrectly. Watson’s diagnosis would, in essence, become the “gold standard” in courts, and it would be a brave doctor indeed who would be willing to go against it. All to say that while I think Watson gives us a glimpse of the workplace of the future, a lot of issues will have to be worked out to get there.
In the meantime, the Watson experiment continues, and the more I think about it, the more I am reminded of what Ken Jennings said when he lost to Watson on Jeopardy. His reaction was as enigmatic as it was funny. “I, for one,” said Jennings, “welcome our new computer overlords.” Davenport’s article makes me think that we are not only welcoming them, we are training them as well. Let’s hope we do a good job.