Even before COVID-19, 40% of physicians said they felt burned out. But the pandemic was a tipping point. Working in jury-rigged PPE in overcrowded, understaffed ICUs, more than 3,600 U.S. healthcare workers died in the first year of the pandemic alone. After bearing witness to the lonely deaths of some 1 million patients, holding the phone as they shared their final minutes with family members via FaceTime, more doctors are deciding to retire early, exacerbating a looming shortage. A report last year by the Association of American Medical Colleges predicted a shortage of up to 124,000 physicians by 2034. That includes a gap of as many as 48,000 primary care physicians, who report higher levels of burnout than other specialties. And it’s not just doctors: In a January 2022 survey by Prosper Insights & Analytics, just 50% of all healthcare workers said they were “happy” at work.
Happiness won’t return overnight. Staffing gaps will take time to fill. But in the meantime, proponents say, artificial intelligence (AI) could help ease the burden on maxed-out MDs. “We need to turn every physician into a super-physician,” says Farzad Soleimani, an assistant professor of emergency medicine at Baylor College of Medicine and a partner at Houston VC firm 1984 Ventures. “At the end of the day, what clinicians do is learn to recognize patterns. That’s the power of AI.”
Of course, there are doubters. An April 2019 Medscape survey of 1,500 doctors across Europe, Latin America, and the U.S. found that a majority were anxious or uncomfortable with AI, with U.S. physicians expressing the most skepticism (49%). Relying on algorithms for patient care also raises ethical, clinical, and legal concerns. Developers may unknowingly introduce biases into AI algorithms or train them on flawed or incomplete datasets. Data used to train AI systems could be vulnerable to hacking. And by turning over aspects of decision-making to machines, physicians could lose their traditional autonomy and authority, while notions of liability will be tested should AI-guided recommendations result in patient harm.
Nevertheless, healthcare AI companies—including nearly 500 early-stage startups—raised a record $12 billion in funding last year, according to CB Insights. Here are just a few ways that tech companies are using deep-learning algorithms and natural language processing to automate routine tasks in hospitals, cut down hours medical providers spend on paperwork, and reduce mistakes caused by fatigue.
Speeding up pre-visit evaluations
Managing patients and preventing provider burnout starts before care recipients even show up to the office or hospital. San Francisco-based Health Note streamlines patient intake with a text-based AI chatbot that collects patient information pre-visit and automatically writes up notes for their doctor, reducing intake and documentation time by up to 90%, according to the company. Decoded Health—a spinoff of SRI International, the nonprofit research organization that developed the tech behind the computer mouse, ultrasound, and Siri—offers what it calls a “virtual medical resident” that prescreens patients using natural language processing, creating a summary of their medical complaints with actionable care recommendations. Keona Health focuses on helping nurses and non-medical staff conduct triage over the phone, guiding them through symptom checking, offering care recommendations, and automating appointment scheduling.
Helping with triage
When the ER gets slammed, AI triage tools are designed to help identify patients who need critical care and might otherwise be missed, prioritizing the most serious cases. The first major clinical application of AI triage tools has been in radiology; companies including RapidAI, Viz.ai, and Arterys all have FDA approval for algorithms that detect signs of strokes, brain bleeds, and pulmonary embolisms from CT scans. Imagen’s FDA-approved OsteoDetect analyzes wrist X-rays to detect distal radius fractures, one of the most common wrist injuries. Mednition’s real-time triage-guidance tool, KATE, analyzes EHR data and patient vitals collected at intake to help emergency nurses spot warning signs of sepsis, which accounts for more than 50% of hospital deaths. It is being used throughout the Adventist Health system and others to head off ER admissions through earlier treatment. ERs run by Johns Hopkins University are using Stocastic’s TriageGO, which analyzes vital signs and other intake data, along with patient demographics and medical history, to make rapid care recommendations, reducing “door to decision” time by up to 30 minutes.
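To give a sense of the idea, here is a toy sketch of vitals-based triage scoring. It is not any vendor’s actual algorithm; real tools like KATE and TriageGO use trained models, and the thresholds and weights below are invented for illustration.

```python
# Toy illustration of rule-based triage scoring from intake vitals.
# All thresholds and weights are invented, not clinical guidance.

def triage_priority(heart_rate, systolic_bp, spo2, temp_c):
    """Return a coarse priority: 1 (see immediately) to 3 (routine)."""
    score = 0
    if heart_rate > 120 or heart_rate < 45:  # tachycardia or bradycardia
        score += 2
    if systolic_bp < 90:                     # possible shock
        score += 2
    if spo2 < 92:                            # low oxygen saturation
        score += 2
    if temp_c > 39.0:                        # high fever
        score += 1
    if score >= 3:
        return 1  # critical: flag for immediate attention
    if score >= 1:
        return 2  # urgent: abnormal vitals, see soon
    return 3      # routine

# A patient with a racing heart and low blood pressure is flagged first.
print(triage_priority(heart_rate=130, systolic_bp=85, spo2=95, temp_c=37.0))  # -> 1
```

Production systems learn these cutoffs from outcome data rather than hard-coding them, which is what lets them surface patients a fixed checklist would miss.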
Transcribing doctors’ notes
A recent study found that physicians spend an average of about 16 minutes on electronic health records for each patient visit. DeepScribe is a voice-based digital assistant that allows a doctor to have a normal conversation with their patient, transcribing it, pulling out key information, and automatically fitting it into the proper sections of the medical records. In January 2021, the San Francisco-based startup raised $30 million. Competitors include Nuance, Suki, and Corti.
There’s also Rad AI’s Omni software, a virtual assistant designed specifically for radiologists that helps write a formal “clinical impression” based on dictated notes, automatically inserting guideline recommendations and spotting potential errors.
Managing the billing process
“When folks talk about staffing shortages in healthcare they often think of nurses, doctors, and frontline care staff, but the issue is organization-wide,” says Cat Afarian, VP of communications at South San Francisco-based Akasa, a provider of AI services for healthcare operations. According to recent surveys by the Healthcare Financial Management Association, more than 57% of health systems and hospitals have more than 100 open back-office roles to fill in billing, registration, and scheduling. A recent survey by Change Healthcare found that 65% of healthcare leaders are already applying AI in their “revenue cycle management,” and 98% anticipate doing so by 2023.
Akasa provides services for more than 475 hospitals and health systems and 8,000-plus outpatient facilities in all 50 states, using a constantly learning AI system to help them automate insurance claim status checks, billing, and collections. Privia Health provides scheduling and billing tools for some 3,300 independent physicians, using robotic process automation, in which an intelligent system learns a scripted process for handling repetitive billing tasks the way a human would.
Aiding with testing
Lab tests shape roughly two-thirds of decisions made by physicians. Before COVID-19, medical lab professionals performed some 13 billion lab tests a year. In a February 2020 survey by the American Society for Clinical Pathology, more than 85% of medical lab workers reported burnout; 36.5% complained of inadequate staffing. That was before the additional burden of conducting well over 900 million COVID-19 tests since the pandemic began. Many hospital labs are running with 10% to 35% staff vacancies.
Automating repetitive work could let fewer people do more, and perhaps improve outcomes, too. In a 13-month pilot, the University of Texas Medical Branch hospital in Galveston used Biocogniv’s “laboratory intelligence platform” to help process more than 325,000 COVID-19 tests and make personalized interpretations based on PCR and antibody testing, patient vitals, and medical history. The result: a near doubling in efficiency, lower rates of escalation to intensive care, and reduced mortality rates. “COVID-19 was a time of immense change both clinically and operationally,” says Peter McCaffrey, an assistant professor of pathology at the hospital and director of its pathology informatics and laboratory information systems. “With Biocogniv’s platform, we were able to scale interpretation and guidance for COVID and coordinate everyone during this time of unprecedented uncertainty.” In the company’s pipeline: laboratory-based prediction tools for sepsis, respiratory failure, and acute heart failure.
Whether AI proves itself in each of these areas or not, there is no turning back. “By minimizing or offloading repetitive diagnostic tasks, [AI can help] physicians devote more time to sophisticated clinical reasoning and judgment, and inherently human work such as engaging with multidisciplinary care teams to support patient care,” says Mark Schuster, a pediatrician and founding dean and CEO of the Kaiser Permanente Bernard J. Tyson School of Medicine in Pasadena, Calif. In addition to addressing physician specialty shortages in areas like radiology, where AI has proven highly accurate, Schuster anticipates that clinical care algorithms will become more powerful, with “a gradual increase in precision and personalization of diagnosis and treatment.” Still, he acknowledges the potential danger that AI could reinforce biases that already exist in the healthcare system. “We recognize,” he says, “that there remains substantial risk for unmeasured biases to be introduced through machine-learning in AI.”