You’re reading the web edition of STAT Health Tech, our guide to how tech is transforming the life sciences. Sign up to get this newsletter delivered to your inbox every Tuesday and Thursday.
YouTube’s next move against misinformation
YouTube’s health division is stepping up its effort to combat medical misinformation, with a particular focus on cancer treatment and prevention. The Google-owned video-sharing company unveiled a new plan Tuesday to remove content that promotes harmful or ineffective cancer treatments, or discourages viewers from getting professional medical treatment.
Garth Graham, the head of YouTube Health, told STAT the company is focusing on cancer because it’s an area where misinformation is prevalent and particularly damaging. “If a video was to claim garlic cured cancer, or you should drink this special water or take vitamin C instead of radiation therapy — that’s the kind of content we will be removing,” he said. On a broader level, Graham added, YouTube is trying to stake out a policy that will be clear to viewers and content creators, so it can be applied to other medical domains where the scientific evidence is stable and clear, and misinformation is especially harmful.
Machine learning gaps in medicine
Ashley Beecly, a cardiologist at New York Presbyterian Hospital, started her talk at the Machine Learning for Health Care conference on Saturday with a revelatory question: “How many people here have implemented an AI-based model that’s patient-facing?” Only a smattering of hands went up in an audience that included many of the most advanced practitioners in the field.
I attended the two-day conference at Columbia University, which dove deeply into that implementation gap, exploring the problems that must be overcome for AI to begin to make a real difference in clinical care. Participants discussed ways to make data flow better and improve the fairness of AI models. They also zeroed in on the need to focus less on an AI model’s technical performance than whether it can be combined with a set of interventions that actually improve patient outcomes.
“I love AUCs just like I’m sure everyone here does,” said Beecly, referring to a commonly referenced measure of AI model performance. But for models to reach the bedside, she said, they must get buy-in from clinicians who can help build the right user interface, craft interventions, and test for safety, effectiveness, and fairness.
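For readers unfamiliar with the metric: AUC (area under the ROC curve) summarizes how well a model ranks patients who have a condition above those who don’t, independent of any decision threshold. A minimal sketch with hypothetical patient data (the labels and scores below are invented for illustration, not drawn from any model discussed in this newsletter):

```python
from itertools import product

def auc(y_true, y_score):
    """Area under the ROC curve computed via its pairwise-ranking
    interpretation: the fraction of (positive, negative) pairs in
    which the positive case receives the higher score (ties = 0.5)."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p, n in product(pos, neg))
    return wins / (len(pos) * len(neg))

# Hypothetical risk scores for five patients (1 = disease present).
y_true = [0, 0, 1, 1, 1]
y_score = [0.1, 0.4, 0.35, 0.8, 0.9]
print(auc(y_true, y_score))  # 5 of 6 pairs ranked correctly -> ~0.833
```

An AUC of 1.0 means perfect ranking and 0.5 means chance — which is exactly why, as Beecly notes, a strong AUC alone says nothing about whether a deployed model changes patient outcomes.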
BMS turns to a Viz.ai algorithm to detect disease
As Bristol Myers Squibb works to build its drug Camzyos into a blockbuster, it has backed a Viz.ai algorithm designed to help find more people affected by the heart condition it treats, called hypertrophic cardiomyopathy, or HCM. As part of a multi-year agreement between the companies, BMS provided funding and scientific input for the development of the algorithm. The algorithm, which received FDA clearance this month, analyzes 12-lead electrocardiograms collected during routine care and flags suspected cases of HCM for further assessment.
“BMS has a vested interest in finding more patients with HCM,” said Matthew Martinez, a cardiologist who leads Viz.ai’s HCM medical advisory board. “Why? Because they’re so philanthropic? No, they want to sell more drugs. They want to help more people by identifying more people.” STAT’s Mario Aguilar has more on the algorithm and how BMS plans to use it.
How to boost local oversight of AI
AI researchers at Duke and the University of Michigan published new recommendations for improving local oversight of AI tools. Their paper in Nature Machine Intelligence outlines four interventions:
- Create standards for implementation of AI models to monitor performance over time and collect input from clinicians and patients. The researchers suggested the Centers for Medicare and Medicaid Services could require adherence to the standards as a condition of payment, as it does in other domains such as antimicrobial stewardship.
- Invest in IT systems and data infrastructure to enable pre-implementation testing of AI models, and provide training for caregivers using or overseeing them.
- Require public reporting of AI model performance so their safety, effectiveness, and fairness can be assessed across sites.
- Implement centralized evaluation of AI models beyond the FDA approval process. Many models currently in use are not reviewed by the FDA, creating a large gap in oversight of tools that work within electronic health records.
The latest industry deals and launches
- Amazon Pharmacy, the online prescription drug business launched by the tech giant in 2020, introduced automatic coupons for savings on insulin and other diabetes care products. The company says many brands of insulin will now be available starting at $35 a month.
- iCad, a maker of AI tools for detecting cancer, said it is expanding its partnership with Google Health to distribute Google’s AI mammography model. Google’s breast cancer detection tool will now be available to iCad’s customers within the company’s ProFound Breast Health Suite, which also features 2D and 3D mammography models developed by iCad.
- Vanderbilt University Medical Center has launched virtual nursing care in a recovery unit for heart patients. It enables nurses working in a conference room to monitor patients who agree to a high-resolution camera feed. The idea is to keep a closer watch on patients than is possible with periodic visits to the bedside.
- Cigna struck a deal with Virgin Pulse to provide personalized digital health services to its members. The platform, available through the myCigna portal, lets users set health goals and track progress by linking to personal devices, such as the Apple Watch. Based on the data, Cigna can connect them with programs such as pre-diabetes management and behavioral health services.
- The Office of the National Coordinator for Health IT said it will give $2 million to support data sharing projects. The agency will support Health eLink, a group of providers seeking to use FHIR data standards to improve the exchange of clinical information. It will also fund efforts by Boston Children’s Hospital to assess the quality and usefulness of USCDI data elements, or the core pieces of information that form the backbone of government interoperability rules.