How to Successfully Implement Artificial Intelligence in Healthcare

By Samantha McGrail

Implementation of artificial intelligence (AI) in healthcare can be transformative, providing opportunities to improve patient outcomes, reduce costs, and impact population health. But it is vital to proceed with caution and approach healthcare AI in a thoughtful and balanced way, a recent National Academy of Medicine (NAM) report found.

In the report, “Artificial Intelligence in Health Care: The Hope, The Hype, The Promise, The Peril,” NAM members representing Boston-based Harvard Medical School, Rochester, Minnesota-based Mayo Clinic, OptumLabs, and Epic, among many others, described the challenges of implementing AI in healthcare and outlined what providers must do to implement AI successfully in their health systems.

Data models need wider adoption to support AI tool development, deployment, and maintenance, the report stated. To resolve these issues, all stakeholders and the wider healthcare community must advocate for policy, regulatory, and legislative mechanisms that improve data collection and increase transparency around how patient health data can best be utilized while balancing financial incentives.

Widely recognized inequities in health outcomes have worsened because of social determinants of health. Consumer-facing technologies have exacerbated the problem in other domains and risk doing so in healthcare as well.

Ethical healthcare, equity, and inclusivity should be clearly stated goals when developing AI in healthcare. This will help ensure population-representative datasets and give priority to inclusion and equity in health and healthcare. Because AI tools can scale so broadly, existing inequalities may intensify: an error by a single human or organization has far less impact than a flaw embedded in AI technologies deployed nationally or globally. In addition, AI tool sustainability should be evaluated to understand whether various deployment environments could affect equity and inclusivity.

In the healthcare space, transparency is key to building trust among users and stakeholders. In AI, there should be complete transparency on the composition, semantics, provenance, and quality of data used to develop AI tools, the report stated. AI developers, implementers, users, and regulators should define guidelines for clarifying the level of transparency needed across a spectrum, NAM members stressed.

Most importantly, a definitive separation of data, algorithmic, and performance reporting in AI dialogue, and the development of guidance in each of these spaces is vital, they added.

Human-centered tools should be at the forefront of near-term AI implementation. This includes understanding that human override is important for trust, because machine error is unacceptable to most individuals. The near-term focus should be to promote, develop, and evaluate tools that support humans rather than replace them with full automation.

Although AI is expected to change the medical domain, education must expand to teach individuals about AI tools and data science. This expansion must be multidisciplinary, engaging healthcare leadership, clinical teams, AI experts, humanists, ethicists, and patients. Retraining programs are essential to address workforce shifts, and consumer health education programs will help inform consumers when selecting healthcare applications.

Furthermore, the AI community must develop a framework for implementation and maintenance that incorporates existing best practices in ethical inclusivity, software development, implementation science, and human–computer interaction. The framework should be tied to targets and objectives, and AI tool costs should be weighed against use case needs.

AI regulators should be flexible, and the report suggested a graduated approach to the regulation of AI based on the level of patient risk, the level of AI autonomy, and how static or dynamic certain AI tools are predicted to be. Regulators should also adopt post-market surveillance mechanisms to ensure high-quality performance, engaging experts to continuously evaluate AI for clinical effectiveness and safety based on real-world data.

“The wisest guidance for AI is to start with real problems in healthcare, explore the best solutions by engaging relevant stakeholders, frontline users, patients, and their families, and implement and scale the ones that meet our Quintuple Aim: better health, improved care experience, clinician well-being, lower cost, and health equity throughout,” the report stated.
