Healthcare Leaders and the Ethical Use of Artificial Intelligence



Approved by the Board of Governors Dec. 8, 2025.

Statement of the Issue

Artificial intelligence holds extraordinary potential to transform patient care and enhance operational efficiency at hospitals, health systems and other healthcare organizations. However, the use of AI also could cause or exacerbate ethical issues that negatively affect patients, communities, providers and the organizations themselves. Accordingly, healthcare leaders should consider the ethical concerns that could arise through their organizations’ use of AI and establish clear, values-based guidelines that ensure safe, equitable care for all.

 

Policy Position

AI use must always be subject to human judgment and oversight to avoid bias, misuse and inadvertent harm. Some of that judgment and oversight comes in the form of guidelines and processes that leaders put in place to ensure positive outcomes. With that in mind, healthcare leaders overseeing AI systems should establish ethical policies, guidelines and processes that promote fairness, equity, accessibility, accountability, inclusivity and transparency. In addition, to the extent practicable, leaders should assess the likelihood of bias in any use of AI and implement appropriate controls to mitigate that risk.

To help develop the appropriate policies, guidelines and processes, healthcare leaders might consider the following actions:

  • Engage a wide range of stakeholders—from developers to providers to patient advocate groups to community members—throughout the AI lifecycle to identify and address ethical concerns that may arise. The lifecycle includes early design phases as well as post-implementation evaluations and feedback loops.
  • Establish a multidisciplinary governance committee tasked with developing clear guidelines for AI selection, deployment and monitoring. The committee should consist of representatives from throughout the organization—including clinical, operational, technical, legal, compliance, human resources and patient advocacy teams—to oversee implementation of AI systems, ensure adherence to ethical standards, and review AI projects for alignment with organizational values and objectives.
  • Develop use cases for AI and map out scenarios to identify potential risks.
  • When selecting an AI vendor, work only with those that openly and transparently share their data sources, algorithm design processes and bias mitigation efforts.
  • Conduct an ethics evaluation of the AI system at regular intervals (such as every three months), from the beginning stages of development through implementation and during everyday use. Reviews should assess ethical risks and benefits; algorithmic performance across diverse populations; and effects on patient safety, privacy, autonomy and provider-patient relationships.
  • Ensure that data generated by the AI system does not exacerbate or create biases. Data should be reviewed for representativeness and relevance to the populations served. AI systems also should be evaluated for differential performance across race, gender, age, geography, disability status, language and other relevant dimensions.
  • Establish training about AI ethics, biases and limitations for all employees involved in AI-related work.
  • Regularly review and update any AI-related internal policies or guidelines to reflect technological advancements, legal developments, emerging best practices in AI ethics and safety, and feedback from patients, providers and community members.
  • Develop procedures for informing patients when AI systems are involved in their care decisions. The level of disclosure should be guided by the impact of the AI system on decision-making and the patient’s right to autonomy. In cases where AI recommendations significantly influence diagnosis, treatment or prioritization, patients should be notified in clear, understandable terms, and consent should be considered.
  • Develop tailored ethical guidelines for certain AI applications, such as diagnostic AI systems, clinical decision support tools, predictive analytics for clinical deterioration and administrative tasks such as billing, scheduling and staffing.
  • Ensure that board members are aware of the risks as well as the benefits of using AI, and consider including AI experts on the board to tap into their expertise and guidance.
  • Consider engaging an ethicist who can provide specific guidance on issues that could arise unintentionally through the use of AI.
  • Proactively address AI bias and equity concerns through regular equity impact assessments of AI systems, testing AI with diverse population data, monitoring for disparate impacts across demographic groups, taking corrective action when bias is detected, ensuring diverse representation in AI development and oversight, and engaging the community in AI governance.
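As a minimal illustration of the differential-performance monitoring described above, the sketch below compares a model's accuracy across demographic groups and flags any group that trails the best-performing one by more than a chosen threshold. The group labels, the 5-percentage-point gap threshold and the sample records are assumptions for this sketch, not part of the policy; a real audit would use the organization's own metrics, populations and thresholds.

```python
# Sketch of a disparate-impact check: accuracy by demographic group,
# flagging groups that fall notably behind the best-performing group.
# Group names, threshold, and data are illustrative assumptions only.

from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, prediction, actual) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, pred, actual in records:
        total[group] += 1
        if pred == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

def flag_disparities(acc, max_gap=0.05):
    """Return groups whose accuracy trails the best group by more than max_gap."""
    best = max(acc.values())
    return sorted(g for g, a in acc.items() if best - a > max_gap)

# Illustrative evaluation of a hypothetical screening model on two groups.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1), ("group_b", 0, 1),
]
acc = accuracy_by_group(records)   # group_a: 1.0, group_b: 0.5
print(flag_disparities(acc))       # prints ['group_b']
```

In practice, this kind of check would be run at each scheduled ethics review and across all the dimensions the policy names (race, gender, age, geography, disability status, language), with flagged gaps triggering the corrective-action process.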

December 2025