Abstract
As the efficacy of artificial intelligence (AI) in improving aspects of healthcare delivery becomes increasingly evident, it is likely that AI will be incorporated into routine clinical care in the near future. This promise has led to growing focus and investment in AI medical applications from both governmental organizations and technology companies. However, concerns have been expressed about the ethical and regulatory aspects of applying AI in health care. These concerns include the possibility of bias, lack of transparency in certain AI algorithms, privacy concerns with the data used to train AI models, and safety and liability issues with AI applications in clinical environments. While there has been extensive discussion about the ethics of AI in health care, there has been little dialogue about, and few recommendations for, how to practically address these concerns. In this article, we propose a governance model that aims not only to address the ethical and regulatory issues arising from the application of AI in health care, but also to stimulate further discussion about governance of AI in health care.
About This Research
Publisher: Journal of the American Medical Informatics Association
Year: 2019
Type: Applied Research
Citations: 775
Source: A governance model for the application of AI in health care
Relevance
Industries: Government, Healthcare
Pillars: AI Change Management & Training, AI Governance & Risk Management, AI Security & Data Protection
Use Cases: Personalization & Recommendations
Tiered Accountability Architecture
The governance model establishes distinct accountability layers that correspond to organizational hierarchies within healthcare institutions. Board-level governance committees oversee strategic AI priorities and resource allocation, while clinical governance teams evaluate individual AI applications against evidence-based performance criteria. Operational oversight falls to dedicated AI stewardship roles—a new function the model recommends establishing within health informatics departments—responsible for continuous monitoring, incident reporting, and stakeholder communication regarding deployed AI systems.
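The three tiers described above can be sketched as a simple escalation map. This is a minimal illustration, not the paper's implementation: the tier names follow the section, but the issue taxonomy and routing function are hypothetical.

```python
# Hypothetical escalation routing across the model's three accountability tiers.
GOVERNANCE_TIERS = {
    "strategic": "board-level governance committee",
    "clinical": "clinical governance team",
    "operational": "AI stewardship role",
}

def route_issue(issue_type: str) -> str:
    """Map an issue category to the accountable governance tier.

    The issue categories here are illustrative assumptions; the paper
    defines the tiers and their responsibilities, not this taxonomy.
    """
    routing = {
        "resource_allocation": "strategic",    # strategic priorities, budgets
        "performance_evidence": "clinical",    # evidence-based evaluation
        "incident_report": "operational",      # stewardship incident handling
        "monitoring_alert": "operational",     # continuous monitoring output
    }
    tier = routing.get(issue_type)
    return GOVERNANCE_TIERS[tier] if tier else "unclassified"
```

In practice such a map would sit behind an incident-management system, so that monitoring alerts and user-reported incidents reach the AI stewardship role rather than being scattered across departments.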
Bias Detection and Demographic Calibration
Healthcare AI systems frequently exhibit performance disparities across demographic groups, a phenomenon with potentially life-threatening consequences in clinical settings. The governance model mandates systematic bias auditing using stratified validation datasets that reflect the demographic composition of the served patient population. When performance metrics diverge beyond predefined thresholds across age, gender, ethnicity, or socioeconomic strata, the model triggers mandatory remediation protocols including model recalibration, supplemental training data acquisition, or temporary deployment restrictions.
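The stratified audit described above can be sketched in a few lines. This is a minimal sketch under assumptions the paper does not specify: recall (sensitivity) as the audited metric, a flat record schema of `(group, y_true, y_pred)` tuples, and a single absolute divergence threshold.

```python
from collections import defaultdict

def stratified_bias_audit(records, threshold=0.05):
    """Flag demographic groups whose recall diverges from the overall
    recall by more than `threshold`.

    `records` is an iterable of (group, y_true, y_pred) tuples — a
    hypothetical schema; the paper prescribes stratified auditing,
    not this metric or format.
    """
    def recall(rows):
        positives = [r for r in rows if r[1] == 1]
        if not positives:
            return None  # no positive cases in this stratum to evaluate
        return sum(1 for r in positives if r[2] == 1) / len(positives)

    overall = recall(records)
    by_group = defaultdict(list)
    for r in records:
        by_group[r[0]].append(r)

    # Any group outside the tolerance band triggers remediation review.
    flagged = {}
    for group, rows in by_group.items():
        g = recall(rows)
        if g is not None and abs(g - overall) > threshold:
            flagged[group] = g
    return overall, flagged
```

A flagged group would then enter the remediation protocols the model mandates: recalibration, supplemental training data, or a temporary deployment restriction while the disparity is investigated.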
Lifecycle Governance and Decommissioning Protocols
Unlike many existing frameworks that focus exclusively on pre-deployment evaluation, this model extends governance oversight across the complete AI lifecycle. Post-deployment monitoring encompasses automated performance drift detection, periodic clinical revalidation studies, and structured feedback collection from end users. Crucially, the model includes explicit decommissioning criteria and transition protocols, ensuring that AI systems whose performance degrades below acceptable thresholds are retired safely without disrupting clinical workflows or compromising continuity of care.
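The automated drift detection and decommissioning criteria above can be sketched as a sliding-window monitor. The window size, tolerance, and accuracy metric here are illustrative assumptions; the model requires drift detection and explicit retirement thresholds but does not prescribe these parameters.

```python
from collections import deque

class DriftMonitor:
    """Post-deployment drift check: compares accuracy over a sliding
    window of recent predictions against a validation baseline.

    The parameters are illustrative, not taken from the paper.
    """
    def __init__(self, baseline_accuracy, window=100, tolerance=0.10):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, correct):
        self.outcomes.append(1 if correct else 0)

    def status(self):
        # Withhold judgment until a full window has accumulated.
        if len(self.outcomes) < self.outcomes.maxlen:
            return "warming_up"
        acc = sum(self.outcomes) / len(self.outcomes)
        if acc < self.baseline - self.tolerance:
            # Degradation beyond tolerance: trigger clinical revalidation
            # and, if it persists, the decommissioning protocol.
            return "revalidate"
        return "ok"
```

A "revalidate" status would feed the structured revalidation studies the model describes; sustained failure against the acceptance threshold would then invoke the transition protocols so the system is retired without disrupting continuity of care.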