Abstract
One of the key challenges in the successful deployment and meaningful adoption of AI in healthcare is health system-level governance of AI applications. Such governance is critical not only for patient safety and accountability by a health system, but also to foster clinician trust, improve adoption, and facilitate meaningful health outcomes. In this case study, we describe the development of such a governance structure at University of Wisconsin Health (UWH) that provides oversight of AI applications from assessment of validity and user acceptability through safe deployment with continuous monitoring for effectiveness. Our structure leverages a multi-disciplinary steering committee along with project-specific sub-committees. Members of the committee formulate a multi-stakeholder perspective spanning informatics, data science, clinical operations, ethics, and equity. Our structure includes guiding principles that provide tangible parameters for endorsement of both initial deployment and ongoing usage of AI applications. The committee is tasked with ensuring the principles of interpretability, accuracy, and fairness across all applications. To operationalize these principles, we provide a value stream for applying the principles of AI governance at different stages of clinical implementation. This structure has enabled effective clinical adoption of AI applications. Effective governance has provided several outcomes: (1) a clear institutional structure for oversight and endorsement; (2) a path towards successful deployment that encompasses technological, clinical, and operational considerations; (3) a process for ongoing monitoring to ensure the solution remains acceptable as clinical practice and disease prevalence evolve; (4) incorporation of guidelines for the ethical and equitable use of AI applications.
About This Research
Publisher: Frontiers in Digital Health
Year: 2022
Type: Case Study
Citations: 61
Relevance
Industries: Education, Healthcare
Pillars: AI Compliance & Regulation, AI Governance & Risk Management, Board & Executive Oversight
Use Cases: Knowledge Management & Search
Regions: Southeast Asia
Vendor Evaluation and Procurement Governance
Clinical AI procurement decisions carry implications that extend far beyond standard technology purchasing considerations. The framework establishes mandatory evaluation criteria encompassing algorithmic transparency requirements, bias testing documentation, clinical validation evidence quality, post-market surveillance commitments, and contractual provisions for model updates and performance guarantees. Procurement governance panels include mandatory clinical representation to ensure that purchasing decisions reflect patient safety priorities alongside financial and operational considerations.
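The mandatory criteria above can be sketched as a simple gating check. This is a minimal illustration, not the framework's actual tooling: the criterion names paraphrase the text, and the sign-off logic is an assumption about how a procurement panel with mandatory clinical representation might record its decision.

```python
# Hypothetical procurement gate: every mandatory criterion must be evidenced,
# and clinical representatives must sign off, before a vendor advances.
MANDATORY_CRITERIA = [
    "algorithmic_transparency",
    "bias_testing_documentation",
    "clinical_validation_evidence",
    "post_market_surveillance_commitment",
    "model_update_and_performance_guarantees",
]

def evaluate_vendor(evidence: dict, clinical_signoff: bool):
    """Return (advance, missing). A vendor advances only when every mandatory
    criterion has supporting evidence and the clinical members sign off."""
    missing = [c for c in MANDATORY_CRITERIA if not evidence.get(c)]
    return (not missing and clinical_signoff), missing
```

The point of modeling the criteria as hard gates rather than weighted scores is that no amount of financial or operational upside can offset a missing patient-safety requirement.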
Continuous Calibration Monitoring
Unlike traditional medical devices that maintain consistent performance characteristics throughout their operational lifespan, AI systems exhibit performance drift as patient populations evolve, clinical practices change, and upstream data collection procedures are modified. The framework implements continuous calibration monitoring through statistical process control methods adapted from manufacturing quality assurance, triggering mandatory clinical review when performance metrics exceed control limits. This proactive surveillance approach detects degradation substantially earlier than periodic manual auditing.
Equity Auditing and Demographic Stratification
Clinical AI systems frequently exhibit performance disparities across patient demographic groups, reflecting biases in training data composition and feature engineering assumptions. The governance framework mandates quarterly equity audits that disaggregate diagnostic accuracy, treatment recommendation appropriateness, and risk stratification performance by age bracket, gender identity, ethnic background, socioeconomic indicators, and insurance coverage status. Identified disparities trigger remediation protocols ranging from targeted supplemental training data acquisition to temporary deployment restrictions pending model recalibration.
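The quarterly audit described above amounts to stratifying a performance metric by demographic group and flagging disparities. The sketch below uses simple accuracy and a fixed gap threshold purely for illustration; the actual metrics (diagnostic accuracy, risk stratification performance) and remediation thresholds would come from the governance framework, not this code.

```python
# Illustrative equity audit: disaggregate accuracy by demographic group and
# flag groups trailing the best-performing group by more than a set gap.
from collections import defaultdict

def stratified_accuracy(records):
    """records: iterable of (group, prediction, outcome) tuples.
    Returns {group: accuracy} over the audit period."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, pred, outcome in records:
        totals[group] += 1
        hits[group] += int(pred == outcome)
    return {g: hits[g] / totals[g] for g in totals}

def flag_disparities(per_group, max_gap=0.05):
    """Groups whose accuracy trails the best group by more than max_gap;
    in the framework these would trigger remediation protocols."""
    best = max(per_group.values())
    return [g for g, acc in per_group.items() if best - acc > max_gap]
```

In a real audit the same disaggregation would be repeated across each stratification axis (age bracket, gender identity, ethnic background, socioeconomic indicators, insurance status), with intersectional strata where sample sizes permit.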