Why Healthcare Needs the Strictest AI Governance
Healthcare sits at the intersection of three high-stakes factors: sensitive personal data, life-and-death decisions, and heavy regulation. AI errors in healthcare can cause direct harm to patients: a miscalculated dosage, a missed diagnosis, or a leaked medical record carries consequences far more severe than a comparable failure in most other industries.
At the same time, AI has enormous potential to improve healthcare: faster diagnosis, more accurate treatment planning, reduced administrative burden, and better patient outcomes. The goal of AI governance in healthcare is not to prevent AI use — it is to ensure that AI improves care safely and responsibly.
Regulatory Landscape
Singapore
Health Sciences Authority (HSA)
- HSA regulates AI-powered medical devices under the Health Products Act
- AI software that diagnoses, monitors, or treats medical conditions is classified as a medical device
- Requires registration and conformity assessment before clinical use
Ministry of Health (MOH)
- MOH Licensing Terms and Conditions require healthcare institutions to have appropriate governance for technology use
- National Electronic Health Record (NEHR) guidelines govern data handling in healthcare AI
PDPA (Singapore)
- Patient data is personal data subject to full PDPA protection
- Healthcare institutions must obtain consent for AI processing of patient data
- The "legitimate interests" exception may apply in some clinical contexts but requires careful assessment
IMDA AI Governance Framework
- Provides principles-based guidance applicable to healthcare AI
- Emphasises transparency, accountability, and human oversight
Malaysia
Medical Device Authority (MDA)
- AI-powered medical devices are regulated under the Medical Device Act 2012
- Requires conformity assessment and establishment registration
Ministry of Health (MOH Malaysia)
- Governs healthcare facility licensing and standards
- Any AI use in clinical settings must align with healthcare facility requirements
PDPA (Malaysia)
- Health data is "sensitive personal data" under PDPA
- Requires explicit consent for processing, with limited exceptions
- Higher protection standards apply
Malaysian Medical Council (MMC)
- Medical practitioners remain responsible for clinical decisions, even when assisted by AI
- Ethical obligations require informed consent when AI is used in patient care
AI Use Cases in Healthcare
Administrative AI (Lower Risk)
| Use Case | Risk Level | Key Controls |
|---|---|---|
| Appointment scheduling | Low | Standard data privacy |
| Medical billing and coding | Low-Medium | Accuracy verification, audit trail |
| Administrative email drafting | Low | No patient data in prompts |
| Training material creation | Low | Clinical accuracy review |
| Meeting summarisation | Low-Medium | No patient identifiers |
Clinical Support AI (Higher Risk)
| Use Case | Risk Level | Key Controls |
|---|---|---|
| Clinical documentation assistance | Medium | Human review, no auto-submission |
| Literature review and research | Medium | Source verification, expert review |
| Patient communication drafts | Medium-High | Clinical review, empathy check |
| Diagnostic image analysis | High | Clinician oversight, regulatory approval |
| Treatment recommendation support | High | Clinician decision authority, evidence review |
| Drug interaction checking | High | Validated database, pharmacist oversight |
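These tiers lend themselves to policy-as-code: encoding the register in the tooling that fronts AI access keeps the controls consistent across teams. A minimal sketch in Python, with a hypothetical `RISK_REGISTER` and `required_controls` helper whose entries simply mirror a few rows of the tables above:

```python
# Hypothetical policy-as-code sketch: encode the risk tiers so an AI gateway
# or intake tool applies the same controls everywhere. Names are illustrative.
from enum import Enum


class Risk(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


# Hypothetical register mirroring a few rows of the tables above.
RISK_REGISTER: dict[str, tuple[Risk, list[str]]] = {
    "appointment_scheduling": (Risk.LOW, ["standard data privacy"]),
    "clinical_documentation": (Risk.MEDIUM, ["human review", "no auto-submission"]),
    "diagnostic_image_analysis": (Risk.HIGH, ["clinician oversight", "regulatory approval"]),
}


def required_controls(use_case: str) -> list[str]:
    """Look up mandatory controls; unknown use cases fail closed."""
    if use_case not in RISK_REGISTER:
        raise ValueError(f"Unregistered use case {use_case!r}; governance review required")
    return RISK_REGISTER[use_case][1]
```

Failing closed on unregistered use cases matters more than the lookup itself: it forces new AI uses through governance review rather than defaulting them to the lowest tier.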
Prohibited AI Uses
The following AI uses must be prohibited without exception:
- Autonomous clinical decision-making without human oversight
- Processing identifiable patient data through non-approved AI tools
- AI-generated diagnoses presented to patients without clinician review
- Using free/consumer AI tools for any task involving patient information
- AI-based triage without clinical oversight and validation
Healthcare AI Governance Framework
Principle 1: Patient Safety First
Every AI deployment in healthcare must be evaluated against one primary question: could this harm a patient? If the answer is yes, or even maybe, additional safeguards are mandatory before proceeding.
Requirements:
- Clinical AI must have mandatory human oversight for all patient-affecting outputs
- AI outputs must be clearly identified as AI-generated, not presented as clinician opinions
- Fail-safe mechanisms must exist for AI system outages or errors (a minimal fallback pattern is sketched after this list)
- Regular safety reviews must assess real-world AI performance against clinical standards
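The fail-safe requirement deserves a concrete shape: when the AI service fails, the system should degrade to the existing manual workflow, not guess. A minimal sketch, assuming a hypothetical `ai_client.summarise` call and a `manual_queue` hand-off:

```python
# Illustrative fail-safe wrapper: if the AI service errors out or times out,
# the task drops to the existing manual workflow instead of blocking care.
import logging

logger = logging.getLogger("clinical_ai")


def summarise_with_failsafe(note: str, ai_client, manual_queue) -> str | None:
    """Try the AI summariser; on any failure, route the note to manual handling."""
    try:
        # Hypothetical client call; a timeout makes outages surface quickly.
        return ai_client.summarise(note, timeout=10)
    except Exception as exc:  # outage, timeout, malformed response, etc.
        logger.warning("AI summariser unavailable (%s); queued for manual work", exc)
        manual_queue.put(note)  # fail safe: a human processes the note
        return None
```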
Principle 2: Data Protection
Healthcare data is the most sensitive category of personal data. AI governance must ensure:
- Minimum necessary data: Only the data required for the AI task should be used
- De-identification by default: Wherever possible, use de-identified or anonymised data (a de-identification and audit-logging sketch follows this list)
- Access controls: Strict role-based access to AI tools that process patient data
- Audit trails: Complete logging of who accessed what data through AI tools
- Consent management: Clear processes for obtaining and recording patient consent for AI use
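The de-identification and audit-trail requirements can both be enforced at the single point where text leaves the institution. A minimal sketch assuming Singapore-style NRIC/FIN identifiers; the regex patterns and `audit_log` shape are illustrative only, and a production deployment would pair them with a validated de-identification service:

```python
# Minimal de-identification and audit-logging sketch. Patterns are
# illustrative; real deployments need a validated de-ID pipeline.
import re
from datetime import datetime, timezone

# NRIC/FIN-style identifiers (e.g. S1234567A) and MRN-style record numbers.
NRIC_PATTERN = re.compile(r"\b[STFGM]\d{7}[A-Z]\b")
MRN_PATTERN = re.compile(r"\bMRN[-: ]?\d{6,10}\b", re.IGNORECASE)

audit_log: list[dict] = []  # in practice: an append-only, access-controlled store


def deidentify(text: str) -> str:
    """Replace known identifier patterns with placeholder tokens."""
    return MRN_PATTERN.sub("[MRN]", NRIC_PATTERN.sub("[NRIC]", text))


def send_to_ai(user_id: str, text: str, ai_call) -> str:
    """De-identify by default, and record who sent what, and when."""
    clean = deidentify(text)
    audit_log.append({
        "user": user_id,
        "sent_at": datetime.now(timezone.utc).isoformat(),
        "chars_sent": len(clean),
        "identifiers_removed": clean != text,
    })
    return ai_call(clean)  # ai_call is the approved enterprise AI client (assumed)
```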
Principle 3: Clinical Accountability
- AI does not replace clinical judgment — it supports it
- The treating clinician remains accountable for all clinical decisions, including those informed by AI
- AI recommendations must be documented in the patient record alongside the clinician's decision (a minimal record structure is sketched after this list)
- Clinicians must be trained to critically evaluate AI outputs and override when appropriate
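Documenting the AI recommendation alongside the clinician's decision implies a record structure in which the two are separate, attributable fields. A minimal sketch; the field names are illustrative, not drawn from any records standard:

```python
# Illustrative record structure: the AI recommendation and the clinician's
# decision are stored side by side, never merged, so accountability stays clear.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class ClinicalDecisionRecord:
    patient_ref: str          # internal reference, not a direct identifier
    ai_tool: str              # which approved system produced the recommendation
    ai_recommendation: str    # verbatim AI output, labelled as AI-generated
    clinician_id: str
    clinician_decision: str   # the decision actually taken
    overrode_ai: bool         # explicit flag whenever the clinician disagreed
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
```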
Principle 4: Transparency
- Patients should be informed when AI is used in their care
- Healthcare institutions should publish their AI use policies
- AI-generated content in patient records should be labelled as AI-assisted
- Clinicians should be able to explain how AI influenced a recommendation
Implementation Checklist for Healthcare Organisations
At minimum, the checklist should cover five areas:
- Governance structure
- Policies and standards
- Risk and safety
- Data protection
- Training
Practical Controls for Common Healthcare AI Scenarios
Scenario: Using AI to draft clinical letters
Controls:
- Patient identifiers must not be entered into the AI tool (see the guard sketched after this list)
- Use clinical summary language, not raw patient data
- Clinician must review and approve every AI-drafted letter
- AI-generated text must be reviewed for clinical accuracy
- Final letter is the clinician's responsibility
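A simple way to enforce the first control is a pre-submission guard that fails closed: rather than scrubbing a prompt that contains identifiers, it blocks the request and asks the author to rewrite. A sketch with illustrative patterns (NRIC/FIN numbers, dates of birth, MRN-style numbers):

```python
# Illustrative guard: reject a draft-letter prompt outright if it appears to
# contain patient identifiers. A real deployment would use a broader,
# validated pattern set (names, addresses, contact numbers, and so on).
import re

IDENTIFIER_PATTERNS = [
    re.compile(r"\b[STFGM]\d{7}[A-Z]\b"),         # NRIC/FIN-style numbers
    re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),         # full dates of birth
    re.compile(r"\bMRN[-: ]?\d{6,10}\b", re.I),   # medical record numbers
]


class IdentifierFound(ValueError):
    """Raised when a prompt appears to contain a patient identifier."""


def check_letter_prompt(prompt: str) -> str:
    """Fail closed: block the request rather than silently cleaning it."""
    for pattern in IDENTIFIER_PATTERNS:
        if pattern.search(prompt):
            raise IdentifierFound(
                "Prompt appears to contain a patient identifier; "
                "rewrite it using clinical summary language."
            )
    return prompt
```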
Scenario: Using AI for medical literature research
Controls:
- Use approved enterprise AI tools only
- All AI-cited references must be verified against primary sources (a partial automation of this check is sketched after this list)
- AI outputs are research aids, not clinical evidence
- Document AI use in research methodology
- Clinical decisions based on research must involve clinician judgment
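Part of the verification step can be automated before the manual check: confirming that each DOI an AI tool cites actually resolves to a real record. A minimal sketch against Crossref's public works endpoint; note that a DOI existing says nothing about whether the citation supports the claim, so human comparison against the primary source is still required:

```python
# Illustrative DOI check via Crossref's public REST API. A 200 response means
# the DOI exists; anything else is treated as unverified.
import requests


def doi_exists(doi: str) -> bool:
    """True only if Crossref returns a record for this DOI."""
    try:
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    except requests.RequestException:
        return False  # network failure: fail closed and flag for manual review
    return resp.status_code == 200


# Placeholder DOI list; in practice, extract DOIs from the AI tool's output.
for doi in ["10.1234/placeholder-doi"]:
    status = "resolves in Crossref" if doi_exists(doi) else "UNVERIFIED: check manually"
    print(doi, "->", status)
```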
Scenario: AI-assisted diagnostic imaging analysis
Controls:
- AI tool must have HSA (Singapore) or MDA (Malaysia) regulatory approval
- AI outputs are advisory — radiologist/clinician makes the final determination
- Both AI output and clinician decision are recorded
- Regular accuracy audits comparing AI outputs to clinician diagnoses (a minimal audit calculation follows this list)
- Patient informed when AI is used in their diagnostic process
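The audit control reduces to a periodic comparison of logged AI findings against the clinician's final reads. A minimal sketch of the agreement calculation, with an illustrative case-record format:

```python
# Illustrative audit: agreement rate between logged AI findings and the
# clinician's final determination.
def agreement_rate(cases: list[dict]) -> float:
    """Fraction of cases where the AI finding matched the final diagnosis."""
    if not cases:
        return 0.0
    matched = sum(1 for c in cases if c["ai_finding"] == c["final_diagnosis"])
    return matched / len(cases)


# Illustrative log entries; real audits would stratify by modality and finding.
cases = [
    {"ai_finding": "nodule", "final_diagnosis": "nodule"},
    {"ai_finding": "normal", "final_diagnosis": "nodule"},  # AI miss: escalate for review
    {"ai_finding": "nodule", "final_diagnosis": "nodule"},
]
print(f"AI/clinician agreement: {agreement_rate(cases):.0%}")  # prints 67%
```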