
Artificial intelligence is changing how healthcare research and clinical care are delivered, but the most important questions are not only technical. They are also ethical, legal, and social. As AI tools move from research settings into healthcare environments, clinicians, researchers, and administrators need better ways to evaluate how these systems affect access, fairness, transparency, trustworthiness, and performance. A recent conference on the ethical, legal, and social implications of AI frames the challenge clearly: healthcare AI must be guided by “the right ethical questions” if it is going to be developed and used more equitably in care and research.
A group of experts from UC San Diego, UC San Diego Health, and UCLA examines what responsible AI implementation requires across the healthcare landscape. Safiya U. Noble, Ph.D. (UCLA), frames the ethical questions surrounding artificial intelligence in healthcare. Karandeep Singh, M.D., M.M.Sc. (UC San Diego Health), addresses clinical applications of AI, including bias, fairness, and transparency. Cinnamon Bloss, Ph.D. (UC San Diego), focuses on youth mental health and conversational AI. Farinaz Koushanfar, Ph.D. (UC San Diego), explains privacy-preserving computation on patient data. Camille Nebeker, Ed.D., M.S. (UC San Diego), examines what it means to create an ethically sourced health data repository for training machine learning and AI systems.
Together, these sessions point to a central issue: healthcare AI cannot be judged only by whether it works. It must also be evaluated by how it is built, whose data it uses, who benefits, who may be harmed, and whether patients and clinicians can understand and trust the systems shaping care. The course objectives emphasize fairness, accountability, patient autonomy, bias mitigation, patient privacy, human-centered care, and interdisciplinary decision-making as core parts of safe and effective AI implementation.
That broader view matters because healthcare decisions are deeply human. AI systems may help organize information, identify patterns, or support clinical work, but their use raises questions about privacy, consent, equity, and the patient-clinician relationship. Responsible implementation requires attention to the quality of the data behind these tools, the design choices built into algorithms, and the real-world settings where AI systems are used.
The future of healthcare AI depends on more than innovation. It depends on building systems that protect patient data, reduce bias, support clinical judgment, and earn trust. By bringing together perspectives from ethics, engineering, clinical practice, public health, and social science, these sessions help clarify what is at stake as AI moves from possibility into practice.