Author's Draft: Published in AI and Ethics, June 13, 2025, https://doi.org/10.1007/s43681-025-00732-6.

AI, healthcare ethics, and disability: a debate

Peter Smith and Roy Rada

Abstract

This paper presents a debate on the ethical implications of artificial intelligence in healthcare, particularly concerning its impact on disabled individuals. Roy advocates for the potential benefits of automated clinical decision-making, while Peter raises concerns about its risks and limitations for the disabled community. Through their discussion, they examine the promise and perils of AI, drawing on the scientific literature and firsthand experience, and illuminate the ethical stakes of using AI in the healthcare domain.

Introduction

Artificial intelligence (AI) is transforming healthcare, potentially improving outcomes for patients, including those with disabilities.[1] As both authors have experienced, AI raises complex ethical problems.[2] This paper presents a structured debate between Roy, who views AI as a promising tool for enhancing healthcare accessibility and decision-making for disabled individuals, and Peter, who warns of the dangers that AI may pose, including bias, loss of human interaction, and ethical dilemmas.

Roy argues that AI can empower disabled people by improving healthcare delivery and personalizing treatment. For instance, he cites AI applications in stroke medicine that improve outcomes.[3] Peter, however, cautions that reliance on AI threatens compassionate healthcare. He notes, for instance, that concepts central to the goals of ambient assisted living technologies, such as independence, self-determination, and privacy, are often used in a superficial manner in engineering.[4] The argument pits a full-speed-ahead side against a proceed-with-caution side. This paper addresses three complex subjects (disability, AI, and ethics), each of which is defined next.

Definitions of Disability, AI, and Ethics

Disabilities have always existed, but different cultures at different times have approached disability in remarkably different ways.[5] The many views on disability can be broadly classified as medical or social.[6] On the medical view, disability is a problem of the person, directly caused by a health condition that therefore requires sustained medical care. On the social view, disability is less an attribute of an individual than a condition of the social environment, one that society is responsible for minimizing. Francis and Silvers state:[7] ‘“Disability” is a term of art with different specialized meanings, each developed for the particular policy or program that uses it.’ Both authors are disabled, but in different ways. Peter had a fall at home two decades ago, broke his neck, and has been quadriplegic since. Roy was heavily irradiated for neck cancer two decades ago and suffers progressive, irreversible long-term effects of the radiation.

The European Union's AI Act was the first initiative to comprehensively regulate the development and use of AI systems at a supranational level [8] and states:[9] “AI system means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” Some claim that society has entered the third epoch of AI:[10] “AI 3.0 is the era of foundation models and generative AI.” This paper focuses on large language models (LLMs) within AI, as LLMs arguably represent the greatest recent advance in AI capabilities relevant to disability and ethics.

Ethics is the study of moral phenomena and has branches of normative ethics, metaethics, and applied ethics.[11] As AI has advanced rapidly, the guidelines and laws meant to constrain it have relied on ethical arguments. In 2024 the US National Academy of Medicine published an AI code of conduct for healthcare listing ten principles [12] that, in summary, emphasize trust. A discourse analysis [13] of guidelines for AI use in healthcare found that the guidelines treat AI as desirable and unavoidable, regard the establishment of sound principles as essential, and give trust a central role. The National Academy of Medicine has said [14], “There should be full transparency on the … quality of data used to develop AI tools. … However, algorithmic transparency may not be required for all cases.” Different scenarios may require different guidelines.

A Curious Pattern

A literature review helps illustrate the difference between the two sides of the debate, namely proceeding full speed versus proceeding cautiously. A query of the medical literature for systematic reviews about AI, disability, and ethics revealed a curious pattern: authors from the social sciences tended to support caution or regulation, while those from engineering tended to support further development.

For instance, one systematic review [15] by social scientists concluded that published research uses a narrow, medical model of disability and is overly optimistic about the abilities of AI. Its authors said that AI systems perpetuate biases and discriminate against individuals with disabilities, and that future work should shift to a social model of disability and to policy and regulatory frameworks. A systematic review by engineers of conversational agents for the cognitively impaired [16] concluded that results were promising for conversational agents in self-management, where the agents evaluate symptoms and suggest a course of action. Those authors encourage further development of such applications.

Peter and Roy are both technologists with early experience of AI.[17] The complexity of disability, AI, and ethics demands a more nuanced interpretation than simply that social scientists want regulation and technologists want automation.[18] In the case of Peter and Roy, a patient who feels his future is stable and relies heavily on the healthcare system to keep him stable worries that AI might upset the existing order. A patient who is dying of iatrogenic causes, with no stability or treatment in sight, finds an alternative order attractive.

The Debate Continued

For patients facing the end of life, dignity therapy has been shown in controlled clinical trials to be helpful.[19] During dignity therapy, staff interview the patient about plans for the end of life. After several sessions, a document is created that captures the patient's wishes, and the patient shares this with significant others and reflects on it.[20] Staff typically do not have enough time for this, but a software tool exploiting LLMs might, and for a patient like Roy, though not Peter, this would be welcome.
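
To make this concrete, the following is a minimal sketch of what such a tool might look like, assuming an OpenAI-compatible chat API; the model name, prompts, and workflow are illustrative assumptions, not a description of any validated clinical system.

```python
# Minimal sketch of an LLM-assisted dignity-therapy tool: combine interview
# transcripts into a draft legacy document for staff and patient review.
# The model name, prompts, and workflow are illustrative assumptions, not
# a validated clinical system.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You help palliative-care staff draft a dignity-therapy document. "
    "From the patient's own words about their life, values, and wishes, "
    "write a first-person draft legacy document. Do not invent details."
)

def draft_legacy_document(transcripts: list[str]) -> str:
    """Merge session transcripts and ask the LLM for a reviewable draft."""
    combined = "\n\n--- next session ---\n\n".join(transcripts)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": combined},
        ],
    )
    return response.choices[0].message.content
```

In any such design, the draft would still be reviewed and revised with the patient before being shared with significant others, preserving the human relationship on which dignity therapy depends.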

AI technologies can significantly enhance accessibility for disabled individuals. For instance, speech recognition software and AI-driven mobile applications enable better communication for those with mobility impairments or speech difficulties.[21] Roy emphasizes that these tools can facilitate easier interactions with healthcare providers and improve self-management of health conditions.[22]

Peter raises critical concerns about the inherent biases in AI algorithms. He points to research indicating that many AI systems are trained on datasets that lack diversity, potentially leading to inequitable treatment for disabled individuals.[17] He emphasizes the risk that AI may reinforce existing healthcare disparities rather than alleviate them.[23]

A key argument Peter makes is the importance of maintaining the human element in healthcare. He warns that over-reliance on AI could diminish the personal connection between patients and providers, which is especially vital for disabled individuals who may require nuanced understanding and empathy.[24] The loss of this relationship could negatively impact patient satisfaction and trust.[25]

Peter discusses the ethical implications of using AI in clinical decision-making. He raises questions about accountability when AI systems make errors or produce misleading recommendations.[26] In his research, he highlights the importance of establishing clear guidelines and accountability frameworks to ensure that AI is used responsibly in healthcare settings.[27]

Roy feels that his access to LLMs is inadequate because progress is too slow and resources for his type of case are too limited. His access is constrained to freely available public LLMs that have limited context windows and no specialized medical background. He wishes that the healthcare industry or the government would support medically specialized LLMs for patients, and that those LLMs could have full access to the patient's record. As far as Roy is concerned, issues such as privacy, trust, and reliability are overblown for his case relative to the value that he has been able to get from LLMs.
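
To illustrate the context-window constraint Roy describes, the sketch below shows one common workaround, chunked "map-reduce" summarization of a long record, again assuming an OpenAI-compatible chat API; the chunk size, model name, and prompts are rough assumptions, and a real patient-record tool would also need the privacy safeguards this sketch omits.

```python
# Minimal sketch of working around a limited context window: split a long
# patient record into chunks, summarize each chunk, then answer a question
# against the combined summaries ("map-reduce"). The model name, chunk
# size, and prompts are rough assumptions; a real tool would also need
# privacy and reliability safeguards that this sketch omits.
from openai import OpenAI

client = OpenAI()      # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o-mini"  # placeholder for whichever public model is available
CHUNK_CHARS = 8000     # crude character-based proxy for a token budget

def ask_llm(prompt: str) -> str:
    """Send one prompt to the chat API and return the text reply."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def answer_over_record(record_text: str, question: str) -> str:
    """Answer a question about a record too long for one context window."""
    chunks = [record_text[i:i + CHUNK_CHARS]
              for i in range(0, len(record_text), CHUNK_CHARS)]
    # Map: condense each chunk so the whole record fits one final prompt.
    summaries = [ask_llm("Summarize this medical-record excerpt, keeping "
                         "dates, diagnoses, and treatments:\n" + chunk)
                 for chunk in chunks]
    # Reduce: answer the question against the condensed record.
    condensed = "\n".join(summaries)
    return ask_llm(f"Condensed record:\n{condensed}\n\nQuestion: {question}")
```

The map-reduce pattern trades fidelity for coverage, which is precisely why Roy would prefer a medically specialized model with native access to his full record.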

Conclusion

Despite their varying perspectives, Roy and Peter agree that trust in AI systems is important. Roy advocates for transparency in AI algorithms to build confidence among patients and healthcare providers [14]. Peter concurs, emphasizing that trust is essential for the effective integration of AI into clinical practice, especially for vulnerable populations [28]. Both Roy and Peter recognize that most of the money spent on AI and healthcare goes to areas other than disability, but they argue that more research is needed to explore AI's role for disabled populations [29].

While the resolution to the dilemma addressed in this debate is to involve disabled patients in the development and deployment of LLMs, the conclusion is not that straightforward. The authors have shown that different ethical conclusions come from different kinds of disabled patients. For those patients who see their disability as stable and want their healthcare unchanged, the advice is 'proceed with caution with LLMs'. On the other hand, patients whose disability is progressive and irreversible may want LLMs to develop as quickly and powerfully as possible. The debate between Roy and Peter highlights the complexity of integrating AI into disability healthcare. While Roy sees significant potential for AI to enhance care and accessibility, Peter urges caution. To promote ethical AI use in healthcare for disabled people, both authors emphasize the necessity of including disabled individuals in discussions about AI development. By examining both the promise and perils, this paper contributes to the ongoing discourse on how to responsibly help disabled people with AI.

Author contributions The paper was written by both authors, who contributed equally to the manuscript.

Data availability No datasets were generated or analysed during the current study.

Declarations

Conflict of interest The authors declare no competing interests.

References

  1. Aluru, K.S., Transforming Healthcare: The Role of AI in Improving Patient Outcomes. International Journal of Machine Learning Research in Cybersecurity and Artificial Intelligence, 2023. 14(1): p. 451-479.
  2. Bellaby, R., The ethical problems of ‘intelligence–AI’. International Affairs, 2024. 100(6): p. 2525-2542.
  3. Daidone, M., S. Ferrantelli, and A. Tuttolomondo, Machine learning applications in stroke medicine: advancements, challenges, and future prospectives. Neural Regen Res, 2024. 19(4): p. 769-773 DOI: 10.4103/1673-5374.382228.
  4. Hartmann, K.V., N. Primc, and G. Rubeis, Lost in translation? Conceptions of privacy and independence in the technical development of AI-based AAL. Med Health Care Philos, 2023. 26(1): p. 99-110 DOI: 10.1007/s11019-022-10126-8.
  5. Rembis, M., C.J. Kudlick, and K. Nielsen, The Oxford Handbook of Disability History. 2018: Oxford University Press.
  6. Shakespeare, T., The social model of disability. The disability studies reader, 2006. 2(3): p. 197-204.
  7. Francis, L. and A. Silvers, Perspectives on the Meaning of "Disability". AMA J Ethics, 2016. 18(10): p. 1025-1033 DOI: 10.1001/journalofethics.2016.18.10.pfor2-1610.
  8. Castán, C.T., The legal concept of artificial intelligence: the debate surrounding the definition of AI System in the AI Act. BioLaw Journal-Rivista di BioDiritto, 2024(1): p. 305-344.
  9. European Union, Artificial Intelligence Act (Regulation (EU) 2024/1689), Official Journal version of 13 June 2024. 2024, European Union: Brussels, Belgium.
  10. Howell, M.D., G.S. Corrado, and K.B. DeSalvo, Three Epochs of Artificial Intelligence in Health Care. JAMA, 2024. 331(3): p. 242-244 DOI: 10.1001/jama.2023.25057.
  11. McCloskey, H.J., Meta-ethics and normative ethics. 2013: Springer.
  12. Adams, L., et al., Artificial Intelligence in Health, Health Care, and Biomedical Science: An AI Code of Conduct Principles and Commitments. NAM Perspectives, 2024. 9.
  13. Arbelaez Ossa, L., et al., AI Through Ethical Lenses: A Discourse Analysis of Guidelines for AI in Healthcare. Sci Eng Ethics, 2024. 30(3): p. 24 DOI: 10.1007/s11948-024-00486-0.
  14. Whicher, D., et al., Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril. 2023, National Academy of Medicine: Washington (DC).
  15. El Morr, C., et al., AI and disability: A systematic scoping review. Health Informatics J, 2024. 30(3): p. 14604582241285743 DOI: 10.1177/14604582241285743.
  16. Huq, S.M., R. Maskeliūnas, and R. Damaševičius, Dialogue agents for artificial intelligence-based conversational systems for cognitively disabled: a systematic review. Disabil Rehabil Assist Technol, 2024. 19(3): p. 1059-1078 DOI: 10.1080/17483107.2022.2146768.
  17. Smith, P. and R. Rada, Future Trends in Expert Systems in the UK, in World-Wide Expert Systems Activities and Trends. 1994, Cognizant Communication Corporation.
  18. Smith, L. and P. Smith, The ethical issues raised by the use of Artificial Intelligence products for the disabled: an analysis by two disabled people, in Ethics in Online AI-based Systems. 2024, Elsevier. p. 121-134.
  19. Wild, E., et al., Feasibility and acceptability of virtual dignity therapy for palliative care patients with advanced cancer. Journal of Pain and Symptom Management, 2024. 67(5): p. e609-e610 DOI: 10.1016/j.jpainsymman.2024.02.032.
  20. Lim, Y., Dignity and Dignity Therapy in End-of-Life Care. J Hosp Palliat Care, 2023. 26(3): p. 145-148 DOI: 10.14475/jhpc.2023.26.3.145.
  21. Joshi, K., et al. Cognitive-Chair: AI based advanced Brain Sensing Wheelchair for Paraplegic/Quadriplegic people. in 2022 4th International Conference on Artificial Intelligence and Speech Technology (AIST). 2022. IEEE.
  22. Wald, M., AI data-driven personalisation and disability inclusion. Frontiers in artificial intelligence, 2021. 3: p. 571955.
  23. Whittaker, M., et al., Disability, bias, and AI. AI Now Institute, 2019. 8.
  24. Mainz, J.T., Medical AI: is trust really the issue? J Med Ethics, 2024. 50(5): p. 349-350 DOI: 10.1136/jme-2023-109414.
  25. Kostick-Quenet, K., et al., Trust criteria for artificial intelligence in health: normative and epistemic considerations. J Med Ethics, 2024. 50(8): p. 544-551 DOI: 10.1136/jme-2023-109338.
  26. David, P., H. Choung, and J.S. Seberger, Who is responsible? US Public perceptions of AI governance through the lenses of trust and ethics. Public Underst Sci, 2024. 33(5): p. 654-672 DOI: 10.1177/09636625231224592.
  27. Kumar, M., et al. AI-Enhanced Wheelchair Solutions for Spinal Cord Injury (SCI) Individuals. in 2024 International Conference on Signal Processing, Computation, Electronics, Power and Telecommunication (IConSCEPT). 2024. IEEE.
  28. Hatherley, J.J., Limits of trust in medical AI. J Med Ethics, 2020. 46(7): p. 478-481 DOI: 10.1136/medethics-2019-105935.
  29. Coeckelbergh, M., Health care, capabilities, and AI assistive technologies. Ethical theory and moral practice, 2010. 13: p. 181-190.
