The global pandemic has accelerated the adoption of AI software and hardware in healthcare. However, AI technologies in healthcare are not simply “plug and play.” These technologies raise several ethical and logistical challenges:
- Privacy – How will the patient be protected from unwanted intrusions into their personal domain?
- Data management – How will the data be encrypted and protected so only healthcare professionals who are part of the patient’s care will have access to Personal Health Information (PHI)? How will data management comply with the Health Insurance Portability and Accountability Act (HIPAA)?
- Consent – How will smart healthcare devices be limited to performing only the health measurements consented to by the patient, without producing unnecessary extraneous data?
- Bias – Will the dataset used to train the AI system(s) be inclusive of different patient populations to prevent bias?
- Practice implications – Will the implementation of AI change the way the physician views their patient? Will using AI increase the liability of clinical staff members who make a decision based on their clinical judgment that conflicts with AI recommendations?
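One way to make the bias question above concrete is to audit how well each demographic subgroup is represented in a training dataset before the model is built. The sketch below is illustrative only: the `ancestry` field, the sample records, and the 5% threshold are assumptions, not part of any specific system.

```python
from collections import Counter

def audit_representation(records, attribute, min_share=0.05):
    """Flag subgroups whose share of the dataset falls below min_share.

    records: list of dicts describing training examples; `attribute` is a
    demographic field such as "sex" or "ancestry" (field names are
    illustrative). Returns {subgroup: (count, share, flagged)}.
    """
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {
        group: (n, n / total, n / total < min_share)
        for group, n in counts.items()
    }

# Illustrative dataset: one subgroup is badly under-represented.
data = (
    [{"ancestry": "European"}] * 90
    + [{"ancestry": "African"}] * 8
    + [{"ancestry": "East Asian"}] * 2
)
report = audit_representation(data, "ancestry")
```

A check like this catches only missing representation, not subtler label or measurement bias, but it is a cheap first gate before training begins.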
The solutions to these issues are multifaceted and will likely evolve over time. Transparency among healthcare providers, the companies developing these technologies, and the federal government is critical. HIPAA defines 18 identifiers that constitute PHI and must remain private. End-to-end encryption must be standardized to prevent data breaches in which attackers obtain patients’ PHI and other data that can be used for identity theft. A standard consent process should be established so that patient autonomy is respected.
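To illustrate how HIPAA identifiers can be handled in practice, the sketch below pseudonymizes a few identifier fields with salted one-way hashes. Everything here is an assumption for illustration: the field names, the record, and the salt are hypothetical, only a subset of the 18 identifiers is covered, and hashing alone does not satisfy HIPAA de-identification without following the Safe Harbor or Expert Determination method.

```python
import hashlib

# Subset of HIPAA's 18 identifiers, for illustration only; a production
# system must handle all 18 under a recognized de-identification method.
IDENTIFIER_FIELDS = {"name", "address", "phone", "email", "ssn", "mrn"}

def deidentify(record, salt):
    """Return a copy of `record` with identifier fields replaced by
    salted one-way hashes, so records can still be linked across
    datasets without exposing the underlying PHI."""
    out = {}
    for field, value in record.items():
        if field in IDENTIFIER_FIELDS:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            out[field] = digest[:16]  # truncated pseudonym
        else:
            out[field] = value
    return out

# Hypothetical patient record; "I10" is an ICD-10 code for hypertension.
patient = {"name": "Jane Doe", "mrn": "12345", "diagnosis": "I10"}
clean = deidentify(patient, salt="per-study-secret")
```

The salt should be a per-study secret so that the same name hashed in two unrelated datasets yields different pseudonyms, limiting re-identification by linkage.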

While the aim is for AI systems to operate independently, they will still need human oversight. Patients should be involved as early as possible while these technologies are being developed: AI tools designed with patient input from the outset succeed more often than those that ignore the patient’s experience during the design phase. The data used to train AI systems must represent a diverse patient population, incorporating a variety of sexes, genders, genetic ancestries, socioeconomic backgrounds, and geographies. Just as laws such as HIPAA (1996), GINA (2008), HITECH (2009), and the No Surprises Act (2022) were passed to regulate healthcare in the United States, new laws should regulate AI, preferably before these technologies are released for clinical use. Clinical healthcare providers must be trained in the social implications of incorporating AI technologies into routine patient care. AI has the potential to cause a massive paradigm shift in healthcare, and providers must be prepared for how it will change their clinical practice.
The role of technology in healthcare will continue to expand as patient loads increase. AI and ML tools have many applications, and the literature documents their benefits. However, multiple stakeholders must work cooperatively to overcome the ethical and legal challenges that accompany the exciting promises of these technological advances.
About CMT
CMT offers combined expertise in laboratory, technology solutions, and HUB services to help both patients and physicians. We are the market leader in molecular diagnostic patient access, offering prior authorizations, benefits investigation, notifications, and more. Learn more about our products and how we can help your business today!