AI ETHICS IN HEALTH CARE

Artificial intelligence (AI) is rapidly entering health care and taking on major roles, from automating drudgery and routine tasks in medical practice to managing patients and medical resources. As developers create AI systems to take on these tasks, several risks and challenges emerge, including the risk of injury to patients from AI system errors, the risk to patient privacy from data acquisition and AI inference, and more. Potential solutions are complex but involve investment in infrastructure for high-quality, representative data; collaborative oversight by the Food and Drug Administration and other health-care actors; and changes to medical education that will prepare providers for shifting roles in an evolving system.

Artificial intelligence (AI), which encompasses the fields of machine learning and natural language processing, may be applied to virtually any area of medical practice, medical education, and health-care delivery, and its potential contributions to medicine appear limitless. With its strong ability to integrate and learn from massive sets of clinical data, AI can play a role in diagnosis, clinical decision-making, and personalized medicine. For instance, AI-based diagnostic algorithms applied to mammograms are assisting in the detection of breast cancer, serving as a "second opinion" for radiologists. Additionally, advanced virtual human avatars are capable of participating in meaningful conversations, which has implications for the diagnosis and treatment of psychiatric illness.

This theme issue of Ethics aims to address a number of the ethical dilemmas that arise when AI technology is employed in health care and medical education. Among the most pressing issues it raises are the additional risks to patient privacy and confidentiality, the delineation of boundaries between the physician's and the machine's roles in patient care, and the adjustment of future physician education to proactively confront the coming changes in medical practice. In addition, dialogue on these considerations can improve medical practitioner and patient understanding of the role AI will play in health care, serving stakeholders by developing a realistic sense of what AI can and cannot do. Finally, physicians who use AI technology in their practice will benefit from anticipating potential ethical pitfalls, identifying feasible solutions, and making policy recommendations.

One major theme addressed in this issue is how to balance the benefits and risks of AI technology. There is a benefit to be gained by rapidly integrating AI technology into the health-care system, as AI offers the opportunity to improve the efficiency of health-care provision and the quality of patient care.

Risks and Challenges

While AI offers a number of possible benefits, it also carries several risks:

Injuries and error: The most obvious risk is that AI systems will sometimes be wrong, and that patient injury or other health-care problems may result. If an AI system recommends the wrong drug for a patient, fails to notice a tumor on a radiological scan, or allocates a hospital bed to one patient over another because it predicted wrongly which patient would benefit more, the patient could be injured. Of course, many injuries occur due to medical error in the health-care system today, even without the involvement of AI. AI errors are potentially different for at least two reasons. First, patients and providers may react differently to injuries resulting from software than from human error. Second, if AI systems become widespread, an underlying problem in one AI system might result in injuries to thousands of patients—rather than the limited number of patients injured by any single provider’s error.
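
To make the scale point concrete, here is a minimal back-of-envelope sketch; all figures are illustrative assumptions, not data from this article. It compares the reach of one clinician's errors with the reach of a single defect replicated across every deployment of one AI system.

```python
# Illustrative back-of-envelope comparison (all numbers are assumptions).
# A single clinician's mistakes are bounded by their own patient panel;
# a defect in one widely deployed AI system touches every patient it scores.

PATIENTS_PER_PROVIDER = 2_000       # assumed annual panel for one clinician
PROVIDER_ERROR_RATE = 0.01          # assumed rate of harmful errors

DEPLOYMENTS = 500                   # assumed hospitals running the same model
PATIENTS_PER_DEPLOYMENT = 10_000    # assumed patients scored per hospital
DEFECT_ERROR_RATE = 0.01            # the same error rate, now shared by all sites

provider_harm = PATIENTS_PER_PROVIDER * PROVIDER_ERROR_RATE
system_harm = DEPLOYMENTS * PATIENTS_PER_DEPLOYMENT * DEFECT_ERROR_RATE

print(f"One provider: ~{provider_harm:.0f} patients affected per year")
print(f"One shared defect: ~{system_harm:.0f} patients affected per year")
```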

Data availability: Training AI systems requires large amounts of data from sources such as electronic health records, pharmacy records, insurance claims records, or consumer-generated information like fitness trackers or purchasing history. But health data are often problematic. Data are typically fragmented across many different systems. Even aside from the variety just mentioned, patients typically see different providers and switch insurance companies, leading to data split in multiple systems and multiple formats. This fragmentation increases the risk of error, decreases the comprehensiveness of datasets, and increases the expense of gathering data—which also limits the types of entities that can develop effective health-care AI.
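
As a concrete illustration of the fragmentation problem, the hypothetical sketch below links records for the same patient arriving from two systems in different formats. All field names and the matching rule are invented for illustration; real record linkage is far harder, contending with typos, missing identifiers, and format drift.

```python
# Hypothetical sketch: the same patient appears in two source systems with
# different field names and formats, and must be linked before the data can
# be used to train an AI model. All field names are invented.

from datetime import date

ehr_record = {"mrn": "A-1001", "dob": "1984-07-02", "dx": ["E11.9"]}
claims_record = {"member_id": "Z99", "birth_date": date(1984, 7, 2),
                 "procedures": ["99213"]}

def normalize_dob(value):
    """Coerce both date formats to an ISO string so records can be compared."""
    return value.isoformat() if isinstance(value, date) else value

def link(ehr, claims):
    """Naive linkage on date of birth alone -- deliberately fragile, to show
    how easily fragmented data produces wrong or missed merges."""
    if normalize_dob(ehr["dob"]) == normalize_dob(claims["birth_date"]):
        return {**ehr, **claims}
    return None

merged = link(ehr_record, claims_record)
print(merged)  # one patient, stitched together from two partial views
```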

Privacy concerns: Another set of risks arises around privacy. The need for large datasets creates incentives for developers to collect such data from many patients. Some patients may be concerned that this collection violates their privacy, and lawsuits have been filed based on data-sharing between large health systems and AI developers. AI can also implicate privacy in another way: it can predict private information about patients even though the algorithm never received that information. (Indeed, this is often the goal of health-care AI.) For instance, an AI system might be able to identify that a person has Parkinson’s disease based on the trembling of a computer mouse, even if the person had never revealed that information to anyone else (or did not know it themselves). Patients might consider this a violation of their privacy, especially if the AI system’s inference were available to third parties, such as banks or life insurance companies.
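
The mouse-tremor example can be made concrete with a deliberately simplified sketch: a classifier trained on cursor-movement features outputs a disease probability the patient never disclosed. The features, labels, and data below are synthetic assumptions, not a validated diagnostic method.

```python
# Hypothetical sketch: inferring an undisclosed condition from behavioral
# signals. The synthetic "tremor" features loosely stand in for cursor
# telemetry; nothing here is a real clinical model.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000
has_condition = rng.integers(0, 2, size=n)   # ground truth the patient never shared
tremor_amplitude = rng.normal(loc=has_condition * 0.8, scale=1.0, size=n)
pause_variance = rng.normal(loc=has_condition * 0.5, scale=1.0, size=n)
X = np.column_stack([tremor_amplitude, pause_variance])

model = LogisticRegression().fit(X, has_condition)

# The system now assigns a disease probability from mouse behavior alone --
# information the person never revealed and may not even know themselves.
new_user = np.array([[1.2, 0.9]])
print(f"Inferred probability of condition: {model.predict_proba(new_user)[0, 1]:.2f}")
```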

Bias and inequality: There are risks involving bias and inequality in health-care AI. AI systems learn from the data on which they are trained, and they can incorporate biases from those data. For instance, if the data available for AI are principally gathered in academic medical centers, the resulting AI systems will know less about—and therefore will treat less effectively—patients from populations that do not typically frequent academic medical centers. Similarly, if speech-recognition AI systems are used to transcribe encounter notes, such AI may perform worse when the provider is of a race or gender underrepresented in training data.
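
One standard way to surface this kind of bias is a subgroup audit: evaluate the model separately on each population and compare error rates. The sketch below assumes predictions from some fitted binary classifier and a demographic grouping column; all names and numbers are illustrative.

```python
# Hypothetical sketch of a subgroup audit: accuracy is computed per group so
# that a model that looks fine in aggregate cannot hide a performance gap for
# patients underrepresented in its training data.

import numpy as np

def subgroup_accuracy(y_true, y_pred, groups):
    """Return accuracy for each subgroup present in `groups`."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {g: float((y_pred[groups == g] == y_true[groups == g]).mean())
            for g in np.unique(groups)}

# Toy predictions: the model does worse on the underrepresented group "B".
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 1, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

for group, acc in subgroup_accuracy(y_true, y_pred, groups).items():
    print(f"group {group}: accuracy {acc:.2f}")
```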

Professional realignment: Longer-term risks involve shifts in the medical profession. Some medical specialties, such as radiology, are likely to shift substantially as much of their work becomes automatable. Some scholars are concerned that the widespread use of AI will erode human knowledge and capacity over time, such that providers lose the ability to catch and correct AI errors and to develop medical knowledge further.

The nirvana fallacy: One final risk bears mention. AI has the potential for tremendous good in health care. The nirvana fallacy posits that problems arise when policymakers and others compare a new option to perfection, rather than the status quo. Health-care AI faces risks and challenges. But the current system is also rife with problems. Doing nothing because AI is imperfect creates the risk of perpetuating a problematic status quo.

Possible Solutions

There are several ways to address the potential risks of health-care AI:

Data generation and availability: Several risks arise from the difficulty of assembling high-quality data in a manner consistent with protecting patient privacy. One set of potential solutions turns on government provision of infrastructural resources for data, ranging from setting standards for electronic health records to directly providing technical support for high-quality data-gathering efforts in health systems that otherwise lack those resources. A parallel option is direct investment in the creation of high-quality datasets. Reflecting this direction, both the United States’ All of Us initiative and the U.K.’s Biobank aim to collect comprehensive health-care data on huge numbers of individuals. Ensuring effective privacy safeguards for these large-scale datasets will likely be essential to ensuring patient trust and participation.
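
As one example of the kind of safeguard such datasets need, the sketch below pseudonymizes a direct identifier with a keyed hash before records are pooled. This is a minimal illustration of one technique, not a description of how All of Us or Biobank actually operate; real de-identification also involves quasi-identifiers, re-identification risk analysis, and governance.

```python
# Minimal sketch of pseudonymization before pooling records into a shared
# research dataset. A keyed hash replaces the identifier, so the same patient
# links consistently across contributions without exposing who they are.
# This illustrates one technique only; real pipelines do far more.

import hashlib
import hmac

SECRET_KEY = b"replace-with-a-managed-secret"   # assumption: held by a trusted party

def pseudonymize(patient_id: str) -> str:
    """Derive a stable pseudonym from a patient identifier with a keyed hash."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

record = {"patient_id": "MRN-004217", "age_band": "40-49", "dx": "E11.9"}
shared = {**record, "patient_id": pseudonymize(record["patient_id"])}
print(shared)  # identifier replaced; clinical fields preserved for research
```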

Quality oversight: Oversight of AI-system quality will help address the risk of patient injury. The Food and Drug Administration (FDA) oversees some health-care AI products that are commercially marketed. The agency has already cleared several products for market entry, and it is thinking creatively about how best to oversee AI systems in health. However, many AI systems in health care will not fall under FDA’s purview, either because they do not perform medical functions (in the case of back-end business or resource-allocation AI) or because they are developed and deployed in-house at health systems themselves—a category of products FDA typically does not oversee. These health-care AI systems fall into something of an oversight gap. Increased oversight efforts by health systems and hospitals, professional organizations like the American College of Radiology and the American Medical Association, or insurers may be necessary to ensure quality of systems that fall outside the FDA’s exercise of regulatory authority.

Provider engagement and education: The integration of AI into the health system will undoubtedly change the role of health-care providers. A hopeful vision is that providers will be enabled to provide more-personalized and better care, freed to spend more time interacting with patients as humans. A less hopeful vision would see providers struggling to weather a monsoon of uninterpretable predictions and recommendations from competing algorithms. In either case—or in any option in-between—medical education will need to prepare providers to evaluate and interpret the AI systems they will encounter in the evolving health-care environment.


AI ETHICS

As AI becomes more intelligent and capable of performing more complicated human activities, it will become more difficult to monitor, validate, anticipate, and explain its behavior.

