Source: Potential for AI in healthcare
Background
The complexities involved in the healthcare industry are no less than those of tech companies. With huge amounts of data to manage and labor-intensive tasks to complete, the struggles of healthcare employees were most vividly seen during the pandemic. Artificial Intelligence and related technologies have the potential to improve these employees' lives by assisting in many aspects of patient care, as well as in administrative processes within healthcare organizations. While AI's future role is uncertain, our current focus should be on how it can be integrated into our lives to provide efficiency and simplicity.
Uses
Based on the source article, numerous research studies suggest that AI can perform well at key healthcare tasks. Here are some applications:
- Recognition of potentially cancerous lesions in radiology images
- Using NLP systems to analyze unstructured clinical notes on patients, prepare reports (e.g. on radiology examinations), transcribe patient interactions, and conduct conversations
- Surgical robots provide ‘superpowers’ to surgeons by improving their ability to see, creating precise and minimally invasive incisions, stitching wounds, and so on
- AI can assist in the management of referrals, patient scheduling, and exam preparation to enhance the patient experience and allow efficient use of facilities
- AI could help combine large amounts of clinical data to generate a more holistic view of patients
- Using predictive analytics with patient populations, healthcare providers can take preventative action, reduce health risks, and avoid unnecessary costs
Potential concerns
Nothing is perfect, and the same applies to AI. For all the benefits it offers, there are some dangers as well.
- Distributional shift: A mismatch in data due to a change of environment or circumstance can result in erroneous predictions. For example, over time, disease patterns can change, leading to a disparity between training and operational data
- Automation complacency: Clinicians may start to trust AI tools implicitly, assuming all predictions are correct and failing to cross-check or consider alternatives
- Reinforcement of outmoded practice: AI systems cannot adapt when new developments or changes in medical policy are implemented, since they are trained on historical data
- Negative side effects: AI may suggest a treatment but fail to consider any potential unintended consequences
- Unscalable oversight: Because AI systems can carry out countless jobs and activities, including multitasking, monitoring such a machine can be nearly impossible
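The distributional-shift concern above can be monitored in practice by comparing live data against the training distribution. The sketch below flags drift in a single feature; the data and the two-standard-deviation threshold are hypothetical, and production systems would use more robust statistical tests across many features.

```python
# Minimal sketch of monitoring for distributional shift: flag when a
# feature's live mean drifts far from its training-time distribution.
from statistics import mean, stdev

def drift_detected(train_values, live_values, z_threshold=2.0):
    """Flag drift when the live mean sits more than z_threshold training
    standard deviations away from the training mean."""
    mu, sigma = mean(train_values), stdev(train_values)
    if sigma == 0:
        return mean(live_values) != mu
    return abs(mean(live_values) - mu) / sigma > z_threshold

# Hypothetical example: disease patterns change over time, so operational
# readings no longer resemble the data the model was trained on.
train = [98.6, 98.4, 98.7, 98.5, 98.6, 98.8]  # training-time measurements
live = [101.2, 100.9, 101.5, 101.1]           # operational measurements
print(drift_detected(train, live))  # True: live data has drifted from training
```

When such a check fires, the safe response is to route cases to human review and consider retraining, rather than continuing to trust the model's predictions.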
Provocations
- What if doctors/healthcare workers over-trust AI recommendations?
- Machine learning systems may be subject to algorithmic bias, perhaps predicting greater likelihood of disease on the basis of gender or race when those are not actually causal factors
- There could be discrepancies in a diagnosis, leading to a patient being administered the incorrect treatment
- A patient would likely want to know how their disease was diagnosed from an AI-analyzed image, but deep learning algorithms, and even physicians generally familiar with their operation, may be unable to provide an explanation
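The algorithmic-bias provocation above can be probed with a simple fairness audit: compare a model's positive-prediction rate across demographic groups. The groups, records, and the idea of using the rate gap as the disparity measure are illustrative assumptions; real audits use richer metrics and real prediction logs.

```python
# Sketch of a simple fairness audit: does the model flag one demographic
# group as "likely diseased" far more often than another? All data below
# is hypothetical.
from collections import defaultdict

def positive_rate_by_group(records):
    """records: (group, predicted_positive) pairs -> positive rate per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, positive in records:
        totals[group] += 1
        positives[group] += int(positive)
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(records):
    """Largest difference in positive-prediction rates between any two groups."""
    rates = positive_rate_by_group(records).values()
    return max(rates) - min(rates)

records = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]
print(round(parity_gap(records), 2))  # 0.5: 75% positive rate vs 25%
```

A large gap does not by itself prove bias, since base rates can genuinely differ between groups, but it is a signal that the model's use of gender or race should be investigated before deployment.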