Patients Caught in the ChatGPT/AI Medical Dilemma
From:
Dr. Patricia A. Farrell -- Psychologist
For Immediate Release:
Dateline: Tenafly, NJ
Wednesday, May 22, 2024

 

As technology advances and ChatGPT takes hold in many medical settings, patients are left wondering whether AI can be trusted with their care.

Photo by Growtika on Unsplash

Patients and the medical field alike are both enthusiastic and worried about the pace of progress in artificial intelligence (AI) and language models like ChatGPT. OpenAI’s ChatGPT, a natural language processing model, along with other AI algorithms, has shown that it can “understand” and produce text that reads as though a person wrote it. As this new technology spreads, it will change how medical decisions are made, raising important ethical and practical questions.

One major effect of ChatGPT on decision-making is its potential to assist healthcare workers with their tasks. Physicians and others in healthcare are considering how ChatGPT might be used to set priorities, evaluate symptoms, and suggest treatments. The model's ability to quickly examine large amounts of data and provide personalized insights could significantly streamline hospital processes and improve patient care.

For patients, though, the use of ChatGPT in decision-making has caused distress. Depending on AI systems could create a wall between healthcare providers and patients, making it harder for physicians to deliver compassionate care. Patients worry that AI might influence treatment choices in ways that do not fully account for their needs and wishes.

Another major concern is that ChatGPT's information might be biased or inaccurate. Even though it was trained on data such as medical literature, it can still reflect social biases or gaps in that data, leading to less-than-ideal or even harmful medical advice. This concern is especially important when patients trust AI’s help without first talking to a doctor or nurse.

Some examples of possible AI bias include algorithms used for heart failure, heart surgery, and vaginal birth after cesarean delivery (VBAC). One of these algorithms caused Black patients to receive more treatment than they needed. Another program got it wrong when it predicted that non-Hispanic white women were more likely than women of color to deliver vaginally after a C-section.

Using ChatGPT or AI for decisions also prompts questions about the impact on healthcare workers. How might it alter their roles and responsibilities? Healthcare providers worry that as AI takes on tasks and decision-making responsibilities, it could create job insecurity and devalue their expertise.

To address these apprehensions, healthcare institutions and policymakers must engage in dialogues with healthcare professionals and patients to fully understand their perspectives and develop strategies for the responsible integration of AI in medical decision-making. This involves establishing guidelines for using AI systems, ensuring transparency in decision-making processes, and implementing training and monitoring mechanisms to uphold the highest standards of medical care.

One primary concern is the supervision of AI itself. Without proper supervision and vigilance, problems are inevitable. What is known as “autonomous AI” may not be sufficiently advanced to make specific decisions independently without oversight, and therein lies a problem.

Healthcare professionals may have numerous concerns about bias, accuracy, and transparency, but beyond those concerns, what benefits might patients receive from AI integration in their medical care?

Enhanced access to information: ChatGPT’s efficient processing of large amounts of data enables the delivery of comprehensive and easily understandable information to patients regarding their health conditions, treatment options, and healthcare decisions. This may equip individuals with the knowledge to take an active role in their healthcare journey and gain a deeper understanding of their care.

Personalized care: AI has the potential to provide tailored treatment recommendations and personalized care plans based on symptoms, medical history, and preferences. This level of customization could enhance the effectiveness of care by focusing on each patient's needs.

Efficiency and time savings: By addressing common medical queries or providing guidance on managing chronic conditions, ChatGPT might save patients' time by reducing the frequency of visits or calls to healthcare providers.

AI and Medical Errors

Histopathology and other medical domains are increasingly incorporating artificial intelligence (AI) tools, which are expected to enhance the precision and speed of diagnoses, among other benefits. Nonetheless, because AI systems are imperfect, they can introduce new errors, which appear as misclassifications generated by automated algorithms. The implications of these inaccuracies for patient outcomes are not always straightforward, and as a result, the assessment of AI tool safety remains inadequately addressed. Note: One website, Doctor Penguin, tracks AI and healthcare research.

One issue is not necessarily medical error but a shift in responsibility for care. The concern is that as AI systems take on more tasks, physicians may become too dependent on AI and lose touch with their patients or let their skills erode. The people who build medical AI could have a tremendous impact on healthcare; they should be held accountable for making safe, useful AI systems and for responsibly shaping public opinion about health.

In one study of AI healthcare algorithms, up to 30 percent of the statements generated were not supported by any of the sources given, and almost half of the answers contained at least one unsupported statement.

One algorithm's answer claimed that the criteria for gambling addiction apply equally to all people and groups, but the source it cited said the opposite: “the results do not support the assumed equal impact of each criterion.” Another AI model suggested a starting energy dose for a defibrillator (one in which the current flows in only one direction to help a person in cardiac arrest), yet the source it cited discussed only one kind of defibrillator. The distinction matters because defibrillators have changed over the years to use lower electric currents.

“Dataset shift” can occur when a system is used on data that differ from the data it was trained on, and it can cause AI diagnostic errors. For this reason, a physician must always be present to check for discrepancies between the clinical evaluation and the AI prediction, and external validation is needed to determine whether a machine-learning system can be used across settings.
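
To make “dataset shift” concrete, here is a minimal, illustrative Python sketch, not drawn from this article or from any real clinical system, of how an engineering team might flag drift by comparing a feature's training distribution with its deployment distribution. The data, the feature, and the alert threshold are all assumptions invented for illustration.

# Minimal sketch of a "dataset shift" check; the feature values and
# the alert threshold below are illustrative assumptions only.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical training data: a lab value centered around 100.
train_feature = rng.normal(loc=100.0, scale=10.0, size=1000)

# Hypothetical deployment data: the same lab value after the patient
# population (or the measuring device) has drifted upward.
deployed_feature = rng.normal(loc=115.0, scale=10.0, size=1000)

# The two-sample Kolmogorov-Smirnov test compares the two distributions;
# a small p-value suggests the deployment data no longer resemble the
# data the model was trained on.
stat, p_value = ks_2samp(train_feature, deployed_feature)

ALERT_THRESHOLD = 0.01  # illustrative cutoff, not a clinical standard
if p_value < ALERT_THRESHOLD:
    print(f"Possible dataset shift (KS={stat:.3f}, p={p_value:.2e}); "
          "route predictions to a clinician for review.")
else:
    print("No significant shift detected between training and deployment data.")

In practice, a check like this would run alongside, not instead of, the physician review described above.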

The future of AI is promising, especially for easing staffing shortages and burnout and for speeding decisions that benefit both patient and provider. However, care is the watchword, and systematic checks on the algorithms and their datasets are of primary concern. Undoubtedly there will be errors, as in any healthcare system, and learning from them will improve patient safety.

Website: www.drfarrell.net

Author's page: http://amzn.to/2rVYB0J

Medium page: https://medium.com/@drpatfarrell

Twitter: @drpatfarrell

Attribution of this material is appreciated.

News Media Interview Contact
Name: Dr. Patricia A. Farrell, Ph.D.
Title: Licensed Psychologist
Group: Dr. Patricia A. Farrell, Ph.D., LLC
Dateline: Tenafly, NJ United States
Cell Phone: 201-417-1827