University of Maryland Study Reveals That ChatGPT Proves Beneficial in Breast Cancer Screening Advice with Certain Limitations
AI In Medicine: As artificial intelligence becomes increasingly prevalent in everyday life, consumers are relying on tools like ChatGPT to obtain health advice.
Image generated by Stable Diffusion
Researchers at the University of Maryland School of Medicine (UMSOM) have undertaken a study to determine the accuracy and reliability of the information provided by ChatGPT, specifically in the realm of breast cancer screening advice.
The study findings showed that while the AI chatbot delivers correct information most of the time, it can occasionally provide inaccurate or even fabricated information.
In February 2023, the UMSOM study team developed 25 questions related to breast cancer screening advice and submitted each question to ChatGPT three times to assess the responses generated. The AI chatbot is known to provide varying responses to the same question. Three radiologists with fellowship training in mammography evaluated the generated responses and found that 22 out of the 25 questions received appropriate answers. However, one answer was based on outdated information, while two other questions yielded inconsistent responses with each submission.
Dr Paul Yi, MD, Assistant Professor of Diagnostic Radiology and Nuclear Medicine at UMSOM and Director of the UM Medical Intelligent Imaging Center (UM2ii), told AI in Medicine reporters at TMN, "ChatGPT answered questions correctly about 88% of the time, which is pretty amazing."
The AI chatbot successfully provided answers concerning breast cancer symptoms, risk factors, and mammogram recommendations such as cost, age, and frequency. Furthermore, ChatGPT simplified complex information, making it more accessible to users.
Despite these positive aspects, the study found that ChatGPT's responses lacked the comprehensiveness one might expect from a typical Google search.
Dr Hana Haver, MD, a radiology resident at the University of Maryland Medical Center and the study's lead author, pointed out that "ChatGPT provided only one set of recommendations on breast cancer screening, issued from the American Cancer Society, but did not mention differing recommendations put out by the Centers for Disease Control and Prevention (CDC) or the U.S. Preventive Services Task Force (USPSTF)."
One example of an inappropriate response involved outdated information regarding the scheduling of mammograms around COVID-19 vaccinations. The AI chatbot advised delaying a mammogram for four to six weeks after receiving a COVID-19 vaccine, which contradicted the USPSTF guidelines endorsed by the CDC since February 2022. Inconsistent responses were also given to questions about personal breast cancer risk and mammogram locations.
Dr Yi noted that "ChatGPT sometimes makes up fake journal articles or health consortiums to support its claims," emphasizing that consumers should be cautious when relying on these new, unproven technologies for health advice. Instead, individuals should continue to consult their doctors for guidance.
The UMSOM research team is now examining ChatGPT's performance in providing lung cancer screening recommendations and exploring ways to improve the AI chatbot's accuracy, comprehensiveness, and understandability for users with varying levels of education.
Dr Mark T. Gladwin, MD, Dean of the University of Maryland School of Medicine, Vice President for Medical Affairs at the University of Maryland, Baltimore, and the John Z. and Akiko K. Bowers Distinguished Professor, highlighted the medical community's responsibility to evaluate and monitor technologies like ChatGPT to protect patients from potential harm due to incorrect screening recommendations or outdated preventive health strategies.
The study conducted by the University of Maryland researchers is a crucial step in understanding the benefits and limitations of AI chatbots like ChatGPT in providing health advice. While the AI chatbot proves helpful in most instances, users must be aware of its limitations and consult their healthcare providers for the most reliable and accurate information.
The rapid development of AI technologies in the healthcare sector presents significant potential to transform the way people access medical information and make informed decisions about their health. However, this transformation also comes with the challenge of ensuring that the information provided is accurate, up-to-date, and relevant to the user's needs.
The findings of the University of Maryland study indicate that AI chatbots like ChatGPT are a promising resource for health information, but they must be used with caution and as a supplementary tool alongside professional medical advice.
It is crucial for developers, researchers, and healthcare professionals to collaborate and continue improving the accuracy, reliability, and comprehensiveness of these AI technologies to better serve users in the future.
In light of the study's findings, medical professionals and AI developers can work together to address the identified limitations, such as the inconsistency of responses, outdated information, and the lack of comprehensiveness in provided recommendations. Potential solutions may include regular updates to the AI chatbot's knowledge base, cross-referencing multiple sources for screening guidelines, and establishing partnerships with reputable health organizations to ensure the validity of the information being provided.
Additionally, public education and awareness campaigns can help users understand the benefits and limitations of AI chatbots like ChatGPT, encouraging them to seek professional medical advice when necessary. By fostering a better understanding of the role of AI chatbots in healthcare, users can make more informed decisions about their health while minimizing the risks associated with relying solely on AI-generated information.
In conclusion, the University of Maryland study reveals that ChatGPT can be a valuable tool for obtaining breast cancer screening advice, with an impressive 88% accuracy rate. However, certain caveats must be considered, such as the chatbot's occasional reliance on outdated information and the inconsistency of its responses. As AI chatbots continue to evolve and improve, it is the responsibility of the medical community to assess these technologies and protect users from potential harm. By addressing the identified limitations and promoting a cautious approach to AI-generated health advice, the healthcare sector can harness the potential of AI chatbots to enhance patient education and support informed decision-making.
The study findings were published in the peer-reviewed journal Radiology.
For the latest on AI In Medicine, keep on logging to Thailand Medical News.