Journal of Gastroenterology and Hepatology (Australia), volume 39, issue 1, pages 81-106

ChatGPT on guidelines: Providing contextual knowledge to GPT allows it to provide advice on appropriate colonoscopy intervals

Publication type: Journal Article
Publication date: 2023-10-19
Scimago quartile: Q1
SJR: 1.179
CiteScore: 7.9
Impact factor: 3.7
ISSN: 0815-9319, 1440-1746
PubMed ID: 37855067
Subjects: Gastroenterology, Hepatology
Abstract
Background and Aim

Colonoscopy is commonly used in screening and surveillance for colorectal cancer. Multiple guidelines give recommendations on the interval between colonoscopies, which can be challenging for non-specialist healthcare providers to navigate. Large language models like ChatGPT are a potential tool for parsing patient histories and providing advice. However, the standard GPT model is not designed for medical use and can hallucinate. One way to overcome these challenges is to supply contextual information from medical guidelines to help the model respond accurately to queries.

Our study compares standard GPT-4 against a contextualized model provided with relevant screening guidelines. We evaluated whether the models could give correct advice on screening and surveillance intervals for colonoscopy.

Methods

Relevant guidelines on colorectal cancer screening and surveillance were formulated into a knowledge base for GPT. We tested 62 example case scenarios (three times each) on standard GPT-4 and on a contextualized model supplied with the knowledge base.
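The contextualization approach described above can be sketched in code. This is a minimal illustration, not the authors' implementation: the knowledge-base entries, function names, and keyword-matching retrieval are all hypothetical placeholders, and the real guideline text is not reproduced here.

```python
import re

# Hypothetical knowledge base; the guideline snippets below are placeholders,
# not actual recommendations from any published guideline.
GUIDELINE_KNOWLEDGE_BASE = {
    "post-polypectomy surveillance": (
        "Placeholder snippet: 1-2 small tubular adenomas -> "
        "repeat colonoscopy in 7-10 years."
    ),
    "average-risk screening": (
        "Placeholder snippet: average-risk adults -> "
        "screening colonoscopy every 10 years."
    ),
}

def retrieve_context(case_description: str) -> str:
    """Naive keyword retrieval: return snippets whose topic words appear in the case."""
    case = case_description.lower()
    matches = [
        text for topic, text in GUIDELINE_KNOWLEDGE_BASE.items()
        if any(word and word in case for word in re.split(r"[^a-z]+", topic))
    ]
    return "\n".join(matches) or "No matching guideline found."

def build_contextualized_prompt(case_description: str) -> list[dict]:
    """Prepend retrieved guideline text as system context for a chat model."""
    return [
        {"role": "system",
         "content": "Answer using ONLY the guidelines below, and cite them:\n"
                    + retrieve_context(case_description)},
        {"role": "user", "content": case_description},
    ]

messages = build_contextualized_prompt(
    "Patient had a polypectomy of two small tubular adenomas."
)
```

The resulting `messages` list would then be sent to the chat model; grounding the system prompt in retrieved guideline text is what constrains the model's answer to the supplied knowledge base rather than its training data.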

Results

The contextualized GPT-4 model outperformed standard GPT-4 in all domains. No high-risk features were missed, and only two cases contained hallucinated additional high-risk features. A correct interval to the next colonoscopy was provided in the majority of cases, and guidelines were appropriately cited in almost all cases.

Conclusions

A contextualized GPT-4 model could identify high-risk features and cite the appropriate guidelines without significant hallucination. It gave a correct interval to the next colonoscopy in the majority of cases. This provides proof of concept that, with appropriate refinement, ChatGPT can serve as an accurate physician assistant.
