Laryngoscope

Assessing the Reporting Quality of Machine Learning Algorithms in Head and Neck Oncology

Rahul Alapati 1
Bryan Renslo 2
Sarah F. Wagoner 1
Omar Karadaghy 1
Aisha Serpedin 3
Yeo Eun Kim 3
Maria Feucht 1
Naomi Wang 1
Uma Ramesh 1
Antonio Bon-Nieves 1
Amelia Lawrence 1
Celina Virgen 1
Tuleen Sawaf 4
Andres M. Bur 1
Publication type: Journal Article
Publication date: 2024-09-11
Journal: Laryngoscope
Scimago: Q1
SJR: 1.128
CiteScore: 6.5
Impact factor: 2.2
ISSN: 0023-852X, 1531-4995
PubMed ID: 39258420
Abstract
Objective

This study aimed to assess reporting quality of machine learning (ML) algorithms in the head and neck oncology literature using the TRIPOD‐AI criteria.

Data Sources

A comprehensive search was conducted using PubMed, Scopus, Embase, and Cochrane Database of Systematic Reviews, incorporating search terms related to “artificial intelligence,” “machine learning,” “deep learning,” “neural network,” and various head and neck neoplasms.

Review Methods

Two independent reviewers analyzed each published study for adherence to the 65‐point TRIPOD‐AI criteria. Items were classified as “Yes,” “No,” or “NA” for each publication. The proportion of studies satisfying each TRIPOD‐AI criterion was calculated. Additionally, the evidence level for each study was evaluated independently by two reviewers using the Oxford Centre for Evidence‐Based Medicine (OCEBM) Levels of Evidence. Discrepancies were reconciled through discussion until consensus was reached.

Results

The study highlights the need for improvements in ML algorithm reporting in head and neck oncology, including more comprehensive descriptions of datasets, standardized reporting of model performance, and increased sharing of ML models, data, and code with the research community. Adoption of TRIPOD-AI is necessary to achieve standardized reporting of ML research in head and neck oncology.

Conclusion

Current reporting of ML algorithms hinders clinical application, reproducibility, and understanding of the data used for model training. To overcome these limitations and improve patient and clinician trust, ML developers should provide open access to models, code, and source data, fostering iterative progress through community critique, thus enhancing model accuracy and mitigating biases.

Level of Evidence

NA. Laryngoscope, 2024
