Open Access
Cancers, volume 16, issue 6, article 1100

Machine Learning Meets Cancer

Elena V. Varlamova 1
Maria Butakova 1
Vlada V. Semyonova 2
Artem V. Poltavskiy 1
Oleg I. Kit 3
Publication type: Journal Article
Publication date: 2024-03-08
Journal: Cancers
scimago Q1
SJR: 1.391
CiteScore: 8.0
Impact factor: 4.5
ISSN: 2072-6694
Cancer Research
Oncology
Abstract

The role of machine learning (a part of artificial intelligence, AI) in the diagnosis and treatment of various types of cancer is steadily increasing. It is expected that the use of AI in oncology will speed up both diagnostic and treatment planning processes. This review describes recent applications of machine learning in oncology, including medical image analysis, treatment planning, patient survival prognosis, and the synthesis of drugs at the point of care. The fast and reliable analysis of medical images is of great importance in the case of rapidly progressing forms of cancer. The introduction of ML for the analysis of constantly growing volumes of big data makes it possible to improve the quality of prescribed treatment and patient care. Thus, ML is expected to become an essential technology for medical specialists. ML models have already improved prognostic prediction for patients compared to traditional staging algorithms. The direct synthesis of the necessary medical substances (small-molecule mixtures) at the point of care could also seriously benefit from the application of ML. We further review the main trends in the use of artificial intelligence-based technologies in modern oncology. This review demonstrates the future prospects of using ML tools to make progress in cancer research, as well as in other areas of medicine. Despite growing interest in the use of modern computer technologies in medical practice, a number of unresolved ethical and legal problems remain. In this review, we also discuss the most relevant issues among them.

Soldatov S.A., Pashkov D.M., Guda S.A., Karnaukhov N.S., Guda A.A., Soldatov A.V.
Algorithms scimago Q2 wos Q2 Open Access
2022-10-27 citations by CoLab: 5
Microscopic tissue analysis is the key diagnostic method needed for disease identification and choosing the best treatment regimen. According to the Global Cancer Observatory, approximately two million people are diagnosed with colorectal cancer each year, and an accurate diagnosis requires a significant amount of time and a highly qualified pathologist to decrease the high mortality rate. Recent development of artificial intelligence technologies and scanning microscopy introduced digital pathology into the field of cancer diagnosis by means of the whole-slide image (WSI). In this work, we applied deep learning methods to diagnose six types of colon mucosal lesions using convolutional neural networks (CNNs). As a result, an algorithm for the automatic segmentation of WSIs of colon biopsies was developed, implementing pre-trained, deep convolutional neural networks of the ResNet and EfficientNet architectures. We compared the classical method and one-cycle policy for CNN training and applied both multi-class and multi-label approaches to solve the classification problem. The multi-label approach was superior because some WSI patches may belong to several classes at once or to none of them. Using the standard one-vs-rest approach, we trained multiple binary classifiers. They achieved receiver operating characteristic (ROC) AUC values in the range of 0.80–0.96. Other metrics were also calculated, such as accuracy, precision, sensitivity, specificity, negative predictive value, and F1-score. The obtained CNNs can support human pathologists in the diagnostic process and can be extended to other cancers after adding a sufficient amount of labeled data.
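The abstract above evaluates one-vs-rest binary classifiers by their ROC AUC. As an editorial sketch (not the authors' code), the AUC can be computed directly from scores via its rank interpretation: the probability that a randomly chosen positive is scored above a randomly chosen negative. All names and data below are illustrative.

```python
def roc_auc(labels, scores):
    """ROC AUC via the Mann-Whitney statistic: fraction of (positive,
    negative) pairs where the positive is scored higher (ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one sample of each class")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def one_vs_rest_aucs(label_rows, score_rows):
    """One AUC per class: each column is treated as its own binary problem,
    mirroring the one-vs-rest setup described in the abstract."""
    n_classes = len(score_rows[0])
    return [
        roc_auc([row[k] for row in label_rows],
                [row[k] for row in score_rows])
        for k in range(n_classes)
    ]
```

A multi-label patch then simply contributes a 0/1 entry per class, so patches belonging to several classes, or to none, are handled naturally.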
Pratiwi R.A., Nurmaini S., Rini D.P., Rachmatullah M.N., Darmawahyuni A.
2021-09-01 citations by CoLab: 13
One type of skin cancer that is considered a malignant tumor is melanoma. Such a dangerous disease causes many deaths worldwide. The early detection of skin lesions is therefore an important task in the diagnosis of skin cancer. Recently, a machine learning paradigm known as deep learning (DL) has been utilized for skin lesion classification. However, in previous studies, seven-class diagnostic classification of skin lesions based on a single DL approach with a CNN architecture did not produce satisfying performance. The DL approach allows the development of medical image analysis systems with improved performance, such as the deep convolutional neural network (DCNN) method. In this study, we propose an ensemble learning approach that combines three DCNN architectures, Inception V3, Inception ResNet V2, and DenseNet 201, to improve performance in terms of accuracy, sensitivity, specificity, precision, and F1-score. Seven classes of dermoscopy image categories of skin lesions are utilized, with 10015 dermoscopy images from the well-known HAM10000 dataset. The proposed model produces good classification performance with 97.23% accuracy, 90.12% sensitivity, 97.73% specificity, 82.01% precision, and 85.01% F1-score. This method gives promising results in classifying skin lesions for cancer diagnosis.
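The ensemble above combines three networks. A common way to do this, shown here as an illustrative sketch rather than the paper's exact mechanism, is soft voting: average the per-class probability vectors from each model, then take the argmax. The probability vectors below are made up.

```python
def soft_vote(prob_vectors):
    """Soft-voting ensemble: average class probabilities from several
    models, then return (predicted class index, averaged probabilities)."""
    n_models = len(prob_vectors)
    n_classes = len(prob_vectors[0])
    avg = [sum(p[k] for p in prob_vectors) / n_models
           for k in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__), avg
```

Soft voting tends to beat hard (majority) voting when the individual models are well calibrated, since confident models get proportionally more say.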
Duggento A., Conti A., Mauriello A., Guerrisi M., Toschi N.
Seminars in Cancer Biology scimago Q1 wos Q1
2021-07-01 citations by CoLab: 49
Deep Learning (DL) algorithms are a set of techniques that exploit large and/or complex real-world datasets for cross-domain and cross-discipline prediction and classification tasks. DL architectures excel in computer vision tasks, and in particular image processing and interpretation. This has prompted a wave of disruptively innovative applications in medical imaging, where DL strategies have the potential to vastly outperform human experts. This is particularly relevant in the context of histopathology, where whole slide imaging (WSI) of stained tissue, in conjunction with DL algorithms for their interpretation, selection and cancer staging, is beginning to play an ever-increasing role in supporting human operators in visual assessments. This has the potential to reduce everyday workload as well as to increase precision and reproducibility across observers, centers, staining techniques and even pathologies. In this paper we introduce the most common DL architectures used in image analysis, with a focus on histopathological image analysis in general and on breast histology in particular. We briefly review how state-of-the-art DL architectures compare to human performance across a number of critical tasks such as mitotic count, tubules analysis and nuclear pleomorphism analysis. Also, the development of DL algorithms specialized to pathology images has been enormously fueled by a number of world-wide challenges based on large, multicentric image databases which are now publicly available. In turn, this has allowed most recent efforts to shift more and more towards semi-supervised learning methods, which provide greater flexibility and applicability.
We also review all major repositories of manually labelled pathology images in breast cancer and provide an in-depth discussion of the challenges specific to training DL architectures to interpret WSI data, as well as a review of the state-of-the-art methods for interpretation of images generated from immunohistochemical analysis of breast lesions. We finally discuss the future challenges and opportunities which the adoption of DL paradigms is most likely to pose in the field of pathology for breast cancer detection, diagnosis, staging and prognosis. This review is intended as a comprehensive stepping stone into the field of modern computational pathology for a transdisciplinary readership across technical and medical disciplines.
Khened M., Kori A., Rajkumar H., Krishnamurthi G., Srinivasan B.
Scientific Reports scimago Q1 wos Q1 Open Access
2021-06-02 citations by CoLab: 132
Histopathology tissue analysis is considered the gold standard in cancer diagnosis and prognosis. Whole-slide imaging (WSI), i.e., the scanning and digitization of entire histology slides, is now being adopted across the world in pathology labs. Trained histopathologists can provide an accurate diagnosis of biopsy specimens based on WSI data. Given the dimensionality of WSIs and the increase in the number of potential cancer cases, analyzing these images is a time-consuming process. Automated segmentation of tumorous tissue helps in elevating the precision, speed, and reproducibility of research. In the recent past, deep learning-based techniques have provided state-of-the-art results in a wide variety of image analysis tasks, including the analysis of digitized slides. However, deep learning-based solutions pose many technical challenges, including the large size of WSI data, heterogeneity in images, and complexity of features. In this study, we propose a generalized deep learning-based framework for histopathology tissue analysis to address these challenges. Our framework is, in essence, a sequence of individual techniques in the preprocessing-training-inference pipeline which, in conjunction, improve the efficiency and the generalizability of the analysis. The combination of techniques we have introduced includes an ensemble segmentation model, division of the WSI into smaller overlapping patches while addressing class imbalances, efficient techniques for inference, and an efficient, patch-based uncertainty estimation framework. Our ensemble consists of DenseNet-121, Inception-ResNet-V2, and DeeplabV3Plus, where all the networks were trained end to end for every task. We demonstrate the efficacy and improved generalizability of our framework by evaluating it on a variety of histopathology tasks including breast cancer metastases (CAMELYON), colon cancer (DigestPath), and liver cancer (PAIP).
Our proposed framework has state-of-the-art performance across all these tasks and is ranked within the top 5 currently for the challenges based on these datasets. The entire framework along with the trained models and the related documentation are made freely available at GitHub and PyPi. Our framework is expected to aid histopathologists in accurate and efficient initial diagnosis. Moreover, the estimated uncertainty maps will help clinicians to make informed decisions and further treatment planning or analysis.
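The framework above divides each WSI into smaller overlapping patches. As an illustrative sketch (not the authors' released code), the patch coordinates for a slide of given dimensions can be generated from a patch size and stride, with the final row and column clamped so the slide edge is always covered:

```python
def patch_grid(width, height, patch, stride):
    """Top-left (x, y) corners of overlapping patches tiling a
    width x height slide. Stride < patch gives overlap; a final
    patch is clamped to the edge so no tissue is missed."""
    def starts(extent):
        s = list(range(0, max(extent - patch, 0) + 1, stride))
        if s[-1] + patch < extent:  # clamp one last patch to the edge
            s.append(extent - patch)
        return s
    return [(x, y) for y in starts(height) for x in starts(width)]
```

In a real pipeline these coordinates would index into the slide at a chosen magnification level, and patches with too little tissue would typically be filtered out before training or inference.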
Chua I.S., Gaziel‐Yablowitz M., Korach Z.T., Kehl K.L., Levitan N.A., Arriaga Y.E., Jackson G.P., Bates D.W., Hassett M.
Cancer Medicine scimago Q1 wos Q2 Open Access
2021-05-07 citations by CoLab: 85
Dercle L., Henry T., Carré A., Paragios N., Deutsch E., Robert C.
Methods scimago Q1 wos Q2
2021-04-01 citations by CoLab: 31
Radiation therapy is a pivotal cancer treatment that has significantly progressed over the last decade due to numerous technological breakthroughs. Imaging is now playing a critical role in deployment of the clinical workflow, both for treatment planning and treatment delivery. Machine-learning analysis of predefined features extracted from medical images, i.e. radiomics, has emerged as a promising clinical tool for a wide range of clinical problems addressing drug development, clinical diagnosis, treatment selection and implementation, as well as prognosis. Radiomics denotes a paradigm shift redefining medical images as a quantitative asset for data-driven precision medicine. The adoption of machine learning in a clinical setting, and in particular of radiomics features, requires the selection of robust, representative and clinically interpretable biomarkers that are properly evaluated on a representative clinical data set. To be clinically relevant, radiomics must not only improve patients' management with great accuracy but also be reproducible and generalizable. Hence, this review explores the existing literature and exposes its potential technical caveats, such as the lack of quality control, standardization, sufficient sample size, type of data collection, and external validation. Based upon the analysis of 165 original research studies using PET, CT, and MRI, this review provides an overview of new concepts and hypothesis-generating findings that should be validated. In particular, it describes evolving research trends to enhance several clinical tasks such as prognostication, treatment planning, response assessment, prediction of recurrence/relapse, and prediction of toxicity. Perspectives regarding the implementation of an AI-based radiotherapy workflow are presented.
Lohmann P., Galldiks N., Kocher M., Heinzel A., Filss C.P., Stegmayr C., Mottaghy F.M., Fink G.R., Jon Shah N., Langen K.
Methods scimago Q1 wos Q2
2021-04-01 citations by CoLab: 99
Over the last years, the amount, variety, and complexity of neuroimaging data acquired in patients with brain tumors for routine clinical purposes and the resulting number of imaging parameters have substantially increased. Consequently, a timely and cost-effective evaluation of imaging data is hardly feasible without the support of methods from the field of artificial intelligence (AI). AI can facilitate and shorten various time-consuming steps in the image processing workflow, e.g., tumor segmentation, thereby optimizing productivity. Besides, the automated and computer-based analysis of imaging data may help to increase data comparability as it is independent of the experience level of the evaluating clinician. Importantly, AI offers the potential to extract new features from the routinely acquired neuroimages of brain tumor patients. In combination with patient data such as survival, molecular markers, or genomics, mathematical models can be generated that allow, for example, the prediction of treatment response or prognosis, as well as the noninvasive assessment of molecular markers. The subdiscipline of AI dealing with the computation, identification, and extraction of image features, as well as the generation of prognostic or predictive mathematical models, is termed radiomics. This review article summarizes the basics, the current workflow, and methods used in radiomics with a focus on feature-based radiomics in neuro-oncology and provides selected examples of its clinical application.
Seifert R., Weber M., Kocakavuk E., Rischpler C., Kersting D.
Seminars in Nuclear Medicine scimago Q1 wos Q1
2021-03-01 citations by CoLab: 78
Artificial intelligence and machine learning based approaches are increasingly finding their way into various areas of nuclear medicine imaging. With the technical development of new methods and the expansion to new fields of application, this trend is likely to become even more pronounced in the future. Possible means of application range from automated image reading and classification to correlation with clinical outcomes and to technological applications in image processing and reconstruction. In the context of tumor imaging, that is, predominantly FDG or PSMA PET imaging but also bone scintigraphy, artificial intelligence approaches can be used to quantify the whole-body tumor volume, for the segmentation and classification of pathological foci, or to facilitate the diagnosis of micro-metastases. More advanced applications aim at the correlation of image features that are derived by artificial intelligence with clinical endpoints, for example, whole-body tumor volume with overall survival. In nuclear medicine imaging of benign diseases, artificial intelligence methods are predominantly used for automated and/or facilitated image classification and clinical decision making. Automated feature selection, segmentation and classification of myocardial perfusion scintigraphy can help in identifying patients that would benefit from intervention and in forecasting clinical prognosis. Automated reporting of neurodegenerative diseases such as Alzheimer's disease might be extended to early diagnosis, which would be of special interest if targeted treatment options become available. Technological approaches include artificial intelligence-based attenuation correction of PET images, image reconstruction, or anatomical landmarking. Attenuation correction is of special interest for avoiding the need for a coregistered CT scan; in the process of image reconstruction, artefacts might be reduced, or ultra-low-dose PET images might be denoised.
The development of accurate ultra-low-dose PET imaging might broaden the method's applicability, for example, toward oncologic PET screening. Most artificial intelligence approaches in nuclear medicine imaging are still in early stages of development, and further improvements are necessary for broad clinical applications. In this review, we describe the current trends in the fields of body oncology, cardiac imaging, and neuroimaging, while an additional section puts emphasis on technological trends. Our aim is not only to describe currently available methods, but also to place a special focus on the description of possible future developments.
Cepeda S., Arrese I., García-García S., Velasco-Casares M., Escudero-Caro T., Zamora T., Sarabia R.
World Neurosurgery scimago Q2 wos Q2
2021-02-01 citations by CoLab: 30
Background: The consistency of meningioma is a factor that may influence surgical planning and the extent of resection. The aim of our study is to develop a predictive model of tumor consistency using the radiomic features of preoperative magnetic resonance imaging and the tumor elasticity measured by intraoperative ultrasound elastography (IOUS-E) as a reference parameter. Methods: A retrospective analysis was performed on supratentorial meningiomas that were operated on between March 2018 and July 2020. Cases with IOUS-E studies were included. A semiquantitative analysis of elastograms was used to define the meningioma consistency. MRIs were preprocessed before extracting radiomic features. Predictive models were built using a combination of feature selection filters and machine learning algorithms: logistic regression, Naive Bayes, k-nearest neighbors, Random Forest, Support Vector Machine, and Neural Network. A stratified 5-fold cross-validation was performed. Then, models were evaluated using the area under the curve and classification accuracy. Results: Eighteen patients were available for analysis. Meningiomas were classified as hard or soft according to a mean tissue elasticity threshold of 120. The best-ranked radiomic features were obtained from T1-weighted post-contrast, apparent diffusion coefficient map, and T2-weighted images. The combination of Information Gain and ReliefF filters with the Naive Bayes algorithm resulted in an area under the curve of 0.961 and classification accuracy of 94%. Conclusions: We have developed a high-precision classification model that is capable of predicting consistency of meningiomas based on the radiomic features in preoperative magnetic resonance imaging (T2-weighted, T1-weighted post-contrast, and apparent diffusion coefficient map).
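The study above relies on stratified 5-fold cross-validation, which matters with only eighteen patients: each fold must preserve the hard/soft class proportions. A minimal pure-Python sketch of stratified fold assignment (illustrative only; real pipelines typically use a library implementation):

```python
import random

def stratified_kfold(labels, k, seed=0):
    """Yield (train_idx, test_idx) pairs where class proportions are
    preserved, by dealing each class's shuffled indices round-robin
    into k folds."""
    rng = random.Random(seed)
    folds = [[] for _ in range(k)]
    by_class = {}
    for i, y in enumerate(labels):
        by_class.setdefault(y, []).append(i)
    for idxs in by_class.values():
        rng.shuffle(idxs)
        for j, i in enumerate(idxs):
            folds[j % k].append(i)
    for t in range(k):
        test = sorted(folds[t])
        train = sorted(i for f in range(k) if f != t for i in folds[f])
        yield train, test
```

Stratification keeps the per-fold class balance stable, so the reported AUC and accuracy are not distorted by folds that happen to contain mostly one consistency class.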
Tian L., Zhang D., Bao S., Nie P., Hao D., Liu Y., Zhang J., Wang H.
Clinical Radiology scimago Q2 wos Q2
2021-02-01 citations by CoLab: 22
AIM: To construct and validate a radiomics-based machine-learning method for preoperative prediction of distant metastasis (DM) from soft-tissue sarcoma. MATERIALS AND METHODS: Seventy-seven soft-tissue sarcomas were divided into a training set (n=54) and a validation set (n=23). The performance of three feature selection methods (ReliefF, least absolute shrinkage and selection operator [LASSO], and regularised discriminative feature selection for unsupervised learning [UDFS]) and four classifiers, random forest (RF), logistic regression (LOG), K nearest neighbour (KNN), and support vector machines (SVMs), was compared for predicting the likelihood of DM. To counter the imbalance in the frequencies of DM, each machine-learning method was trained first without subsampling, then with the synthetic minority oversampling technique (SMOTE). The performance of the radiomics model was assessed using area under the receiver-operating characteristic curve (AUC) and accuracy (ACC) values. RESULTS: The performance of the LASSO and SVM algorithm combination used with SMOTE was superior to that of the algorithm combination alone. The combination of SMOTE with feature screening by LASSO and SVM classifiers had an AUC of 0.9020 and ACC of 91.30% in the validation dataset. CONCLUSION: A machine-learning model based on radiomics was favourable for predicting the likelihood of DM from soft-tissue sarcoma. This will help decide treatment strategies.
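SMOTE, used above to counter class imbalance, synthesizes new minority samples by interpolating between a minority sample and one of its nearest minority neighbours. A simplified sketch of that core idea (the neighbourhood size `k` and the data are illustrative, and production code would use a library implementation):

```python
import random

def smote(minority, n_new, k=3, seed=0):
    """SMOTE-style oversampling: each synthetic point lies on the line
    segment between a random minority sample and one of its k nearest
    minority neighbours (squared Euclidean distance), at a random
    fraction of the way along."""
    rng = random.Random(seed)

    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))

    out = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbours = sorted((m for m in minority if m is not x),
                            key=lambda m: dist2(x, m))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # position along the segment, in [0, 1)
        out.append(tuple(u + gap * (v - u) for u, v in zip(x, nb)))
    return out
```

Because synthetic points stay between existing minority samples, SMOTE enlarges the minority region without simply duplicating cases, which is why it often pairs well with classifiers such as SVMs.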
Yang Z., Olszewski D., He C., Pintea G., Lian J., Chou T., Chen R.C., Shtylla B.
2021-02-01 citations by CoLab: 18
Thanks to advancements in diagnosis and treatment, prostate cancer patients have high long-term survival rates. Currently, an important goal is to preserve quality of life during and after treatment. The relationship between the radiation a patient receives and the subsequent side effects he experiences is complex and difficult to model or predict. Here, we use machine learning algorithms and statistical models to explore the connection between radiation treatment and post-treatment gastro-urinary function. Since only a limited number of patient datasets are currently available, we used image flipping and curvature-based interpolation methods to generate more data to leverage transfer learning. Using interpolated and augmented data, we trained a convolutional autoencoder network to obtain near-optimal starting points for the weights. A convolutional neural network then analyzed the relationship between patient-reported quality-of-life and radiation doses to the bladder and rectum. We also used analysis of variance and logistic regression to explore organ sensitivity to radiation and to develop dosage thresholds for each organ region. Our findings show no statistically significant association between the bladder and quality-of-life scores. However, we found a statistically significant association between the radiation applied to posterior and anterior rectal regions and changes in quality of life. Finally, we estimated radiation therapy dose thresholds for each organ. Our analysis connects machine learning methods with organ sensitivity, thus providing a framework for informing cancer patient care using patient reported quality-of-life metrics.
Nazari M., Shiri I., Zaidi H.
2021-02-01 citations by CoLab: 86
The aim of this study was to develop radiomics-based machine learning models based on extracted radiomic features and clinical information to predict the risk of death within 5 years for prognosis of clear cell renal cell carcinoma (ccRCC) patients. According to image quality and clinical data availability, we eventually selected 70 ccRCC patients that underwent CT scans. Manual volume-of-interest (VOI) segmentation of each image was performed by an experienced radiologist using the 3D Slicer software package. Prior to feature extraction, image pre-processing was performed on CT images to extract different image features, including wavelet, Laplacian of Gaussian, and resampling of the intensity values to 32, 64 and 128 bin levels. Overall, 2544 3D radiomics features were extracted from each VOI for each patient. The Minimum Redundancy Maximum Relevance (MRMR) algorithm was used as the feature selector. Four classification algorithms were used, including Generalized Linear Model (GLM), Support Vector Machine (SVM), K-nearest Neighbor (KNN) and XGBoost. We used the bootstrap resampling method to create validation sets. Area under the receiver operating characteristic (ROC) curve (AUROC), accuracy, sensitivity, and specificity were used to assess the performance of the classification models. The best single performance among 8 different models was achieved by the XGBoost model using a combination of radiomic features and clinical information (AUROC, accuracy, sensitivity, and specificity with 95% confidence interval were 0.95–0.98, 0.93–0.98, 0.93–0.96 and ~1.0, respectively). We developed a robust radiomics-based classifier that is capable of accurately predicting overall survival for prognosis of ccRCC patients. This signature may help identify high-risk patients who require additional treatment and follow-up regimens.
• Radiomics-based machine learning models predict the survival of ccRCC patients.
• Combination of image-preprocessing and machine-learning algorithms was investigated for prognosis modeling.
• Promising and failure results were reported for different models.
• XGBoost classifier indicated the highest performance for risk stratification.
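The study above reports performance as 95% confidence intervals obtained with bootstrap resampling. One common variant, a percentile bootstrap confidence interval for accuracy computed from per-sample correctness, can be sketched as follows (an editorial illustration, not the authors' exact procedure):

```python
import random

def bootstrap_ci(per_sample_correct, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for accuracy. Input is a list of 0/1 flags
    (was each sample classified correctly?); resample with replacement,
    recompute accuracy each time, and read off the percentiles."""
    rng = random.Random(seed)
    n = len(per_sample_correct)
    stats = sorted(
        sum(rng.choice(per_sample_correct) for _ in range(n)) / n
        for _ in range(n_boot)
    )
    lo = stats[int(alpha / 2 * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

The same resampling loop works for AUROC, sensitivity, or specificity by swapping the statistic recomputed on each bootstrap sample.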
Esteva A., Chou K., Yeung S., Naik N., Madani A., Mottaghi A., Liu Y., Topol E., Dean J., Socher R.
npj Digital Medicine scimago Q1 wos Q1 Open Access
2021-01-08 citations by CoLab: 668
A decade of unprecedented progress in artificial intelligence (AI) has demonstrated the potential for many fields, including medicine, to benefit from the insights that AI techniques can extract from data. Here we survey recent progress in the development of modern computer vision techniques, powered by deep learning, for medical applications, focusing on medical imaging, medical video, and clinical deployment. We start by briefly summarizing a decade of progress in convolutional neural networks, including the vision tasks they enable, in the context of healthcare. Next, we discuss several example medical imaging applications that stand to benefit, including cardiology, pathology, dermatology, and ophthalmology, and propose new avenues for continued work. We then expand into general medical video, highlighting ways in which clinical workflows can integrate computer vision to enhance care. Finally, we discuss the challenges and hurdles that must be overcome for real-world clinical deployment of these technologies.
Hodneland E., Dybvik J.A., Wagner-Larsen K.S., Šoltészová V., Munthe-Kaas A.Z., Fasmer K.E., Krakstad C., Lundervold A., Lundervold A.S., Salvesen Ø., Erickson B.J., Haldorsen I.
Scientific Reports scimago Q1 wos Q1 Open Access
2021-01-08 citations by CoLab: 40
Preoperative MR imaging in endometrial cancer patients provides valuable information on local tumor extent, which routinely guides choice of surgical procedure and adjuvant therapy. Furthermore, whole-volume tumor analyses of MR images may provide radiomic tumor signatures potentially relevant for better individualization and optimization of treatment. We apply a convolutional neural network for automatic tumor segmentation in endometrial cancer patients, enabling automated extraction of tumor texture parameters and tumor volume. The network was trained, validated and tested on a cohort of 139 endometrial cancer patients based on preoperative pelvic imaging. The algorithm was able to retrieve tumor volumes comparable to human expert level (likelihood-ratio test, p = 0.06). The network was also able to provide a set of segmentation masks with human agreement not different from inter-rater agreement of human experts (Wilcoxon signed rank test, p = 0.08, p = 0.60, and p = 0.05). An automatic tool for tumor segmentation in endometrial cancer patients enables automated extraction of tumor volume and whole-volume tumor texture features. This approach represents a promising method for automatic radiomic tumor profiling with potential relevance for better prognostication and individualization of therapeutic strategy in endometrial cancer.
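Agreement between automatic and expert segmentation masks, as compared statistically above, is most often summarized with an overlap score such as the Dice similarity coefficient: twice the overlap divided by the total size of the two masks. A minimal sketch on flat binary masks (an editorial illustration, not tied to the paper's exact metrics):

```python
def dice(mask_a, mask_b):
    """Dice similarity of two flat binary (0/1) masks:
    2 * overlap / (size of A + size of B). Two empty masks
    agree perfectly by convention."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 1.0 if size == 0 else 2.0 * inter / size
```

For 3D volumes the masks are simply flattened voxel-wise first; per-case Dice scores are then what paired tests like the Wilcoxon signed-rank test operate on.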
Yang H., Wang Y., Peng H., Huang C.
Scientific Reports scimago Q1 wos Q1 Open Access
2021-01-08 citations by CoLab: 48
Breast cancer causes metabolic alteration, and volatile metabolites in the breath of patients may be used to diagnose breast cancer. The objective of this study was to develop a new breath test for breast cancer by analyzing volatile metabolites in the exhaled breath. We collected alveolar air from breast cancer patients and non-cancer controls and analyzed the volatile metabolites with an electronic nose composed of 32 carbon nanotubes sensors. We used machine learning techniques to build prediction models for breast cancer and its molecular phenotyping. Between July 2016 and June 2018, we enrolled a total of 899 subjects. Using the random forest model, the prediction accuracy of breast cancer in the test set was 91% (95% CI: 0.85–0.95), sensitivity was 86%, specificity was 97%, positive predictive value was 97%, negative predictive value was 97%, the area under the receiver operating curve was 0.99 (95% CI: 0.99–1.00), and the kappa value was 0.83. The leave-one-out cross-validated discrimination accuracy and reliability of molecular phenotyping of breast cancer were 88.5 ± 12.1% and 0.77 ± 0.23, respectively. Breath tests with electronic noses can be applied intraoperatively to discriminate breast cancer and molecular subtype and support the medical staff to choose the best therapeutic decision.
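The breath-test study above reports accuracy, sensitivity, specificity, PPV, NPV, and a kappa value. All of these derive from a 2x2 confusion matrix; a sketch with made-up counts (not the study's data):

```python
def binary_metrics(tp, fp, fn, tn):
    """Standard diagnostic metrics from a 2x2 confusion matrix,
    including Cohen's kappa (observed agreement corrected for the
    agreement expected by chance from the marginals)."""
    n = tp + fp + fn + tn
    acc = (tp + tn) / n
    sens = tp / (tp + fn)          # a.k.a. recall
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)
    f1 = 2 * ppv * sens / (ppv + sens)
    # chance agreement from the row/column marginals
    p_yes = ((tp + fp) / n) * ((tp + fn) / n)
    p_no = ((fn + tn) / n) * ((fp + tn) / n)
    pe = p_yes + p_no
    kappa = (acc - pe) / (1 - pe)
    return {"accuracy": acc, "sensitivity": sens, "specificity": spec,
            "ppv": ppv, "npv": npv, "f1": f1, "kappa": kappa}
```

Kappa is useful alongside raw accuracy precisely because it discounts the agreement a trivial classifier would achieve from the class prevalences alone.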
Disci R., Gurcan F., Soylu A.
Cancers scimago Q1 wos Q1 Open Access
2025-01-02 citations by CoLab: 1
Background/Objectives: Brain tumor classification is a crucial task in medical diagnostics, as early and accurate detection can significantly improve patient outcomes. This study investigates the effectiveness of pre-trained deep learning models in classifying brain MRI images into four categories: Glioma, Meningioma, Pituitary, and No Tumor, aiming to enhance the diagnostic process through automation. Methods: A publicly available Brain Tumor MRI dataset containing 7023 images was used in this research. The study employs state-of-the-art pre-trained models, including Xception, MobileNetV2, InceptionV3, ResNet50, VGG16, and DenseNet121, which are fine-tuned using transfer learning, in combination with advanced preprocessing and data augmentation techniques. Transfer learning was applied to fine-tune the models and optimize classification accuracy while minimizing computational requirements, ensuring efficiency in real-world applications. Results: Among the tested models, Xception emerged as the top performer, achieving a weighted accuracy of 98.73% and a weighted F1 score of 95.29%, demonstrating exceptional generalization capabilities. These models proved particularly effective in addressing class imbalances and delivering consistent performance across various evaluation metrics, thus demonstrating their suitability for clinical adoption. However, challenges persist in improving recall for the Glioma and Meningioma categories, and the black-box nature of deep learning models requires further attention to enhance interpretability and trust in medical settings. Conclusions: The findings underscore the transformative potential of deep learning in medical imaging, offering a pathway toward more reliable, scalable, and efficient diagnostic tools. 
Future research will focus on expanding dataset diversity, improving model explainability, and validating model performance in real-world clinical settings to support the widespread adoption of AI-driven systems in healthcare and ensure their integration into clinical workflows.
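The weighted accuracy and weighted F1 scores reported above average per-class metrics with weights proportional to class support, which is what makes them robust to the class imbalance the abstract mentions. A minimal sketch of weighted F1 (illustrative labels, not the study's data):

```python
def weighted_f1(y_true, y_pred):
    """Per-class F1 scores averaged with weights proportional to each
    class's support (its count in y_true)."""
    classes = sorted(set(y_true))
    n = len(y_true)
    total = 0.0
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        total += f1 * (tp + fn) / n  # weight by class support
    return total
```

With four tumor classes of unequal size, this weighting prevents a rare class (such as a small Glioma subset) from being drowned out, while still reflecting overall performance.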
Abas Mohamed Y., Ee Khoo B., Shahrimie Mohd Asaari M., Ezane Aziz M., Rahiman Ghazali F.
2025-01-01 citations by CoLab: 3
Książek W.
Cancers scimago Q1 wos Q1 Open Access
2024-12-10 citations by CoLab: 1
Modern technologies, particularly artificial intelligence methods such as machine learning, hold immense potential for supporting doctors with cancer diagnostics. This study explores the enhancement of popular machine learning methods using a bio-inspired algorithm—the naked mole-rat algorithm (NMRA)—to assess the malignancy of thyroid tumors. The study utilized a novel dataset released in 2022, containing data collected at Shengjing Hospital of China Medical University. The dataset comprises 1232 records described by 19 features. In this research, 10 well-known classifiers, including XGBoost, LightGBM, and random forest, were employed to evaluate the malignancy of thyroid tumors. A key innovation of this study is the application of the naked mole-rat algorithm for parameter optimization and feature selection within the individual classifiers. Among the models tested, the LightGBM classifier demonstrated the highest performance, achieving a classification accuracy of 81.82% and an F1-score of 86.62%, following two-level parameter optimization and feature selection using the naked mole-rat algorithm. Additionally, explainability analysis of the LightGBM model was conducted using SHAP values, providing insights into the decision-making process of the model.
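The study above tunes classifier parameters and selects features with the naked mole-rat algorithm. The NMRA itself is beyond a short sketch, but the generic shape of stochastic hyperparameter search that such metaheuristics refine can be illustrated with plain random search over a discrete space (the score function and parameter space below are made up):

```python
import random

def random_search(score_fn, space, n_iter=50, seed=0):
    """Generic stochastic hyperparameter search, a simple stand-in for
    population-based metaheuristics such as NMRA: sample parameter
    combinations from a discrete space and keep the best-scoring one."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_iter):
        params = {k: rng.choice(v) for k, v in space.items()}
        s = score_fn(params)  # e.g. cross-validated accuracy
        if s > best_score:
            best, best_score = params, s
    return best, best_score
```

Metaheuristics like NMRA improve on this baseline by letting good candidates guide where the next ones are sampled, rather than drawing each combination independently.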
Kale M.B., Wankhede N.L., Pawar R.S., Ballal S., Kumawat R., Goswami M., Khalid M., Taksande B.G., Upaganlawar A.B., Umekar M.J., Kopalli S.R., Koppula S.
Ageing Research Reviews scimago Q1 wos Q1
2024-11-01 citations by CoLab: 12
Alzheimer's disease (AD) presents a significant challenge in neurodegenerative research and clinical practice due to its complex etiology and progressive nature. The integration of artificial intelligence (AI) into the diagnosis, treatment, and prognostic modelling of AD holds promising potential to transform the landscape of dementia care. This review explores recent advancements in AI applications across various stages of AD management. In early diagnosis, AI-enhanced neuroimaging techniques, including MRI, PET, and CT scans, enable precise detection of AD biomarkers. Machine learning models analyze these images to identify patterns indicative of early cognitive decline. Additionally, AI algorithms are employed to detect genetic and proteomic biomarkers, facilitating early intervention. Cognitive and behavioral assessments have also benefited from AI, with tools that enhance the accuracy of neuropsychological tests and analyze speech and language patterns for early signs of dementia. Personalized treatment strategies have been revolutionized by AI-driven approaches. In drug discovery, virtual screening and drug repurposing, guided by predictive modelling, accelerate the identification of effective treatments. AI also aids in tailoring therapeutic interventions by predicting individual responses to treatments and monitoring patient progress, allowing for dynamic adjustment of care plans. Prognostic modelling, another critical area, utilizes AI to predict disease progression through longitudinal data analysis and risk prediction models. The integration of multi-modal data, combining clinical, genetic, and imaging information, enhances the accuracy of these predictions. Deep learning techniques are particularly effective in fusing diverse data types to uncover new insights into disease mechanisms and progression. 
Despite these advancements, challenges remain, including ethical considerations, data privacy, and the need for seamless integration of AI tools into clinical workflows. This review underscores the transformative potential of AI in AD management while highlighting areas for future research and development. By leveraging AI, the healthcare community can improve early diagnosis, personalize treatments, and predict disease outcomes more accurately, ultimately enhancing the quality of life for individuals with AD.
Ganatra H.A., Latifi S.Q., Baloglu O.
Bioengineering scimago Q3 wos Q2 Open Access
2024-09-26 citations by CoLab: 1
Purpose: To develop and validate machine learning models for predicting the length of stay (LOS) in the Pediatric Intensive Care Unit (PICU) using data from the Virtual Pediatric Systems (VPS) database. Methods: A retrospective study was conducted utilizing machine learning (ML) algorithms to analyze and predict PICU LOS based on historical patient data from the VPS database. The study included data from over 100 North American PICUs spanning the years 2015–2020. After excluding entries with missing variables and those indicating recovery from cardiac surgery, the dataset comprised 123,354 patient encounters. Various ML models, including Support Vector Machine, Stochastic Gradient Descent Classifier, K-Nearest Neighbors, Decision Tree, Gradient Boosting, CatBoost, and Recurrent Neural Networks (RNNs), were evaluated for their accuracy in predicting PICU LOS at thresholds of 24 h, 36 h, 48 h, 72 h, 5 days, and 7 days. Results: Gradient Boosting, CatBoost, and RNN models demonstrated the highest accuracy, particularly at the 36 h and 48 h thresholds, with accuracy rates between 70 and 73%. These results far outperform traditional statistical and existing prediction methods that report accuracy of only around 50%, which is effectively unusable in the practical setting. These models also exhibited balanced performance between sensitivity (up to 74%) and specificity (up to 82%) at these thresholds. Conclusions: ML models, particularly Gradient Boosting, CatBoost, and RNNs, show moderate effectiveness in predicting PICU LOS with accuracy slightly over 70%, outperforming previously reported human predictions. This suggests potential utility in enhancing resource and staffing management in PICUs. However, further improvements through training on specialized databases can potentially achieve better accuracy and clinical applicability.
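The study frames LOS prediction as a series of binary classifications at fixed thresholds (24 h, 36 h, 48 h, ...) and evaluates sensitivity and specificity at each. As a minimal sketch of that evaluation setup, the following binarizes hypothetical stays at a 48 h cutoff and scores a hypothetical model's calls; the numbers are illustrative, not from the VPS data.

```python
def binarize_los(los_hours, threshold):
    """Label 1 if the stay exceeds the threshold, else 0."""
    return [1 if h > threshold else 0 for h in los_hours]

def sensitivity_specificity(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical encounters: actual LOS in hours and a model's 48 h calls
los = [12, 30, 55, 70, 20, 100]
y_true = binarize_los(los, 48)   # → [0, 0, 1, 1, 0, 1]
y_pred = [0, 1, 1, 1, 0, 0]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(sens, spec)
```

Reporting both numbers per threshold, as the authors do, exposes the trade-off a single accuracy figure hides: a model can hit 70% accuracy by over-calling long stays (high sensitivity, low specificity) or the reverse.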
Waqas A., Tripathi A., Ramachandran R.P., Stewart P.A., Rasool G.
2024-07-25 citations by CoLab: 14
Cancer research encompasses data across various scales, modalities, and resolutions, from screening and diagnostic imaging to digitized histopathology slides to various types of molecular data and clinical records. The integration of these diverse data types for personalized cancer care and predictive modeling holds the promise of enhancing the accuracy and reliability of cancer screening, diagnosis, and treatment. Traditional analytical methods, which often focus on isolated or unimodal information, fall short of capturing the complex and heterogeneous nature of cancer data. The advent of deep neural networks has spurred the development of sophisticated multimodal data fusion techniques capable of extracting and synthesizing information from disparate sources. Among these, Graph Neural Networks (GNNs) and Transformers have emerged as powerful tools for multimodal learning, demonstrating significant success. This review presents the foundational principles of multimodal learning including oncology data modalities, taxonomy of multimodal learning, and fusion strategies. We delve into the recent advancements in GNNs and Transformers for the fusion of multimodal data in oncology, spotlighting key studies and their pivotal findings. We discuss the unique challenges of multimodal learning, such as data heterogeneity and integration complexities, alongside the opportunities it presents for a more nuanced and comprehensive understanding of cancer. Finally, we present some of the latest comprehensive multimodal pan-cancer data sources. By surveying the landscape of multimodal data integration in oncology, our goal is to underline the transformative potential of multimodal GNNs and Transformers. Through technological advancements and the methodological innovations presented in this review, we aim to chart a course for future research in this promising field. 
This review may be the first that highlights the current state of multimodal modeling applications in cancer using GNNs and transformers, presents comprehensive multimodal oncology data sources, and sets the stage for multimodal evolution, encouraging further exploration and development in personalized cancer care.
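The review's taxonomy of fusion strategies distinguishes, at its simplest, feature-level (early) fusion from decision-level (late) fusion. The GNN and Transformer architectures it surveys are far richer, but the two baseline strategies can be sketched in a few lines; the modality vectors below are hypothetical placeholders for real embeddings.

```python
def early_fusion(modalities):
    """Feature-level fusion: concatenate per-modality vectors into one input."""
    fused = []
    for vec in modalities:
        fused.extend(vec)
    return fused

def late_fusion(scores, weights=None):
    """Decision-level fusion: weighted average of per-modality model scores."""
    weights = weights or [1 / len(scores)] * len(scores)
    return sum(w * s for w, s in zip(weights, scores))

# Hypothetical embeddings from three oncology modalities
imaging   = [0.2, 0.7]       # e.g., histopathology slide features
molecular = [0.1, 0.4, 0.9]  # e.g., expression profile
clinical  = [1.0]            # e.g., encoded clinical stage

fused = early_fusion([imaging, molecular, clinical])  # one joint feature vector
risk = late_fusion([0.8, 0.6, 0.7])                   # one score per modality model
print(fused, risk)
```

Early fusion lets a single downstream model learn cross-modal interactions but requires all modalities per patient; late fusion tolerates missing modalities at the cost of modelling interactions only at the score level. The GNN/Transformer approaches reviewed here aim for the middle ground: learned, attention-weighted interactions across heterogeneous inputs.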
Sinha T., Khan A., Awan M., Bokhari S.F., Ali K., Amir M., Jadhav A.N., Bakht D., Puli S.T., Burhanuddin M.
Cureus wos Q3
2024-05-28 citations by CoLab: 3

Top-30

[Charts: top citing journals and publishers]
