Open Access

Radiation Medicine and Protection

Elsevier
ISSN: 2666-5557

SCImago
Q2
SJR
0.323
CiteScore
2.8
Categories
Emergency Medical Services
Public Health, Environmental and Occupational Health
Radiological and Ultrasound Technology
Radiology, Nuclear Medicine and Imaging
Areas
Health Professions
Medicine
Years of issue
2020-2025
Journal name
Radiation Medicine and Protection
Publications
189
Citations
593
h-index
12
Top-3 citing journals
Top-3 countries
China (111 publications)
USA (14 publications)
Morocco (5 publications)

Most cited in 5 years

Publications found: 560
Automated Detection of Hydrocephalus in Pediatric Head Computed Tomography Using VGG 16 CNN Deep Learning Architecture and Based Automated Segmentation Workflow for Ventricular Volume Estimation
Sekkat H., Khallouqi A., Rhazouani O.E., Halimi A.
Springer Nature
Journal of Imaging Informatics in Medicine 2025 citations by CoLab: 0
Lumos: Software for Multi-level Multi-reader Comparison of Cardiovascular Magnetic Resonance Late Gadolinium Enhancement Scar Quantification
Reisdorf P., Gavrysh J., Ammann C., Fenski M., Kolbitsch C., Lange S., Hennemuth A., Schulz-Menger J., Hadler T.
Springer Nature
Journal of Imaging Informatics in Medicine 2025 citations by CoLab: 0  |  Abstract
Abstract Cardiovascular magnetic resonance imaging (CMR) offers state-of-the-art myocardial tissue differentiation. The CMR technique late gadolinium enhancement (LGE) currently provides the noninvasive gold standard for the detection of myocardial fibrosis. Typically, thresholding methods are used for fibrotic scar tissue quantification. A major challenge for standardized CMR assessment is the large variation in estimated scar across different methods. To improve quality assurance for LGE scar quantification, a multi-reader comparison tool, “Lumos”, was developed to support quality control for scar quantification methods. The thresholding methods and an exact rasterization approach were implemented, as well as a graphical user interface (GUI) with statistical and case-specific tabs. Twenty LGE cases were considered, half of them including artifacts, and clinical results were computed for eight scar quantification methods. Lumos was successfully implemented as a multi-level multi-reader comparison software, and differences between methods can be seen in the statistical results. Histograms visualize confounding effects of different methods. Connecting the statistical level with the case level allows statistical differences to be traced back to sources of differences in the threshold calculation. Visualizing the underlying groundwork for the different methods in the myocardial histogram makes it possible to identify causes of different thresholds. Lumos showed the differences in clinical results between cases with and without artifacts. A video demonstration of Lumos is offered as supplementary material 1. Lumos allows a multi-reader comparison for LGE scar quantification that offers insights into the origin of reader differences.
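The thresholding methods referenced in the abstract commonly label scar as myocardial pixels brighter than the remote-myocardium mean plus n standard deviations (the n-SD rule). A minimal NumPy sketch of that general rule, with hypothetical intensity values; this illustrates the technique, not Lumos's actual implementation:

```python
import numpy as np

def nsd_scar_mask(myocardium, remote, n=5.0):
    """Label myocardial pixels as scar when their intensity exceeds
    mean(remote) + n * std(remote) -- the common n-SD thresholding rule."""
    threshold = remote.mean() + n * remote.std()
    return myocardium > threshold

# Hypothetical intensities: remote myocardium around 100, enhanced scar around 300
remote = np.array([95.0, 100.0, 105.0, 98.0, 102.0])
myocardium = np.array([99.0, 101.0, 310.0, 305.0, 97.0])
mask = nsd_scar_mask(myocardium, remote, n=5.0)
scar_fraction = mask.mean()  # fraction of myocardial pixels labelled scar
```

Because the estimated scar fraction depends directly on the choice of n and the remote-region statistics, different methods can disagree substantially on the same case, which is the variation Lumos is designed to expose.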
Radiology AI Lab: Evaluation of Radiology Applications with Clinical End-Users
Paalvast O., Sevenster M., Hertgers O., de Bliek H., Wijn V., Buil V., Knoester J., Vosbergen S., Lamb H.
Springer Nature
Journal of Imaging Informatics in Medicine 2025 citations by CoLab: 0  |  Abstract
Abstract Despite the approval of over 200 artificial intelligence (AI) applications for radiology in the European Union, widespread adoption in clinical practice remains limited. Current assessments of AI applications often rely on post-hoc evaluations, lacking the granularity to capture real-time radiologist-AI interactions. The purpose of the study is to realise the Radiology AI lab for real-time, objective measurement of the impact of AI applications on radiologists’ workflows. We proposed the user-state sensing framework (USSF) to structure the sensing of radiologist-AI interactions in terms of personal, interactional, and contextual states. Guided by the USSF, a lab was established using three non-invasive biometric measurement techniques: eye-tracking, heart rate monitoring, and facial expression analysis. We conducted a pilot test with four radiologists of varying experience levels, who read ultra-low-dose (ULD) CT cases in (1) standard PACS and (2) manually annotated (to mimic AI) PACS workflows. Interpretation time, eye-tracking metrics, heart rate variability (HRV), and facial expressions were recorded and analysed. The Radiology AI lab was successfully realised as an initial physical iteration of the USSF at a tertiary referral centre. Radiologists participating in the pilot test read 32 ULDCT cases (mean age, 52 years ± 23 (SD); 17 male; 16 cases with abnormalities). Cases were read on average in 4.1 ± 2.2 min (standard PACS) and 3.9 ± 1.9 min (AI-annotated PACS), with no significant difference (p = 0.48). Three out of four radiologists showed significant shifts (p < 0.02) in eye-tracking metrics, including saccade duration, saccade quantity, fixation duration, fixation quantity, and pupil diameter, when using the AI-annotated workflow. These changes align with prior findings linking such metrics to increased competency and reduced cognitive load, suggesting a more efficient visual search strategy in AI-assisted interpretation. Although HRV metrics did not correlate with experience, when combined with facial expression analysis, they helped identify key moments during the pilot test. The Radiology AI lab was successfully realised, implementing personal, interactional, and contextual states of the user-state sensing framework, enabling objective analysis of radiologists’ workflows, and effectively capturing relevant biometrics. Future work will focus on expanding sensing of the contextual state of the user-state sensing framework, refining baseline determination, and continuing investigation of AI-enabled tools in radiology workflows.
Radiomics with Ultrasound Radiofrequency Data for Improving Evaluation of Duchenne Muscular Dystrophy
Yan D., Li Q., Chuang Y., Lin C., Shieh J., Weng W., Tsui P.
Springer Nature
Journal of Imaging Informatics in Medicine 2025 citations by CoLab: 0
AI-Based 3D Liver Segmentation and Volumetric Analysis in Living Donor Data
Mun S.B., Choi S.T., Kim Y.J., Kim K.G., Lee W.S.
Springer Nature
Journal of Imaging Informatics in Medicine 2025 citations by CoLab: 0
A Two-Stage Lightweight Deep Learning Framework for Mass Detection and Segmentation in Mammograms Using YOLOv5 and Depthwise SegNet
Manolakis D., Bizopoulos P., Lalas A., Votis K.
Springer Nature
Journal of Imaging Informatics in Medicine 2025 citations by CoLab: 0  |  Abstract
Abstract Ensuring strict medical data privacy standards while delivering efficient and accurate breast cancer segmentation is a critical challenge. This paper addresses this challenge by proposing a lightweight solution capable of running directly in the user’s browser, ensuring that medical data never leave the user’s computer. Our proposed solution consists of a two-stage model: the pre-trained nano YoloV5 variation handles the task of mass detection, while a lightweight neural network model of just 20k parameters and an inference time of 21 ms per image addresses the segmentation problem. This highly efficient model, in terms of inference speed and memory consumption, was created by combining well-known techniques such as the SegNet architecture and depthwise separable convolutions. The detection model achieves an mAP@50 of 50.3% on the CBIS-DDSM dataset and 68.2% on the INbreast dataset. Despite its size, our segmentation model produces high performance levels on the CBIS-DDSM (81.0% IoU, 89.4% Dice) and INbreast (77.3% IoU, 87.0% Dice) datasets.
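The small parameter count described above is largely a consequence of depthwise separable convolutions, which factor a standard convolution into a per-channel spatial filter followed by a 1×1 pointwise channel mix. A quick sketch of the parameter arithmetic (biases ignored; the channel sizes are illustrative, not taken from the paper):

```python
def conv_params(c_in, c_out, k):
    """Weight count of a standard k x k convolution (biases ignored)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k filter per input channel, then a 1 x 1 pointwise
    convolution mixing channels (biases ignored)."""
    return c_in * k * k + c_in * c_out

standard = conv_params(64, 128, 3)                  # 73728 weights
separable = depthwise_separable_params(64, 128, 3)  # 8768 weights
reduction = standard / separable                    # roughly 8.4x fewer weights
```

For 3×3 kernels the saving approaches a factor of k² as the output channel count grows, which is how a segmentation network can be squeezed to around 20k parameters.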
Facilitating Radiograph Interpretation: Refined Generative Models for Precise Bone Suppression in Chest X-rays
Ibrahim S., Selim S., Elattar M.
Springer Nature
Journal of Imaging Informatics in Medicine 2025 citations by CoLab: 0
SADiff: A Sinogram-Aware Diffusion Model for Low-Dose CT Image Denoising
Niknejad Mazandarani F., Babyn P., Alirezaie J.
Springer Nature
Journal of Imaging Informatics in Medicine 2025 citations by CoLab: 0
I-BrainNet: Deep Learning and Internet of Things (DL/IoT)–Based Framework for the Classification of Brain Tumor
Ibrahim A.U., Engo G.M., Ame I., Nwekwo C.W., Al-Turjman F.
Springer Nature
Journal of Imaging Informatics in Medicine 2025 citations by CoLab: 0
A Thyroid Nodule Ultrasound Image Grading Model Integrating Medical Prior Knowledge
Chen H., Liu C., Cheng X., Jiang C., Wang Y.
Springer Nature
Journal of Imaging Informatics in Medicine 2025 citations by CoLab: 0
Robust Automatic Grading of Blunt Liver Trauma in Contrast-Enhanced Ultrasound Using Label-Noise-Resistant Models
Zhang T., Li R., Zhong Z., Zhang X., Liu T., Zhou G., Lv F.
Springer Nature
Journal of Imaging Informatics in Medicine 2025 citations by CoLab: 0
Subtraction of Temporally Sequential Digital Mammograms: Prediction and Localization of Near-Term Breast Cancer Occurrence
Loizidou K., Skouroumouni G., Savvidou G., Constantinidou A., Vlachou E.O., Yiallourou A., Pitris C., Nikolaou C.
Springer Nature
Journal of Imaging Informatics in Medicine 2025 citations by CoLab: 0  |  Abstract
Abstract The objective is to predict a possible near-term occurrence of a breast mass after two consecutive screening rounds with normal mammograms. For the purposes of this study, conducted between 2020 and 2024, three consecutive rounds of mammograms were collected from 75 women, 46 to 79 years old. Successive screenings had an average interval of approximately 2 years. In each case, two mammographic views of each breast were collected, resulting in a dataset with a total of 450 images (3 × 2 × 75). The most recent mammogram was considered the “future” screening round and provided the location of a biopsy-confirmed malignant mass, serving as the ground truth for the training. The two normal previous mammograms (“prior” and “current”) were processed and a new subtracted image was created for the prediction. Region segmentation and post-processing were then applied, along with image feature extraction and selection. The selected features were incorporated into several classifiers and, by applying leave-one-patient-out and k-fold cross-validation per patient, the regions of interest were characterized as benign or possible future malignancy. Study participants included 75 women (mean age, 62.5 ± 7.2 years; median age, 62 years). Feature selection from benign and possible future malignancy areas revealed that 14 features provided the best classification. The most accurate classification performance was achieved using ensemble voting, with 98.8% accuracy, 93.6% sensitivity, 98.8% specificity, and 0.96 AUC. Given the success of this algorithm, its clinical application could enable earlier diagnosis and improve prognosis for patients identified as at risk.
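Leave-one-patient-out cross-validation, as used above, holds out every image of one patient per fold so that the same patient never appears in both training and test data. A minimal pure-Python sketch with hypothetical (patient, image) pairs, illustrating the splitting scheme rather than the paper's full pipeline:

```python
def leave_one_patient_out(samples):
    """Yield (train, test) splits in which the test fold holds every sample
    of exactly one patient, so no patient appears on both sides of a split."""
    patients = sorted({patient for patient, _ in samples})
    for held_out in patients:
        train = [s for s in samples if s[0] != held_out]
        test = [s for s in samples if s[0] == held_out]
        yield train, test

# Hypothetical (patient_id, image_id) pairs: multiple images per patient
data = [("p1", "a"), ("p1", "b"), ("p2", "c"), ("p3", "d")]
splits = list(leave_one_patient_out(data))  # 3 patients -> 3 folds
```

Grouping by patient rather than by image prevents optimistic bias from near-duplicate views of the same breast leaking across the train/test boundary.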
Landscape of 2D Deep Learning Segmentation Networks Applied to CT Scan from Lung Cancer Patients: A Systematic Review
Mehrnia S.S., Safahi Z., Mousavi A., Panahandeh F., Farmani A., Yuan R., Rahmim A., Salmanpour M.R.
Springer Nature
Journal of Imaging Informatics in Medicine 2025 citations by CoLab: 0
A Novel Pipeline for Adrenal Gland Segmentation: Integration of a Hybrid Post-Processing Technique with Deep Learning
Fayemiwo M., Gardiner B., Harkin J., McDaid L., Prakash P., Dennedy M.
Springer Nature
Journal of Imaging Informatics in Medicine 2025 citations by CoLab: 0  |  Abstract
Abstract Accurate segmentation of adrenal glands from CT images is essential for enhancing computer-aided diagnosis and surgical planning. However, the small size, irregular shape, and proximity to surrounding tissues make this task highly challenging. This study introduces a novel pipeline that significantly improves the segmentation of left and right adrenal glands by integrating advanced pre-processing techniques and a robust post-processing framework. Utilising a 2D UNet architecture with various backbones (VGG16, ResNet34, InceptionV3), the pipeline leverages test-time augmentation (TTA) and targeted removal of unconnected regions to enhance accuracy and robustness. Our results demonstrate a substantial improvement, with a 38% increase in the Dice similarity coefficient for the left adrenal gland and an 11% increase for the right adrenal gland on the AMOS dataset, achieved by the InceptionV3 model. Additionally, the pipeline significantly reduces false positives, underscoring its potential for clinical applications and its superiority over existing methods. These advancements make our approach a crucial contribution to the field of medical image segmentation.
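Test-time augmentation (TTA), used in the pipeline above, averages a model's predictions over augmented copies of the input after mapping each prediction back to the original orientation. A minimal NumPy sketch with a stand-in model and flip augmentations only; the actual augmentations and model in the paper are not specified here:

```python
import numpy as np

def tta_predict(predict, image):
    """Average predictions over flip augmentations, mapping each prediction
    back to the original orientation before averaging."""
    variants = [
        (lambda x: x, lambda p: p),   # identity
        (np.fliplr, np.fliplr),       # horizontal flip and its inverse
        (np.flipud, np.flipud),       # vertical flip and its inverse
    ]
    predictions = [undo(predict(augment(image))) for augment, undo in variants]
    return np.mean(predictions, axis=0)

# Stand-in "model": probability proportional to pixel intensity
model = lambda x: x / x.max()
image = np.array([[1.0, 2.0], [3.0, 4.0]])
averaged = tta_predict(model, image)
```

Averaging over geometrically consistent views smooths out orientation-dependent errors, which is one reason TTA tends to improve segmentation robustness for small, irregular structures like the adrenal glands.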
Spatial–Temporal Information Fusion for Thyroid Nodule Segmentation in Dynamic Contrast-Enhanced MRI: A Novel Approach
Han B., Yang Q., Tao X., Wu M., Yang L., Deng W., Cui W., Luo D., Wan Q., Liu Z., Zhang N.
Springer Nature
Journal of Imaging Informatics in Medicine 2025 citations by CoLab: 0

Top-100

Citing journals

(Chart: top citing journals; individual journal names and counts are not recoverable from the page.)

Citing publishers

(Chart: citing publishers; individual publisher names and counts are not recoverable from the page.)

Publishing organizations

(Chart: publishing organizations; individual organization names and counts are not recoverable from the page.)

Publishing countries

China, 111, 58.73%
USA, 14, 7.41%
Morocco, 5, 2.65%
Russia, 4, 2.12%
India, 4, 2.12%
Iran, 3, 1.59%
Australia, 2, 1.06%
Austria, 2, 1.06%
Serbia, 2, 1.06%
Bangladesh, 1, 0.53%
United Kingdom, 1, 0.53%
Gabon, 1, 0.53%
Ghana, 1, 0.53%
Cameroon, 1, 0.53%
Congo-Brazzaville, 1, 0.53%
Malaysia, 1, 0.53%
Nigeria, 1, 0.53%
Pakistan, 1, 0.53%
Poland, 1, 0.53%
Turkey, 1, 0.53%
Central African Republic, 1, 0.53%
Switzerland, 1, 0.53%
Japan, 1, 0.53%