Journal of Clinical Monitoring and Computing

Springer Nature
ISSN: 1387-1307, 1573-2614

SCImago: Q2
WOS: Q2
Impact factor: 2
SJR: 0.594
CiteScore: 4.3
Categories: Anesthesiology and Pain Medicine; Critical Care and Intensive Care Medicine; Health Informatics
Areas: Medicine
Years of issue: 1997-2002, 2004-2025
Journal names: Journal of Clinical Monitoring and Computing; J CLIN MONIT COMPUT
Publications: 2,639
Citations: 31,752
h-index: 61
Top-3 countries
USA (755 publications)
Germany (278 publications)
Netherlands (177 publications)

Publications found: 3204
An Indoor Laser Inertial SLAM Method Fusing Semantics and Planes
Wu C., Zhong R., Hu Q., Wu H., Wang Z., Xu M., Yuan X.
Q2 | Wiley | Photogrammetric Record, 2025 | Citations by CoLab: 0
Abstract: LiDAR-based simultaneous localization and mapping (LiDAR SLAM) is widely used for high-precision 3D mapping in complex environments, especially in non-contact remote sensing and geographic information systems. However, affected by factors such as sensor errors and dynamic environments, SLAM methods are prone to accumulating errors, which degrades their accuracy and reliability. In this article, we propose a LiDAR SLAM odometry optimization method, Semantic and Planar Constraint SLAM (SPC-SLAM). The method strengthens planar constraints by introducing semantic information and combines them with factor graph optimization to improve the accuracy of key-frame pose estimation. In addition, we design a pseudo-truth-based threshold judgment mechanism that decides whether the semantic segmentation step needs to be performed, keeping the SLAM pipeline as efficient as possible. We conducted comparative experiments on part of the public SubT-MRS dataset and on self-acquired campus data. The results show that, in the complex indoor environments we chose, LIO-SAM cannot complete the full mapping under its initial parameters, while the overall absolute trajectory error of SPC-SLAM is reduced by about 65% compared with FAST-LIO2, demonstrating the method's potential for accurate indoor mapping and 3D imaging.
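The pseudo-truth-based threshold judgment described above lends itself to a compact illustration. The following Python/NumPy sketch is not the paper's implementation: it assumes drift is measured as the translation gap between the current odometry estimate and a pseudo-truth pose, and the function names and threshold value are illustrative only.

# Minimal sketch (assumption: drift is the translation error against a
# pseudo-truth pose; names and threshold are illustrative, not the paper's).
import numpy as np

def pose_deviation(pose_est: np.ndarray, pose_pseudo_truth: np.ndarray) -> float:
    """Translation gap between the odometry estimate and a pseudo-truth pose.
    Both poses are 4x4 homogeneous transforms."""
    return float(np.linalg.norm(pose_est[:3, 3] - pose_pseudo_truth[:3, 3]))

def needs_semantic_segmentation(pose_est, pose_pseudo_truth, threshold_m=0.05) -> bool:
    """Trigger the expensive semantic segmentation step only when the estimate
    drifts beyond the threshold, keeping the SLAM pipeline efficient."""
    return pose_deviation(pose_est, pose_pseudo_truth) > threshold_m

if __name__ == "__main__":
    est = np.eye(4); est[:3, 3] = [0.10, 0.02, 0.00]
    ref = np.eye(4); ref[:3, 3] = [0.08, 0.02, 0.00]
    print(needs_semantic_segmentation(est, ref))  # False: only 2 cm of drift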
An Integrated Segmentation Framework Utilizing Mamba Fusion for Optical and SAR Images
Ma M., Liu C., Li Q., Liu Y., Zhang D., Dun Y.
Q2 | Wiley | Photogrammetric Record, 2025 | Citations by CoLab: 0
Abstract: Semantic segmentation of multimodal remote sensing images is an effective approach to enhancing segmentation accuracy. Optical and synthetic aperture radar (SAR) images capture ground features from distinct perspectives, offering varied information for ground observation. Effectively fusing the information of these two modalities and performing multimodal segmentation remains a promising yet challenging task owing to their complementary nature and significant differences. Existing methods tend to ignore spatial-dimension information and cannot bridge the semantic gap between the modalities. The recently proposed Selective Structured State Space Model (Mamba), however, provides an opportunity for multimodal fusion. We therefore propose a segmentation framework based on Mamba fusion of optical and SAR images. The framework introduces a novel fusion module inspired by the principle of Mamba. This module selects effective features from the different modalities for cross-fusion within a global sensing field, facilitating mutual compensation between optical and SAR image features and reducing the semantic gap. The fused features are then segmented by a decoder that incorporates the Atrous Spatial Pyramid Pooling (ASPP) technique. On the WHU-OPT-SAR dataset, the method outperforms other state-of-the-art deep learning approaches, achieving an overall accuracy (OA) of 84.13% and a Kappa statistic of 76.29%.
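As a rough illustration of the cross-fusion idea, here is a minimal PyTorch sketch. It is not the paper's module: the Mamba-style selective scan is replaced by a simple learned gate computed from both modalities, and the class and parameter names are assumptions made for the example.

# Minimal sketch of a cross-fusion block for optical/SAR feature maps.
# Assumption: a learned sigmoid gate stands in for the selective scan.
import torch
import torch.nn as nn

class CrossGateFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Gates predicted from the concatenated modalities select which
        # features of each branch to pass on to the fused representation.
        self.gate_opt = nn.Sequential(nn.Conv2d(2 * channels, channels, 1), nn.Sigmoid())
        self.gate_sar = nn.Sequential(nn.Conv2d(2 * channels, channels, 1), nn.Sigmoid())
        self.mix = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, f_opt: torch.Tensor, f_sar: torch.Tensor) -> torch.Tensor:
        joint = torch.cat([f_opt, f_sar], dim=1)
        # Each modality is re-weighted by a gate computed from both,
        # so complementary features can compensate for each other.
        f_opt_sel = self.gate_opt(joint) * f_opt
        f_sar_sel = self.gate_sar(joint) * f_sar
        return self.mix(torch.cat([f_opt_sel, f_sar_sel], dim=1))

# Usage: fuse two 64-channel feature maps of size 128x128.
fusion = CrossGateFusion(64)
fused = fusion(torch.randn(1, 64, 128, 128), torch.randn(1, 64, 128, 128))
print(fused.shape)  # torch.Size([1, 64, 128, 128])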
U2DDS‐Net: A New Architecture Based on U2Net With Disaster Type for Building Damage Assessment Under Natural Disasters
Wang B., Zhao C., Li J., Sheng Q., Ling X.
Q2 | Wiley | Photogrammetric Record, 2025 | Citations by CoLab: 0
Abstract: Building damage assessment in the face of natural disasters is crucial for economic development, disaster relief, and post-disaster reconstruction. However, when extracting difference features from high-resolution pre- and post-disaster satellite image pairs, existing algorithms often overlook the disaster type, that is, the different ways in which different disasters affect buildings. To address this limitation, we propose U2DDS-Net, a two-stage model based on U2Net and the Swin Transformer. In stage 1, U2Net locates and segments buildings in pre-disaster images. In stage 2, we enhance the model with a disaster-type token and a diff swin stage module, which take the disaster type into account and extract difference information at multiple scales. Experimental results on the xBD dataset demonstrate the significant improvement achieved by our approach, which surpasses state-of-the-art performance. Our research fills the gap by considering specific disaster types, and our two-stage model provides accurate building damage assessment across various disaster scenarios.
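The disaster-type token can be pictured as an extra learned token prepended to the patch-token sequence of a transformer stage, much like a class token. The PyTorch sketch below is an assumption-laden illustration, not the U2DDS-Net code; the embedding size, number of disaster types, and names are invented for the example.

# Minimal sketch of injecting a disaster-type token into a transformer stage.
# Assumption: disaster types map to a learned embedding prepended to the
# patch-token sequence; all names and sizes are illustrative.
import torch
import torch.nn as nn

class DisasterTypeToken(nn.Module):
    def __init__(self, num_disaster_types: int, dim: int):
        super().__init__()
        self.embed = nn.Embedding(num_disaster_types, dim)

    def forward(self, patch_tokens: torch.Tensor, disaster_type: torch.Tensor) -> torch.Tensor:
        # patch_tokens: (B, N, C); disaster_type: (B,) integer labels
        token = self.embed(disaster_type).unsqueeze(1)   # (B, 1, C)
        return torch.cat([token, patch_tokens], dim=1)   # (B, N + 1, C)

# Usage: prepend a hypothetical "flood" token (id 2) to 196 patch tokens of width 96.
inject = DisasterTypeToken(num_disaster_types=6, dim=96)
tokens = inject(torch.randn(4, 196, 96), torch.tensor([2, 2, 2, 2]))
print(tokens.shape)  # torch.Size([4, 197, 96])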
SeConDA: Self‐Training Consistency Guided Domain Adaptation for Cross‐Domain Remote Sensing Image Semantic Segmentation
Zhang B., Zhang Y., Cao C., Wan Y., Yao Y., Fei L.
Q2 | Wiley | Photogrammetric Record, 2025 | Citations by CoLab: 0
Abstract: Well-trained remote sensing (RS) deep learning models often suffer a considerable decline in performance when applied to images that differ from the training data. This decline can be attributed to variations in imaging sensors, geographic location, imaging time, and radiation levels during image acquisition, and it has greatly impeded the widespread application of these models. One way to confront this challenge is a cross-domain RS image semantic segmentation network integrated with self-training consistency. The approach generates high-quality pseudo-labels for images in the target domain, which are then used to guide the training of the network. To enhance the model's ability to learn the data distributions of both the source and target domains, highly perturbed mixed samples are created by blending images from the two domains. Additionally, adversarial training is incorporated to reduce the entropy of the model's predictions, thereby mitigating the influence of noise in the pseudo-labels. As a result, the approach effectively extracts domain-invariant features and minimizes the disparities between the distributions of the different domains. In experiments on the ISPRS and LoveDA datasets across varied scenarios, the proposed methodology generalizes the model to target-domain data by mitigating the disparities between domain distributions. It effectively alleviates the domain shift caused by differences in imaging location and band combination in RS image data and achieves state-of-the-art results, validating its effectiveness.
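Two ingredients of the self-training scheme, pseudo-labelling with a confidence cut-off and source/target sample mixing, can be sketched briefly. The Python/PyTorch code below is an illustrative approximation, not the SeConDA implementation: the fixed confidence threshold and the simple rectangle-based mixing stand in for whatever scheme the paper actually uses.

# Minimal sketch of pseudo-labelling plus cross-domain sample mixing.
# Assumptions: ignore-index 255, a fixed 0.9 confidence threshold, and
# half-image pasting; all are placeholders for the paper's actual choices.
import torch

@torch.no_grad()
def pseudo_label(model, target_images, conf_threshold=0.9):
    """Predict target-domain labels; mark low-confidence pixels as ignore (255)."""
    probs = torch.softmax(model(target_images), dim=1)
    conf, labels = probs.max(dim=1)
    labels[conf < conf_threshold] = 255
    return labels

def mix_samples(src_img, src_lbl, tgt_img, tgt_lbl):
    """Paste the left half of the source sample onto the target sample,
    producing a highly perturbed mixed image/label pair."""
    mixed_img, mixed_lbl = tgt_img.clone(), tgt_lbl.clone()
    w = src_img.shape[-1] // 2
    mixed_img[..., :w] = src_img[..., :w]
    mixed_lbl[..., :w] = src_lbl[..., :w]
    return mixed_img, mixed_lbl

# Usage with a dummy 5-class "segmentation model":
model = torch.nn.Conv2d(3, 5, kernel_size=1)
imgs = torch.randn(2, 3, 64, 64)
lbls = pseudo_label(model, imgs)
mixed_img, mixed_lbl = mix_samples(imgs[0], lbls[0], imgs[1], lbls[1])
print(lbls.shape, mixed_img.shape)  # torch.Size([2, 64, 64]) torch.Size([3, 64, 64])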
THE PHOTOGRAMMETRIC RECORD INDEX TO VOLUME 39
Q2 | Wiley | Photogrammetric Record, 2024 | Citations by CoLab: 0
ISPRS Geospatial Week 2025: Photogrammetry and Remote Sensing for a better tomorrow
Q2 | Wiley | Photogrammetric Record, 2024 | Citations by CoLab: 0
6th Joint International Symposium on Deformation Monitoring
Q2 | Wiley | Photogrammetric Record, 2024 | Citations by CoLab: 0
4th EuroSDR Workshop on Point Cloud Processing
Q2 | Wiley | Photogrammetric Record, 2024 | Citations by CoLab: 0
Topographic Mapping from Space dedicated to Dr. Karsten Jacobsen's 80th Birthday
Q2 | Wiley | Photogrammetric Record, 2024 | Citations by CoLab: 0
Summer School on Earth Sensing 2025
Q2 | Wiley | Photogrammetric Record, 2024 | Citations by CoLab: 0
44th EARSeL Symposium
Q2 | Wiley | Photogrammetric Record, 2024 | Citations by CoLab: 0
59th Photogrammetric Week: Advancement in Photogrammetry, Remote Sensing and Geoinformatics
Q2 | Wiley | Photogrammetric Record, 2024 | Citations by CoLab: 0
Filtering of point clouds acquired by mobile laser scanner for digital terrain model generation in densely vegetated green architectures
Nardinocchi C., Esposito S.
Q2 | Wiley | Photogrammetric Record, 2024 | Citations by CoLab: 0
Abstract: Protection and maintenance of green architecture such as parks, countryside villas, and historic gardens are a duty of our communities. Recently, hand-held or portable mobile laser scanner (MLS) systems exploiting simultaneous localization and mapping (SLAM) have become a promising alternative to the terrestrial laser scanner (TLS) for efficiently collecting information for site maintenance in densely vegetated areas. The main goal of such surveys is normally tree parameter estimation, but a high-resolution, high-quality digital terrain model (DTM) is also an objective worth pursuing, as it is useful in later processing. This work presents a novel algorithm for terrain filtering of TLS point clouds and compares its performance with existing packages. To this aim, it evaluates the accuracy of the DTMs produced by three well-established open-source packages and by an in-house development on the test case of an ancient garden survey, in which nine scans were executed with a ZEB-HORIZON scanner. The algorithm we developed is specifically designed for creating DTMs from TLS point clouds, whereas two of the three software packages we analysed were originally developed for processing airborne laser scanning (ALS) data. As parameter setting significantly influences the quality of DTM generation, we describe this process in detail for all three algorithms. Owing to the unavailability of ground truth data, we used the terrain points generated by all four algorithms to perform a cross-validation.
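For readers unfamiliar with terrain filtering, the following Python/NumPy sketch shows the simplest form of the idea: per grid cell, keep the points lying within a small vertical tolerance of the cell's lowest point. This is a generic baseline under assumed parameters, not the in-house algorithm described in the abstract.

# Minimal grid-based ground filter: keep near-lowest points per cell.
# Assumptions: 0.5 m cells and a 0.15 m vertical tolerance, chosen only
# for illustration; real filters are considerably more elaborate.
import numpy as np

def ground_filter(points: np.ndarray, cell=0.5, dz=0.15) -> np.ndarray:
    """points: (N, 3) array of x, y, z. Returns a boolean ground mask."""
    ij = np.floor(points[:, :2] / cell).astype(np.int64)
    ij -= ij.min(axis=0)                      # shift grid indices to start at 0
    keys = ij[:, 0] * (ij[:, 1].max() + 1) + ij[:, 1]
    z_min = np.full(keys.max() + 1, np.inf)
    np.minimum.at(z_min, keys, points[:, 2])  # lowest z per grid cell
    return points[:, 2] <= z_min[keys] + dz   # near-lowest points count as ground

# Usage: a flat patch with one elevated (vegetation) point.
pts = np.array([[0.1, 0.1, 0.00], [0.2, 0.3, 0.05], [0.3, 0.2, 1.80]])
print(ground_filter(pts))  # [ True  True False]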

Citing journals: [chart]
Citing publishers: [chart]
Publishing organizations: [chart]
Publishing organizations in 5 years: [chart]

Publishing countries (publications and share of total)

USA, 755, 28.61%
Germany, 278, 10.53%
Netherlands, 177, 6.71%
Japan, 174, 6.59%
United Kingdom, 168, 6.37%
France, 163, 6.18%
Italy, 162, 6.14%
China, 159, 6.03%
Belgium, 125, 4.74%
Switzerland, 107, 4.05%
Canada, 101, 3.83%
India, 96, 3.64%
Denmark, 85, 3.22%
Spain, 85, 3.22%
Australia, 83, 3.15%
Republic of Korea, 72, 2.73%
Turkey, 69, 2.61%
Sweden, 69, 2.61%
Israel, 54, 2.05%
Finland, 40, 1.52%
Brazil, 32, 1.21%
Portugal, 26, 0.99%
Egypt, 26, 0.99%
New Zealand, 24, 0.91%
Poland, 24, 0.91%
UAE, 22, 0.83%
Argentina, 20, 0.76%
Austria, 19, 0.72%
Norway, 17, 0.64%
Greece, 16, 0.61%
Singapore, 15, 0.57%
Chile, 15, 0.57%
Saudi Arabia, 14, 0.53%
Ireland, 11, 0.42%
Russia, 9, 0.34%
Czech Republic, 8, 0.3%
Iran, 7, 0.27%
Malaysia, 7, 0.27%
Romania, 7, 0.27%
Thailand, 7, 0.27%
Mexico, 6, 0.23%
Slovenia, 5, 0.19%
South Africa, 5, 0.19%
Lithuania, 4, 0.15%
Uruguay, 4, 0.15%
Croatia, 4, 0.15%
Bosnia and Herzegovina, 3, 0.11%
Indonesia, 3, 0.11%
Qatar, 3, 0.11%
Algeria, 2, 0.08%
Hungary, 2, 0.08%
Colombia, 2, 0.08%
Kuwait, 2, 0.08%
Lebanon, 2, 0.08%
Andorra, 1, 0.04%
Bulgaria, 1, 0.04%
Grenada, 1, 0.04%
Jordan, 1, 0.04%
Kenya, 1, 0.04%
Cyprus, 1, 0.04%
Oman, 1, 0.04%
Serbia, 1, 0.04%
Slovakia, 1, 0.04%
Ethiopia, 1, 0.04%

Publishing countries in 5 years (publications and share of 5-year total)

USA, 177, 21.96%
Germany, 87, 10.79%
Italy, 73, 9.06%
Netherlands, 61, 7.57%
China, 60, 7.44%
Belgium, 59, 7.32%
France, 56, 6.95%
United Kingdom, 54, 6.7%
Japan, 53, 6.58%
Canada, 31, 3.85%
Republic of Korea, 31, 3.85%
Denmark, 28, 3.47%
Switzerland, 27, 3.35%
Spain, 26, 3.23%
Turkey, 25, 3.1%
India, 24, 2.98%
Australia, 19, 2.36%
Brazil, 16, 1.99%
UAE, 16, 1.99%
Sweden, 13, 1.61%
Egypt, 11, 1.36%
Poland, 11, 1.36%
Portugal, 10, 1.24%
Singapore, 9, 1.12%
Austria, 8, 0.99%
Argentina, 8, 0.99%
Israel, 8, 0.99%
Chile, 8, 0.99%
Finland, 7, 0.87%
Ireland, 6, 0.74%
New Zealand, 5, 0.62%
Saudi Arabia, 5, 0.62%
Thailand, 4, 0.5%
South Africa, 4, 0.5%
Greece, 3, 0.37%
Malaysia, 3, 0.37%
Norway, 3, 0.37%
Indonesia, 2, 0.25%
Qatar, 2, 0.25%
Colombia, 2, 0.25%
Mexico, 2, 0.25%
Romania, 2, 0.25%
Croatia, 2, 0.25%
Czech Republic, 2, 0.25%
Bosnia and Herzegovina, 1, 0.12%
Hungary, 1, 0.12%
Grenada, 1, 0.12%
Jordan, 1, 0.12%
Kenya, 1, 0.12%
Lithuania, 1, 0.12%
Slovakia, 1, 0.12%
Slovenia, 1, 0.12%
Uruguay, 1, 0.12%
Ethiopia, 1, 0.12%