Fraunhofer Portugal Research

Short name: AICOS
Country, city: Portugal, Porto
Publications: 221
Citations: 2,180
h-index: 22
Top-3 organizations
University of Porto (72 publications)
NOVA University Lisbon (28 publications)
Federal University of Goiás (20 publications)
Top-3 foreign organizations
Federal University of Goiás (20 publications)
Karolinska Institute (10 publications)
ETH Zurich (8 publications)

Most cited in 5 years

Barandas M., Folgado D., Fernandes L., Santos S., Abreu M., Bota P., Liu H., Schultz T., Gamboa H.
SoftwareX scimago Q2 wos Q2 Open Access
2020-01-01 citations by CoLab: 316 Abstract  
Time series feature extraction is one of the preliminary steps of conventional machine learning pipelines. Quite often, this process ends up being a time-consuming and complex task, as data scientists must combine a multitude of domain-knowledge factors with coding implementation. We present in this paper a Python package entitled Time Series Feature Extraction Library (TSFEL), which computes over 60 different features across the temporal, statistical and spectral domains. User customisation is achieved using either an online interface or a conventional Python package, for more flexibility and integration into real deployment scenarios. TSFEL is designed to support fast exploratory data analysis and feature extraction on time series, with computational cost evaluation.
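The abstract above mentions features spanning temporal, statistical and spectral domains. As a hedged illustration of what such a library automates, the sketch below hand-rolls one feature from each domain in NumPy; the `extract_features` helper and its feature names are our own illustration, not TSFEL's API.

```python
import numpy as np

def extract_features(signal, fs):
    """A tiny, illustrative subset of temporal, statistical and
    spectral features of the kind TSFEL automates (hypothetical
    helper; not the TSFEL API)."""
    x = np.asarray(signal, dtype=float)
    feats = {
        # Statistical domain
        "mean": float(np.mean(x)),
        "std": float(np.std(x)),
        # Temporal domain: average absolute first difference
        "mean_abs_diff": float(np.mean(np.abs(np.diff(x)))),
    }
    # Spectral domain: dominant frequency from the FFT magnitude
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    feats["max_freq"] = float(freqs[np.argmax(spectrum[1:]) + 1])  # skip DC
    return feats

# Example: a 5 Hz sine sampled at 100 Hz for one second
t = np.linspace(0, 1, 100, endpoint=False)
feats = extract_features(np.sin(2 * np.pi * 5 * t), fs=100)
```

A real feature library computes dozens of such features per window; the value of a package like TSFEL is precisely that these definitions are curated and benchmarked rather than rewritten per project.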
Martins J., Cardoso J.S., Soares F.
2020-08-01 citations by CoLab: 79 Abstract  
Background and Objective: Glaucoma, an eye condition that leads to permanent blindness, is typically asymptomatic and therefore difficult to diagnose in time. However, if diagnosed in time, Glaucoma can effectively be slowed down with adequate treatment; hence, an early diagnosis is of utmost importance. Nonetheless, the conventional approaches to diagnosing Glaucoma rely on expensive and bulky equipment that requires qualified experts, making it difficult, costly and time-consuming to screen large numbers of people. Consequently, new alternatives for diagnosing Glaucoma that overcome these issues should be explored. Methods: This work proposes an interpretable computer-aided diagnosis (CAD) pipeline that is capable of diagnosing Glaucoma from fundus images and running offline on mobile devices. Several public datasets of fundus images were merged and used to build Convolutional Neural Networks (CNNs) that perform segmentation and classification tasks. These networks are then used to build a pipeline for Glaucoma assessment that outputs a Glaucoma confidence level and also provides several morphological features and segmentations of relevant structures, resulting in an interpretable Glaucoma diagnosis. To assess the performance of this method in a restricted environment, the pipeline was integrated into a mobile application and its time and space complexities were assessed. Results: On the test set, the developed pipeline achieved Intersection over Union (IoU) scores of 0.91 and 0.75 for optic disc and optic cup segmentation, respectively. With regard to classification, an accuracy of 0.87, a sensitivity of 0.85 and an AUC of 0.93 were attained. Moreover, the pipeline runs on an average Android smartphone in under two seconds. Conclusions: The results demonstrate the potential of this method to contribute to an early Glaucoma diagnosis.
The proposed approach achieved similar or slightly better metrics than current CAD systems for Glaucoma assessment while running on more restricted devices. This pipeline can, therefore, be used to build accurate and affordable CAD systems that could enable large-scale Glaucoma screenings, contributing to an earlier diagnosis of this condition.
Adcock M., Fankhauser M., Post J., Lutz K., Zizlsperger L., Luft A.R., Guimarães V., Schättin A., de Bruin E.D.
Frontiers in Medicine scimago Q1 wos Q1 Open Access
2020-01-28 citations by CoLab: 77 PDF Abstract  
Aging is associated with a decline in physical functions, cognition and brain structure. Considering that human life is based on an inseparable physical-cognitive interplay, combined physical-cognitive training through exergames is a promising approach to counteract age-related impairments. The aim of this study was to assess the effects of an in-home multicomponent exergame training on [i] physical and cognitive functions and [ii] brain volume of older adults compared to a usual care control group. Thirty-seven healthy and independently living older adults aged 65 years and older were randomly assigned to an intervention (exergame training) or a control (usual care) group. Over 16 weeks, the participants of the intervention group completed three home-based exergame sessions per week (30–40 minutes each) including Tai Chi-inspired exercises, dancing and step-based cognitive games. The control participants continued with their normal daily living. Pre- and post-measurements included assessments of physical (gait parameters, functional muscle strength, balance, aerobic endurance) and cognitive (processing speed, short-term attention span, working memory, inhibition, mental flexibility) functions. T1-weighted magnetic resonance imaging was conducted to assess brain volume. Thirty-one participants (mean age = 73.9 ± 6.4 years, range = 65–90 years, 16 female) completed the study. Inhibition and working memory significantly improved post-intervention in favour of the intervention group (inhibition: F(1) = 2.537, p = .046, ηp² = .11; working memory: F(1) = 5.872, p = .015, ηp² = .02). Two measures of short-term attentional span showed improvements after training in favour of the control group (F(1) = 4.309, p = .038, ηp² = .03; F(1) = 8.504, p = .004, ηp² = .04). No significant interaction effects were evident for physical functions or brain volume. Both groups exhibited a significant decrease in gray matter volume of frontal areas and the hippocampus over time.
The findings indicate a positive influence of exergame training on executive functioning. No improvements in physical functions or brain volume were evident in this study. A better-adapted, individualized training challenge and a longer training period are suggested. Further studies are needed that assess training-related structural brain plasticity and its effect on performance, daily-life functioning and healthy aging.
Mariano J., Marques S., Ramos M.R., Gerardo F., Cunha C.L., Girenko A., Alexandersson J., Stree B., Lamanna M., Lorenzatto M., Mikkelsen L.P., Bundgård-Jørgensen U., Rêgo S., de Vries H.
2021-02-07 citations by CoLab: 76 Abstract  
Older adults are often stereotyped as having less technological ability than younger age groups. As a result, older individuals may avoid using technology due to stereotype threat, the fear of conf...
Neves I., Folgado D., Santos S., Barandas M., Campagner A., Ronzio L., Cabitza F., Gamboa H.
2021-06-01 citations by CoLab: 61 Abstract  
Treatment and prevention of cardiovascular diseases often rely on Electrocardiogram (ECG) interpretation. ECG interpretation is subjective, varies between physicians, and is prone to errors. Machine learning models are often developed and used to support doctors; however, their lack of interpretability stands as one of the main drawbacks to their widespread adoption. This paper focuses on an Explainable Artificial Intelligence (XAI) solution to make heartbeat classification more explainable using several state-of-the-art model-agnostic methods. We introduce a high-level conceptual framework for explainable time series and propose an original method that adds temporal dependency between time samples using the time series' derivative. The results were validated on the MIT-BIH arrhythmia dataset: we performed a performance analysis to evaluate whether the explanations fit the model's behaviour, and employed the 1-D Jaccard's index to compare the subsequences extracted from an interpretable model and from the XAI methods used. Our results show that using the raw signal and its derivative incorporates temporal dependency between samples and promotes classification explanation. A small but informative user study concludes this work, evaluating the potential of the visual explanations produced by our original method for adoption in real-world clinical settings, either as diagnostic aids or as a training resource.
• We present an in-depth study on the technical feasibility and practical usefulness of visual explanations for ECG classifiers
• We propose using the time series derivative to support state-of-the-art XAI methods measuring feature importance in the temporal domain
• We conducted an informative user study to evaluate the potential of visual explanations on ECGs
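The core trick described above, pairing a signal with its derivative so that attribution methods see how samples change over time, can be sketched in a few lines of NumPy; the `with_derivative` helper is our own naming, not code from the paper.

```python
import numpy as np

def with_derivative(x):
    """Stack a 1-D signal with its first-order derivative, the extra
    channel used to inject temporal dependency into model-agnostic
    attribution methods (illustrative sketch, not the authors' code)."""
    x = np.asarray(x, dtype=float)
    dx = np.gradient(x)  # central differences inside, one-sided at edges
    return np.stack([x, dx], axis=0)  # shape: (2, n_samples)

# A toy heartbeat-like bump: the derivative highlights the sharp edges
beat = np.array([0.0, 0.1, 0.9, 0.2, 0.0])
channels = with_derivative(beat)
```

A classifier or explainer fed `channels` instead of `beat` alone can attribute importance not just to a sample's amplitude but to the local slope around it.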
Santos M.S., Abreu P.H., Japkowicz N., Fernández A., Soares C., Wilk S., Santos J.
Artificial Intelligence Review scimago Q1 wos Q1
2022-03-24 citations by CoLab: 61 Abstract  
Current research on imbalanced data recognises that class imbalance is aggravated by other data-intrinsic characteristics, among which class overlap stands out as one of the most harmful. The combination of these two problems creates a new and difficult scenario for classification tasks and has been discussed in several research works over the past two decades. In this paper, we argue that although some insightful information can be derived from related research, the joint effect of class overlap and imbalance is still not fully understood, and we advocate the need to move towards a unified view of the class overlap problem in imbalanced domains. To that end, we start by performing a thorough analysis of existing literature on the joint effect of class imbalance and overlap, elaborating on important details left undiscussed in the original papers, namely the impact of data domains with different characteristics and the behaviour of classifiers with distinct learning biases. This leads to the hypothesis that class overlap comprises multiple representations, which are important to accurately measure and analyse in order to provide a full characterisation of the problem. Accordingly, we devise two novel taxonomies, one for class overlap measures and the other for class overlap-based approaches, both resonating with the distinct representations of class overlap identified. This paper therefore presents a global and unique view of the joint effect of class imbalance and overlap, from precursor work to recent developments in the field. It meticulously discusses some concepts taken as implicit in previous research, explores new perspectives in light of the limitations found, and presents new ideas that will hopefully inspire researchers to move towards a unified view of the problem and the development of suitable strategies for imbalanced and overlapped domains.
Lopes P., Silva E., Braga C., Oliveira T., Rosado L.
Applied Sciences (Switzerland) scimago Q2 wos Q2 Open Access
2022-09-20 citations by CoLab: 47 PDF Abstract  
The lack of transparency of powerful Machine Learning systems paired with their growth in popularity over the last decade led to the emergence of the eXplainable Artificial Intelligence (XAI) field. Instead of focusing solely on obtaining highly performing models, researchers also develop explanation techniques that help better understand the system's reasoning for a particular output. An explainable system can be designed, developed, and evaluated from different perspectives, which enables researchers from different disciplines to work together on this topic. However, the multidisciplinary nature of XAI systems creates new challenges for condensing and structuring adequate methodologies to design and evaluate such systems. This paper presents a survey of Human-centred and Computer-centred methods to evaluate XAI systems. We propose a new taxonomy to categorize XAI evaluation methods more clearly and intuitively. This categorization gathers knowledge from different disciplines and organizes the evaluation methods according to a set of categories that represent key properties of XAI systems. Possible ways to use the proposed taxonomy in the design and evaluation of XAI systems are also discussed, alongside some concluding remarks and future directions of research.
Cabitza F., Campagner A., Ronzio L., Cameli M., Mandoli G.E., Pastore M.C., Sconfienza L.M., Folgado D., Barandas M., Gamboa H.
2023-04-01 citations by CoLab: 44 Abstract  
In this paper, we study human-AI collaboration protocols, a design-oriented construct aimed at establishing and evaluating how humans and AI can collaborate in cognitive tasks. We applied this construct in two user studies involving 12 specialist radiologists (the knee MRI study) and 44 ECG readers of varying expertise (the ECG study), who evaluated 240 and 20 cases, respectively, in different collaboration configurations. We confirm the utility of AI support but find that XAI can be associated with a “white-box paradox”, producing a null or detrimental effect. We also find that the order of presentation matters: AI-first protocols are associated with higher diagnostic accuracy than human-first protocols, and with higher accuracy than both humans and AI alone. Our findings identify the best conditions for AI to augment human diagnostic skills, rather than trigger dysfunctional responses and cognitive biases that can undermine decision effectiveness.
Resende C., Folgado D., Oliveira J., Franco B., Moreira W., Oliveira-Jr A., Cavaleiro A., Carvalho R.
Sensors scimago Q1 wos Q2 Open Access
2021-07-08 citations by CoLab: 42 PDF Abstract  
Industry 4.0, allied with the growth and democratization of Artificial Intelligence (AI) and the advent of IoT, is paving the way for the complete digitization and automation of industrial processes. Maintenance is one of these processes, where the introduction of a predictive approach, as opposed to the traditional techniques, is expected to considerably improve industry maintenance strategies with gains such as reduced downtime, improved equipment effectiveness, lower maintenance costs, increased return on assets, risk mitigation, and, ultimately, profitable growth. With predictive maintenance, dedicated sensors monitor the critical points of assets. The sensor data then feed into machine learning algorithms that can infer the asset health status and inform operators and decision-makers. With this in mind, in this paper, we present TIP4.0, a platform for predictive maintenance based on a modular software solution for edge computing gateways. TIP4.0 is built around Yocto, which makes it readily available and compliant with Commercial Off-the-Shelf (COTS) or proprietary hardware. TIP4.0 was conceived with an industry mindset, with communication interfaces that allow it to serve sensor networks on the shop floor and a modular software architecture that allows it to be easily adjusted to new deployment scenarios. To showcase its potential, the TIP4.0 platform was validated on COTS hardware, and we considered a public dataset for the simulation of predictive maintenance scenarios. We used a Convolutional Neural Network (CNN) architecture, which provided competitive performance over the state-of-the-art approaches, while being approximately four and two times faster than uncompressed model inference on the Central Processing Unit (CPU) and Graphics Processing Unit (GPU), respectively. These results highlight the capabilities of distributed large-scale edge computing in industrial scenarios.
Rodrigues J., Liu H., Folgado D., Belo D., Schultz T., Gamboa H.
Biosensors scimago Q1 wos Q2 Open Access
2022-12-19 citations by CoLab: 29 PDF Abstract  
Biosignal-based technology has been increasingly available in our daily life, being a critical information source. Wearable biosensors have been widely applied in, among others, biometrics, sports, health care, rehabilitation assistance, and edutainment. Continuous data collection from biodevices provides a valuable volume of information, which needs to be curated and prepared before serving machine learning applications. One of the universal preparation steps is data segmentation and labelling/annotation. This work proposes a practical and manageable way to automatically segment and label single-channel or multimodal biosignal data using a self-similarity matrix (SSM) computed from a feature-based representation of the signals. Applied to public biosignal datasets and a benchmark for change point detection, the proposed approach delivered clear visual support for interpreting the biosignals through the SSM, while performing accurate automatic segmentation with the help of the novelty function and associating segments based on their similarity profiles. The proposed method outperformed other algorithms in most of a series of automatic biosignal segmentation tasks; of equal appeal, it provides an intuitive visualization for information retrieval from multimodal biosignals.
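The SSM-plus-novelty-function pipeline described above can be sketched compactly: compute pairwise similarity between per-window feature vectors, then slide a checkerboard kernel along the diagonal so that peaks mark regime changes. This is a simplified sketch of the general technique (our function names, cosine similarity, and an untapered Foote-style kernel), not the paper's implementation.

```python
import numpy as np

def self_similarity(features):
    """Cosine self-similarity matrix over per-window feature vectors."""
    F = np.asarray(features, dtype=float)
    F = F / (np.linalg.norm(F, axis=1, keepdims=True) + 1e-12)
    return F @ F.T

def novelty_curve(ssm, k=2):
    """Slide a 2k x 2k checkerboard kernel along the SSM diagonal;
    peaks in the resulting curve suggest segment boundaries."""
    s = np.r_[np.ones(k), -np.ones(k)]
    kernel = np.outer(s, s)  # [[+,-],[-,+]] block pattern
    n = ssm.shape[0]
    nov = np.zeros(n)
    for i in range(k, n - k):
        nov[i] = np.sum(kernel * ssm[i - k:i + k, i - k:i + k])
    return nov

# Two homogeneous regimes of 6 windows each; boundary expected at index 6
feats = np.vstack([np.tile([1.0, 0.0], (6, 1)), np.tile([0.0, 1.0], (6, 1))])
nov = novelty_curve(self_similarity(feats))
```

Inside a homogeneous regime the checkerboard's positive and negative quadrants cancel; only at a boundary between two self-similar blocks does the curve spike.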
Carvalho R., Morgado A.C., Sampaio A.F., Vasconcelos M.J.
2025-02-07 citations by CoLab: 0 Abstract  
The high prevalence of chronic wounds and their consequences for people’s quality of life makes wound treatment a highly relevant topic in the context of healthcare. One vital aspect of monitoring concerns tracking wound size evolution, which guides healthcare professionals during diagnosis and serves as a key predictor of treatment efficacy. This work proposes an automatic image segmentation and measurement framework for chronic wounds using deep learning and computer vision techniques. The wound segmentation task involved exploring three prominent segmentation models: a popular convolutional neural network (DeepLabV3+), a cutting-edge transformer approach (SegFormer) and a visual foundation model (MedSAM). Traditional computer vision techniques were further applied to infer the open wound’s width, length and area in real-world units during the wound measurement task. Separate studies were performed to assess each task’s performance and a final assessment of the complete framework that couples a wound and reference marker detection model with the developed segmentation and measurement approach. For the automatic wound segmentation, MedSAM achieved the best performance with Dice scores of 88.14% and 92.25%, applied on public AZH FU and private datasets, respectively. In the wound measurement task, the area estimation achieved a mean relative error of 5.36% for the private dataset. Concerning the overall pipeline results, MedSAM experienced a decline in performance and SegFormer emerged as the best segmentation model, achieving a Dice score of 91.55% and a 16.7% mean relative error for the area estimation (which surpasses the literature results) in the private dataset, demonstrating its applicability in clinical practice.
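The measurement step above converts segmented pixels into real-world units via a reference marker of known physical size. Under that assumption, the conversion itself is a one-line ratio; the helper below is our illustrative sketch (names and the marker convention are assumptions, not the paper's code).

```python
import numpy as np

def wound_area_cm2(wound_mask, marker_mask, marker_area_cm2):
    """Convert a wound's pixel area to cm² using a reference marker of
    known physical area visible in the same image (illustrative sketch)."""
    wound_px = np.count_nonzero(wound_mask)
    marker_px = np.count_nonzero(marker_mask)
    cm2_per_px = marker_area_cm2 / marker_px  # scale from the marker
    return wound_px * cm2_per_px

# Toy masks: a 200-pixel wound and a 100-pixel marker known to be 1 cm²
wound = np.zeros((100, 100), dtype=bool)
wound[10:30, 10:20] = True
marker = np.zeros((100, 100), dtype=bool)
marker[50:60, 50:60] = True
area = wound_area_cm2(wound, marker, marker_area_cm2=1.0)
```

Width and length follow the same idea, scaling pixel extents by the square root of the per-pixel area factor; in practice, perspective correction of the marker is also needed.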
Cerqueira V., Roque L., Soares C.
2025-01-27 citations by CoLab: 0 Abstract  
Accurate evaluation of forecasting models is essential for ensuring reliable predictions. Current practices for evaluating and comparing forecasting models focus on summarising performance into a single score, using metrics such as SMAPE. We hypothesize that averaging performance over all samples dilutes relevant information about the relative performance of models; in particular, it hides conditions in which this relative performance differs from the overall accuracy. We address this limitation by proposing a novel framework for evaluating univariate time series forecasting models from multiple perspectives, such as one-step-ahead versus multi-step-ahead forecasting. We show the advantages of this framework by comparing a state-of-the-art deep learning approach with classical forecasting techniques. While classical methods (e.g. ARIMA) are long-standing approaches to forecasting, deep neural networks (e.g. NHITS) have recently shown state-of-the-art forecasting performance on benchmark datasets. We conducted extensive experiments showing that NHITS generally performs best, but its superiority varies with forecasting conditions. For instance, concerning the forecasting horizon, NHITS only outperforms classical approaches for multi-step-ahead forecasting. Another relevant insight is that, when dealing with anomalies, NHITS is outperformed by methods such as Theta. These findings highlight the importance of evaluating forecasts from multiple dimensions.
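As a concrete illustration of the contrast the abstract draws, here is SMAPE as a single summary score next to a per-horizon breakdown, one simple instance of "multiple perspectives" (the helper names are ours; this is not the paper's framework code).

```python
import numpy as np

def smape(y_true, y_pred):
    """Symmetric mean absolute percentage error (%), the kind of single
    summary score the paper argues can dilute relevant information."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    denom = (np.abs(y_true) + np.abs(y_pred)) / 2.0
    return float(np.mean(np.abs(y_true - y_pred) / denom) * 100.0)

def smape_per_horizon(Y_true, Y_pred):
    """Score each forecasting step separately (rows = series,
    columns = steps ahead): one 'perspective' per horizon."""
    return [smape(Y_true[:, h], Y_pred[:, h]) for h in range(Y_true.shape[1])]

# Two series, two horizons: perfect at step 1, off by 10 at step 2
Y_true = np.array([[100.0, 100.0], [100.0, 100.0]])
Y_pred = np.array([[100.0, 110.0], [100.0, 90.0]])
overall = smape(Y_true.ravel(), Y_pred.ravel())
by_horizon = smape_per_horizon(Y_true, Y_pred)
```

The overall score hides that all the error sits at the second horizon; the per-horizon view makes it explicit.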
Santos R., Ribeiro B., Curioso I., Barandas M., V. Carreiro A., Gamboa H., Coelho P., Fragata J., Sousa I.
2025-01-01 citations by CoLab: 0 Abstract  
Multi-label classification tasks are relevant in healthcare, as data samples are commonly associated with multiple interdependent, non-mutually exclusive outcomes. Incomplete label information often arises due to unrecorded outcomes at planned checkpoints, varying disease testing across patients, collection constraints, or human error. Dropping partially annotated samples can reduce data size, introduce bias, and compromise accuracy. To address these issues, this study introduces CORKI (Correlation-Optimised and Robust K Nearest Neighbours Imputation for Multi-label Classification), a data-centric method for partial annotation imputation in Multi-label data. This method employs proximity measures and an optional weighting term for outcome prevalence to tackle imbalanced labels. Additionally, it leverages different modalities of correlation that consider not only variable values but also missingness patterns. CORKI’s performance was compared with a domain-knowledge-based rule system and the standard sample-dropping approach on three public and one private cardiothoracic surgery datasets with diverse missing label rates. CORKI yielded performances comparable to those of the domain-knowledge approach, establishing itself as a reliable method, while being highly generalizable. Moreover, it was able to maintain imputation accuracy in demanding partial annotation scenarios, presenting drops of only 5% for missing rates of 50%.
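To make the neighbour-based imputation idea above concrete, the sketch below fills missing binary labels with a majority vote of the nearest samples in feature space. It is a bare-bones simplification (our names and details) and omits CORKI's prevalence weighting and missingness-aware correlations.

```python
import numpy as np

def knn_label_impute(labels, features, k=3):
    """Fill missing binary labels (encoded as np.nan) with the majority
    vote of the k nearest samples with known labels; a deliberately
    simplified stand-in for CORKI, not the authors' algorithm."""
    labels = np.asarray(labels, dtype=float).copy()
    X = np.asarray(features, dtype=float)
    known = ~np.isnan(labels)
    for i in np.where(~known)[0]:
        # Euclidean distance to every sample whose label is recorded
        d = np.linalg.norm(X[known] - X[i], axis=1)
        nearest = np.argsort(d)[:k]
        labels[i] = round(float(np.mean(labels[known][nearest])))
    return labels

# Two clusters; the unlabelled sample in each cluster inherits its label
X = np.array([[0, 0], [0.1, 0], [0, 0.1], [5, 5], [5.1, 5], [5, 5.1]])
y = np.array([0.0, 0.0, np.nan, 1.0, 1.0, np.nan])
imputed = knn_label_impute(y, X)
```

In the multi-label setting this runs per label, which is where exploiting correlations between labels (and between their missingness patterns) adds value over the naive version shown here.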
Lopes F., Soares C., Cortez P.
2025-01-01 citations by CoLab: 0 Abstract  
This research addresses the challenge of generating synthetic data that resembles real-world data while preserving privacy. With privacy laws protecting sensitive information such as healthcare data, accessing sufficient training data becomes difficult, making it harder to train Machine Learning models and resulting in worse models overall. Recently, there has been increased interest in using Generative Adversarial Networks (GAN) to generate synthetic data, since they enable researchers to generate more data to train their models. GANs, however, may not be suitable for privacy-sensitive data, since they have no concern for the privacy of the generated data. We propose modifying the known Conditional Tabular GAN (CTGAN) model by incorporating a privacy-aware loss function, resulting in the Private CTGAN (PCTGAN) method. Several experiments were carried out using 10 public-domain classification datasets, comparing PCTGAN with CTGAN and the state-of-the-art privacy-preserving model, the Differential Privacy CTGAN (DP-CTGAN). The results demonstrated that PCTGAN enables users to fine-tune the privacy-fidelity trade-off by leveraging parameters and, if desired, to achieve a higher level of privacy.
Silva I., Silva I., Barros A.C.
2024-12-24 citations by CoLab: 0 Abstract  
There is a gap of insights for interaction design when it comes to maintenance work, particularly in extreme environments such as cleanrooms. In this study, we employed a human-centred approach to investigate different aspects of workflow and technology within this context, in order to gather insights about core functionalities to guide the development of a digital tool to support these workers in their everyday tasks. We combined a literature review, a comparative analysis of off-the-shelf tools, and semi-structured interviews with eight maintenance workers, including a manager. Our findings led to a set of suggested features for interaction design, but also to the identification of tensions which interaction designers should consider when designing for these contexts.
Rocha P.S., Bento N., Folgado D., Carreiro A.V., Santos M.O., de Carvalho M., Miranda B.
PLoS ONE scimago Q1 wos Q1 Open Access
2024-12-16 citations by CoLab: 0 PDF Abstract  
Objectives Cough dysfunction is a feature of patients with amyotrophic lateral sclerosis (ALS). The cough sounds carry information about the respiratory system and bulbar involvement. Our goal was to explore the association between cough sound characteristics and the respiratory and bulbar functions in ALS. Methods This was a single-center, cross-sectional, and case-control study. On-demand coughs from ALS patients and healthy controls were collected with a smartphone. A total of 31 sound features were extracted for each cough recording using time-frequency signal processing analysis. Logistic regression was applied to test the differences between patients and controls, and in patients with bulbar and respiratory impairment. Support vector machines (SVM) were employed to estimate the accuracy of classifying between patients and controls and between patients with bulbar and respiratory impairment. Multiple linear regressions were applied to examine correlations between cough sound features and clinical variables. Results Sixty ALS patients (28 with bulbar dysfunction, and 25 with respiratory dysfunction) and forty age- and gender-matched controls were recruited. Our results revealed clear differences between patients and controls, particularly within the frequency-related group of features (AUC 0.85, CI 0.79–0.91). Similar results were observed when comparing patients with and without bulbar dysfunction. Sound features related to intensity displayed the strongest correlation with disease severity, and were the most significant in distinguishing patients with and without respiratory dysfunction. Discussion We found a good relationship between specific cough sound features and clinical variables related to ALS functional disability. The findings relate well with some expected impact from ALS on both respiratory and bulbar contributions to the physiology of cough. 
Finally, our approach could be relevant for clinical practice, and it also facilitates home-based data collection.
Leites J., Cerqueira V., Soares C.
2024-11-16 citations by CoLab: 2 Abstract  
Most forecasting methods use recent past observations (lags) to model the future values of univariate time series. Selecting an adequate number of lags is important for training accurate forecasting models. Several approaches and heuristics have been devised to solve this task. However, there is no consensus about what the best approach is. Besides, lag selection procedures have been developed based on local models and classical forecasting techniques such as ARIMA. We bridge this gap in the literature by carrying out an extensive empirical analysis of different lag selection methods. We focus on deep learning methods trained in a global approach, i.e., on datasets comprising multiple univariate time series. Specifically, we use NHITS, a recently proposed architecture that has shown competitive forecasting performance. The experiments were carried out using three benchmark databases that contain a total of 2411 univariate time series. The results indicate that the lag size is a relevant parameter for accurate forecasts. In particular, excessively small or excessively large lag sizes have a considerable negative impact on forecasting performance. Cross-validation approaches show the best performance for lag selection, but this performance is comparable with simple heuristics.
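The lag-selection task above reduces to: build the auto-regressive lag matrix for each candidate lag size, score each on held-out data, and keep the best. The sketch below uses a linear model and a single hold-out tail, a deliberately simplified stand-in for the cross-validation procedures the paper compares (names are ours).

```python
import numpy as np

def make_supervised(series, n_lags):
    """Auto-regressive formulation: each target is predicted from its
    n_lags most recent past values."""
    s = np.asarray(series, dtype=float)
    X = np.array([s[i:i + n_lags] for i in range(len(s) - n_lags)])
    return X, s[n_lags:]

def select_lag_size(series, candidates):
    """Pick the lag size with the lowest hold-out error under a linear
    model (simplified sketch of validation-based lag selection)."""
    best, best_err = None, np.inf
    for n_lags in candidates:
        X, y = make_supervised(series, n_lags)
        split = int(len(y) * 0.8)
        coef, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
        err = float(np.mean(np.abs(X[split:] @ coef - y[split:])))
        if err < best_err:
            best, best_err = n_lags, err
    return best

# A periodic series: a lag window that covers the dynamics beats lag 1
series = np.sin(2 * np.pi * np.arange(200) / 20)
```

For this sinusoid an AR(2) model is exact, so any candidate covering at least two lags wins over lag 1, mirroring the paper's finding that excessively small lag sizes hurt forecasting performance.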
Silva I.O., Soares C., Cerqueira V., Rodrigues A., Bastardo P.
2024-11-16 citations by CoLab: 0 Abstract  
TadGAN is a recent algorithm with competitive performance on time series anomaly detection. The detection process of TadGAN works by comparing observed data with generated data. A challenge in anomaly detection is that there are anomalies which are not easy to detect by analyzing the original time series but which have a clear effect on its higher-order characteristics. We propose Meta-TadGAN, an adaptation of TadGAN that analyzes meta-level representations of time series. That is, it analyzes a time series that represents the characteristics of the original time series, rather than the original time series itself. Results on benchmark datasets, as well as real-world data from fire detectors, show that the new method is competitive with TadGAN.
Cerqueira V., Moniz N., Inácio R., Soares C.
2024-11-16 citations by CoLab: 0 Abstract  
Recent state-of-the-art forecasting methods are trained on collections of time series. These methods, often referred to as global models, can capture common patterns in different time series to improve their generalization performance. However, they require large amounts of data that might not be available. Moreover, global models may fail to capture relevant patterns unique to a particular time series. In these cases, data augmentation can be useful to increase the sample size of time series datasets. The main contribution of this work is a novel method for generating univariate time series synthetic samples. Our approach stems from the insight that the observations concerning a particular time series of interest represent only a small fraction of all observations. In this context, we frame the problem of training a forecasting model as an imbalanced learning task. Oversampling strategies are popular approaches used to handle the imbalance problem in machine learning. We use these techniques to create synthetic time series observations and improve the accuracy of forecasting models. We carried out experiments using 7 different databases that contain a total of 5502 univariate time series. We found that the proposed solution outperforms both a global and a local model, thus providing a better trade-off between these two approaches.
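Framing global-model training as an imbalanced learning task, as above, suggests oversampling the windows drawn from the series of interest so they stop being a tiny minority of the training set. The sketch below is a minimal version of that idea (replication with small jitter, our own naming), not the specific oversampling strategies benchmarked in the paper.

```python
import numpy as np

def oversample_target(windows, is_target, factor=3, seed=0):
    """Replicate, with small Gaussian jitter, the training windows that
    come from the series of interest (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    W = np.asarray(windows, dtype=float)
    target = W[np.asarray(is_target, dtype=bool)]
    # factor=3 means each target window appears 3x in total
    extra = np.repeat(target, factor - 1, axis=0)
    extra = extra + rng.normal(0.0, 0.01 * (np.std(W) + 1e-12), extra.shape)
    return np.vstack([W, extra])

# 10 training windows, only 2 from the target series
W = np.ones((10, 5))
flags = np.zeros(10, dtype=bool)
flags[:2] = True
augmented = oversample_target(W, flags, factor=3)
```

The jitter keeps the synthetic copies from being exact duplicates; more sophisticated strategies interpolate between neighbouring windows instead.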
Urjais Gomes R., Soares C., Reis L.P.
2024-11-16 citations by CoLab: 0 Abstract  
DeepAR is a popular probabilistic time series forecasting algorithm. According to the authors, DeepAR is particularly suitable for building global models using hundreds of related time series. For this reason, it is a common expectation that DeepAR obtains poor results in univariate forecasting [10]. However, there are no empirical studies that clearly support this. Here, we compare the performance of DeepAR with standard forecasting models on one-step-ahead forecasts. We use 100 time series from the M4 competition to compare univariate DeepAR with univariate LSTM and SARIMAX models, for both point and quantile forecasts. Results show that DeepAR obtains good results, which contradicts the common perception.
Teixeira C., Gomes I., Cunha L., Soares C., van Rijn J.N.
2024-11-16 citations by CoLab: 0 Abstract  
As machine learning technologies are increasingly adopted, the demand for responsible AI practices to ensure transparency and accountability grows. To better understand the decision-making processes of machine learning models, GASTeN was developed to generate realistic yet ambiguous synthetic data near a classifier’s decision boundary. However, the results were inconsistent, with few images in the low-confidence region and noise. Therefore, we propose a new GASTeN version with a modified architecture and a novel loss function. This new loss function incorporates a multi-objective measure with a Gaussian loss centered on the classifier probability, targeting the decision boundary. Our study found that while the original GASTeN architecture yields the highest Fréchet Inception Distance (FID) scores, the updated version achieves lower Average Confusion Distance (ACD) values and consistent performance across low-confidence regions. Both architectures produce realistic and ambiguous images, but the updated one is more reliable, with no instances of GAN mode collapse. Additionally, the introduction of the Gaussian loss enhanced this architecture by allowing for adjustable tolerance in image generation around the decision boundary.
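The "Gaussian loss centered on the classifier probability" described above can be illustrated with a scalar penalty that is minimal when a generated sample's classifier confidence sits at the decision boundary. This is our reading of the idea with assumed parameter names; the exact formulation in the paper may differ.

```python
import math

def gaussian_boundary_loss(p, center=0.5, sigma=0.1):
    """Penalty on a classifier confidence p that is zero at the decision
    boundary (p == center) and grows as p moves away from it; sigma is
    the adjustable tolerance mentioned above (illustrative sketch)."""
    return 1.0 - math.exp(-((p - center) ** 2) / (2.0 * sigma ** 2))
```

During training, averaging this term over a batch of generated images and adding it to the GAN objective steers the generator towards ambiguous, boundary-adjacent samples, with `sigma` controlling how tightly.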
Matias P., Araújo R., Graça R., Henriques A.R., Belo D., Valada M., Lotfi N.N., Mateus E.F., Radner H., Rodrigues A.M., Studenic P., Nunes F.
2024-11-01 citations by CoLab: 0
Campagner A., Barandas M., Folgado D., Gamboa H., Cabitza F.
2024-11-01 citations by CoLab: 4
Rêgo S., Monteiro-Soares M., Dutra-Medeiros M., Camila Dias C., Nunes F.
Diabetology scimago Q2 wos Q3 Open Access
2024-10-30 citations by CoLab: 0 Abstract
Aims: This study aims to assess the perspectives of doctors and nurses regarding the clinical settings and barriers to implementing opportunistic diabetic retinopathy screening with handheld fundus cameras. Design: Cross-sectional online questionnaire study. Methods: An online survey was distributed to doctors and nurses working in Portuguese primary care units and hospitals between October and November 2021. The survey assessed current fundus observation practices, potential contexts, and barriers to using handheld fundus cameras. Results: We received 299 eligible responses. About 87% of respondents (n = 255) believe in the clinical utility of handheld fundus cameras to increase patients’ access to diabetes-related retinopathy screening, and 74% (n = 218) see utility in identifying other ocular or systemic diseases. More than a third of participants (37%, n = 111) envisioned using such devices multiple times per week. The main potential barriers identified included limited time (n = 90), equipment cost (n = 48), and lack of skills in retinal image acquisition (n = 47). Most respondents (94%, n = 275) expected a follow-up recommendation to accompany the telemedicine diagnosis. Conclusions: Doctors and nurses support the use of handheld fundus cameras. However, to optimize their implementation, strategies such as training, telemedicine-based diagnosis, and support for follow-up through accessible, user-friendly, and efficient information systems should be considered.
Rocha P.S., Bento N., Svärd H., Lopes D.M., Hespanhol S., Folgado D., Carreiro A.V., de Carvalho M., Miranda B.
Brain Sciences scimago Q2 wos Q3 Open Access
2024-10-29 citations by CoLab: 0 Abstract
Background: Speech production is a possible way to monitor bulbar and respiratory functions in patients with amyotrophic lateral sclerosis (ALS). Moreover, the emergence of smartphone-based data collection offers a promising approach to reduce frequent hospital visits and enhance patient outcomes. Here, we studied the relationship between bulbar and respiratory functions and the voice characteristics of ALS patients, alongside a speech therapist’s evaluation, using recordings conveniently captured with a simple smartphone. Methods: For voice assessment, we considered a speech therapist’s standardized tool, the Consensus Auditory-Perceptual Evaluation of Voice (CAPE-V), and an acoustic analysis toolbox. The bulbar sub-score of the revised ALS functional rating scale (ALSFRS-R) was used, and pulmonary function measurements included forced vital capacity (FVC%), maximum expiratory pressure (MEP%), and maximum inspiratory pressure (MIP%). Correlation coefficients and both linear and logistic regression models were applied. Results: A total of 27 ALS patients (12 males; 61 years mean age; 28 months median disease duration) were included. Patients with significant bulbar dysfunction revealed greater CAPE-V scores in overall severity, roughness, strain, pitch, and loudness. They also presented slower speaking rates, longer pauses, and higher jitter values in acoustic analysis (all p < 0.05). The CAPE-V’s overall severity and sub-scores for pitch and loudness demonstrated significant correlations with MIP% and MEP% (all p < 0.05). In contrast, acoustic metrics (speaking rate, absolute energy, shimmer, and harmonic-to-noise ratio) significantly correlated with FVC% (all p < 0.05). Conclusions: The results provide supporting evidence for the use of smartphone-based recordings in ALS patients for CAPE-V and acoustic analysis as reliable correlates of bulbar and respiratory function.
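The abstract above reports higher jitter among patients with bulbar dysfunction. Local jitter is conventionally defined as the mean absolute difference between consecutive glottal cycle periods divided by the mean period; a minimal pure-Python sketch of that definition (not the study's toolbox, and the example values are illustrative):

```python
def local_jitter(periods):
    """Local jitter of a voiced segment.

    `periods` is a list of consecutive glottal cycle durations in
    seconds. The result is the mean absolute cycle-to-cycle period
    difference, normalised by the mean period; higher values indicate
    more irregular voicing.
    """
    if len(periods) < 2:
        raise ValueError("need at least two periods")
    diffs = [abs(b - a) for a, b in zip(periods, periods[1:])]
    mean_abs_diff = sum(diffs) / len(diffs)
    mean_period = sum(periods) / len(periods)
    return mean_abs_diff / mean_period

# Perfectly regular voicing has zero jitter:
print(local_jitter([0.010, 0.010, 0.010]))  # -> 0.0
```

In practice the periods would first be extracted from the smartphone recording by a pitch tracker; the sketch only covers the final normalisation step.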

Since 2011

Total publications
221
Total citations
2180
Citations per publication
9.86
Average publications per year
15.79
Average authors per publication
6.03
h-index
22

Top-30

Fields of science

Electrical and Electronic Engineering, 39, 17.65%
Instrumentation, 36, 16.29%
Computer Science Applications, 28, 12.67%
General Engineering, 25, 11.31%
Analytical Chemistry, 23, 10.41%
Biochemistry, 22, 9.95%
Atomic and Molecular Physics, and Optics, 22, 9.95%
General Materials Science, 20, 9.05%
Software, 17, 7.69%
Artificial Intelligence, 14, 6.33%
General Computer Science, 12, 5.43%
Health Informatics, 12, 5.43%
General Medicine, 11, 4.98%
Information Systems, 11, 4.98%
Process Chemistry and Technology, 10, 4.52%
Computer Networks and Communications, 10, 4.52%
Signal Processing, 9, 4.07%
Fluid Flow and Transfer Processes, 8, 3.62%
Hardware and Architecture, 7, 3.17%
Human-Computer Interaction, 7, 3.17%
Computer Graphics and Computer-Aided Design, 6, 2.71%
Biomedical Engineering, 6, 2.71%
Computer Vision and Pattern Recognition, 6, 2.71%
Control and Systems Engineering, 5, 2.26%
Radiology, Nuclear Medicine and imaging, 5, 2.26%
Medicine (miscellaneous), 4, 1.81%
Psychiatry and Mental health, 4, 1.81%
Rehabilitation, 4, 1.81%
Mechanical Engineering, 3, 1.36%
Library and Information Sciences, 3, 1.36%

Journals


Publishers


With other organizations


With foreign organizations


With other countries

Brazil, 25, 11.31%
Italy, 14, 6.33%
Germany, 12, 5.43%
Sweden, 11, 4.98%
Spain, 10, 4.52%
United Kingdom, 9, 4.07%
Canada, 9, 4.07%
Switzerland, 9, 4.07%
USA, 7, 3.17%
France, 6, 2.71%
Austria, 6, 2.71%
Belgium, 5, 2.26%
Ireland, 5, 2.26%
Malaysia, 4, 1.81%
Morocco, 4, 1.81%
Australia, 3, 1.36%
Netherlands, 3, 1.36%
Denmark, 2, 0.9%
Israel, 2, 0.9%
India, 1, 0.45%
Latvia, 1, 0.45%
New Zealand, 1, 0.45%
Poland, 1, 0.45%
Republic of Korea, 1, 0.45%
Finland, 1, 0.45%
Czech Republic, 1, 0.45%
Japan, 1, 0.45%
  • We do not take into account publications without a DOI.
  • Statistics recalculated daily.
  • Publications published earlier than 2011 are ignored in the statistics.
  • The horizontal charts show the 30 top positions.
  • Journal quartile values reflect their current status.