Centrum Wiskunde & Informatica (National Research Institute for Mathematics and Computer Science)

Short name: CWI
Country, city: Netherlands, Amsterdam
Publications: 3,922
Citations: 80,409
h-index: 109
Top-3 organizations
University of Amsterdam (429 publications)
Vrije Universiteit Amsterdam (378 publications)
Leiden University (338 publications)
Top-3 foreign organizations
University of Antwerp (70 publications)
University College London (55 publications)
Aarhus University (44 publications)

Most cited in 5 years

van Doorn J., van den Bergh D., Böhm U., Dablander F., Derks K., Draws T., Etz A., Evans N.J., Gronau Q.F., Haaf J.M., Hinne M., Kucharský Š., Ly A., Marsman M., Matzke D., et al.
Psychonomic Bulletin and Review (Scimago Q1, WoS Q1, Open Access)
2020-10-09 · citations by CoLab: 574
Despite the increasing popularity of Bayesian inference in empirical research, few practical guidelines provide detailed recommendations for how to apply Bayesian procedures and interpret the results. Here we offer specific guidelines for four different stages of Bayesian statistical reasoning in a research setting: planning the analysis, executing the analysis, interpreting the results, and reporting the results. The guidelines for each stage are illustrated with a running example. Although the guidelines are geared towards analyses performed with the open-source statistical software JASP, most guidelines extend to Bayesian inference in general.
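
For readers who want a concrete starting point, here is a minimal, hedged sketch of one such analysis step in Python (not the paper's JASP workflow; the data are made up): a Savage-Dickey Bayes factor for a binomial rate under a conjugate Beta prior.

```python
# Hypothetical minimal sketch of one Bayesian analysis step (not the paper's
# JASP workflow): Savage-Dickey Bayes factor for H0: theta = 0.5 on binomial
# data, with a Beta(1, 1) prior under H1.
from scipy import stats

k, n = 62, 100                             # observed successes / trials (made-up data)
prior = stats.beta(1, 1)                   # prior on theta under H1
posterior = stats.beta(1 + k, 1 + n - k)   # conjugate update

# Savage-Dickey: BF01 = posterior density / prior density at the null value.
bf01 = posterior.pdf(0.5) / prior.pdf(0.5)
print(f"BF01 = {bf01:.3f}, BF10 = {1 / bf01:.3f}")
```
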
Eiben A.E., Schippers C.A.
Fundamenta Informaticae (Scimago Q3, WoS Q4)
2019-12-03 · citations by CoLab: 206
Exploration and exploitation are the two cornerstones of problem solving by search. The common opinion about evolutionary algorithms is that they explore the search space by the (genetic) search operators, while exploitation is done by selection. This opinion is, however, questionable. In this paper we give a survey of different operators, review existing viewpoints on exploration and exploitation, and point out some discrepancies between, and problems with, current views.
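
As a toy illustration of the two roles discussed in the survey (a sketch with a made-up objective and parameters, not code from the paper), a (1+1)-style evolutionary loop separates them cleanly: mutation explores, selection exploits.

```python
# Minimal (1+1)-style EA sketch: mutation perturbs candidates (exploration),
# selection keeps the fitter ones (exploitation). All names are hypothetical.
import random

def fitness(x):                     # toy objective: maximize -(x - 3)^2
    return -(x - 3.0) ** 2

def evolve(generations=200, sigma=0.5):
    parent = random.uniform(-10, 10)
    for _ in range(generations):
        child = parent + random.gauss(0, sigma)   # exploration via mutation
        if fitness(child) >= fitness(parent):     # exploitation via selection
            parent = child
    return parent

print(evolve())  # converges near x = 3
```
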
Capper T., Gorbatcheva A., Mustafa M.A., Bahloul M., Schwidtal J.M., Chitchyan R., Andoni M., Robu V., Montakhabi M., Scott I.J., Francis C., Mbavarira T., Espana J.M., Kiesling L.
2022-07-01 · citations by CoLab: 122
Peer-to-peer, community or collective self-consumption, and transactive energy markets offer new models for trading energy locally. Over the past five years, there has been significant growth in the amount of academic literature examining how these local energy markets might function. This systematic literature review of 139 peer-reviewed journal articles examines the market designs used in these energy trading models. A modified version of the Business Ecosystem Architecture Modelling framework is used to extract market model information from the literature, and to identify differences and similarities between the models. This paper examines how peer-to-peer, community self-consumption and transactive energy markets are described in current literature. It explores the similarities and differences between these markets in terms of participation, governance structure, topology, and design. This paper systematises peer-to-peer, community self-consumption and transactive energy market designs, identifying six archetypes. Finally, it identifies five evidence gaps which require future research before these markets could be widely adopted. These evidence gaps are the lack of: consideration of physical constraints; a holistic approach to market design and operation; consideration of how these market designs will scale; consideration of information security; and consideration of market participant privacy.
• Systematic review of the market models in 139 peer-reviewed journal articles.
• Six archetypal market designs and three archetypal price formation mechanisms.
• Analysis of the value, scale and participants in P2P, CSC and TE markets.
• Discussion of five major research gaps in the field of P2P, CSC and TE markets.
Wang H., Minnema J., Batenburg K.J., Forouzanfar T., Hu F.J., Wu G.
Journal of Dental Research (Scimago Q1, WoS Q1)
2021-03-30 · citations by CoLab: 109
Accurate segmentation of the jaw (i.e., mandible and maxilla) and the teeth in cone beam computed tomography (CBCT) scans is essential for orthodontic diagnosis and treatment planning. Although various (semi)automated methods have been proposed to segment the jaw or the teeth, there is still a lack of fully automated segmentation methods that can simultaneously segment both anatomic structures in CBCT scans (i.e., multiclass segmentation). In this study, we aimed to train and validate a mixed-scale dense (MS-D) convolutional neural network for multiclass segmentation of the jaw, the teeth, and the background in CBCT scans. Thirty CBCT scans were obtained from patients who had undergone orthodontic treatment. Gold standard segmentation labels were manually created by 4 dentists. As a benchmark, we also evaluated MS-D networks that segmented the jaw or the teeth (i.e., binary segmentation). All segmented CBCT scans were converted to virtual 3-dimensional (3D) models. The segmentation performance of all trained MS-D networks was assessed by the Dice similarity coefficient and surface deviation. The CBCT scans segmented by the MS-D network demonstrated a large overlap with the gold standard segmentations (Dice similarity coefficient: 0.934 ± 0.019, jaw; 0.945 ± 0.021, teeth). The MS-D network–based 3D models of the jaw and the teeth showed minor surface deviations when compared with the corresponding gold standard 3D models (0.390 ± 0.093 mm, jaw; 0.204 ± 0.061 mm, teeth). The MS-D network took approximately 25 s to segment 1 CBCT scan, whereas manual segmentation took about 5 h. This study showed that multiclass segmentation of jaw and teeth was accurate and its performance was comparable to binary segmentation. The MS-D network trained for multiclass segmentation would therefore make patient-specific orthodontic treatment more feasible by strongly reducing the time required to segment multiple anatomic structures in CBCT scans.
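
For reference, the Dice similarity coefficient reported above can be computed as follows (a generic sketch of the standard formula, not the study's evaluation code):

```python
# Dice similarity coefficient for binary segmentation masks (generic formula).
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

a = np.zeros((4, 4), bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), bool); b[1:3, 1:4] = True
print(dice(a, b))  # 2*4 / (4 + 6) = 0.8
```
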
Hendriksen A.A., Pelt D.M., Batenburg K.J.
2020-08-26 · citations by CoLab: 106
Recovering a high-quality image from noisy indirect measurements is an important problem with many applications. For such inverse problems, supervised deep convolutional neural network (CNN)-based denoising methods have shown strong results, but the success of these supervised methods critically depends on the availability of a high-quality training dataset of similar measurements. For image denoising, methods are available that enable training without a separate training dataset by assuming that the noise in two different pixels is uncorrelated. However, this assumption does not hold for inverse problems, resulting in artifacts in the denoised images produced by existing methods. Here, we propose Noise2Inverse, a deep CNN-based denoising method for linear image reconstruction algorithms that does not require any additional clean or noisy data. Training a CNN-based denoiser is enabled by exploiting the noise model to compute multiple statistically independent reconstructions. We develop a theoretical framework which shows that such training indeed obtains a denoising CNN, assuming the measured noise is element-wise independent, and zero-mean. On simulated CT datasets, Noise2Inverse demonstrates an improvement in peak signal-to-noise ratio and structural similarity index compared to state-of-the-art image denoising methods, and conventional reconstruction methods, such as Total-Variation Minimization. We also demonstrate that the method is able to significantly reduce noise in challenging real-world experimental datasets.
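
The core trick, splitting measurements into disjoint subsets whose reconstructions carry statistically independent noise, can be sketched in a few lines (illustrative stand-ins only; the real method reconstructs tomographic angle subsets with e.g. FBP):

```python
# Illustrative sketch of the Noise2Inverse training-pair idea. Assumptions:
# numpy only, and a stand-in "reconstruct" that averages measurement subsets.
import numpy as np

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 2 * np.pi, 256))          # unknown ground truth
measurements = signal + rng.normal(0, 0.3, (8, 256))     # 8 noisy measurements

def reconstruct(subset):          # stand-in for FBP on an angle subset
    return subset.mean(axis=0)

# Split measurements into disjoint halves: the noise in the two
# reconstructions is statistically independent, so one can serve as the
# training target for a denoiser applied to the other (no clean data needed).
rec_a = reconstruct(measurements[0::2])
rec_b = reconstruct(measurements[1::2])
training_pair = (rec_a, rec_b)    # (input, target) for a CNN denoiser
```
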
Wing A.A., Stauffer C.L., Becker T., Reed K.A., Ahn M., Arnold N.P., Bony S., Branson M., Bryan G.H., Chaboureau J., De Roode S.R., Gayatri K., Hohenegger C., Hu I., Jansson F., et al.
2020-07-20 · citations by CoLab: 103
The Radiative-Convective Equilibrium Model Intercomparison Project (RCEMIP) is an intercomparison of multiple types of numerical models configured in radiative-convective equilibrium (RCE). RCE is an idealization of the tropical atmosphere that has long been used to study basic questions in climate science. Here, we employ RCE to investigate the role that clouds and convective activity play in determining cloud feedbacks, climate sensitivity, the state of convective aggregation, and the equilibrium climate. RCEMIP is unique among intercomparisons in its inclusion of a wide range of model types, including atmospheric general circulation models (GCMs), single column models (SCMs), cloud-resolving models (CRMs), large eddy simulations (LES), and global cloud-resolving models (GCRMs). The first results are presented from the RCEMIP ensemble of more than 30 models. While there are large differences across the RCEMIP ensemble in the representation of mean profiles of temperature, humidity, and cloudiness, in a majority of models anvil clouds rise, warm, and decrease in area coverage in response to an increase in sea surface temperature (SST). Nearly all models exhibit self-aggregation in large domains and agree that self-aggregation acts to dry and warm the troposphere, reduce high cloudiness, and increase cooling to space. The degree of self-aggregation exhibits no clear tendency with warming. There is a wide range of climate sensitivities, but models with parameterized convection tend to have lower climate sensitivities than models with explicit convection. In models with parameterized convection, aggregated simulations have lower climate sensitivities than unaggregated simulations.
Dachman-Soled D., Ducas L., Gong H., Rossi M.
2020-08-12 · citations by CoLab: 75
We propose a framework for cryptanalysis of lattice-based schemes, when side information—in the form of “hints”—about the secret and/or error is available. Our framework generalizes the so-called primal lattice reduction attack, and allows the progressive integration of hints before running a final lattice reduction step. Our techniques for integrating hints include sparsifying the lattice, projecting onto and intersecting with hyperplanes, and/or altering the distribution of the secret vector. Our main contribution is to propose a toolbox and a methodology to integrate such hints into lattice reduction attacks and to predict the performance of those lattice attacks with side information. While initially designed for side-channel information, our framework can also be used in other cases: exploiting decryption failures, or simply exploiting constraints imposed by certain schemes (LAC, Round5, NTRU). We implement a Sage 9.0 toolkit to actually mount such attacks with hints when computationally feasible, and to predict their performances on larger instances. We provide several end-to-end application examples, such as an improvement of a single trace attack on Frodo by Bos et al. (SAC 2018). In particular, our work can estimate security loss even given very little side information, leading to a smooth measurement/computation trade-off for side-channel attacks.
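
One hint type from the framework, a perfect hint <v, s> = l, can be illustrated with plain linear algebra (a toy numpy sketch, not the paper's Sage toolkit): knowing one linear equation on the secret confines it to an affine subspace of dimension n - 1, shrinking the search.

```python
# Toy illustration of integrating a perfect hint <v, s> = l on a secret s.
import numpy as np

rng = np.random.default_rng(0)
s = rng.integers(-2, 3, size=6)          # toy secret
v = rng.integers(-3, 4, size=6)          # hint direction
l = int(v @ s)                           # leaked inner product <v, s> = l

# Project orthogonally to v: the secret now lives in an affine subspace of
# dimension n - 1, so the remaining search problem is strictly smaller.
P = np.eye(6) - np.outer(v, v) / (v @ v)
s_perp = P @ s                           # component still to be recovered
s_parallel = l * v / (v @ v)             # component fixed by the hint
assert np.allclose(s_perp + s_parallel, s)
```
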
Hatfield P.W., Gaffney J.A., Anderson G.J., Ali S., Antonelli L., Başeğmez du Pree S., Citrin J., Fajardo M., Knapp P., Kettle B., Kustowski B., MacDonald M.J., Mariscal D., Martin M.E., Nagayama T., et al.
Nature (Scimago Q1, WoS Q1)
2021-05-19 · citations by CoLab: 73
High-energy-density physics is the field of physics concerned with studying matter at extremely high temperatures and densities. Such conditions produce highly nonlinear plasmas, in which several phenomena that can normally be treated independently of one another become strongly coupled. The study of these plasmas is important for our understanding of astrophysics, nuclear fusion and fundamental physics—however, the nonlinearities and strong couplings present in these extreme physical systems makes them very difficult to understand theoretically or to optimize experimentally. Here we argue that machine learning models and data-driven methods are in the process of reshaping our exploration of these extreme systems that have hitherto proved far too nonlinear for human researchers. From a fundamental perspective, our understanding can be improved by the way in which machine learning models can rapidly discover complex interactions in large datasets. From a practical point of view, the newest generation of extreme physics facilities can perform experiments multiple times a second (as opposed to approximately daily), thus moving away from human-based control towards automatic control based on real-time interpretation of diagnostic data and updates of the physics model. To make the most of these emerging opportunities, we suggest proposals for the community in terms of research design, training, best practice and support for synthetic diagnostics and data analysis. This Perspective discusses how high-energy-density physics could tap the potential of AI-inspired algorithms for extracting relevant information and how data-driven automatic control routines may be used for optimizing high-repetition-rate experiments.
Viola I., Cesar P.
IEEE Signal Processing Letters (Scimago Q1, WoS Q2)
2020-09-15 · citations by CoLab: 70
Point cloud representation has seen a surge of popularity in recent years, thanks to its capability to reproduce volumetric scenes in immersive scenarios. New compression solutions for streaming of point cloud contents have been proposed, which require objective quality metrics to reliably assess the level of degradation introduced by coding and transmission distortions. In this context, reduced reference metrics aim to predict the visual quality of the transmitted contents, while requiring only a small set of features to be sent in addition to the streamed media. In this paper, we propose a reduced reference metric to predict the quality of point cloud contents under compression distortions. To do so, we extract a small set of statistical features from the reference point cloud in the geometry, color and normal vector domain, which can be used at the receiver side to assess the visual degradation of the content. Using publicly available ground-truth datasets, we compare the performance of our metric to widely-used full reference metrics. Results demonstrate that our metric is able to effectively predict the level of distortion in the degraded point cloud contents, achieving high correlation values with respect to subjective scores.
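
The reduced-reference idea can be sketched as follows (the feature choices here are hypothetical, not the paper's exact statistics): only a handful of summary numbers need to accompany the stream, and the receiver compares feature sets rather than full clouds.

```python
# Hedged sketch of reduced-reference feature extraction for point clouds:
# summarize geometry by nearest-neighbor distance statistics.
import numpy as np
from scipy.spatial import cKDTree

def geometry_features(points: np.ndarray) -> dict:
    tree = cKDTree(points)
    d, _ = tree.query(points, k=2)       # d[:, 1] = nearest-neighbor distance
    nn = d[:, 1]
    return {"nn_mean": nn.mean(), "nn_std": nn.std()}

ref = np.random.default_rng(1).random((1000, 3))
degraded = ref + np.random.default_rng(2).normal(0, 0.01, ref.shape)
# Compare the small feature sets instead of the full clouds:
print(geometry_features(ref), geometry_features(degraded))
```
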
Virgolin M., Alderliesten T., Witteveen C., Bosman P.A.
Evolutionary Computation (Scimago Q1, WoS Q1)
2020-06-23 · citations by CoLab: 67
The Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA) is a model-based EA framework that has been shown to perform well in several domains, including Genetic Programming (GP). Differently from traditional EAs where variation acts blindly, GOMEA learns a model of interdependencies within the genotype, that is, the linkage, to estimate what patterns to propagate. In this article, we study the role of Linkage Learning (LL) performed by GOMEA in Symbolic Regression (SR). We show that the non-uniformity in the distribution of the genotype in GP populations negatively biases LL, and propose a method to correct for this. We also propose approaches to improve LL when ephemeral random constants are used. Furthermore, we adapt a scheme of interleaving runs to alleviate the burden of tuning the population size, a crucial parameter for LL, to SR. We run experiments on 10 real-world datasets, enforcing a strict limitation on solution size, to enable interpretability. We find that the new LL method outperforms the standard one, and that GOMEA outperforms both traditional and semantic GP. We also find that the small solutions evolved by GOMEA are competitive with tuned decision trees, making GOMEA a promising new approach to SR.

Recently published

Mele A.A., Herasymenko Y.
PRX Quantum (Scimago Q1, WoS Q1, Open Access)
2025-01-28 · citations by CoLab: 2
The experimental realization of increasingly complex quantum states underscores the pressing need for new methods of state learning and verification. In one such framework, quantum state tomography, the aim is to learn the full quantum state from data obtained by measurements. Without prior assumptions on the state, this task is prohibitively hard. Here, we present an efficient algorithm for learning states on n fermion modes prepared by any number of Gaussian and at most t non-Gaussian gates. By Jordan-Wigner mapping, this also includes n-qubit states prepared by nearest-neighbor matchgate circuits with at most t gates. Our algorithm is based exclusively on single-copy measurements and produces a classical representation of a state, guaranteed to be close in trace distance to the target state. The sample and time complexity of our algorithm is poly(n, 2^t); thus if t = O(log n), it is efficient. We also show that, if t scales more than logarithmically, any learning algorithm to solve the same task must be inefficient, under common cryptographic assumptions. We also provide an efficient property-testing algorithm that, given access to copies of a state, determines whether such a state is far or close to the set of states for which our learning algorithm works. In addition to the outputs of quantum circuits, our tomography algorithm is efficient for some physical target states, such as those arising in time dynamics and low-energy physics of impurity models. Beyond tomography, our work sheds light on the structure of states prepared with few non-Gaussian gates and offers an improved upper bound on their circuit complexity, enabling an efficient circuit-compilation method.
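
The efficiency claim in the abstract is a one-line substitution (a worked restatement of the stated bound, not an addition to the result):

```latex
t = O(\log n) \;\Longrightarrow\; 2^{t} = 2^{O(\log n)} = n^{O(1)}
\;\Longrightarrow\; \mathrm{poly}(n, 2^{t}) = \mathrm{poly}(n).
```
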
Guo Y., Limburg A., Laarman J., Teunissen J., Nijdam S.
Physical Review Research (Scimago Q1, WoS Q1, Open Access)
2025-01-15 · citations by CoLab: 0
Using electric field induced second harmonic generation (E-FISH), we performed direction-resolved absolute electric field measurements on single-channel streamer discharges in 70 mbar (7 kPa) air with 0.2 mm and 2 ns resolutions. In order to obtain the absolute (local) electric field, we developed a deconvolution method taking into account the phase variations of E-FISH. The acquired field distribution shows good agreement with the simulation results under the same conditions, in direction, magnitude, and shape. This is the first time that E-FISH is applied to streamers of this size (>0.5 cm radius), crossing a large gap. Achieving these high resolution electric field measurements benefits further understanding of streamer discharges and enables future use of E-FISH on cylindrically symmetric (transient) electric field distributions.
Montana J.R., Souto Arias L.A., Cirillo P., Oosterlee C.W.
Risks (Scimago Q2, WoS Q2, Open Access)
2024-12-17 · citations by CoLab: 0
We introduce the Quantum Alarm System, a novel framework that combines the informational advantages of quantum majorization applied to tail pseudo-correlation matrices with the learning capabilities of a reinforced urn process, to predict financial turmoil and market crashes. This integration allows for a more nuanced analysis of the dependence structure in financial markets, particularly focusing on extreme events reflected in the tails of the distribution. Our model is tested using the daily log-returns of the 30 constituents of the Dow Jones Industrial Average, spanning from 2 January 1992 to 30 August 2024. The results are encouraging: in the validation set, the 12-month ahead probability of correct alarm is between 73% and 80%, while maintaining a low false alarm rate. Thanks to the application of quantum majorization, the alarm system effectively captures non-traditional and emerging risk sources, such as the financial impact of the COVID-19 pandemic—an area where traditional models often fall short.
Christandl M., Lysikov V., Steffan V., Werner A.H., Witteveen F.
Quantum (Scimago Q1, WoS Q2, Open Access)
2024-12-11 · citations by CoLab: 1
Tensor networks provide succinct representations of quantum many-body states and are an important computational tool for strongly correlated quantum systems. Their expressive and computational power is characterized by an underlying entanglement structure, on a lattice or more generally a (hyper)graph, with virtual entangled pairs or multipartite entangled states associated to (hyper)edges. Changing this underlying entanglement structure into another can lead to both theoretical and computational benefits. We study a natural resource theory which generalizes the notion of bond dimension to entanglement structures using multipartite entanglement. It is a direct extension of resource theories of tensors studied in the context of multipartite entanglement and algebraic complexity theory, allowing for the application of the sophisticated methods developed in these fields to tensor networks. The resource theory of tensor networks concerns both the local entanglement structure of a quantum many-body state and the (algebraic) complexity of tensor network contractions using this entanglement structure. We show that there are transformations between entanglement structures which go beyond edge-by-edge conversions, highlighting efficiency gains of our resource theory that mirror those obtained in the search for better matrix multiplication algorithms. We also provide obstructions to the existence of such transformations by extending a variety of methods originally developed in algebraic complexity theory for obtaining complexity lower bounds. The resource theory of tensor networks allows us to compare different entanglement structures and should lead to more efficient tensor network representations and contraction algorithms.
Dieperink M., Skorikov A., Claes N., Bals S., Albrecht W.
Nanophotonics (Scimago Q1, WoS Q1, Open Access)
2024-11-28 · citations by CoLab: 1
The optical cross sections of plasmonic nanoparticles are intricately linked to their morphologies. Accurately capturing this link could allow determination of particles’ shapes from their optical cross sections alone. Electromagnetic simulations bridge morphology and optical properties, provided they are sufficiently accurate. This study examines key factors affecting simulation precision, comparing common methods and detailing the impacts of meshing accuracy, dielectric function selection, and substrate inclusion within the boundary element method. To support the method’s complex parameterization, we develop a workflow incorporating reconstruction, meshing, and mesh simplification, to enable the use of electron tomography data. We analyze how choices of reconstruction algorithm and image segmentation affect simulated optical cross sections, relating these to shape errors minimized during data processing. Optimal results are obtained using the total variation minimization (TVM) reconstruction method with Otsu thresholding and light smoothing, ensuring reliable, watertight surface meshes through the marching cubes algorithm, even for complex shapes.
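
The segmentation-and-meshing steps named above (light smoothing, Otsu thresholding, marching cubes) can be sketched with scikit-image; the data and parameters here are stand-ins, not the paper's pipeline:

```python
# Minimal smoothing -> Otsu -> marching-cubes sketch on a stand-in volume.
import numpy as np
from skimage.filters import gaussian, threshold_otsu
from skimage.measure import marching_cubes

volume = np.random.default_rng(0).random((64, 64, 64))  # stand-in reconstruction
smoothed = gaussian(volume, sigma=1.0)                  # light smoothing
mask = smoothed > threshold_otsu(smoothed)              # Otsu segmentation
# Marching cubes extracts a surface mesh from the segmented volume.
verts, faces, normals, values = marching_cubes(mask.astype(float), level=0.5)
print(verts.shape, faces.shape)
```
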
Grünwald P., Ramdas A., Wang R., Ziegel J.
2024-11-25 · citations by CoLab: 0
This half-size MFO workshop brings together researchers in mathematical statistics, probability theory, machine learning, medical sciences, and economics to discuss recent developments in sequential inference. New sequential inference methods that build on nonnegative martingale techniques allow us to elegantly solve prominent shortcomings of traditional statistical hypothesis tests. Instead of p-values, they are based on e-values which have the added benefit that their meaning is much easier to communicate to applied researchers, due to their intuitive interpretation in terms of the wealth of a gambler playing a hypothetically fair game. Significant new contributions to this fast growing research area will be presented in order to stimulate collaborations, discuss and unify notation and concepts in the fields, and tackle a variety of open problems and address current major challenges.
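
A minimal example of an e-value (illustrative only, not from the workshop): the likelihood ratio of a fixed alternative against a fair coin has expectation 1 under the null, and its running product is exactly the gambler's wealth mentioned above.

```python
# Toy e-value: per-flip likelihood ratio against H0: p = 0.5. Under H0 its
# expectation is 0.5*(q/0.5) + 0.5*((1-q)/0.5) = 1, so it is an e-variable;
# the product over flips is the wealth of a gambler betting on the alternative.
import random

def e_value(flips, q=0.7):
    e = 1.0
    for x in flips:                       # x in {0, 1}
        e *= (q if x else 1 - q) / 0.5
    return e

random.seed(3)
flips = [1 if random.random() < 0.7 else 0 for _ in range(100)]  # biased coin
print(e_value(flips))   # a large e-value is strong evidence against H0
```
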
Smeekes O.S., de Boer T.R., van der Mei R.D., Buurman B.M., Willems H.C.
2024-11-01 · citations by CoLab: 0
Acute hospitalization, recurrent admissions, institutionalization, and death are important adverse health outcomes. Older adults receiving home care are especially at risk of these outcomes, yet it remains unclear if this risk differs between older adults receiving different types of home care and older adults not receiving home care.
Schiffer B.F., Vreumingen D.V., Tura J., Polla S.
Quantum (Scimago Q1, WoS Q2, Open Access)
2024-05-14 · citations by CoLab: 1
Transitions out of the ground space limit the performance of quantum adiabatic algorithms, while hardware imperfections impose stringent limitations on the circuit depth. We propose an adiabatic echo verification protocol which mitigates both coherent and incoherent errors, arising from non-adiabatic transitions and hardware noise, respectively. Quasi-adiabatically evolving forward and backward allows for an echo-verified measurement of any observable. In addition to mitigating hardware noise, our method uses positive-time dynamics only. Crucially, the estimator bias of the observable is reduced when compared to standard adiabatic preparation, achieving up to a quadratic improvement.
Coladangelo A., Majenz C., Poremba A.
Quantum (Scimago Q1, WoS Q2, Open Access)
2024-05-02 · citations by CoLab: 2
Copy-protection allows a software distributor to encode a program in such a way that it can be evaluated on any input, yet it cannot be "pirated" – a notion that is impossible to achieve in a classical setting. Aaronson (CCC 2009) initiated the formal study of quantum copy-protection schemes, and speculated that quantum cryptography could offer a solution to the problem thanks to the quantum no-cloning theorem. In this work, we introduce a quantum copy-protection scheme for a large class of evasive functions known as "compute-and-compare programs" – a more expressive generalization of point functions. A compute-and-compare program CC[f,y] is specified by a function f and a string y within its range: on input x, CC[f,y] outputs 1, if f(x)=y, and 0 otherwise. We prove that our scheme achieves non-trivial security against fully malicious adversaries in the quantum random oracle model (QROM), which makes it the first copy-protection scheme to enjoy any level of provable security in a standard cryptographic model. As a complementary result, we show that the same scheme fulfils a weaker notion of software protection, called "secure software leasing", introduced very recently by Ananth and La Placa (eprint 2020), with a standard security bound in the QROM, i.e. guaranteeing negligible adversarial advantage. Finally, as a third contribution, we elucidate the relationship between unclonable encryption and copy-protection for multi-bit output point functions.
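
The classical object being protected is easy to state in code (a plain Python illustration of the definition above; the copy-protection scheme itself is quantum, and the choice f = SHA-256 is hypothetical). Point functions are the special case where f is the identity.

```python
# A compute-and-compare program CC[f, y]: outputs 1 iff f(x) == y.
import hashlib

def make_cc(f, y):
    return lambda x: 1 if f(x) == y else 0

f = lambda x: hashlib.sha256(x).hexdigest()
y = f(b"secret input")
cc = make_cc(f, y)                 # CC[f, y]
print(cc(b"secret input"))         # 1: f(x) == y
print(cc(b"anything else"))        # 0 otherwise
```
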
Cade C., Crichigno P.M.
Quantum (Scimago Q1, WoS Q2, Open Access)
2024-04-30 · citations by CoLab: 4
We consider the complexity of the local Hamiltonian problem in the context of fermionic Hamiltonians with N=2 supersymmetry and show that the problem remains QMA-complete. Our main motivation for studying this is the well-known fact that the ground state energy of a supersymmetric system is exactly zero if and only if a certain cohomology group is nontrivial. This opens the door to bringing the tools of Hamiltonian complexity to study the computational complexity of a large number of algorithmic problems that arise in homological algebra, including problems in algebraic topology, algebraic geometry, and group theory. We take the first steps in this direction by introducing the k-local Cohomology problem and showing that it is QMA1-hard and, for a large class of instances, is contained in QMA. We then consider the complexity of estimating normalized Betti numbers and show that this problem is hard for the quantum complexity class DQC1, and for a large class of instances is contained in BQP. In light of these results, we argue that it is natural to frame many of these homological problems in terms of finding ground states of supersymmetric fermionic systems. As an illustration of this perspective we discuss in some detail the model of Fendley, Schoutens, and de Boer consisting of hard-core fermions on a graph, whose ground state structure encodes l-dimensional holes in the independence complex of the graph. This offers a new perspective on existing quantum algorithms for topological data analysis and suggests new ones.
Libera K., Valadian R., Vararattanavech P., Dasari S.N., Dallman T.J., Weerts E., Lipman L.
Poultry Science (Scimago Q1, WoS Q1, Open Access)
2024-03-01 · citations by CoLab: 0
In broiler chickens, fractures of wings and legs are recorded at poultry slaughterhouses based on the time of occurrence. Pre-killing (PRE) fractures occur before the death of the animal, so the chicken was still able to experience pain and distress associated with the injury (an animal welfare issue). Post-killing (POST) fractures occur when the chickens are deceased and fully bled-out and consequently unable to feel pain (not an animal welfare issue). Current practice dictates that fractures are recognized visually and recorded by the animal welfare officers as mandated by European Union and/or national regulations. However, new monitoring solutions are desired, since human inspection suffers from significant limitations, including subjectivity and fatigue. One possible solution for detecting injuries is X-ray computed tomography (CT) scanning, and in this study we aim to evaluate the potential of CT scanning and visual inspection in detecting limb fractures and their causes. 83 chicken wings and 60 chicken legs (n=143) were collected from a single slaughterhouse and classified by an animal welfare officer as PRE, POST or healthy (HEAL). Samples were photographed and CT scanned at a veterinary hospital. The interpretation of the CT scans along with the photographs took place in 3 rounds (1. CT scans only, 2. CT scans + photographs, 3. photographs only) and was performed independently by 3 veterinarians. The consistency of the interpretation across the 3 rounds was compared with the animal welfare officer's classification. Furthermore, selected samples whose classification (PRE/POST) was questionable were also analyzed by histopathological examination. In these questionable samples the presence of hemorrhages was confirmed, so they fit better as PRE. The highest consistency between raters was obtained in the 2nd round, indicating that interpretation accuracy was highest when CT scans were combined with photographs. These results indicate that CT scanning in combination with visual inspection can be used to detect limb fractures and could potentially be applied as a tool to monitor animal welfare in poultry slaughterhouses in the future.
Cade C., Folkertsma M., Niesen I., Weggemans J.
Quantum (Scimago Q1, WoS Q2, Open Access)
2023-10-10 · citations by CoLab: 0
Run-times of quantum algorithms are often studied via an asymptotic, worst-case analysis. Whilst useful, such a comparison can often fall short: it is not uncommon for algorithms with a large worst-case run-time to end up performing well on instances of practical interest. To remedy this it is necessary to resort to run-time analyses of a more empirical nature, which for sufficiently small input sizes can be performed on a quantum device or a simulation thereof. For larger input sizes, alternative approaches are required. In this paper we consider an approach that combines classical emulation with detailed complexity bounds that include all constants. We simulate quantum algorithms by running classical versions of the sub-routines, whilst simultaneously collecting information about what the run-time of the quantum routine would have been if it were run instead. To do this accurately and efficiently for very large input sizes, we describe an estimation procedure and prove that it obtains upper bounds on the true expected complexity of the quantum algorithms. We apply our method to some simple quantum speedups of classical heuristic algorithms for solving the well-studied MAX-k-SAT optimization problem. This requires rigorous bounds (including all constants) on the expected- and worst-case complexities of two important quantum sub-routines: Grover search with an unknown number of marked items, and quantum maximum-finding. These improve upon existing results and might be of broader interest. Amongst other results, we found that the classical heuristic algorithms we studied did not offer significant quantum speedups despite the existence of a theoretical per-step speedup. This suggests that an empirical analysis such as the one we implement in this paper already yields insights beyond those that can be seen by an asymptotic analysis alone.
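
The bookkeeping idea can be sketched as follows (the constants below and the estimate of the number of marked items are illustrative; the 9/4·sqrt(N/k) expected-query bound for Grover search with an unknown number of marked items is the classic Boyer et al. result, not the paper's sharpened one):

```python
# Run a classical search while tallying what the quantum routine would cost.
import math, random

def emulated_search(oracle, N):
    classical_queries = 0
    while True:                                 # classical random sampling
        classical_queries += 1
        x = random.randrange(N)
        if oracle(x):
            break
    k = max(1, round(N / classical_queries))    # crude estimate of #marked items
    quantum_queries = (9 / 4) * math.sqrt(N / k)  # expected-query upper bound
    return classical_queries, quantum_queries

random.seed(0)
marked = set(random.sample(range(10**6), 100))
print(emulated_search(lambda x: x in marked, 10**6))
```
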
Herasymenko Y., Stroeks M., Helsen J., Terhal B.
Quantum (Scimago Q1, WoS Q2, Open Access)
2023-08-10 · citations by CoLab: 5
We consider the problem of approximating the ground state energy of a fermionic Hamiltonian using a Gaussian state. In sharp contrast to the dense case [1, 2], we prove that strictly q-local sparse fermionic Hamiltonians have a constant Gaussian approximation ratio; the result holds for any connectivity and interaction strengths. Sparsity means that each fermion participates in a bounded number of interactions, and strictly q-local means that each term involves exactly q fermionic (Majorana) operators. We extend our proof to give a constant Gaussian approximation ratio for sparse fermionic Hamiltonians with both quartic and quadratic terms. With additional work, we also prove a constant Gaussian approximation ratio for the so-called sparse SYK model with strictly 4-local interactions (sparse SYK-4 model). In each setting we show that the Gaussian state can be efficiently determined. Finally, we prove that the O(n^{-1/2}) Gaussian approximation ratio for the normal (dense) SYK-4 model extends to SYK-q for even q > 4, with an approximation ratio of O(n^{1/2 - q/4}). Our results identify non-sparseness as the prime reason that the SYK-4 model can fail to have a constant approximation ratio [1, 2].
Cade C., Labib F., Niesen I.
Quantum (Scimago Q1, WoS Q2, Open Access)
2023-07-03 · citations by CoLab: 0
We present three quantum algorithms for clustering graphs based on higher-order patterns, known as motif clustering. One uses a straightforward application of Grover search, the other two make use of quantum approximate counting, and all of them obtain square-root like speedups over the fastest classical algorithms in various settings. In order to use approximate counting in the context of clustering, we show that for general weighted graphs the performance of spectral clustering is mostly left unchanged by the presence of constant (relative) errors on the edge weights. Finally, we extend the original analysis of motif clustering in order to better understand the role of multiple `anchor nodes' in motifs and the types of relationships that this method of clustering can and cannot capture.
Jansen S., Goodenough K., de Bone S., Gijswijt D., Elkouss D.
Quantum (Scimago Q1, WoS Q2, Open Access)
2022-05-19 · citations by CoLab: 11
Entanglement distillation is an essential building block in quantum communication protocols. Here, we study the class of near-term implementable distillation protocols that use bilocal Clifford operations followed by a single round of communication. We introduce tools to enumerate and optimise over all protocols for up to n=5 (not necessarily equal) Bell-diagonal states using a commodity desktop computer. Furthermore, by exploiting the symmetries of the input states, we find all protocols for up to n=8 copies of a Werner state. For the latter case, we present circuits that achieve the highest fidelity with perfect operations and no decoherence. These circuits have modest depth and number of two-qubit gates. Our results are based on a correspondence between distillation protocols and double cosets of the symplectic group, and improve on previously known protocols.
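
For context, the classic recurrence for distilling two Werner states of fidelity F (the BBPSSW/DEJMPS-style baseline that such optimized circuits improve upon, not one of the paper's own protocols) is easy to iterate:

```python
# One round of BBPSSW-style distillation on two Werner states of fidelity F:
# returns the output fidelity and the probability that the round succeeds.
def bbpssw_round(F):
    a, b = F, (1 - F) / 3
    p_success = a**2 + 2 * a * b + 5 * b**2
    F_out = (a**2 + b**2) / p_success
    return F_out, p_success

F = 0.75
for _ in range(3):
    F, p = bbpssw_round(F)
    print(f"F = {F:.4f}  (p_success = {p:.3f})")  # fidelity rises each round
```
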

Since 1984

Total publications: 3,922
Total citations: 80,409
Citations per publication: 20.5
Average publications per year: 95.66
Average authors per publication: 3.59
h-index: 109

Top-30

Fields of science

Applied Mathematics, 441, 11.24%
Software, 435, 11.09%
Computer Science Applications, 429, 10.94%
Theoretical Computer Science, 319, 8.13%
Computational Theory and Mathematics, 279, 7.11%
Management Science and Operations Research, 209, 5.33%
General Computer Science, 199, 5.07%
Computational Mathematics, 197, 5.02%
Information Systems, 188, 4.79%
General Mathematics, 183, 4.67%
Modeling and Simulation, 181, 4.61%
Statistics and Probability, 175, 4.46%
Hardware and Architecture, 159, 4.05%
Computer Networks and Communications, 139, 3.54%
Discrete Mathematics and Combinatorics, 99, 2.52%
General Physics and Astronomy, 95, 2.42%
Condensed Matter Physics, 93, 2.37%
Electrical and Electronic Engineering, 89, 2.27%
Artificial Intelligence, 88, 2.24%
General Engineering, 84, 2.14%
Statistics, Probability and Uncertainty, 84, 2.14%
Molecular Biology, 67, 1.71%
Numerical Analysis, 65, 1.66%
Signal Processing, 63, 1.61%
General Medicine, 62, 1.58%
Computer Graphics and Computer-Aided Design, 62, 1.58%
Algebra and Number Theory, 58, 1.48%
Library and Information Sciences, 57, 1.45%
Industrial and Manufacturing Engineering, 55, 1.4%
Atomic and Molecular Physics, and Optics, 52, 1.33%

Journals


Publishers


With other organizations


With foreign organizations


With other countries

USA, 546, 13.92%
United Kingdom, 367, 9.36%
Germany, 356, 9.08%
France, 314, 8.01%
Italy, 206, 5.25%
Spain, 166, 4.23%
Belgium, 161, 4.11%
China, 109, 2.78%
Canada, 100, 2.55%
Switzerland, 94, 2.4%
Denmark, 80, 2.04%
Israel, 77, 1.96%
Australia, 75, 1.91%
Japan, 63, 1.61%
Russia, 61, 1.56%
Sweden, 58, 1.48%
Poland, 51, 1.3%
Portugal, 45, 1.15%
Singapore, 42, 1.07%
Hungary, 41, 1.05%
Greece, 38, 0.97%
Norway, 37, 0.94%
India, 34, 0.87%
Austria, 33, 0.84%
Brazil, 29, 0.74%
Finland, 26, 0.66%
Czech Republic, 24, 0.61%
New Zealand, 20, 0.51%
Ireland, 18, 0.46%
  • We do not take into account publications without a DOI.
  • Statistics recalculated daily.
  • Publications published earlier than 1984 are ignored in the statistics.
  • The horizontal charts show the top 30 positions.
  • Journal quartile values are those currently assigned, not those at the time of publication.