Benchmarking
Pages: 149-179
Affiliations:
1. Modl.ai, Copenhagen, Denmark
4. Leiden Institute of Advanced Computer Science, Leiden, Netherlands
Publication type: Book Chapter
Publication date: 2023-07-28
SJR: —
CiteScore: 2.1
Impact factor: —
ISSN: 1619-7127, 2627-6461
Abstract
The evaluation and analysis of optimisation algorithms through benchmarks is an important aspect of research in evolutionary computation. This is especially true in the context of many-objective optimisation, where the complexity of the problems usually makes theoretical analysis difficult. However, suitable benchmarking problems are lacking in many research areas within the field of evolutionary computation, for example optimisation under noise or with constraints. Several additional open issues in common benchmarking practice exist as well, for instance related to reproducibility and the interpretation of results. In this book chapter, we focus on discussing these issues for multi- and many-objective optimisation (MMO) specifically. We thus first provide an overview of existing MMO benchmarks and find that, besides lacking in number and diversity, they need improvements in terms of ease of use and the ability to characterise and describe benchmarking functions. In addition, we provide a concise list of common pitfalls to look out for when using benchmarks, along with suggestions on how to avoid them. This part of the chapter is intended as a guide to help improve the usability of benchmarking results in the future.
Top-30
Journals: International Journal of Communication Systems (1 publication, 100%)
Publishers: Wiley (1 publication, 100%)
- We do not take into account publications without a DOI.
- Statistics recalculated weekly.
Metrics
Total citations: 1
Citations from 2024: 1 (100%)
Cite this (RIS)
TY - GENERIC
DO - 10.1007/978-3-031-25263-1_6
UR - https://doi.org/10.1007/978-3-031-25263-1_6
TI - Benchmarking
T2 - Natural Computing Series
AU - Volz, Vanessa
AU - Irawan, Dani
AU - van der Blom, Koen
AU - Naujoks, Boris
PY - 2023
DA - 2023/07/28
PB - Springer Nature
SP - 149-179
SN - 1619-7127
SN - 2627-6461
ER -
Cite this (BibTeX)
@incollection{2023_Volz,
  author = {Vanessa Volz and Dani Irawan and Koen van der Blom and Boris Naujoks},
  title = {Benchmarking},
  booktitle = {Natural Computing Series},
  publisher = {Springer Nature},
  year = {2023},
  month = {jul},
  pages = {149--179},
  doi = {10.1007/978-3-031-25263-1_6}
}