IEEE Computational Intelligence Magazine, volume 15, issue 2, pages 14-23

The General Combinatorial Optimization Problem: Towards Automated Algorithm Design

Publication type: Journal Article
Publication date: 2020-05-01
Scimago Q1 | SJR 2.085 | CiteScore 14.5 | Impact factor 10.3
ISSN: 1556-603X, 1556-6048
Artificial Intelligence
Theoretical Computer Science
Abstract
This paper defines a new combinatorial optimization problem, namely the General Combinatorial Optimization Problem (GCOP), whose decision variables are a set of parametric algorithmic components, i.e. algorithm design decisions. The solutions of GCOP, i.e. compositions of algorithmic components, thus represent different generic search algorithms. The objective of GCOP is to find the optimal algorithmic compositions for solving the given optimization problems. Solving the GCOP is thus equivalent to automatically designing the best algorithms for optimization problems. Despite recent advances, the evolutionary computation and optimization research communities are yet to embrace formal standards that underpin automated algorithm design. In this position paper, we establish GCOP as a new standard to define different search algorithms within one unified model. We demonstrate how the new GCOP model standardizes various search algorithms as well as selection hyper-heuristics. A taxonomy is defined to distinguish several widely used terminologies in automated algorithm design, namely automated algorithm composition, configuration and selection. We would like to encourage a new line of exciting research directions addressing several challenging research issues including algorithm generality, algorithm reusability, and automated algorithm design.
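The central idea, searching over compositions of algorithmic components rather than directly over problem solutions, can be sketched in a few lines of Python (the component names, toy objective, and neighbourhood below are illustrative, not from the paper):

```python
import random

def hill_climb(solution, evaluate, neighbor):
    """Elementary component: move to a neighbour only if it improves."""
    cand = neighbor(solution)
    return cand if evaluate(cand) < evaluate(solution) else solution

def random_restart(solution, evaluate, neighbor):
    """Elementary component: jump to a fresh random solution."""
    return [random.random() for _ in solution]

def run_composition(components, evaluate, neighbor, steps=200, dim=5):
    """A GCOP 'solution' is a composition of components; running it yields a search algorithm."""
    random.seed(0)  # fixed seed so different compositions are compared fairly
    sol = [random.random() for _ in range(dim)]
    for step in range(steps):
        sol = components[step % len(components)](sol, evaluate, neighbor)
    return evaluate(sol)

sphere = lambda x: sum(v * v for v in x)                   # toy objective
perturb = lambda x: [v + random.gauss(0, 0.1) for v in x]  # toy neighbourhood

# The GCOP objective: find the composition whose induced algorithm performs best.
compositions = [[hill_climb], [hill_climb, random_restart]]
scores = {tuple(c.__name__ for c in comps): run_composition(comps, sphere, perturb)
          for comps in compositions}
best = min(scores, key=scores.get)
```

Here each GCOP "solution" is a sequence of components; evaluating it means running the induced search algorithm on the target problem and scoring the result.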
Liu S., Tang K., Yao X.
Exploiting parallelism is becoming more and more important in designing efficient solvers for computationally hard problems. However, manually building parallel solvers typically requires considerable domain knowledge and plenty of human effort. As an alternative, automatic construction of parallel portfolios (ACPP) aims at automatically building effective parallel portfolios based on a given problem instance set and a given rich configuration space. One promising way to solve the ACPP problem is to explicitly group the instances into different subsets and promote a component solver to handle each of them. This paper investigates solving ACPP from this perspective, and especially studies how to obtain a good instance grouping. The experimental results on two widely studied problem domains, Boolean satisfiability (SAT) and the traveling salesman problem (TSP), showed that the parallel portfolios constructed by the proposed method could achieve consistently superior performance to those constructed by the state-of-the-art ACPP methods, and could even rival sophisticated hand-designed parallel solvers.
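The instance-grouping idea can be illustrated with a deliberately tiny sketch (the instance feature, the grouping rule, and the "solver parameter" are all invented for illustration; the paper's method is far more sophisticated):

```python
# Cluster instances by a simple feature, then dedicate one "component solver"
# (here just a single tuned parameter) to each group.

def group_instances(instances, k):
    """Split instances into k contiguous groups after sorting on one feature."""
    feats = sorted(instances, key=lambda i: i["size"])
    size = max(1, len(feats) // k)
    return [feats[i:i + size] for i in range(0, len(feats), size)][:k]

def configure_for_group(group):
    """Pick the candidate parameter that minimises a toy cost over the group."""
    candidates = [0.1, 0.5, 0.9]
    cost = lambda p: sum(abs(inst["size"] / 100.0 - p) for inst in group)
    return min(candidates, key=cost)

instances = [{"size": s} for s in (10, 12, 50, 55, 90, 95)]
# The resulting "parallel portfolio": one configured solver per instance group.
portfolio = [configure_for_group(g) for g in group_instances(instances, 3)]
```

Each group of similar instances ends up with its own specialised component solver, which is the promotion step the abstract describes.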
Pagnozzi F., Stützle T.
2019-07-01 citations by CoLab: 35 Abstract  
Stochastic local search (SLS) methods are at the core of many effective heuristics for tackling different permutation flowshop problems (PFSPs). Usually, such algorithms require a careful, manual algorithm engineering effort to reach high performance. An alternative to manual algorithm engineering is the automated design of effective SLS algorithms through building flexible algorithm frameworks and using automatic algorithm configuration techniques to instantiate high-performing algorithms. In this paper, we automatically generate new high-performing algorithms for some of the most widely studied variants of the PFSP. More in detail, we (i) develop a new algorithm framework, EMILI, that implements algorithm-specific and problem-specific building blocks; (ii) define the rules for composing algorithms from the building blocks; and (iii) employ an automatic algorithm configuration tool to search for high-performing algorithm configurations. With these ingredients, we automatically generate algorithms for the PFSP with the objectives makespan, total completion time and total tardiness, which outperform the best algorithms obtained by a manual algorithm engineering process.
Soria-Alcaraz J.A., Ochoa G., Sotelo-Figueroa M.A., Burke E.K.
2017-08-01 citations by CoLab: 40 Abstract  
We address the important step of determining an effective subset of heuristics in selection hyper-heuristics. Little attention has been devoted to this in the literature, and the decision is left at the discretion of the investigator. The performance of a hyper-heuristic depends on the quality and size of the heuristic pool. Using more than one heuristic is generally advantageous; however, an unnecessarily large pool can decrease the performance of adaptive approaches. Our goal is to bring methodological rigour to this step. The proposed methodology uses non-parametric statistics and fitness landscape measurements from an available set of heuristics and benchmark instances, in order to produce a compact subset of effective heuristics for the underlying problem. We also propose a new iterated local search hyper-heuristic using multi-armed bandits coupled with a change detection mechanism. The methodology is tested on two real-world optimization problems: course timetabling and vehicle routing. The proposed hyper-heuristic with a compact heuristic pool outperforms state-of-the-art hyper-heuristics and competes with problem-specific methods in course timetabling, even producing new best-known solutions in 5 out of the 24 studied instances.
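The multi-armed-bandit heuristic selection mentioned above can be sketched with a plain UCB1 selector over a heuristic pool (illustrative only; the paper's hyper-heuristic also couples the bandit with a change detection mechanism, omitted here):

```python
import math

class UCBSelector:
    """UCB1 bandit: balance exploiting good heuristics with exploring the rest."""
    def __init__(self, n_heuristics):
        self.counts = [0] * n_heuristics    # times each heuristic was applied
        self.rewards = [0.0] * n_heuristics # cumulative reward per heuristic
        self.t = 0

    def select(self):
        self.t += 1
        for h, c in enumerate(self.counts):  # apply each heuristic once first
            if c == 0:
                return h
        ucb = [self.rewards[h] / self.counts[h]
               + math.sqrt(2 * math.log(self.t) / self.counts[h])
               for h in range(len(self.counts))]
        return max(range(len(ucb)), key=ucb.__getitem__)

    def update(self, h, reward):
        self.counts[h] += 1
        self.rewards[h] += reward

sel = UCBSelector(3)
for _ in range(300):
    h = sel.select()
    # Toy feedback: heuristic 2 improves the solution most often.
    sel.update(h, 1.0 if h == 2 else 0.2)
best_heuristic = max(range(3), key=sel.counts.__getitem__)
```

Over time the selector concentrates its applications on the heuristic that yields the most improvement, while still occasionally re-testing the others.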
Adamo T., Ghiani G., Grieco A., Guerriero E., Manni E.
2017-07-01 citations by CoLab: 14 Abstract  
We propose an Automatic Neighborhood Design algorithm. The procedure relies on the extraction of semantic features from a MIP model. The algorithm is assessed on four well-known combinatorial optimization problems. The definition of a good neighborhood structure on the solution space is a key step when designing several types of heuristics for Mixed Integer Programming (MIP). Typically, in order to achieve efficiency in the search, the neighborhood structures need to be tailored not only to the specific problem but also to the peculiar distribution of the instances to be solved (reference instance population). Nowadays, this is done by human experts through a time-consuming process comprising: (a) problem analysis, (b) literature scouting and (c) experimentation. In this paper, we illustrate an Automatic Neighborhood Design algorithm that mimics steps (a) and (c). Firstly, the procedure extracts some semantic features from a MIP compact model. Secondly, these features are used to derive automatically some neighborhood design mechanisms. Finally, the proper mix of such mechanisms is sought through an automatic configuration phase performed on a training set representative of the reference instance population. When assessed on four well-known combinatorial optimization problems, our automatically-generated neighborhoods outperform state-of-the-art model-based neighborhoods with respect to both scalability and solution quality.
Akay R., Basturk A., Kalinli A., Yao X.
Neurocomputing scimago Q1 wos Q1
2017-07-01 citations by CoLab: 15 Abstract  
Although many algorithms have been proposed, no single algorithm is better than others on all types of problems. Therefore, the search characteristics of different algorithms that show complementary behavior can be combined through portfolio structures to improve the performance on a wider set of problems. In this work, a portfolio of the Artificial Bee Colony, Differential Evolution and Particle Swarm Optimization algorithms was constructed and the first parallel implementation of the population-based algorithm portfolio was carried out by means of a Message Passing Interface environment. The parallel implementation of an algorithm or a portfolio can be performed by different models such as master-slave, coarse-grained or a hybrid of both, as used in this study. Hence, the efficiency and running time of various parallel implementations with different parameter values and combinations were investigated on benchmark problems. The performance of the parallel portfolio was compared to those of the single constituent algorithms. The results showed that the proposed models reduced the running time and the portfolio delivered a robust performance compared to each constituent algorithm. It is observed that the speedup gained over the sequential counterpart changed significantly depending on the structure of the portfolio. The portfolio was also applied to the training of neural networks used for time series prediction. The results demonstrate that the portfolio is able to produce good prediction accuracy.
Pillay N., Beckedahl D.
2017-06-01 citations by CoLab: 13 Abstract  
Hyper-heuristics is an emergent technology that has proven to be effective at solving real-world problems. The two main categories of hyper-heuristics are selection and generation. Selection hyper-heuristics select existing low-level heuristics while generation hyper-heuristics create new heuristics. At the inception of the field, single-point searches were essentially employed by selection hyper-heuristics; however, as the field progressed, evolutionary algorithms have become more prominent. Evolutionary algorithms, namely genetic programming, have chiefly been used for generation hyper-heuristics. Implementing evolutionary algorithm hyper-heuristics can be quite a time-consuming task, which is daunting for first-time researchers and practitioners who would rather focus on the application domain the hyper-heuristic will be applied to, which can itself be quite complex. This paper presents a Java toolkit for the implementation of evolutionary algorithm hyper-heuristics, namely EvoHyp. EvoHyp includes libraries for a genetic algorithm selection hyper-heuristic (GenAlg), a genetic programming generation hyper-heuristic (GenProg), a distributed version of GenAlg (DistrGenAlg) and a distributed version of GenProg (DistrGenProg). The paper describes the libraries and illustrates how they can be used. The ultimate aim is to provide a toolkit which a non-expert in evolutionary algorithm hyper-heuristics can use. The paper concludes with an overview of future extensions of the toolkit.
Tyasnurita R., Ozcan E., John R.
2017-06-01 citations by CoLab: 32 Abstract  
A selection hyper-heuristic is a search method that controls a prefixed set of low-level heuristics for solving a given computationally difficult problem. This study investigates a learning-via-demonstrations approach to generating a selection hyper-heuristic for the Open Vehicle Routing Problem (OVRP). As a chosen `expert' hyper-heuristic is run on a small set of training problem instances, data is collected to learn from the expert regarding how to decide which low-level heuristic to select and apply to the solution in hand during the search process. In this study, a Time Delay Neural Network (TDNN) is used to extract hidden patterns within the collected data in the form of a classifier, i.e. an `apprentice' hyper-heuristic, which is then used to solve the `unseen' problem instances. Firstly, the parameters of TDNN are tuned using a Taguchi orthogonal array as a design-of-experiments method. Then the influence of extending and enriching the information collected from the expert and fed into TDNN is explored on the behaviour of the generated apprentice hyper-heuristic. The empirical results show that the use of the distance between solutions as additional information collected from the expert generates an apprentice which outperforms the expert algorithm on a benchmark of OVRP instances.
López-Ibáñez M., Dubois-Lacoste J., Pérez Cáceres L., Birattari M., Stützle T.
2016-09-21 citations by CoLab: 934 Abstract  
Modern optimization algorithms typically require the setting of a large number of parameters to optimize their performance. The immediate goal of automatic algorithm configuration is to find, automatically, the best parameter settings of an optimizer. Ultimately, automatic algorithm configuration has the potential to lead to new design paradigms for optimization software. The irace package is a software package that implements a number of automatic configuration procedures. In particular, it offers iterated racing procedures, which have been used successfully to automatically configure various state-of-the-art algorithms. The iterated racing procedures implemented in irace include the iterated F-race algorithm and several extensions and improvements over it. In this paper, we describe the rationale underlying the iterated racing procedures and introduce a number of recent extensions. Among these, we introduce a restart mechanism to avoid premature convergence, the use of truncated sampling distributions to handle correctly parameter bounds, and an elitist racing procedure for ensuring that the best configurations returned are also those evaluated in the highest number of training instances. We experimentally evaluate the most recent version of irace and demonstrate with a number of example applications the use and potential of irace , in particular, and automatic algorithm configuration, in general.
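The racing idea at the heart of irace can be conveyed with a stripped-down sketch (illustrative only: real irace eliminates configurations via statistical tests such as the Friedman test and iterates the sampling of new configurations; here elimination is simply by mean cost):

```python
def race(configs, instances, evaluate, keep=2):
    """Evaluate surviving configurations instance by instance, dropping the worst."""
    survivors = list(configs)
    costs = {c: [] for c in configs}
    for inst in instances:
        for c in survivors:
            costs[c].append(evaluate(c, inst))
        if len(survivors) > keep:
            ranked = sorted(survivors, key=lambda c: sum(costs[c]) / len(costs[c]))
            survivors = ranked[:max(keep, len(survivors) // 2)]  # drop the worst half
    return survivors

# Toy tuning task: a configuration is a step size; cost is distance to the
# instance's optimum (all values invented for illustration).
evaluate = lambda step, inst: abs(inst - step)
configs = (0.1, 0.3, 0.5, 0.7)
winners = race(configs, instances=[0.28, 0.33, 0.30, 0.31], evaluate=evaluate)
```

Bad configurations are discarded after a few instances, so the evaluation budget concentrates on the promising ones, which is the central economy of racing.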
Walker D.J., Keedwell E.
2016-07-20 citations by CoLab: 16 Abstract  
Hyper-heuristics have been used widely to solve optimisation problems, often single-objective and discrete in nature. Herein, we extend a recently-proposed selection hyper-heuristic to the multi-objective domain and with it optimise continuous problems. The MOSSHH algorithm operates as a hidden Markov model, using transition probabilities to determine which low-level heuristic or sequence of heuristics should be applied next. By incorporating dominance into the transition probability update rule, and an elite archive of solutions, MOSSHH generates solutions to multi-objective problems that are competitive with bespoke multi-objective algorithms. When applied to test problems, it is able to find good approximations to the true Pareto front, and yields information about the type of low-level heuristics that it uses to solve the problem.
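The core MOSSHH mechanism, selecting the next low-level heuristic from a transition-probability matrix and reinforcing transitions that succeed, can be sketched on a single-objective toy (the paper's actual update rule incorporates Pareto dominance and an elite archive, which this sketch omits):

```python
import random

random.seed(1)
heuristics = [lambda x: x + random.gauss(0, 0.5),  # perturbation heuristic
              lambda x: x * 0.9]                   # contraction heuristic
n = len(heuristics)
P = [[1.0 / n] * n for _ in range(n)]  # row-stochastic transition matrix

def step(state, x, fitness):
    """Pick the next heuristic from P[state]; reinforce the transition on success."""
    nxt = random.choices(range(n), weights=P[state])[0]
    cand = heuristics[nxt](x)
    if fitness(cand) < fitness(x):      # improving move: reinforce and accept
        P[state][nxt] += 0.1
        total = sum(P[state])
        P[state] = [p / total for p in P[state]]  # renormalise the row
        x = cand
    return nxt, x

x, state = 10.0, 0
for _ in range(100):
    state, x = step(state, x, fitness=abs)
```

Transitions that tend to produce improving moves accumulate probability mass, so the model gradually learns which heuristic sequences work on the problem at hand.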
Yin P., Lyu S., Chuang Y.
2016-06-01 citations by CoLab: 27 Abstract  
Cross-docking technology transships products from incoming vehicles directly to outgoing vehicles by using the warehouse as a temporary buffer instead of a place for storage and retrieval. Supply chain management (SCM) with cross-docks is both effective and efficient: no storage is facilitated at the cross-dock and order-picking is replaced by fast consolidation. However, cross-docking involves interrelated operations such as vehicle routing and vehicle scheduling which require proper planning and synchronization. Traditional cross-docking methods treat the operations separately and overlook the potential advantage of cooperative planning. This paper proposes a bi-objective mathematical formulation for cross-docking with the noted new challenges. As the addressed problem is highly constrained, we develop a cooperative coevolution approach consisting of Hyper-heuristics and Hybrid-heuristics for achieving continuous improvement in alternating objectives. The performance of our approach is illustrated with real geographical data and is compared with existing models. Statistical tests based on intensive simulations, including a 95%-confidence convergence analysis and a worst-case analysis, are conducted to provide a reliable performance guarantee.
Kendall G., Bai R., Błazewicz J., De Causmaecker P., Gendreau M., John R., Li J., McCollum B., Pesch E., Qu R., Sabar N., Berghe G.V., Yee A.
2016-04-01 citations by CoLab: 61
Asta S., Özcan E., Curtois T.
Knowledge-Based Systems scimago Q1 wos Q1
2016-04-01 citations by CoLab: 34 Abstract  
Nurse rostering is a well-known highly constrained scheduling problem requiring assignment of shifts to nurses satisfying a variety of constraints. Exact algorithms may fail to produce high quality solutions, hence (meta)heuristics are commonly preferred as solution methods which are often designed and tuned for specific (group of) problem instances. Hyper-heuristics have emerged as general search methodologies that mix and manage a predefined set of low level heuristics while solving computationally hard problems. In this study, we describe an online learning hyper-heuristic employing a data science technique which is capable of self-improvement via tensor analysis for nurse rostering. The proposed approach is evaluated on a well-known nurse rostering benchmark consisting of a diverse collection of instances obtained from different hospitals across the world. The empirical results indicate the success of the tensor-based hyper-heuristic, improving upon the best-known solutions for four of the instances.
Ritzinger U., Puchinger J., Hartl R.F.
2015-05-15 citations by CoLab: 297 Abstract  
Research on dynamic and stochastic vehicle routing problems received increasing interest in the last decade. It considers a novel problem class, aiming at an appropriate handling of dynamic events combined with the incorporation of stochastic information about possible future events. This survey summarises the recent literature in this area. Besides, the classification according to the available stochastic information, a new classification based on the point in time where substantial computational effort for determining decisions or decision policies arises, is introduced. Furthermore, the difference in solution quality is analysed between approaches which consider either purely dynamic or stochastic problems compared to those which consider both, stochastic and dynamic aspects. A graphical representation demonstrates the strength of the reviewed approaches incorporating dynamic and stochastic information. The survey also gives an overview on the intensity of research for the different problem classes and...
Sabar N.R., Zhang X.J., Song A.
2015-05-01 citations by CoLab: 20 Abstract  
Vehicle routing is known as one of the most challenging and important problems in the transportation and logistics field. The task is to optimise a set of vehicle routes to serve a group of customers with minimal delivery cost while respecting the problem constraints such as arriving within given time windows. This study presents a math-hyper-heuristic approach to tackle this problem more effectively and efficiently. The proposed approach consists of two phases: a math phase and a hyper-heuristic phase. In the math phase, the problem is decomposed into sub-problems which are solved independently using the column generation algorithm. The solutions for these sub-problems are combined and then improved by the hyper-heuristic phase. Benchmark instances of large-scale vehicle routing problems with time windows were used for evaluation. The results show the effectiveness of the math phase. More importantly, the proposed method achieved better solutions in comparison with two state-of-the-art methods on all instances. The computational cost of the proposed method is also lower than that of the other methods.
Maashi M., Kendall G., Özcan E.
Applied Soft Computing Journal scimago Q1 wos Q1
2015-03-01 citations by CoLab: 52 Abstract  
Highlights: a selection learning hyper-heuristic is proposed for multi-objective optimization; GDA and LA are utilized as move acceptance within the hyper-heuristic framework; the D metric is integrated into the move acceptance methods to enable the approaches to deal with multi-objective problems; the experimental results demonstrate the effectiveness of the non-deterministic move acceptance strategy; the proposed methods are tested on a generic benchmark and a real-world problem.

A selection hyper-heuristic is a high-level search methodology which operates over a fixed set of low-level heuristics. During the iterative search process, a heuristic is selected and applied to a candidate solution in hand, producing a new solution which is then accepted or rejected at each step. Selection hyper-heuristics have been increasingly, and successfully, applied to single-objective optimization problems, while work on multi-objective selection hyper-heuristics is limited. This work presents one of the initial studies on selection hyper-heuristics combining a choice function heuristic selection methodology with great deluge and late acceptance as non-deterministic move acceptance methods for multi-objective optimization. A well-known hypervolume metric is integrated into the move acceptance methods to enable the approaches to deal with multi-objective problems. The performance of the proposed hyper-heuristics is investigated on the Walking Fish Group test suite, which is a common benchmark for multi-objective optimization. Additionally, they are applied to the vehicle crashworthiness design problem as a real-world multi-objective problem. The experimental results demonstrate the effectiveness of the non-deterministic move acceptance, particularly great deluge when used as a component of a choice-function-based selection hyper-heuristic.
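Great deluge, one of the two move acceptance methods studied, is simple to state in its classic single-objective form (the paper's multi-objective variant replaces the scalar cost with a hypervolume-based measure; the sketch below is only the scalar version, with invented toy parameters):

```python
import random

def great_deluge(initial, neighbor, cost, steps, decay):
    """Accept any candidate whose cost is at or below a steadily dropping 'water level'."""
    x = initial
    level = cost(x)                 # the water level: worst cost we will accept
    for _ in range(steps):
        cand = neighbor(x)
        if cost(cand) <= level:     # anything not above the level is accepted
            x = cand
        level -= decay              # the level drops, tightening acceptance over time
    return x

random.seed(3)
result = great_deluge(
    initial=8.0,
    neighbor=lambda v: v + random.uniform(-1, 1),
    cost=abs,
    steps=500,
    decay=0.02,
)
```

Unlike greedy acceptance, worsening moves are allowed early on, and the linearly falling level gradually forces the search toward low-cost solutions.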
Garza-Santisteban F., Cruz-Duarte J.M., Amaya I., Ortiz-Bayliss J.C., Conant-Pablos S.E., Terashima-Marín H.
Journal of Scheduling scimago Q1 wos Q3
2024-10-14 citations by CoLab: 1 Abstract  
Selection hyper-heuristics are novel tools that combine low-level heuristics into robust solvers commonly used for tackling combinatorial optimization problems. However, the training cost is a drawback that hinders their applicability. In this work, we analyze the effect of training with different problem sizes to determine whether an effective simplification can be made. We select Job Shop Scheduling problems as an illustrative scenario to analyze and propose two hyper-heuristic approaches, based on Simulated Annealing (SA) and Unified Particle Swarm Optimization (UPSO), which use a defined set of simple priority dispatching rules as heuristics. Preliminary results suggest a relationship between instance size and hyper-heuristic performance. We conduct experiments training on two different instance sizes to understand such a relationship better. Our data show that hyper-heuristics trained in small-sized instances perform similarly to those trained in larger ones. However, the extent of such an effect changes depending on the approach followed. This effect was more substantial for the model powered by SA, and the resulting behavior for small and large-sized instances was very similar. Conversely, for the model powered by UPSO, data were more outspread. Even so, the phenomenon was noticeable as the median performance was similar between small and large-sized instances. In fact, through UPSO, we achieved hyper-heuristics that performed better on the training set. However, using small-sized instances seems to overspecialize, which results in spread-out testing performance. Hyper-heuristics resulting from training with small-sized instances can outperform a synthetic Oracle on large-sized testing instances in about 50% of the runs for SA and 25% for UPSO. This allows for significant time savings during the training procedure, thus representing a worthy approach.
Ma L., Hao X., Zhou W., He Q., Zhang R., Chen L.
Complex & Intelligent Systems scimago Q1 wos Q1 Open Access
2024-08-17 citations by CoLab: 0 PDF Abstract  
In recent years, the application of Neural Combinatorial Optimization (NCO) techniques in Combinatorial Optimization (CO) has emerged as a popular and promising research direction. Currently, there are mainly two types of NCO, namely Constructive Neural Combinatorial Optimization (CNCO) and Perturbative Neural Combinatorial Optimization (PNCO). CNCO generally trains an encoder-decoder model via supervised learning to construct solutions from scratch. It exhibits high speed in the construction process; however, it lacks the ability for sustained optimization due to the one-shot mapping, which bounds its potential for application. Instead, PNCO generally trains neural network models via deep reinforcement learning (DRL) to intelligently select appropriate human-designed heuristics to improve existing solutions. It can achieve high-quality solutions but at the cost of high computational demand. To leverage the strengths of both approaches, we propose to hybridize CNCO and PNCO in a framework comprising two stages, in which CNCO is the first stage and PNCO is the second. Specifically, in the first stage, we utilize the attention model to generate preliminary solutions for given CO instances. In the second stage, we employ DRL to intelligently select and combine appropriate algorithmic components from an improvement pool, a perturbation pool, and a prediction pool to continuously optimize the obtained solutions. Experimental results on synthetic and real Capacitated Vehicle Routing Problems (CVRPs) and Traveling Salesman Problems (TSPs) demonstrate the effectiveness of the proposed hybrid framework with the assistance of automated algorithm design.
Xue X.
Knowledge-Based Systems scimago Q1 wos Q1
2024-06-01 citations by CoLab: 4 Abstract  
Knowledge Graph (KG) provides a structured representation of domain knowledge by formally defining entities and their relationships. However, distinct communities tend to employ different terminologies and granularity levels to describe the same entity, leading to the KG heterogeneity issue that hampers their communications. KG matching can identify semantically similar entities in two KGs, which is an effective solution to this problem. Similarity Measures (SMs) are the foundation of the KG matching technique, and due to the complexity of entity heterogeneity, it is necessary to construct a high-level SM by selecting and combining the basic SMs. However, the large number of SMs and their intricate relationships make SM construction an open challenge. Inspired by the success of Evolutionary Algorithms (EA) in addressing the entity matching problem, this work further proposes a novel Self-adaptive Designed Genetic Programming (SDGP) to automatically construct the SM for KG matching. To overcome the drawbacks of the classic EA-based matching methods, a new individual representation and a novel fitness function are proposed to enable SDGP automatically explore the SM selection and combination. Then, a new Adaptive Automatic Design (AAD) method is introduced to adaptively trade off SDGP's exploration and exploitation, which can determine the timing of AAD and efficiently determine the suitable breeding operators and control parameters for SDGP. The experiment uses the Ontology Alignment Evaluation Initiative's Knowledge Graph (KG) data set to test the performance of SDGP. The experimental results show that SDGP can effectively determine high-quality KG alignments, which significantly outperform state-of-the-art KG matching methods.
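The underlying SM-construction problem can be shown in miniature: choose a combination of basic similarity measures that best separates matching from non-matching entity pairs (the measures, weights, and data below are invented for illustration; SDGP searches a much richer space of selections and combinations with genetic programming):

```python
def token_jaccard(a, b):
    """Basic SM 1: Jaccard similarity over lower-cased word tokens."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def length_sim(a, b):
    """Basic SM 2: a crude length-ratio similarity."""
    return min(len(a), len(b)) / max(len(a), len(b))

# Tiny labelled set of entity pairs (True = same real-world entity).
pairs = [("heart attack", "myocardial heart attack", True),
         ("heart attack", "kidney stone", False)]

def accuracy(w):
    """Score a candidate combined SM: weight w on Jaccard, (1 - w) on length."""
    combined = lambda a, b: w * token_jaccard(a, b) + (1 - w) * length_sim(a, b)
    return sum((combined(a, b) >= 0.5) == label for a, b, label in pairs) / len(pairs)

# The construction step: search the (here one-dimensional) combination space.
best_w = max((w / 10 for w in range(11)), key=accuracy)
```

Even in this toy form, the length measure alone misclassifies the non-matching pair, and only certain combinations separate the two classes, which is why combining basic SMs is a genuine search problem.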
Zambrano-Gutierrez D.F., Valencia-Rivera G.H., Avina-Cervantes J.G., Amaya I., Cruz-Duarte J.M.
Fractal and Fractional scimago Q2 wos Q1 Open Access
2024-04-12 citations by CoLab: 2 PDF Abstract  
This work introduces an alternative approach for developing a customized Metaheuristic (MH) tailored for tuning a Fractional-Order Proportional-Integral-Derivative (FOPID) controller within an Automatic Voltage Regulator (AVR) system. Leveraging an Automated Algorithm Design (AAD) methodology, our strategy generates MHs by utilizing a population-based Search Operator (SO) domain, thus minimizing human-induced bias. This approach eliminates the need for manual coding or the daunting task of selecting an optimal algorithm from a vast collection in the current literature. The devised MH consists of two distinct SOs: a dynamic swarm perturbator succeeded by a Metropolis-type selector, and a genetic crossover perturbator followed by another Metropolis-type selector. This MH fine-tunes the FOPID controller’s parameters, aiming to enhance control performance by reducing overshoot, rise time, and settling time. Our research includes a comparative analysis with similar studies, revealing that our tailored MH significantly improves the FOPID controller’s speed by a factor of 1.69 while virtually eliminating overshoot. In addition, we assess the tuned FOPID controller’s resilience against internal disturbances within AVR subsystems. The study also explores two facets of control performance: the impact of fractional orders on conventional PID controller efficiency and the delineation of a confidence region for stable and satisfactory AVR operation. This work’s main contributions are introducing an innovative method for deriving efficient MHs in electrical engineering and control systems and demonstrating the substantial benefits of precise controller tuning, as evidenced by the superior performance of our customized MH compared to existing solutions.
M.T. I., C. S.V.
2024-03-01 citations by CoLab: 2 Abstract  
A meta-evolutionary framework called Differential Evolution Ensemble Designer (DEED) is proposed in this paper to automate the design of DE ensemble algorithms. Given the design components of DE ensembles and a set of optimization problems, DEED evolves effective and robust DE ensemble designs. The design components of DE ensemble algorithms include population management, the constituent algorithms in the ensemble, information mixing amongst the sub-populations in the ensemble, and the numerical parameters associated with various aspects of the ensemble. DEED employs Dynamic Structured Grammatical Evolution (DSGE) as the meta-evolutionary algorithm. A Backus–Naur form (BNF) grammar has been developed in this paper to represent the design space of DE ensembles and is used by DSGE to evolve DE ensemble designs. DEED has been employed to evolve DE ensemble designs for solving 30-dimensional CEC’17 benchmark functions. The evolved designs (both the best design as well as all the final evolved designs) have been validated on CEC’14 and CEC’17 functions at 10, 30 and 50 dimensions and on real-world numerical optimization problems in the CEC’11 benchmark suite. The DEED-evolved designs have also been tested against the state-of-the-art algorithm configurator irace. The performance of DEED-evolved ensemble designs has been observed to be very competitive against that of manually designed and tuned state-of-the-art DE ensemble algorithms in the literature. DEED has also been demonstrated to evolve both cooperative- and competitive-style DE ensembles. The simulation experiments demonstrate the effectiveness as well as robustness of the evolved ensemble designs and the reliability of the DEED framework in consistently evolving effective DE ensemble designs.
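The grammar-based design representation can be illustrated with a toy BNF-style grammar and a random derivation (the grammar below is invented for illustration and is not DEED's actual grammar; DSGE additionally evolves the derivation choices rather than sampling them):

```python
import random

# A toy design space for DE ensembles, written as a BNF-like grammar:
# an ensemble is two constituent algorithms plus one information-mixing scheme.
grammar = {
    "<ensemble>": [["<algo>", "<algo>", "<mixing>"]],
    "<algo>": [["DE/rand/1"], ["DE/best/1"], ["DE/current-to-best/1"]],
    "<mixing>": [["ring-migration"], ["random-migration"], ["no-mixing"]],
}

def expand(symbol, rng):
    """Recursively expand a symbol into a list of terminal design tokens."""
    if symbol not in grammar:
        return [symbol]                      # terminal: emit as-is
    production = rng.choice(grammar[symbol]) # pick one production rule
    out = []
    for s in production:
        out.extend(expand(s, rng))
    return out

rng = random.Random(7)
design = expand("<ensemble>", rng)  # one sampled ensemble design
```

Every string derivable from the grammar is a legal ensemble design, so the meta-level search only ever proposes well-formed candidates.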
Meng W., Qu R.
2024-03-01 citations by CoLab: 3 Abstract  
Within the recently defined AutoGCOP framework, the design of local search algorithms is formulated as the composition of elementary algorithmic components. The effective compositions of the best algorithms thus retain useful knowledge of effective algorithm design. This paper investigates machine learning to learn and extract useful knowledge from effective algorithmic compositions. The process of forecasting algorithmic components in the design of effective local search algorithms is cast as a sequence classification task and solved by a long short-term memory (LSTM) neural network to systematically analyse algorithmic compositions. Compared with other learning models, the results reveal the superior prediction performance of the proposed LSTM. Further analysis identifies some key features of algorithmic compositions and confirms their effectiveness for improving the prediction, thus supporting effective automated algorithm design.
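The sequence-prediction framing can be shown in miniature with a much simpler stand-in model, a bigram frequency table instead of an LSTM, over invented composition logs (the component names below are illustrative):

```python
from collections import Counter, defaultdict

# Logs of effective algorithmic compositions: sequences of component names.
logs = [
    ["perturb", "local_search", "accept", "perturb", "local_search", "accept"],
    ["perturb", "local_search", "restart", "perturb", "local_search", "accept"],
]

# Count which component tends to follow which (a bigram model).
bigrams = defaultdict(Counter)
for seq in logs:
    for prev, nxt in zip(seq, seq[1:]):
        bigrams[prev][nxt] += 1

def predict_next(component):
    """Predict the most frequent successor of a component in the logs."""
    return bigrams[component].most_common(1)[0][0]
```

An LSTM plays the same role over much longer contexts: given the composition so far, classify which component should come next.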
