Journal of Scheduling

Selection hyper-heuristics and job shop scheduling problems: How does instance size influence performance?

Fernando Garza-Santisteban 1
Jorge Mario Cruz-Duarte 1
Ivan Amaya 1
Jose Carlos Ortiz-Bayliss 1
Santiago E. Conant-Pablos 1
Hugo Terashima-Marín 1
Publication type: Journal Article
Publication date: 2024-10-14
scimago Q1
SJR: 0.793
CiteScore: 3.8
Impact factor: 1.4
ISSN: 1094-6136, 1099-1425
Abstract
Selection hyper-heuristics are novel tools that combine low-level heuristics into robust solvers, commonly used for tackling combinatorial optimization problems. However, their training cost is a drawback that hinders their applicability. In this work, we analyze the effect of training with different problem sizes to determine whether an effective simplification can be made. We select Job Shop Scheduling problems as an illustrative scenario and propose two hyper-heuristic approaches, based on Simulated Annealing (SA) and Unified Particle Swarm Optimization (UPSO), which use a defined set of simple priority dispatching rules as heuristics. Preliminary results suggest a relationship between instance size and hyper-heuristic performance. We conduct experiments training on two different instance sizes to understand such a relationship better. Our data show that hyper-heuristics trained on small-sized instances perform similarly to those trained on larger ones. However, the extent of such an effect changes depending on the approach followed. The effect was more substantial for the model powered by SA, whose behavior on small and large-sized instances was very similar. Conversely, for the model powered by UPSO, the data were more spread out. Even so, the phenomenon was noticeable, as the median performance was similar between small and large-sized instances. In fact, through UPSO, we achieved hyper-heuristics that performed better on the training set. However, training on small-sized instances seems to cause overspecialization, which results in scattered testing performance. Hyper-heuristics resulting from training on small-sized instances can outperform a synthetic Oracle on large-sized testing instances in about 50% of the runs for SA and 25% for UPSO. This allows for significant time savings during the training procedure, thus representing a worthwhile approach.
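As a rough illustration of the kind of model the abstract describes, a selection hyper-heuristic in which Simulated Annealing searches over sequences of priority dispatching rules for a Job Shop instance, consider the following minimal Python sketch. The toy instance, the rule set (SPT, MWR, LWR), and all parameters are illustrative assumptions, not the authors' implementation.

```python
import math
import random

# Toy 3x3 Job Shop instance: job -> list of (machine, processing_time) operations.
INSTANCE = [
    [(0, 3), (1, 2), (2, 2)],
    [(0, 2), (2, 1), (1, 4)],
    [(1, 4), (2, 3), (0, 1)],
]

# Priority dispatching rules: the job with the lowest key is dispatched first.
RULES = {
    "SPT": lambda j, s: s["next_time"][j],    # Shortest Processing Time (next op)
    "MWR": lambda j, s: -s["work_left"][j],   # Most Work Remaining
    "LWR": lambda j, s: s["work_left"][j],    # Least Work Remaining
}

def makespan(instance, rule_seq):
    """Decode a sequence of rule names into a schedule; return its makespan."""
    n = len(instance)
    op_idx, job_ready = [0] * n, [0] * n
    mach_ready = [0] * (1 + max(m for job in instance for m, _ in job))
    work_left = [sum(p for _, p in job) for job in instance]
    step = 0
    while any(op_idx[j] < len(instance[j]) for j in range(n)):
        pending = [j for j in range(n) if op_idx[j] < len(instance[j])]
        state = {"next_time": {j: instance[j][op_idx[j]][1] for j in pending},
                 "work_left": {j: work_left[j] for j in pending}}
        rule = RULES[rule_seq[step % len(rule_seq)]]
        j = min(pending, key=lambda job: rule(job, state))
        m, p = instance[j][op_idx[j]]
        start = max(job_ready[j], mach_ready[m])
        job_ready[j] = mach_ready[m] = start + p
        work_left[j] -= p
        op_idx[j] += 1
        step += 1
    return max(job_ready)

def sa_train(instance, seq_len=6, iters=500, t0=5.0, cooling=0.99):
    """Simulated Annealing over rule sequences: the hyper-heuristic 'training'."""
    names = list(RULES)
    current = [random.choice(names) for _ in range(seq_len)]
    cost = makespan(instance, current)
    best, best_cost, t = current[:], cost, t0
    for _ in range(iters):
        cand = current[:]
        cand[random.randrange(seq_len)] = random.choice(names)  # point mutation
        c = makespan(instance, cand)
        if c <= cost or random.random() < math.exp((cost - c) / t):  # Metropolis
            current, cost = cand, c
            if c < best_cost:
                best, best_cost = cand[:], c
        t *= cooling
    return best, best_cost

print(sa_train(INSTANCE))
```

Training on a tiny instance like this one and then reusing the evolved rule sequence on larger instances mirrors the simplification the paper evaluates.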
Cruz-Duarte J.M., Amaya I., Ortiz-Bayliss J.C., Conant-Pablos S.E., Terashima-Marín H., Shi Y.
2021-10-01 citations by CoLab: 46 Abstract  
Literature is prolific with metaheuristics for solving continuous optimisation problems. In practice, however, it is difficult to choose one appropriately, for several reasons. First and foremost, ‘new’ metaheuristics are being proposed at an alarmingly fast rate, rendering it impossible to know them all. Moreover, it is necessary to determine a good enough set of parameters for the selected approach. Hence, this work proposes a strategy based on a hyper-heuristic model powered by Simulated Annealing for customising population-based metaheuristics. Our approach considers search operators from 10 well-known techniques as building blocks for new ones. We test this strategy on 107 continuous benchmark functions and in up to 50 dimensions. Besides, we analyse the performance of our approach under different experimental conditions. The resulting data reveal that it is possible to obtain good-performing metaheuristics with diverse configurations for each case study, in an automatic fashion. In this way, we validate the potential of the proposed framework for devising metaheuristics that solve continuous optimisation problems with different characteristics, similar to those from practical engineering scenarios.
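A minimal sketch of this idea, assuming Simulated Annealing explores the space of operator compositions and each candidate composition is scored by running the composed metaheuristic on a benchmark function, is given below. The three operators and the sphere benchmark are invented stand-ins for the paper's actual building blocks.

```python
import math
import random

def sphere(x):
    """Benchmark to customise against (a stand-in for the 107 functions)."""
    return sum(v * v for v in x)

# Illustrative search operators acting on a population of real vectors.
def gaussian_mutation(pop, fit):
    return [[v + random.gauss(0, 0.3) for v in ind] for ind in pop]

def move_to_best(pop, fit):
    best = min(pop, key=fit)
    return [[v + 0.5 * (b - v) for v, b in zip(ind, best)] for ind in pop]

def random_reset(pop, fit):
    return [[random.uniform(-5, 5) for _ in ind] if random.random() < 0.1 else ind
            for ind in pop]

OPERATORS = [gaussian_mutation, move_to_best, random_reset]

def run_metaheuristic(ops, fit, dim=5, pop_size=20, steps=30):
    """Run the composed metaheuristic, keeping per-individual improvements."""
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(steps):
        for op in ops:
            new = op(pop, fit)
            pop = [n if fit(n) < fit(o) else o for n, o in zip(new, pop)]
    return min(fit(ind) for ind in pop)

def sa_customise(iters=100, length=3, t0=1.0, cool=0.97):
    """High-level SA search in the space of operator compositions."""
    seq = [random.choice(OPERATORS) for _ in range(length)]
    cost, t = run_metaheuristic(seq, sphere), t0
    for _ in range(iters):
        cand = seq[:]
        cand[random.randrange(length)] = random.choice(OPERATORS)
        c = run_metaheuristic(cand, sphere)
        if c <= cost or random.random() < math.exp((cost - c) / t):
            seq, cost = cand, c
        t *= cool
    return [op.__name__ for op in seq], cost

print(sa_customise())
```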
Song H., Lin J.
2021-02-01 citations by CoLab: 72 Abstract  
In this paper, a genetic programming hyper-heuristic (GP-HH) algorithm is proposed to solve the distributed assembly permutation flow-shop scheduling problem with sequence-dependent setup times (DAPFSP-SDST), with the objective of makespan minimization. The main idea is to use genetic programming (GP) as the high-level strategy to generate heuristic sequences from a pre-designed set of low-level heuristics (LLHs). In each generation, the heuristic sequences are evolved by GP and then successively operated on the solution space for better solutions. Additionally, simulated annealing is embedded into each LLH to improve the local search ability. An effective encoding and decoding pair is also presented for the algorithm to obtain feasible schedules. Finally, computational simulation and comparison are both carried out on a benchmark set, and the results demonstrate the effectiveness of the proposed GP-HH. The best-known solutions are updated for 333 out of the 540 benchmark instances.
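The paper's high-level search is tree-based genetic programming; the sketch below substitutes a much simpler genetic algorithm over fixed-length LLH index sequences to convey the overall pattern of evolving heuristic sequences. The three permutation moves and the toy cost function are assumptions for illustration only.

```python
import random

# Illustrative low-level heuristics (LLHs): each perturbs a permutation.
def swap_two(p):
    p = p[:]
    i, j = random.sample(range(len(p)), 2)
    p[i], p[j] = p[j], p[i]
    return p

def insert_move(p):
    p = p[:]
    i, j = random.sample(range(len(p)), 2)
    p.insert(j, p.pop(i))
    return p

def reverse_block(p):
    p = p[:]
    i, j = sorted(random.sample(range(len(p)), 2))
    p[i:j + 1] = reversed(p[i:j + 1])
    return p

LLHS = [swap_two, insert_move, reverse_block]

def apply_sequence(seq, perm, cost):
    """Run the LLHs in order, keeping each move only if it improves the cost."""
    best = perm
    for idx in seq:
        cand = LLHS[idx](best)
        if cost(cand) < cost(best):
            best = cand
    return best

def evolve(cost, n_items=10, pop_size=20, seq_len=8, gens=40):
    """Tiny GA over LLH index sequences (the high-level search of a GP-HH)."""
    start = list(range(n_items))
    random.shuffle(start)
    def fitness(seq):
        return cost(apply_sequence(seq, start, cost))
    pop = [[random.randrange(len(LLHS)) for _ in range(seq_len)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[:pop_size // 2]
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, seq_len)
            child = a[:cut] + b[cut:]                        # one-point crossover
            child[random.randrange(seq_len)] = random.randrange(len(LLHS))
            children.append(child)
        pop = elite + children
    best = min(pop, key=fitness)
    return best, fitness(best)

# Toy cost: total displacement of items from their sorted positions.
print(evolve(lambda p: sum(abs(i - v) for i, v in enumerate(p))))
```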
Wu C., Bai D., Chen J., Lin W., Xing L., Lin J., Cheng S.
2021-02-01 citations by CoLab: 38 Abstract  
Many practical production settings are full of significant uncertainties. For example, the working environment may change, machines may break down, workers may become unstable, etc. In such an environment, job processing times should not be fixed numbers. In light of this situation, we investigate a single-machine problem with two-scenario-based processing times, where the goal is to minimize the maximum of the total completion times over the two scenarios. When the uncertainty of the job processing times is confronted, the robust version of this problem is NP-hard, even for very restricted cases. To solve this problem, we derive some dominance rules and a lower bound for developing branch-and-bound algorithms to find optimal solutions. As for determining approximate solutions, we propose five heuristics, adopting combined two-scenario-dependent processing times, to produce initial solutions, and then improve each with a pairwise interchange. Further, we propose a simulated annealing hyper-heuristic incorporating the proposed seven low-level heuristics to solve this problem as well. Finally, the performances of all proposed algorithms are tested and reported.
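Two of the ingredients named in the abstract, the min-max objective over two scenarios and the pairwise-interchange improvement step, are simple enough to sketch. The job data and the scenario-averaged SPT seed below are illustrative assumptions, not data from the paper.

```python
from itertools import combinations

# Each job has two scenario-dependent processing times (illustrative data).
JOBS = [(3, 5), (4, 2), (2, 6), (5, 4)]

def robust_cost(order):
    """Maximum, over the two scenarios, of the total completion time."""
    worst = 0
    for s in (0, 1):
        t = total = 0
        for j in order:
            t += JOBS[j][s]
            total += t
        worst = max(worst, total)
    return worst

def pairwise_interchange(order):
    """Improve a sequence by repeatedly trying all pairwise job swaps."""
    order = list(order)
    best = robust_cost(order)
    improved = True
    while improved:
        improved = False
        for i, j in combinations(range(len(order)), 2):
            order[i], order[j] = order[j], order[i]
            cost = robust_cost(order)
            if cost < best:
                best, improved = cost, True
            else:
                order[i], order[j] = order[j], order[i]   # undo non-improving swap
    return order, best

# Seed with a scenario-averaged SPT order, then improve it.
seed = sorted(range(len(JOBS)), key=lambda j: sum(JOBS[j]))
print(pairwise_interchange(seed))
```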
Cruz-Duarte J.M., Ortiz-Bayliss J.C., Amaya I., Shi Y., Terashima-Marín H., Pillay N.
Mathematics scimago Q2 wos Q1 Open Access
2020-11-17 citations by CoLab: 33 PDF Abstract  
Metaheuristics have become a widely used approach for solving a variety of practical problems. The literature is full of diverse metaheuristics based on outstanding ideas and with proven excellent capabilities. Nonetheless, metaheuristics often claim novelty when they merely recombine elements of other methods. Hence, a standard metaheuristic model is vital to stop the current frenetic tendency of proposing methods chiefly based on their inspirational source. This work introduces a first step towards a generalised and mathematically formal metaheuristic model, which can be used for studying and improving metaheuristics. The model is based on a scheme of simple heuristics, which act as building blocks that can be modified depending on the application. For this purpose, we define and detail all the components and concepts of a metaheuristic, i.e., its search operators and the heuristics they are built from. Furthermore, we provide some ideas to take into account for exploring other search operator configurations in the future. To illustrate the proposed model, we analyse search operators from four well-known metaheuristics employed in continuous optimisation problems as a proof of concept. From them, we derive 20 different approaches and use them to solve benchmark functions with different landscapes. The data show the remarkable capability of our methodology for building metaheuristics and for detecting which operator to choose depending on the problem to solve. Moreover, we outline and discuss several future extensions of this model to various problem and solver domains.
Barman S.K., Maiti D.K., Maity D.
AIAA Journal scimago Q1 wos Q2
2020-10-15 citations by CoLab: 22 Abstract  
The present paper introduces the mixed unified particle swarm optimization (MUPSO) algorithm to detect and quantify delamination damage in composite beams and plates using vibration responses. MUP...
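MUPSO builds on the standard unified PSO (UPSO) scheme. A minimal sketch of the plain UPSO velocity update, with the usual constriction coefficient, a ring neighbourhood, and a toy sphere objective, is shown below; the mixing that MUPSO itself introduces is omitted, and all parameter values are conventional defaults rather than values from the paper.

```python
import random

def sphere(x):
    return sum(v * v for v in x)

def upso(fit, dim=2, n=20, iters=200, u=0.5, chi=0.729, c1=2.05, c2=2.05):
    """Unified PSO: blend global-best and ring-neighbourhood velocity updates."""
    X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                                  # personal bests
    for _ in range(iters):
        g = min(range(n), key=lambda i: fit(P[i]))         # global best index
        for i in range(n):
            # Ring topology: best personal best among {i-1, i, i+1}.
            l = min((i - 1) % n, i, (i + 1) % n, key=lambda k: fit(P[k]))
            for d in range(dim):
                r = [random.random() for _ in range(4)]
                vg = chi * (V[i][d] + c1 * r[0] * (P[i][d] - X[i][d])
                            + c2 * r[1] * (P[g][d] - X[i][d]))
                vl = chi * (V[i][d] + c1 * r[2] * (P[i][d] - X[i][d])
                            + c2 * r[3] * (P[l][d] - X[i][d]))
                V[i][d] = u * vg + (1 - u) * vl            # unification factor u
                X[i][d] += V[i][d]
            if fit(X[i]) < fit(P[i]):
                P[i] = X[i][:]
    g = min(range(n), key=lambda i: fit(P[i]))
    return P[g], fit(P[g])

print(upso(sphere))
```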
Turky A., Sabar N.R., Dunstall S., Song A.
Knowledge-Based Systems scimago Q1 wos Q1
2020-10-01 citations by CoLab: 28 Abstract  
Local search algorithms have been successfully used for many combinatorial optimisation problems. The choice of the most suitable local search algorithm is, however, a challenging task, as their performance is highly dependent on the problem characteristics. In addition, most of these algorithms require users to select appropriate internal neighbourhood structures to obtain desirable performance. No single local search algorithm can consistently perform well with a fixed setting for different types of problems, or even different instances of the same problem. To address this issue, we propose a hyper-heuristic framework which incorporates multiple local search algorithms and a pool of neighbourhood structures. This framework is novel in three respects. Firstly, a two-stage hyper-heuristic structure is designed to control the selection of a local search algorithm and its internal operators. Secondly, we propose an adaptive ranking mechanism to choose the most appropriate neighbourhood structures for the current local search algorithm. The proposed mechanism uses entropy to evaluate the contribution of the local search in terms of quality and diversity, and adaptively adjusts the pool of candidate neighbourhood structures. Thirdly, we use a population of solutions within the proposed framework to effectively navigate different areas of the search space and share solutions among local search algorithms. To ensure different solutions are allocated to different regions of the search space, we propose a distance-based strategy for the population-updating process that allows solutions to share local search algorithms. We have evaluated the performance of the proposed framework using two challenging optimisation problems: Multi-Capacity Bin Packing benchmark instances and Google Machine Reassignment benchmark instances. The results show the effectiveness of the proposed framework, which outperformed state-of-the-art algorithms on several problem instances.
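The abstract does not spell out the update rules, so the following is only one plausible, assumption-laden reading of how an entropy measure could blend quality and diversity contributions to rank neighbourhood structures; all counts are invented.

```python
import math

# Invented bookkeeping: how often each neighbourhood structure produced an
# improving solution (quality) or a previously unseen one (diversity).
quality = {"swap": 14, "insert": 6, "reverse": 2}
diversity = {"swap": 3, "insert": 9, "reverse": 10}

def entropy(counts):
    """Shannon entropy of a count distribution (higher = more evenly useful)."""
    total = sum(counts.values())
    probs = [c / total for c in counts.values() if c]
    return -sum(p * math.log(p) for p in probs)

def rank(quality, diversity):
    """Blend quality and diversity; weight diversity more when quality
    contributions concentrate in few operators (low entropy)."""
    w = 1 - entropy(quality) / math.log(len(quality))      # in [0, 1]
    qt, dt = sum(quality.values()), sum(diversity.values())
    scores = {op: (1 - w) * quality[op] / qt + w * diversity[op] / dt
              for op in quality}
    return sorted(scores, key=scores.get, reverse=True)

print(rank(quality, diversity))   # preferred neighbourhood structures, best first
```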
Drake J.H., Kheiri A., Özcan E., Burke E.K.
2020-09-01 citations by CoLab: 231 Abstract  
• A review of contemporary literature on selection hyper-heuristics is given.
• The existing classification of selection hyper-heuristics is extended.
• Frameworks designed for hyper-heuristic development are discussed.
• Critical discussion, current trends and directions for future research are included.
Hyper-heuristics have emerged as a way to raise the level of generality of search techniques for computational search problems. This is in contrast to many approaches, which represent customised methods for a single problem domain or a narrow class of problem instances. The term hyper-heuristic was defined in the early 2000s as "a heuristic to choose heuristics", but the idea of designing high-level heuristic methodologies can be traced back to the early 1960s. The current state-of-the-art in hyper-heuristic research comprises a set of methods that are broadly concerned with intelligently selecting or generating a suitable heuristic for a given situation. Hyper-heuristics can be considered as search methods that operate on lower-level heuristics or heuristic components, and can be categorised into two main classes: heuristic selection and heuristic generation. Here we focus on the first of these two categories, selection hyper-heuristics. This paper gives a brief history of this emerging area, reviews contemporary selection hyper-heuristic literature, and discusses recent selection hyper-heuristic frameworks. In addition, the existing classification of selection hyper-heuristics is extended, in order to reflect the nature of the challenges faced in contemporary research. Unlike the survey on hyper-heuristics published in 2013, this paper focuses only on selection hyper-heuristics and presents critical discussion, current research trends and directions for future research.
Sanchez M., Cruz-Duarte J.M., Ortiz-Bayliss J.C., Ceballos H., Terashima-Marin H., Amaya I.
IEEE Access scimago Q1 wos Q2 Open Access
2020-07-14 citations by CoLab: 54 Abstract  
Hyper-heuristics aim at interchanging different solvers while solving a problem. The idea is to determine the best approach for solving the problem at its current state. This way, every move we make gets us closer to a solution. The problem changes; so does its state. As a consequence, a different solver may be invoked for the next move. Hyper-heuristics have been around for almost 20 years. However, combinatorial optimization problems date from way back. Thus, it is paramount to determine whether the efforts revolving around hyper-heuristic research have been targeted at the problems of the highest interest for the combinatorial optimization community. In this work, we tackle such an endeavor. We begin by determining the most relevant combinatorial optimization problems, and then we analyze them in the context of hyper-heuristics. The idea is to verify whether they remain as relevant when considering exclusively works related to hyper-heuristics. We find that some of the most relevant problem domains have also been popular in hyper-heuristics research. Alas, others have not, and few efforts have been directed towards solving them. We identify the following problem domains, which may help in furthering the impact of hyper-heuristics: Shortest Path, Set Cover, Longest Path, and Minimum Spanning Tree. We believe that focusing research on ways of solving them may lead to an increase in the relevance and impact that hyper-heuristics have on combinatorial optimization problems.
Wawrzyniak J., Drozdowski M., Sanlaville É.
2020-06-01 citations by CoLab: 38 Abstract  
This paper considers algorithm selection for the berth allocation problem (BAP) under algorithm runtime limits. BAP consists of scheduling ships to berths, subject to ship ready times and size constraints, for a given objective function. For the purposes of strategic port capacity planning, BAP must be solved many times in extensive simulations, needed to account for uncertainties in ship traffic and handling times, and for alternative terminal designs. The algorithm selection problem (ASP) consists of selecting the algorithms with the best performance for a considered application. We propose a new method for selecting a portfolio of algorithms that will solve the considered BAP instances and return good solutions. The portfolio selection is based on performance on the training instances, measured by runtime and solution quality. In order to select the portfolio, a linear program is used that minimises the solution-quality loss subject to an overall runtime limit. Thus, the portfolio evolves with the runtime limit, which is a key parameter in designing the port capacity simulations. Random instances and real ship traffic logs are used for the training and validation datasets. A portfolio of heuristics is developed which can be used for solving the large instances of BAP that emerge when time horizons of months or years are considered. The evolution of the algorithm portfolios under changing runtime limits is studied, and the portfolios' ability to solve new instances is assessed.
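The portfolio linear program described (minimise solution-quality loss subject to an overall runtime limit) is compact enough to sketch with scipy.optimize.linprog. The loss and runtime numbers below are made up and merely stand in for measurements on training instances; the paper's exact formulation may differ.

```python
import numpy as np
from scipy.optimize import linprog

# Made-up training measurements: rows = instances, columns = algorithms.
loss = np.array([[0.0, 0.2, 0.5],      # solution-quality loss
                 [0.3, 0.0, 0.4],
                 [0.6, 0.1, 0.0]])
time = np.array([[9.0, 3.0, 1.0],      # runtime per instance/algorithm
                 [8.0, 4.0, 1.0],
                 [7.0, 5.0, 2.0]])
T_LIMIT = 12.0                          # overall runtime budget

n_inst, n_alg = loss.shape
c = loss.ravel()                        # minimise total quality loss
A_ub = time.ravel()[None, :]            # total runtime <= T_LIMIT
b_ub = [T_LIMIT]
A_eq = np.zeros((n_inst, n_inst * n_alg))
for i in range(n_inst):
    A_eq[i, i * n_alg:(i + 1) * n_alg] = 1.0    # each instance fully covered
b_eq = np.ones(n_inst)

# x[i, a] = fraction of instance i handled by algorithm a (LP relaxation).
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=(0, 1), method="highs")
x = res.x.reshape(n_inst, n_alg)
portfolio = [a for a in range(n_alg) if x[:, a].sum() > 1e-6]
print("assignment:\n", x.round(2), "\nportfolio:", portfolio)
```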
Qu R., Kendall G., Pillay N.
2020-05-01 citations by CoLab: 39 Abstract  
This paper defines a new combinatorial optimization problem, namely the General Combinatorial Optimization Problem (GCOP), whose decision variables are a set of parametric algorithmic components, i.e. algorithm design decisions. The solutions of GCOP, i.e. compositions of algorithmic components, thus represent different generic search algorithms. The objective of GCOP is to find the optimal algorithmic compositions for solving the given optimization problems. Solving the GCOP is thus equivalent to automatically designing the best algorithms for optimization problems. Despite recent advances, the evolutionary computation and optimization research communities have yet to embrace formal standards that underpin automated algorithm design. In this position paper, we establish GCOP as a new standard for defining different search algorithms within one unified model. We demonstrate how the new GCOP model standardizes various search algorithms as well as selection hyper-heuristics. A taxonomy is defined to distinguish several widely used terminologies in automated algorithm design, namely automated algorithm composition, configuration and selection. We would like to encourage a new line of exciting research directions addressing several challenging research issues, including algorithm generality, algorithm reusability, and automated algorithm design.
Soto C., Dorronsoro B., Fraire H., Cruz-Reyes L., Gomez-Santillan C., Rangel N.
2020-03-01 citations by CoLab: 47 Abstract  
This work presents a novel parallel branch and bound algorithm that, to the very best of our knowledge, efficiently solves to optimality a set of instances of the multi-objective flexible job shop scheduling problem for the first time. It makes use of the well-known NSGA-II algorithm to initialize its upper bound. The algorithm is implemented for shared-memory architectures and, among its main features, incorporates a grid representation of the solution space and a concurrent priority queue to store and dispatch the pending sub-problems to be solved. We report the optimal Pareto fronts of thirteen well-known instances from the literature, which were previously unknown. They will be very useful for the scientific community to provide more accuracy in the performance measurement of their algorithms. Indeed, we carefully analyze the performance of NSGA-II on these instances, comparing the results against the optimal ones computed in this work. Extensive computational experiments show that the proposed algorithm using 24 cores achieves a speedup of 15.64x with an efficiency of 65.20%.
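The paper's solver is multi-objective and parallel; the single-objective, single-threaded skeleton below only illustrates the best-first pattern it relies on: a priority queue of pending subproblems plus lower-bound pruning, with the incumbent playing the role that the NSGA-II solution plays as an initial upper bound. The toy objective (total completion time on one machine) is an assumption chosen for brevity.

```python
import heapq

# Toy stand-in problem: order jobs on one machine to minimise total completion
# time. Only the best-first pattern matters here: a priority queue of pending
# subproblems, lower-bound pruning, and an incumbent acting as the upper bound.
TIMES = [4, 2, 6, 3, 5]

def lower_bound(cost, t, remaining):
    """Optimistic completion: schedule the rest in SPT order."""
    lb = cost
    for p in sorted(TIMES[j] for j in remaining):
        t += p
        lb += t
    return lb

def branch_and_bound(times):
    n = len(times)
    incumbent, best_order = float("inf"), None   # could be seeded heuristically
    heap = [(lower_bound(0, 0, range(n)), 0, 0, ())]   # (bound, cost, t, partial)
    while heap:
        bound, cost, t, partial = heapq.heappop(heap)
        if bound >= incumbent:
            continue                              # prune: cannot beat incumbent
        remaining = [j for j in range(n) if j not in partial]
        if not remaining:
            incumbent, best_order = cost, partial
            continue
        for j in remaining:
            nt, rest = t + times[j], [k for k in remaining if k != j]
            nc = cost + nt
            lb = lower_bound(nc, nt, rest)
            if lb < incumbent:
                heapq.heappush(heap, (lb, nc, nt, partial + (j,)))
    return best_order, incumbent

print(branch_and_bound(TIMES))
```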
Himanshu N., Burman A., Kumar V.
2019-12-13 citations by CoLab: 14 Abstract  
The solution of slope stability problems requires determination of the critical failure surface and the associated optimum/minimum factor of safety (FOS) value. The investigation of the critical failure surface in any soil slope can be posed as a constrained global optimization problem. In this article, a simplified technique for generating segmented non-circular failure surfaces is proposed. The unified particle swarm optimization (UPSO) method is utilized to search for the critical failure surface and the associated minimum FOS of the slope. The limit-equilibrium-based Morgenstern–Price method is used to evaluate the FOS value of the potential failure mass, satisfying both force and moment equilibrium. To demonstrate the performance of UPSO and its competence in yielding the critical failure surface along with the minimum FOS value, three benchmark problems with differing complexities are chosen from the existing literature. The results obtained are compared and found to be in accord with the published literature. The performance of UPSO as a solution algorithm for slope problems is established through convergence studies of various related parameters.
Garza-Santisteban F., Cruz-Duarte J.M., Amaya I., Ortiz-Bayliss J.C., Conant-Pablos S.E., Terashima-Marín H.
2019-12-01 citations by CoLab: 3 Abstract  
Hyper-heuristics stand as a novel tool that combines low-level heuristics into robust solvers. However, training cost is a drawback that hinders their applicability. In this work, we analyze the effect of training with different problem sizes, to determine whether an effective simplification can be made. We train selection hyper-heuristics for the Job Shop Scheduling problem through Simulated Annealing. Results from preliminary experiments suggest that the aforementioned simplification is feasible. To better understand such an effect, we carry out experiments training on two different instance sizes, 5×5 and 15×15, while testing on instances of size 15×15. Our data show that hyper-heuristics trained on small-sized instances perform similarly to those trained on larger problems. Thus, we discuss a possible explanation for this effect.
Xie J., Gao L., Peng K., Li X., Li H.
2019-06-22 citations by CoLab: 154 Abstract  
The flexible job shop scheduling problem (FJSP) is an NP-hard combinatorial optimisation problem with significant applications in the real world. Due to its complexity and significance, much attention has been paid to tackling this problem. In this study, the existing solution methods for the FJSP in the recent literature are classified into exact algorithms, heuristics and meta-heuristics, and reviewed comprehensively. Moreover, the real-world applications of the FJSP are introduced. Finally, the development trends of the manufacturing industry are analysed, and future research opportunities for the FJSP are summarised in detail.
Amaya I., Ortiz-Bayliss J.C., Conant-Pablos S., Terashima-Marin H.
2019-06-01 citations by CoLab: 14 Abstract  
It is common to find that the training of selection hyper-heuristics is done perturbatively. The process usually starts with a random selection module and iterates over a set of instances until finding appropriate values for such a module. In this work, however, we present a model for creating selection hyper-heuristics constructively. To achieve this, we use a set of instances evolved for such a task: for each low-level heuristic, we evolved a group of problem instances that are easily solvable by that particular heuristic but not by the remaining ones. Our model then creates its selector by calculating the centroid of each group and defining a rule that maps said centroid to one corresponding action (in this case, a low-level heuristic). To test our approach, we select the one-dimensional Bin Packing Problem and set four target performance levels (which we refer to as deltas) for instance generation. Then, we analyze all the possible combinations of deltas and study how the performance of the generated hyper-heuristics shifts when they are created using a different number of instances. Our data show the feasibility of creating a hyper-heuristic under the stated conditions. The effectiveness of the model depends on the deltas used, though: we observed that higher deltas are useful while lower deltas are not. For example, when considering a delta level of 2.0, our method produced hyper-heuristics with an accumulated average waste 12% lower than that of the best heuristic; but for a delta level of 0.5, it became impossible to outperform the heuristics.
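A minimal sketch of the resulting selector, a nearest-centroid rule that maps the current problem state to a low-level heuristic, is given below. The feature space and centroid values are invented for illustration; the paper derives each centroid from its evolved instance group.

```python
import math

# Invented centroids in a 2-D problem-state feature space (e.g. average item
# size and open-bin fill ratio); the paper computes each centroid from a group
# of instances evolved to favour one particular heuristic.
CENTROIDS = {
    "first_fit": (0.30, 0.55),
    "best_fit":  (0.55, 0.70),
    "worst_fit": (0.70, 0.35),
}

def select_heuristic(features):
    """Map the current problem state to the heuristic with the nearest centroid."""
    return min(CENTROIDS, key=lambda h: math.dist(features, CENTROIDS[h]))

# While solving, features are recomputed after every move and the selector is
# queried again, so different heuristics can fire at different solution stages.
print(select_heuristic((0.62, 0.40)))   # -> 'worst_fit'
```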
Benavides-Robles M.T., Cruz-Duarte J.M., Ortiz-Bayliss J.C., Amaya I.
IEEE Access scimago Q1 wos Q2 Open Access
2025-01-16 citations by CoLab: 0
