
IEEE CEC 2022 Competition on Dynamic Optimization Problems Generated by Generalized Moving Peaks Benchmark

This competition is supported by the IEEE CIS Task Force on Evolutionary Computation in Dynamic and Uncertain Environments (ECiDUE)

Organized by:

Background

The search space of many real-world optimization problems is dynamic in terms of the objective function, the number of variables, and/or the constraints. Optimization problems that change over time and need to be solved online by an optimization method are referred to as dynamic optimization problems (DOPs). To solve DOPs, algorithms must not only find desirable solutions but also react to environmental changes in a timely manner and find a new solution when the previous one becomes suboptimal. In this competition, we focus on evolutionary dynamic optimization methods (EDOs) for unconstrained single-objective continuous DOPs. EDOs are an important class of algorithms, as they form the foundation for more complex methods, such as those used in robust optimization over time as well as in large-scale and constrained dynamic optimization. This competition aims to provide a common platform for a fair and easy comparison of EDOs: competitors run their own EDO on problem instances generated by the generalized moving peaks benchmark (GMPB) with different characteristics and levels of difficulty.

Generalized Moving Peaks Benchmark [1]

The landscapes generated by GMPB are constructed by assembling several promising regions with a variety of controllable characteristics ranging from unimodal to highly multimodal, symmetric to highly asymmetric, smooth to highly irregular, and various degrees of variable interaction and ill-conditioning. An example of a 2-dimensional landscape generated by GMPB is shown in Figure 1.

For a detailed description of GMPB, the reader is referred to the competition support document (download from here).

The MATLAB source code of the GMPB (where mQSO [2] is used as a sample EDO) can be downloaded from here.

Figure 1: Contour and surface plots of a landscape generated by GMPB.

Problem instances

Twelve problem instances with different characteristics are used in this competition. To generate these problem instances, participants need to set the four parameters PeakNumber, ChangeFrequency, Dimension, and ShiftSeverity in “main.m” according to the values shown in Table 1.

Table 1: Parameter settings of the 12 problem instances used in this competition.

Parameter          F1    F2    F3    F4    F5    F6    F7    F8    F9    F10   F11   F12
PeakNumber         5     10    25    50    100   10    10    10    10    10    10    10
ChangeFrequency    5000  5000  5000  5000  5000  2500  1000  500   5000  5000  5000  5000
Dimension          5     5     5     5     5     5     5     5     10    20    5     5
ShiftSeverity      1     1     1     1     1     1     1     1     1     1     2     5
RunNumber          31  (identical for all instances)
EnvironmentNumber  100 (identical for all instances)
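For reference, Table 1 can be expressed as a configuration map. The sketch below is illustrative Python only (the official code sets these values in the MATLAB “main.m”); the field names mirror the MATLAB variable names:

```python
# Illustrative Python mapping of Table 1; the official competition code
# sets these parameters in MATLAB's "main.m". Names mirror the MATLAB
# variables (PeakNumber, ChangeFrequency, Dimension, ShiftSeverity).
COMMON = {"RunNumber": 31, "EnvironmentNumber": 100}  # identical for all instances

INSTANCES = {
    "F1":  {"PeakNumber": 5,   "ChangeFrequency": 5000, "Dimension": 5,  "ShiftSeverity": 1},
    "F2":  {"PeakNumber": 10,  "ChangeFrequency": 5000, "Dimension": 5,  "ShiftSeverity": 1},
    "F3":  {"PeakNumber": 25,  "ChangeFrequency": 5000, "Dimension": 5,  "ShiftSeverity": 1},
    "F4":  {"PeakNumber": 50,  "ChangeFrequency": 5000, "Dimension": 5,  "ShiftSeverity": 1},
    "F5":  {"PeakNumber": 100, "ChangeFrequency": 5000, "Dimension": 5,  "ShiftSeverity": 1},
    "F6":  {"PeakNumber": 10,  "ChangeFrequency": 2500, "Dimension": 5,  "ShiftSeverity": 1},
    "F7":  {"PeakNumber": 10,  "ChangeFrequency": 1000, "Dimension": 5,  "ShiftSeverity": 1},
    "F8":  {"PeakNumber": 10,  "ChangeFrequency": 500,  "Dimension": 5,  "ShiftSeverity": 1},
    "F9":  {"PeakNumber": 10,  "ChangeFrequency": 5000, "Dimension": 10, "ShiftSeverity": 1},
    "F10": {"PeakNumber": 10,  "ChangeFrequency": 5000, "Dimension": 20, "ShiftSeverity": 1},
    "F11": {"PeakNumber": 10,  "ChangeFrequency": 5000, "Dimension": 5,  "ShiftSeverity": 2},
    "F12": {"PeakNumber": 10,  "ChangeFrequency": 5000, "Dimension": 5,  "ShiftSeverity": 5},
}
```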

Evaluation Criteria

Offline error is used as the performance indicator in this competition:

`E_O = \frac{1}{T\vartheta}\sum_{t=1}^{T}\sum_{c=1}^{\vartheta}\left(f^{(t)}\big(\vec{x}^{\circ(t)}\big) - f^{(t)}\big(\vec{x}^{*((t-1)\vartheta+c)}\big)\right)`

where `\vec{x}^{\circ(t)}` is the global optimum position in the t-th environment, T is the number of environments, `\vartheta` is the change frequency, c is the fitness evaluation counter within each environment, and `\vec{x}^{*((t-1)\vartheta+c)}` is the best position found by the c-th fitness evaluation in the t-th environment. The offline error is therefore the average of the current error values over the whole optimization process. In the source code, the current error values of each run are stored in “Problem.CurrentError”; at the end of each run, the offline error is obtained by averaging these values.
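The double sum above reduces to averaging the current errors over all T·ϑ evaluations. The following is a minimal Python sketch of this computation (the official implementation is MATLAB, and the array names here are illustrative):

```python
import numpy as np

# A minimal sketch of the offline-error computation (not the official
# MATLAB code). f_opt[t] is the global optimum value f^(t)(x_opt) in
# environment t; f_best[t, c] is the best fitness found up to the c-th
# evaluation of environment t (vartheta evaluations per environment).
def offline_error(f_opt, f_best):
    f_opt = np.asarray(f_opt, dtype=float)
    f_best = np.asarray(f_best, dtype=float)
    T, vartheta = f_best.shape
    # Current error at each evaluation: gap between the optimum value
    # and the best-found fitness; the offline error is their average.
    current_error = f_opt[:, None] - f_best
    return current_error.sum() / (T * vartheta)
```

For example, with two environments of two evaluations each, `f_opt = [10, 20]` and `f_best = [[8, 9], [15, 20]]` give current errors 2, 1, 5, 0 and an offline error of 2.0.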

Rules

  1. Participants are NOT allowed to change the random seed generators in the code.
  2. Participants are NOT allowed to modify “BenchmarkGenerator.m” and “fitness.m” files in the source code.
  3. Participants are NOT allowed to tune the parameters of their algorithm for individual problem instances. In other words, the parameter values of the algorithm must be identical across all problem instances.
  4. The problem instances must be treated as complete black boxes, and the algorithms must NOT use any of the internal parameters of GMPB.
  5. The algorithm can be informed about environmental changes, so participants do not need to use a change detection mechanism in their algorithms. Using a change detection component is, however, also accepted.
  6. Each participant can submit more than one algorithm.
  7. Since this is the first competition on GMPB, competitors can participate with either newly proposed algorithms (unpublished) or their previously published algorithms.
  8. We may require the winner to share their source code with us. The code will not be published and is solely used to reproduce the reported results.

Submission instructions

For each algorithm, the competitors must provide a compressed folder (named after the algorithm) which contains 13 files. The first file is a document containing the following information:

  1. Title
  2. Names, affiliations, and emails of participants
  3. A short description of the algorithm, including the optimizer used (e.g., PSO or DE), how the population is managed and controlled (e.g., bi-population, multi-population with a fixed number of sub-populations, or multi-population with clustering), and whether an explicit memory (archive) is used.
  4. The following table which shows the best, worst, average, median, and standard deviation of the offline error values obtained by 31 runs for each problem instance.
Offline error       F1   F2   F3   F4   F5   F6   F7   F8   F9   F10  F11  F12
Best
Worst
Average
Median
Standard deviation

In addition, the folder must also contain 12 text files, each containing the 31 offline error values obtained by the algorithm over 31 runs on one problem instance. Each text file must be named after the corresponding problem instance (e.g., “F10.dat” contains the 31 entries obtained by solving F10).
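As an illustration of the expected output files and summary statistics, the sketch below writes one instance's 31 offline-error values to a “.dat” file and computes the statistics requested in the table above. This helper is not part of the official package; the function name and return keys are assumptions:

```python
import numpy as np

# Illustrative helper (not part of the official GMPB package): writes the
# offline-error values of one problem instance to "<name>.dat" (one value
# per line) and returns the summary statistics requested in the submission
# document (best, worst, average, median, standard deviation).
def save_and_summarize(name, offline_errors):
    values = np.asarray(offline_errors, dtype=float)
    np.savetxt(f"{name}.dat", values)          # e.g. "F10.dat" with 31 entries
    return {
        "best": values.min(),                  # smallest offline error is best
        "worst": values.max(),
        "average": values.mean(),
        "median": np.median(values),
        "std": values.std(ddof=1),             # sample standard deviation
    }
```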

If you have any questions, please do not hesitate to contact Danial Yazdani
(email: danial.yazdani@gmail.com).

The following citation information should be used to cite the technical report of this competition:

D. Yazdani, J. Branke, M. N. Omidvar, X. Li, C. Li, M. Mavrovouniotis, T. T. Nguyen, S. Yang, and X. Yao, “IEEE CEC 2022 competition on dynamic optimization problems generated by generalized moving peaks benchmark,” arXiv preprint arXiv:2106.06174, 2021.

Submit your files directly to Danial Yazdani (email to danial.yazdani@gmail.com), no later than 15 July 2022.

References

[1] D. Yazdani, M. N. Omidvar, R. Cheng, J. Branke, T. T. Nguyen, and X. Yao, “Benchmarking continuous dynamic optimization: Survey and generalized test suite,” IEEE Transactions on Cybernetics, pp. 1–14, 2020.
[2] T. Blackwell and J. Branke, “Multiswarms, exclusion, and anticonvergence in dynamic environments,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 4, pp. 459–472, 2006.