GECCO 2022 will feature 38 tutorials.
- A (Biased) Introduction to Benchmarking
- A Gentle Introduction to Theory (for Non-Theoreticians)
- Ant Colony Optimisation for Software Engineers
- Automated Algorithm Configuration and Design
- Bayesian Optimization
- Benchmarking and analyzing iterative optimization heuristics with IOHprofiler
- Benchmarking Multiobjective Optimizers 2.0
- CMA-ES and Advanced Adaptation Mechanisms
- Coevolutionary Computation for Adversarial Deep Learning
- Constraint-Handling Techniques used with Evolutionary Algorithms
- Decomposition Multi-Objective Optimization: Current Developments and Future Opportunities
- Difficulties in Fair Performance Comparison of Multiobjective Evolutionary Algorithms
- Embedding Knowledge into the Optimization Process
- Evolution of Neural Networks
- Evolutionary Computation and Evolutionary Deep Learning for Image Analysis, Signal Processing and Pattern Recognition
- Evolutionary Computation and Machine Learning in Security
- Evolutionary Computation for Feature Selection and Feature Construction
- Evolutionary Continuous Dynamic Optimization
- Evolutionary Diversity Optimisation for Combinatorial Optimisation
- Evolutionary Submodular Optimisation
- Generative Hyper-heuristics
- Genetic improvement: Taking real-world source code and improving it using computational search methods.
- Graph-based Genetic Programming
- Graybox Optimization and Next Generation Genetic Algorithms
- Introduction to automated design of scheduling heuristics with genetic programming
- Introductory Mathematical Programming for EC
- Learning Classifier Systems: Cognitive inspired machine learning for eXplainable AI
- Lexicase Selection
- Model-Based Evolutionary Algorithms
- Optimization Challenges at the European Space Agency
- Quality-Diversity Optimization
- Representations for Evolutionary Algorithms
- Runtime Analysis of Population-based Evolutionary Algorithms
- Selection Hyper-heuristics
- Sequential Experimentation by Evolutionary Algorithms
- Statistical Analyses for Multi-objective Stochastic Optimization Algorithms
- Theory and Practice of Population Diversity in Evolutionary Computation
- Transfer Learning in Evolutionary Spaces
Tutorials
A (Biased) Introduction to Benchmarking
Scope and Content: Benchmarking is a compulsory task when assessing the performance of a (new) optimization algorithm. While this appears to be a mainly technical task, surprisingly many methodological problems arise when benchmarking algorithms. Over the past decade, we have seen significant improvements in the state of the art of benchmarking in evolutionary computation. To a great extent, these improvements are domain independent and apply to continuous, discrete, combinatorial, or mixed search spaces as well as to single- and multi-objective optimization.
In this tutorial, we present and discuss key methodological ideas of benchmarking and emphasize the importance of quantitative measurement, the use of problem instances, and the choice of testbeds that do not bias results towards overly easy problems. We particularly review the advantages of presenting data as empirical cumulative distributions of runtimes, a tool that everyone assessing the performance of an algorithm should know.
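For illustration, here is a minimal Python sketch (not COCO code, and simplified compared to what the tutorial presents) of how such an empirical runtime distribution can be computed from the best-so-far histories of several independent runs, where the runtime of a run is the number of evaluations needed to reach a given target value:

```python
import numpy as np

def runtime_ecdf(histories, target, budgets):
    """Fraction of runs that reached `target` within each budget.

    histories: list of 1-D arrays, histories[i][t] = best f-value found by
               run i after t+1 function evaluations (minimization).
    target:    target f-value to be reached.
    budgets:   1-D array of evaluation budgets at which the ECDF is sampled.
    """
    # Runtime of a run = first evaluation index at which the target is hit,
    # or infinity if the run never reaches it within its budget.
    runtimes = []
    for h in histories:
        hits = np.nonzero(np.asarray(h) <= target)[0]
        runtimes.append(hits[0] + 1 if hits.size else np.inf)
    runtimes = np.asarray(runtimes)
    # ECDF value at budget b = proportion of runs with runtime <= b.
    return np.array([(runtimes <= b).mean() for b in budgets])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy "runs": best-so-far values of random search on a 1-D sphere function.
    runs = [np.minimum.accumulate(rng.uniform(-5, 5, 200) ** 2) for _ in range(15)]
    print(runtime_ecdf(runs, target=1e-2, budgets=np.array([10, 50, 100, 200])))
```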
We will then review how this methodology is implemented within the COCO software and show how COCO can and should be used to benchmark an algorithm and write a scientific paper.
This tutorial is intended for young researchers starting in the field who need benchmarking for their research, as well as for researchers who wish to get up to date with the latest developments in benchmarking methodology.
Anne Auger
Anne Auger is a research director at the French National Institute for Research in Computer Science and Control (Inria), heading the RandOpt team. She received her diploma (2001) and PhD (2004) in mathematics from the Paris VI University. Before joining Inria, she worked for two years (2004-2006) at ETH Zurich. Her main research interests are stochastic continuous optimization, including theoretical aspects, algorithm design and benchmarking. She is a member of the ACM SIGEVO executive committee and of the editorial board of Evolutionary Computation. She was General Chair of GECCO in 2019. She has organized the biannual Dagstuhl seminar "Theory of Evolutionary Algorithms" in 2008 and 2010 and all seven previous BBOB workshops at GECCO since 2009. She is co-organizing the forthcoming Dagstuhl seminar on benchmarking.
Nikolaus Hansen
Nikolaus Hansen is a research director at Inria, France. Educated in medicine and mathematics, he received a Ph.D. in civil engineering in 1998 from the Technical University Berlin under Ingo Rechenberg. Before joining Inria, he worked on evolutionary computation, genomics and computational science at the Technical University Berlin, the Institute of Genetic Medicine and ETH Zurich. His main research interests are learning and adaptation in evolutionary computation and the development of algorithms applicable in practice. His best-known contribution to the field of evolutionary computation is the so-called Covariance Matrix Adaptation (CMA).
A Gentle Introduction to Theory (for Non-Theoreticians)
This tutorial addresses GECCO attendees who do not regularly use theoretical methods in their research. For this audience, we give a gentle introduction to the theory of evolutionary computation. Complementing other introductory theory tutorials, we do not discuss mathematical methods or particular results, but explain
- what the theory of evolutionary algorithms aims at,
- how theoretical research in evolutionary computation is conducted,
- how to interpret statements from the theory literature,
- what the most important theory contributions are, and
- what the theory community is trying to understand most at the moment.
Benjamin Doerr
Benjamin Doerr is a full professor at the French Ecole Polytechnique. He received his diploma (1998), PhD (2000) and habilitation (2005) in mathematics from Kiel University. His research area is the theory of both problem-specific algorithms and randomized search heuristics like evolutionary algorithms. Major contributions to the latter include runtime analyses for existing evolutionary algorithms, the determination of optimal parameter values, and the theory-guided design of novel operators, on-the-fly parameter choices, and whole new evolutionary algorithms. Together with Frank Neumann and Ingo Wegener, Benjamin Doerr founded the theory track at GECCO and served as its co-chair from 2007 to 2009 and in 2014. He is a member of the editorial boards of "Artificial Intelligence", "Evolutionary Computation", "Natural Computing", "Theoretical Computer Science", and three journals on classic algorithm theory. Together with Anne Auger, he edited the book "Theory of Randomized Search Heuristics" (World Scientific, 2011). Together with Frank Neumann, he is an editor of the book "Theory of Evolutionary Computation - Recent Developments in Discrete Optimization" (Springer, 2020).
Ant Colony Optimisation for Software Engineers
Many software engineering tasks can be formulated as search problems. Building tests requires selecting among infinitely many test inputs to maximise code coverage. In systems with large test suites and limited resources, developers choose a test execution order among all possible test-case permutations to maximise fault detection. Search-Based Software Engineering (SBSE) is the application of search-based optimisation algorithms to software engineering problems. In this tutorial, we showcase SBSE by demonstrating the application of Ant Colony Optimisation (ACO) to software testing.
The ACO metaheuristic is inspired by the foraging behaviour of ants. Artificial ants build candidate solutions, depositing pheromone on their solution components. Pheromone deposits are proportional to solution quality, and ants prefer components with high pheromone values. Over time, the colony converges towards high-quality solutions.
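As a rough, framework-agnostic illustration of this mechanism (a toy sketch, not the Isula API used later in the tutorial), the following loop applies a simplified Ant System-style construction and pheromone update to a problem where one component must be chosen per position; the parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem: choose one component per position; the cost of a solution is
# the sum of the costs of the chosen components (assumed known here).
n_positions, n_components = 8, 5
costs = rng.uniform(1.0, 10.0, size=(n_positions, n_components))

pheromone = np.ones((n_positions, n_components))
heuristic = 1.0 / costs                        # greedy desirability of a component
alpha, beta, rho, n_ants = 1.0, 2.0, 0.1, 20   # typical-looking parameters (assumption)

best_solution, best_cost = None, np.inf
for iteration in range(100):
    for _ in range(n_ants):
        # Each ant picks one component per position, with probability
        # proportional to pheromone^alpha * heuristic^beta.
        weights = pheromone**alpha * heuristic**beta
        probs = weights / weights.sum(axis=1, keepdims=True)
        solution = np.array([rng.choice(n_components, p=probs[i])
                             for i in range(n_positions)])
        cost = costs[np.arange(n_positions), solution].sum()
        if cost < best_cost:
            best_solution, best_cost = solution, cost
        # Deposit pheromone proportional to solution quality.
        pheromone[np.arange(n_positions), solution] += 1.0 / cost
    pheromone *= (1.0 - rho)                   # evaporation

print("best cost found:", round(best_cost, 3),
      "optimum:", round(costs.min(axis=1).sum(), 3))
```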
This tutorial is divided into three parts. In the first part, we introduce the ACO metaheuristic. We discuss ant, graph, and pheromone matrix representations for discrete and continuous problems. In the second part, we detail an ACO application to automatic test generation for multiple corpora. Finally, in the third part, we demonstrate Isula, a Java library for implementing ACO algorithms (available at: https://github.com/cptanalatriste/isula). We use Isula to incrementally solve an instance of the Unicost Set Covering Problem.
Carlos Gavidia-Calderon
Carlos is a Postdoctoral Research Associate in Software Engineering at The Open University, and part of the SEAD Research Group. He is also a Research Associate at the UKRI TAS Node in Resilience. He graduated in 2020 from University College London with a PhD in Software Engineering, under the supervision of Earl T. Barr, Federica Sarro and Mark Harman. As a postgraduate researcher, Carlos studied cooperation in software development teams using game-theoretic models. He also has a Masters in Computer Science from Pontificia Universidad Catolica del Peru; with a dissertation on ant colony optimisation algorithms applied to computer vision. Previously, Carlos was an undergraduate student of Systems Engineering at Universidad Nacional de Ingenieria. As a software engineer, Carlos worked on projects ranging from mobile applications to large-scale information systems. He has also taught courses on programming fundamentals and algorithms to undergraduate engineering students.
Héctor Menéndez Benito
Hector D. Menendez is currently a lecturer in Computer Science at Middlesex University London. He is a computer scientist (BSc, MSc and PhD) and a mathematician (BSc and MSc). He started working in machine learning during his PhD, but during his postdoc at University College London (UCL) with Dr David Clark he started exploring different areas of "Comprology", mainly security, malware, diversity, and testing. He has extensive experience in applications of bio-inspired computation, the most relevant being the design of several genetic and ant colony optimization algorithms for clustering and automatic test generation, two significant problems in the fields of machine learning and software testing. He is a researcher, developer and business manager. He is currently collaborating with multiple researchers around the world on three research topics: malware, software testing and machine learning; as a developer, he is creating the MLighter system for holistically testing machine learning; and as a business manager, he is leading Endless Science, a new publisher that will reinvent open scientific publications.
Automated Algorithm Configuration and Design
Most optimization algorithms, including evolutionary algorithms and metaheuristics, and general-purpose solvers for integer or constraint programming, often have many parameters that need to be properly designed and tuned to obtain the best results on a particular problem. Automatic (offline) algorithm design methods help algorithm users determine the parameter settings that optimize the performance of the algorithm before the algorithm is actually deployed. Moreover, automatic offline algorithm design methods may potentially lead to a paradigm shift in algorithm design because they enable algorithm designers to explore much larger design spaces than traditional trial-and-error and experimental design procedures. Thus, algorithm designers can focus on inventing new algorithmic components, combine them in flexible algorithm frameworks, and let the final algorithm design decisions be taken by automatic algorithm design techniques for specific application contexts.
This tutorial is structured into two main parts. In the first part, we will give an overview of the algorithm design and tuning problem, review recent methods for automatic algorithm design, and illustrate the potential of these techniques using recent, notable applications from the presenters' and other researchers' work. The second part of the tutorial will focus on a detailed discussion of more complex scenarios, including multi-objective problems, anytime algorithms, heterogeneous problem instances, and the automatic generation of algorithms from algorithm frameworks. The focus of this second part is, hence, on practical but challenging applications of automatic algorithm design. The second part will also demonstrate how to tackle algorithm design tasks using our irace software (http://iridia.ulb.ac.be/irace), which implements the iterated racing procedure for automatic algorithm design. We will provide a practical step-by-step guide on using irace for the typical algorithm design scenario.
Manuel López-Ibáñez
Dr. López-Ibáñez is a "Beatriz Galindo" Senior Distinguished Researcher at the University of Málaga, Spain and a senior lecturer in the Decision and Cognitive Sciences Research Centre at the Alliance Manchester Business School, University of Manchester, UK. He received the M.S. degree in computer science from the University of Granada, Granada, Spain, in 2004, and the Ph.D. degree from Edinburgh Napier University, U.K., in 2009. He has published 32 journal papers, 9 book chapters and 54 papers in peer-reviewed proceedings of international conferences on diverse areas such as evolutionary algorithms, multi-objective optimization, and various combinatorial optimization problems. His current research interests are experimental analysis and the automatic configuration and design of stochastic optimization algorithms, for single and multi-objective problems. He is the lead developer and current maintainer of the irace software package for automatic algorithm configuration (http://iridia.ulb.ac.be/irace).
Thomas Stützle
Thomas Stützle is a research director of the Belgian F.R.S.-FNRS working at the IRIDIA laboratory of Université libre de Bruxelles (ULB), Belgium. He received his PhD and his habilitation in computer science, both from the Computer Science Department of Technische Universität Darmstadt, Germany, in 1998 and 2004, respectively. He is the co-author of two books, "Stochastic Local Search: Foundations and Applications" and "Ant Colony Optimization", and he has published extensively in the wider area of metaheuristics, including 22 edited proceedings or books, 11 journal special issues, and more than 250 journal and conference articles and book chapters, many of which are highly cited. He is associate editor of Computational Intelligence, Evolutionary Computation and Applied Mathematics and Computation and on the editorial board of seven other journals. His main research interests are in metaheuristics, swarm intelligence, methodologies for engineering stochastic local search algorithms, multi-objective optimization, and automatic algorithm configuration. In fact, for more than a decade he has been interested in automatic algorithm configuration and design methodologies, and he has contributed to some effective algorithm configuration techniques such as F-race, Iterated F-race and ParamILS.
Leslie Pérez Cáceres
Dr. Pérez Cáceres is a Lecturer (Profesora Asociada) at Pontificia Universidad Católica de Valparaíso, Chile. She received the M.S. degree in computer science in 2011 from the Universidad Técnica Federico Santa María, Chile, and the Ph.D. degree from the Université Libre de Bruxelles (ULB) in 2017. She is a contributor to the development of the irace software package (http://iridia.ulb.ac.be/irace), a state-of-the-art configurator. Her research interests are in automatic algorithm configuration, metaheuristics, and combinatorial optimization.
Bayesian Optimization
One of the strengths of evolutionary algorithms (EAs) is that they can be applied to black-box optimisation problems. For the sub-class of low-dimensional continuous black-box problems that are expensive to evaluate, Bayesian optimization (BO) has become a very popular alternative. BO has applications ranging from hyperparameter tuning of deep learning models and design optimization in engineering to stochastic optimization in operational research.
Bayesian optimization builds a probabilistic surrogate model, usually a Gaussian process, based on all previous evaluations. Gaussian processes not only predict the quality of new solutions but also approximate the uncertainty around the prediction. This information is then used to decide which solution to evaluate next, explicitly trading off exploration (high uncertainty) and exploitation (high quality). This trade-off is modeled by the acquisition function, which quantifies how ‘interesting’ a solution is to evaluate.
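The following is a minimal sketch of this loop for a one-dimensional toy problem, assuming scikit-learn's Gaussian process regressor and an expected-improvement acquisition that is maximized here over a simple candidate grid (in practice the acquisition is often optimized with an evolutionary algorithm, as noted below):

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def f(x):                        # expensive black-box function (toy stand-in)
    return np.sin(3 * x) + 0.3 * x**2

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(5, 1))             # small initial design
y = f(X).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
candidates = np.linspace(-2, 2, 500).reshape(-1, 1)

for _ in range(20):                              # 20 further expensive evaluations
    gp.fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    # Expected improvement (minimization): prefers low predicted mean,
    # but also rewards high predictive uncertainty.
    best = y.min()
    z = (best - mu) / np.maximum(sigma, 1e-12)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = candidates[np.argmax(ei)].reshape(1, -1)
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next).ravel())

print("best x:", X[np.argmin(y)].item(), "best f:", y.min())
```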
Besides both being successful black-box and derivative-free optimizers, EAs and BO have more similarities. They can both handle multiple objectives and noise. EAs have been enhanced with surrogate models (including Gaussian processes) to better handle expensive function evaluations, and EAs are often used within BO to optimize the acquisition function.
The tutorial will introduce the general BO framework for black-box optimisation with and without noise, specifically highlighting the similarities and differences to evolutionary computation. We will explain the most commonly used acquisition functions and how they can be optimised using, e.g., evolutionary algorithms. Furthermore, we will cover multiobjective Bayesian optimisation, with a particular focus on noise handling strategies, and give examples of practical applications such as simulation-optimisation and hyperparameter optimisation.
Sebastian Rojas Gonzalez
Sebastian Rojas Gonzalez is currently a post-doctoral research fellow at the Surrogate Modeling Lab at Ghent University, Belgium. Until 2020 he worked at the Department of Decision Sciences of KU Leuven, Belgium, on a PhD in multiobjective simulation-optimisation. He previously obtained an M.Sc. in Applied Mathematics from the Harbin Institute of Technology in Harbin, China, and a master's degree in Industrial Systems Engineering from the Costa Rica Institute of Technology in 2012. Since 2016 his research work has centered on developing stochastic optimization algorithms assisted by machine learning techniques for multi-criteria decision making.
Jürgen Branke
Jürgen Branke is Professor of Operational Research and Systems at Warwick Business School (UK). His main research interests are the adaptation of metaheuristics to problems under uncertainty (including optimization in stochastic and dynamic environments) as well as evolutionary multiobjective optimization. Prof. Branke has been an active researcher in evolutionary computation for 25 years, and has published more than 200 papers in international peer-reviewed journals and conferences. He is editor of the ACM Transactions on Evolutionary Learning and Optimization, area editor of the Journal of Heuristics and associate editor of the IEEE Transactions on Evolutionary Computation and the Evolutionary Computation Journal.
Ivo Couckuyt
Ivo Couckuyt received an M.Sc. degree in Computer Science from the University of Antwerp in 2007. In 2013 he finished a PhD degree at the Internet Technology and Data Science Lab (IDLab) at Ghent University, Belgium. He worked at Ghent University – imec as a post-doctoral research fellow until 2021. Since September 2021, he has been an associate professor in the IDLab. His main research interests are automation in machine learning, data analytics, data-efficient machine learning and surrogate modeling algorithms.
Benchmarking and analyzing iterative optimization heuristics with IOHprofiler
Evaluating sampling-based optimization algorithms via sound benchmarking is a core concern in evolutionary computation. IOHprofiler supports researchers in this task by providing an easy-to-use, interactive, and highly customizable environment for benchmarking any iterative optimizer. The experimenter module provides easy access to common problem sets (e.g. the BBOB functions) and modular logging functionality that can easily be combined with one's own optimization algorithms. The resulting logs (and logs from other platforms, e.g. COCO and Nevergrad) are fully interoperable with the IOHanalyzer, which provides access to highly interactive performance analysis in the form of a wide array of visualizations and statistical analyses. A GUI, hosted at https://iohanalyzer.liacs.nl/, makes these analysis tools easy to access. Data from many repositories (e.g. COCO, Nevergrad) are pre-processed, such that the effort required to compare performance to existing algorithms is greatly reduced.
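Conceptually, the experimenter's logging boils down to wrapping the objective function and recording best-so-far values per evaluation; the sketch below illustrates this idea only and is not the actual IOHprofiler API:

```python
class EvaluationLogger:
    """Conceptual stand-in for the kind of logger the experimenter module
    attaches to a problem: it records (evaluation count, best-so-far value)
    pairs that an analysis tool can later turn into runtime distributions."""

    def __init__(self, objective):
        self.objective = objective
        self.evaluations = 0
        self.best = float("inf")
        self.history = []          # (evaluation number, best-so-far f-value)

    def __call__(self, x):
        value = self.objective(x)
        self.evaluations += 1
        if value < self.best:
            self.best = value
            self.history.append((self.evaluations, value))
        return value


if __name__ == "__main__":
    import random

    logged_sphere = EvaluationLogger(lambda x: sum(xi * xi for xi in x))
    # Any iterative optimizer can now be run on `logged_sphere`; here, random search.
    for _ in range(1000):
        logged_sphere([random.uniform(-5, 5) for _ in range(10)])
    print(logged_sphere.history[-5:])
```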
This tutorial will introduce the motivation of the IOHprofiler project and its core functionality. The key features and components will be highlighted and demonstrated by the organizers. Afterwards, there will be a hands-on session where the usage of the various analysis functionalities is demonstrated. Guided examples will be provided to highlight the many aspects of algorithm performance which can be explored using the interactive GUI. For this session, no software installation is necessary.
The tutorial addresses all GECCO participants interested in comparing the empirical performance of optimization heuristics — with or without programming experience.
IOHprofiler is fully open-source, available at https://github.com/IOHprofiler and its documentation can be found at https://iohprofiler.github.io/.
Carola Doerr
Carola Doerr, formerly Winzen, is a permanent CNRS researcher at Sorbonne University in Paris, France. Carola's main research activities are in the analysis of black-box optimization algorithms, both by mathematical and by empirical means. Carola is regularly involved in the organization of events around evolutionary computation and related topics, for example as program chair for PPSN 2020, FOGA 2019 and the theory tracks of GECCO 2015 and 2017, as guest editor for IEEE Transactions on Evolutionary Computation and Algorithmica, as organizer of Dagstuhl seminars and Lorentz Center workshops. Carola is an associate editor of ACM Transactions on Evolutionary Learning and Optimization (TELO) and board member of the Evolutionary Computation journal. Her works have received several awards, among them the Otto Hahn Medal of the Max Planck Society, best paper awards at EvoApplications and CEC, and four best paper awards at GECCO.
Hao Wang
Hao Wang obtained his PhD (cum laude) from Leiden University in 2018. He is currently an assistant professor of computer science at Leiden University. Previously, he had a research stay at Sorbonne University, France. He received the Best Paper Award at the PPSN 2016 conference and was a best paper award finalist at the IEEE SMC 2017 conference. His research interests are in the analysis and improvement of efficient global optimization for mixed-continuous search spaces, evolution strategies, Bayesian optimization, and benchmarking.
Diederick Vermetten
Diederick Vermetten is a PhD student at LIACS. He is part of the core development team of IOHprofiler, with a focus on the IOHanalyzer. His research interests include benchmarking of optimization heuristics, dynamic algorithm selection and configuration as well as hyperparameter optimization.
Thomas Bäck
Thomas Bäck is Full Professor of Computer Science at the Leiden Institute of Advanced Computer Science (LIACS), Leiden University, The Netherlands, where he is head of the Natural Computing group since 2002. He received his PhD (adviser: Hans-Paul Schwefel) in computer science from Dortmund University, Germany, in 1994, and then worked for the Informatik Centrum Dortmund (ICD) as department leader of the Center for Applied Systems Analysis. From 2000 - 2009, Thomas was Managing Director of NuTech Solutions GmbH and CTO of NuTech Solutions, Inc. He gained ample experience in solving real-life problems in optimization and data mining through working with global enterprises such as BMW, Beiersdorf, Daimler, Ford, Honda, and many others. Thomas Bäck has more than 350 publications on natural computing, as well as two books on evolutionary algorithms: Evolutionary Algorithms in Theory and Practice (1996), Contemporary Evolution Strategies (2013). He is co-editor of the Handbook of Evolutionary Computation, and most recently, the Handbook of Natural Computing. He is also editorial board member and associate editor of a number of journals on evolutionary and natural computing. Thomas received the best dissertation award from the German Society of Computer Science (Gesellschaft für Informatik, GI) in 1995 and the IEEE Computational Intelligence Society Evolutionary Computation Pioneer Award in 2015.
Jacob de Nobel
Jacob de Nobel is a PhD student at LIACS, and is currently one of the core developers of the IOHexperimenter. His research concerns the real-world application of optimization algorithms for finding better speech encoding strategies for cochlear implants, which are neuroprostheses for people with profound hearing loss.
Furong Ye
Furong Ye is a PhD student at LIACS. He is part of the core development team of IOHprofiler, with a focus on the IOHexperimenter. His PhD topic is benchmarking discrete optimization heuristics, from building a sound experimental environment to algorithm configuration. His research interests are empirical analysis of algorithm performance and (dynamic) algorithm configuration.
Benchmarking Multiobjective Optimizers 2.0
Benchmarking is an important part of algorithm design, selection and recommendation---both in single-objective and multiobjective optimization. Benchmarking multiobjective solvers seems at first sight more complicated than benchmarking single-objective ones as there exists no natural total order on the objective space. In the past, comparisons of multiobjective solvers have therefore been done either entirely visually (at first) or via quality indicators and the attainment function. Only very recently did we realize that the quality indicator approach transforms a multiobjective problem into a single-objective (set-based) problem and thus all recent progress from the rapidly evolving single-objective benchmarking field can be transferred to the multiobjective case as well. Moreover, many multiobjective test functions have been proposed in the past but not much has changed in the last 15 or so years in terms of addressing the disadvantages of those problems (like Pareto sets on constraint boundaries, usage of distance variables, etc.).
In this tutorial, we will discuss the past and future of benchmarking multiobjective optimizers. In particular, we will discuss the new view on benchmarking multiobjective algorithms by falling back on single-objective comparisons and thus being able to use all methodologies and tools from the single-objective domain such as empirical distributions of runtimes. We will also discuss the advantages and drawbacks of some widely used multiobjective test suites and explain how we can do better, going back to the roots of what a multi-objective problem is in practice, namely the simultaneous optimization of multiple objective functions. We will also discuss recent advances in the visualization of (multiobjective) problem landscapes and compare the previous and newly proposed benchmark problems in the context of those landscape visualizations.
Dimo Brockhoff
Dimo Brockhoff received his diploma in computer science from University of Dortmund, Germany in 2005 and his PhD (Dr. sc. ETH) from ETH Zurich, Switzerland in 2009. After two postdocs at Inria Saclay Ile-de-France (2009-2010) and at Ecole Polytechnique (2010-2011), he joined Inria in November 2011 as a permanent researcher (first in its Lille - Nord Europe research center and since October 2016 in the Saclay - Ile-de-France one). His research interests are focused on evolutionary multiobjective optimization (EMO), in particular on theoretical aspects of indicator-based search and on the benchmarking of blackbox algorithms in general. Dimo has co-organized all BBOB workshops since 2013 and was EMO track co-chair at GECCO'2013 and GECCO'2014.
Tea Tušar
Tea Tušar is a research fellow at the Department of Intelligent Systems of the Jozef Stefan Institute in Ljubljana, Slovenia. She was awarded the PhD degree in Information and Communication Technologies by the Jozef Stefan International Postgraduate School for her work on visualizing solution sets in multiobjective optimization. She has completed a one-year postdoctoral fellowship at Inria Lille in France where she worked on benchmarking multiobjective optimizers. Her research interests include evolutionary algorithms for single-objective and multiobjective optimization with emphasis on visualizing and benchmarking their results and applying them to real-world problems.
CMA-ES and Advanced Adaptation Mechanisms
The Covariance-Matrix-Adaptation Evolution Strategy (CMA-ES) is nowadays considered the state-of-the-art continuous stochastic search algorithm, in particular for optimization of non-separable, ill-conditioned and rugged functions. The CMA-ES consists of different components that adapt the step-size and the covariance matrix separately.
This tutorial will focus on CMA-ES and provide the audience with an overview of the different adaptation mechanisms used within CMA-ES and the scenarios where their relative impact is important. We will in particular present the rank-one update, rank-mu update, and active CMA for the covariance matrix adaptation. Different step-size mechanisms (CSA and TPA) as well as the scenario where they should be applied will be discussed.
We will address important design principles as well as questions related to parameter tuning that always accompany algorithm design. The input parameters such as the initial mean, the initial step-size, and the population size will be discussed in relation with the ruggedness of functions. Restart strategies that automatize the input parameter tuning will be presented.
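For participants who want to experiment beforehand, a typical ask-and-tell loop looks as follows, assuming the widely used pycma package (pip install cma); the initial mean, initial step-size and population size correspond to the input parameters discussed above:

```python
import cma  # pycma package: pip install cma (assumed available)

def rosenbrock(x):
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))

# Initial mean, initial step-size and population size are the main inputs.
es = cma.CMAEvolutionStrategy(x0=[0.0] * 10, sigma0=0.5, inopts={"popsize": 16})

while not es.stop():
    candidates = es.ask()                                     # sample candidate solutions
    es.tell(candidates, [rosenbrock(x) for x in candidates])  # adapt mean, step-size and C
    es.disp()                                                 # occasional progress output

print("best solution found:", es.result.xbest)
print("best objective value:", es.result.fbest)
```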
Youhei Akimoto
Youhei Akimoto is an associate professor at University of Tsukuba, Japan. He received the B.S. degree in computer science in 2007, and the M.S. and Ph.D. degrees in computational intelligence and systems science from Tokyo Institute of Technology, Japan, in 2008 and 2011, respectively. From 2010 to 2011, he was a Research Fellow of JSPS in Japan, and from 2011 to 2013, he was a Post-Doctoral Research Fellow with INRIA in France. From 2013 to 2018, he was an Assistant Professor with Shinshu University, Japan. Since 2018, he has been an Associate Professor with University of Tsukuba, Japan as well as a Visiting Researcher with the Center for Advanced Intelligence Projects, RIKEN. He served as a Track Chair for the continuous optimization track of GECCO in 2015 and 2016. He is an Associate Editor of ACM TELO and is on the editorial board of the ECJ. He won the Best Paper Award at GECCO 2018 and FOGA 2019. His research interests include design principles, theoretical analyses, and applications of stochastic search heuristics.
Nikolaus Hansen
Nikolaus Hansen is a research director at Inria, France. Educated in medicine and mathematics, he received a Ph.D. in civil engineering in 1998 from the Technical University Berlin under Ingo Rechenberg. Before joining Inria, he worked on evolutionary computation, genomics and computational science at the Technical University Berlin, the Institute of Genetic Medicine and ETH Zurich. His main research interests are learning and adaptation in evolutionary computation and the development of algorithms applicable in practice. His best-known contribution to the field of evolutionary computation is the so-called Covariance Matrix Adaptation (CMA).
Coevolutionary Computation for Adversarial Deep Learning
In recent years, machine learning with Generative Adversarial Networks (GANs) has been recognized as a powerful method for generative modeling. Generative modeling is the problem of estimating the underlying distribution of a set of samples. GANs accomplish this using unsupervised learning. They have also been extended to handle semi-supervised and fully supervised learning paradigms. GANs have been successfully applied to many domains. They can generate novel images (e.g., image colorization or super-resolution, photograph editing, and text-to-image translation), sound (e.g., voice translation and music generation), and video (e.g., video-to-video translation, deep fakes generation, and AI-assisted video calls), finding application in domains of multimedia information, engineering, science, design, art, and games.
GANs follow an adversarial paradigm: two neural networks compete with each other, using antagonistic loss functions to train their parameters with gradient descent. This connects them to evolution, because evolution also exhibits adversarial engagements and competitive coevolution. In fact, the evolutionary computation community's study of coevolutionary pathologies and its work on competitive and cooperative coevolutionary algorithms offers a means of solving convergence impasses often encountered in GAN training.
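To make the coevolutionary idea concrete, the following deliberately abstract sketch shows two populations whose fitness values are derived from pairwise competitions; in GAN training the two populations would hold generator and discriminator parameters and the placeholder payoff function would be replaced by the adversarial loss (this is an illustrative skeleton, not the Lipizzaner algorithm):

```python
import random

def payoff(generator, discriminator):
    """Placeholder adversarial game: in GAN training this would be the
    (negated) GAN loss of `generator` against `discriminator`."""
    return -(generator - discriminator) ** 2   # toy zero-sum game on scalars

def select(population, fitness, keep=0.5):
    # Truncation selection: keep the best half.
    ranked = [x for _, x in sorted(zip(fitness, population), reverse=True)]
    return ranked[: max(1, int(len(population) * keep))]

def mutate(survivors, sigma):
    # Each survivor produces one Gaussian-mutated child.
    return survivors + [x + random.gauss(0, sigma) for x in survivors]

def coevolve(pop_size=10, generations=50, sample_size=3, sigma=0.3):
    generators = [random.uniform(-1, 1) for _ in range(pop_size)]
    discriminators = [random.uniform(-1, 1) for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness of each generator: average payoff against sampled opponents.
        opponents = random.sample(discriminators, sample_size)
        g_fit = [sum(payoff(g, d) for d in opponents) / sample_size for g in generators]
        # Fitness of each discriminator: negative of the generators' payoff (zero-sum).
        challengers = random.sample(generators, sample_size)
        d_fit = [-sum(payoff(g, d) for g in challengers) / sample_size
                 for d in discriminators]
        generators = mutate(select(generators, g_fit), sigma)
        discriminators = mutate(select(discriminators, d_fit), sigma)
    return generators, discriminators

if __name__ == "__main__":
    gens, discs = coevolve()
    print("example evolved generator parameters:", [round(g, 2) for g in gens[:3]])
```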
In this tutorial we will explain:
- The main concepts of generative modeling and adversarial learning.
- GAN gradient-based training and the main pathologies that prevent ideal convergence. Specifically, we will explain mode collapse, oscillation, and vanishing gradients.
- Coevolutionary algorithms and how they can be applied to train GANs. Specifically, we will explain how algorithm enhancements address non-ideal convergence.
- To demonstrate, we will draw upon the open-source Lipizzaner framework (url: http://lipizzaner.csail.mit.edu/). This framework is easy to use and extend. It sets up a spatial grid of communicating populations of GANs.
- Students will be given the opportunity to set up and use the Lipizzaner framework during the tutorial by means of a Jupyter notebook expressly developed for teaching purposes.
Jamal Toutouh
I am a Marie Skłodowska-Curie Postdoctoral Fellow at the Massachusetts Institute of Technology (MIT) in the USA, at the MIT CSAIL Lab. I obtained my Ph.D. in Computer Engineering at the University of Malaga (Spain). The dissertation, Natural Computing for Vehicular Networks, was awarded the 2018 Best Spanish Ph.D. Thesis in Smart Cities. My dissertation focused on the application of Machine Learning methods inspired by Nature to address Smart Mobility problems.
My current research explores the combination of Nature-inspired gradient-free and gradient-based methods to address Adversarial Machine Learning. The main idea is to devise new algorithms to improve the efficiency and efficacy of the state-of-the-art methodology by mainly applying co-evolutionary approaches. Besides, I am working on the application of Machine Learning to address problems related to Smart Mobility, Smart Cities, and Climate Change.
Una-May O'Reilly
Una-May O'Reilly is the leader of the AnyScale Learning For All (ALFA) group at MIT CSAIL. ALFA focuses on evolutionary algorithms, machine learning, and frameworks for large-scale knowledge mining, prediction and analytics. The group has projects in cyber security using coevolutionary algorithms to explore adversarial dynamics in networks and malware detection. Una-May received the EvoStar Award for Outstanding Achievements in Evolutionary Computation in Europe in 2013. She is a Junior Fellow (elected before age 40) of the International Society of Genetic and Evolutionary Computation, which has evolved into ACM SIGEVO. She now serves as Vice-Chair of ACM SIGEVO. She served as chair of the largest international Evolutionary Computation Conference, GECCO, in 2005.
Constraint-Handling Techniques used with Evolutionary Algorithms
Evolutionary Algorithms (EAs), when used for global optimization, can be seen as unconstrained optimization techniques. Therefore, they require an additional mechanism to incorporate constraints of any kind (i.e., inequality, equality, linear, nonlinear) into their fitness function. Although the use of penalty functions (very popular with mathematical programming techniques) may seem an obvious choice, this sort of approach requires a careful fine-tuning of the penalty factors to be used. Otherwise, an EA may be unable to reach the feasible region (if the penalty is too low) or may quickly reach the feasible region but be unable to locate solutions that lie on the boundary with the infeasible region (if the penalty is too severe).
This has motivated the development of a number of approaches to incorporate constraints into the fitness function of an EA. This tutorial will cover the main proposals in current use, including novel approaches such as the use of tournament rules based on feasibility, multiobjective optimization concepts, hybrids with mathematical programming techniques (e.g., Lagrange multipliers), cultural algorithms, and artificial immune systems, among others. Other topics such as the importance of maintaining diversity, current benchmarks and the use of alternative search engines (e.g., particle swarm optimization, differential evolution, evolution strategies, etc.) will also be discussed (as time allows).
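As a small illustration of the penalty-factor sensitivity described at the beginning of this abstract (a generic sketch, not one of the specific techniques covered in the tutorial):

```python
def penalized_fitness(f, constraints, penalty_factor):
    """Return a fitness that adds `penalty_factor` times the total constraint
    violation to the objective (minimization, constraints of the form g(x) <= 0)."""
    def fitness(x):
        violation = sum(max(0.0, g(x)) for g in constraints)
        return f(x) + penalty_factor * violation
    return fitness

# Toy problem: minimize the sphere subject to x0 + x1 >= 1 (i.e. 1 - x0 - x1 <= 0).
sphere = lambda x: sum(xi * xi for xi in x)
constraints = [lambda x: 1.0 - x[0] - x[1]]

x_infeasible, x_boundary = [0.0, 0.0], [0.5, 0.5]
for factor in (0.01, 1.0, 1000.0):
    fit = penalized_fitness(sphere, constraints, factor)
    # With a tiny factor the infeasible point looks better than the constrained
    # optimum; with a very large factor even slightly infeasible points are
    # heavily penalized, which hinders approaching an optimum on the boundary.
    print(factor, round(fit(x_infeasible), 3), round(fit(x_boundary), 3))
```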
Carlos Coello Coello
Carlos Artemio Coello Coello received a PhD in Computer Science from Tulane University (USA) in 1996. His research has mainly focused on the design of new multi-objective optimization algorithms based on bio-inspired metaheuristics, which is an area in which he has made pioneering contributions. He currently has over 500 publications which, according to Google Scholar, report over 58,800 citations (with an h-index of 96). He has received several awards, including the National Research Award (in 2007) from the Mexican Academy of Science (in the area of exact sciences), the 2009 Medal to the Scientific Merit from Mexico City's congress, the Ciudad Capital: Heberto Castillo 2011 Award for scientists under the age of 45, in Basic Science, the 2012 Scopus Award (Mexico's edition) for being the most highly cited scientist in engineering in the 5 years previous to the award, and the 2012 National Medal of Science in Physics, Mathematics and Natural Sciences from Mexico's presidency (this is the most important award that a scientist can receive in Mexico). He also received the Luis Elizondo Award from the Instituto Tecnológico de Monterrey in 2019. He is the recipient of the prestigious 2013 IEEE Kiyo Tomiyasu Award, "for pioneering contributions to single- and multiobjective optimization techniques using bioinspired metaheuristics", and of the 2016 The World Academy of Sciences (TWAS) Award in "Engineering Sciences". Since January 2011, he has been an IEEE Fellow. He is also the recipient of the 2021 IEEE Computational Intelligence Society Evolutionary Computation Pioneer Award. He is also Associate Editor of several international journals including Evolutionary Computation and the IEEE Transactions on Emerging Topics in Computational Intelligence. Since January 2021, he has been the Editor-in-Chief of the IEEE Transactions on Evolutionary Computation. He is currently Full Professor with distinction at the Computer Science Department of CINVESTAV-IPN in Mexico City, Mexico.
Decomposition Multi-Objective Optimization: Current Developments and Future Opportunities
Evolutionary multi-objective optimization (EMO) has been a major research topic in the field of evolutionary computation for many years. It is generally accepted that the combination of evolutionary algorithms and traditional optimization methods is a promising route towards next-generation multi-objective optimization solvers. As the name suggests, the basic idea of the decomposition-based technique is to transform the original complex problem into simplified subproblem(s) so as to facilitate the optimization. Decomposition methods have been well used and studied in traditional multi-objective optimization. MOEA/D, the multi-objective evolutionary algorithm based on decomposition, decomposes a multi-objective problem into a number of subproblems and then solves them in a collaborative manner. MOEA/D provides a very natural bridge between multi-objective evolutionary algorithms and traditional decomposition methods, and it has become a commonly used evolutionary algorithmic framework in recent years.
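For readers unfamiliar with the decomposition step, the sketch below shows how a weight vector turns an objective vector into the scalar value of one subproblem, using the weighted-sum and Tchebycheff approaches commonly associated with MOEA/D (an illustration of the scalarizations only, not a full MOEA/D implementation):

```python
import numpy as np

def weighted_sum(objectives, weights):
    """Scalarize a vector of objective values with a weight vector."""
    return float(np.dot(weights, objectives))

def tchebycheff(objectives, weights, ideal):
    """Tchebycheff scalarization: worst weighted distance to the ideal point."""
    return float(np.max(weights * np.abs(np.asarray(objectives) - ideal)))

# A set of uniformly spread weight vectors defines the subproblems.
n_subproblems = 5
weights = np.array([[w, 1.0 - w] for w in np.linspace(0.0, 1.0, n_subproblems)])
ideal = np.array([0.0, 0.0])

candidate = np.array([0.3, 0.8])   # objective vector of some candidate solution
for w in weights:
    print(w, round(weighted_sum(candidate, w), 3),
          round(tchebycheff(candidate, w, ideal), 3))
```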
Within this tutorial, a comprehensive introduction to MOEA/D will be given and selected research results will be presented in more detail. More specifically, we are going to (i) introduce the basic principles of MOEA/D in comparison with the other two state-of-the-art EMO frameworks, i.e., the Pareto- and indicator-based frameworks; (ii) present a general overview of state-of-the-art MOEA/D variants and their applications; and (iii) discuss future opportunities for possible further developments.
Ke Li
Ke Li is a UKRI Future Leaders Fellow, a Turing Fellow, and a Senior Lecturer of Computer Science at the University of Exeter (UoE). He was the founding chair of IEEE Computational Intelligence Society (CIS) Task Force 12 from 2017 to 2021, an international consortium that brings together global researchers to promote research on decomposition-based techniques in CI. Related activities include workshops and special sessions associated with major conferences in EC (GECCO, CEC and PPSN) and webinars since 2018. He was a Publication Chair of EMO 2021, a dedicated conference on evolutionary multi-objective optimization and multi-criterion decision-making. Moreover, he has been an academic mentor of early career researchers (ECRs) in IEEE CIS since 2020. In 2021, he co-chaired a summer school themed on "Data-Driven AI/CI: Theory and Applications", sponsored by the IEEE Computational Intelligence Society. This summer school aims to i) provide a unique opportunity for ECRs from around the world to learn about state-of-the-art AI/CI techniques and applications; ii) interact with world-renowned high-profile experts; and iii) inspire new ideas and collaborations. Dr Li has been serving as an Associate Editor of four academic journals including IEEE Trans. Evol. Comput. Moreover, he was a guest editor of a special issue on semantic computing and personalization in the Neurocomputing journal and a special issue on advances of AI in visual systems in the Multimedia Tools and Applications journal.
Qingfu Zhang
Qingfu Zhang is a Professor at the Department of Computer Science, City University of Hong Kong. His main research interests include evolutionary computation, optimization, neural networks, data analysis, and their applications. He is currently leading the Metaheuristic Optimization Research (MOP) Group at City University of Hong Kong. Professor Zhang is an Associate Editor of the IEEE Transactions on Evolutionary Computation and the IEEE Transactions on Cybernetics. He was awarded the 2010 IEEE Transactions on Evolutionary Computation Outstanding Paper Award. He is on the list of the Thomson Reuters 2016 and 2017 highly cited researchers in computer science. He is an IEEE Fellow.
Difficulties in Fair Performance Comparison of Multiobjective Evolutionary Algorithms
The field of evolutionary multi-objective optimization (EMO) has been very active in recent years. Various EMO algorithms are proposed every year. The performance of a newly designed algorithm is usually evaluated by computational experiments in comparison with existing algorithms. However, fair comparison of different EMO algorithms is not easy since the evaluated performance of each algorithm usually depends on the experimental setting. This is also because solution sets, rather than single solutions, are evaluated.
In this tutorial, we will explain and discuss various difficulties in fair performance comparison of EMO algorithms related to the following four issues: (i) the termination condition of each algorithm, (ii) the population size of each algorithm, (iii) performance indicators, and (iv) test problems. For each issue, we clearly demonstrate its strong effects on comparison results. Our discussion of these difficulties is intended to encourage the future development of the EMO research field without excessive focus on proposing overly specialized new algorithms for a specific setting, since such algorithms are unlikely to work well across a variety of real-world tasks. We will then discuss how to handle each issue for fair comparison, and we will also suggest some promising future research topics related to each issue.
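As a small example of issue (iii), the widely used hypervolume indicator depends on the chosen reference point, so seemingly minor choices in the experimental setup can change comparison results; a minimal two-objective (minimization) computation is sketched below:

```python
def hypervolume_2d(solution_set, reference):
    """Hypervolume (area) dominated by a 2-D solution set with respect to a
    reference point, assuming minimization of both objectives."""
    # Keep only points that dominate the reference point, sorted by f1.
    points = sorted(p for p in solution_set
                    if p[0] < reference[0] and p[1] < reference[1])
    volume, prev_f2 = 0.0, reference[1]
    for f1, f2 in points:
        if f2 < prev_f2:                       # skip dominated points
            volume += (reference[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return volume

front = [(0.1, 0.9), (0.4, 0.5), (0.8, 0.2)]
# The same solution set receives very different indicator values
# depending on the reference point.
for ref in [(1.0, 1.0), (2.0, 2.0), (10.0, 10.0)]:
    print(ref, hypervolume_2d(front, ref))
```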
Hisao Ishibuchi
Hisao Ishibuchi is a Chair Professor at Southern University of Science and Technology, China. He was the IEEE Computational Intelligence Society (CIS) Vice-President for Technical Activities in 2010-2013 and the Editor-in-Chief of the IEEE Computational Intelligence Magazine in 2014-2019. Currently he is an IEEE CIS Administrative Committee Member (2014-2019, 2021-2023), an IEEE CIS Distinguished Lecturer (2015-2017, 2021-2023), and an Associate Editor of several journals such as IEEE Trans. on Evolutionary Computation, IEEE Trans. on Cybernetics, and IEEE Access. He was also the General Chair of IEEE WCCI 2014. His research on evolutionary multi-objective optimization received an Outstanding Paper Award from IEEE Trans. on Evolutionary Computation in 2020, and Best Paper Awards from GECCO 2004, 2017, 2018, 2020, 2021 and EMO 2019.
Lie Meng Pang
Lie Meng Pang received her Bachelor of Engineering degree in Electronic and Telecommunication Engineering and Ph.D. degree in Electronic Engineering from the Faculty of Engineering, Universiti Malaysia Sarawak, Malaysia, in 2012 and 2018, respectively. She is currently a research associate with the Department of Computer Science and Engineering, Southern University of Science and Technology (SUSTech), China. Her current research interests include evolutionary multi-objective optimization and fuzzy systems.
Ke Shang
Ke Shang received the B.S. and Ph.D. degrees from Xi'an Jiaotong University, China, in 2009 and 2016, respectively. He is currently a research assistant professor at Southern University of Science and Technology, China. His current research interests include evolutionary multi-objective optimization and its applications. He received the GECCO 2018 Best Paper Award, the CEC 2019 First Runner-up Conference Paper Award, the GECCO 2021 Best Paper Award, and a best paper nomination at PPSN 2020.
Embedding Knowledge into the Optimization Process
Real-world optimization problems are usually large-scale and involve several constraints, and sometimes even finding a single feasible/acceptable solution is a challenging task. To solve these complex real-world problems, heuristics and concept-based approaches can be very helpful and can narrow down the search space. Here, I am going to talk about four approaches for incorporating knowledge into the problem and the optimization process, listed below:
1) Semi-Independent Variable
2) Boundary update
3) Variable functioning
4) Variable grouping and co-evolution
These four approaches have been coupled with several evolutionary optimization algorithms, and the results show that they are practical and effective, leading to better solutions with fewer function evaluations in most cases. This tutorial should motivate optimization researchers and practitioners to pay more attention to embedding different sources of knowledge into the optimization process to boost its performance.
Amir H Gandomi
Amir H. Gandomi is a Professor of Data Science and an ARC DECRA Fellow at the Faculty of Engineering & Information Technology, University of Technology Sydney. Prior to joining UTS, Prof. Gandomi was an Assistant Professor at Stevens Institute of Technology, USA, and a distinguished research fellow at the BEACON center, Michigan State University, USA. Prof. Gandomi has published over two hundred journal papers and seven books which collectively have been cited 24,000+ times (H-index = 71). He has been named one of the most influential scientific minds and a Highly Cited Researcher (top 1% of publications and 0.1% of researchers) for four consecutive years, 2017 to 2021. He also ranked 18th in the GP bibliography among more than 12,000 researchers. He has served as associate editor, editor and guest editor for several prestigious journals, including as an associate editor of IEEE TBD and IEEE IoTJ. Prof. Gandomi is active in delivering keynotes and invited talks. His research interests are global optimisation and (big) data analytics using machine learning and evolutionary computation in particular.
Evolution of Neural Networks
Evolution of artificial neural networks has recently emerged as a powerful technique in two areas. First, while the standard value-function based reinforcement learning works well when the environment is fully observable, neuroevolution makes it possible to disambiguate hidden state through memory. Such memory makes new applications possible in areas such as robotic control, game playing, and artificial life. Second, deep learning performance depends crucially on the network design, i.e. its architecture, hyperparameters, and other elements. While many such designs are too complex to be optimized by hand, neuroevolution can be used to do so automatically. Such evolutionary AutoML can be used to achieve good deep learning performance even with limited resources, or state-of-the art performance with more effort. It is also possible to optimize other aspects of the architecture, like its size, speed, or fit with hardware. In this tutorial, I will review (1) neuroevolution methods that evolve fixed-topology networks, network topologies, and network construction processes, (2) methods for neural architecture search and evolutionary AutoML, and (3) applications of these techniques in control, robotics, artificial life, games, image processing, and language.
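As a minimal illustration of the first family of methods, the sketch below evolves the weights of a fixed-topology network with a simple (1+lambda)-style strategy on the XOR task (a toy example, not one of the specific algorithms covered in the tutorial):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def forward(weights, x):
    """Fixed topology: 2 inputs -> 3 hidden units (tanh) -> 1 output (sigmoid)."""
    w1, b1, w2, b2 = weights
    h = np.tanh(x @ w1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))

def fitness(weights):
    predictions = np.array([forward(weights, x) for x in X])
    return -np.mean((predictions - y) ** 2)      # higher is better

def random_weights():
    return [rng.normal(0, 1, (2, 3)), rng.normal(0, 1, 3),
            rng.normal(0, 1, 3), rng.normal(0, 1)]

def mutate(weights, sigma=0.2):
    return [w + rng.normal(0, sigma, np.shape(w)) for w in weights]

# (1 + lambda) neuroevolution: keep the parent unless a child is at least as good.
parent = random_weights()
for generation in range(300):
    children = [mutate(parent) for _ in range(10)]
    best_child = max(children, key=fitness)
    if fitness(best_child) >= fitness(parent):
        parent = best_child

print("final MSE on XOR:", round(-fitness(parent), 4))
```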
Risto Miikkulainen
Risto Miikkulainen is a Professor of Computer Science at the University of Texas at Austin and Associate VP of Evolutionary AI at Cognizant. He received an M.S. in Engineering from Helsinki University of Technology (now Aalto University) in 1986, and a Ph.D. in Computer Science from UCLA in 1990. His current research focuses on methods and applications of neuroevolution, as well as neural network models of natural language processing and vision; he is an author of over 450 articles in these research areas. At Cognizant, and previously as CTO of Sentient Technologies, he is scaling up these approaches to real-world problems. Risto is an IEEE Fellow, recipient of the IEEE CIS EC Pioneer Award, INNS Gabor Award, ISAL Outstanding Paper of the Decade Award, as well as 10 Best-Paper Awards at GECCO.
Evolutionary Computation and Evolutionary Deep Learning for Image Analysis, Signal Processing and Pattern Recognition
The intertwining disciplines of image analysis, signal processing and pattern recognition are major fields of computer science, computer engineering and electrical and electronic engineering, with past and on-going research covering a full range of topics and tasks, from basic research to a huge number of real-world industrial applications.
Among the techniques studied and applied within these research fields, evolutionary computation (EC), including evolutionary algorithms, swarm intelligence and other paradigms, is playing an increasingly relevant role. Recently, evolutionary deep learning has also attracted considerable attention in these fields. The terms Evolutionary Image Analysis and Signal Processing and Evolutionary Computer Vision are more and more commonly accepted as descriptors of a clearly defined research area and family of techniques and applications. This has also been favoured by the recent availability of hardware and computing environments such as GPUs and grid/cloud/parallel computing systems, whose architecture and computation paradigm fit EC algorithms extremely well, alleviating the intrinsically heavy computational burden imposed by such techniques and allowing even for real-time applications.
The tutorial will introduce the general framework within which Evolutionary Image Analysis, Signal Processing and Pattern Recognition can be studied and applied, sketching a schematic taxonomy of the field and providing examples of successful real-world applications. The application areas to be covered will include edge detection, segmentation, object tracking, object recognition, motion detection, image classification and recognition. EC techniques to be covered will include genetic algorithms, genetic programming, particle swarm optimisation, evolutionary multi-objective optimisation as well as memetic/hybrid paradigms. We focus on the use of evolutionary deep learning ideas for image analysis; this includes automatically learning the architectures, parameters and transfer functions of convolutional neural networks (and autoencoders and genetic programming if time allows). The use of GPU boxes for real-time/fast object classification will be discussed. We will show how such EC techniques can be effectively applied to image analysis and signal processing problems and provide promising results. A deep-network-like approach will also be discussed for GP, in which classification is improved by co-evolving an embedding of the input pattern and a classifier at the same time.
Mengjie Zhang
Mengjie Zhang is a Fellow of the Royal Society of New Zealand, a Fellow of IEEE, and currently Professor of Computer Science at Victoria University of Wellington, where he heads the interdisciplinary Evolutionary Computation Research Group. He is a member of the University Academic Board, a member of the University Postgraduate Scholarships Committee, Associate Dean (Research and Innovation) in the Faculty of Engineering, and Chair of the Research Committee of the Faculty of Engineering and School of Engineering and Computer Science. His research is mainly focused on evolutionary computation, particularly genetic programming, particle swarm optimisation and learning classifier systems, with application areas including feature selection/construction and dimensionality reduction, computer vision and image processing, evolutionary deep learning and transfer learning, job shop scheduling, multi-objective optimisation, and clustering and classification with unbalanced and missing data. He is also interested in data mining, machine learning, and web information extraction. Prof Zhang has published over 700 research papers in refereed international journals and conferences in these areas. He has been serving as an associate editor or editorial board member for over 10 international journals including IEEE Transactions on Evolutionary Computation, IEEE Transactions on Cybernetics, the Evolutionary Computation Journal (MIT Press), ACM Transactions on Evolutionary Learning and Optimisation, Genetic Programming and Evolvable Machines (Springer), IEEE Transactions on Emerging Topics in Computational Intelligence, Applied Soft Computing, and Engineering Applications of Artificial Intelligence, and as a reviewer for over 30 international journals. He has been a major chair for eight international conferences. He has also been serving as a steering committee member and a program committee member for over 80 international conferences including all major conferences in evolutionary computation. Since 2007, he has been listed as one of the top ten world genetic programming researchers by the GP bibliography (http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/index.html). He was the Tutorial Chair for GECCO 2014, an AIS-BIO Track Chair for GECCO 2016, an EML Track Chair for GECCO 2017, and a GP Track Chair for GECCO 2020 and 2021. Since 2012, he has been co-chairing several parts of IEEE CEC, SSCI, and the EvoIASP/EvoApplications conference (he has been involved in major EC conferences such as GECCO, CEC, EvoStar and SEAL). Since 2014, he has been co-organising and co-chairing the special session on evolutionary feature selection and construction at IEEE CEC and SEAL, and has also delivered keynote/plenary talks for IEEE CEC 2018, IEEE ICAVSS 2018, DOCSA 2019, IES 2017 and the Chinese National Conference on AI in Law 2017. Prof Zhang was the Chair of the IEEE CIS Intelligent Systems Applications Technical Committee, the IEEE CIS Emergent Technologies Technical Committee, and the IEEE CIS Evolutionary Computation Technical Committee; a Vice-Chair of the IEEE CIS Task Force on Evolutionary Computer Vision and Image Processing, and the IEEE CIS Task Force on Evolutionary Deep Learning and Applications; and also the founding chair of the IEEE Computational Intelligence Chapter in New Zealand.
Stefano Cagnoni
Stefano Cagnoni graduated in Electronic Engineering at the University of Florence, Italy, where he also obtained a PhD in Biomedical Engineering and was a postdoc until 1997. In 1994 he was a visiting scientist at the Whitaker College Biomedical Imaging and Computation Laboratory at the Massachusetts Institute of Technology. Since 1997 he has been with the University of Parma, where he has been Associate Professor since 2004. Recent research grants include: a grant from Regione Emilia-Romagna to support research on industrial applications of Big Data Analysis; the co-management of industry/academia cooperation projects, namely the development, with Protec srl, of a new-generation computer vision-based fruit sorter and, with the Italian Railway Network Society (RFI) and Camlin Italy, of an automatic inspection system for train pantographs; and an EU-funded “Marie Curie Initial Training Network” grant for a four-year research training project in Medical Imaging using Bio-Inspired and Soft Computing. He was Editor-in-Chief of the "Journal of Artificial Evolution and Applications" from 2007 to 2010. From 1999 to 2018, he was chair of EvoIASP, an event dedicated to evolutionary computation for image analysis and signal processing, then a track of the EvoApplications conference. From 2005 to 2020, he co-chaired MedGEC, a workshop on medical applications of evolutionary computation at GECCO. He has been co-editor of journal special issues dedicated to Evolutionary Computation for Image Analysis and Signal Processing, and is a member of the Editorial Board of the journals “Evolutionary Computation” and “Genetic Programming and Evolvable Machines”. He has been awarded the "Evostar 2009 Award" in recognition of the most outstanding contribution to Evolutionary Computation.
Evolutionary Computation and Machine Learning in Security
The interplay between artificial intelligence (AI) and security has become more prominent and important in recent years. The ever-increasing amounts of available data and computing power, as well as the goal of running automated evaluations, bring new security and privacy challenges.
Now, researchers study not only the AI applications for the security domain but also the security of AI.
First, we provide a brief introduction to domains where AI (we concentrate on machine learning and evolutionary algorithms) is used to solve problems from the security domain.
The first domain we consider is cryptology, with topics such as the evolution of Boolean functions and S-boxes, attacks on Physically Unclonable Functions, side-channel attacks, fault injection attacks, hardware Trojan detection/prevention/insertion, attacks on logic locking, and AI-based countermeasures.
Next, we discuss several successful AI applications to security domains such as fuzzing and network security.
Finally, we consider the challenges in making AI secure against various fault mechanisms.
Our tutorial will cover a number of different security challenges and how AI can help solve those. We will also discuss how to recognize what AI technique is suitable for a specific security problem.
We end this tutorial with a discussion about how experiences in solving AI problems could help solve problems in security and vice versa. To that end, we emphasize a collaborative approach between the two communities.
Stjepan Picek
Stjepan Picek is an associate professor in the Digital Security group at Radboud University, The Netherlands. His research interests are security/cryptography, machine learning, and evolutionary computation. Prior to the associate professor position, Stjepan was an assistant professor at TU Delft, The Netherlands, a postdoctoral researcher at MIT, USA, and a postdoctoral researcher at KU Leuven, Belgium. Stjepan finished his PhD in 2015 with a thesis on cryptology and evolutionary computation techniques. Stjepan also has several years of experience working in industry and government. Up to now, Stjepan has given more than 30 invited talks at conferences and summer schools and published more than 140 refereed papers in both evolutionary computation and cryptography journals and conferences. Stjepan is a member of the organizing committee of the International Summer School on Real-world Crypto and Privacy.
Domagoj Jakobovic
Domagoj Jakobovic received his PhD degree in 2005 at the Faculty of Electrical Engineering and Computing, University of Zagreb, on the subject of generating scheduling heuristics with genetic programming. He is currently a full professor at the Department of Electronics, Microelectronics, Computer and Intelligent Systems. His research interests include evolutionary algorithms, optimization methods and parallel algorithms. His most notable contributions are in the areas of machine-supported scheduling, optimization problems in cryptography, and the parallelization and improvement of evolutionary algorithms. He has published more than 100 papers, led several research projects and served as a reviewer for many international journals and conferences. He has supervised seven doctoral theses and more than 170 bachelor and master theses.
Evolutionary Computation for Feature Selection and Feature Construction
In data mining/big data and machine learning, many real-world problems such as bio-data classification and biomarker detection, image analysis, and text mining often involve a large number of features/attributes. However, not all the features are essential, since many of them are redundant or even irrelevant, and the useful features are typically not equally important. Using all the features for classification or other data mining tasks typically does not produce good results due to the high dimensionality and the large search space. This problem can be addressed by feature selection, which selects a small subset of original (relevant) features, or by feature construction, which creates a smaller set of high-level features from the original low-level features.
Feature selection and construction are very challenging tasks due to the large search space and feature interaction problems. Exhaustive search for the best feature subset of a given dataset is practically impossible in most situations. A variety of heuristic search techniques have been applied to feature selection and construction, but most of the existing methods still suffer from stagnation in local optima and/or high computational cost. Owing to their global search potential and heuristic guidance, evolutionary computation techniques such as genetic algorithms, genetic programming, particle swarm optimisation, ant colony optimisation, differential evolution and evolutionary multi-objective optimisation have recently been used for feature selection and construction for dimensionality reduction, and have achieved great success. Many of these methods select/construct only a small number of important features, produce higher accuracy, and generate small models that are easy to understand/interpret and efficient to run on unseen data. Evolutionary computation techniques have now become an important means for handling high-dimensionality issues where feature selection and construction are required. Furthermore, feature selection and dimensionality reduction have also been a main approach to explainable machine learning and interpretable AI.
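To give a flavour of the wrapper/filter idea behind evolutionary feature selection, the following minimal Python sketch evolves a binary feature mask with a simple genetic algorithm; the synthetic dataset, the correlation-based fitness, and all parameter values are invented for illustration and are not taken from the tutorial.

```python
# Minimal, illustrative sketch of GA-based feature selection on a binary mask.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 100 samples, 20 features, only the first 5 are informative.
n_samples, n_features, n_informative = 100, 20, 5
X = rng.normal(size=(n_samples, n_features))
y = X[:, :n_informative].sum(axis=1) + 0.1 * rng.normal(size=n_samples)

def fitness(mask, alpha=0.05):
    """Mean |correlation| of selected features with y, minus a size penalty."""
    if mask.sum() == 0:
        return -1.0
    corrs = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in np.flatnonzero(mask)]
    return float(np.mean(corrs)) - alpha * mask.sum() / n_features

def tournament(pop, fits, k=3):
    idx = rng.choice(len(pop), size=k, replace=False)
    return pop[idx[np.argmax(fits[idx])]].copy()

pop = rng.integers(0, 2, size=(30, n_features))            # binary masks
for gen in range(50):
    fits = np.array([fitness(m) for m in pop])
    children = []
    for _ in range(len(pop)):
        p1, p2 = tournament(pop, fits), tournament(pop, fits)
        cut = rng.integers(1, n_features)                   # one-point crossover
        child = np.concatenate([p1[:cut], p2[cut:]])
        flip = rng.random(n_features) < 1.0 / n_features    # bit-flip mutation
        child[flip] ^= 1
        children.append(child)
    pop = np.array(children)

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))
```

In a wrapper setting, the filter-style fitness above would simply be replaced by the cross-validated accuracy of a classifier trained on the selected features.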
The tutorial will introduce the general framework within which evolutionary feature selection and construction can be studied and applied, sketching a schematic taxonomy of the field and providing examples of successful real-world applications. The application areas to be covered will include bio-data classification and biomarker detection, image analysis and pattern classification, symbolic regression, network security and intrusion detection, and text mining. EC techniques to be covered will include genetic algorithms, genetic programming, particle swarm optimisation, differential evolution, ant colony optimisation, artificial bee colony optimisation, and evolutionary multi-objective optimisation. We will show how such evolutionary computation techniques (with a focus on particle swarm optimisation and genetic programming) can be effectively applied to feature selection/construction and dimensionality reduction and provide promising results.
Bing Xue
Bing Xue is currently a Professor of Artificial Intelligence and Program Director of Science in the School of Engineering and Computer Science at VUW. She has over 200 papers published in fully refereed international journals and conferences, and her research focuses mainly on evolutionary computation, data mining, machine learning, classification, symbolic regression, feature selection, evolving deep neural networks, image analysis, transfer learning, and multi-objective machine learning. Dr Xue is currently the Chair of the IEEE CIS Task Force on Transfer Learning & Transfer Optimization, and Vice-Chair of the IEEE Task Force on Evolutionary Feature Selection and Construction and of the IEEE CIS Task Force on Evolutionary Deep Learning and Applications. She was the Chair of the IEEE Computational Intelligence Society (CIS) Data Mining and Big Data Analytics Technical Committee. Prof Xue is the organizer of the special session on Evolutionary Feature Selection and Construction at the IEEE Congress on Evolutionary Computation (CEC) 2015, 2016, 2017, 2018, 2019, 2020, and 2021. Prof Xue has been a chair for a number of international conferences, including Chair of Women@GECCO 2018, co-Chair of the Evolutionary Machine Learning Track for GECCO 2019 and 2020, and co-Chair of the first Neuroevolution Track for GECCO 2021. She is the Lead Chair of the IEEE Symposium on Computational Intelligence in Feature Analysis, Selection, and Learning in Image and Pattern Recognition (FASLIP) at SSCI 2016, 2017, 2018, 2019 and 2020, a Program Co-Chair of the 7th International Conference on Soft Computing and Pattern Recognition (SoCPaR2015), a Program Chair of the 31st Australasian Joint Conference on Artificial Intelligence (AI 2018), Finance Chair of the 2019 IEEE Congress on Evolutionary Computation, a Workshop Chair of the 2021 IEEE International Conference on Data Mining (ICDM), a Conference Activities Chair of the 2021 IEEE Symposium Series on Computational Intelligence (IEEE SSCI 2021), a General Co-Chair of the 35th International Conference on Image and Vision Computing New Zealand (IVCNZ 2020), a Tutorial Co-Chair of the 2022 IEEE World Congress on Computational Intelligence (IEEE WCCI 2022), and a Publication Co-Chair of the 25th European Conference on Genetic Programming (EuroGP 2022). She is an Editor of the IEEE Computational Intelligence Society Newsletter. She is an Associate Editor or Member of the Editorial Board for over ten international journals, including IEEE Transactions on Evolutionary Computation, IEEE Computational Intelligence Magazine, IEEE Transactions on Artificial Intelligence, IEEE Transactions on Emerging Topics in Computational Intelligence, and ACM Transactions on Evolutionary Learning and Optimization.
Mengjie Zhang
Mengjie Zhang is a Fellow of the Royal Society of New Zealand, a Fellow of IEEE, and currently Professor of Computer Science at Victoria University of Wellington, where he heads the interdisciplinary Evolutionary Computation Research Group. He is a member of the University Academic Board, a member of the University Postgraduate Scholarships Committee, Associate Dean (Research and Innovation) in the Faculty of Engineering, and Chair of the Research Committee of the Faculty of Engineering and School of Engineering and Computer Science. His research is mainly focused on evolutionary computation, particularly genetic programming, particle swarm optimisation and learning classifier systems, with application areas of feature selection/construction and dimensionality reduction, computer vision and image processing, evolutionary deep learning and transfer learning, job shop scheduling, multi-objective optimisation, and clustering and classification with unbalanced and missing data. He is also interested in data mining, machine learning, and web information extraction. Prof Zhang has published over 700 research papers in refereed international journals and conferences in these areas. He has been serving as an associate editor or editorial board member for over 10 international journals, including IEEE Transactions on Evolutionary Computation, IEEE Transactions on Cybernetics, the Evolutionary Computation Journal (MIT Press), ACM Transactions on Evolutionary Learning and Optimisation, Genetic Programming and Evolvable Machines (Springer), IEEE Transactions on Emerging Topics in Computational Intelligence, Applied Soft Computing, and Engineering Applications of Artificial Intelligence, and as a reviewer for over 30 international journals. He has been a major chair for eight international conferences, and has also been serving as a steering committee member and a program committee member for over 80 international conferences, including all major conferences in evolutionary computation. Since 2007, he has been listed as one of the top ten world genetic programming researchers by the GP bibliography (http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/index.html). He was the Tutorial Chair for GECCO 2014, an AIS-BIO Track Chair for GECCO 2016, an EML Track Chair for GECCO 2017, and a GP Track Chair for GECCO 2020 and 2021. Since 2012, he has been co-chairing several parts of IEEE CEC, SSCI, and the EvoIASP/EvoApplications conference (he has been involved in major EC conferences such as GECCO, CEC, EvoStar, and SEAL). Since 2014, he has been co-organising and co-chairing the special session on evolutionary feature selection and construction at IEEE CEC and SEAL, and has also delivered keynote/plenary talks at IEEE CEC 2018, IEEE ICAVSS 2018, DOCSA 2019, IES 2017, and the Chinese National Conference on AI in Law 2017. Prof Zhang was the Chair of the IEEE CIS Intelligent Systems Applications Technical Committee, the IEEE CIS Emergent Technologies Technical Committee, and the IEEE CIS Evolutionary Computation Technical Committee; a Vice-Chair of the IEEE CIS Task Force on Evolutionary Computer Vision and Image Processing and the IEEE CIS Task Force on Evolutionary Deep Learning and Applications; and also the founding chair of the IEEE Computational Intelligence Chapter in New Zealand.
Evolutionary Continuous Dynamic Optimization
Change is an inescapable aspect of natural and artificial systems, and adaptation is central to their resilience. Optimization problems are no exception to this maxim. Indeed, the viability of businesses and their operational success depend heavily on how effectively they respond to change in the myriad optimization problems they entail. For an optimization problem, this boils down to the efficiency of an algorithm in finding and maintaining a quality solution to an ever-changing problem. The ubiquity of dynamic optimization problems demands extensive research into the design and development of algorithms capable of dealing with various types of change.
Inspired by biological evolution and natural self-organized systems, evolutionary algorithms and swarm intelligence methods have been widely used for solving dynamic optimization problems due to their natural capability to deal with environmental changes. Indeed, both classes have been successfully applied to dynamic optimization problems with various environmental and dynamic characteristics. However, one cannot directly apply them to tackle dynamic optimization problems, as these methods were originally designed for optimization in static environments and cannot cope with the challenges of a dynamic optimization problem alone. Hence, they are usually combined with additional components to form evolutionary dynamic optimization methods.
This tutorial is dedicated to exploring the recent advances in the field of evolutionary continuous dynamic optimization. It first gives the definition of dynamic optimization problems and explains different classes of these problems. Then, the components of evolutionary continuous dynamic optimization methods are described. This is followed by a description of the state-of-the-art and widely used benchmarks in the field. The tutorial also introduces the commonly used performance indicators for analyzing and comparing the performance of the algorithms. After introducing several real-world applications, the tutorial concludes by discussing some of the current challenges, the gap between academic research and real-world problems, and some potential future research directions.
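As one concrete example of such a performance indicator (this particular formulation is a common one from the dynamic-optimization literature, not necessarily the exact set used in the tutorial), the offline error averages, over the whole run, the gap between the current optimum and the best solution found since the last environmental change:

\[
E_{\text{offline}} \;=\; \frac{1}{T}\sum_{t=1}^{T}\Bigl(f\bigl(x^{*}(t),\,t\bigr) \;-\; f\bigl(x^{\text{best}}(t),\,t\bigr)\Bigr),
\]

where $x^{*}(t)$ is the optimum of the environment at evaluation $t$ and $x^{\text{best}}(t)$ is the best solution found by the algorithm since the last change.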
Overall, this tutorial takes the form of a critical survey of the existing methods with an emphasis on articulating the challenges in continuous dynamic optimization problems in order to stimulate further research interest in this area.
Danial Yazdani
Danial Yazdani received his Ph.D. degree in computer science from Liverpool John Moores University, Liverpool, UK, in 2018. He is currently a Research Assistant Professor with the Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China. Besides dynamically changing environments, his main research interests include evolutionary algorithms, large-scale optimization, simulation optimization, and their applications. He was a recipient of the Best Thesis Award from the Faculty of Engineering and Technology, Liverpool John Moores University, and the SUSTech Presidential Outstanding Postdoctoral Award from the Southern University of Science and Technology. He is a member of the IEEE Task Force on Evolutionary Computation in Dynamic and Uncertain Environments, and the IEEE Task Force on Large-Scale Global Optimization.
Xin Yao
Xin Yao obtained his Ph.D. in 1990 from the University of Science and Technology of China (USTC), MSc in 1985 from North China Institute of Computing Technologies, and BSc in 1982 from USTC. He is a Chair Professor of Computer Science at the Southern University of Science and Technology, Shenzhen, China, and a part-time Professor of Computer Science at the University of Birmingham, UK. He is an IEEE Fellow and was a Distinguished Lecturer of the IEEE Computational Intelligence Society (CIS). His major research interests include evolutionary computation, ensemble learning, and their applications to software engineering. Prof. Yao's paper on evolving artificial neural networks won the 2001 IEEE Donald G. Fink Prize Paper Award. He also won 2010, 2016, and 2017 IEEE Transactions on Evolutionary Computation Outstanding Paper Awards, 2011 IEEE Transactions on Neural Networks Outstanding Paper Award, and many other best paper awards. He received a prestigious Royal Society Wolfson Research Merit Award in 2012, the IEEE CIS Evolutionary Computation Pioneer Award in 2013, and the 2020 IEEE Frank Rosenblatt Award. He was the President (2014-15) of IEEE CIS and the Editor-in-Chief (2003-08) of IEEE Transactions on Evolutionary Computation.
Evolutionary Diversity Optimisation for Combinatorial Optimisation
Designing evolutionary computation techniques for diversity optimisation is highly beneficial to many industrial application areas, as it provides a large variety of high-quality and innovative design choices. Evolutionary diversity optimisation aims to construct a diverse set of solutions that all meet given quality criteria. Moreover, diversity optimisation in terms of features of the underlying problem makes it possible to obtain a better understanding of possible solutions to the problem and can be used for algorithm selection when dealing with combinatorial optimisation problems such as the Traveling Salesperson Problem.
This tutorial gives an overview of this relatively new research area within evolutionary computation. We will point out design principles for evolutionary diversity optimisation and discuss important components such as the design of mutation operators, features of solutions, and the impact of different diversity measures. Furthermore, we report on theoretical investigations that lay the foundations for a better understanding of the working principles of evolutionary diversity optimisation methods.
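One common way to state the underlying problem (this formulation is a typical one from the literature, given here only for orientation) is to search for a population $P=\{x_1,\dots,x_\mu\}$ that maximizes a diversity measure subject to a quality threshold:

\[
\max_{P=\{x_1,\dots,x_\mu\}} D(P) \quad \text{subject to} \quad f(x_i) \ge (1-\alpha)\, f(x^{*}) \ \text{ for all } i,
\]

where $D$ measures the diversity of the set (for instance, in a feature space of the instances or solutions), $f$ is the quality function, $x^{*}$ is an optimal or reference solution, and $\alpha$ controls how much quality may be traded for diversity.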
Preliminary outline:
- Introduction and Motivation
- Definition: Evolutionary Diversity Optimisation
- Application Areas
- Diverse TSP Instances and Images
- Indicators for Evolutionary Diversity Optimisation
- Operators for Creating Diverse Sets of Solutions
- Theoretical Results
- A Related Field: Novelty Search
- Outlook and Topics for Future Work
Jakob Bossek
Jakob Bossek received his bachelor degree in Statistics and his diploma in Computer Science from TU Dortmund University (Germany) in 2013 and 2014, respectively. In 2018 he finished his PhD in Information Systems at the University of Münster, Germany. He held postdoc positions at the University of Münster, Germany, and the University of Adelaide, Australia. Currently, he is affiliated with his alma mater, the Department for Information Systems at the University of Münster. His broad research topics include practical and theoretical aspects of bio-inspired problem solving, in particular evolutionary algorithms for NP-hard combinatorial optimisation problems.
Aneta Neumann
Aneta Neumann graduated in Computer Science from the Christian-Albrechts-University of Kiel, Germany, and received her PhD from the University of Adelaide, Australia. She is currently a postdoctoral researcher at the School of Computer Science, The University of Adelaide, Australia. She has recently participated in the artistic exhibitions SALA 2016-17 and 2020 in Adelaide. She presented invited talks at UCL London, Goldsmiths, University of London, the University of Nottingham, the University of Sheffield, the Hasso Plattner Institute, University of Potsdam, Sorbonne University, the University of Melbourne, and the University of Sydney in 2016-19. She received the Lorentz Center Grant 2020, Optimization Meets Machine Learning, Leiden, The Netherlands, the ACM Women scholarship, sponsored by Google, Microsoft, and Oracle, the Hans-Juergen and Marianna Ohff Research Grant in 2018, and a Best Paper Nomination at GECCO 2019 in the track "Genetic Algorithms". She is a co-designer and co-lecturer for the EdX Big Data Fundamentals course in the Big Data MicroMasters® program and for the Pearson Education online course Working with Big Data. Her main research interest focuses on bio-inspired computation, particularly dynamic and stochastic optimization, evolutionary diversity optimization and digital art. She serves as the co-chair of the Real-World Applications track at GECCO 2021 and GECCO 2022.
Frank Neumann
Frank Neumann is a professor and the leader of the Optimisation and Logistics group at the University of Adelaide and an Honorary Professorial Fellow at the University of Melbourne. His current position is funded by the Australian Research Council through a Future Fellowship and focuses on AI-based optimisation methods for problems with stochastic constraints. Frank has been the general chair of the ACM GECCO 2016 and co-organised ACM FOGA 2013 in Adelaide. He is an Associate Editor of the journals "Evolutionary Computation" (MIT Press) and ACM Transactions on Evolutionary Learning and Optimization. In his work, he considers algorithmic approaches in particular for combinatorial and multi-objective optimization problems and focuses on theoretical aspects of evolutionary computation as well as high impact applications in the areas of cybersecurity, renewable energy, logistics, and mining.
Evolutionary Submodular Optimisation
Submodular functions play a key role in optimisation and machine learning, as many real-world problems can be stated in terms of a submodular function. Submodular functions capture problems that exhibit diminishing returns when additional components are added to a solution.
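Formally (this is the standard diminishing-returns definition, stated here only for reference), a set function $f\colon 2^{V}\to\mathbb{R}$ over a ground set $V$ is submodular if

\[
f(A \cup \{v\}) - f(A) \;\ge\; f(B \cup \{v\}) - f(B) \qquad \text{for all } A \subseteq B \subseteq V \text{ and } v \in V \setminus B,
\]

i.e., the marginal gain of adding an element never increases as the current solution grows.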
The goal of the tutorial is to give an overview on the area of evolutionary submodular optimisation that has gained increasing attention in the evolutionary computation and artificial intelligence community in recent years.
The audience will gain an overview of recent research on submodular optimisation using evolutionary computing methods and corresponding results published at conferences such as GECCO, PPSN, AAAI and IJCAI.
This new and innovative research area has shown that evolutionary computing techniques achieve the same theoretical performance guarantees as traditional approaches, but usually perform much better in practice. The tutorial will summarize the current state of the art in this research area and point out research gaps and future research directions.
The target audience includes researchers interested in the area of evolutionary computation for combinatorial optimisation. An interest in theoretical guarantees of algorithms with respect to their approximation behaviour and/or in experimental algorithmic research provides a good basis for attending the tutorial, but is not required, as the tutorial will cover both theoretical and applied aspects of evolutionary submodular optimisation.
Aneta Neumann
Aneta Neumann graduated in Computer Science from the Christian-Albrechts-University of Kiel, Germany, and received her PhD from the University of Adelaide, Australia. She is currently a postdoctoral researcher at the School of Computer Science, The University of Adelaide, Australia. She has recently participated in the artistic exhibitions SALA 2016-17 and 2020 in Adelaide. She presented invited talks at UCL London, Goldsmiths, University of London, the University of Nottingham, the University of Sheffield, the Hasso Plattner Institute, University of Potsdam, Sorbonne University, the University of Melbourne, and the University of Sydney in 2016-19. She received the Lorentz Center Grant 2020, Optimization Meets Machine Learning, Leiden, The Netherlands, the ACM Women scholarship, sponsored by Google, Microsoft, and Oracle, the Hans-Juergen and Marianna Ohff Research Grant in 2018, and a Best Paper Nomination at GECCO 2019 in the track "Genetic Algorithms". She is a co-designer and co-lecturer for the EdX Big Data Fundamentals course in the Big Data MicroMasters® program and for the Pearson Education online course Working with Big Data. Her main research interest focuses on bio-inspired computation, particularly dynamic and stochastic optimization, evolutionary diversity optimization and digital art. She serves as the co-chair of the Real-World Applications track at GECCO 2021 and GECCO 2022.
Frank Neumann
Frank Neumann is a professor and the leader of the Optimisation and Logistics group at the University of Adelaide and an Honorary Professorial Fellow at the University of Melbourne. His current position is funded by the Australian Research Council through a Future Fellowship and focuses on AI-based optimisation methods for problems with stochastic constraints. Frank has been the general chair of the ACM GECCO 2016 and co-organised ACM FOGA 2013 in Adelaide. He is an Associate Editor of the journals "Evolutionary Computation" (MIT Press) and ACM Transactions on Evolutionary Learning and Optimization. In his work, he considers algorithmic approaches in particular for combinatorial and multi-objective optimization problems and focuses on theoretical aspects of evolutionary computation as well as high impact applications in the areas of cybersecurity, renewable energy, logistics, and mining.
Chao Qian
Chao Qian is an Associate Professor in the School of Artificial Intelligence, Nanjing University, China. He received his BSc and PhD degrees from the Department of Computer Science and Technology, Nanjing University. After finishing his PhD in 2015, he was an Associate Researcher in the School of Computer Science and Technology, University of Science and Technology of China, until 2019, when he returned to Nanjing University.
His research interests are mainly the theoretical analysis of evolutionary algorithms and the design of evolutionary algorithms with provable approximation guarantees for sophisticated optimization problems, particularly in machine learning. He has published the book "Evolutionary Learning: Advances in Theories and Algorithms" and more than 30 papers in top-tier journals (e.g., AIJ, TEvC, ECJ, Algorithmica, TCS) and conferences (e.g., AAAI, IJCAI, NIPS). He has won the ACM GECCO 2011 Best Theory Paper Award, the IDEAL 2016 Best Paper Award, and an IEEE CEC 2021 Best Student Paper Award Nomination. He is an editorial board member of the Memetic Computing journal. He is a member of the IEEE CIS Evolutionary Computation Technical Committee, and the chair of the IEEE Computational Intelligence Society (CIS) Task Force on Theoretical Foundations of Bio-inspired Computation.
Generative Hyper-heuristics
The automatic design of algorithms has been an early aim of both machine learning and AI, but has proved elusive. The aim of this tutorial is to introduce generative hyper-heuristics as a principled approach to the automatic design of algorithms. Hyper-heuristics are metaheuristics applied to a space of algorithms; i.e., any general heuristic method of sampling a set of candidate algorithms. In particular, this tutorial will demonstrate how to mine existing algorithms to obtain algorithmic primitives for the generative hyper-heuristic to compose new algorithmic solutions from, and to employ various types of genetic programming to execute the composition process; i.e., the search of program space.
This tutorial will place generative hyper-heuristics in the context of genetic programming - which differs in that it constructs solutions from scratch using atomic primitives - as well as genetic improvement - which takes a program as a starting point and improves on it (a recent direction introduced by William Langdon).
The approach proceeds from the observation that it is possible to define an invariant framework for the core of any class of algorithms (often by examining existing human-written algorithms for inspiration). The variant components of the algorithm can then be generated by genetic programming. Each instance of the framework therefore defines a family of algorithms. While this allows searches in constrained search spaces based on problem knowledge, it does not in any way limit the generality of this approach, as the template can be chosen to be any executable program and the primitive set can be selected to be Turing-complete. Typically, however, the initial algorithmic primitive set is composed of primitive components of existing high-performing algorithms for the problems being targeted; this more targeted approach very significantly reduces the initial search space, resulting in a practical approach rather than a mere theoretical curiosity. Iterative refining of the primitives allows for gradual and directed enlarging of the search space until convergence.
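To make the "invariant framework plus generated component" idea concrete, the minimal Python sketch below fixes an on-line bin-packing loop and treats only the bin-scoring heuristic as the variant component that a generative hyper-heuristic would evolve; the framework, the item data, and both scoring expressions are invented for illustration and are not taken from the tutorial.

```python
# Invariant framework: the packing loop never changes; only `score` does.
def pack(items, capacity, score):
    """Place each item in the feasible bin with the highest score;
    open a new bin if no existing bin fits."""
    bins = []
    for item in items:
        feasible = [b for b in bins if sum(b) + item <= capacity]
        if feasible:
            best = max(feasible, key=lambda b: score(item, capacity - sum(b)))
            best.append(item)
        else:
            bins.append([item])
    return bins

# Hand-written heuristic: best fit (prefer the tightest feasible bin).
best_fit = lambda item, free: -(free - item)

# The kind of arithmetic expression a generative hyper-heuristic might evolve
# from primitives {item, free, +, -, *, /}; this particular one is made up.
evolved = lambda item, free: item / (free + 1e-9) - (free - item) ** 2

items = [4, 8, 1, 4, 2, 1, 7, 3, 6, 2, 5]
for name, h in [("best fit", best_fit), ("evolved-style", evolved)]:
    print(name, "->", len(pack(items, 10, h)), "bins")
```

In an actual generative hyper-heuristic, a GP run would search the space of such scoring expressions, evaluating each candidate by running the fixed framework on a set of training instances.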
This leads to a technique for mass-producing algorithms that can be customized to the context of end-use. This is perhaps best illustrated as follows: typically, a researcher might create a travelling salesperson problem (TSP) algorithm by hand. When executed, this algorithm returns a solution to a specific instance of the TSP. We will describe a method that generates TSP algorithms that are tuned to representative instances of interest to the end-user. This method has been applied to a growing number of domains including: data mining/machine learning, combinatorial problems including bin packing (on- and off-line), Boolean satisfiability, job shop scheduling, exam timetabling, image recognition, black-box function optimization, wind-farm layout, and the automated design of meta-heuristics themselves (from selection and mutation operators to the overall meta-heuristic architecture).
This tutorial will provide a step-by-step guide which takes the novice through the distinct stages of automatic design. Examples will illustrate and reinforce the issues of practical application. This technique has repeatedly produced results which outperform their manually designed counterparts, and a theoretical underpinning will be given to demonstrate why this is the case. Automatic design will become an increasingly attractive proposition for industrial application, as the cost of human design will only increase in line with inflation, while the speed of processors increases in line with Moore's law. Basic knowledge of genetic programming will be assumed.
Daniel Tauritz
Daniel Tauritz is an Associate Professor in the Department of Computer Science and Software Engineering at Auburn University (AU), Interim Director and Chief Cyber AI Strategist of the Auburn Cyber Research Center, the founding Head of AU’s Biomimetic Artificial Intelligence Research Group (BioAI Group), a cyber consultant for Sandia National Laboratories, a Guest Scientist at Los Alamos National Laboratory (LANL), and founding academic director of the LANL/AU Cyber Security Sciences Institute (CSSI). He received his Ph.D. in 2002 from Leiden University. His research interests include the design of generative hyper-heuristics and self-configuring evolutionary algorithms and the application of computational intelligence techniques in cyber security, critical infrastructure protection, and program understanding. He was granted a US patent for an artificially intelligent rule-based system to assist teams in becoming more effective by improving the communication process between team members.
John Woodward
John R. Woodward is a lecturer at Queen Mary University of London. Formerly he was a lecturer at the University of Stirling, within the CHORDS group (http://chords.cs.stir.ac.uk/), where he was employed on the DAASE project (http://daase.cs.ucl.ac.uk/). Before that he was a lecturer for four years at the University of Nottingham. He holds a BSc in Theoretical Physics, an MSc in Cognitive Science and a PhD in Computer Science, all from the University of Birmingham. His research interests include Automated Software Engineering, particularly Search Based Software Engineering, Artificial Intelligence/Machine Learning and, in particular, Genetic Programming. He has over 50 publications in Computer Science, Operations Research and Engineering, which include both theoretical and empirical contributions, and has given over 50 talks at international conferences and as an invited speaker at universities. He has worked in industrial, military, educational and academic settings, and has been employed by EDS, CERN, the RAF, and three UK universities.
Genetic improvement: Taking real-world source code and improving it using computational search methods.
Within the large field of Search-Based Software Engineering (SBSE), the relatively young area of Genetic Improvement (GI) aims to improve the properties of software at the level of source code. This means that it operates directly on, for example, Java or C source code, and it typically starts with real-world software. This makes GI attractive for industrial applications, in contrast, e.g., to Program Synthesis, which aims to computationally create applications from scratch. In this tutorial, we demonstrate how we can apply GI methodologies to improve functional properties, such as fixing bugs and increasing accuracy, and also to optimise physical properties of code, such as power consumption, size of code, bandwidth, and execution time. We will, furthermore, show and discuss the importance of involving the end users in the research and development of GI tools, especially industrial software developers, and how the tools deliver or suggest their improvements.
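As a rough illustration of the kind of representation GI typically searches over, the tiny Python sketch below applies a statement-level "patch" to an existing program; the edit types, the toy program, and the applier are simplified assumptions for this write-up, not the API of any particular GI framework such as Gin, and the fitness evaluation step (compiling, running tests, measuring time or energy) is omitted entirely.

```python
# Illustrative only: a statement-level patch representation in the spirit of GI.
from copy import deepcopy

source = [
    "total = 0",
    "for x in data:",
    "    total = total + x",
    "    log(x)   # expensive call, irrelevant to the returned value",
    "return total",
]

# A candidate patch: a short list of edits addressing statements by index.
# Here a single DELETE removes the costly logging call (a non-functional
# improvement); REPLACE copies one existing statement over another.
patch = [("DELETE", 3, None)]

def apply_patch(stmts, edits):
    """Apply edits to a copy of the statement list; out-of-range edits are
    simply skipped in this sketch."""
    out = deepcopy(stmts)
    deleted = set()
    for op, target, src in edits:
        if not 0 <= target < len(out):
            continue
        if op == "DELETE":
            deleted.add(target)
        elif op == "REPLACE" and src is not None and 0 <= src < len(stmts):
            out[target] = stmts[src]
    return [s for i, s in enumerate(out) if i not in deleted]

print("\n".join(apply_patch(source, patch)))
```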
The aim of the tutorial is to:
• examine the motives for evolving source code directly, rather than a language built from a function set and terminal set which has to be interpreted after a program has been evolved
• understand different approaches to implementing GI including operating directly on text files, and operating on abstract syntax trees
• appreciate the new research questions that can be addressed while operating on actual source code
• understand some of the issues regarding measuring non-functional properties such as execution time and power consumption
• examine some early examples of GI as well as current state-of-the-art examples
• understand links between GI and other techniques such as hyper-heuristics, automatic parameter tuning, and deep parameter tuning
• highlight some of the multi-objective research where programs have been evolved that lie on the Pareto front with axes representing different non-functional properties
• discuss the human-computer interaction between software developers and tools that provide automatic-programming-type services.
• give an introduction to GI in No Time - an open source simple micro-framework for GI (https://github.com/gintool/gin).
Saemundur Haraldsson
Saemundur O. Haraldsson is a Lecturer at the University of Stirling. He has co-organised every version of this tutorial and a number of the GI workshop iterations. He has multiple publications on Genetic Improvement, including two that have received best paper awards. Additionally, he co-authored the first comprehensive survey on GI, which was published in 2017. He has been invited to give talks on the subject in two CREST Open Workshops and multiple times to industrial audiences in Iceland. His PhD thesis (submitted in May 2017) details his work on the world's first live GI integration in an industrial application.
Alexander Brownlee
Alexander (Sandy) Brownlee is a Lecturer in the Division of Computing Science and Mathematics at the University of Stirling. His main topics of interest are in search-based optimisation methods and machine learning, with a focus on decision support tools, and applications in civil engineering, transportation and software engineering. He has published over 70 peer-reviewed papers on these topics. He has worked with several leading businesses including BT, KLM, and IES on industrial applications of optimisation and machine learning. He serves as a reviewer for several journals and conferences in evolutionary computation, civil engineering and transportation, and is currently an Editorial Board member for the journal Complex And Intelligent Systems. He has been an organiser of several workshops and tutorials at GECCO, CEC and PPSN on genetic improvement of software.
John Woodward
John R. Woodward is a lecturer at Queen Mary University of London. Formerly he was a lecturer at the University of Stirling, within the CHORDS group (http://chords.cs.stir.ac.uk/), where he was employed on the DAASE project (http://daase.cs.ucl.ac.uk/). Before that he was a lecturer for four years at the University of Nottingham. He holds a BSc in Theoretical Physics, an MSc in Cognitive Science and a PhD in Computer Science, all from the University of Birmingham. His research interests include Automated Software Engineering, particularly Search Based Software Engineering, Artificial Intelligence/Machine Learning and, in particular, Genetic Programming. He has over 50 publications in Computer Science, Operations Research and Engineering, which include both theoretical and empirical contributions, and has given over 50 talks at international conferences and as an invited speaker at universities. He has worked in industrial, military, educational and academic settings, and has been employed by EDS, CERN, the RAF, and three UK universities.
Emily Winter
Emily Winter is a Senior Research Associate at Lancaster University, specialising in the human and socio-technical aspects of software engineering. She works on the Fixie Project: Exploiting Defect Prediction for Automatic Software Repair, investigating developer needs and preferences for how they interact with an Automatic Software Repair tool. As part of her research, she is seconded as a contractor to Bloomberg LP.
Graph-based Genetic Programming
Although the classical way to represent programs in Genetic Programming (GP) is by means of an expression tree, different GP variants with alternative representations have been proposed throughout the years. One such representation is the Directed Acyclic Graph (DAG), adopted by methods like Cartesian Genetic Programming (CGP), Linear Genetic Programming (LGP), Parallel Distributed Genetic Programming (PDGP), and, more recently, Evolving Graphs by Graph Programming (EGGP). The aim of this tutorial is to consider these methods from a unified perspective as graph-based GP and to present their historical background, representation features, operators, applications, and available implementations (an illustrative decoding sketch is given after the list below). More specifically, we:
- Present a historical overview and taxonomy of different graph-based GP methods, giving a special focus on the fundamental aspects of the most popular ones.
- Based on a recent effort, highlight differences in how the DAG representation is used and manipulated by different graph-based methods and present initial results on how this impacts their performance and how they compare to the traditional tree representation.
- Present examples of successful applications of graph-based GP, for example, CGP applied to the design of digital circuits, neural networks and image operators.
- Demonstrate available implementations and benchmarks of some graph-based GP methods.
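As a rough illustration of the DAG-style representation referred to above, the following self-contained Python sketch decodes and evaluates a CGP-like genotype; all genome values are made up, and real CGP implementations add rows and columns, levels-back constraints, inactive genes, and point mutation, none of which is shown here.

```python
# Decode and evaluate a tiny Cartesian-GP-style genotype.
import operator

FUNCS = [operator.add, operator.sub, operator.mul,
         lambda a, b: a / b if b else 1.0]   # protected division

n_inputs = 2
# Each node gene: (function_index, first_argument, second_argument), where an
# argument < n_inputs refers to a program input and otherwise to an earlier node.
nodes = [
    (2, 0, 1),   # node 2: x0 * x1
    (0, 2, 1),   # node 3: (x0 * x1) + x1
    (1, 3, 0),   # node 4: ((x0 * x1) + x1) - x0
]
output_gene = 4  # the program output is taken from node 4

def evaluate(inputs, nodes, output_gene):
    values = list(inputs)                    # addresses 0..n_inputs-1
    for f_idx, a, b in nodes:                # decode nodes in order (DAG)
        values.append(FUNCS[f_idx](values[a], values[b]))
    return values[output_gene]

print(evaluate([3.0, 4.0], nodes, output_gene))   # ((3*4)+4)-3 = 13.0
```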
Roman Kalkreuth
Roman Kalkreuth is currently a research associate at TU Dortmund University in Germany. His research is located in the field of graph-based Genetic Programming; primarily, it focuses on the development and analysis of genetic operators and selection strategies for Cartesian Genetic Programming. After receiving a Master of Science in Computer Vision and Computational Intelligence (2012) from South Westphalia University of Applied Sciences, he started his PhD studies in 2014 at the Department of Computer Science of TU Dortmund University. Since 2015, he has been a research associate in the computational intelligence research group of Prof. Dr. Guenter Rudolph. Roman Kalkreuth defended his dissertation in July this year and subsequently became a postdoc in Professor Rudolph's group.
Leo Francoso Dal Piccol Sotto
Leo Sotto works as a research scientist at the Fraunhofer Institute for Algorithms and Scientific Computing (SCAI), Germany. He received his bachelor's degree (2015) and his PhD (2020) in Computer Science from the Federal University of Sao Paulo, Brazil. He has worked with Linear Genetic Programming (LGP) applied to, for example, regression and classification of cardiac arrhythmia, as well as with Estimation of Distribution Algorithms; he has investigated properties of the DAG representation in LGP, such as the role of non-effective code, and DAG-based representations for genetic programming in general.
Zdeněk Vašíček
Graybox Optimization and Next Generation Genetic Algorithms
Over the last 10 years, developments in Gray Box Optimization have made it possible to construct new forms of Genetic Algorithms that do not use random mutation or random recombination. Instead, for certain classes of NP-hard problems (ranging from MAXSAT to Graph Coloring to the Traveling Salesman Problem), it is possible to exactly compute the location of improving moves (often in constant time). There are also new, highly efficient forms of greedy deterministic recombination. In many combinatorial domains, this makes random mutation and random recombination obsolete.
Deterministic “Partition Crossover” locally decomposes a recombination graph into q subgraphs in O(n) time. It can then identify the best of 2^q possible offspring. For example, for q=40, partition crossover returns the best of one trillion possible offspring. If the parents are local optima, the offspring are guaranteed to be locally optimal in the largest hyperplane subspace containing both parents, and offspring are typically also local optima in the full space. This allows partition crossover to “tunnel” between local optima, moving directly from local optimum to local optimum. New theoretical results show that the local optima of combinatorial problems are largely arranged in a hierarchically organized lattice structure. Furthermore, both deterministic and stochastic forms of recombination are capable of “climbing” the lattice structure when tunneling between local optima. There can be exponentially many local optima in a single lattice structure, all of which are escaped during recombination.
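The compact Python sketch below illustrates the core idea of partition crossover on a k-bounded pseudo-Boolean maximization problem; the subfunction-list representation, the toy objective, and all names are illustrative assumptions, and real Graybox implementations are far more efficient (the O(n) decomposition and constant-time move evaluation claimed above are not reproduced here).

```python
# Decide each connected block of differing variables independently, exploiting
# the additive decomposition of a k-bounded objective across those blocks.
from collections import defaultdict

def partition_crossover(f_terms, p1, p2):
    """f_terms: list of (vars, sub) where `vars` is a tuple of variable indices
    and `sub` maps a tuple of their bit values to a real contribution."""
    n = len(p1)
    diff = [i for i in range(n) if p1[i] != p2[i]]
    diff_set = set(diff)

    # Recombination graph: differing variables that co-occur in a subfunction.
    adj = defaultdict(set)
    for vars_, _ in f_terms:
        d = [v for v in vars_ if v in diff_set]
        for a in d:
            for b in d:
                if a != b:
                    adj[a].add(b)

    # Connected components of the recombination graph.
    components, seen = [], set()
    for v in diff:
        if v in seen:
            continue
        comp, stack = set(), [v]
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            comp.add(u)
            stack.extend(adj[u] - seen)
        components.append(comp)

    # For each component, score "keep parent 1" vs "take parent 2's block".
    child = list(p1)
    for comp in components:
        gain = 0.0
        for vars_, sub in f_terms:
            if not comp.intersection(vars_):
                continue
            from_p1 = sub(tuple(p1[v] for v in vars_))
            mixed = sub(tuple(p2[v] if v in comp else p1[v] for v in vars_))
            gain += mixed - from_p1
        if gain > 0:                       # maximization: adopt parent 2's block
            for v in comp:
                child[v] = p2[v]
    return child

# Tiny example: f(x) = (x0 XOR x1) + (x2 AND x3) + x4, written as subfunctions.
f_terms = [
    ((0, 1), lambda t: float(t[0] ^ t[1])),
    ((2, 3), lambda t: float(t[0] & t[1])),
    ((4,),   lambda t: float(t[0])),
]
p1 = [0, 0, 1, 1, 0]
p2 = [1, 0, 0, 0, 1]
print(partition_crossover(f_terms, p1, p2))   # expected [1, 0, 1, 1, 1]
```

Because the objective decomposes additively over the components, choosing the better parent per component yields the best of the 2^q possible recombinants in a single pass.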
The tutorial will also show how “transforms” can be used to convert any pseudo-Boolean function (of polynomial size) into k-bounded pseudo-Boolean functions. This makes it possible to use much more powerful forms of Graybox evolutionary algorithms which can exploit k-bounded nonlinearity.
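For reference (this is the standard definition, not specific to this tutorial), a k-bounded pseudo-Boolean function is one that can be written as a sum of subfunctions, each depending on at most k of the n binary variables:

\[
f(x) \;=\; \sum_{i=1}^{m} f_i\bigl(x_{S_i}\bigr), \qquad x \in \{0,1\}^{n}, \quad S_i \subseteq \{1,\dots,n\},\ |S_i| \le k,
\]

and it is exactly this bounded-interaction structure that Graybox operators such as partition crossover exploit.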
Darrell Whitley
Prof. Darrell Whitley has been active in Evolutionary Computation since 1986, and has published more than 250 papers. These papers have garnered more than 28,000 citations. Dr. Whitley’s H-index is 68. He introduced the first “steady state genetic algorithm” with rank based selection, published some of the earliest papers on neuroevolution, and has worked on dozens of real world applications of evolutionary algorithms. He has served as Editor-in-Chief of the journal Evolutionary Computation, and served as Chair of the Governing Board of ACM Sigevo from 2007 to 2011. He is a Fellow of the ACM recognized for his contributions to Evolutionary Computation, and he has been awarded the 2022 IEEE PIONEER Award in Evolutionary Computation.
Introduction to automated design of scheduling heuristics with genetic programming
From the very beginning, artificial intelligence researchers have strived to develop methods that can learn to solve difficult problems on their own, with as little user intervention as possible. Although this goal is far from being achieved, much progress has been made in this area, which in turn has led to the development of hyperheuristics, methods that can automatically develop new heuristics for various problems. Since their inception, hyperheuristics have attracted a great deal of attention and have been successfully applied in various domains.
One application area where hyperheuristics have demonstrated their advantages has been in solving various scheduling problems. One reason for this is that many real-world problems are dynamic in nature, meaning that not all information is available at the outset or that information may change over time. This makes it difficult to apply standard optimization methods and requires the development of customized methods to solve such problems. However, due to the inherent complexity of scheduling problems and the many variants available, it is almost impossible to develop suitable and effective methods for all problems. Therefore, hyperheuristics have provided the means to efficiently and automatically develop new heuristics for a variety of scheduling problems.
The aim of this tutorial is to give a general introduction to hyperheuristic methods and their application to a real problem. This is achieved by giving an overview of the general hyperheuristic framework, introducing and explaining all key concepts using a selected scheduling problem. The tutorial is organised as a step-by-step guide, explaining all the steps required to efficiently apply hyperheuristics to the selected problem. The tutorial covers all the major design decisions that need to be made (hyperheuristics, experimental design, parameter tuning, attribute selection, etc.), but also focuses on advanced concepts that have been successfully applied (ensemble learning, multiobjective optimization, etc.). Thus, the tutorial gives an overview of the whole design process and describes each step, giving advice or pointing out pitfalls.
The tutorial is intended as an entry level tutorial, where no knowledge about scheduling problems is assumed, but a basic knowledge and understanding of evolutionary algorithms and genetic programming is required. Although the focus of the tutorial is on designing heuristics for scheduling problems, many of the conclusions and advice given during the tutorial can be used to design heuristics for other problems. By the end of the tutorial, participants should know how to use genetic programming as a hyperheuristic and what steps are necessary to apply it to create heuristics for a concrete optimization problem.
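To make the idea of GP-generated scheduling heuristics concrete, the sketch below plugs interchangeable priority functions into a fixed single-machine dispatching loop, which is exactly the role a GP-evolved dispatching rule plays; the job data, the dispatcher, and the "evolved-style" expression are invented for illustration and may differ from the examples used in the tutorial.

```python
# A fixed dispatcher; only the priority (dispatching) rule varies.
jobs = [  # (release_time, processing_time, due_date)
    (0, 5, 12), (0, 3, 6), (2, 4, 9), (3, 2, 7), (5, 6, 20),
]

def simulate(jobs, priority):
    """One machine, non-preemptive; whenever the machine is free, start the
    waiting job with the smallest priority value. Returns total tardiness."""
    time, done, tardiness = 0, 0, 0
    pending = sorted(jobs)                      # sorted by release time
    waiting, i = [], 0
    while done < len(jobs):
        while i < len(pending) and pending[i][0] <= time:
            waiting.append(pending[i]); i += 1
        if not waiting:
            time = pending[i][0]; continue      # idle until the next release
        waiting.sort(key=lambda j: priority(p=j[1], d=j[2], t=time))
        r, p, d = waiting.pop(0)
        time += p
        tardiness += max(0, time - d)
        done += 1
    return tardiness

spt = lambda p, d, t: p                          # shortest processing time
edd = lambda p, d, t: d                          # earliest due date
evolved = lambda p, d, t: p * max(d - t, 1)      # the kind of expression GP finds

for name, rule in [("SPT", spt), ("EDD", edd), ("evolved-style", evolved)]:
    print(name, "total tardiness:", simulate(jobs, rule))
```

A GP hyper-heuristic searches the space of such expressions over job and machine attributes, evaluating each candidate rule by simulating it on training scheduling instances.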
Domagoj Jakobovic
Domagoj Jakobovic received his PhD degree in 2005 at the Faculty of Electrical Engineering and Computing, University of Zagreb, on the subject of generating scheduling heuristics with genetic programming. He is currently a full professor at the Department of Electronics, Microelectronics, Computer and Intelligent Systems. His research interests include evolutionary algorithms, optimization methods and parallel algorithms. His most notable contributions are in the areas of machine-supported scheduling, optimization problems in cryptography, and the parallelization and improvement of evolutionary algorithms. He has published more than 100 papers, led several research projects and served as a reviewer for many international journals and conferences. He has supervised seven doctoral theses and more than 170 bachelor and master theses.
Marko Đurasević
Marko Đurasević received his PhD degree from the Faculty of Electrical Engineering and Computing, University of Zagreb, in February 2018, on the subject of generating dispatching rules for the unrelated machines environment. He is currently employed as an Assistant Professor at the Department of Electronics, Microelectronics, Computer and Intelligent Systems of the Faculty of Electrical Engineering and Computing. His research interests include evolutionary computing, optimization methods, machine learning, and scheduling problems. He has published nineteen journal and conference papers.
Su Nguyen
Su Nguyen is a Senior Research Fellow and Algorithm Lead at the Centre for Data Analytics and Cognition (CDAC), La Trobe University, Australia. He received his Ph.D. degree in Artificial Intelligence and Data Analytics from Victoria University of Wellington (VUW), Wellington, New Zealand, in 2013. His expertise includes computational intelligence, optimization, data analytics, large-scale simulation, and their applications in energy, operations management, and social networks. His current research focuses on novel people-centric artificial intelligence to enhance explainability and human-AI interaction by combining the power of evolutionary computation techniques and advanced machine learning algorithms such as deep learning and incremental learning. His works have been published in top peer-reviewed journals in evolutionary computation and operations research such as IEEE Transactions on Evolutionary Computation, IEEE Transactions on Cybernetics, the Evolutionary Computation Journal, and Computers and Operations Research. He serves as a member of the IEEE CIS Technical Committee on Data Mining. He is a guest editor of the special issue on "Automated Design and Adaption of Heuristics for Scheduling and Combinatorial Optimization" in the Genetic Programming and Evolvable Machines journal. He also serves as a reviewer for high-quality journals and top conferences in evolutionary computation, operations research, data mining, and artificial intelligence.
Yi Mei
Yi Mei is a Senior Lecturer at the School of Engineering and Computer Science, Victoria University of Wellington, Wellington, New Zealand. He received his BSc and PhD degrees from the University of Science and Technology of China in 2005 and 2010, respectively. His research interests include evolutionary computation and learning in scheduling and combinatorial optimisation, hyper-heuristics, genetic programming, and automatic algorithm design. Yi has more than 150 fully refereed publications, including papers in the top journals in EC and Operations Research (OR), such as IEEE TEVC, IEEE Transactions on Cybernetics, European Journal of Operational Research, and ACM Transactions on Mathematical Software, and in top EC conferences (GECCO). He won an IEEE Transactions on Evolutionary Computation Outstanding Paper Award 2017 and a Victoria University of Wellington Early Research Excellence Award 2018. As the sole investigator, he won the 2nd prize of the Competition at IEEE WCCI 2014: Optimisation of Problems with Multiple Interdependent Components. He serves as a Vice-Chair of the IEEE CIS Emergent Technologies Technical Committee, and a member of three IEEE CIS Task Forces and two IEEE CIS Technical Committees. He is an Editorial Board Member of the International Journal of Bio-Inspired Computation, an Associate Editor of the International Journal of Applied Evolutionary Computation and the International Journal of Automation and Control, and a guest editor of a special issue of the Genetic Programming and Evolvable Machines journal. He has organised a number of special sessions at international conferences such as IEEE CEC. He serves as a reviewer for over 50 international journals, including the top journals in EC and OR. He was an Outstanding Reviewer for Applied Soft Computing in 2015 and 2017, and for IEEE Transactions on Cybernetics in 2018.
Mengjie Zhang
Mengjie Zhang is a Fellow of the Royal Society of New Zealand, a Fellow of IEEE, and currently Professor of Computer Science at Victoria University of Wellington, where he heads the interdisciplinary Evolutionary Computation Research Group. He is a member of the University Academic Board, a member of the University Postgraduate Scholarships Committee, Associate Dean (Research and Innovation) in the Faculty of Engineering, and Chair of the Research Committee of the Faculty of Engineering and School of Engineering and Computer Science. His research is mainly focused on evolutionary computation, particularly genetic programming, particle swarm optimisation and learning classifier systems, with application areas of feature selection/construction and dimensionality reduction, computer vision and image processing, evolutionary deep learning and transfer learning, job shop scheduling, multi-objective optimisation, and clustering and classification with unbalanced and missing data. He is also interested in data mining, machine learning, and web information extraction. Prof Zhang has published over 700 research papers in refereed international journals and conferences in these areas. He has been serving as an associate editor or editorial board member for over 10 international journals, including IEEE Transactions on Evolutionary Computation, IEEE Transactions on Cybernetics, the Evolutionary Computation Journal (MIT Press), ACM Transactions on Evolutionary Learning and Optimisation, Genetic Programming and Evolvable Machines (Springer), IEEE Transactions on Emerging Topics in Computational Intelligence, Applied Soft Computing, and Engineering Applications of Artificial Intelligence, and as a reviewer for over 30 international journals. He has been a major chair for eight international conferences, and has also been serving as a steering committee member and a program committee member for over 80 international conferences, including all major conferences in evolutionary computation. Since 2007, he has been listed as one of the top ten world genetic programming researchers by the GP bibliography (http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/index.html). He was the Tutorial Chair for GECCO 2014, an AIS-BIO Track Chair for GECCO 2016, an EML Track Chair for GECCO 2017, and a GP Track Chair for GECCO 2020 and 2021. Since 2012, he has been co-chairing several parts of IEEE CEC, SSCI, and the EvoIASP/EvoApplications conference (he has been involved in major EC conferences such as GECCO, CEC, EvoStar, and SEAL). Since 2014, he has been co-organising and co-chairing the special session on evolutionary feature selection and construction at IEEE CEC and SEAL, and has also delivered keynote/plenary talks at IEEE CEC 2018, IEEE ICAVSS 2018, DOCSA 2019, IES 2017, and the Chinese National Conference on AI in Law 2017. Prof Zhang was the Chair of the IEEE CIS Intelligent Systems Applications Technical Committee, the IEEE CIS Emergent Technologies Technical Committee, and the IEEE CIS Evolutionary Computation Technical Committee; a Vice-Chair of the IEEE CIS Task Force on Evolutionary Computer Vision and Image Processing and the IEEE CIS Task Force on Evolutionary Deep Learning and Applications; and also the founding chair of the IEEE Computational Intelligence Chapter in New Zealand.
Introductory Mathematical Programming for EC
Global optimization of complex models has for several decades been approached by means of formal algorithms as well as Mathematical Programming (MP) (often branded as Operations Research, yet strongly rooted in theoretical CS), and simultaneously has been treated by a wide range of dedicated heuristics (frequently under the label of Soft Computing) – where EC resides. The former is applicable when explicit modeling is available, whereas the latter is typically utilized for simulation- or experimentation-based optimization (but is also applicable to explicit models). These two branches complement each other, yet in practice they are studied under two independent CS disciplines.
It is widely recognized nowadays that EC scholars become stronger, better-equipped researchers when obtaining knowledge of this so-called "optimization complement". In other words, the notion that our EC education may omit basic MP is untenable at present times, and this tutorial aims at bridging the gap for our community's scholars.
The tutorial comprises three parts, aiming to introduce basic MP to an EC audience.
The first part introduces the fundamentals of MP. It overviews mathematical optimization and outlines its taxonomy when classified by the underlying model: convex optimization (linear programs (pure-LP) and non-linear programs), versus combinatorial optimization (integer and mixed-integer linear programs ((M)ILP) and integer quadratic programs (IQP)).
It then discusses some of the theoretical aspects, such as polyhedra and the duality theorem.
The second part focuses on MP in practice. The tutorial presents the principles of MP modeling, with emphasis on the roles of constraints and auxiliary/dummy decision variables in MP. It is compared to equivalent modeling for EC heuristics, which operate differently with respect to such components. It then covers selected algorithms for the major classes of problems (Dantzig's Simplex for pure-LP, Ellipsoid for convex models, and Branch-and-bound for ILP).
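As a small worked example of the kind of ILP model discussed in this part (the 0-1 knapsack below is a standard textbook formulation chosen here purely for illustration, not necessarily one of the benchmark problems used in the demo), note how binary decision variables and a single capacity constraint carry all the modeling burden:

\[
\max \sum_{j=1}^{n} v_j x_j
\quad \text{subject to} \quad
\sum_{j=1}^{n} w_j x_j \le W,
\qquad x_j \in \{0,1\},\ j=1,\dots,n,
\]

where $v_j$ and $w_j$ are the value and weight of item $j$ and $W$ is the knapsack capacity.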
The third part constitutes an interactive demo session of problem-solving. We plan to model several ILP benchmark problems in a precept style, using ILOG Studio's OPL, and then to demonstrate their rapid solution by a state-of-the-art solver, namely IBM's CPLEX.
Ofer Shir
Ofer Shir is an Associate Professor of Computer Science. He currently serves as the Head of the Computer Science Department at Tel-Hai College and as a Principal Investigator at the Migal-Galilee Research Institute – both located in the Upper Galilee, Israel. Ofer Shir holds a BSc in Physics and Computer Science from the Hebrew University of Jerusalem, Israel (conferred 2003), and both an MSc and a PhD in Computer Science from Leiden University, The Netherlands (conferred 2004 and 2008; PhD advisers: Thomas Bäck and Marc Vrakking). Upon his graduation, he completed a two-year term as a Postdoctoral Research Associate at Princeton University, USA (2008-2010), hosted by Prof. Herschel Rabitz in the Department of Chemistry, where he specialized in computational aspects of experimental quantum systems. He then joined IBM Research as a Research Staff Member (2010-2013), which constituted his second postdoctoral term, and where he gained real-world experience in convex and combinatorial optimization as well as in decision analytics. His current topics of interest include Statistical Learning in Theory and in Practice, Experimental Optimization, Theory of Randomized Search Heuristics, Scientific Informatics, Natural Computing, Computational Intelligence in Physical Sciences, Quantum Control and Quantum Machine Learning.
Learning Classifier Systems: Cognitive inspired machine learning for eXplainable AI
Learning classifier systems (LCSs) are a concept for developing rule-based machine learning techniques by applying discovery algorithms and learning components. LCSs combine the global search of evolutionary algorithms with the local optimization of reinforcement or supervised learning. Holland and Reitman developed the first LCS in the 1970s and named it CS-1, i.e. "Cognitive System One", to acknowledge its neuroscience basis.
It is one of the earliest AI systems developed on the principles of cognition. LCSs have been developed following two different approaches, i.e. the Michigan approach and the Pittsburgh approach. Michigan-style LCSs, being the most prevalent and archetypal of the rule-based machine learning algorithms, will be the focus of this tutorial. Michigan-style LCSs evolve a population of cooperative classifiers such that each individual classifier represents a distinct rule with associated parameters. These systems learn incrementally, and the whole population of classifiers constitutes the learned solution. This approach differs from other evolutionary computation techniques in that it does not require population initialization.
The decision-making process of an LCS-based system is interpretable, which relates to the hot topic of eXplainable AI (XAI). It is possible to know what features, and at what confidence level, led to a decision. LCSs can work both online and offline, they are extremely flexible, applicable to a wide range of problems, and highly adaptive. A broad range of LCS-based applications has been developed in different domains such as behavior modeling, data mining, classification, modeling, regression, path planning, optimization, and approximation problems.
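To give a flavour of why LCS decisions are interpretable, the sketch below shows the classic ternary-alphabet rule matching used by many Michigan-style systems, where '#' acts as a "don't care" symbol. It is a generic, assumed illustration and not code from XCSCFC or ExSTraCS.

```python
# Minimal sketch of ternary rule matching in a Michigan-style LCS
# (generic illustration; not taken from XCSCFC or ExSTraCS).

def matches(condition: str, state: str) -> bool:
    """A condition like '1#0#' matches a binary state if every
    non-'#' position agrees with the state."""
    return all(c == '#' or c == s for c, s in zip(condition, state))

# Each classifier is a human-readable rule plus bookkeeping parameters.
population = [
    {"condition": "1#0#", "action": 1, "prediction": 0.9, "fitness": 0.8},
    {"condition": "0###", "action": 0, "prediction": 0.7, "fitness": 0.6},
]

state = "1101"
match_set = [cl for cl in population if matches(cl["condition"], state)]
print(match_set)  # the rules (and their confidences) behind a decision are explicit
```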
The goals of this tutorial are three-fold: first, to provide a gentle introduction to LCSs, how they work, what types of problems they can solve, and to discuss the advantages, disadvantages, and limitations of LCSs; second, to provide hands-on experience with one of the state-of-the-art variants of LCSs (XCSCFC or ExSTraCS) to solve a real-world problem, including interpretation and statistical analysis of the experimental results; third, to highlight recent advances in LCSs, especially adaptation to real-world problems ranging from bioinformatics to robotic navigation domains. This part will be contributed by Prof. Will Browne, who will build on the LCS tutorial he presented at GECCO last year.
Abubakar Siddique
Dr. Siddique is a post-doctoral research fellow at the School of Engineering and Computer Science, Victoria University of Wellington. His main research lies in lateralized systems based on artificially intelligent techniques, particularly evolutionary computation, to provide efficient solutions for challenging and complex problems. He is engaged as an author and reviewer for various journals and international conferences including IEEE Transactions on Evolutionary Computation, IEEE Transactions on Cybernetics, IEEE Computational Intelligence Magazine, GECCO, IEEE CEC, and EuroGP. Dr. Siddique received his Bachelor's in Computer Science from Quaid-i-Azam University, his Master's in Computer Engineering from U.E.T Taxila, and his Ph.D. in Computer Engineering from Victoria University of Wellington. He was the recipient of the VUWSA Gold Award and the "Student Of The Session" award during his Ph.D. and bachelor studies, respectively. He spent nine years at Elixir Pakistan, a California-based software company. His last designation was Principal Software Engineer, where he led a team of software developers. He developed enterprise-level software for customers such as Xerox, IBM, and Finis.
Will N. Browne
Prof. Will Browne's research focuses on applied cognitive systems. Specifically, how to use inspiration from natural intelligence to enable computers/machines/robots to behave usefully. This includes cognitive robotics, learning classifier systems, and modern heuristics for industrial application. Prof. Browne has been co-track chair for the Genetics-Based Machine Learning (GBML) track and the co-chair for the Evolutionary Machine Learning track at the Genetic and Evolutionary Computation Conference. He has also provided tutorials on Rule-Based Machine Learning and Advanced Learning Classifier Systems at GECCO, chaired the International Workshop on Learning Classifier Systems (LCSs), and lectured graduate courses on LCSs. He has co-authored the first textbook on LCSs Introduction to Learning Classifier Systems, Springer 2017. Currently, he is Professor and Chair in Manufacturing Robotics at Queensland University of Technology, Brisbane, Queensland, Australia.
Lexicase Selection
Lexicase parent selection decides which individuals in an EC population are selected not based on a single, aggregated fitness value, but instead based on random sequences of test cases (or objectives) used to filter the population down to a single individual. By never aggregating performance on separate cases, lexicase selection effectively makes use of additional information for parent selection. In Genetic Programming, this results in a semantic-aware parent selection, although lexicase selection has proven useful in other EC systems without a genotypic/semantic distinction. Since lexicase selection places importance on performing well on subsets of the test cases rather than generally on all cases at once, it often selects specialist individuals who perform well on some parts of a problem while poorly on others. These specialists contribute to the maintenance of increased diversity in populations evolved under lexicase selection. However, unlike some methods that focus entirely on diversity, lexicase selection also provides strong selection pressure toward individuals that perform well on many combinations of test cases, which can lead to excellent overall performance.
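The core selection loop described above is short enough to sketch directly. The following is a minimal, hedged Python sketch assuming a matrix of per-case errors where lower is better; names such as error_matrix are illustrative, not from any particular library.

```python
import random

def lexicase_select(error_matrix):
    """Minimal lexicase selection sketch: error_matrix[i][t] is the error of
    individual i on test case t; lower is better. Returns the index of the
    selected parent."""
    candidates = list(range(len(error_matrix)))
    cases = list(range(len(error_matrix[0])))
    random.shuffle(cases)                      # a fresh random case ordering per selection event
    for t in cases:
        best = min(error_matrix[i][t] for i in candidates)
        candidates = [i for i in candidates if error_matrix[i][t] == best]
        if len(candidates) == 1:
            break
    return random.choice(candidates)           # ties after all cases: pick at random

# Epsilon lexicase (for real-valued errors) would relax the filter to
# error <= best + epsilon, with epsilon typically derived per case from the data.
```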
Lexicase selection has recently been used in various evolutionary computation systems, often outperforming tournament selection as well as other more advanced parent selection techniques. While originally formulated for Genetic Programming, it has also been used for evolutionary many-objective optimization in Genetic Algorithms, in Learning Classifier Systems, in Evolutionary Robotics, and as a method of building machine learning ensembles. Many variants of lexicase selection have been proposed to tackle different issues. In particular, epsilon lexicase selection has performed well in problem domains where errors on test cases are real numbers. We will discuss these variants and when you might want to use them.
This tutorial will give an introduction to lexicase selection and will discuss major research directions involving both advances in its use and explanations for why it performs well. We will discuss its effects on population diversity, its ability to select specialist individuals, and its propensity toward hyperselection, where it selects the same individual many times in one generation. Participants will come away with a strong understanding of lexicase selection and how it can be applied to any test case-based evolutionary computation system. In addition, they will gain insight into the research frontiers in lexicase selection.
Thomas Helmuth
Thomas Helmuth is an assistant professor of computer science at Hamilton College in New York. He received his Ph.D. from the University of Massachusetts Amherst, working with professor Lee Spector. His research focuses on applications of genetic programming to automatic software synthesis. He has contributed to genetic programming in the areas of parent selection, stack-based program representations (in particular Push), and the benchmarking of program synthesis systems.
William La Cava
William La Cava is a faculty member in the Computational Health Informatics Program (CHIP) at Boston Children’s Hospital and Harvard Medical School. He received his PhD from UMass Amherst with a focus on interpretable modeling of dynamical systems. Prior to joining CHIP, he was a post-doctoral fellow and research associate in the Institute for Biomedical Informatics at the University of Pennsylvania.
Model-Based Evolutionary Algorithms
In model-based evolutionary algorithms (MBEAs) the variation operators are guided by the use of a model that conveys problem-specific information so as to increase the chances that combining the currently available solutions leads to improved solutions. Such models can be constructed beforehand for a specific problem, or they can be learnt during the optimization process.
Replacing traditional crossover and mutation operators by building and using models enables the use of machine learning techniques for automatic discovery of problem regularities and subsequent exploitation of these regularities, thereby enabling the design of optimization techniques that can automatically adapt to a given problem. This is an especially useful feature when considering optimization in a black-box setting. The use of models can furthermore also have major implications for grey-box settings where not everything about the problem is considered to be unknown a priori.
Well-known types of MBEAs are Estimation-of-Distribution Algorithms (EDAs) where probabilistic models of promising solutions are built and samples are subsequently drawn from these models to generate new solutions.
A more recent class of MBEAs is the family of Optimal Mixing EAs such as the Linkage Tree GA and, more generally, various GOMEA variants.
The tutorial will mainly focus on the latter types of MBEAs.
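To make the EDA idea above concrete, here is a minimal univariate (UMDA-style) sketch for binary strings. It models each bit independently and therefore captures none of the linkage structure that the Linkage Tree GA and GOMEA learn; it is only meant to illustrate the build-model/sample loop, with made-up parameters and OneMax as a toy objective.

```python
import random

def umda(fitness, n_bits=20, pop_size=100, mu=50, generations=50):
    """Minimal UMDA-style EDA sketch: estimate a univariate marginal
    distribution from the mu best solutions, then sample a new population.
    (Illustrative only; no linkage learning as in LTGA/GOMEA.)"""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        selected = pop[:mu]
        # build the model: per-bit marginal probability of a 1
        p = [sum(ind[j] for ind in selected) / mu for j in range(n_bits)]
        # sample a new population from the model
        pop = [[1 if random.random() < p[j] else 0 for j in range(n_bits)]
               for _ in range(pop_size)]
    return max(pop, key=fitness)

best = umda(fitness=sum)   # OneMax as a toy objective
```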
Dirk Thierens
Dr. Dirk Thierens is a lecturer/senior researcher at the Department of Information and Computing Sciences at Utrecht University, where he is teaching courses on Evolutionary Computation and Computational Intelligence. He has (co)-authored over 100 peer reviewed papers in Evolutionary Computation. His main current research interests are focused on the design and application of structure learning techniques in the framework of population-based, stochastic search. Dirk contributed to the organization of previous GECCO conferences as track chair, workshop organizer, Editor-in-Chief, and past member of the SIGEVO ACM board.
Peter A. N. Bosman
Peter Bosman is a senior researcher in the Life Sciences research group at the Centrum Wiskunde & Informatica (CWI) (Centre for Mathematics and Computer Science) located in Amsterdam, the Netherlands. Peter obtained both his MSc and PhD degrees on the design and application of estimation-of-distribution algorithms (EDAs). He has (co-)authored over 150 refereed publications on both algorithmic design aspects and real-world applications of evolutionary algorithms. At the GECCO conference, Peter has previously been track (co-)chair, late-breaking-papers chair, (co-)workshop organizer, (co-)local chair (2013) and general chair (2017).
Optimization Challenges at the European Space Agency
Challenging optimization problems arise in the context of space exploration. These include optimization of trajectories, spacecraft maneuvers, mission planning, etc. Tackling these optimization problems often requires expensive computations and simulations of astrodynamics. Techniques used in the specialized literature include Evolutionary Algorithms, Particle Swarm Optimization and other black-box optimizers. These problems are, however, quite different from the ones tackled in the black-box optimization literature, having a large number of decision variables, very rugged landscapes, and dynamic behavior. Expert knowledge about the application scenario is often useful to understand the behavior of optimization algorithms.
This tutorial will introduce the basic knowledge required to understand optimization problems in space, at a level accessible to the GECCO audience. Several optimization problems will be described in detail and possible solution approaches will be discussed. Software tools and example codes will be provided to demonstrate the problems and potential solutions.
The goal of the tutorial is to introduce these problems to the audience of GECCO in an approachable manner that fosters interest and further advances both the application area and the evolutionary computation field. We believe these problems provide challenging real-world examples of the type of optimization problems that are of interest to the GECCO community, but the "bar to entry" prevents many members of the community from engaging with this application area. The aim of the tutorial is to lower this bar as much as possible.
Dario Izzo
Dr. Izzo graduated in Aeronautical Engineering from the University Sapienza of Rome in 1999 and later obtained a second master in "Satellite Platforms" at the University of Cranfield in the UK and a Ph.D. in Mathematical Modelling in 2003, at the University Sapienza of Rome. In 2004, he moved to the European Space Agency (ESA) in the Netherlands as a research fellow in Mission Analysis. Dr. Izzo now heads the Advanced Concepts Team (ACT) and manages its interface to the rest of ESA. During the years spent with the ACT, he has led studies in interplanetary trajectory design and artificial intelligence and has taken part in several other innovative research efforts in diverse fields. He started the Global Trajectory Optimization Competitions, ESA's Summer of Code in Space, and the Kelvins competition platform (https://kelvins.esa.int/). Dr. Izzo has published more than 150 papers in journals, conferences and books. At GECCO 2013, he received the Humies Gold Medal for work on grand tours of the Galilean moons and, the following year, he won the 8th edition of the Global Trajectory Optimization Competition, organized by NASA/JPL, leading a mixed team of ESA/JAXA scientists. His interests range across computer science, open-source software development, interplanetary trajectory optimization, biomimetics and artificial intelligence.
Manuel López-Ibáñez
Dr. López-Ibáñez is a "Beatriz Galindo" Senior Distinguished Researcher at the University of Málaga, Spain and a senior lecturer in the Decision and Cognitive Sciences Research Centre at the Alliance Manchester Business School, University of Manchester, UK. He received the M.S. degree in computer science from the University of Granada, Granada, Spain, in 2004, and the Ph.D. degree from Edinburgh Napier University, U.K., in 2009. He has published 32 journal papers, 9 book chapters and 54 papers in peer-reviewed proceedings of international conferences on diverse areas such as evolutionary algorithms, multi-objective optimization, and various combinatorial optimization problems. His current research interests are experimental analysis and the automatic configuration and design of stochastic optimization algorithms, for single and multi-objective problems. He is the lead developer and current maintainer of the irace software package for automatic algorithm configuration (http://iridia.ulb.ac.be/irace).
Quality-Diversity Optimization
A fascinating aspect of natural evolution is its ability to produce a diversity of organisms that are all high performing in their niche. By contrast, the main artificial evolution algorithms are focused on pure optimization, that is, finding a single high-performing solution.
Quality-Diversity optimization (or illumination) is a recent type of evolutionary algorithm that bridges this gap by generating large collections of diverse solutions that are all high-performing. This concept was introduced by the "Generative and Developmental Systems" community between 2011 (Lehman & Stanley, 2011) and 2015 (Mouret and Clune, 2015) with the "Novelty Search with Local Competition" and "MAP-Elites" evolutionary algorithms. The main differences with multi-modal optimization algorithms are that (1) Quality Diversity typically works in the behavioral space (or feature space), and not in the genotypic space, and (2) Quality Diversity attempts to fill the whole behavior space, even if the niche is not a peak in the fitness landscape. In the last 5 years, more than 100 papers have been written about quality diversity, many of them in the GECCO community (a non-exhaustive list is available at https://quality-diversity.github.io/papers).
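A minimal MAP-Elites sketch may help make the idea concrete: the archive is a grid over the behaviour (feature) space, and each cell keeps the best solution whose descriptor falls in that cell. The functions evaluate, random_solution and mutate below are problem-specific placeholders assumed for illustration; this is a sketch, not a reference implementation.

```python
import random

# Minimal MAP-Elites sketch. `evaluate` returns (fitness, descriptor) with the
# descriptor assumed to lie in [0, 1]^d; `random_solution` and `mutate` are
# problem-specific placeholders, not part of any particular library.
def map_elites(evaluate, random_solution, mutate, cells=10, budget=10_000):
    archive = {}                                    # cell index -> (fitness, solution)
    for it in range(budget):
        if it < 100 or not archive:
            x = random_solution()                   # bootstrap with random solutions
        else:
            _, elite = random.choice(list(archive.values()))
            x = mutate(elite)                       # vary an existing elite
        fitness, descriptor = evaluate(x)
        cell = tuple(min(int(d * cells), cells - 1) for d in descriptor)
        if cell not in archive or fitness > archive[cell][0]:
            archive[cell] = (fitness, x)            # keep the best per behaviour niche
    return archive                                  # a diverse collection of high performers
```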
The collections of solutions obtained by Quality Diversity algorithms open many new applications for evolutionary computation. In robotics, they have been used to create repertoires of neural network policies (Nilsson & Cully, 2021, best paper of the NE track) and to allow robots to adapt to damage in a few minutes (Cully et al., 2015, Nature); in engineering, they can be used to propose a diversity of optimized aerodynamic shapes (Gaier et al., 2018, best paper of the CS track); they have also recently been used in video games (Khalifa et al., 2018) and for Workforce Scheduling and Routing Problems (WSRP) (Urquhart & Hart, 2018).
This tutorial will give an overview of the various questions addressed in Quality-Diversity optimization, relying on examples from the literature. Past achievements and major contributions, as well as specific challenges in Quality-Diversity optimization will be described. The tutorial will in particular focus on:
- what is Quality-Diversity optimization?
- similarities and differences with traditional evolutionary algorithms;
- existing variants of Quality-Diversity algorithms;
- example of application: Learning behavioural repertoires in robotics, evolving 3D shapes for aerodynamic design;
- open questions and future challenges.
The tutorial will effectively complement the Complex Systems track, which usually contains several papers about Quality-Diversity algorithms. For instance, 47% (7/15) of the papers accepted in the CS track in 2021 used Quality-Diversity optimization, 56% (5/9) in 2020, 36% (4/11) in 2019 and 20% (3/15) in 2018. After the conference, the video of the tutorial will be hosted on the quality-diversity website (https://quality-diversity.github.io/), which gathers papers and educational content related to quality-diversity algorithms.
Antoine Cully
Antoine Cully is Senior Lecturer (Associate Professor) at Imperial College London (United Kingdom). His research is at the intersection between artificial intelligence and robotics. He applies machine learning approaches, like evolutionary algorithms, to robots to increase their versatility and their adaptation capabilities. In particular, he has recently developed Quality-Diversity optimization algorithms to enable robots to autonomously learn large behavioural repertoires. For instance, this approach enabled legged robots to autonomously learn how to walk in every direction or to adapt to damage situations. Antoine Cully received the M.Sc. and PhD degrees in robotics and artificial intelligence from the Pierre et Marie Curie University of Paris, France, in 2012 and 2015, respectively, and the engineer degree from the School of Engineering Polytech'Sorbonne, in 2012. His PhD dissertation received three Best-Thesis awards. He has published several journal papers in prestigious journals including Nature, IEEE Transactions on Evolutionary Computation, and the International Journal of Robotics Research. His work was featured on the cover of Nature (Cully et al., 2015) and received the "Outstanding Paper of 2015" and "Outstanding Paper of 2020" awards from the Society for Artificial Life, and the French "La Recherche" award (2016). He also received the best paper award from GECCO 2021 (in the NE track).
Jean-Baptiste Mouret
Jean-Baptiste Mouret is a senior researcher ("directeur de recherche") at Inria, a French research institute dedicated to computer science and mathematics. He was previously an assistant professor ("maître de conférences") at ISIR (Institute for Intelligent Systems and Robotics), which is part of Université Pierre et Marie Curie - Paris 6 (UPMC, now Sorbonne Université). He obtained an M.S. in computer science from EPITA in 2004, an M.S. in artificial intelligence from the Pierre and Marie Curie University (Paris, France) in 2005, and a Ph.D. in computer science from the same university in 2008. He was the principal investigator of an ERC grant (ResiBots - Robots with animal-like resilience, 2015-2020) and was the recipient of a French ANR young researcher grant (Creadapt - Creative adaptation by Evolution, 2012-2015). Overall, J.-B. Mouret conducts research that intertwines evolutionary algorithms, neuro-evolution, and machine learning to make robots more adaptive. His work was featured on the cover of Nature (Cully et al., 2015) and received the 2017 ISAL Award for Distinguished Young Investigator in the field of Artificial Life, the "Outstanding Paper of 2015" award from the Society for Artificial Life (2016), the French "La Recherche" award (2016), 3 GECCO best paper awards (2011, GDS track; 2017 & 2018, CS track), and the IEEE CEC "best student paper" award (2009). He co-chaired the Evolutionary Machine Learning track at GECCO 2019 and the Generative and Developmental Systems track in 2015.
Stéphane Doncieux
Stéphane Doncieux is Professeur des Universités (Professor) in Computer Science at Sorbonne University, Paris, France. He is an engineer of the ENSEA, a French electronic engineering school. He obtained a Master's degree in Artificial Intelligence and Pattern Recognition in 1999 and defended a PhD in Computer Science in 2003. He was responsible, with Bruno Gas, for the SIMA research team from its creation in 2007 until 2011. From 2011 to 2018, he was the head of the AMAC (Architecture and Models of Adaptation and Cognition) research team, with 11 permanent researchers, 3 post-docs and engineers, and 11 PhD students. Since January 2019, he has been deputy director of the ISIR lab, one of the largest robotics labs in France. He has organized several workshops on ER at conferences like GECCO and IEEE-IROS and has edited 2 books. Stéphane Doncieux was co-chair of the GECCO Complex Systems track in 2019 and 2020. His research is in cognitive robotics, with a focus on the use of evolutionary algorithms in the context of the synthesis of robot controllers. He worked on selective pressures and on the use of evolutionary methods in a developmental robotics approach, in which evolutionary algorithms are used for their creativity to bootstrap a cognitive process and allow it to acquire experience that can later be redescribed in another representation for faster and more effective task resolution. This was the goal of the H2020 DREAM European project that he coordinated (http://dream.isir.upmc.fr).
Representations for Evolutionary Algorithms
Successful and efficient use of evolutionary algorithms depends on the choice of the genotype, the problem representation (mapping from genotype to phenotype) and on the choice of search operators that are applied to the genotypes. These choices cannot be made independently of each other. The question whether a certain representation leads to better performing EAs than an alternative representation can only be answered when the operators applied are taken into consideration. The reverse is also true: deciding between alternative operators is only meaningful for a given representation.
Research in the last few years has identified a number of key concepts to analyse the influence of representation-operator combinations on the performance of evolutionary algorithms. Relevant concepts are the locality and redundancy of representations. Locality is a result of the interplay between the search operator and the genotype-phenotype mapping. Representations have high locality if the application of variation operators results in new solutions similar to the original ones. Representations are redundant if the number of possible genotypes exceeds the number of phenotypes. Redundant representations can lead to biased encodings if some phenotypes are on average represented by a larger number of genotypes, or if search operators favor some kinds of phenotypes.
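As a small, hedged illustration of locality (not material from the tutorial itself), the sketch below measures how strongly a single bit-flip mutation changes the phenotype when bit strings are interpreted either as standard binary integers or as binary-reflected Gray-coded integers; the measure used here, the average absolute phenotypic change per bit flip, is just one simple proxy for locality.

```python
# Illustrative locality check: average phenotypic change caused by a single
# bit flip under two genotype-phenotype mappings for integers.
def gray_decode(g: int) -> int:
    """Decode a binary-reflected Gray code to the integer it represents."""
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

def avg_jump(decode, n_bits=8):
    """Average |phenotype(g) - phenotype(g with one bit flipped)| over all
    genotypes and all single bit flips."""
    total, count = 0, 0
    for g in range(2 ** n_bits):
        for k in range(n_bits):
            total += abs(decode(g) - decode(g ^ (1 << k)))
            count += 1
    return total / count

print("standard binary:", avg_jump(lambda g: g))   # identity mapping
print("Gray-coded:     ", avg_jump(gray_decode))   # Gray genotype-phenotype mapping
```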
The tutorial gives a brief overview about existing guidelines for representation design, illustrates different aspects of representations, gives a brief overview of models describing the different aspects, and illustrates the relevance of the aspects with practical examples.
It is expected that the participants have a basic understanding of EA principles.
Franz Rothlauf
He received a Diploma in Electrical Engineering from the University of Erlangen, Germany, a Ph.D. in Information Systems from the University of Bayreuth, Germany, and a Habilitation from the University of Mannheim, Germany, in 1997, 2001, and 2007, respectively. Since 2007, he has been professor of Information Systems at the University of Mainz. He has published more than 100 technical papers in the context of planning and optimization, evolutionary computation, e-business, and software engineering, co-edited several conference proceedings and edited books, and is author of the books "Representations for Genetic and Evolutionary Algorithms" and "Design of Modern Heuristics". At the University of Mainz, he is Academic Director of the Executive MBA program (since 2013) and Chief Information Officer (since 2016). His main research interests are the application of modern heuristics in planning and optimization systems. He is a member of the Editorial Board of the Evolutionary Computation Journal (ECJ), ACM Transactions on Evolutionary Learning and Optimization (TELO), and Business & Information Systems Engineering (BISE). Since 2007, he has been a member of the Executive Committee of ACM SIGEVO. He was treasurer of ACM SIGEVO between 2011 and 2019. Since 2019, he has served as chair of ACM SIGEVO. He has been organizer of a number of workshops and tracks on heuristic optimization, chair of EvoWorkshops in 2005 and 2006, co-organizer of the European workshop series on "Evolutionary Computation in Communications, Networks, and Connected Systems", co-organizer of the European workshop series on "Evolutionary Computation in Transportation and Logistics", and co-chair of the program committee of the GA track at GECCO 2006. He was conference chair of GECCO 2009.
Runtime Analysis of Population-based Evolutionary Algorithms
Populations are at the heart of evolutionary algorithms (EAs). They provide the genetic variation which selection acts upon. A complete picture of EAs can only be obtained if we understand their population dynamics. A rich theory on runtime analysis (also called time-complexity analysis) of EAs has been developed over the last 20 years. The goal of this theory is to show, via rigorous mathematical means, how the performance of EAs depends on their parameter settings and the characteristics of the underlying fitness landscapes. Initially, runtime analysis of EAs was mostly restricted to simplified EAs that do not employ large populations, such as the (1+1) EA. This tutorial introduces more recent techniques that enable runtime analysis of EAs with realistic population sizes. The tutorial begins with a brief overview of the generational, non-elitist population-based EAs that are covered by the techniques. We recall the common selection mechanisms and how to measure the selection pressure they induce. The main part of the tutorial covers in detail widely applicable techniques tailored to the analysis of populations. We discuss random family trees and branching processes, drift and concentration of measure in populations, and level-based analyses. To illustrate how these techniques can be applied, we consider several fundamental questions: What characteristics of fitness landscapes (such as local optima and fitness valleys) determine their time-complexity? When are populations necessary for efficient optimisation with EAs? What determines an EA's tolerance to uncertainty, e.g. in the form of noisy or partially available fitness? When do non-elitist EAs outperform elitist ones? What is the appropriate balance between exploration and exploitation, and how does this depend on the relationship between mutation and selection rates?
Per Kristian Lehre
Dr Lehre is Senior Lecturer in the School of Computer Science at the University of Birmingham (since Jan 2017). Before joining Birmingham, he was, from 2011, Assistant Professor at the University of Nottingham. He obtained MSc and PhD degrees in Computer Science from the Norwegian University of Science and Technology (NTNU) in Trondheim. He completed the PhD in 2006 under the supervision of Prof Pauline Haddow, and joined the School of Computer Science at The University of Birmingham, UK, as a Research Fellow in January 2007 with Prof Xin Yao. He was a Post Doctoral Fellow at DTU Informatics, Technical University of Denmark in Lyngby, Denmark from April 2010. Dr Lehre's research interests are in theoretical aspects of nature-inspired search heuristics, in particular runtime analysis of population-based evolutionary algorithms. His research has won numerous best paper awards, including GECCO (2013, 2010, 2009, 2006), ICSTW (2008), and ISAAC (2014). He is vice-chair of the IEEE Task Force on Theoretical Foundations of Bio-inspired Computation, a member of the editorial board of Evolutionary Computation, and an associate editor of IEEE Transactions on Evolutionary Computation. He has guest-edited special issues of Theoretical Computer Science and IEEE Transactions on Evolutionary Computation on theoretical foundations of evolutionary computation. Dr Lehre has given many tutorials on evolutionary computation in summer schools (UK: 2007, 2015, 2016; France: 2009, 2010, and 2011; Estonia: 2010), as well as at major conferences and workshops (GECCO 2013-2020, CEC 2013 and 2015, PPSN 2016, 2018, 2020, and ThRaSH 2013). He was the main coordinator of the 2M euro EU-funded project SAGE, which brought together the theory of evolutionary computation and population genetics.
Pietro Simone Oliveto
Pietro S. Oliveto is Head of the Algorithms Group at the University of Sheffield, UK. He received the Laurea degree and PhD degree in computer science respectively from the University of Catania, Italy in 2005 and from the University of Birmingham, UK in 2009. He has been EPSRC PhD+ Fellow (2009-2010) and EPSRC Postdoctoral Fellow (2010-2013) at Birmingham and Vice-Chancellor's Fellow (2013-2016) and EPSRC Early Career Fellow at Sheffield.
His main research interest is the performance analysis of bio-inspired computation techniques including evolutionary algorithms, genetic programming, artificial immune systems, hyper-heuristics and algorithm configuration. He has guest-edited journal special issues of Computer Science and Technology, Evolutionary Computation, Theoretical Computer Science and IEEE Transactions on Evolutionary Computation. He has co-chaired the IEEE Symposium on Foundations of Computational Intelligence (FOCI) from 2015 to 2021 and has been co-program chair of the ACM Conference on Foundations of Genetic Algorithms (FOGA 2021). He is part of the Steering Committee of the annual workshop on Theory of Randomized Search Heuristics (ThRaSH), Leader of the Benchmarking Working Group of the COST Action ImAppNIO, a member of the EPSRC Peer Review College, and Associate Editor of IEEE Transactions on Evolutionary Computation.
Selection Hyper-heuristics
Hyper-heuristics are a set of techniques that operate on (and explore) the space of heuristics, as opposed to directly searching the space of solutions. Their goal is to raise the level of generality by offering search methodologies that can solve a wide range of optimisation problems instead of a single problem domain. This is in contrast to many approaches, which represent customised methods for a single problem domain or a narrow class of problem instances. The current state-of-the-art in hyper-heuristic research comprises a set of methods that are broadly split into two classes: they can be used to select between existing heuristics (e.g. mutation operations or hill climbers) or to generate new heuristics to aid in solving hard computational problems. This tutorial focuses on selection hyper-heuristics, which utilise two consecutive methods: a "selection method" that selects a suitable low-level heuristic from a suite of heuristics and applies it to a candidate solution, and a "move acceptance" method that decides whether to accept or reject the newly generated solution.
The tutorial gives a brief history of this emerging area, discusses recent selection hyper-heuristic frameworks, presents open problems and delivers an interesting, educational demonstration of how Python can be used to implement selection hyper-heuristics for some optimisation problems.
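In the spirit of that demonstration, the following is a minimal, hedged Python sketch of the two-stage structure described above (simple random heuristic selection plus an improving-or-equal move acceptance); it is an illustration under our own assumptions, not the tutorial's demo code.

```python
import random

def selection_hyper_heuristic(initial, low_level_heuristics, cost, iterations=1000):
    """Minimal sketch: 'simple random' heuristic selection combined with an
    improving-or-equal move-acceptance criterion (not the tutorial's demo code)."""
    current, current_cost = initial, cost(initial)
    for _ in range(iterations):
        heuristic = random.choice(low_level_heuristics)   # selection method
        candidate = heuristic(current)
        candidate_cost = cost(candidate)
        if candidate_cost <= current_cost:                # move acceptance
            current, current_cost = candidate, candidate_cost
    return current, current_cost

# Toy usage: two low-level heuristics on a bit string, minimising the number of zeros.
def flip_one_bit(s):
    i = random.randrange(len(s))
    return s[:i] + [1 - s[i]] + s[i + 1:]

def shuffle_bits(s):
    return random.sample(s, len(s))

best, best_cost = selection_hyper_heuristic(
    initial=[random.randint(0, 1) for _ in range(30)],
    low_level_heuristics=[flip_one_bit, shuffle_bits],
    cost=lambda s: len(s) - sum(s))
```

Real selection hyper-heuristics replace the random choice with learning-based selection (e.g. reinforcement-style scoring of heuristics) and the naive acceptance with more sophisticated criteria; the point here is only the separation of the two components.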
Ahmed Kheiri
Ahmed Kheiri is a Senior Lecturer (Associate Professor) at Lancaster University. He received his B.Sc. (Hons - First Class) from the University of Khartoum, Sudan, and his M.Sc. (Distinction) and PhD from the University of Nottingham, UK. He held research positions at the University of Exeter and the Cardiff School of Mathematics. He has designed and implemented intelligent, ready-to-use hyper-heuristic methods for decision support and applied them to a wide range of real-world problems. He has been successful in winning research funding from a variety of sources including EPSRC and KTP. He has published more than 40 refereed papers in reputable journals and highly respected international conferences. He has published two invited review papers in EJOR on selection hyper-heuristics and metaheuristics. During his career, he received several academic awards, some awarded for participation in international optimisation challenges. In 2020, he received the Lancaster University Management School Dean's Award for his excellent achievements across the board in research, teaching and engagement.
Ed Keedwell
Ed Keedwell is Professor of Artificial Intelligence and a Fellow of the Alan Turing Institute. He joined the Computer Science discipline in 2006 and was appointed as a lecturer in 2009. He has research interests in optimisation (e.g. genetic algorithms, swarm intelligence, hyper-heuristics), machine learning and AI-based simulation, and their application to a variety of difficult problems in bioinformatics and engineering, yielding over 160 journal and conference publications. He leads a research group focusing on applied artificial intelligence and has been involved with successful funding applications totalling over £3.5 million from the EPSRC, Innovate UK, EU and industry. Particular areas of current interest are the optimisation of transportation systems, the development of sequence-based hyper-heuristics and human-in-the-loop optimisation methods for applications in engineering.
Sequential Experimentation by Evolutionary Algorithms
This tutorial addresses applications of Evolutionary Algorithms (EAs) to global optimization tasks where the objective function cannot be calculated (neither an explicit model nor a simulation exists), but rather requires an experimental measurement in the real-world – e.g., in pharmaceuticals, biocatalyst design, protein expression, quantum processes, engineering – to mention only a few.
The use of EAs for experimental optimization is placed in its historical context with an overview of the landmark studies in this area carried out in the 1960s at the Technical University of Berlin. At the same time, statistics-based Design of Experiments (DoE) methodologies, rooted in the late 1950s, constitute a gold-standard in existing laboratory equipment, and are therefore reviewed as well at an introductory level to the GECCO audience.
The main characteristics of experimental optimization work, in comparison to optimization of simulated systems, are discussed, and practical guidelines for real-world experiments with EAs are given. For example, experimental problems can constrain the evolution due to overhead considerations, interruptions, changes of variables, missing assays, imposed population-sizes, and target assays that have different evaluation times (in the case of multiple objective optimization problems).
Selected modern-day case studies show the persistence of experimental optimization problems today. These cover experimental quantum systems, assay-based “wet experiments” such as in combinatorial drug discovery and protein expression, as well as others. These applications can throw EAs out of their normal operating envelope, and raise research questions in a number of different areas ranging across constrained EAs, multiple objective EAs, robust and reliable methods for noisy and dynamic problems, and metamodeling methods for expensive evaluations.
Ofer Shir
Ofer Shir is an Associate Professor of Computer Science. He currently serves as the Head of the Computer Science Department in Tel-Hai College, and as a Principal Investigator at Migal-Galilee Research Institute – both located in the Upper Galilee, Israel. Ofer Shir holds a BSc in Physics and Computer Science from the Hebrew University of Jerusalem, Israel (conferred 2003), and both MSc and PhD in Computer Science from Leiden University, The Netherlands (conferred 2004, 2008; PhD advisers: Thomas Bäck and Marc Vrakking). Upon his graduation, he completed a two-year term as a Postdoctoral Research Associate at Princeton University, USA (2008-2010), hosted by Prof. Herschel Rabitz in the Department of Chemistry – where he specialized in computational aspects of experimental quantum systems. He then joined IBM-Research as a Research Staff Member (2010-2013), which constituted his second postdoctoral term, and where he gained real-world experience in convex and combinatorial optimization as well as in decision analytics. His current topics of interest include Statistical Learning in Theory and in Practice, Experimental Optimization, Theory of Randomized Search Heuristics, Scientific Informatics, Natural Computing, Computational Intelligence in Physical Sciences, Quantum Control and Quantum Machine Learning.
Thomas Bäck
Thomas Bäck is Full Professor of Computer Science at the Leiden Institute of Advanced Computer Science (LIACS), Leiden University, The Netherlands, where he has been head of the Natural Computing group since 2002. He received his PhD (adviser: Hans-Paul Schwefel) in computer science from Dortmund University, Germany, in 1994, and then worked for the Informatik Centrum Dortmund (ICD) as department leader of the Center for Applied Systems Analysis. From 2000 to 2009, Thomas was Managing Director of NuTech Solutions GmbH and CTO of NuTech Solutions, Inc. He gained ample experience in solving real-life problems in optimization and data mining through working with global enterprises such as BMW, Beiersdorf, Daimler, Ford, Honda, and many others. Thomas Bäck has more than 350 publications on natural computing, as well as two books on evolutionary algorithms: Evolutionary Algorithms in Theory and Practice (1996) and Contemporary Evolution Strategies (2013). He is co-editor of the Handbook of Evolutionary Computation and, most recently, the Handbook of Natural Computing. He is also editorial board member and associate editor of a number of journals on evolutionary and natural computing. Thomas received the best dissertation award from the German Society of Computer Science (Gesellschaft für Informatik, GI) in 1995 and the IEEE Computational Intelligence Society Evolutionary Computation Pioneer Award in 2015.
Statistical Analyses for Multi-objective Stochastic Optimization Algorithms
Moving into the era of explainable AI, a comprehensive comparison of the performance of multi-objective stochastic optimization algorithms has become an increasingly important task. One of the most common ways to compare the performance of stochastic optimization algorithms is to apply statistical analyses. However, there are still caveats that need to be addressed to acquire relevant and valid conclusions. First of all, the selection of the performance measures (i.e., quality indicators) should be done with great care, since some measures can be correlated and their data is then fed into the statistical analyses. Further, the statistical analyses require good knowledge from the user to apply them properly, which is often lacking and leads to incorrect conclusions. Next, the standard approaches can be influenced by outliers (e.g., poor runs) or by statistically insignificant differences (solutions within some ε-neighborhood) that exist in the data. The analysis is even more complicated because the selection of quality indicators as performance measures for multi-objective optimization algorithms can be biased towards the user's preference.
This tutorial will provide an overview of the current approaches for analyzing algorithm performance, with special emphasis on caveats that are often overlooked. We will show how these can be easily avoided by applying simple principles that lead to Deep Statistical Comparison. The tutorial will not be based on equations, but mainly on examples through which a deeper understanding of statistics will be achieved. Examples will be based on various comparison scenarios for multi-objective optimization algorithms. The tutorial will end with a demonstration of a web-service-based framework (i.e. DSCTool) for statistical comparison of multi-objective stochastic optimization algorithms. In addition, R and Python clients for performing the analyses will also be presented.
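For reference, a typical instance of the "standard approach" whose caveats the tutorial discusses is a pairwise nonparametric test over per-run indicator values. The sketch below (with made-up hypervolume data) is only that baseline illustration; it is not Deep Statistical Comparison or DSCTool.

```python
# Typical "standard approach": compare per-run hypervolume values of two
# algorithms with a nonparametric test. The data below are made up; this is
# a baseline illustration, not Deep Statistical Comparison / DSCTool.
from scipy.stats import mannwhitneyu

hv_algo_a = [0.71, 0.69, 0.72, 0.70, 0.68, 0.73, 0.71, 0.70, 0.69, 0.72]
hv_algo_b = [0.66, 0.67, 0.74, 0.65, 0.66, 0.68, 0.67, 0.66, 0.75, 0.67]

stat, p_value = mannwhitneyu(hv_algo_a, hv_algo_b, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")
# Outliers (e.g., a couple of unusually good runs) and practically insignificant
# differences can distort such a comparison, which is exactly the kind of
# caveat the tutorial addresses.
```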
Tome Eftimov
Tome Eftimov is a researcher at the Computer Systems Department at the Jožef Stefan Institute, Ljubljana, Slovenia. He is a visiting assistant professor at the Faculty of Computer Science and Engineering, Ss. Cyril and Methodius University, Skopje. He was a postdoctoral research fellow at Stanford University, USA, where he investigated biomedical relations outcomes using AI methods. In addition, he was a research associate at the University of California, San Francisco, investigating AI methods for extracting rheumatology concepts from electronic health records. He obtained his PhD in Information and Communication Technologies (2018). His research interests include statistical data analysis, metaheuristics, natural language processing, representation learning, and machine learning. He has been involved in courses on probability and statistics, and statistical data analysis. The work related to Deep Statistical Comparison has been presented as a tutorial (i.e. IJCCI 2018, IEEE SSCI 2019, GECCO 2020, and PPSN 2020) or as an invited lecture at several international conferences and universities. He is an organizer of several workshops related to AI at high-ranked international conferences. He is the coordinator of a national project "Mr-BEC: Modern approaches for benchmarking in evolutionary computation" and actively participates in European projects.
Peter Korošec
Peter Korošec received his Ph.D. degree from the Jožef Stefan Postgraduate School, Ljubljana, Slovenia, in 2006. Since 2002, he has been a researcher at the Computer Systems Department, Jožef Stefan Institute, Ljubljana. His current areas of research include understanding the principles behind meta-heuristic optimization and parallel/distributed computing. He has participated in several tutorials related to statistical analysis of optimization algorithms presented at different international conferences and co-organized a workshop on understanding evolutionary optimization behavior.
Theory and Practice of Population Diversity in Evolutionary Computation
Divergence of character is a cornerstone of natural evolution. By contrast, evolutionary optimization processes are plagued by an endemic lack of population diversity: all candidate solutions eventually crowd the very same areas of the search space. The problem is usually labeled with the oxymoron "premature convergence" and has very different, almost always deleterious, consequences in different applications. At the same time, case studies from theoretical runtime analyses irrefutably demonstrate the benefits of diversity.
This tutorial will give an introduction into the area of “diversity promotion”: we will define the term “diversity” in the context of Evolutionary Computation, showing how practitioners tried, with mixed results, to promote it.
Then, we will analyze the benefits brought by population diversity in specific contexts, namely global exploration and enhancing the power of crossover. To this end, we will survey recent results from rigorous runtime analysis on selected problems. The presented analyses rigorously quantify the performance of evolutionary algorithms in the light of population diversity, laying the foundation for a rigorous understanding of how search dynamics are affected by the presence or absence of diversity and the introduction of diversity mechanisms.
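As one concrete (and deliberately simple) example of what "diversity" can mean operationally, a common genotypic measure for binary-coded populations is the average pairwise Hamming distance; the sketch below is a generic illustration and not taken from the tutorial material.

```python
from itertools import combinations

def avg_pairwise_hamming(population):
    """Average pairwise Hamming distance of a population of equal-length
    bit strings: one simple, commonly used genotypic diversity measure."""
    pairs = list(combinations(population, 2))
    if not pairs:
        return 0.0
    return sum(sum(a != b for a, b in zip(x, y)) for x, y in pairs) / len(pairs)

converged = [[1, 0, 1, 1]] * 4
diverse   = [[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0], [0, 0, 0, 1]]
print(avg_pairwise_hamming(converged))  # 0.0: the population has collapsed
print(avg_pairwise_hamming(diverse))    # > 0: genotypic diversity is present
```

Phenotypic or behavioural measures, as well as explicit diversity mechanisms (niching, crowding, fitness sharing), build on the same basic idea of quantifying how spread out the population is.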
Dirk Sudholt
Dirk Sudholt is a Full Professor and Chair of Algorithms for Intelligent Systems at the University of Passau, Germany. He previously held a post as Senior Lecturer at the University of Sheffield, UK, and founding head of the Algorithms research group. He obtained his PhD in computer science in 2008 from the Technische Universitaet Dortmund, Germany, under the supervision of Prof. Ingo Wegener. His research focuses on the computational complexity of randomized search heuristics such as evolutionary algorithms and swarm intelligence. Most relevant to this tutorial is his work on runtime analysis of diversity mechanisms and the benefits of crossover in genetic algorithms. Dirk has served as chair of FOGA 2017, the GECCO Theory track in 2016 and 2017 and as guest editor for Algorithmica. He is a member of the editorial board of Evolutionary Computation and The Computer Journal and associate editor for Natural Computing. He has more than 100 refereed publications and won 9 best paper awards at GECCO and PPSN.
Giovanni Squillero
Giovanni Squillero is an associate professor of computer science at Politecnico di Torino, Department of Control and Computer Engineering. His research mixes computational intelligence and machine learning, with particular emphasis on evolutionary computation, bio-inspired meta-heuristics, and multi-agent systems; in more down-to-earth activities, he studies approximate optimization techniques able to achieve acceptable solutions with reasonable resources. The industrial applications of his work range from electronic CAD to bio-informatics. As of April 2022, he is credited as an author in 3 books, 36 journal articles, 11 book chapters, and 154 papers in conference proceedings; he is also listed among the editors of 16 volumes.
Squillero has been a Senior Member of the IEEE since 2014; currently, he serves on the Games Technical Committee of the IEEE Computational Intelligence Society and on the editorial board of Genetic Programming and Evolvable Machines. He was the program chair of the European Conference on the Applications of Evolutionary Computation in 2016 and 2017, and he is now a member of the EvoApplications steering committee. In 2018 he co-organized EvoML, the workshop on Evolutionary Machine Learning, at PPSN; in 2016 and 2017, MPDEA, the workshop on Measuring and Promoting Diversity in Evolutionary Algorithms, at GECCO; and from 2004 to 2014, EvoHOT, the Workshops on Evolutionary Hardware Optimization Techniques.
Since 1998, Squillero has lectured in 66 university courses (15 Ph.D. and 51 M.S./B.Sc.; 36 in Italian and 30 in English); he has contributed to an additional 37 courses as an assistant or in other subsidiary roles. He has presented his research at 14 international events, including invited talks, seminars and tutorials.
Transfer Learning in Evolutionary Spaces
Evolutionary algorithms have been effectively applied to various search spaces. Traditionally, evolutionary algorithms explore a solution space. However, since their inception, the application of evolutionary algorithms has been extended to other spaces, including the program, heuristic and design spaces. More recently, the potential of transfer learning in evolutionary algorithms, focusing predominantly on the solution and program spaces, has been established.
This tutorial examines the use of transfer learning in the application of evolutionary algorithms to four spaces, namely the solution, program, heuristic (hyper-heuristics) and design (automated design of machine learning and search algorithms) spaces. The tutorial will provide an overview of transfer learning for the four spaces in terms of what to transfer, when to transfer and how to transfer knowledge. A case study for each of the spaces will be presented. The benefits of transfer learning for each of the four spaces will be highlighted. The challenges associated with transfer learning in these evolutionary spaces will also be examined.
Determining what knowledge to transfer, when to transfer the knowledge and how to transfer the knowledge for the different spaces is itself an optimization problem. Traditionally this has been done manually. The tutorial will also look at how this process can be automated. A Python library ATLEA (Automated Transfer Learning for Evolutionary Algorithms) for the automated design of transfer learning in evolutionary algorithms will be presented.
Nelishia Pillay
Nelishia Pillay is a Professor at the University of Pretoria, South Africa. She holds the Multichoice Joint-Chair in Machine Learning and the SARChI Chair in Artificial Intelligence. She is chair of the IEEE Technical Committee on Intelligent Systems Applications, the IEEE Task Force on Hyper-Heuristics, and the IEEE Task Force on Automated Algorithm Design, Configuration and Selection. Her research areas include hyper-heuristics, automated design of machine learning and search techniques, combinatorial optimization, genetic programming, genetic algorithms and deep learning, and more generally machine learning and optimization for sustainable development. These are the focus areas of the NICOG (Nature-Inspired Computing Optimization) research group, which she established.