Call for Workshop Papers

Instructions for Workshop Papers

Workshop papers must be submitted using the GECCO submission site. After logging in, authors need to select the "Workshop Paper" submission form. In the form, authors must select the workshop they are submitting to. To see a sample of the "Workshop Paper" submission form, go to GECCO's submission site and choose "Sample Submission Forms".

Submitted papers must not exceed 8 pages (excluding references) and must comply with the GECCO 2022 Papers Submission Instructions. Please review the individual workshop instructions, since some workshops may reduce the page limit. It is recommended to use the same templates as for papers submitted to the main tracks. Author information does not need to be removed if the workshop does not use a double-blind review process (please check the workshop description or ask the workshop organizers about this).

All accepted papers will be presented at the corresponding workshop and appear in the GECCO Conference Companion Proceedings. By submitting a paper, the author(s) agree that, if their paper is accepted, they will:

  • Submit a final, revised, camera-ready version to the publisher on or before the camera-ready deadline
  • Register at least one author before May 2, 2022 to attend the conference
  • Provide a pre-recorded version of the talk or poster and be present during its online transmission (which will occur during the days of the conference) to answer questions from the (online) audience.



Important Dates

  • Submission opening: February 7, 2022
  • Submission deadline: April 11, 2022
  • Notification: April 25, 2022
  • Camera-ready: May 2, 2022


Each accepted paper needs to have at least one author registered before the author registration deadline. An author presenting more than one paper at the conference does not pay any additional registration fees.

List of Workshops

AABOH — Analysing algorithmic behaviour of optimisation heuristics
  • Anna V Kononova LIACS, Leiden University, The Netherlands
  • Hao Wang Leiden University, The Netherlands
  • Michael Emmerich LIACS, Leiden University, The Netherlands
  • Peter A. N. Bosman Centre for Mathematics and Computer Science, The Netherlands
  • Daniela Zaharie West University of Timisoara, Romania
  • Fabio Caraffini Institute of Artificial Intelligence, De Montfort University, Leicester, UK
  • Johann Dreo Pasteur Institute and CNRS, France
BBOB 2022 — Black Box Optimization Benchmarking 2022
  • Anne Auger Inria, France
  • Konstantin Dietrich Technische Hochschule Köln
  • Paul Dufossé Inria and Thales Defense Mission Systems
  • Tobias Glasmachers Ruhr-Universität Bochum, Germany
  • Nikolaus Hansen Inria and Ecole Polytechnique, France
  • Olaf Mersmann Technische Hochschule Köln
  • Petr Pošík Czech Technical University, Czech Republic
  • Tea Tušar Jožef Stefan Institute, Slovenia
  • Dimo Brockhoff Inria and Ecole Polytechnique, France
BENCH@GECCO2022 — Good Benchmarking Practices for Evolutionary Computation
  • Carola Doerr CNRS and Sorbonne University, France
  • Tome Eftimov Jožef Stefan Institute, Slovenia
  • Pascal Kerschke TU Dresden, Germany
  • Boris Naujoks Cologne University of Applied Sciences, Germany
  • Mike Preuss Leiden Institute of Advanced Computer Science
  • Vanessa Volz modl.ai (Denmark)
DTEO — 5th GECCO Workshop on Decomposition Techniques in Evolutionary Optimization
  • Bilel Derbel University of Lille, France
  • Ke Li University of Exeter, UK
  • Xiaodong Li RMIT University, Australia
  • Saúl Zapotecas Autonomous Metropolitan University
  • Qingfu Zhang City University of Hong Kong
EAHPC — Second Workshop on Evolutionary Algorithms in High Performance Computing
  • Mark Coletti Oak Ridge National Laboratory, USA
  • Catherine (Katie) Schuman Oak Ridge National Laboratory, USA
  • Eric “Siggy” Scott MITRE, USA
  • Robert Patton Oak Ridge National Laboratory, USA
  • Paul Wiegand Winthrop University
  • Jeffrey K. Bassett
  • Chathika Gunaratne Oak Ridge National Laboratory, USA
EC + DM — Evolutionary Computation and Decision Making
  • Tinkle Chugh University of Exeter, UK
  • Richard Allmendinger The University of Manchester, UK
  • Jussi Hakanen University of Jyväskylä, Finland
ECADA 2022 — 12th Workshop on Evolutionary Computation for the Automated Design of Algorithms
  • Daniel Tauritz Auburn University, USA
  • John Woodward Queen Mary University of London, UK
  • Manuel López-Ibáñez University of Málaga, Spain
ECXAI — Evolutionary Computation and Explainable AI
  • John McCall Robert Gordon University, UK
  • Jaume Bacardit Newcastle University, UK
  • Alexander Brownlee University of Stirling
  • Stefano Cagnoni University of Parma
  • Giovanni Iacca University of Trento, Italy
  • David Walker University of Plymouth, UK
EGML-EC — Enhancing Generative Machine Learning with Evolutionary Computation
  • Jamal Toutouh MIT, USA
  • Una-May O'Reilly MIT, USA
  • Penousal Machado University of Coimbra, CISUC, DEI
  • João Correia University of Coimbra, Portugal
  • Sergio Nesmachnow Universidad de la República, Uruguay
EQUM — Workshop on Evolutionary Optimization in Uncertainty Quantification Models
  • Josu Ceberio University of the Basque Country (UPV/EHU)
  • Rafael Villanueva Universitat Politècnica de València (UPV), Spain
  • Ignacio Hidalgo Universidad Complutense de Madrid, Spain
  • Francisco Fernandez de Vega Universidad de Extremadura, Spain
EVORL — Evolutionary Reinforcement Learning Workshop
  • Giuseppe Paolo Sorbonne Université - Softbank Robotics Europe, France
  • Alex Coninx ISIR, Université Pierre et Marie Curie-Paris 6, France
  • Antoine Cully Imperial College London, UK
  • Adam Gaier Autodesk Research, London, UK
EVOSOFT — Evolutionary Computation Software Systems
  • Stefan Wagner University of Applied Sciences Upper Austria
  • Michael Affenzeller University of Applied Sciences Upper Austria
GI @ GECCO 2022 — 11th International Workshop on Genetic Improvement
  • Bobby R. Bruce University of California, Davis
  • Vesna Nowack Lancaster University
  • Aymeric Blot University College London, UK
  • Emily Winter Lancaster University
  • William B. Langdon University College London, UK
  • Justyna Petke University College London
IAM 2022 — 7th Workshop on Industrial Applications of Metaheuristics
  • Silvino Fernández Alzueta Arcelormittal, Spain
  • Pablo Valledor Pellicer ArcelorMittal Global R&D
  • Thomas Stützle Université Libre de Bruxelles, Belgium
IWLCS 2022 — 25th International Workshop on Learning Classifier Systems
  • David Pätzel University of Augsburg, Germany
  • Alexander Wagner University of Hohenheim, Germany
  • Michael Heider University of Augsburg
LAHS — Workshop on Landscape-Aware Heuristic Search (LAHS 2022)
  • Nadarajen Veerapen Université de Lille, France
  • Katherine Malan University of South Africa
  • Arnaud Liefooghe University of Lille, France
  • Sébastien Verel Univ. Littoral Côte d'Opale, France
  • Gabriela Ochoa University of Stirling, UK
LEOL — Large-Scale Evolutionary Optimization and Learning
  • Mohammad Nabi Omidvar University of Leeds, United Kingdom
  • Yuan Sun University of Melbourne
  • Xiaodong Li RMIT University, Australia
NEWK — Neuroevolution at work
  • Ernesto Tarantino Institute on High Performance Computing - National Research Council of Italy
  • Ivanoe De Falco Institute of High-Performance Computing and Networking (ICAR-CNR), Italy
  • Antonio Della Cioppa Natural Computation Lab, DIEM, University of Salerno, Italy
  • Umberto Scafuri Institute of High-Performance Computing and Networking (ICAR-CNR), Italy
QD-Benchmarks — Workshop on Quality Diversity Algorithm Benchmarks
  • John Rieffel Union College
  • Antoine Cully Imperial College London, UK
  • Jean-Baptiste Mouret Inria Nancy - Grand Est, CNRS, Université de Lorraine, France
  • Stéphane Doncieux ISIR, Université Pierre et Marie Curie-Paris 6, CNRS UMR 7222, Paris
  • Stefanos Nikolaidis University of Southern California
  • Julian Togelius New York University
  • Matthew C. Fontaine University of Southern California
  • Amy K Hoover New Jersey Institute of Technology
QuantOpt — Quantum Optimization Workshop
  • Alberto Moraglio University of Exeter, UK
  • Serban Georgescu Fujitsu Research of Europe
  • Francisco Chicano University of Malaga, Spain
  • Darrell Whitley Colorado State University, United States
  • Oleksandr Kyriienko University of Exeter, UK
  • Denny Dahl ColdQuanta, USA
  • Ofer Shir Tel-Hai College and Migal Institute, Israel
  • Lee Spector Amherst College, Hampshire College, and the University of Massachusetts, Amherst
SAEOPT — Workshop on Surrogate-Assisted Evolutionary Optimisation
  • Alma Rahat Swansea University
  • Richard Everson University of Exeter
  • Jonathan Fieldsend University of Exeter, UK
  • Handing Wang Xidian University
  • Yaochu Jin University of Surrey
  • Tinkle Chugh University of Exeter, UK
SECDEF — Workshop on Genetic and Evolutionary Computation in Defense, Security, and Risk Management
  • Erik Hemberg MIT CSAIL
  • Marwa A. Elsayed Faculty of Computer Science, Dalhousie University, Canada
SymReg — Symbolic Regression
  • Michael Kommenda University of Applied Sciences Upper Austria
  • William La Cava Harvard Medical School, Boston Children's Hospital
  • Gabriel Kronberger University of Applied Sciences Upper Austria
  • Steven Gustafson Noonum, Inc

AABOH — Analysing Algorithmic Behaviour of Optimisation Heuristics

http://www.wikicfp.com/cfp/servlet/event.showcfp?eventid=153426&copyownerid=172489

Summary

Optimisation and Machine Learning tools are among the most widely used tools in the modern world with its omnipresent computing devices. Yet, while both rely on search processes (the search for a solution, or for a model able to produce solutions), their dynamics are not fully understood. This scarcity of knowledge about the inner workings of heuristic methods is largely attributed to the complexity of the underlying processes, which cannot be subjected to a complete theoretical analysis. However, it is also partially due to superficial experimental set-ups and, therefore, superficial interpretations of numerical results. Indeed, researchers and practitioners typically look only at the final result produced by these methods, while the vast amount of information collected over the run(s) is wasted. In light of such considerations, it is becoming more evident that this information can be useful and that design principles should be defined that allow for online or offline analysis of the processes taking place in the population and of their dynamics.

Hence, with this workshop, we call for both theoretical and empirical contributions that identify the desired features of optimisation and machine learning algorithms, quantify the importance of such features, spot the presence of intrinsic structural biases and other undesired algorithmic flaws, and study transitions in algorithmic behaviour in terms of convergence, any-time behaviour, traditional and alternative performance measures, robustness, exploration vs exploitation balance, diversity, algorithmic complexity, etc. The goal is to gather the most recent advances to fill the aforementioned knowledge gap and to disseminate the current state of the art within the research community.

Thus, we encourage submissions that exploit carefully designed experiments or data-heavy approaches to help analyse primary algorithmic behaviours and model the internal dynamics causing them.

Format of the workshop: invited talks, paper presentations, and a panel discussion.

Organizers

 

Anna V Kononova

Anna V. Kononova is an Assistant Professor at the Leiden Institute of Advanced Computer Science. She received her MSc degree in Applied Mathematics from Yaroslavl State University (Russia) in 2004 and her PhD degree in Computer Science from the University of Leeds (UK) in 2010. After a total of 5 years of postdoctoral experience at Technical University Eindhoven (The Netherlands) and Heriot-Watt University (Edinburgh, UK), Anna spent a number of years working as a mathematician in industry. Her current research interests include the analysis of optimisation algorithms and machine learning.

 

Hao Wang

Hao Wang obtained his PhD (cum laude) from Leiden University in 2018. He is currently employed as an assistant professor of computer science at Leiden University. Previously, he had a research stay at Sorbonne University, France. He received the Best Paper Award at the PPSN 2016 conference and was a best paper award finalist at the IEEE SMC 2017 conference. His research interests are the analysis and improvement of efficient global optimization for mixed-continuous search spaces, evolution strategies, Bayesian optimization, and benchmarking.

 

Michael Emmerich

Michael Emmerich was born in 1973 in Coesfeld, Germany. He received his Dr.rer.nat. degree from Dortmund University (promoters: Prof. H.-P. Schwefel and Prof. P. Buchholz) in 2005. He is currently an Associate Professor (UHD) with LIACS, Leiden University, the head of the Multicriteria Optimization and Decision Analysis research group (moda.liacs.nl), and Scientific Coordinator of the Center for Computational Life Science (CCLS), Leiden University. Moreover, he is a visiting research fellow at the Multiobjective Optimization research group at Jyväskylä University, Finland. In the past, he carried out projects as a researcher at ICD e.V., Germany, IST Lisbon and the University of the Algarve, Portugal, ACCESS Material Science e.V., Germany, and the Dutch Institute on Fundamental Science of Matter, Amsterdam, The Netherlands. He is known for pioneering work on model-assisted and indicator-based multiobjective optimization (SMS-EMOA, Expected Hypervolume Improvement, Set-Oriented Newton Method), on the theory of subset selection problems, set-oriented integration/differentiation of quality indicators, and multimodal multiobjective optimization. He has edited four books and co-authored over 120 papers on multicriteria optimization algorithms and their application in drug discovery, logistics, complex networks, and sustainable building design.

Peter A. N. Bosman

Peter Bosman is a senior researcher in the Life Sciences research group at the Centrum Wiskunde & Informatica (CWI) (Centre for Mathematics and Computer Science) located in Amsterdam, the Netherlands. Peter obtained both his MSc and PhD degrees on the design and application of estimation-of-distribution algorithms (EDAs). He has (co-)authored over 150 refereed publications on both algorithmic design aspects and real-world applications of evolutionary algorithms. At the GECCO conference, Peter has previously been track (co-)chair, late-breaking-papers chair, (co-)workshop organizer, (co-)local chair (2013) and general chair (2017).

 

Daniela Zaharie

Daniela Zaharie is a Professor at the Department of Computer Science of the West University of Timisoara (Romania), with a PhD degree on a topic related to the stochastic modelling of neural networks and a Habilitation thesis on the analysis of the behaviour of differential evolution algorithms. Her current research interests include the analysis and applications of metaheuristic algorithms, interpretable machine learning models, and data mining.

 

Fabio Caraffini

Fabio Caraffini is an Associate Professor in Computer Science at De Montfort University (Leicester, UK). Fabio holds a PhD in Computer Science (De Montfort University, UK, 2014) and a PhD in Mathematical Information Technology (University of Jyväskylä, Finland, 2016), and was awarded a BSc in Electronics Engineering and an MSc in Telecommunications Engineering by the University of Perugia (Italy) in 2008 and 2011, respectively. His research interests include theoretical and applied computational intelligence with a strong emphasis on metaheuristics for optimisation.

Johann Dreo

Johann Dreo is a Senior Research Engineer in Artificial Intelligence Algorithmics in the Systems Biology group of the Computational Biology department at Institut Pasteur (Université de Paris). His scientific interests are decision aid, optimization, search heuristics, artificial intelligence, machine learning, and algorithm design and engineering. He obtained his MSc in biology and his PhD on bio-inspired algorithmics. He has more than 17 years of experience applying randomized optimization heuristics in practice across various industrial applications. He has published seminal works on automated algorithm design and award-winning heuristic solvers.

BBOB 2022 — Black Box Optimization Benchmarking

http://numbbo.github.io/workshops/

Summary

Benchmarking optimization algorithms is a crucial part of their design and application in practice. Since 2009, the Blackbox Optimization Benchmarking Workshop at GECCO has been a place to discuss recent advances in benchmarking practices and concrete results from actual benchmarking experiments with a large variety of (blackbox) optimizers.

The Comparing Continuous Optimizers platform (COCO [1], https://github.com/numbbo/coco) has been developed in this context to support algorithm developers and practitioners alike by automating benchmarking experiments for blackbox optimization algorithms on single- and bi-objective, unconstrained continuous problems in exact and noisy, as well as expensive and non-expensive scenarios.

In the BBOB 2022 edition of the workshop, we again invite participants to discuss all kinds of aspects of (blackbox) benchmarking. As in previous years, presenting benchmarking results on the test suites supported by COCO is a focus, but submissions are not limited to these topics. The supported suites are:

- single-objective unconstrained problems (bbob)
- single-objective unconstrained problems with noise (bbob-noisy)
- biobjective unconstrained problems (bbob-biobj)
- large-scale single-objective problems (bbob-largescale) and
- mixed-integer single- and bi-objective problems (bbob-mixint and bbob-biobj-mixint)

We particularly encourage submissions about algorithms from outside the evolutionary computation community and papers analyzing the large amount of algorithm data already publicly available for COCO (see https://numbbo.github.io/data-archive/). As in previous editions, we will provide source code in various languages (C/C++, Matlab/Octave, Java, and Python) to benchmark algorithms on the various test suites mentioned. Postprocessing the data and comparing algorithm performance are equally automated with COCO (up to readily prepared ACM-compliant LaTeX templates for writing papers).
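
As a concrete illustration of this level of automation, the following is a minimal random-search sketch using COCO's Python module cocoex (a sketch only: the result folder name and the deliberately tiny evaluation budget are illustrative, not a recommendation):

    import numpy as np
    import cocoex  # experimentation module of the COCO platform

    suite = cocoex.Suite("bbob", "", "")  # the standard single-objective test suite
    observer = cocoex.Observer("bbob", "result_folder: random-search-demo")

    for problem in suite:
        problem.observe_with(observer)    # COCO logs every evaluation for post-processing
        budget = 100 * problem.dimension  # illustrative, very small budget
        for _ in range(budget):
            # sample uniformly in the region of interest and evaluate
            x = problem.lower_bounds + np.random.rand(problem.dimension) * (
                problem.upper_bounds - problem.lower_bounds)
            problem(x)

The recorded data (written to an exdata/ subfolder by default) can then be turned into tables and figures with the cocopp post-processing module, e.g. python -m cocopp exdata/random-search-demo.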

For details, please see the separate BBOB-2022 web page at
https://numbbo.github.io/workshops/BBOB-2022/index.html

[1] Nikolaus Hansen, Anne Auger, Raymond Ros, Olaf Mersmann, Tea Tušar, and Dimo Brockhoff. "COCO: A platform for comparing continuous optimizers in a black-box setting." Optimization Methods and Software (2020): 1-31.

Organizers

Anne Auger

Anne Auger is a research director at the French National Institute for Research in Computer Science and Control (Inria), heading the RandOpt team. She received her diploma (2001) and PhD (2004) in mathematics from Paris VI University. Before joining Inria, she worked for two years (2004-2006) at ETH Zurich. Her main research interest is stochastic continuous optimization, including theoretical aspects, algorithm design and benchmarking. She is a member of the ACM SIGEVO executive committee and of the editorial board of Evolutionary Computation. She was General Chair of GECCO 2019. She co-organized the Dagstuhl seminar "Theory of Evolutionary Algorithms" in 2008 and 2010 and all seven previous BBOB workshops at GECCO since 2009. She is co-organizing the forthcoming Dagstuhl seminar on benchmarking.

 

Konstantin Dietrich

Konstantin Dietrich is a PhD student at the TH Köln - University of Applied Sciences. He is working on exploratory landscape analysis using various benchmarking data.

 

Paul Dufossé

Paul Dufossé received his diploma in 2017 from Université Paris-Dauphine and ENS Paris-Saclay in statistics and machine learning. Since late 2018, he has been pursuing a PhD in computer science at Institut Polytechnique de Paris, France, with the RandOpt team under Nikolaus Hansen and the industrial partner Thales Defense Mission Systems. His research interests are optimization, machine learning and digital signal processing. In particular, he aims to design evolutionary algorithms to solve constrained optimization problems emerging from radar and antenna signal processing applications.

Tobias Glasmachers

Tobias Glasmachers is a professor at the Institute for Neural Computation at Ruhr-University Bochum, Germany. He received his PhD from the Faculty of Mathematics of the same university in 2008. Afterwards he joined the Swiss AI lab IDSIA for two years, where he was involved in developing natural evolution strategies. In 2012 he returned to Bochum as a junior professor, and he was appointed full professor in 2018. His research interests are optimization and machine learning. In the context of evolutionary algorithms he is interested in algorithm design and the analysis of evolution strategies for single- and multi-objective optimization.

 

Nikolaus Hansen

Nikolaus Hansen is a research director at Inria, France. Educated in medicine and mathematics, he received a Ph.D. in civil engineering in 1998 from the Technical University Berlin under Ingo Rechenberg. Before joining Inria, he worked on evolutionary computation, genomics and computational science at the Technical University Berlin, the Institute of Genetic Medicine and ETH Zurich. His main research interests are learning and adaptation in evolutionary computation and the development of algorithms applicable in practice. His best-known contribution to the field of evolutionary computation is the so-called Covariance Matrix Adaptation (CMA).

 

Olaf Mersmann

Olaf Mersmann is a Professor for Data Science at TH Köln - University of Applied Sciences. He received his BSc, MSc and PhD in Statistics from TU Dortmund. His research interests include using statistical and machine learning methods on large benchmark databases to gain insight into the structure of the algorithm choice problem.

Petr Pošík

Petr Pošík works as a lecturer at the Czech Technical University in Prague, where he also received his Ph.D. in Artificial Intelligence and Biocybernetics in 2007. From 2001 to 2004 he worked as a statistician, analyst and lecturer for StatSoft, Czech Republic. Since 2005 he has worked at the Department of Cybernetics, Czech Technical University. Working on the boundary of optimization, statistics and machine learning, his research aims at improving the characteristics of evolutionary algorithms with techniques of statistical machine learning. He serves as a reviewer for several journals and conferences in the evolutionary computation field. Petr served as the student chair at GECCO 2014, as tutorials chair at GECCO 2017, and as the local chair at GECCO 2019.

Tea Tušar

Tea Tušar is a research fellow at the Department of Intelligent Systems of the Jožef Stefan Institute in Ljubljana, Slovenia. She was awarded the PhD degree in Information and Communication Technologies by the Jožef Stefan International Postgraduate School for her work on visualizing solution sets in multiobjective optimization. She completed a one-year postdoctoral fellowship at Inria Lille in France, where she worked on benchmarking multiobjective optimizers. Her research interests include evolutionary algorithms for single-objective and multiobjective optimization, with an emphasis on visualizing and benchmarking their results and applying them to real-world problems.

Dimo Brockhoff

Dimo Brockhoff received his diploma in computer science from University of Dortmund, Germany in 2005 and his PhD (Dr. sc. ETH) from ETH Zurich, Switzerland in 2009. After two postdocs at Inria Saclay Ile-de-France (2009-2010) and at Ecole Polytechnique (2010-2011), he joined Inria in November 2011 as a permanent researcher (first in its Lille - Nord Europe research center and since October 2016 in the Saclay - Ile-de-France one). His research interests are focused on evolutionary multiobjective optimization (EMO), in particular on theoretical aspects of indicator-based search and on the benchmarking of blackbox algorithms in general. Dimo has co-organized all BBOB workshops since 2013 and was EMO track co-chair at GECCO'2013 and GECCO'2014.

BENCH@GECCO2022 — Benchmarking and Reproducibility/Replicability

https://sites.google.com/view/benchmarking-network/home/activities/gecco-2022-workshop

Summary

Benchmarking plays a vital role in understanding the performance and search behavior of sampling-based optimization techniques such as evolutionary algorithms. This workshop continues the series on good benchmarking practices in the context of EC that we started in 2020 and have held at different conferences. The core theme is benchmarking evolutionary computation methods and related sampling-based optimization heuristics, but the focus changes each year.

The focus in 2022 will be the relevance, approaches, and practicability of benchmarking in industry.

Organizers

Carola Doerr

Carola Doerr, formerly Winzen, is a permanent CNRS researcher at Sorbonne University in Paris, France. Carola's main research activities are in the analysis of black-box optimization algorithms, both by mathematical and by empirical means. Carola is regularly involved in the organization of events around evolutionary computation and related topics, for example as program chair for PPSN 2020, FOGA 2019 and the theory tracks of GECCO 2015 and 2017, as guest editor for IEEE Transactions on Evolutionary Computation and Algorithmica, as organizer of Dagstuhl seminars and Lorentz Center workshops. Carola is an associate editor of ACM Transactions on Evolutionary Learning and Optimization (TELO) and board member of the Evolutionary Computation journal. Her works have received several awards, among them the Otto Hahn Medal of the Max Planck Society, best paper awards at EvoApplications and CEC, and four best paper awards at GECCO.

Tome Eftimov

Tome Eftimov is a researcher at the Computer Systems Department at the Jožef Stefan Institute, Ljubljana, Slovenia. He is a visiting assistant professor at the Faculty of Computer Science and Engineering, Ss. Cyril and Methodius University, Skopje. He was a postdoctoral research fellow at Stanford University, USA, where he investigated biomedical relation outcomes using AI methods. In addition, he was a research associate at the University of California, San Francisco, investigating AI methods for extracting rheumatology concepts from electronic health records. He obtained his PhD in Information and Communication Technologies (2018). His research interests include statistical data analysis, metaheuristics, natural language processing, representation learning, and machine learning. He has been involved in courses on probability and statistics, and statistical data analysis. His work on Deep Statistical Comparison has been presented as a tutorial (at IJCCI 2018, IEEE SSCI 2019, GECCO 2020, and PPSN 2020) and as invited lectures at several international conferences and universities. He is an organizer of several AI-related workshops at high-ranked international conferences. He is the coordinator of the national project “Mr-BEC: Modern approaches for benchmarking in evolutionary computation” and actively participates in European projects.

Pascal Kerschke

Pascal Kerschke is professor of Big Data Analytics in Transportation at TU Dresden, Germany. Until his appointment in 2021, he was a postdoctoral researcher at the University of Münster, Germany. Prior to that, he obtained academic degrees in Data Analysis and Management (B.Sc.) and Data Science (M.Sc.) from the TU Dortmund University, Germany, as well as in Information Systems (Ph.D.) from the University of Münster, Germany. His research interests cover a wide range of topics in the context of benchmarking, data science, machine learning, and optimization. In particular, his research focuses on Automated Algorithm Selection, Exploratory Landscape Analysis, and continuous single- and multi-objective optimization. Moreover, he is the main developer of the related R-package flacco (https://flacco.shinyapps.io/flacco/), co-authored further R-packages such as smoof and moPLOT, co-organized numerous tutorials and workshops in the context of Exploratory Landscape Analysis and/or benchmarking, and is an active member of the Benchmarking Network (https://sites.google.com/view/benchmarking-network/) and the COSEAL group (http://coseal.net).

Boris Naujoks

Boris Naujoks is a professor for Applied Mathematics at TH Köln - Cologne University of Applied Sciences (CUAS). He joined CUAS directly after receiving his PhD from Dortmund Technical University in 2011. During his time in Dortmund, Boris worked as a research assistant on different projects and gained industrial experience working for different SMEs. He now enjoys combining the teaching of mathematics and computer science with exploring EC and CI techniques at the Campus Gummersbach of CUAS. He focuses on multiobjective (evolutionary) optimization, in particular hypervolume-based algorithms, and the (industrial) applicability of the explored methods.

 

Mike Preuss

Mike Preuss is an assistant professor at LIACS, the computer science department of Leiden University. He works in AI, namely game AI, natural computing, and social media computing. Mike received his PhD in 2013 from the Chair of Algorithm Engineering at TU Dortmund, Germany, and was with ERCIS at WWU Münster, Germany, from 2013 to 2018. His research interests focus on evolutionary algorithms for real-valued problems, namely on multi-modal and multi-objective optimization, and on computational intelligence and machine learning methods for computer games. Recently, he has also been involved in social media computing, and he was publications chair of the multi-disciplinary MISDOOM conference in 2019. He is an associate editor of the IEEE ToG journal and has been a member of the organizational team of several conferences in recent years, in various functions such as general co-chair, proceedings chair, competition chair, and workshops chair.

Vanessa Volz

Vanessa Volz is an AI researcher at modl.ai (Copenhagen, Denmark), with focus in computational intelligence in games. She received her PhD in 2019 from TU Dortmund University, Germany, for her work on surrogate-assisted evolutionary algorithms applied to game optimisation. She holds B.Sc. degrees in Information Systems and in Computer Science from WWU Münster, Germany. She received an M.Sc. with distinction in Advanced Computing: Machine Learning, Data Mining and High Performance Computing from University of Bristol, UK, in 2014. Her current research focus is on employing surrogate-assisted evolutionary algorithms to obtain balance and robustness in systems with interacting human and artificial agents, especially in the context of games.

DTEO — 5th GECCO Workshop on Decomposition Techniques in Evolutionary Optimization

https://sites.google.com/view/dteo/

Summary

Tackling an optimization problem using decomposition consists in transforming (or re-modeling or re-thinking) it into multiple, a priori smaller and easier, problems that can be solved cooperatively. A number of techniques are being actively developed by the evolutionary computing community in order to explicitly or implicitly design decomposition with respect to four facets of an optimization problem: (i) the environmental parameters, (ii) the decision variables, (iii) the objective functions, and (iv) the available computing resources. The workshop aims to be a unified opportunity to report the recent advances in the design, analysis and understanding of evolutionary decomposition techniques and to discuss the current and future challenges in applying decomposition to the increasingly big and complex nature of optimization problems (e.g., large number of variables, large number of objectives, multi-modal problems, simulation optimization, uncertain scenario-based optimization) and its suitability to modern large scale compute environments (e.g., massively parallel and decentralized algorithms, large scale divide-and-conquer parallel algorithms, expensive optimization). The workshop focus is thereby on (but not limited to) the developmental, implementational, theoretical and applied aspects of:
- Large scale evolutionary decomposition, e.g., decomposition in decision space, gray-box techniques, co-evolutionary algorithms, grouping and cooperative techniques, decomposition for constraint handling.
- Multi- and Many- objective decomposition, e.g., aggregation and scalarizing approaches, cooperative and hybrid island-based design, (sub-)population decomposition and mapping.
- Parallel and distributed evolutionary decomposition, e.g., scalability with respect to decision and objective spaces, divide-and-conquer decentralized techniques, distribution of compute efforts, scalable deployments on heterogeneous and massively parallel computing environments.
- Novel general-purpose decomposition techniques, e.g., machine-learning and model assisted decomposition, offline and on-line configuration of decomposition, search region decomposition and multiple surrogates, parallel expensive optimization.
- Understanding and benchmarking decomposition techniques.
- General purpose software tools and libraries for evolutionary decomposition.
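
As a small, hypothetical illustration of the decomposition idea described above, a bi-objective problem can be turned into a family of cooperating single-objective subproblems via scalarisation; the sketch below applies the weighted Tchebycheff function to a made-up objective vector (all weights, objective values and the reference point are illustrative only):

    import numpy as np

    def tchebycheff(f, weights, z_star):
        # weighted Tchebycheff scalarisation of objective vector f
        # with respect to the reference (ideal) point z_star
        return float(np.max(weights * np.abs(f - z_star)))

    # a set of weight vectors defines a set of cooperating scalar subproblems
    weight_vectors = [np.array([w, 1.0 - w]) for w in np.linspace(0.0, 1.0, 5)]
    f = np.array([0.3, 0.7])       # objective values of a candidate solution (made up)
    z_star = np.array([0.0, 0.0])  # ideal point (made up)

    print([tchebycheff(f, w, z_star) for w in weight_vectors])

Each weight vector defines one subproblem; algorithms in the MOEA/D family, for example, optimise all such subproblems cooperatively within a single run.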

Organizers

Bilel Derbel

Bilel Derbel is an Associate Professor, with a research habilitation/accreditation, at the Department of Computer Science at the University of Lille, France. He is deputy team leader of the BONUS "Big Optimization aNd Ultra-Scale Computing" research group at Inria Lille Nord Europe and CRIStAL, CNRS. He is a co-founder member of the International Associated Lab (LIA-MODO) between Shinshu Univ., Japan, and Univ. Lille, France, on "Massive optimization and Computational Intelligence". His research topics are focused on the design and the analysis of combinatorial optimization algorithms and high-performance computing. His current interests are on the design of high-level evolutionary algorithms for single- and multi-objective optimization.

Ke Li

Ke Li is a UKRI Future Leaders Fellow, a Turing Fellow, and a Senior Lecturer of Computer Science at the University of Exeter (UoE). He was the founding chair of IEEE Computational Intelligence Society (CIS) Task Force 12 from 2017 to 2021, an international consortium that brings together global researchers to promote active research on themed areas of decomposition-based techniques in CI. Related activities include workshops and special sessions associated with major conferences in EC (GECCO, CEC and PPSN) and webinars since 2018. He was a Publication Chair of EMO 2021, a dedicated conference on evolutionary multi-objective optimization and multi-criterion decision-making. Moreover, he has been an academic mentor of early career researchers (ECRs) in IEEE CIS since 2020. He co-chaired a summer school in 2021 themed "Data-Driven AI/CI: Theory and Applications" and sponsored by the IEEE Computational Intelligence Society, which aims to i) provide a unique opportunity for ECRs from around the world to learn about state-of-the-art AI/CI techniques and applications; ii) let them interact with world-renowned high-profile experts; and iii) inspire new ideas and collaborations. Dr Li serves as an Associate Editor of four academic journals, including IEEE Trans. Evol. Comput. He was also a guest editor of a special issue on semantic computing and personalization in the Neurocomputing journal and of a special issue on advances of AI in visual systems in the Multimedia Tools and Applications journal.

 

Xiaodong Li

Xiaodong Li (M’03-SM’07) received his B.Sc. degree from Xidian University, Xi'an, China, and his Ph.D. degree in information science from the University of Otago, Dunedin, New Zealand. He is a Professor with the School of Science (Computer Science and Software Engineering), RMIT University, Melbourne, Australia. His research interests include evolutionary computation, neural networks, data analytics, multiobjective optimization, multimodal optimization, and swarm intelligence. He serves as an Associate Editor of the IEEE Transactions on Evolutionary Computation, Swarm Intelligence (Springer), and the International Journal of Swarm Intelligence Research. He is a founding member of the IEEE CIS Task Force on Swarm Intelligence, a vice-chair of the IEEE Task Force on Multi-modal Optimization, and a former chair of the IEEE CIS Task Force on Large Scale Global Optimization. He is the recipient of the 2013 ACM SIGEVO Impact Award and the 2017 IEEE CIS "IEEE Transactions on Evolutionary Computation Outstanding Paper Award".

Saúl Zapotecas

Saúl Zapotecas is a visiting Professor at the Department of Applied Mathematics and Systems, Division of Natural Sciences and Engineering, Autonomous Metropolitan University, Cuajimalpa Campus (UAM-C). He received his B.Sc. in Computer Sciences from the Meritorious Autonomous University of Puebla (BUAP) and his M.Sc. and PhD in Computer Sciences from the Center for Research and Advanced Studies of the National Polytechnic Institute of Mexico (CINVESTAV-IPN). His current research interests include evolutionary computation, multi/many-objective optimization via decomposition, and multi-objective evolutionary algorithms assisted by surrogate models.

 

Qingfu Zhang

Qingfu Zhang is a Professor at the Department of Computer Science, City University of Hong Kong. His main research interests include evolutionary computation, optimization, neural networks, data analysis, and their applications. He is currently leading the Metaheuristic Optimization Research (MOP) Group at City University of Hong Kong. Professor Zhang is an Associate Editor of the IEEE Transactions on Evolutionary Computation and the IEEE Transactions on Cybernetics. He was awarded the 2010 IEEE Transactions on Evolutionary Computation Outstanding Paper Award. He is on the list of Thomson Reuters 2016 and 2017 highly cited researchers in computer science. He is an IEEE Fellow.

EAHPC — Second Workshop on Evolutionary Algorithms in High Performance Computing

https://markcoletti.github.io/gecco_eahpc_workshop_site/2022/index.html

Summary

Evolutionary algorithms (EAs) are well-suited for High Performance Computing (HPC) because fitness evaluations can be readily done in parallel. Consequently, EAs have gathered considerable attention for their ability to accelerate finding solutions for a variety of computationally expensive problem domains, including reinforcement learning, neural architecture search, and model calibration for complex simulations. However, use of HPC resources adds an implicit secondary objective of ensuring those resources are efficiently utilized. This means that practitioners have to make decisions regarding evolutionary algorithms tailored for maximum HPC resource use, as well as associated software and hardware support. New EA-oriented HPC benchmarks might also be needed to guide practitioners in making those decisions.
We are looking for papers on the following sub-topics to facilitate discussion:
algorithmic — what novel EA variants best exploit HPC resources?
benchmarks — are there HPC specific measures for EA performance?
hardware — can we improve use of HPC hardware, such as GPUs?
software — what EA software, or software development practices, best leverage HPC capabilities?
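
As a minimal, hypothetical sketch of the embarrassingly parallel fitness evaluation mentioned in the summary above (the objective function, population size and worker count are made up; on an actual HPC system the process pool would typically be replaced by MPI ranks or a batch-level task scheduler):

    import multiprocessing as mp
    import random

    def fitness(genome):
        # stand-in for an expensive simulation or model evaluation
        return sum(x * x for x in genome)

    if __name__ == "__main__":
        # a small random population of real-valued genomes
        population = [[random.uniform(-5, 5) for _ in range(10)] for _ in range(64)]
        # distribute the fitness evaluations over worker processes
        with mp.Pool(processes=8) as pool:
            fitnesses = pool.map(fitness, population)
        print(min(fitnesses))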

Organizers

Mark Coletti

Mark Coletti is a research scientist with the Oak Ridge National Laboratory (ORNL), and he received his Ph.D. in Computer Science from George Mason University in 2014. His main research focus is improving understanding of evolutionary algorithms within HPC contexts, particularly in petascale and exascale environments. His technical background includes evolutionary computation, machine learning, agent-based modeling, software engineering, image processing, and geoinformatics.

Catherine (Katie) Schuman

Catherine (Katie) Schuman is an assistant professor at the University of Tennessee Knoxville. She received her Ph.D. in Computer Science from the University of Tennessee in 2015, where she completed her dissertation on the use of evolutionary algorithms to train spiking neural networks for neuromorphic systems. She is continuing her study of models and algorithms for neuromorphic computing at UTK. Katie co-leads the TENNLab neuromorphic computing research group. Katie has over 70 publications as well as seven patents in the field of neuromorphic computing. Katie received the U.S. Department of Energy Early Career Award in 2019.

Eric “Siggy” Scott

Eric Scott is a Senior Artificial Intelligence Engineer at MITRE Corporation in Northern Virginia and a PhD candidate at George Mason University. His research focuses on heuristic optimization algorithms, transfer learning, and their applications to modeling problems in a variety of fields. He holds a double B.Sc. in Computer Science and Mathematics from Andrews University in Berrien Springs, Michigan, and an M.Sc. in Computer Science from George Mason University.

 

Robert Patton

Dr. Robert M. Patton is a computational analytics scientist at Oak Ridge National Laboratory. His research is focused on nature-inspired computational techniques for large-scale data analytics. He is a member of IEEE’s CI Society and ACM’s SIGEVO.

 

Paul Wiegand

Paul Wiegand is an Assistant Professor in the Department of Computer Science & Quantitative Methods at Winthrop University (since Fall 2020). Before this, he served on the faculty of the School of Modeling, Simulation, & Training at the University of Central Florida (UCF) for over a decade, and held a postdoctoral position at the Navy Center for Applied Research in Artificial Intelligence before that. While at UCF, he taught in, and ran, their Modeling & Simulation graduate programs, and served as the director of the UCF Advanced Research Computing Center. His research has mainly centered on methods of natural computation, the theory of coadaptive and coevolutionary computation, and the application of coadaptive methods for multiagent learning and probabilistic reasoning. Paul also has a strong interest in the foundations of computer science, as well as in distributed and parallel high-performance and high-throughput computing.

 

Jeffrey K. Bassett

Jeff Bassett is a research scientist and engineer in machine learning who received his PhD in computer science from George Mason University in 2011 under the direction of Dr. Kenneth De Jong. He has significant experience in robotics, agent-based simulations, and 3D graphics, as well as in developing software for HPC environments.

Chathika Gunaratne

Chathika Gunaratne is a postdoctoral research associate in the Computer Science and Mathematics Division at Oak Ridge National Laboratory. Chathika's research focuses on explainable artificial intelligence and data-driven modeling and simulation of complex social systems. Chathika's work incorporates technical aspects from evolutionary algorithms, agent-based modeling, machine learning, network analysis, and high-performance computing. Chathika earned his Ph.D. in Modeling and Simulation from the University of Central Florida in 2019, holds an M.S. in Modeling and Simulation also from UCF, and a B.Sc. in Computer Science from the University of Colombo, Sri Lanka. Chathika's previous appointments include positions at the Massachusetts Institute of Technology Computer Science and Artificial Intelligence Lab (MIT CSAIL), NBC Universal Studios, and SimCentric Technologies.

EC + DM — Evolutionary Computation and Decision Making

https://blogs.exeter.ac.uk/ecmcdm/

Summary

Solving real-world optimisation problems typically involves an expert or decision-maker. Decision-making tools have been found to be useful in several such applications, e.g., health care, education, environment, transportation, business, and production. In recent years, there has also been growing interest in merging Evolutionary Computation (EC) and decision-making techniques for several applications. This has raised, amongst others, the need to account for explainability, fairness, ethics and privacy aspects in optimisation and decision making. This workshop will showcase research at the interface of EC and decision making.

The workshop on Evolutionary Computation and Decision Making, to be held at GECCO 2022, aims to promote research on theory and applications in the field. Topics of interest include (but are not limited to):
 Interactive multiobjective optimisation or decision-maker in the loop
 Visualisation
 Aggregation/trade-off operators & algorithms
 Fuzzy logic-based decision-making techniques
 Bayesian and other decision-making techniques
 Interactive multiobjective optimisation for (computationally) expensive problems
 Using surrogates (or metamodels) in decision making
 Hybridisation of EC and decision making
 Scalability in EC and decision making
 Decision making and machine learning
 Decision making in Big data
 Decision making in real-world applications
 Use of psychological tools to aid the decision-maker
 Fairness, ethics and societal considerations in EC and decision making
 Explainability in EC and decision making
 Accounting for trust and security in EC and decision making

Organizers

Tinkle Chugh

Dr Tinkle Chugh is a Lecturer in Computer Science at the University of Exeter. Between February 2018 and June 2020, he worked as a Postdoctoral Research Fellow on the "BIG data methods for improving windstorm FOOTprint prediction" project, funded by the Natural Environment Research Council UK. He obtained his PhD degree in Mathematical Information Technology in 2017 from the University of Jyväskylä, Finland. His thesis was part of the Decision Support for Complex Multiobjective Optimization Problems project, in which he collaborated with Finland Distinguished Professor (FiDiPro) Yaochu Jin from the University of Surrey, UK. His research interests are machine learning, data-driven optimization, evolutionary computation and decision making.

Richard Allmendinger

Richard is a Senior Lecturer in Data Science and the Business Engagement Lead of Alliance Manchester Business School, The University of Manchester, and a Fellow of The Alan Turing Institute, the UK's national institute for data science and artificial intelligence. Richard has a background in Business Engineering (Diplom, Karlsruhe Institute of Technology, Germany + Royal Melbourne Institute of Technology, Australia), Computer Science (PhD, The University of Manchester, UK), and Biochemical Engineering (Research Associate, University College London, UK). Richard's research interests are in the field of data and decision science, and in particular in the development and application of optimization, learning and analytics techniques to real-world problems arising in areas such as management, engineering, healthcare, sports, music, and forensics. Richard is known for his work on non-standard expensive optimization problems comprising, for example, heterogeneous objectives, ephemeral resource constraints, changing variables, and lethal optimization environments. Much of his research has been funded by grants from various UK funding bodies (e.g. Innovate UK, EPSRC, ESRC, ISCF) and industrial partners. Richard is a Member of the Editorial Board of several international journals, Vice-Chair of the IEEE CIS Bioinformatics and Bioengineering Technical Committee, Co-Founder of the IEEE CIS Task Force on Optimization Methods in Bioinformatics and Bioengineering, and contributes regularly to conference organisation and to special issues as a guest editor.

 

Jussi Hakanen

Dr Jussi Hakanen is a Senior Researcher at the Faculty of Information Technology at the University of Jyväskylä, Finland. He received an MSc degree in mathematics and a PhD degree in mathematical information technology, both from the University of Jyväskylä. His research is focused on multiobjective optimization and decision making with an emphasis on interactive multiobjective optimization methods, data-driven decision making, computationally expensive problems, explainable/interpretable machine learning, and visualization aspects related to many-objective problems. He has participated in several industrial projects involving different applications of multiobjective optimization, e.g. in chemical engineering. He has been a visiting researcher at Cornell University, Carnegie Mellon University, the University of Surrey, the University of Wuppertal, the University of Malaga and the VTT Technical Research Center of Finland. He holds the title of Docent (similar to Adjunct Professor in the US) in Industrial Optimization at the University of Jyväskylä, Finland.

ECADA 2022 — 12th Workshop on Evolutionary Computation for the Automated Design of Algorithms

https://bonsai.auburn.edu/ecada/

Summary

Scope

The main objective of this workshop is to discuss hyper-heuristics and algorithm configuration methods for the automated generation and improvement of algorithms, with the goal of producing solutions (algorithms) that are applicable to multiple instances of a problem domain. The areas of application of these methods include optimization, data mining and machine learning.

Automatically generating and improving algorithms by means of other algorithms has been the goal of several research fields, including artificial intelligence in the early 1950s, genetic programming since the early 1990s, and more recently automated algorithm configuration and hyper-heuristics. The term hyper-heuristics generally describes meta-heuristics applied to a space of algorithms. While genetic programming has most famously been used to this end, other evolutionary algorithms and meta-heuristics have successfully been used to automatically design novel (components of) algorithms. Automated algorithm configuration grew from the necessity of tuning the parameter settings of meta-heuristics and it has produced several powerful (hyper-heuristic) methods capable of designing new algorithms by either selecting components from a flexible algorithmic framework or recombining them following a grammar description.

Although most evolutionary algorithms are designed to generate specific solutions to a given instance of a problem, one of the defining goals of hyper-heuristics is to produce solutions that solve more generic problems. For instance, while there are many examples of evolutionary algorithms for evolving classification models in data mining and machine learning, a genetic programming hyper-heuristic has been employed to create a generic classification algorithm which in turn generates a specific classification model for any given classification dataset, in any given application domain. In other words, the hyper-heuristic is operating at a higher level of abstraction compared to how most search methodologies are currently employed; i.e., it is searching the space of algorithms as opposed to directly searching in the problem solution space, raising the level of generality of the solutions produced by the hyper-heuristic evolutionary algorithm. In contrast to standard genetic programming, which attempts to build programs from scratch from a typically small set of atomic functions, hyper-heuristic methods specify an appropriate set of primitives (e.g., algorithmic components) and allow evolution to combine them in novel ways as appropriate for the targeted problem class. While this allows searches in constrained search spaces based on problem knowledge, it does not in any way limit the generality of this approach as the primitive set can be selected to be Turing-complete. Typically, however, the initial algorithmic primitive set is composed of primitive components of existing high-performing algorithms for the problems being targeted; this more targeted approach very significantly reduces the initial search space, resulting in a practical approach rather than a mere theoretical curiosity. Iterative refining of the primitives allows for gradual and directed enlarging of the search space until convergence.

As meta-heuristics are themselves a type of algorithm, they too can be automatically designed employing hyper-heuristics. For instance, in 2007, genetic programming was used to evolve mate selection in evolutionary algorithms; in 2011, linear genetic programming was used to evolve crossover operators; more recently, genetic programming was used to evolve complete black-box search algorithms, SAT solvers, and FuzzyART category functions. Moreover, hyper-heuristics may be applied before deploying an algorithm (offline) or while problems are being solved (online), or even continuously learn by solving new problems (life-long). Offline and life-long hyper-heuristics are particularly useful for real-world problem solving where one can afford a large amount of a priori computational time to subsequently solve many problem instances drawn from a specified problem domain, thus amortizing the a priori computational time over repeated problem solving. Recently, the design of multi-objective evolutionary algorithm components was automated.

Very little is known yet about the foundations of hyper-heuristics, such as the impact of the meta-heuristic exploring algorithm space on the performance of the thus automatically designed algorithm. An initial study compared the performance of algorithms generated by hyper-heuristics powered by five major types of genetic programming. Another avenue for research is investigating the potential performance improvements obtained through the use of asynchronous parallel evolution to exploit the typical large variation in fitness evaluation times when executing hyper-heuristics.


Content

We welcome original submissions on all aspects of Evolutionary Computation for the Automated Design of Algorithms, in particular, evolutionary computation methods and other hyper-heuristics for the automated design, generation or improvement of algorithms that can be applied to any instance of a target problem domain. Relevant methods include methods that evolve whole algorithms given some initial components as well as methods that take an existing algorithm and improve it or adapt it to a specific domain. Another important aspect in automated algorithm design is the definition of the primitives that constitute the search space of hyper-heuristics. These primitives should capture the knowledge of human experts about useful algorithmic components (such as selection, mutation and recombination operators, local searches, etc.) and, at the same time, allow the generation of new algorithm variants. Examples of the application of hyper-heuristics, including genetic programming and automatic configuration methods, to such frameworks of algorithmic components are of interest to this workshop, as well as the (possibly automatic) design of the algorithmic components themselves and the overall architecture of metaheuristics. Therefore, relevant topics include (but are not limited to):
- Applications of hyper-heuristics, including general-purpose automatic algorithm configuration methods for the design of metaheuristics, in particular evolutionary algorithms, and other algorithms for application domains such as optimization, data mining, machine learning, image processing, engineering, cyber security, critical infrastructure protection, and bioinformatics.
- Novel hyper-heuristics, including but not limited to genetic programming based approaches, automatic configuration methods, and online, offline and life-long hyper-heuristics, with the stated goal of designing or improving the design of algorithms.
- Empirical comparison of hyper-heuristics.
- Theoretical analyses of hyper-heuristics.
- Studies on primitives (algorithmic components) that may be used by hyper-heuristics as the search space when automatically designing algorithms.
- Automatic selection/creation of algorithm primitives as a preprocessing step for the use of hyper-heuristics.
- Analysis of the trade-off between generality and effectiveness of different hyper-heuristics or of algorithms produced by a hyper-heuristic.
- Analysis of the most effective representations for hyper-heuristics (e.g., Koza style Genetic Programming versus Cartesian Genetic Programming).
- Asynchronous parallel evolution of hyper-heuristics.

Organizers

Daniel Tauritz

Daniel Tauritz is an Associate Professor in the Department of Computer Science and Software Engineering at Auburn University (AU), Interim Director and Chief Cyber AI Strategist of the Auburn Cyber Research Center, the founding Head of AU’s Biomimetic Artificial Intelligence Research Group (BioAI Group), a cyber consultant for Sandia National Laboratories, a Guest Scientist at Los Alamos National Laboratory (LANL), and founding academic director of the LANL/AU Cyber Security Sciences Institute (CSSI). He received his Ph.D. in 2002 from Leiden University. His research interests include the design of generative hyper-heuristics and self-configuring evolutionary algorithms and the application of computational intelligence techniques in cyber security, critical infrastructure protection, and program understanding. He was granted a US patent for an artificially intelligent rule-based system to assist teams in becoming more effective by improving the communication process between team members.

 

John Woodward

John R. Woodward is a lecturer at the Queen Mary University of London. Formerly he was a lecturer at the University of Stirling, within the CHORDS group (http://chords.cs.stir.ac.uk/), and was employed on the DAASE project (http://daase.cs.ucl.ac.uk/). Before that he was a lecturer for four years at the University of Nottingham. He holds a BSc in Theoretical Physics, an MSc in Cognitive Science and a PhD in Computer Science, all from the University of Birmingham. His research interests include Automated Software Engineering, particularly Search-Based Software Engineering, Artificial Intelligence/Machine Learning and in particular Genetic Programming. He has over 50 publications in Computer Science, Operations Research and Engineering which include both theoretical and empirical contributions, and has given over 50 talks at international conferences and as an invited speaker at universities. He has worked in industrial, military, educational and academic settings, and has been employed by EDS, CERN, the RAF and three UK universities.

Manuel López-Ibáñez

Dr. López-Ibáñez is a "Beatriz Galindo" Senior Distinguished Researcher at the University of Málaga, Spain and a senior lecturer in the Decision and Cognitive Sciences Research Centre at the Alliance Manchester Business School, University of Manchester, UK. He received the M.S. degree in computer science from the University of Granada, Granada, Spain, in 2004, and the Ph.D. degree from Edinburgh Napier University, U.K., in 2009. He has published 32 journal papers, 9 book chapters and 54 papers in peer-reviewed proceedings of international conferences on diverse areas such as evolutionary algorithms, multi-objective optimization, and various combinatorial optimization problems. His current research interests are experimental analysis and the automatic configuration and design of stochastic optimization algorithms, for single and multi-objective problems. He is the lead developer and current maintainer of the irace software package for automatic algorithm configuration (http://iridia.ulb.ac.be/irace).

ECXAI — Evolutionary Computation and Explainable AI

https://ecxai.github.io/ecxai/workshop

Summary

‘Explainable AI’ is an umbrella term that covers research on methods designed to provide human-understandable explanations of the decisions made and knowledge captured by AI models. Within the AI field this is currently a very active research area. Evolutionary Computation (EC) draws on concepts found in nature to drive development in evolution-based systems such as genetic algorithms and evolution strategies. Alongside other nature-inspired metaheuristics, such as swarm intelligence, the path to a solution is driven by stochastic processes. This creates barriers to explainability: algorithms may return different solutions when re-run from the same input, and technical descriptions of these processes are often a barrier to end-user understanding and acceptance. On the other hand, XAI methods very often require the fitting of some kind of model, and hence EC methods have the potential to play a role in this area. This workshop will focus on the bidirectional interplay between XAI and EC: that is, how XAI can help EC research, and how EC can be used within XAI methods.
Recent growth in the adoption of black-box solutions, including EC-based methods, in domains such as medical diagnosis, manufacturing and transport & logistics has led to greater attention being given to the generation of explanations and their accessibility to end-users. This increased attention has helped create a fertile environment for the application of XAI techniques in the EC domain for both end-user- and researcher-focused explanation generation. Furthermore, many approaches to XAI in machine learning are based on search algorithms (e.g., Local Interpretable Model-Agnostic Explanations / LIME) that have the potential to draw on the expertise of the EC community; and many of the broader questions (such as what kinds of explanation are most appealing or useful to end users) are faced by XAI researchers in general.
From an application perspective, important questions have arisen, for which XAI may be crucial: Is the system biased? Has the problem been formulated correctly? Is the solution trustworthy and fair? The goal of XAI and related research is to develop methods to interrogate AI processes with the aim of answering these questions. This can support decision makers while also building trust in AI decision-support through more readily understandable explanations.
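As one illustration of this interplay, the sketch below (a minimal example under our own assumptions; the black-box `predict` function and all parameters are placeholders) uses a simple evolutionary search over sparse feature subsets to build a local linear surrogate around a single instance, in the spirit of perturbation-based explainers such as LIME, with EC performing the search.

```python
# Minimal sketch: EC-assisted local explanation of a black-box model.
# `predict`, the sparsity penalty and all parameters are placeholder assumptions.
import numpy as np

rng = np.random.default_rng(0)

def predict(X):
    # Hypothetical black-box model to be explained.
    return np.sin(X[:, 0]) + 0.5 * X[:, 2] ** 2 + 0.1 * X[:, 4]

def local_fit_error(mask, X_local, y_local):
    """Fit a linear surrogate on the selected features; return penalised error."""
    if not mask.any():
        return np.inf
    A = np.column_stack([X_local[:, mask], np.ones(len(X_local))])
    coef, *_ = np.linalg.lstsq(A, y_local, rcond=None)
    return float(np.mean((A @ coef - y_local) ** 2)) + 0.01 * mask.sum()

def explain(x0, n_features=8, pop=20, gens=40, sigma=0.3):
    # Perturb around the instance of interest and query the black box.
    X_local = x0 + sigma * rng.standard_normal((200, n_features))
    y_local = predict(X_local)
    masks = rng.random((pop, n_features)) < 0.3   # population of feature subsets
    for _ in range(gens):
        scores = np.array([local_fit_error(m, X_local, y_local) for m in masks])
        parents = masks[np.argsort(scores)[: pop // 2]]   # truncation selection
        children = parents.copy()
        children ^= rng.random(children.shape) < 1.0 / n_features  # bit-flip mutation
        masks = np.vstack([parents, children])
    best = masks[np.argmin([local_fit_error(m, X_local, y_local) for m in masks])]
    return np.flatnonzero(best)   # indices of features used in the explanation

if __name__ == "__main__":
    print("features selected for the local explanation:", explain(np.zeros(8)))
```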
We seek contributions on a range of topics related to this theme, including but not limited to:
· Interpretability vs explainability in EC and their quantification
· Landscape analysis and XAI
· Contributions of EC to XAI in general
· Use of EC to generate explainable/interpretable models
· XAI in real-world applications of EC
· Possible interplay between XAI and EC theory
· Applications of existing XAI methods to EC
· Novel XAI methods for EC
· Legal and ethical considerations
· Case studies / applications of EC & XAI technologies

Organizers

John McCall

John McCall is Head of Research for the National Subsea Centre at Robert Gordon University. He has researched in machine learning, search and optimisation for 25 years, making novel contributions to a range of nature-inspired optimisation algorithms and predictive machine learning methods, including EDA, PSO, ACO and GA. He has 150+ peer-reviewed publications in books, international journals and conferences. These have received over 2400 citations with an h-index of 22. John and his research team specialise in industrially-applied optimization and decision support, working with major international companies including BT, BP, EDF, CNOOC and Equinor as well as a diverse range of SMEs. Major application areas for this research are: vehicle logistics, fleet planning and transport systems modelling; predictive modelling and maintenance in energy systems; and decision support in industrial operations management. John and his team attract direct industrial funding as well as grants from UK and European research funding councils and technology centres. John is a founding director and CEO of Celerum, which specialises in freight logistics. He is also a founding director and CTO of PlanSea Solutions, which focuses on marine logistics planning. John has served as a member of the IEEE Evolutionary Computing Technical Committee, an Associate Editor of IEEE Computational Intelligence Magazine and the IEEE Systems, Man and Cybernetics Journal, and he is currently an Editorial Board member for the journal Complex And Intelligent Systems. He frequently organises workshops and special sessions at leading international conferences, including several GECCO workshops in recent years.

Jaume Bacardit

Jaume Bacardit is Reader in Machine Learning at Newcastle University in the UK. He has received a BEng and MEng in Computer Engineering and a PhD in Computer Science from Ramon Llull University, Spain, in 1998, 2000 and 2004, respectively. Bacardit’s research interests include the development of machine learning methods for large-scale problems, the design of techniques to extract knowledge and improve the interpretability of machine learning algorithms, known currently as Explainable AI, and the application of these methods to a broad range of problems, mostly in biomedical domains. He leads/has led the data analytics efforts of several large interdisciplinary consortiums: D-BOARD (EU FP7, €6M, focusing on biomarker identification), APPROACH (EI-IMI €15M, focusing on disease phenotype identification) and PORTABOLOMICS (UK EPSRC £4.3M, focusing on synthetic biology). Within GECCO he has organised several workshops (IWLCS 2007-2010, ECBDL’14), been co-chair of the EML track in 2009, 2013, 2014, 2020 and 2021, and Workshops co-chair in 2010 and 2011. He has 90+ peer-reviewed publications that have attracted 4600+ citations and an h-index of 31 (Google Scholar).

 

Alexander Brownlee

Alexander (Sandy) Brownlee is a Lecturer in the Division of Computing Science and Mathematics at the University of Stirling. His main topics of interest are in search-based optimisation methods and machine learning, with a focus on decision support tools, and applications in civil engineering, transportation and software engineering. He has published over 70 peer-reviewed papers on these topics. He has worked with several leading businesses including BT, KLM, and IES on industrial applications of optimisation and machine learning. He serves as a reviewer for several journals and conferences in evolutionary computation, civil engineering and transportation, and is currently an Editorial Board member for the journal Complex And Intelligent Systems. He has been an organiser of several workshops and tutorials at GECCO, CEC and PPSN on genetic improvement of software.

 

Stefano Cagnoni

Stefano Cagnoni graduated in Electronic Engineering at the University of Florence, Italy, where he also obtained a PhD in Biomedical Engineering and was a postdoc until 1997. In 1994 he was a visiting scientist at the Whitaker College Biomedical Imaging and Computation Laboratory at the Massachusetts Institute of Technology. Since 1997 he has been with the University of Parma, where he has been Associate Professor since 2004. Recent research grants include a grant from Regione Emilia-Romagna to support research on industrial applications of Big Data Analysis; the co-management of industry/academy cooperation projects, namely the development, with Protec srl, of a new-generation computer vision-based fruit sorter and, with the Italian Railway Network Society (RFI) and Camlin Italy, of an automatic inspection system for train pantographs; and an EU-funded “Marie Curie Initial Training Network" grant for a four-year research training project in Medical Imaging using Bio-Inspired and Soft Computing. He was Editor-in-chief of the "Journal of Artificial Evolution and Applications" from 2007 to 2010. From 1999 to 2018 he was chair of EvoIASP, an event dedicated to evolutionary computation for image analysis and signal processing, then a track of the EvoApplications conference. From 2005 to 2020 he co-chaired MedGEC, a workshop on medical applications of evolutionary computation at GECCO. He has been co-editor of journal special issues dedicated to Evolutionary Computation for Image Analysis and Signal Processing, and is a member of the Editorial Board of the journals “Evolutionary Computation” and “Genetic Programming and Evolvable Machines”. He was awarded the "Evostar 2009 Award" in recognition of the most outstanding contribution to Evolutionary Computation.

 

Giovanni Iacca

Giovanni Iacca is an Associate Professor at the Department of Information Engineering and Computer Science of University of Trento, Italy, where he founded the Distributed Intelligence and Optimization Lab (DIOL). Previously, he worked as postdoctoral researcher in Germany (RWTH Aachen, 2017-2018), Switzerland (University of Lausanne and EPFL, 2013-2016) and The Netherlands (INCAS3, 2012-2016), as well as in industry in the areas of software engineering and industrial automation. He was co-PI of the FET-Open project "Phoenix" (2015-2019) and received two best paper awards (EvoApps 2017 and UKCI 2012). His research focuses on computational intelligence, stochastic optimization, and distributed systems. In these fields, he co-authored almost 100 peer-reviewed publications, and he is actively involved in the organization of tracks and workshops at leading international conferences. He also regularly serves as reviewer for several journals and he is in the program committee of various international conferences.

 

David Walker

David Walker is a Lecturer in Computer Science at the University of Plymouth. He obtained a PhD in Computer Science in 2013 for work on visualising solution sets in many-objective optimisation. His research focuses on developing new approaches to solving hard optimisation problems with Evolutionary Algorithms (EAs), as well as identifying ways in which the use of Evolutionary Computation can be expanded within industry, and he has published journal papers in all of these areas. His recent work considers the visualisation of algorithm operation, providing a mechanism for visualising algorithm performance to simplify the selection of EA parameters. While working as a postdoctoral research associate at the University of Exeter his work involved the development of hyper-heuristics and, more recently, investigating the use of interactive EAs in the water industry. Since joining Plymouth, Dr Walker has built a research group that includes a number of PhD students working on optimisation and machine learning projects. He is active in the EC field, having run an annual workshop on visualisation within EC at GECCO since 2012, in addition to his work as a reviewer for journals such as IEEE Transactions on Evolutionary Computation, Applied Soft Computing, and the Journal of Hydroinformatics. He is a member of the IEEE Taskforce on Many-objective Optimisation. At the University of Plymouth he is a member of both the Centre for Robotics and Neural Systems (CRNS) and the Centre for Secure Communications and Networking.

EGML-EC — Enhancing Generative Machine Learning with Evolutionary Computation

https://sites.google.com/view/egml-ec2022/

Summary

Deep generative models (DGMs) have become an important research branch in machine learning and deep learning. DGMs include a broad family of methods such as generative adversarial networks (GANs), variational autoencoders (VAEs), and autoregressive (AR) models. These models combine advanced deep neural networks with classical density estimation (either explicit or implicit), mainly to generate synthetic data samples. Although these methods have achieved state-of-the-art results in the generation of synthetic data of different types, such as images, speech, text, molecules, and video, DGMs are still difficult to train.

There are still open problems, such as the vanishing gradient and mode collapse in GANs, which limit their performance. Although there are strategies to minimize the effect of these problems, they remain fundamentally unsolved. In recent years, evolutionary computation (EC) and related techniques (e.g. particle swarm optimization), often in the form of Evolutionary Machine Learning approaches, have been successfully applied to mitigate the problems that arise when training DGMs, raising the quality of the results to impressive levels. Among other approaches, these new solutions include GAN, VAE, and AR training methods based on evolutionary and coevolutionary algorithms, the combination of deep neuroevolution with training approaches, and the evolutionary exploration of the latent space.
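As a minimal illustration of one of these directions, evolutionary latent space exploration, the sketch below uses a (1+λ) evolution strategy to search the latent space of a generator for samples that maximise a target score. The `generator` and `score` functions are placeholders standing in for a trained decoder and a downstream critic; none of this is taken from a specific system.

```python
# Minimal sketch of evolutionary latent space exploration.
# `generator` and `score` are placeholder stand-ins, not a trained model.
import numpy as np

rng = np.random.default_rng(1)
LATENT_DIM = 16

def generator(z):
    # Placeholder for a trained decoder G(z); here just a fixed projection + tanh.
    W = np.linspace(-1.0, 1.0, LATENT_DIM * 32).reshape(LATENT_DIM, 32)
    return np.tanh(z @ W)

def score(x):
    # Placeholder objective, e.g. a classifier's confidence for a target class.
    return float(x.mean())

def evolve_latent(lam=16, gens=100, sigma=0.2):
    z_best = rng.standard_normal(LATENT_DIM)
    f_best = score(generator(z_best))
    for _ in range(gens):
        offspring = z_best + sigma * rng.standard_normal((lam, LATENT_DIM))
        fitness = [score(generator(z)) for z in offspring]
        i = int(np.argmax(fitness))
        if fitness[i] >= f_best:            # (1+lambda) elitist replacement
            z_best, f_best = offspring[i], fitness[i]
    return z_best, f_best

if __name__ == "__main__":
    z, f = evolve_latent()
    print("best latent score found:", round(f, 3))
```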

This workshop aims to act as a medium for debate and the exchange of knowledge and experience, and to encourage collaboration between researchers focused on DGMs and the EC community. Bringing these two communities together will be essential for making significant advances in this research area. Thus, this workshop provides a critical forum for disseminating experience on the topic of enhancing generative modeling with EC, for presenting new and ongoing research in the field, and for attracting new interest from our community.

Particular topics of interest are (not exclusively):

+ Evolutionary and co-evolutionary algorithms to train deep generative models
+ EC-based optimization of hyper-parameters for deep generative models
+ Neuroevolution applied to train deep generative architectures
+ Dynamic EC-based evolution of deep generative models training parameters
+ Evolutionary latent space exploration
+ Real-world applications of EC-based deep generative models solutions
+ Multi-criteria adversarial training of deep generative models
+ Evolutionary generative adversarial learning models
+ Software libraries and frameworks for deep generative models applying EC

Organizers

 

Jamal Toutouh

I am a Marie Skłodowska-Curie Postdoctoral Fellow at the Massachusetts Institute of Technology (MIT) in the USA, at the MIT CSAIL Lab. I obtained my Ph.D. in Computer Engineering at the University of Malaga (Spain). The dissertation, Natural Computing for Vehicular Networks, was awarded the 2018 Best Spanish Ph.D. Thesis in Smart Cities. My dissertation focused on the application of Machine Learning methods inspired by Nature to address Smart Mobility problems.
My current research explores the combination of Nature-inspired gradient-free and gradient-based methods to address Adversarial Machine Learning. The main idea is to devise new algorithms to improve the efficiency and efficacy of the state-of-the-art methodology by mainly applying co-evolutionary approaches. Besides, I am working on the application of Machine Learning to address problems related to Smart Mobility, Smart Cities, and Climate Change.

 

Una-May O'Reilly

Una-May O'Reilly is the leader of the AnyScale Learning For All (ALFA) group at MIT CSAIL. ALFA focuses on evolutionary algorithms, machine learning, and frameworks for large-scale knowledge mining, prediction and analytics. The group has projects in cyber security using coevolutionary algorithms to explore adversarial dynamics in networks and malware detection. Una-May received the EvoStar Award for Outstanding Achievements in Evolutionary Computation in Europe in 2013. She is a Junior Fellow (elected before age 40) of the International Society of Genetic and Evolutionary Computation, which has evolved into ACM Sig-EVO. She now serves as Vice-Chair of ACM SigEVO. She served as chair of the largest international Evolutionary Computation Conference, GECCO, in 2005.

Penousal Machado

Penousal Machado leads the Cognitive and Media Systems group at the University of Coimbra. His research interests include Evolutionary Computation, Computational Creativity, and Evolutionary Machine Learning. In addition to the numerous scientific papers in these areas, his works have been presented in venues such as the National Museum of Contemporary Art (Portugal) and the “Talk to me” exhibition of the Museum of Modern Art, NY (MoMA).

 

João Correia

João Correia is an Assistant Professor at the University of Coimbra, a researcher of the Computational Design and Visualization Lab, and a member of the Evolutionary and Complex Systems (ECOS) group of the Centre for Informatics and Systems of the same university. He holds a PhD in Information Science and Technology from the University of Coimbra and also has an MSc and a BSc in Informatics Engineering from the same university. His main research interests include Evolutionary Computation, Machine Learning, Adversarial Learning, Computer Vision and Computational Creativity. He serves on international program committees of conferences in the areas of Evolutionary Computation, Artificial Intelligence, Computational Art and Computational Creativity, and as a reviewer for various conferences and journals in these areas, namely GECCO and EvoStar. More recently, he was invited as a remote reviewer for a European Research Council grant. He was also publicity chair and chair of the International Conference on Evolutionary Art, Music and Design, and is currently the publicity chair for EvoStar - The Leading European Event on Bio-Inspired Computation. Furthermore, he has authored and co-authored several articles in international conferences and journals on Artificial Intelligence and Evolutionary Computation, and is involved in national and international projects concerning Evolutionary Computation, Machine Learning, Computational Creativity and Data Science.

 

Sergio Nesmachnow

Full Professor and Researcher at Universidad de la República, Uruguay. Main research areas: Metaheuristics, Computational intelligence, High Performance Computing, Smart Cities. More than 300 research articles published in journals and conferences.

EQUM — Workshop on Evolutionary Optimization in Uncertainty Quantification Models

http://www.sc.ehu.es/ccwbayes/gecco2022_equm/

Summary

Mathematical models are a powerful tool used to describe the processes of nature. Different techniques may be used, and many of them have in common the need for an optimization algorithm in order to obtain the values of the parameters which describe the behaviour of a specific situation. The usual way of modelling natural processes is deterministic. However, nature is not deterministic but stochastic. There are many situations where data are not available, or the situation has random behaviour, and in such cases uncertainty must be taken into account. In that sense, uncertainty quantification is an emerging area in mathematics. Its main goal is to determine information about the uncertainty in the outputs of a mathematical system from the available information about the randomness in the inputs. In the setting of mathematical modelling, uncertainty quantification seeks to better explain the model answer, taking into account the variability often met in natural and physical phenomena.
One usual approach to uncertainty quantification is the study of models whose input data are assumed to be random variables. In most of them, it is assumed that the probability distributions of the parameters are known and follow standard patterns (uniform, Gaussian, exponential, etc.). However, this assumption may be unrealistic. Determining appropriate probability distributions of the model parameters becomes a key part of the problem when dealing with real applications. In other words, when we try to describe real phenomena, it is usually not enough to build coherent models; we must also consider and treat adequately the uncertainty involved in both sample data and model parameters, and control their effect on the solution. In this latter sense, a key issue is the computation of the probability distributions of the model parameters that make the solution stochastic process of the model, at certain time instants, capture the uncertainty embedded in the sample data.
In such a context, the problems that arise from modelling the uncertainty of real scenarios cannot be solved by means of exact algorithms [1]. In that sense, approximate algorithms have been postulated as a real alternative for such optimizations. In this context, evolutionary computation approaches have not been investigated extensively, with the exception of some papers that use metaheuristics and evolutionary algorithms with successful results [2-5]. Nevertheless, such scientific works have been developed by scientists from numerical optimization and, therefore, the proposed evolutionary computation approaches are in most cases naïve. It is our belief that uncertainty models offer real challenges for the EC community to propose more sophisticated algorithms.
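The following sketch (a toy model with synthetic "observations", not drawn from the references below) illustrates the kind of inverse problem described: a simple (1+λ) evolution strategy searches for the parameters of the probability distribution assigned to a model input so that the percentiles of the stochastic model output capture the uncertainty present in the data.

```python
# Minimal sketch: calibrating the distribution of a model parameter so that the
# stochastic model output matches observed uncertainty. Toy model, placeholder data.
import numpy as np

rng = np.random.default_rng(2)
T = np.array([1.0, 2.0, 3.0, 4.0])                   # observation times
OBS_MEDIAN = np.array([0.18, 0.38, 0.62, 0.80])      # placeholder data (medians)
OBS_IQR = np.array([0.05, 0.08, 0.08, 0.06])         # placeholder data (interquartile ranges)

def logistic(t, r):
    return 1.0 / (1.0 + 9.0 * np.exp(-r * t))        # toy growth model, x(0) = 0.1

def objective(theta, n_samples=500):
    """theta = (mu, log_sigma) of a Gaussian on the growth rate r."""
    mu, log_sigma = theta
    r = rng.normal(mu, np.exp(log_sigma), n_samples)
    out = logistic(T[None, :], r[:, None])            # (n_samples, len(T)) trajectories
    med = np.percentile(out, 50, axis=0)
    iqr = np.percentile(out, 75, axis=0) - np.percentile(out, 25, axis=0)
    return float(np.sum((med - OBS_MEDIAN) ** 2) + np.sum((iqr - OBS_IQR) ** 2))

def calibrate(gens=200, lam=10, step=0.1):
    best = np.array([0.5, np.log(0.1)])
    f_best = objective(best)
    for _ in range(gens):
        cand = best + step * rng.standard_normal((lam, 2))   # Gaussian perturbations
        f = [objective(c) for c in cand]
        i = int(np.argmin(f))
        if f[i] <= f_best:                                    # (1+lambda) replacement
            best, f_best = cand[i], f[i]
    return best, f_best

if __name__ == "__main__":
    (mu, log_sigma), err = calibrate()
    print(f"calibrated r ~ N({mu:.3f}, {np.exp(log_sigma):.3f}), fit error {err:.4f}")
```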

Bibliography
[1] R.C. Smith, Uncertainty Quantification: Theory, Implementation, and Applications (Computational Science and Engineering), SIAM, 2014.
[2] Rafael-J. Villanueva, J. Ignacio Hidalgo, Carlos Cervigón, Javier Villanueva-Oller, Juan-Carlos Cortés, Calibration of an agent-based simulation model to the data of women infected by Human Papillomavirus with uncertainty, Applied Soft Computing, Apr/2019.
[3] Clara Burgos, Juan-Carlos Cortés, Iván-Camilo Lombana, David Martínez-Rodríguez, Rafael-J. Villanueva, Modeling the Dynamics of the Frequent Users of Electronic Commerce in Spain Using Optimization Techniques for Inverse Problems with Uncertainty, Journal of Optimization Theory and Applications, Aug/2018.
[4] Juan-Carlos Cortés, Pablo Martínez-Rodríguez, José-Antonio Moraño, José-Vicente Romero, María-Dolores Roselló, Rafael-Jacinto Villanueva, Probabilistic calibration and short-term prediction of the prevalence of herpes simplex type 2: A transmission dynamics modelling approach, Mathematical Methods in the Applied Sciences, Jul/2021.
[5] Clara Burgos-Simón, Juan-Carlos Cortés, David Martínez-Rodríguez, Rafael J. Villanueva, Modeling breast tumor growth by a randomized logistic model: A computational approach to treat uncertainties via probability densities, The European Physical Journal Plus, Oct/2020.

Organizers

 

Josu Ceberio

 

Rafael Villanueva

Rafael Villanueva is a researcher in Interdisciplinary Mathematics. His main area of interest is modelling infectious diseases, fitting their parameters with real data and simulating strategies to reduce their prevalence. To do that, differential equations, difference equations and networks are used. Moreover, he also works on the uncertainty of the real data and its effect on the model predictions. Finally, he is also a lecturer in Mathematical Models in the Faculty of Business Administration and Management at the UPV.

 

Ignacio Hidalgo

Iñaki Hidalgo is Full Professor of Computing Science at Complutense University of Madrid (UCM). He received a PhD in Physics from the same university in 2001 under the Informatics and Automation doctoral program, with a dissertation on the application of evolutionary algorithms to computer architecture problems. He has published more than 150 papers in journals and international conferences, most of them related to real-world applications of EC. He was local chair of GECCO 2015 in Madrid and is currently co-chair of EvoApps 2022. Recently his group has been working on biomedical problems, some of them dealing with uncertainty not only in the data but also in the model. He has successfully supervised 10 PhD theses and is currently supervising 4 PhD students.

 

Francisco Fernandez de Vega

Francisco Fernandez-de-Vega is Full Professor of Computer Architectures at the University of Extremadura. He received his PhD in Computer Science from the same university in 2001 and was awarded the best Engineering PhD of that year. He was University of Extremadura CIO from 2005 to 2007, and Vice-dean of Research at Centro Universitario de Mérida from 2004 to 2005. He has written more than 200 papers in international conferences and journals. His research at the confluence of Parallel and Distributed Computing, Artificial Intelligence, Arts and Music has been internationally awarded: best paper awards at PPSN 2002 and EvoHot 2008, the ACM GECCO 2013 Evolutionary Art, Design and Creativity Award, and recently the Best AI APP Award 2021 from the Spanish Association for Artificial Intelligence. He was local chair for the EvoStar 2020 conference.

EVORL — Evolutionary Reinforcement Learning Workshop

https://sites.google.com/view/evorl/home

Summary

In recent years reinforcement learning (RL) has received a lot of attention thanks to its performance and ability to address complex tasks. At the same time, multiple recent papers, notably work from OpenAI, have shown that evolution strategies (ES) can be competitive with standard RL algorithms on some problems while being simpler and more scalable. Similar results were obtained by researchers from Uber, this time using a gradient-free genetic algorithm (GA) to train deep neural networks on complex control tasks. Moreover, recent research in the field of evolutionary algorithms (EA) has led to the development of algorithms like Novelty Search and Quality Diversity, capable of efficiently addressing complex exploration problems and finding a wealth of different policies while improving the external reward (QD) or without relying on any reward at all (NS). All these results and developments have sparked a strong renewed interest in such population-based computational approaches.

Nevertheless, even if EAs can perform well on hard exploration problems they still suffer from low sample efficiency. This limitation is less present in RL methods, notably because of sample reuse, while on the contrary they struggle with hard exploration settings. The complementary characteristics of RL algorithms and EAs have pushed researchers to explore new approaches merging the two in order to harness their respective strengths while avoiding their shortcomings.
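As a minimal illustration of the ES-for-policy-search idea mentioned above, the sketch below applies an OpenAI-style evolution strategy update (a fitness-weighted sum of sampled Gaussian perturbations) to a linear policy on a toy point-to-goal task. The environment and all parameter values are stand-ins chosen for brevity, not a benchmark setting.

```python
# Minimal sketch of evolution strategies for policy search.
# The rollout environment is a toy stand-in, not a standard RL benchmark.
import numpy as np

rng = np.random.default_rng(3)
OBS_DIM, ACT_DIM = 2, 2

def rollout(theta, steps=30):
    """Return the episode return of a linear policy a = W s on a toy task."""
    W = theta.reshape(ACT_DIM, OBS_DIM)
    s = np.array([1.5, -1.0])                 # start state (position)
    ret = 0.0
    for _ in range(steps):
        a = np.clip(W @ s, -0.2, 0.2)         # bounded action = displacement
        s = s + a
        ret -= float(np.sum(s ** 2))          # reward: stay close to the origin
    return ret

def es_train(iters=200, pop=50, sigma=0.1, lr=0.05):
    theta = np.zeros(ACT_DIM * OBS_DIM)
    for _ in range(iters):
        eps = rng.standard_normal((pop, theta.size))
        returns = np.array([rollout(theta + sigma * e) for e in eps])
        advantages = (returns - returns.mean()) / (returns.std() + 1e-8)
        theta = theta + lr / (pop * sigma) * eps.T @ advantages   # ES gradient step
    return theta, rollout(theta)

if __name__ == "__main__":
    theta, final_return = es_train()
    print("final return of the evolved policy:", round(final_return, 2))
```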

Some recent papers already demonstrate that the interaction between these two fields can lead to very promising results. We believe that this is a nascent field where new methods can be developed to address problems like sparse and deceptive rewards, open-ended learning, and sample efficiency, while expanding the range of applicability of such approaches.
In this workshop, we want to highlight this newly developing field and provide an outlet at GECCO for the two communities (RL and EA) to meet and interact, in order to encourage collaboration between researchers, discuss past and new challenges, and develop new applications.

The workshop will focus on the following topics:

  • Evolutionary reinforcement learning
  • Population-based methods for policy search
  • Evolution strategies
  • Neuroevolution
  • Hard exploration and sparse reward problems
  • Deceptive rewards
  • Novelty and diversity search methods
  • Divergent search
  • Sample-efficient direct policy search
  • Intrinsic motivation, curiosity
  • Building or designing behaviour characterisations
  • Meta-learning, hierarchical learning
  • Evolutionary AutoML
  • Open-ended learning

Organizers

 

Giuseppe Paolo

Giuseppe Paolo is a PhD student at ISIR in Sorbonne University, under the supervision of Stéphane Doncieux, and at the AI-Lab in Softbank Robotics Europe, under the supervision of Alban Laflaquière. His research focuses on the intersection between evolutionary algorithms and reinforcement learning algorithms to tackle sparse reward problems. Giuseppe Paolo received his M.Sc. in Robotics, Systems and Control at ETH Zurich in 2018 and his engineering degree from the Politecnico di Torino in 2015. He also did two research internships, at the RAM-Lab at the Hong Kong University of Science and Technology and at IBM Research Zurich.

 

Alex Coninx

Alex Coninx is an associate professor at ISIR, Sorbonne Université, with a main research focus on how artificial agents can autonomously build relevant representations of their environment and use them to explore and achieve some goals. This question is investigated using methods from developmental robotics, evolutionary and population-based approaches, state representation learning and Bayesian optimization. Alex Coninx received an engineering degree from École Centrale Paris in 2006 and a Ph.D. in computer science from Université de Grenoble in 2012. Before joining Sorbonne Université as faculty in 2016, Alex also worked at LPPA Collège de France, Imperial College London and EDF Labs, in the context of several European projects (ICEA, ALIZ-E, BAMBI, DREAM) focusing on neuroinspired models, autonomous robotics, cognitive architectures, developmental approaches and Bayesian reasoning.

 

Antoine Cully

Antoine Cully is Senior Lecturer (Associate Professor) at Imperial College London (United Kingdom). His research is at the intersection between artificial intelligence and robotics. He applies machine learning approaches, like evolutionary algorithms, to robots to increase their versatility and their adaptation capabilities. In particular, he has recently developed Quality-Diversity optimization algorithms to enable robots to autonomously learn large behavioural repertoires. For instance, this approach enabled legged robots to autonomously learn how to walk in every direction or to adapt to damage situations. Antoine Cully received the M.Sc. and PhD degrees in robotics and artificial intelligence from the Pierre et Marie Curie University of Paris, France, in 2012 and 2015, respectively, and his engineering degree from the School of Engineering Polytech’Sorbonne in 2012. His PhD dissertation received three Best-Thesis awards. He has published several journal papers in prestigious journals including Nature, IEEE Transactions on Evolutionary Computation, and the International Journal of Robotics Research. His work was featured on the cover of Nature (Cully et al., 2015) and received the "Outstanding Paper of 2015" and "Outstanding Paper of 2020" awards from the Society for Artificial Life, and the French "La Recherche" award (2016). He also received the best paper award from GECCO 2021 (in the NE track).

 

Adam Gaier

Adam Gaier is a Senior Research Scientist at the Autodesk AI Lab, pursuing basic research in evolutionary computation and machine learning and the application of these techniques to problems in design and robotics. He received master's degrees in Evolutionary and Adaptive Systems from the University of Sussex and in Autonomous Systems from the Bonn-Rhein-Sieg University of Applied Sciences, and a PhD from Inria and the University of Lorraine, where his dissertation focused on tackling expensive design problems through the fusion of machine learning, quality diversity, and neuroevolution approaches. His PhD work received recognition at top venues across these fields, including a spotlight talk at NeurIPS (machine learning), multiple best paper awards at GECCO (evolutionary computation), a best student paper at AIAA (aerodynamics design optimization), and a SIGEVO Dissertation Award.

EVOSOFT — Evolutionary Computation Software Systems

https://evosoft.heuristiclab.com

Summary

Evolutionary computation (EC) methods are applied in many different domains. Therefore, soundly engineered, reusable, flexible, user-friendly, and interoperable software systems are more than ever required to bridge the gap between theoretical research and practical application. However, due to the heterogeneity of application domains and the large number of EC methods, the development of such systems is both time-consuming and complex. Consequently, many EC researchers still implement individual and highly specialized software which is often developed from scratch, concentrates on a specific research question, and does not follow state-of-the-art software engineering practices. As a result, the chance to reuse existing systems and to provide systems for others to build their work on is not sufficiently seized within the EC community. In many cases the developed systems are not even publicly released, which makes the comparability and traceability of research results very hard. This workshop concentrates on the importance of high-quality software systems and professional software engineering in the field of EC and provides a platform for EC researchers to discuss the following and other related topics:

  • development and application of generic and reusable EC software systems
  • architectural and design patterns for EC software systems
  • software modeling of EC algorithms and problems
  • open-source EC software systems
  • expandability, interoperability, and standardization
  • comparability and traceability of research results
  • graphical user interfaces and visualization
  • comprehensive statistical and graphical results analysis
  • parallelism and performance
  • usability and automation
  • comparison and evaluation of EC software systems

Organizers

Stefan Wagner

Stefan Wagner received his MSc in computer science in 2004 and his PhD in technical sciences in 2009, both from Johannes Kepler University Linz, Austria. From 2005 to 2009 he worked as associate professor for software project engineering and since 2009 as full professor for complex software systems at the Campus Hagenberg of the University of Applied Sciences Upper Austria. From 2011 to 2018 he was also CEO of the FH OÖ IT GmbH, which is the IT service provider of the University of Applied Sciences Upper Austria. Dr. Wagner is one of the founders of the research group Heuristic and Evolutionary Algorithms Laboratory (HEAL) and is project manager and head architect of the open-source optimization environment HeuristicLab. He works as project manager and key researcher in several R&D projects on production and logistics optimization and his research interests are in the area of combinatorial optimization, evolutionary algorithms, computational intelligence, and parallel and distributed computing.

 

Michael Affenzeller

Michael Affenzeller has published several papers, journal articles and books dealing with theoretical and practical aspects of evolutionary computation, genetic algorithms, and meta-heuristics in general. In 2001 he received his PhD in engineering sciences and in 2004 he received his habilitation in applied systems engineering, both from the Johannes Kepler University Linz, Austria. Michael Affenzeller is professor at the University of Applied Sciences Upper Austria, Campus Hagenberg, head of the research group Heuristic and Evolutionary Algorithms Laboratory (HEAL), head of the Master degree program Software Engineering, vice-dean for research and development, and scientific director of the Softwarepark Hagenberg.

GI @ GECCO 2022 — 11th International Workshop on Genetic Improvement

http://geneticimprovementofsoftware.com/events/gecco2022

Summary

Genetic improvement is the process of using automated search to improve existing software. It has successfully been used to fix bugs, transplant functionality from one system to another, improve predictions, and reduce software's runtime, energy and memory consumption; all without the necessity of costly human labour. Research within this field has already won three Humies. Despite impressive findings, genetic improvement is a relatively new field of research, with many opportunities to improve the state-of-the-art.
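As a minimal illustration of the core genetic improvement loop (a toy target program and a single delete-a-line edit operator, not a real GI toolchain), the sketch below proposes small edits to existing source code, keeps only variants that still pass the tests, and prefers those that run faster.

```python
# Minimal sketch of a genetic-improvement-style hill climber over source edits.
# The target program, tests and edit operator are deliberately simplistic.
import random, timeit

# Deliberately redundant target program, represented as a list of source lines.
TARGET = [
    "def total(xs):",
    "    s = 0",
    "    xs = list(xs)",          # redundant copy: a candidate for deletion
    "    xs = sorted(xs)",        # unnecessary work: another candidate
    "    for x in xs:",
    "        s += x",
    "    return s",
]
TESTS = [(([1, 2, 3],), 6), (([],), 0), ((range(100),), 4950)]

def build(lines):
    """Compile the candidate source and return the function, or None if broken."""
    ns = {}
    try:
        exec("\n".join(lines), ns)
        return ns["total"]
    except Exception:
        return None

def fitness(lines):
    """Return runtime if all tests pass, otherwise infinity (lower is better)."""
    fn = build(lines)
    if fn is None:
        return float("inf")
    try:
        if any(fn(*args) != expected for args, expected in TESTS):
            return float("inf")
        return timeit.timeit(lambda: fn(list(range(500))), number=200)
    except Exception:
        return float("inf")

def improve(iters=30):
    best, best_fit = TARGET, fitness(TARGET)
    for _ in range(iters):
        cand = list(best)
        del cand[random.randrange(1, len(cand))]      # edit operator: delete one body line
        cand_fit = fitness(cand)
        if cand_fit < best_fit:                       # keep semantics-preserving speed-ups
            best, best_fit = cand, cand_fit
    return best, best_fit

if __name__ == "__main__":
    improved, runtime = improve()
    print("\n".join(improved))
```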

The genetic improvement workshop series has been run successfully since 2015, when it was co-located in Madrid with the top conference on evolutionary computation, GECCO. Each year it has attracted between 5 and 14 accepted papers. In 2018 the workshop was held, for the first time, at the International Conference on Software Engineering (ICSE), and it has since run at ICSE 2018-2021 as well as at GECCO: GI@GECCO-2018 (Kyoto), GI@GECCO-2019 (Prague), and GI@GECCO-2020 (Cancun).

Having observed keen interest from both the evolutionary computing and the software engineering communities, we believe an 11th workshop will continue to build interest, collaboration, and discussion.

We invite submissions that discuss recent developments in all areas of research on, and applications of, Genetic Improvement. The keynote will be given by Prof. Westley Weimer of the University of Michigan–Ann Arbor. GI is the premier workshop in the field and provides an opportunity for researchers interested in automated program repair and software optimisation to disseminate their work, exchange ideas and discover new research directions. Topics of interest include both the theory and practice of Genetic Improvement. Applications include, but are not limited to, using GI to:
• Improve efficiency
• Decrease memory consumption
• Decrease energy consumption
• Transplant new functionality
• Specialise software
• Translate between programming languages
• Generate multiple versions of software
• Improve low level or binary code
• Improve high level software engineering artifacts, e.g., documentation, specification, training and educational materials and techniques
• Repair bugs

In the event of the continued COVID-19 pandemic, as with GI 2020 and GI 2021, the workshop may be held online with recordings available on YouTube.
2021 video: https://youtu.be/LVLdIb18cBg
2020 video: https://youtu.be/GsNKCifm44A

Research and Position Papers

We invite submissions of two types of paper:

• Research papers (limit eight pages)
• Position papers (limit two pages)

We encourage authors to submit early and in progress work. The workshop emphasises interaction and discussion.

All papers should be submitted electronically as double-blind PDFs (in the GECCO conference format). All accepted papers must be presented at GI 2022 and will appear in the GECCO 2022 companion volume.

Organizers

 

Bobby R. Bruce

Bobby R. Bruce is a Project Scientist at UC Davis where he primarily works on the Gem5 computer architecture simulator. Prior to UC Davis, Bobby carried out research into the automatic optimization of Java bytecode at UCLA. In 2018 Dr. Bruce gained his PhD with Justyna Petke on Genetic Improvement, particularly automated improvement of software non-functional properties such as energy efficiency. His research interests are centred around Search-based Software Engineering, and its application to improving software performance.

 

Vesna Nowack

Vesna Nowack gained her PhD in Software Engineering in 2016 from the Universitat Politecnica de Catalunya in Barcelona; since then, she has conducted research in supercomputing (Spain) and taught robotics in Germany. Her recent research has been on APR in the UK, including 12 months with Bloomberg (London), published this summer in IEEE Software (2021) as "On the Introduction of Automatic Program Repair in Bloomberg". She is now a Senior Research Assistant at Lancaster University where she continues her work on using GI to automatically fix bugs.

 

Aymeric Blot

Aymeric Blot is a Research Associate conducting research in genetic improvement at the CREST and SOLAR groups in University College London. He received in 2018 a doctorate from the University of Lille following work on automated algorithm design for multi-objective combinatorial optimisation. His research focuses on strengthening GI techniques using knowledge from automated machine learning, algorithm configuration, and evolutionary computation. He maintains and evolves the community website on genetic improvement.

 

Emily Winter

Emily Winter is a Senior Research Associate at Lancaster University, specialising in the human and socio-technical aspects of software engineering. She works on the Fixie Project: Exploiting Defect Prediction for Automatic Software Repair, investigating developer needs and preferences for how they interact with an Automatic Software Repair tool. As part of her research, she is seconded as a contractor to Bloomberg LP.

William B. Langdon

William B. Langdon has been working on GP since 1993. His PhD was the first book to be published in John Koza and Dave Goldberg's book series. He has previously run the GP track for GECCO 2001 and was programme chair for GECCO 2002, having previously chaired EuroGP for 3 years. More recently he has edited SIGEVO's FOGA and run the computational intelligence on GPUs (CIGPU) and EvoPAR workshops. His books include A Field Guide to Genetic Programming, Foundations of Genetic Programming and Advances in Genetic Programming 3. He also maintains the genetic programming bibliography. His current research interests include using GP to genetically improve existing software, CUDA, search-based software engineering and bioinformatics.

 

Justyna Petke

Justyna Petke is a Principal Research Fellow and Proleptic Associate Professor, conducting research in genetic improvement. She has a doctorate in Computer Science from University of Oxford and is now at the Centre for Research on Evolution, Search and Testing (CREST) at University College London. Her work on genetic improvement was awarded a Silver and a Gold 'Humie' at GECCO 2014 and GECCO 2016. She also organised several Genetic Improvement Workshops. She currently serves on the editorial board of the Genetic Programming and Evolvable Machines (GPEM), Empirical Software Engineering (EMSE), and Automated Software Engineering (ASE) journals.

IAM 2022 — 7th Workshop on Industrial Applications of Metaheuristics (IAM 2022)

https://sites.google.com/view/iam-workshop/home

Summary

Metaheuristics have been applied successfully to many aspects of applied Mathematics and Science, showing their capabilities to deal effectively with problems that are complex and otherwise difficult to solve. There are a number of factors that make the usage of metaheuristics in industrial applications more and more interesting. These factors include the flexibility of these techniques, the increased availability of high-performing algorithmic techniques, the increased knowledge of their particular strengths and weaknesses, the ever increasing computing power, and the adoption of computational methods in applications. In fact, metaheuristics have become a powerful tool to solve a large number of real-life optimization problems in different fields and, of course, also in many industrial applications such as production scheduling, distribution planning, and inventory management.

This workshop proposes to present and debate about the current achievements of applying these techniques to solve real-world problems in industry and the future challenges, focusing on the (always) critical step from the laboratory to the shop floor. A special focus will be given to the discussion of which elements can be transferred from academic research to industrial applications and how industrial applications may open new ideas and directions for academic research.

Topic areas of IAM 2022 include (but are not restricted to):

• Success stories for industrial applications of metaheuristics
• Pitfalls of industrial applications of metaheuristics.
• Metaheuristics to optimize dynamic industrial problems.
• Multi-objective optimization in real-world industrial problems.
• Metaheuristics in highly constrained industrial optimization problems: assuring feasibility, constraint-handling techniques.
• Reduction of computing times through parameter tuning and surrogate modelling.
• Parallelism and/or distributed design to accelerate computations.
• Algorithm selection and configuration for complex problem solving.
• Advantages and disadvantages of metaheuristics when compared to other techniques such as integer programming or constraint programming.
• New research topics for academic research inspired by real (algorithmic) needs in industrial applications.

Submission

Authors can submit short contributions including position papers of up to 4 pages and regular contributions of up to 8 pages following in each category the GECCO paper formatting guidelines. Software demonstrations will also be welcome.
The submission deadlines will adhere to the standard GECCO schedule for workshops.
The workshop itself will be publicized through mailing lists and academic and industrial contacts of the organizers.

Organizers

Silvino Fernández Alzueta

He has been an R&D Engineer at the Global R&D Division of ArcelorMittal for more than 15 years. He develops his activity in the ArcelorMittal R&D Centre of Asturias (Spain), in the framework of the Business and TechnoEconomic Department. He has a Master of Science degree in Computer Science and a Ph.D. in Engineering Project Management, both obtained at the University of Oviedo in Spain. His main research interests are in analytics, metaheuristics and swarm intelligence, and he has broad experience in using these techniques in industrial environments to optimize production processes. His paper "Scheduling a Galvanizing Line by Ant Colony Optimization" obtained the best paper award at the ANTS conference in 2014.

 

Pablo Valledor Pellicer

He is a research engineer of the Global R&D Asturias Centre at ArcelorMittal (world's leading integrated steel and mining company), working at the Business & Technoeconomic department. He obtained his MS degree in Computer Science in 2006 and his PhD on Business Management in 2015, both from the University of Oviedo. He worked for the R&D department of CTIC Foundation (Centre for the Development of Information and Communication Technologies in Asturias) until February 2007, when he joined ArcelorMittal. His main research interests are metaheuristics, multi-objective optimization, analytics and operations research.

Thomas Stützle

Thomas Stützle is a research director of the Belgian F.R.S.-FNRS working at the IRIDIA laboratory of Université libre de Bruxelles (ULB), Belgium. He received his PhD and his habilitation in computer science, both from the Computer Science Department of Technische Universität Darmstadt, Germany, in 1998 and 2004, respectively. He is the co-author of two books, "Stochastic Local Search: Foundations and Applications" and "Ant Colony Optimization", and he has published extensively in the wider area of metaheuristics, including 22 edited proceedings or books, 11 journal special issues, and more than 250 journal and conference articles and book chapters, many of which are highly cited. He is associate editor of Computational Intelligence, Evolutionary Computation, and Applied Mathematics and Computation, and is on the editorial board of seven other journals. His main research interests are in metaheuristics, swarm intelligence, methodologies for engineering stochastic local search algorithms, multi-objective optimization, and automatic algorithm configuration. In fact, for more than a decade he has been interested in automatic algorithm configuration and design methodologies, and he has contributed to effective algorithm configuration techniques such as F-race, Iterated F-race and ParamILS.

IWLCS 2022 — 25th International Workshop on Learning Classifier Systems

https://iwlcs.organic-computing.de

Summary

In the research field of evolutionary machine learning (EML), learning classifier systems (LCSs) provide a powerful technique which has received a lot of research attention over nearly four decades. Since John Holland’s formalization of the genetic algorithm (GA) and his conceptualization of the first LCS—the Cognitive System 1 (CS-I)—in the 1970s, the LCS paradigm has broadened greatly into a framework encompassing many algorithmic architectures, knowledge representations, rule discovery mechanisms, credit assignment schemes and additional integrated heuristics. The resulting EML techniques' unique strengths lie in their adaptability and flexibility, their making only a minimal set of assumptions and—most importantly—their transparency. They thus bear great potential for being applied to various problem domains such as behaviour modeling, online control, function approximation, classification, prediction and data mining.

The working principle of an LCS is to evolve a set of if-then (condition-action, or condition-prediction) rules which are historically called classifiers. The rules' conditions optimally divide the problem space into smaller subspaces such that each may be modelled well by a simpler local model (a rule's prediction). The local models may be represented by rather straightforward schemes (e.g. simple tabular mappings) as well as more complex structures (e.g. artificial neural networks); accordingly, different kinds of LCSs may carry out different kinds of predictions. The size, shape and location of the subspace each single rule is responsible for is optimized via a (usually steady-state) genetic algorithm (GA) which pursues maximally-sized subspaces (i.e. maximally general conditions) while at the same time striving for maximally accurate local predictions, a tension which was formalized as the ‘generalization hypothesis’ by Stewart Wilson in 1995. Some of the central questions of LCS research are: Which kinds of machine learning and evolutionary computation algorithms can be utilized within the well-understood algorithmic structure of an LCS? How can the components of the rules be modelled? For example, in the past, techniques such as radial basis function interpolation and approximation networks, multi-layer perceptrons (MLPs), as well as support vector machines (SVMs) have been used for implementing rule predictions.
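The toy sketch below (a hypothetical 1-D classification task with illustrative parameter choices) shows the rule structure and the accuracy/generality tension described above: each classifier is an interval condition plus a class prediction, experience updates its accuracy, and a steady-state step copies and mutates accurate rules so that intervals become as general as accuracy allows.

```python
# Minimal, illustrative LCS-style fragment: interval rules, credit assignment,
# covering, and a steady-state variation step. Not any specific LCS algorithm.
import random

random.seed(4)

def target(x):
    return 1 if x > 0.6 else 0            # hidden concept the rules must capture

class Rule:
    def __init__(self, lo, hi, pred):
        self.lo, self.hi, self.pred = lo, hi, pred
        self.correct, self.seen = 0, 0
    def matches(self, x):
        return self.lo <= x <= self.hi
    def accuracy(self):
        return self.correct / self.seen if self.seen else 0.0
    def generality(self):
        return self.hi - self.lo
    def fitness(self):
        return self.accuracy() * (0.5 + 0.5 * self.generality())  # accuracy first, then generality

def mutate(rule, step=0.1):
    lo = min(max(rule.lo + random.uniform(-step, step), 0.0), 1.0)
    hi = min(max(rule.hi + random.uniform(-step, step), 0.0), 1.0)
    return Rule(min(lo, hi), max(lo, hi), rule.pred)

def run(pop_size=30, steps=5000):
    pop = [Rule(*sorted(random.random() for _ in range(2)), random.randint(0, 1))
           for _ in range(pop_size)]
    for _ in range(steps):
        x = random.random()
        y = target(x)
        match = [r for r in pop if r.matches(x)]
        if not match:                                   # covering: create a matching rule
            pop.append(Rule(max(0.0, x - 0.1), min(1.0, x + 0.1), y))
        else:
            for r in match:                             # credit assignment
                r.seen += 1
                r.correct += int(r.pred == y)
            if random.random() < 0.05:                  # steady-state variation step
                pop.append(mutate(max(match, key=Rule.fitness)))
        if len(pop) > pop_size:                         # deletion keeps the population bounded
            experienced = [r for r in pop if r.seen > 0] or pop
            pop.remove(min(experienced, key=Rule.fitness))
    return sorted(pop, key=Rule.fitness, reverse=True)[:5]

if __name__ == "__main__":
    for r in run():
        print(f"IF x in [{r.lo:.2f}, {r.hi:.2f}] THEN {r.pred}  (acc {r.accuracy():.2f})")
```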

This workshop opens a forum for ongoing research in the field of LCSs as well as for the design and implementation of novel LCS-style EML systems, that make use of evolutionary computation techniques to improve the prediction accuracy of evolved rule sets. Furthermore, it is meant to solicit researchers from related fields such as (evolutionary) machine learning, (multi-objective) evolutionary optimization, neuroevolution etc. to bring in their experience. Topics that have been central to LCSs for many years, such as human interpretability, are more and more becoming a matter of high interest for other machine learning communities as well these days (‘explainable AI’). Hence, this workshop serves as a critical spotlight to disseminate the long experience of LCSs in these areas, to present new and ongoing research in the field, to attract new interest and to expose the machine learning community to an alternate, often advantageous, modelling paradigm.

Topics of interests include but are not limited to:

- advances in LCS methods: local models, problem space partitioning, classifier mixing, …

- evolutionary reinforcement learning: multi-step LCS, neuroevolution, …

- state of the art analysis: (quantitative and qualitative) surveys, sound comparative experimental benchmarks, carefully crafted reproducibility studies, …

- formal developments in LCSs: provably optimal parametrization, time bounds, generalization, …

- interpretability of evolved knowledge bases: knowledge extraction, visualization, …

- advances in LCS paradigms: Michigan/Pittsburgh style, hybrids, iterative rule learning, …

- hyperparameter optimization: hyperparameter selection, online self-adaption, …

- applications: medical domains, bio-informatics, intelligence in games, cyber-physical systems, …

- optimizations and parallel implementations: GPU acceleration, matching algorithms, …

- other rule-based ML systems that use metaheuristics: artificial immune/evolving fuzzy rule-based systems, AntMiner, …

Organizers

David Pätzel

David Pätzel is a doctoral candidate at the Department of Computer Science at the University of Augsburg, Germany. He received his B.Sc. in Computer Science from the University of Augsburg in 2015 and his M.Sc. in the same field in 2017. His main research is directed towards Learning Classifier Systems with a focus on developing a more formal, probabilistic understanding of LCSs that can, for example, be used to improve existing algorithms. Besides that, his research interests include reinforcement learning, evolutionary machine learning algorithms and pure functional programming. He has been an elected organizing committee member of the International Workshop on Learning Classifier Systems since 2020.

Alexander Wagner

Alexander Wagner is a doctoral candidate at the Department of Artificial Intelligence in Agricultural Engineering at the University of Hohenheim, Germany. He received his B.Sc. and M.Sc. degrees in computer science from the University of Augsburg in 2018 and 2020, respectively. His bachelor’s thesis already dealt with the field of Learning Classifier Systems. This sparked his interest, and he continued working on Learning Classifier Systems, especially XCS, during his master's studies. Consequently, he also dedicated his master’s thesis to this topic in greater depth. His current research focuses on the application of Learning Classifier Systems, in particular XCS and its derivatives, to self-learning adaptive systems designed to operate in real-world environments, especially in agricultural domains. In this context, the emphasis of his research is to increase the reliability of XCS, or LCSs in general. His research interests also include reinforcement learning, evolutionary machine learning algorithms, neural networks and neuroevolution. He has been an elected organizing committee member of the International Workshop on Learning Classifier Systems since 2021.

Michael Heider

Michael Heider is a doctoral candidate at the Department of Computer Science at the University of Augsburg, Germany. He received his B.Sc. in Computer Science from the University of Augsburg in 2016 and his M.Sc. in Computer Science and Information-oriented Business Management in 2018. His main research is directed towards Learning Classifier Systems, especially following the Pittsburgh style, with a focus on regression tasks encountered in industrial settings. These require both accurate and comprehensible solutions. To achieve comprehensibility he focuses on compact and simple rule sets. Besides that, his research interests include optimization techniques and unsupervised learning (e.g. for data augmentation or feature extraction). He has been an elected organizing committee member of the International Workshop on Learning Classifier Systems since 2021.

LAHS — Landscape-Aware Heuristic Search

https://sites.google.com/view/lahs-workshop/

Summary

Fitness landscape analysis and visualisation can provide significant insights into problem instances and algorithm behaviour. The aim of the workshop is to encourage and promote the use of landscape analysis to improve the understanding, the design and, eventually, the performance of search algorithms. Examples include landscape analysis as a tool to inform the design of algorithms, landscape metrics for online adaptation of search strategies, and mining landscape information to predict instance hardness and algorithm runtime. The workshop will focus on, but not be limited to, the following topics (a minimal example of one such landscape metric is sketched after the list):
* Evolvability and searchability characterisation
* Exploiting problem structure
* Fitness landscape analysis
* Fitness landscape visualisation
* Fitness landscape theory
* Grey-box optimisation
* Informed search strategies
* Local optima networks
* Multi-objective fitness landscapes
* Performance and failure prediction
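
As a simple, self-contained illustration of the kind of landscape metric in scope, the sketch below estimates the lag-1 autocorrelation of fitness values along a random walk, a classical ruggedness indicator. The one-max fitness function, the bit-string size and the walk length are assumptions made purely for this example and are not part of the workshop call.

```python
import random

def onemax(x):
    """Illustrative fitness function: number of ones in a bit string."""
    return sum(x)

def random_walk_autocorrelation(fitness, n_bits=64, steps=2000, lag=1, seed=0):
    """Estimate the lag-k autocorrelation of fitness values along a random walk.
    Values close to 1 indicate a smooth landscape, values near 0 a rugged one."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_bits)]
    fitnesses = []
    for _ in range(steps):
        fitnesses.append(fitness(x))
        i = rng.randrange(n_bits)        # move to a single bit-flip neighbour
        x[i] = 1 - x[i]
    mean = sum(fitnesses) / len(fitnesses)
    var = sum((f - mean) ** 2 for f in fitnesses) / len(fitnesses)
    cov = sum((fitnesses[t] - mean) * (fitnesses[t + lag] - mean)
              for t in range(len(fitnesses) - lag)) / (len(fitnesses) - lag)
    return cov / var if var > 0 else 1.0

if __name__ == "__main__":
    print("lag-1 autocorrelation on one-max:",
          round(random_walk_autocorrelation(onemax), 3))
```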

We invite submissions of three types of articles:
* research papers (up to 8 pages)
* software libraries/packages (up to 4 pages)
* position papers (up to 2 pages)

Organizers

Nadarajen Veerapen

Nadarajen Veerapen is an Associate Professor (maître de conférences) at the University of Lille, France. Previously he was a research fellow at the University of Stirling in Scotland. He holds a PhD in Computing Science from the University of Angers, France, where he worked on adaptive operator selection. His research interests include local search, hybrid methods, search-based software engineering and visualisation. He was the Electronic Media Chair for GECCO 2020 and 2021, Publicity Chair for GECCO 2019, and Student Affairs Chair for GECCO 2017 and 2018. He has previously co-organised the workshop on Landscape-Aware Heuristic Search at PPSN 2016 and at GECCO 2017-2019.

Katherine Malan

Katherine Malan is an associate professor in the Department of Decision Sciences at the University of South Africa. She received her PhD in computer science from the University of Pretoria in 2014 and her MSc & BSc degrees from the University of Cape Town. She has over 20 years' lecturing experience, mostly in Computer Science, at three different South African universities. Her research interests include automated algorithm selection in optimisation and learning, fitness landscape analysis and the application of computational intelligence techniques to real-world problems.

Arnaud Liefooghe

Arnaud Liefooghe has been an Associate Professor with the University of Lille, France, since 2010. He is a member of the CRIStAL Research Center, CNRS, and of the Inria Lille-Nord Europe Research Center. He is also the Co-Director of the MODO international lab between Shinshu University, Japan, and the University of Lille. He received a PhD degree from the University of Lille in 2009. In 2010, he was a Postdoctoral Researcher with the University of Coimbra, Portugal. In 2020, he was on CNRS sabbatical at JFLI, and an Invited Professor at the University of Tokyo, Japan. His research activities deal with the foundations, the design and the analysis of stochastic local search heuristic algorithms, with a particular interest in multi-objective optimization and landscape analysis. He has co-authored over eighty scientific papers in international journals and conferences. He was a recipient of the best paper award at EvoCOP 2011 and at GECCO 2015. He has recently served as the co-Program Chair for EvoCOP 2018 and 2019, as the Proceedings Chair for GECCO 2018, and as the co-EMO Track Chair for GECCO 2019.

Sébastien Verel

Sébastien Verel is a professor in Computer Science at the Université du Littoral Côte d'Opale, Calais, France; he was previously at the University of Nice Sophia-Antipolis, France, from 2006 to 2013. He received a PhD in computer science from the University of Nice Sophia-Antipolis, France, in 2005. His PhD work was related to fitness landscape analysis in combinatorial optimization. He was an invited researcher in the DOLPHIN Team at INRIA Lille Nord Europe, France, from 2009 to 2011. His research interests are in the theory of evolutionary computation, multiobjective optimization, adaptive search, and complex systems. A large part of his research is related to fitness landscape analysis. He has co-authored a number of scientific papers in international journals, book chapters, a book on complex systems, and international conferences. He is also involved in the co-organization of EC summer schools, conference tracks, workshops, and a special issue on EMO at EJOR, as well as special sessions at different international conferences.

Gabriela Ochoa

Gabriela Ochoa is a Professor in Computing Science at the University of Stirling, Scotland, where she leads the Data Science and Intelligent Systems (DAIS) research group. She received BSc and MSc degrees in Computer Science from University Simon Bolivar, Venezuela and a PhD from the University of Sussex, UK. She worked in industry for 5 years before joining academia, and has held faculty and research positions at the University Simon Bolivar, Venezuela and the University of Nottingham, UK. Her research interests lie in the foundations and application of evolutionary algorithms and optimisation methods, with emphasis on autonomous search, hyper-heuristics, fitness landscape analysis, visualisation and applications to logistics, transportation, healthcare, and software engineering. She has published over 110 scholarly papers (H-index 31) and serves on various program committees. She was an associate editor of the IEEE Transactions on Evolutionary Computation, is currently an associate editor of the Evolutionary Computation Journal, and is a member of the editorial board for Genetic Programming and Evolvable Machines. She has served as an organiser for various Evolutionary Computation events and served as the Editor-in-Chief for the Genetic and Evolutionary Computation Conference (GECCO) 2017. She is a member of the executive boards of the ACM Special Interest Group on Genetic and Evolutionary Computation (SIGEVO) and of EvoSTAR, the leading European event on bio-inspired computing.

LEOL — Large-Scale Evolutionary Optimization and Learning

https://www.tflsgo.org/special_sessions/gecco2022.html

Summary

Machine learning for optimization has attracted significant attention in both the machine learning and operations research communities. Novel machine learning techniques have been developed to effectively solve high-dimensional and complex optimization problems. These include automatic learning of heuristics, direct prediction of high-quality solutions, learning to branch for branch-and-bound algorithms, and learning to reduce or decompose optimization problems. Conversely, population-based metaheuristics in general, and evolutionary algorithms in particular, have also been used for high-dimensional learning tasks. Neuro-evolution, for instance, has shown promising results in tackling complex supervised and reinforcement learning problems. The aim of this workshop is to explore the synergy between machine learning and evolutionary algorithms to tackle high-dimensional optimization and learning problems. The workshop broadly covers novel techniques to enhance evolutionary algorithms via machine learning for solving complex large-scale optimization problems and/or novel algorithmic advancements of population-based metaheuristics for solving high-dimensional learning problems. Potential topics include (but are not limited to):

- automatic meta-heuristic design using machine learning (or hyper-heuristic),
- predicting high-quality solutions to warm-start evolutionary algorithms,
- predicting unknown parameters for optimization problems via machine learning,
- algorithm selection using machine learning,
- problem structure learning,
- surrogate models for expensive optimization problems,
- neural architecture search using evolutionary methods,
- deep neuro-evolution.

Organizers

 

Mohammad Nabi Omidvar

Nabi Omidvar is a University Academic Fellow (Assistant Professor) with the School of Computing, University of Leeds, and Leeds University Business School, UK. He is an expert in large-scale global optimization, a senior member of the IEEE, and the chair of the IEEE Computational Intelligence Society's Taskforce on Large-Scale Global Optimization. He has made several award-winning contributions to the field, including the state-of-the-art variable interaction analysis algorithm, which won the IEEE Computational Intelligence Society's best paper award in 2017. He also co-authored a paper which won the large-scale global optimization competition at the IEEE Congress on Evolutionary Computation in 2019. Dr. Omidvar's current research interests are high-dimensional (deep) learning and the applications of artificial intelligence in financial services.

 

Yuan Sun

Yuan Sun is a Research Fellow in the School of Computing and Information Systems, University of Melbourne, and the Vice-Chair of the IEEE CIS Taskforce on Large-Scale Global Optimization. He completed his Ph.D. degree at the University of Melbourne and a Bachelor's degree at Peking University. His research interests include artificial intelligence, evolutionary computation, operations research, and machine learning. He has published more than twenty research papers in these areas; his research was nominated for the best paper award at GECCO 2020 and won the CEC 2019 Competition on Large-Scale Global Optimization.

 

Xiaodong Li

Xiaodong Li (M’03-SM’07) received his B.Sc. degree from Xidian University, Xi'an, China, and his Ph.D. degree in information science from the University of Otago, Dunedin, New Zealand. He is a Professor with the School of Science (Computer Science and Software Engineering), RMIT University, Melbourne, Australia. His research interests include evolutionary computation, neural networks, data analytics, multiobjective optimization, multimodal optimization, and swarm intelligence. He serves as an Associate Editor of the IEEE Transactions on Evolutionary Computation, Swarm Intelligence (Springer), and the International Journal of Swarm Intelligence Research. He is a founding member of the IEEE CIS Task Force on Swarm Intelligence, a vice-chair of the IEEE Task Force on Multi-modal Optimization, and a former chair of the IEEE CIS Task Force on Large Scale Global Optimization. He is the recipient of the 2013 ACM SIGEVO Impact Award and the 2017 IEEE CIS "IEEE Transactions on Evolutionary Computation Outstanding Paper Award".

NEWK — Neuroevolution at Work

www.newk2022.icar.cnr.it

Summary

In recent years, inspired by the fact that natural brains themselves are the products of an evolutionary process, the quest for evolving and optimizing artificial neural networks through evolutionary computation has enabled researchers to successfully apply neuroevolution to many domains such as strategy games, robotics, and big data. The reason behind this success lies in important capabilities that are typically unavailable to traditional approaches, including evolving neural network building blocks, hyperparameters, architectures and even the learning algorithms themselves (meta-learning).
Although promising, the use of neuroevolution poses important problems and challenges for its future development.
Firstly, many of its paradigms suffer from a lack of parameter-space diversity, i.e., they fail to provide diversity in the behaviors generated by the different networks.
Moreover, harnessing neuroevolution to optimize deep neural networks requires considerable computational power and, consequently, the investigation of new ways to enhance computational performance.

This workshop aims:
- to bring together researchers working in the fields of deep learning, evolutionary computation and optimization to exchange new ideas about potential directions for future research;
- to create a forum of excellence on neuroevolution that will help interested researchers from a variety of different areas, ranging from computer scientists and engineers on the one hand to application-devoted researchers on the other hand, to gain a high-level view about the current state of the art.

Since interest in neuroevolution seems likely to keep increasing over the next years, a workshop on this topic is not only of immediate relevance for gaining insight into future trends, but it also provides a common ground to encourage novel paradigms and applications. Therefore, researchers putting emphasis on neuroevolution issues in their work are encouraged to submit. This event is also the ideal place for informal contacts, exchanges of ideas and discussions with fellow researchers.

The scope of the workshop is to receive high-quality contributions on topics related to neuroevolution, ranging from theoretical work to innovative applications in the context of (but not limited to):
- theoretical and experimental studies involving neuroevolution on machine learning in general, and on deep and reinforcement learning in particular
- development of innovative neuroevolution paradigms
- parallel and distributed neuroevolution methods
- new search operators for neuroevolution
- hybrid methods for neuroevolution
- surrogate models for fitness estimation in neuroevolution
- applications of neuroevolution to Artificial Intelligence agents and to real-world problems.

Organizers

Ernesto Tarantino

Ernesto Tarantino was born in S. Angelo a Cupolo, Italy, in 1961. He received the Laurea degree in Electrical Engineering in 1988 from the University of Naples, Italy. He is currently a researcher at the National Research Council of Italy. After completing his studies, he conducted research in parallel and distributed computing. During the past decade his research interests have been in the fields of theory and application of evolutionary techniques and related areas of computational intelligence. He is the author of more than 100 scientific papers in international journals, books and conference proceedings. He has served as referee and organizer for several international conferences in the area of evolutionary computation.

Ivanoe De Falco

Ivanoe De Falco received his degree in Electrical Engineering “cum laude” from the University of Naples “Federico II”, Naples, Italy, in 1987. He is currently a senior researcher at the Institute for High-Performance Computing and Networking (ICAR) of the National Research Council of Italy (CNR), where he is responsible for the Innovative Models for Machine Learning (IMML) research group. His main fields of interest include Computational Intelligence, with particular attention to Evolutionary Computation, Swarm Intelligence and Neural Networks, Machine Learning, Parallel Computing, and their application to real-world problems, especially in the medical domain. He is a member of the World Federation on Soft Computing (WFSC), the IEEE SMC Technical Committee on Soft Computing, the IEEE ComSoc Special Interest Research Group on Big Data for e-Health, and the IEEE Computational Intelligence Society Task Force on Evolutionary Computer Vision and Image Processing, and is an Associate Editor of the Applied Soft Computing journal (Elsevier). He is the author of more than 120 papers in international journals and in the proceedings of international conferences.

Antonio Della Cioppa

Antonio Della Cioppa received the Laurea degree in Physics and the Ph.D. degree in Computer Science, both from the University of Naples “Federico II,” Naples, Italy, in 1993 and 1999, respectively. From 1999 to 2003, he was a Postdoctoral Fellow at the Department of Computer Science and Electrical Engineering, University of Salerno, Salerno, Italy. In 2004, he joined the Department of Information Engineering, Electrical Engineering and Mathematical Applications, University of Salerno, where he is currently Associate Professor of Computer Science and Artificial Intelligence. His main fields of interest are in the Computational Intelligence area, with particular attention to Evolutionary Computation, Swarm Intelligence and Neural Networks, Machine Learning, Parallel Computing, and their application to real-world problems. Prof. Della Cioppa is a member of the Association for Computing Machinery (ACM), the ACM Special Interest Group on Genetic and Evolutionary Computation, the IEEE Computational Intelligence Society and the IEEE Computational Intelligence Society Task Force on Evolutionary Computer Vision and Image Processing. He serves as Associate Editor for the Applied Soft Computing journal (Elsevier), Evolutionary Intelligence (Elsevier), and Algorithms (MDPI). He has been part of the organizing or scientific committees of dozens of international conferences and workshops, and has authored or co-authored about 100 papers in international journals, books, and conference proceedings.

 

Umberto Scafuri

Umberto Scafuri was born in Baiano (AV) on May 21, 1957. He received his Laurea degree in Electrical Engineering from the University of Naples "Federico II" in 1985. He currently works as a technologist at the Institute of High Performance Computing and Networking (ICAR) of the National Research Council of Italy (CNR). His research activity is mainly devoted to parallel and distributed architectures and evolutionary models.

QD-Benchmarks — Workshop on Quality Diversity Algorithm Benchmarks

https://quality-diversity.github.io

Summary

Quality Diversity (QD) algorithms are a recent family of evolutionary algorithms that aim at generating a large collection of high-performing solutions to a problem. They originated in the Generative and Developmental Systems community of GECCO between 2011 (Lehman and Stanley, 2011) and 2015 (Mouret and Clune, 2015) with the "Novelty Search with Local Competition" and "MAP-Elites" evolutionary algorithms. Since then, many algorithms have been introduced (mostly at GECCO), inspired, for example, by surrogate modeling (Gaier et al., 2018, best paper of the CS track), by CMA-ES (Fontaine et al., 2019) or by deep neuroevolution (Colas et al., 2020; Nilsson et al., 2021, best paper of the NE track). Notably, 47% (7/15) of the papers accepted in the GECCO CS track in 2021, and 56% (5/9) in 2020, used or introduced novel Quality-Diversity optimization algorithms (see https://quality-diversity.github.io for a list of QD papers).

The objective of this workshop is to develop a first set of benchmark functions and a set of indicators to compare QD algorithms: for now, most authors have introduced their own benchmark functions and indicators, which makes it challenging to compare algorithms and validate implementations. Similar sets of indicators and functions were developed for multi-objective algorithms (the ZDT set of functions, Zitzler, Deb and Thiele, 2000) and single-objective algorithms (the BBOB series of workshops at GECCO, since 2009). These benchmark suites catalysed research in their fields; we aim to do the same for quality diversity algorithms.
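
As an illustration of what such indicators measure, the following sketch runs a minimal MAP-Elites loop and reports two quantities commonly used in the QD literature, archive coverage and QD-score. The toy objective, the descriptor function and the grid resolution are assumptions made only for this example and are not a proposal for the benchmark suite.

```python
import random

def evaluate(x):
    """Toy evaluation: returns (fitness, behaviour descriptor in [0,1]^2)."""
    fitness = -sum(xi ** 2 for xi in x)               # sphere-like objective
    descriptor = (abs(x[0]) % 1.0, abs(x[1]) % 1.0)   # illustrative descriptor
    return fitness, descriptor

def map_elites(evaluate, cells=10, budget=5000, dim=4, seed=0):
    """Minimal MAP-Elites: keep the best solution found in each descriptor cell."""
    rng = random.Random(seed)
    archive = {}                                  # cell index -> (fitness, solution)
    for _ in range(budget):
        if archive and rng.random() < 0.9:        # mutate a random elite
            _, parent = archive[rng.choice(list(archive))]
            x = [xi + rng.gauss(0, 0.1) for xi in parent]
        else:                                     # or sample a new random solution
            x = [rng.uniform(-1, 1) for _ in range(dim)]
        f, d = evaluate(x)
        cell = (int(d[0] * cells), int(d[1] * cells))
        if cell not in archive or f > archive[cell][0]:
            archive[cell] = (f, x)
    return archive

def qd_indicators(archive, cells=10):
    coverage = len(archive) / cells ** 2            # fraction of filled cells
    qd_score = sum(f for f, _ in archive.values())  # sum of elite fitnesses
    return coverage, qd_score

if __name__ == "__main__":
    cov, score = qd_indicators(map_elites(evaluate))
    print(f"coverage={cov:.2f}, QD-score={score:.2f}")
```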

Organizers

 

John Rieffel

John Rieffel is an Associate Professor of Computer Science at Union College in Schenectady, NY, USA. Prior to joining Union he was a postdoc at Cornell University and Tufts University. He received his Ph.D. in Computer Science from Brandeis University in 2006. His undergraduate-driven research lab at Union College focuses on soft robotics, tensegrity robotics, and evolutionary fabrication. John has published at the GECCO, ALIFE/ECAL, and IEEE RoboSoft conferences, and in Soft Robotics, Artificial Life, and Proceedings of the Royal Society Interface, among others.

 

Antoine Cully

Antoine Cully is a Senior Lecturer (Associate Professor) at Imperial College London (United Kingdom). His research is at the intersection between artificial intelligence and robotics. He applies machine learning approaches, like evolutionary algorithms, to robots to increase their versatility and their adaptation capabilities. In particular, he has recently developed Quality-Diversity optimization algorithms to enable robots to autonomously learn large behavioural repertoires. For instance, this approach enabled legged robots to autonomously learn how to walk in every direction or to adapt to damage situations. Antoine Cully received the M.Sc. and PhD degrees in robotics and artificial intelligence from the Pierre et Marie Curie University of Paris, France, in 2012 and 2015, respectively, and the engineering degree from the School of Engineering Polytech’Sorbonne, in 2012. His PhD dissertation received three Best-Thesis awards. He has published several papers in prestigious journals including Nature, IEEE Transactions on Evolutionary Computation, and the International Journal of Robotics Research. His work was featured on the cover of Nature (Cully et al., 2015) and received the "Outstanding Paper of 2015" and "Outstanding Paper of 2020" awards from the Society for Artificial Life, and the French "La Recherche" award (2016). He also received the best paper award from GECCO 2021 (in the NE track).

 

Jean-Baptiste Mouret

Jean-Baptiste Mouret is a senior researcher ("directeur de recherche") at Inria, a French research institute dedicated to computer science and mathematics. He was previously an assistant professor ("maître de conférences") at ISIR (Institute for Intelligent Systems and Robotics), which is part of Université Pierre et Marie Curie - Paris 6 (UPMC, now Sorbonne Université). He obtained an M.S. in computer science from EPITA in 2004, an M.S. in artificial intelligence from the Pierre and Marie Curie University (Paris, France) in 2005, and a Ph.D. in computer science from the same university in 2008. He was the principal investigator of an ERC grant (ResiBots - Robots with animal-like resilience, 2015-2020) and was the recipient of a French ANR young researcher grant (Creadapt - Creative adaptation by Evolution, 2012-2015). Overall, J.-B. Mouret conducts research that intertwines evolutionary algorithms, neuro-evolution, and machine learning to make robots more adaptive. His work was featured on the cover of Nature (Cully et al., 2015) and received the 2017 ISAL Award for Distinguished Young Investigator in the field of Artificial Life, the "Outstanding Paper of 2015" award from the Society for Artificial Life (2016), the French "La Recherche" award (2016), 3 GECCO best paper awards (2011, GDS track; 2017 & 2018, CS track), and the IEEE CEC "best student paper" award (2009). He co-chaired the "Evolutionary Machine Learning" track at GECCO 2019 and the "Generative and Developmental Systems" track in 2015.

 

Stéphane Doncieux

Stéphane Doncieux is Professeur des Universités (Professor) in Computer Science at Sorbonne University, Paris, France. He is an engineer of ENSEA, a French electronic engineering school. He obtained a Master's degree in Artificial Intelligence and Pattern Recognition in 1999, and defended a PhD in Computer Science in 2003. Together with Bruno Gas, he was responsible for the SIMA research team from its creation in 2007 until 2011. From 2011 to 2018, he was the head of the AMAC (Architecture and Models of Adaptation and Cognition) research team, with 11 permanent researchers, 3 post-docs and engineers, and 11 PhD students. Since January 2019, he has been deputy director of the ISIR lab, one of the largest robotics labs in France. He has organized several workshops on ER at conferences like GECCO or IEEE-IROS and has edited 2 books. Stéphane Doncieux was co-chair of the GECCO complex systems track in 2019 and 2020. His research is in cognitive robotics, with a focus on the use of evolutionary algorithms for the synthesis of robot controllers. He has worked on selective pressures and on the use of evolutionary methods in a developmental robotics approach, in which evolutionary algorithms are used for their creativity to bootstrap a cognitive process and allow it to acquire experience that can later be redescribed in another representation for faster and more effective task resolution. This is the goal of the H2020 DREAM European project that he has coordinated (http://dream.isir.upmc.fr).

 

Stefanos Nikolaidis

Stefanos Nikolaidis is an Assistant Professor of Computer Science at the University of Southern California and leads the Interactive and Collaborative Autonomous Robotics Systems (ICAROS) lab. His research focuses on stochastic optimization approaches for learning and evaluation of human-robot interactions. His work leads to end-to-end solutions that enable deployed robotic systems to act optimally when interacting with people in practical, real-world applications. Stefanos completed his PhD at Carnegie Mellon's Robotics Institute and received an MS from MIT, an MEng from the University of Tokyo and a BS from the National Technical University of Athens. His research has been recognized with an oral presentation at NeurIPS and best paper awards and nominations from the IEEE/ACM International Conference on Human-Robot Interaction, the International Conference on Intelligent Robots and Systems, and the International Symposium on Robotics.

 

Julian Togelius

Julian Togelius is an Associate Professor at New York University. His research interests include AI, player modeling, procedural content generation, coevolution, neuroevolution, and genetic programming. He co-invented some early Quality-Diversity methods, like DeLeNoX, and has recently had a hand in creating the CMA-ME algorithm. Additionally, he has been active in inventing ways of using QD for game-playing and game content generation applications. Julian received a BA in Philosophy from Lund University in 2002, an MSc in Evolutionary and Adaptive Systems from the University of Sussex in 2003, and a PhD in Computer Science from the University of Essex in 2007.

 

Matthew C. Fontaine

Matthew C. Fontaine is a PhD candidate at the University of Southern California (2019-present). His research blends the areas of discrete optimization, generative models, quality diversity, neuroevolution, procedural content generation, scenario generation in training, and human-robot interaction (HRI) into powerful scenario generation systems that enhance safety when robots interact with humans. In the field of quality diversity, Matthew has made first-author contributions of the Covariance Matrix Adaptation MAP-Elites (CMA-ME) algorithm and recently introduced the Differentiable Quality Diversity (DQD) problem, including the first DQD algorithm MAP-Elites via a Gradient Arborescence (MEGA). He is also a maintainer of the Pyribs quality diversity optimization library, a library implementing many quality diversity algorithms for continuous optimization. Matthew received his BS (2011) and MS (2013) degrees from the University of Central Florida (UCF) and first studied quality diversity algorithms through coursework with Ken Stanley. He was a research assistant in the Interactive Realities Lab (IRL) at the Institute for Simulation and Training (IST) at UCF from 2008-2014 studying human training, a teaching faculty member at UCF from 2014 to 2017, and a software engineer in simulation at Drive.ai working on scenario generation in autonomous vehicles from 2017-2018.

Amy K Hoover

QuantOpt — Quantum Optimization Workshop

Summary

Scope

Quantum computers are rapidly becoming more powerful and increasingly applicable to solve problems in the real world. They have the potential to solve extremely hard computational problems, which are currently intractable by conventional computers. Quantum optimization is an emerging field that focuses on using quantum computing technologies to solve hard optimization problems.

There are two main types of quantum computers, quantum annealers and quantum gate computers.

Quantum annealers are specially tailored to solve combinatorial optimization problems: they have a simpler architecture, are more easily manufactured, and are currently able to tackle larger problems as they have a larger number of qubits. These computers find (near-)optimal solutions of a combinatorial optimization problem via quantum annealing, which is similar to traditional simulated annealing. Whereas simulated annealing uses 'thermal' fluctuations to converge to the state of minimum energy (the optimal solution), quantum annealing adds quantum tunnelling, which provides a faster mechanism for moving between states and hence faster processing.

Quantum gate computers are general-purpose quantum computers. They use quantum logic gates, basic quantum circuits operating on a small number of qubits, for computation; an algorithm is constructed as a fixed sequence of such gates. Some quantum algorithms, e.g., Grover's algorithm, have provable quantum speed-up. These computers can be used to solve combinatorial optimization problems using the quantum approximate optimization algorithm.

Quantum computers have also given rise to quantum-inspired computers and quantum-inspired optimisation algorithms.

Quantum-inspired computers use dedicated conventional hardware technology to emulate/simulate quantum computers. These computers offer a programming interface similar to that of quantum computers, can currently solve much larger combinatorial optimization problems than quantum computers, and do so much faster than traditional computers.

Quantum-inspired optimisation algorithms use classical computers to simulate physical phenomena such as superposition and entanglement, in an attempt to retain some of the benefits of quantum computation on conventional hardware when searching for solutions.

To solve optimization problems on a quantum annealer, or on a quantum gate computer using the quantum approximate optimization algorithm, we need to reformulate them in a format suitable for the quantum hardware, in terms of qubits, biases and couplings between qubits. In mathematical terms, this requirement translates to reformulating the optimization problem as a Quadratic Unconstrained Binary Optimisation (QUBO) problem, which is closely related to the renowned Ising model. QUBO constitutes a universal class, since in principle all combinatorial optimization problems can be formulated as QUBOs. In practice, some classes of optimization problems can be naturally mapped to a QUBO, whereas others are much more challenging to map. On quantum gate computers, Grover's algorithm can be used to optimize a function by transforming the optimization problem into a series of decision problems. The most challenging part in this case is to select an appropriate representation of the problem so as to obtain the quadratic speed-up of Grover's algorithm over classical algorithms for the same problem.
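
To make the QUBO reformulation concrete, the sketch below encodes a small, made-up max-cut instance as a QUBO matrix and minimises it by exhaustive enumeration. The brute-force search merely stands in for the quantum annealer or QAOA run that would be used on real hardware, and is only feasible at this toy size.

```python
import itertools

# Hypothetical 5-node graph given as an edge list; max-cut asks for a binary
# labelling x that maximises the number of edges whose endpoints differ.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 4), (3, 4)]
n = 5

# QUBO formulation: maximising the cut, sum over edges of (x_i + x_j - 2 x_i x_j),
# is equivalent to minimising x^T Q x with the coefficients below.
Q = [[0.0] * n for _ in range(n)]
for i, j in edges:
    Q[i][i] -= 1.0
    Q[j][j] -= 1.0
    Q[i][j] += 2.0

def qubo_energy(x):
    """Energy x^T Q x for a binary assignment x (lower is better)."""
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

# Brute-force minimisation over all 2^n assignments; on real hardware this
# role is played by the quantum annealer or by QAOA on a gate computer.
best = min(itertools.product((0, 1), repeat=n), key=qubo_energy)
print("best assignment:", best, "cut size:", int(-qubo_energy(best)))
```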

Content

A major application domain of quantum computers is solving hard combinatorial optimization problems. This is the emerging field of quantum optimization. The aim of the workshop is to provide a forum for both scientific presentations and discussion of issues related to quantum optimization.

As the algorithms that quantum computers use for optimization can be regarded as general types of heuristic optimization algorithms, there are potentially great benefits and synergies in bringing together the communities of quantum computing and heuristic optimization for mutual learning.

The workshop aims to be as inclusive as possible, and welcomes contributions from all areas broadly related to quantum optimization, and by researchers from both academia and industry.

Particular topics of interest include, but are not limited to:

• Formulation of optimisation problems as QUBOs (including handling of non-binary representations and constraints)
• Fitness landscape analysis of QUBOs
• Novel search algorithms to solve QUBOs
• Experimental comparisons on QUBO benchmarks
• Theoretical analysis of search algorithms for QUBOs
• Speed-up experiments on traditional hardware vs quantum(-inspired) hardware
• Decomposition of optimisation problems for quantum hardware
• Application of the quantum approximate optimization algorithm
• Application of Grover's algorithm to solve optimisation problems
• Novel quantum-inspired optimisation algorithms
• Optimization/discovery of quantum circuits
• Quantum optimisation for machine learning problems
• Optical annealing
• Dealing with noise in quantum computing
• Quantum gate optimisation, quantum coherent control

Organizers

Alberto Moraglio

Alberto Moraglio is a Senior Lecturer at the University of Exeter, UK. He holds a PhD in Computer Science from the University of Essex and Master and Bachelor degrees (Laurea) in Computer Engineering from the Polytechnic University of Turin, Italy. He is the founder of a Geometric Theory of Evolutionary Algorithms, which unifies Evolutionary Algorithms across representations and has been used for the principled design and rigorous theoretical analysis of new successful search algorithms. He has given several tutorials at GECCO, IEEE CEC and PPSN, and has an extensive publication record on this subject. He has served as co-chair for the GP track, the GA track and the Theory track at GECCO. He has also co-chaired the European Conference on Genetic Programming twice, and is an associate editor of the Genetic Programming and Evolvable Machines journal. He has applied his geometric theory to derive a new form of Genetic Programming based on semantics, with appealing theoretical properties, which is rapidly gaining popularity in the GP community. In the last three years, Alberto has been collaborating with Fujitsu Laboratories on optimisation on quantum annealing machines. He has formulated dozens of combinatorial optimisation problems in a format suitable for the quantum hardware. He is also the inventor of a software tool (a compiler), a patented invention, aimed at making these machines usable without specific expertise by automating the translation of high-level descriptions of combinatorial optimisation problems into a low-level format suitable for the quantum hardware.

Serban Georgescu

Serban Georgescu holds a PhD in Engineering from The University of Tokyo and a degree in Applied Mathematics from the University of Bucharest, Romania. After finishing his PhD in 2009 on the topic of iterative solvers in heterogeneous computing environments, he joined the Computational Science & Engineering Lab at ETH Zurich as a post-doctoral researcher. Serban joined Fujitsu Research Europe in 2010, where he is currently a research manager in the Artificial Intelligence Laboratory. One of the main focuses of Serban's research group at Fujitsu is making new technologies, such as AI and quantum-inspired computing, easily accessible to non-experts and useful to society.

Francisco Chicano

Francisco Chicano holds a PhD in Computer Science from the University of Málaga and a Degree in Physics from the National Distance Education University. Since 2008 he has been with the Department of Languages and Computing Sciences of the University of Málaga. His research interests include quantum computing, the application of search techniques to Software Engineering problems and the use of theoretical results to efficiently solve combinatorial optimization problems. He is on the editorial board of the Evolutionary Computation Journal, Engineering Applications of Artificial Intelligence, the Journal of Systems and Software, ACM Transactions on Evolutionary Learning and Optimization, and Mathematical Problems in Engineering. He has also been programme chair and Editor-in-Chief of international events.

Darrell Whitley

Prof. Darrell Whitley has been active in Evolutionary Computation since 1986, and has published more than 250 papers. These papers have garnered more than 28,000 citations. Dr. Whitley’s H-index is 68. He introduced the first “steady state genetic algorithm” with rank based selection, published some of the earliest papers on neuroevolution, and has worked on dozens of real world applications of evolutionary algorithms. He has served as Editor-in-Chief of the journal Evolutionary Computation, and served as Chair of the Governing Board of ACM Sigevo from 2007 to 2011. He is a Fellow of the ACM recognized for his contributions to Evolutionary Computation, and he has been awarded the 2022 IEEE PIONEER Award in Evolutionary Computation.

 

Oleksandr Kyriienko

Dr. Oleksandr Kyriienko is a theoretical physicist and the leader of the Quantum Dynamics, Optics, and Computing group (https://kyriienko.github.io/). He is a Lecturer (Assistant Professor) at the Physics department of the University of Exeter. Oleksandr obtained his PhD degree in 2014 from the University of Iceland, and was a visiting PhD student at several institutions, including Nanyang Technological University in Singapore. From 2014 to 2017 he did postdoctoral research at the Niels Bohr Institute, University of Copenhagen. In 2017-2019 he was a Fellow at the Nordic Institute for Theoretical Physics (NORDITA) in Stockholm, Sweden. Oleksandr's research encompasses various areas of quantum technologies, from designing quantum algorithms and simulators to nonlinear quantum optics in two-dimensional materials. Recently, he has been working towards developing quantum machine learning and quantum-based solvers of nonlinear differential equations. Dr. Kyriienko has a strong interest in approaches to quantum optimisation, which represents one of the pinnacles of modern quantum computing.

 

Denny Dahl

Dr. Edward (Denny) Dahl is Director of Quantum Applications at ColdQuanta. He works with quantum software providers and end users to map problems to quantum computing platforms. Previously he developed quantum applications at D-Wave Systems. His experience includes the application of massively parallel computation, neural networks and distributed data flow computation models. Dr. Dahl holds several patents in computer science and algorithms. Denny received his Ph.D. in high-energy theoretical physics from Stanford University. His thesis work at the Stanford Linear Accelerator Center applied Hamiltonian methods to lattice gauge theory. As a postdoc at Lawrence Livermore National Laboratory, he applied Hopfield networks to optimization problems and developed acceleration techniques for stochastic gradient descent, which is the standard training technique for neural networks. Additionally, Dr. Dahl is a guest scientist at Los Alamos National Laboratory. He has taught many training courses in quantum annealing at universities, government labs and commercial entities. His recent publications in Science and Physical Review B describe how to use quantum annealers to simulate two-dimensional spin ice and spin liquids.

Ofer Shir

Ofer Shir is an Associate Professor of Computer Science. He currently serves as the Head of the Computer Science Department at Tel-Hai College, and as a Principal Investigator at the Migal-Galilee Research Institute, both located in the Upper Galilee, Israel. Ofer Shir holds a BSc in Physics and Computer Science from the Hebrew University of Jerusalem, Israel (conferred 2003), and both an MSc and a PhD in Computer Science from Leiden University, The Netherlands (conferred 2004 and 2008; PhD advisers: Thomas Bäck and Marc Vrakking). Upon his graduation, he completed a two-year term as a Postdoctoral Research Associate at Princeton University, USA (2008-2010), hosted by Prof. Herschel Rabitz in the Department of Chemistry, where he specialized in computational aspects of experimental quantum systems. He then joined IBM Research as a Research Staff Member (2010-2013), which constituted his second postdoctoral term, and where he gained real-world experience in convex and combinatorial optimization as well as in decision analytics. His current topics of interest include Statistical Learning in Theory and in Practice, Experimental Optimization, Theory of Randomized Search Heuristics, Scientific Informatics, Natural Computing, Computational Intelligence in Physical Sciences, Quantum Control and Quantum Machine Learning.

Lee Spector

Dr. Lee Spector is a Professor of Computer Science at Amherst College, an Adjunct Professor and member of the graduate faculty in the College of Information and Computer Sciences at the University of Massachusetts, Amherst, and an affiliated faculty member at Hampshire College, where he taught for many years before moving to Amherst College. He received a B.A. in Philosophy from Oberlin College in 1984, and a Ph.D. from the Department of Computer Science at the University of Maryland in 1992. At Hampshire College he held the MacArthur Chair, served as the elected faculty member of the Board of Trustees, served as the Dean of the School of Cognitive Science, served as Co-Director of Hampshire’s Design, Art and Technology program, supervised the Hampshire College Cluster Computing Facility, and served as the Director of the Institute for Computational Intelligence. At Amherst College he teaches computer science and directs an initiative on Artificial Intelligence and the Liberal Arts. His research and teaching focus on artificial intelligence and intersections of computer science with cognitive science, philosophy, physics, evolutionary biology, and the arts. He is the Editor-in-Chief of the Springer journal Genetic Programming and Evolvable Machines and a member of the editorial boards of the MIT Press journal Evolutionary Computation and the ACM journal Transactions on Evolutionary Learning and Optimization. He is a member of the Executive Committee of the ACM Special Interest Group on Genetic and Evolutionary Computation (SIGEVO) and he has produced over 100 scientific publications. He serves regularly as a reviewer and as an organizer of professional events, and his research has been supported by the U.S. National Science Foundation and DARPA among other funding sources. Among the honors that he has received is the highest honor bestowed by the U.S. National Science Foundation for excellence in both teaching and research, the NSF Director's Award for Distinguished Teaching Scholars.

SAEOPT — Workshop on Surrogate-Assisted Evolutionary Optimisation

https://saeopt.bitbucket.io/

Summary

In many real-world optimisation problems evaluating the objective function(s) is expensive, perhaps requiring days of computation for a single evaluation. Surrogate-assisted optimisation attempts to alleviate this problem by employing computationally cheap 'surrogate' models to estimate the objective function(s) or the ranking relationships of the candidate solutions.
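
A minimal sketch of this idea follows, assuming a hypothetical expensive objective (stood in for here by the Rosenbrock function) and a hand-rolled radial-basis-function interpolant as the cheap surrogate: in each generation, many candidate offspring are screened by the surrogate and only the most promising one is sent to the expensive evaluation.

```python
import numpy as np

def expensive_objective(x):
    """Stand-in for a costly simulation (e.g., a CFD run); Rosenbrock here."""
    return float((1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2)

def fit_rbf_surrogate(X, y, length_scale=0.5):
    """Fit a simple radial-basis-function interpolant as a cheap surrogate."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    K = np.exp(-(d / length_scale) ** 2) + 1e-8 * np.eye(len(X))
    weights = np.linalg.solve(K, y)
    def predict(x):
        k = np.exp(-(np.linalg.norm(X - x, axis=1) / length_scale) ** 2)
        return float(k @ weights)
    return predict

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(10, 2))               # small initial design
y = np.array([expensive_objective(x) for x in X])  # expensive evaluations

for _ in range(20):                                # surrogate-assisted loop
    surrogate = fit_rbf_surrogate(X, y)
    parent = X[np.argmin(y)]
    candidates = parent + rng.normal(0, 0.3, size=(200, 2))  # cheap offspring
    best = candidates[np.argmin([surrogate(c) for c in candidates])]
    X = np.vstack([X, best])                       # evaluate only the most
    y = np.append(y, expensive_objective(best))    # promising candidate

print("best found:", X[np.argmin(y)], "objective:", y.min())
```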

Surrogate-assisted approaches have been widely used across the field of evolutionary optimisation, including continuous and discrete variable problems, although little work has been done on combinatorial problems. Surrogates have been employed in solving a variety of optimization problems, such as multi-objective optimisation, dynamic optimisation, and robust optimisation. Surrogate-assisted methods have also found successful applications to aerodynamic design optimisation, structural design optimisation, data-driven optimisation, chip design, drug design, robotics, and many more. Most interestingly, the need for on-line learning of the surrogates has led to a fruitful crossover between the machine learning and evolutionary optimisation communities, where advanced learning techniques such as ensemble learning, active learning, semi-supervised learning and transfer learning have been employed in surrogate construction.

Despite recent successes in using surrogate-assisted evolutionary optimisation, there remain many challenges. This workshop aims to promote the research on surrogate assisted evolutionary optimization including the synergies between evolutionary optimisation and learning. Thus, this workshop will be of interest to a wide range of GECCO participants. Particular topics of interest include (but are not limited to):

  • Bayesian optimisation
  • Advanced machine learning techniques for constructing surrogates
  • Model management in surrogate-assisted optimisation
  • Multi-level, multi-fidelity surrogates
  • Complexity and efficiency of surrogate-assisted methods
  • Small and big data-driven evolutionary optimization
  • Model approximation in dynamic, robust, and multi-modal optimisation
  • Model approximation in multi- and many-objective optimisation
  • Surrogate-assisted evolutionary optimisation of high-dimensional problems
  • Comparison of different modelling methods in surrogate construction
  • Surrogate-assisted identification of the feasible region
  • Comparison of evolutionary and non-evolutionary approaches with surrogate models
  • Test problems for surrogate-assisted evolutionary optimisation
  • Performance improvement techniques in surrogate-assisted evolutionary computation
  • Performance assessment of surrogate-assisted evolutionary algorithms

Organizers

Alma Rahat

Dr Alma Rahat is a Senior Lecturer in Data Science at Swansea University. He is an expert in Bayesian search and optimisation for computationally expensive problems (for example, geometry optimisation using computational fluid dynamics). His particular expertise is in developing effective acquisition functions for single and multi-objective problems, and locating the feasible space. He is one of the twenty-four members of the IEEE Computational Intelligence Society Task Force on Data-Driven Evolutionary Optimization of Expensive Problems, and he has been the lead organiser for the popular Surrogate-Assisted Evolutionary Optimisation workshop at the prestigious Genetic and Evolutionary Computation Conference (GECCO) since 2016. He has a strong track record of working with industry on a broad range of optimisation problems. His collaborations have resulted in numerous articles in top journals and conferences, including a best paper in Real World Applications track at GECCO and a patent. Dr Rahat has a BEng (Hons.) in Electronic Engineering from the University of Southampton, UK, and a PhD in Computer Science from the University of Exeter, UK. He worked as a product development engineer after his bachelor's degree, and held post-doctoral research positions at the University of Exeter. Before moving to Swansea, he was a Lecturer in Computer Science at the University of Plymouth, UK.

 

Richard Everson

Richard Everson is Professor of Machine Learning and Director of the Institute of Data Science and Artificial Intelligence at the University of Exeter. His research interests lie in statistical machine learning and multi-objective optimisation, and the links between them. Current research is on surrogate methods, particularly Bayesian optimisation, for large expensive-to-evaluate optimisation problems, especially computational fluid dynamics design optimisation.

Jonathan Fieldsend

Jonathan Fieldsend is Professor of Computational Intelligence at the University of Exeter, UK. He has a degree in Economics from Durham University, a Masters in Computational Intelligence from the University of Plymouth and a PhD in Computer Science from the University of Exeter. He has over 100 peer-reviewed publications in the evolutionary computation and machine learning domains, with particular interests in multiple-objective optimisation and the interface between optimisation and machine learning. Over the years, he has been a co-organiser of a number of different workshops at GECCO (VizGEC, SAEOpt and EAPwU), as well as EMO Track Chair at GECCO 2019 and GECCO 2020. He is an Associate Editor of IEEE Transactions on Evolutionary Computation and ACM Transactions on Evolutionary Learning and Optimization, and is on the Editorial Board of Complex & Intelligent Systems. He is a vice-chair of the IEEE Computational Intelligence Society (CIS) Task Force on Data-Driven Evolutionary Optimisation of Expensive Problems, and sits on the IEEE CIS Task Force on Multi-modal Optimisation and the IEEE CIS Task Force on Evolutionary Many-Objective Optimisation.

 

Handing Wang

Handing Wang received the B.Eng. and Ph.D. degrees from Xidian University, Xi'an, China, in 2010 and 2015, respectively. She is currently a professor with the School of Artificial Intelligence, Xidian University, Xi'an, China. Dr. Wang is an Associate Editor of IEEE Computational Intelligence Magazine and Complex & Intelligent Systems, and chair of the Task Force on Intelligent Systems for Health within the Intelligent Systems Applications Technical Committee of the IEEE Computational Intelligence Society. Her research interests include nature-inspired computation, multi- and many-objective optimization, multiple criteria decision making, and real-world problems. She has published over 10 papers in international journals, including IEEE Transactions on Evolutionary Computation (TEVC), IEEE Transactions on Cybernetics (TCYB), and Evolutionary Computation (ECJ).

 

Yaochu Jin

Yaochu Jin received the B.Sc., M.Sc., and Ph.D. degrees from Zhejiang University, Hangzhou, China, in 1988, 1991, and 1996 respectively, and the Dr.-Ing. degree from Ruhr University Bochum, Germany, in 2001. He is Professor in Computational Intelligence, Department of Computer Science, University of Surrey, Guildford, U.K., where he heads the Nature Inspired Computing and Engineering Group. He is also a Finland Distinguished Professor, University of Jyvaskyla, Finland and a Changjiang Distinguished Professor, Northeastern University, China. His main research interests include evolutionary computation, machine learning, computational neuroscience, and evolutionary developmental systems, with their application to data-driven optimization and decision-making, self-organizing swarm robotic systems, and bioinformatics. He has (co)authored over 200 peer-reviewed journal and conference papers and has been granted eight patents on evolutionary optimization. Dr Jin is the Editor-in-Chief of the IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS and Complex & Intelligent Systems. He was an IEEE Distinguished Lecturer (2013-2015) and Vice President for Technical Activities of the IEEE Computational Intelligence Society (2014-2015). He was the recipient of the Best Paper Award of the 2010 IEEE Symposium on Computational Intelligence in Bioinformatics and Computational Biology and the 2014 IEEE Computational Intelligence Magazine Outstanding Paper Award. He is a Fellow of IEEE.

Tinkle Chugh

Dr Tinkle Chugh is a Lecturer in Computer Science at the University of Exeter. Between February 2018 and June 2020, he worked as a Postdoctoral Research Fellow on the "BIG data methods for improving windstorm FOOTprint prediction" project funded by the Natural Environment Research Council UK. He obtained his PhD degree in Mathematical Information Technology in 2017 from the University of Jyväskylä, Finland. His thesis was part of the Decision Support for Complex Multiobjective Optimization Problems project, where he collaborated with Finland Distinguished Professor (FiDiPro) Yaochu Jin from the University of Surrey, UK. His research interests are machine learning, data-driven optimization, evolutionary computation and decision making.

SECDEF — Genetic and Evolutionary Computation in Defense, Security, and Risk Management

https://secdef.cs.dal.ca/

Summary

With the constant appearance of new threats, research in the areas of defense, security and risk management has acquired an increasing importance over the past few years. These new challenges often require innovative solutions and computational intelligence techniques can play a significant role in finding them.
For the last eight years, we have been organizing the SecDef workshop at GECCO to seek both theoretical developments and applications of Genetic and Evolutionary Computation and their hybrids to the following (and other related) topics:
• Cyber-crime and cyber-defense: anomaly detection systems, attack prevention and defense, threat forecasting systems, anti-spam, antivirus systems, cyber warfare, cyber fraud;
• IT Security: Intrusion detection, behavior monitoring, network traffic analysis;
• Risk management: identification, prevention, monitoring and handling of risks, risk impact and probability estimation systems, contingency plans, real time risk management;
• Critical Infrastructure Protection (CIP);
• Military, counter-terrorism and other defense-related aspects.
The workshop invites both completed and ongoing work, with the aim of encouraging communication between active researchers and practitioners to better understand the current scope of efforts within this domain. The ultimate goal is to understand, discuss, and help set future directions for computational intelligence in security and defense problems.

Organizers

Erik Hemberg

Erik Hemberg is a Research Scientist in the AnyScale Learning For All (ALFA) group at Massachusetts Institute of Technology Computer Science and Artificial Intelligence Lab, USA. He has a PhD in Computer Science from University College Dublin, Ireland and an MSc in Industrial Engineering and Applied Mathematics from Chalmers University of Technology, Sweden. His work focuses on developing autonomous, pro-active cyber defenses that are anticipatory and adapt to counter attacks. He is also interested in automated semantic parsing of law, and data science for education and healthcare.

 

Marwa A. Elsayed

Marwa A. Elsayed is an NSERC postdoctoral fellow at the Faculty of Computer Science, Dalhousie University, Canada. She also works as a research consultant at Queen's School of Computing, where she received her Ph.D. degree in November 2018. Her research interests focus on cybersecurity, spanning the areas of Cloud Computing, Big Data Systems, the Internet of Things (IoT), and advanced network technologies, leveraging data science, data analytics, machine learning (ML), and software engineering principles. She is a review editor on the editorial board of the Elsevier Journal of Computers and Security, and of Frontiers in Communications and Networks (specialty section on Networks). She is also a member of the review committees of several IEEE international conferences and journals. Dr. Elsayed's research has received several academic recognitions and three best paper awards at top international conferences.

SymReg — Symbolic Regression

https://heal.heuristiclab.com/research/symbolic-regression-workshop

Summary

Symbolic regression designates the search for symbolic models that describe a relationship in provided data. Symbolic regression was one of the first applications of genetic programming and as such is tightly connected to evolutionary algorithms. However, in recent years several non-evolutionary techniques for solving symbolic regression have emerged. Especially with the focus on interpretability and explainability in AI research, symbolic regression takes a leading role among machine learning methods whenever model inspection and understanding by a domain expert are desired. Examples where symbolic regression already produces outstanding results include modeling where interpretability is desired, modeling of non-linear dependencies, modeling with small data sets or noisy data, modeling with additional constraints, and modeling of differential equation systems.
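
As a minimal, self-contained illustration of the underlying search problem (and not of any particular workshop contribution), the sketch below fits a toy data set by plain random search over small expression trees; genetic-programming-based symbolic regression replaces the random search with population-based variation (crossover and mutation) of subtrees.

```python
import random

# Toy data from a hidden relationship y = x^2 + x; a symbolic regressor
# searches for an explicit formula that reproduces it.
xs = [i / 10 for i in range(-20, 21)]
ys = [x ** 2 + x for x in xs]

OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b, "*": lambda a, b: a * b}

def random_tree(depth=3, rng=random):
    """Grow a random expression tree over {x, constants, +, -, *}."""
    if depth == 0 or rng.random() < 0.3:
        return "x" if rng.random() < 0.5 else round(rng.uniform(-2, 2), 2)
    op = rng.choice(list(OPS))
    return (op, random_tree(depth - 1, rng), random_tree(depth - 1, rng))

def evaluate(tree, x):
    """Recursively evaluate an expression tree at input value x."""
    if tree == "x":
        return x
    if isinstance(tree, (int, float)):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def mse(tree):
    return sum((evaluate(tree, x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Plain random search over expression trees; GP-based symbolic regression
# would instead evolve a population of trees via subtree crossover/mutation.
rng = random.Random(1)
best = min((random_tree(rng=rng) for _ in range(20000)), key=mse)
print("best expression:", best, "MSE:", round(mse(best), 4))
```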

The focus of this workshop is to further advance the state-of-the-art in symbolic regression by gathering experts in the field of symbolic regression and facilitating an exchange of novel research ideas. Therefore, we encourage submissions presenting novel techniques or applications of symbolic regression, theoretical work on issues of generalization, size and interpretability of the models produced, or algorithmic improvements to make the techniques more efficient, more reliable and generally better controlled. Furthermore, we invite participants of the symbolic regression competition to present their algorithms and results in detail at this workshop.

Particular topics of interest include, but are not limited to:

  • evolutionary and non-evolutionary algorithms for symbolic regression
  • improving stability of symbolic regression algorithms
  • integration of side-information (physical laws, constraints, ...)
  • benchmarking symbolic regression algorithms
  • symbolic regression for scientific machine learning
  • innovative symbolic regression applications

Organizers

Michael Kommenda

Michael Kommenda is a senior researcher and project manager at the University of Applied Sciences Upper Austria, where he leads several applied research projects with a focus on machine learning and data-based modeling. He received his PhD in technical sciences in 2018 from the Johannes Kepler University Linz, Austria. The title of his dissertation is Local Optimization and Complexity Control for Symbolic Regression, which condenses his research on symbolic regression so far. Michael's current research interest is improving symbolic regression so that it becomes an established regression and machine learning technique. Additionally, Michael is one of the architects of the HeuristicLab optimization framework and has contributed significantly to its genetic programming and symbolic regression implementation.

William La Cava

William La Cava is a faculty member in the Computational Health Informatics Program (CHIP) at Boston Children’s Hospital and Harvard Medical School. He received his PhD from UMass Amherst with a focus on interpretable modeling of dynamical systems. Prior to joining CHIP, he was a post-doctoral fellow and research associate in the Institute for Biomedical Informatics at the University of Pennsylvania.

 

Gabriel Kronberger

Gabriel Kronberger is full professor at the University of Applied Sciences Upper Austria and has been working on algorithms for symbolic regression since his PhD thesis that he defended in 2010. He is currently heading the Josef Ressel Center for Symbolic Regression (https://symreg.at), a five-year nationally funded effort focused on developing improved SymReg methods and applications in collaboration with several Austrian company partners. His current research interests are symbolic regression for scientific machine learning and industrial applications. Gabriel has authored or co-authored 94 publications (SCOPUS) and has been a member of the Program Committee for the GECCO Genetic Programming track since 2016.

 

Steven Gustafson

Steven Gustafson received his PhD in Computer Science and Artificial Intelligence, and shortly thereafter was awarded IEEE Intelligent Systems' "AI's 10 to Watch" for his work on algorithms that discover algorithms. For more than 10 years at GE's corporate R&D center he was a leader in AI and a successful technical lab manager, all while inventing and deploying state-of-the-art AI systems for almost every GE business, from GE Capital to NBC Universal and GE Aviation. He has over 50 publications and 13 patents, and was a co-founder and Technical Editor in Chief of the Memetic Computing Journal. Steven has chaired various conferences and workshops, including the first Symbolic Regression and Modeling (SRM) Workshop at GECCO 2009 and subsequent workshops from 2010 to 2014. As the Chief Scientist at Maana, a Knowledge Platform software company, he invented and architected new AutoML and NLP techniques with publications in AAAI and IJCAI. Steven is currently the CTO of Noonum, an investment intelligence company, where he is pushing the state of the art in large-scale knowledge graph, NLP and machine learning decision support systems.