IEEE Computational Intelligence Society Webinar Competition 2019
IEEE Conference on Games (CoG) 2019 Competitions
The following competitions are being held at CoG 2019:
- 4th Angry Birds Level Generation Competition
- Bot Bowl I
- Fighting Game AI Competition
- First TextWorld Problems: A Reinforcement and Language Learning Challenge
- Geometry Friends Game AI Competition
- General Video Game AI Competitions
- Hanabi Competition
- Hearthstone AI competition
- MicroRTS AI Competition
- Short Video Competition
- StarCraft AI Competition
- Strategy Card Game AI Competition
IEEE Congress On Evolutionary Computation (CEC) 2019 Competitions
The following competitions are being held at CEC 2019:
- CEC-C01 Competition on "Multimodal Multiobjective Optimization"
- CEC-C02 Competition on "Evolutionary Multi-task Optimization"
- CEC-C03 Competition on "Online Data-Driven Multi-Objective Optimization Competition"
- CEC-C04 Competition on "Smart Grid and Sustainable Energy Systems"
- CEC-C05 Competition on "Evolutionary Computation in Uncertain Environments: A Smart Grid Application"
- CEC-C06 Competition on "100-Digit Challenge on Single Objective Numerical Optimization"
- CEC-C07 FML-based Machine Learning Competition for Human and Smart Machine Co-Learning on Game of Go
- CEC-C08 General Video Game AI Single-Player Learning Competition
- CEC-C09 Strategy Card Game AI Competition
- CEC-C10 Nonlinear Equation Systems Competition
- CEC-C11 Competition on Large-Scale Global Optimization
- CEC-C12 Divide-the-Dollar Competition
- CEC-C13 Continuous derivative-free optimization competition
4TH ANGRY BIRDS LEVEL GENERATION COMPETITION
Description: This year we will run our fourth Angry Birds Level Generation Competition. The goal of this competition is to build computer programs that can automatically create fun and challenging Angry Birds levels. The difficulty of this competition compared to similar competitions is that the generated levels must be stable under gravity, robust in the sense that a single action should not destroy large parts of the generated structure, and, most importantly, fun to play, visually interesting, and challenging to solve. Participants can ensure the solvability and difficulty of their levels by using open-source Angry Birds AI agents that were developed for the Angry Birds AI competition. This competition will evaluate each level generator based on the overall fun or enjoyment factor of the levels it creates. Aside from the main prize for "most enjoyable levels", two additional prizes for "most aesthetic levels" and "most challenging levels" will also be awarded. This evaluation will be done by an impartial panel of judges. Restrictions will be placed on which objects can be used in the generated levels (in order to prevent pre-generation of levels). We will generate 100 levels for each submitted generator and randomly select a fraction of those for the competition. There will be a penalty if levels are too similar. Each entrant will be evaluated for all prizes. More details on the competition rules can be found on the competition website aibirds.org. The competition will be based on "Science Birds", Lucas Ferreira's Unity3D implementation of the game physics.
Competition Webpage: https://aibirds.org
BOT BOWL I
- Niels Justesen, PhD student, IT University of Copenhagen
- Nicolai Overgaard Larsen, Danish Eurobowl Captain, former Eurobowl Committee Chairman
- Sebastian Risi, Associate Professor, IT University of Copenhagen
- Julian Togelius, Associate Professor, New York University
Description: Bot Bowl will be an AI competition using the Fantasy Football AI (FFAI) framework. FFAI simulates the board game Blood Bowl by Games Workshop and offers APIs for scripted bots and ML algorithms in Python. Blood Bowl is a major challenge for artificial agents due to its complexity and sparse rewards. The competition will have two tracks: 1) The main track will use the traditional board size of 26x15 squares with 11 players on each side. 2) The mini track will use a custom board size of 12x5 squares with only 3 players on each side. If the mini track reaches human-level performance this year, we will scale it up next year. We will test the winners against human players at the end of the competition. Both tracks will be limited to a single predefined human team. In the future, the competition can be extended to allow multiple races from the rulebook (orcs, elves, etc.), custom-made rosters, and board layouts (such as procedurally generated dungeons). The competition format will be round-robin followed by a single final: Bot Bowl I.
-  https://github.com/njustesen/ffai
-  Justesen, Niels, Sebastian Risi, and Julian Togelius. Blood Bowl: The Next Board Game Challenge for AI. FDG 2018, 1st Workshop on Tabletop Games, (2018).
Competition Webpage: We are using the GitHub page for FFAI as the main information hub, along with a Discord server (invitation link is on GitHub): https://github.com/njustesen/ffai
FIGHTING GAME AI COMPETITION
- Ruck Thawonmas
Description: What are promising techniques for developing general fighting-game AIs whose performance is robust against a variety of settings and opponents? The Java-based FightingICE platform is used, which also supports Python programming and the development of visual-based deep learning AIs. Two leagues (Standard and Speedrunning) are associated with each of the three character types: Zen, Garnet, and Lud, where the character data of the last one is not revealed. In the Standard League, the winner of a round is the AI whose hit points (HP) are above zero at the time its opponent's HP reaches zero. In the Speedrunning League, the league winner for a given character type is the AI with the shortest average time to beat our sample MCTS AI. The competition winner is decided from both leagues' results based on the 2015 Formula-1 scoring system.
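As a concrete illustration, combining the two league rankings can be sketched with the 2015 Formula-1 points table (25-18-15-12-10-8-6-4-2-1 for the top ten finishers); the exact tie-breaking and aggregation details are defined by the organizers, so the helper below is only an assumed reading of how points might be totalled across leagues.

```python
# 2015 Formula-1 points for ranks 1..10 (0 points beyond 10th place).
F1_2015_POINTS = [25, 18, 15, 12, 10, 8, 6, 4, 2, 1]

def league_points(rank):
    """Points awarded for a 1-indexed league rank."""
    return F1_2015_POINTS[rank - 1] if 1 <= rank <= len(F1_2015_POINTS) else 0

def combined_score(standard_rank, speedrunning_rank):
    """Assumed total of an AI's points over the two leagues of one character type."""
    return league_points(standard_rank) + league_points(speedrunning_rank)
```

For example, ranking 2nd in the Standard League and 3rd in the Speedrunning League would yield 18 + 15 = 33 points under this reading.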
Competition Webpage: http://www.ice.ci.ritsumei.ac.jp/~ftgaic/
FIRST TEXTWORLD PROBLEMS: A REINFORCEMENT AND LANGUAGE LEARNING CHALLENGE
- Marc-Alexandre Cote
- Wendy Tay
- Tavian Barnes
- Eric Yuan
- Adam Trischler
Description: The goal of this competition is to build an AI agent that can play efficiently and win simplified text-based games. We hope to highlight the limitations of existing Reinforcement Learning models when combined with Natural Language Processing. Therefore, any agent that doesn't show learning behaviors will be penalized. Enter your submission for a chance to win $2000 USD and more in prizes! The competition runs until June 1st, 2019.
The agent must navigate and interact within a text environment, i.e. the agent perceives the environment through text and acts in it using text commands. The agent would need skills like:
- language understanding
- dealing with a combinatorial action space
- efficient exploration
- sequential decision-making
In this competition, all the games share a similar theme (cooking in a modern house), similar text commands, and similar entities (i.e. interactable objects within the games). To better understand the games, check out the Jupyter notebook found in the starting kit.
The simplified games were generated using TextWorld (https://www.microsoft.com/en-us/research/project/textworld/). TextWorld is an open-source framework that both generates and interfaces with text-based games. You can use TextWorld to train your agents.
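To make the perceive-through-text, act-through-text loop concrete, here is a minimal self-contained sketch of one episode. `StubTextEnv`, its commands, and the return signature are toy stand-ins invented for illustration; the actual TextWorld API (see the starting kit) differs.

```python
import random

class StubTextEnv:
    """A toy stand-in for a text-based game: observations are text and the
    agent acts with text commands. (Illustrative only; not the TextWorld API.)"""
    def __init__(self):
        self.steps = 0

    def reset(self):
        self.steps = 0
        return "You are in a kitchen. There is a carrot on the table."

    def step(self, command):
        self.steps += 1
        done = command == "eat carrot"
        reward = 1 if done else 0
        obs = "You eat the carrot." if done else "Nothing happens."
        return obs, reward, done

def random_agent(observation, admissible_commands):
    """Baseline policy: uniform over admissible commands. A real entry would
    use language understanding to choose commands instead."""
    return random.choice(admissible_commands)

env = StubTextEnv()
obs, done, score = env.reset(), False, 0
commands = ["take carrot", "eat carrot", "open fridge"]
while not done and env.steps < 10:
    obs, reward, done = env.step(random_agent(obs, commands))
    score += reward
```

A learning agent would replace `random_agent` with a policy that is updated from the `(observation, command, reward)` stream, which is exactly the behavior this competition rewards.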
Competition Webpage: http://aka.ms/textworld-challenge
GEOMETRY FRIENDS GAME AI COMPETITION
- Rui Prada, INESC-ID and Instituto Superior Técnico, Universidade de Lisboa
- Francisco S. Melo, INESC-ID and Instituto Superior Técnico, Universidade de Lisboa
- João Dias, INESC-ID and Instituto Superior Técnico, Universidade de Lisboa
Description: The goal of the competition is to build AI agents for a 2-player collaborative physics-based puzzle platformer game (Geometry Friends). Each agent controls a different character (circle or rectangle) with distinct characteristics. Their goal is to collaborate in order to collect a set of diamonds across a set of levels as fast as possible. The game presents problems of combined task and motion planning and promotes collaboration at different levels. Participants can tackle cooperative levels with the full complexity of the problem, or single-player levels that involve task and motion planning without the complexity of collaboration.
Competition Webpage: https://geometryfriends.gaips.inesc-id.pt/
Tracks: The competition has 3 tracks:
- Single Player Circle
- Single Player Rectangle
- Cooperative
GENERAL VIDEO GAME AI COMPETITIONS
Scope and Topics: The General Video Game AI (GVG-AI) Competition explores the problem of creating agents for general video game playing. How would you create a single agent that is able to play any game it is given? Could you program an agent that is able to play a wide variety of games, without knowing which games are to be played and without a forward model? How would you create a generator to design game rules or levels?
Five GVGAI competitions are proposed:
- GVGAI Single-Player Planning Track
Submission via http://www.gvgai.net
- GVGAI Two-Player Planning Track
Submission via http://www.gvgai.net
- GVGAI Single-Player Learning Track
Submission via http://www.aingames.cn
- GVGAI Level Generation Track
Submission via http://www.gvgai.net
- GVGAI Rule Generation Track
Submission via http://www.gvgai.net
GVGAI Steering Committee: Jialin Liu, Diego Pérez Liébana, Julian Togelius, Simon M. Lucas
Submission Instructions: The participants are invited to submit their agents via http://www.gvgai.net or http://www.aingames.cn, depending on the track. Submission instructions for each track will be provided separately on the corresponding webpage.
Submission Deadline: Agent submission: 15th July 2019, 23:59 (GMT)
HANABI COMPETITION
Demo Video: https://www.youtube.com/watch?v=O84KgRt6AJI
Description: Write an agent capable of playing the cooperative partially observable card game Hanabi. Agents are written in Java and submitted via our online submission system.
In Hanabi, agents cannot see their own cards but can see the other agents' cards. On their turn, an agent can either play a card from their hand, discard a card from their hand, or spend an information token to tell another player about a feature (rank or suit) of the cards they hold. The players must try to play cards for each suit in rank order. If the group makes 3 errors when executing play actions, the game is over.
Agents will be paired with either copies of their own agent or a set of unknown agents. The winner is the agent that achieves the highest score over a set of unknown deck orderings.
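The play rule described above can be sketched in a few lines; the suits, data layout, and helper below are illustrative assumptions of ours, while the competition's Java framework defines the actual rules.

```python
# Minimal sketch of Hanabi's play rule: a card is legal to play only if its
# rank is exactly one above the current top of its suit's firework pile.
# (Illustrative data layout; the competition's Java framework is definitive.)

def try_play(fireworks, errors, card):
    """fireworks: dict suit -> highest rank played so far (0 if none).
    Returns the updated (fireworks, errors) pair."""
    suit, rank = card
    if fireworks.get(suit, 0) + 1 == rank:
        fireworks = {**fireworks, suit: rank}   # the play succeeds
    else:
        errors += 1                             # a misplay: one of 3 lives lost
    return fireworks, errors

fireworks, errors = {"red": 0, "blue": 0}, 0
fireworks, errors = try_play(fireworks, errors, ("red", 1))  # legal play
fireworks, errors = try_play(fireworks, errors, ("red", 3))  # misplay (needs 2)
game_over = errors >= 3
score = sum(fireworks.values())                 # one point per card played
```

The tension of the game is visible even here: without seeing its own hand, an agent must rely on hints from teammates to know which plays are legal.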
Competition Webpage: http://hanabi.fosslab.uk/
- Mirror Track - Agents Play With Copies Of Their Strategy
In the mirror track, agents play with copies of their own strategy: all agents playing the game use the same strategy.
- Mixed Track - Agents Play With Unknown Strategy
Agents will play with a set of unknown policies (you don't know how they make their decisions) to form the team; for each game, the agent will be paired with a set of n-1 other agents. Scores will then be compared with those of the other competition entrants. Deck orderings will remain consistent for one round, before a different deck ordering is chosen. Player positions will also be randomized between rounds to avoid a player's agent always playing first.
- Learning Track - Agents Play Multiple Games With The Same Group (to Allow For Strategy Learning)
This track was added at the request of last year's competition entrants. It plays similarly to the mixed track, but rather than the paired agents changing every round, the agent set is fixed. This gives the agents the opportunity to learn from the observed moves.
HEARTHSTONE AI COMPETITION
Description: The collectible online card game Hearthstone features a rich testbed and poses unique demands for generating artificial intelligence agents. The game is a turn-based card game between two opponents, using constructed decks of thirty cards along with a selected hero with a unique power. Players use their limited mana crystals to cast spells or summon minions to attack their opponent, with the goal to reduce the opponent's health to zero. The competition aims to promote the stepwise development of fully autonomous AI agents in the context of Hearthstone.
During the game, both players need to play the best combination of hand cards, while facing a large amount of uncertainty. The upcoming card draw, the opponent’s hand cards, as well as some hidden effects played by the opponent can influence the player’s next move and its succeeding rounds. Predicting the opponent’s deck from previously seen cards, and estimating the chances of getting cards of the own deck can help in finding the best cards to be played. Card playing order, their effects, as well as attack targets have a large influence on the player’s chances of winning the game.
In addition to using premade decks, players have the opportunity to create a deck of 30 cards from the over 1000 available in the current game, most of which provide unique effects and card synergies that can help in developing combos. Generating a strong deck is a step towards consistently winning against a diverse set of opponents.
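Estimating the chance of drawing a given card, as mentioned above, is a standard hypergeometric calculation; the helper below is a sketch (function name and interface are our own), using the fact that Hearthstone decks hold 30 cards with at most 2 copies of a card.

```python
from math import comb

def p_draw_at_least_one(copies_in_deck, cards_left, draws):
    """P(at least one of `copies_in_deck` copies appears among the next
    `draws` cards of a `cards_left`-card remaining deck)."""
    if draws >= cards_left:
        return 1.0
    # Complement: probability that none of the copies is drawn.
    none = comb(cards_left - copies_in_deck, draws) / comb(cards_left, draws)
    return 1.0 - none

# Chance of seeing one of 2 copies within the next 3 draws of a full 30-card deck:
p = p_draw_at_least_one(2, 30, 3)   # about 0.19
```

An agent can use such estimates both for its own deck (known composition) and, combined with deck prediction, for the opponent's likely draws.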
Competition Webpage: You can find more information on this year’s competition and the evaluation of last year’s submissions on our webpage. It also features a list of previously submitted bots and their source code as well as information about how to get started.
Tracks: The competition will encourage submissions to the following two separate tracks, which will be available in the second year of this competition:
- Premade Deck Playing
In the "Premade Deck Playing"-track participants will receive a list of decks and play out all combinations against each other. Determining and using the characteristics of player’s and the opponent’s deck to the player’s advantage will help in winning the game. This track will feature an updated list of decks to better represent the current meta-game.
- User Created Deck Playing
The "User Created Deck Playing"-track invites all participants to create their own decks or choose from the vast number of decks available online. Finding a deck that can consistently beat multiple other decks will play a key role in this competition track. Additionally, it gives participants the chance to optimize their agents' strategy to the characteristics of their chosen deck.
Authors' Schedule: For the paper submission deadline, please check the website of CoG 2019.
MICRORTS AI COMPETITION
Description: Several AI competitions organized around RTS games have been held in the past (such as the ORTS competitions and the StarCraft AI competitions), which has spurred a new wave of research into RTS AI. However, as has been reported numerous times, developing bots for RTS games such as StarCraft involves a very large amount of engineering, which often relegates the research aspects of the competition to second place. The microRTS competition has been created to motivate research on the basic questions underlying the development of AI for RTS games, while minimizing the amount of engineering required to participate. A key difference with respect to the StarCraft competition is that the AIs have access to a "forward model" (i.e., a simulator), with which they can simulate the effect of actions or plans, thus allowing planning and game-tree search techniques to be developed easily. This will be the third edition of the competition, after the 2017 and 2018 editions hosted at IEEE-CIG conferences.
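What access to a forward model buys can be sketched with a toy rollout-based search: simulate each candidate action plus a few random continuations, then pick the action with the best average outcome. The state, action set, and evaluation below are invented stand-ins, not the microRTS API (which is Java).

```python
import random

# Toy stand-ins for a game state, simulator, and evaluation function.
ACTIONS = [-1, 0, 1]                  # toy action set
forward = lambda s, a: s + a          # deterministic toy forward model
value = lambda s: -abs(s - 10)        # states closer to 10 are better

def rollout_value(state, action, depth=5):
    """Simulate `action`, then `depth` random actions, and evaluate."""
    s = forward(state, action)
    for _ in range(depth):
        s = forward(s, random.choice(ACTIONS))
    return value(s)

def best_action(state, playouts=20):
    """Monte-Carlo choice: average rollout value per candidate action."""
    avg = lambda a: sum(rollout_value(state, a) for _ in range(playouts)) / playouts
    return max(ACTIONS, key=avg)
```

Without a forward model (as in StarCraft), agents cannot run `rollout_value` at all and must fall back on learned or scripted evaluations of the current state.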
Competition Webpage: https://sites.google.com/site/micrortsaicompetition/home
SHORT VIDEO COMPETITION
Description: The aim of the competition is to promote the production of short videos highlighting any research which is relevant to IEEE CoG. The videos may be related to CoG papers but this is not necessary. A similar competition was run for IEEE CIG 2018 and attracted 8 entries, and led to an interesting presentation session that was very well received. Links to the top 3 entries are below in the appendix. The videos should be informative and well presented. Participants must submit a video which is not longer than 5 minutes, but there is no lower limit. The video should include a title page at the beginning. Each video must mention that it is an entry for the IEEE CoG 2019 Short Video Competition.
To enter the competition at least one author of the video must be registered for the conference.
Entries should be submitted via the conference paper submission server, selecting "Short Video Competition" from the list of special sessions. The information required is the video title, authors, a brief description (approx. 150 words), and a link to the video which can be hosted on any easily viewable video streaming service such as Youtube, Youku, or ieee.tv
Entries are submitted by registering the video and uploading the required information to the conference paper submission server. Links to all accepted videos will be published on the conference website after the conference. A panel will select the set of finalist videos to be judged by the audience during the short video competition session at the conference.
The session should ideally be run in plenary mode to attract maximum audience participation and feature just the top 6 videos shortlisted by the panel. An ideal time to do this would be just prior to the conference dinner.
The winner will be chosen by an audience vote at the end of this session. The organisers reserve the right to exclude any video they deem to be offensive or inappropriate.
Sponsorship: $1,000 USD of prize money will be provided by the IEEE CIS Education Committee, to be divided in the ratio 500:300:200 for 1st, 2nd, and 3rd place.
APPENDIX: LEADING ENTRIES IN IEEE CIG 2018 SHORT VIDEO COMPETITION (ranking of 8 finalists decided by audience vote; note in this case we did no prior shortlisting so the 8 finalists were the entire set of entries)
- Winner: Vanessa Volz
Evolving Mario Levels in the Latent Space of a Deep Convolutional Generative Adversarial Network
- 2nd Place: Niels Justesen
Automated Curriculum Learning by Rewarding Temporally Rare Events
- 3rd Place: Raluca Gaina
General Win Prediction: CIG 2018 Short Video Competition
STARCRAFT AI COMPETITION
- Kyung-Joong Kim, GIST
- Seonghun Yoon, Sejong Univ
Description: IEEE CoG StarCraft competitions have seen considerable progress in the development and evolution of new StarCraft bots. Participants have used various approaches to build AI bots, which has enriched game AI with methods such as HMMs, Bayesian models, CBR, potential fields, and reinforcement learning. However, it is still quite challenging to develop AI for the game, because it must handle a large number of units and buildings while considering resource management and high-level tactics. The purpose of this competition is to develop RTS game AI and to address challenging issues in RTS game AI such as uncertainty, real-time processing, and unit management. Participants submit bots using BWAPI to play 1v1 StarCraft matches.
Competition Webpage: http://cilab.sejong.ac.kr/sc_competition
STRATEGY CARD GAME AI COMPETITION
- Jakub Kowalski
- Radosław Miernik
Description: Legends of Code and Magic (LOCM) is a small implementation of a Strategy Card Game, designed for AI research. Its advantage over real card game AI engines is that it is much simpler for agents to handle, and thus allows testing more sophisticated algorithms and quickly implementing theoretical ideas.
All card effects are deterministic, so nondeterminism is introduced only by the ordering of cards and the unknown opponent's deck. The game board consists of two lines (similar to TES: Legends), so it favors deeper strategic thinking. Also, LOCM is based on the fair arena mode, i.e., before every game both players secretly create their decks from symmetrical yet limited choices. Because of that, deckbuilding is dynamic and cannot simply be reduced to using human-created top-meta decks.
This competition aims to play the same role for the Hearthstone AI Competition as microRTS plays for the various StarCraft AI contests: to encourage advanced research free of the drawbacks of working with a full-fledged game. In this domain, that means, among other things, embedding deckbuilding into the game itself (limiting the usage of premade decks) and allowing efficient search beyond one-turn depth.
The contest is based on LOCM 1.2, the same version as in the CEC 2019 competition. The one-lane 1.0 version of the game was used for a CodinGame contest in August 2018.
Competition Webpage: https://jakubkowalski.tech/Projects/LOCM/COG19/
CEC-C01 Multimodal Multiobjective Optimization
- Jing Liang
- Boyang Qu
- Dunwei Gong
Scope and Topics: In multiobjective optimization problems, there may exist two or more distinct Pareto optimal sets (PSs) corresponding to the same Pareto Front (PF). These problems are defined as multimodal multiobjective optimization problems (MMOPs). Arguably, finding one of these multiple PSs may be sufficient to obtain an acceptable solution for some problems. However, failing to identify more than one of the PSs may prevent the decision maker from considering solution options that could bring about improved performance.
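A toy construction makes the definition concrete: if the objectives depend only on t = x mod 1, then the intervals [0,1) and [1,2) are two distinct Pareto sets in decision space that map onto the same Pareto front in objective space. This is an illustrative example of ours, not one of the competition benchmarks.

```python
# Toy MMOP: the objectives depend only on t = x mod 1, so [0,1) and [1,2)
# are two distinct Pareto sets sharing ONE Pareto front
# {(t, 1 - t) : t in [0, 1)}. Illustrative construction only.

def objectives(x):
    t = x % 1.0
    return (t, 1.0 - t)

# Solutions far apart in decision space, identical in objective space:
a, b = 0.25, 1.25
same_front_point = objectives(a) == objectives(b)
```

An algorithm that keeps only one of the two regions still covers the entire Pareto front, which is exactly why standard EMO diversity mechanisms can miss the second Pareto set.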
The aim of this special session is to promote research on MMO and hence motivate researchers to formulate real-world practical problems. The study of multimodal multiobjective optimization (MMO) is still in its emerging stages: although many real-world applications are likely to be amenable to treatment as MMOPs, to date researchers have largely ignored such formulations.
This special session is devoted to the novel approaches, algorithms and techniques for solving MMOPs. The main topics of the special session are:
- Evolutionary algorithms for multimodal multiobjective optimization
- Hybrid algorithms for multimodal multiobjective optimization
- Adaptable algorithms for multimodal multiobjective optimization
- Surrogate techniques for multimodal multiobjective optimization
- Machine learning methods helping to solve multimodal multiobjective optimization problems
- Memetic computing for multimodal multiobjective optimization
- Niching techniques for multimodal multiobjective optimization
- Parallel computing for multimodal multiobjective optimization
- Design methods for multimodal multiobjective optimization test problems
- Decision making in multimodal multiobjective optimization
- Related theory analysis
Submission Instructions: The MMO benchmark for CEC 2019 and the corresponding comparison instructions can be found at http://www5.zzu.edu.cn/ecilab/info/1036/1163.htm. Papers should be submitted following the instructions at the IEEE CEC 2019 web site before the deadline. Please select the main research topic as the Special Session on "Multimodal Multiobjective Optimization". Accepted papers will be included and published in the conference proceedings. Participants without a paper are also welcome; a detailed report about the algorithm and results in IEEE format should be provided.
Submission Deadline: 30th April 2019, 23:59 (GMT)
CEC-C02 Evolutionary Multi-task Optimization
- Liang Feng
- Kai Qin
- Abhishek Gupta
- Yuan Yuan
- Yew-Soon Ong
- Xu Chi
Supported by: IEEE CIS Task Force from Intelligent Systems Applications Technical Committee, Task Force on "Transfer Learning & Transfer Optimization"
Scope and Topics: Humans possess a remarkable ability to manage and execute multiple tasks simultaneously, e.g., talking while walking. This desirable multitasking capability has inspired computational methodologies and approaches that tackle multiple tasks at the same time, leveraging commonalities and differences across tasks to improve the performance and efficiency of solving the component tasks compared to dealing with them separately. As a well-known example, multi-task learning is a very active subfield of machine learning whereby multiple learning tasks are performed together using a shared model representation, such that the relevant information contained in related tasks can be exploited to improve the learning efficiency and generalization performance of task-specific models.
Multi-task optimization (MTO) is a newly emerging research area in the field of optimization, which investigates how to effectively and efficiently tackle multiple optimization problems at the same time. In the multitasking scenario, solving one optimization problem may assist in solving other optimization problems (i.e., synergetic problem-solving) if these problems bear commonality and/or complementarity in terms of optimal solutions and/or fitness landscapes. As a simple example, if some problems have the same globally optimal solution but distinct fitness landscapes, obtaining the global optimum of any one problem solves the others as well. Recently, an evolutionary MTO paradigm named evolutionary multitasking was proposed to explore the potential of evolutionary algorithms (EAs) equipped with a unified solution representation space for MTO. As population-based optimizers, EAs feature the Darwinian "survival-of-the-fittest" principle and nature-inspired reproduction operations, which inherently promote implicit knowledge transfer across tasks during problem-solving. The superiority of this new evolutionary multitasking framework over solving each task independently has been demonstrated on synthetic and real-world MTO problems using a multi-factorial EA (MFEA) developed under this framework.
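The unified representation space idea can be sketched in a few lines: every individual lives in [0,1]^D and is decoded into each task's own domain, so variation operators act on shared genetic material. This is a toy decoding only; the full MFEA additionally involves skill factors and assortative mating, which are omitted here.

```python
import random

def decode(genome, lower, upper):
    """Map a unified [0,1]^D genome into a task-specific box [lower, upper].
    (Toy linear decoding; illustrative, not the full MFEA machinery.)"""
    return [lower + g * (upper - lower) for g in genome]

D = 3
genome = [random.random() for _ in range(D)]   # one shared individual...
x_task1 = decode(genome, -5.0, 5.0)            # ...decoded for task 1's domain
x_task2 = decode(genome, 0.0, 10.0)            # ...and for task 2's domain
```

Because both tasks read the same genome, crossover between individuals that excel at different tasks implicitly transfers knowledge across those tasks.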
Evolutionary multitasking opens up new horizons for researchers in the field of evolutionary computation. It provides a promising means to deal with the ever-increasing number, variety and complexity of optimization tasks. More importantly, rapid advances in cloud computing could eventually turn optimization into an on-demand service hosted on the cloud. In such a case, a variety of optimization tasks would be simultaneously executed by the service engine where evolutionary multitasking may harness the underlying synergy between multiple tasks to provide service consumers with faster and better solutions.
Due to the good response to this competition at CEC’17 and WCCI 2018 (17 entries in CEC’17 and 13 entries in WCCI’18), we would like to continue organizing it at CEC’19, aiming to promote research advances in both algorithmic and theoretical aspects of evolutionary MTO.
Please refer to the complete document for more details.
Submission Deadline: 1st May 2019, 23:59 (GMT)
CEC-C03 Online Data-Driven Multi-Objective Optimization Competition
- Handing Wang
- Cheng He
- Ye Tian
- Yaochu Jin
Supported by: IEEE CIS TF on "Intelligence Systems for Health" in the Intelligent Systems Application Technical Committee and IEEE CIS TF on "Data-Driven Evolutionary Optimization of Expensive Problems" in the Evolutionary Computation Technical Committee
Scope and Topics: Evolutionary multi-objective optimization (EMO) has been flourishing for two decades in academia. However, the industry applications of EMO to real-world optimization problems are infrequent, due to the strong assumption that objective function evaluations are easily accessed. In fact, such objective functions may not exist, instead computationally expensive numerical simulations or costly physical experiments must be performed for evaluations. Such problems driven by data collected in simulations or experiments are formulated as data-driven optimization problems, which pose challenges to conventional EMO algorithms. Firstly, obtaining the minimum data for conventional EMO algorithms to converge requires a high computational or resource cost. Secondly, although surrogate models that approximate objective functions can be used to replace the real function evaluations, the search accuracy cannot be guaranteed because of the approximation errors of surrogate models. Thirdly, since only a small amount of online data is allowed to be sampled during the optimization process, the management of online data significantly affects the performance of algorithms. The research on data-driven evolutionary optimization has not received sufficient attention, although techniques for solving such problems are highly in demand. One main reason is the lack of benchmark problems that can closely reflect real-world challenges, which leads to a big gap between academia and industries.
Submission Instructions: In this competition, we carefully select 6 benchmark multi-objective optimization problems from real-world applications, including design of car cab, optimization of vehicle frontal structure, filter design, optimization of power systems, and optimization of neural networks. The objective functions of those problems cannot be calculated analytically, but can be calculated by calling an executable program to provide true black-box evaluations for both offline and online data sampling. A set of initial data is generated offline using Latin hypercube sampling, and a predefined fixed number of online data samples are set as the stopping criterion. This competition, as an event organized by the Task Force on "Intelligence Systems for Health" in the Intelligent Systems Application Technical Committee and Task Force on "Data-Driven Evolutionary Optimization of Expensive Problems" in the Evolutionary Computation Technical Committee, aims to promote the research on data-driven evolutionary multi-objective optimization by suggesting a set of benchmark problems extracted from various real-world optimization applications. All benchmark functions are implemented in MATLAB code. Also, the MATLAB code has been embedded in a recently developed software platform – PlatEMO, an open source MATLAB-based platform for evolutionary multi- and many-objective optimization, which currently includes more than 50 representative algorithms and over 100 benchmark functions, along with a variety of widely used performance indicators.
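A minimal sketch of such an online data-driven loop, assuming a toy 1-D problem and a nearest-neighbour surrogate (real entries would use models such as Kriging or RBF networks through the competition's MATLAB/PlatEMO interface):

```python
import random

# Online data-driven loop: a cheap surrogate pre-screens candidates so that
# only a small budget of true (expensive) evaluations is spent.

def expensive(x):               # stands in for the costly simulation
    return (x - 0.3) ** 2

archive = [(x, expensive(x)) for x in (0.0, 0.5, 1.0)]  # offline samples

def surrogate(x):
    """Predict f(x) from the nearest archived sample (toy model)."""
    return min(archive, key=lambda p: abs(p[0] - x))[1]

budget = 3                      # online true evaluations allowed
for _ in range(budget):
    candidates = [random.random() for _ in range(100)]
    best = min(candidates, key=surrogate)     # cheap pre-screening
    archive.append((best, expensive(best)))   # spend one true evaluation

best_x, best_f = min(archive, key=lambda p: p[1])
```

The three challenges listed above map directly onto this loop: the offline archive is small, the surrogate carries approximation error, and the choice of which candidate receives a true evaluation is the online data-management problem.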
Submission Deadline: 15th April 2019, 23:59 (GMT)
CEC-C04 Competition on Smart Grid and Sustainable Energy Systems
- Zhile Yang
- Kunjie Yu
- Zhou Wu
Scope and Topics: Shaping a low-carbon energy future is a crucial and urgent task under the Paris Agreement. Numerous optimisation problems have been formulated and solved to save fossil fuel costs and reduce energy waste on the power system and energy application sides. However, some key problems have strongly non-convex, non-smooth, or mixed-integer characteristics, posing significant challenges for system operators and energy users. This competition aims to encourage researchers to present their state-of-the-art optimisation tools for solving three complicated optimisation tasks: unit commitment, economic load dispatch, and parameter identification for photovoltaic models and PEM fuel cells.
The unit commitment (UC) problem aims to minimize economic cost by optimally determining the online/offline status and power dispatch of each unit while maintaining various system constraints, forming a large-scale mixed-integer problem. Economic load dispatch is a power system operation task that aims to minimise fossil fuel cost by determining the day-ahead and/or hourly power generation of each generator. Fuel cells are among the most important energy storage technologies of the future, particularly in applications to vehicles and robotics. The proton exchange membrane (PEM) is the key component of a fuel cell, yet it is significantly difficult to model accurately due to its nonlinear, multivariate, and strongly coupled characteristics. Evolutionary computation does not require an explicit problem formulation and is therefore promising as a powerful optimisation tool for intelligently and efficiently solving problems such as smart grid and energy system scheduling to reduce carbon emissions.
A brief list of potential submission topics is shown below:
- Unit commitment
- Economic load dispatch
- Parameters identification for photovoltaic models and PEM fuel cells
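As an illustration of the economic load dispatch task, a classic formulation uses a quadratic fuel-cost curve per generator plus a penalty for violating the demand balance (a minimal sketch; the coefficients and penalty weight are made up, and the official problems involve many more constraints):

```python
def eld_cost(p, coeffs, demand, penalty=1e6):
    """Fuel cost of a dispatch p (MW per generator) with the classic
    quadratic cost model a + b*p + c*p^2, plus a penalty for any
    mismatch between total generation and demand."""
    cost = sum(a + b * pi + c * pi * pi for pi, (a, b, c) in zip(p, coeffs))
    imbalance = abs(sum(p) - demand)
    return cost + penalty * imbalance

# Illustrative 3-generator case (coefficients invented for the sketch).
coeffs = [(100, 20, 0.05), (120, 18, 0.07), (80, 22, 0.04)]
print(eld_cost([100, 150, 50], coeffs, demand=300))  # → 8275.0
```

An evolutionary algorithm would then minimise `eld_cost` over the dispatch vector `p`, which is where the non-convex variants (valve-point effects, prohibited zones) make metaheuristics attractive.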
Submission Deadline: 7th January 2019, 23:59 (GMT)
CEC-C05 Evolutionary Computation in Uncertain Environments: A Smart Grid Application
- Fernando Lezama
- Joao Soares
- Zita Vale
- Jose Rueda
- Markus Wagner
- IEEE CIS Task Force 3 on Energy Domain
- IEEE PES Intelligent Systems Subcommittee (ISS), part of IEEE PES Analytic Methods for Power Systems TC
Scope and Topics: Following the success of the previous edition at WCCI 2018, we are relaunching this competition at major conferences in the field of computational intelligence. This CEC 2019 competition proposes the optimization of a centralized day-ahead energy resource management problem in smart grids under uncertain environments. This year we have increased the difficulty by providing a more challenging case study with a higher degree of uncertainty.
The CEC 2019 competition on "Evolutionary Computation in Uncertain Environments: A Smart Grid Application" aims to bring together and test the most advanced Computational Intelligence (CI) techniques on an energy domain problem: energy resource management under uncertain environments. The competition provides a coherent framework in which participants and practitioners of CI can test their algorithms on a real-world optimization problem in the energy domain; the consideration of uncertainty makes the problem more challenging and worth exploring.
- Participants will propose and implement a metaheuristic algorithm (e.g., evolutionary algorithms, swarm intelligence, estimation-of-distribution algorithms, etc.) to solve the energy resource management problem under uncertainty.
- The organizers provide a framework, implemented in MATLAB® 2014b (64-bit), in which participants can easily test their algorithms (a differential evolution implementation is provided as an example). The guidelines include the information necessary to understand the problem, how solutions are represented, and how the fitness function is evaluated. These elements are common to all participants.
- Since the proposed algorithms may use different population sizes and run for a variable number of iterations, a maximum of 50,000 function evaluations is allowed per trial for all participants. The convergence properties of the algorithms are not a qualification criterion in this competition.
- 20 independent trials should be performed in the framework by each participant.
- How to submit an entry and how entries are evaluated:
- The winner will be the participant with the minimum ranking index, calculated as the average over the 20 trials of the expected fitness value (over the considered uncertain scenarios) plus the standard deviation.
- Each participant is kindly requested to put the text files with the final results (see the guideline document), as well as the implementation files (code) of the optimizer used, into a zipped folder named CEC2019_SG_AlgorithmName_ParticipantName.zip (e.g., CEC2019_SG_DE_Lezama.zip).
- 7th January 2019, 23:59 (GMT) (For those submitting papers to the special session)
- 30th April 2019, 23:59 (GMT) (Submission without paper)
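Under one plausible reading of the ranking index (per-trial expected fitness averaged over the uncertain scenarios, then mean plus standard deviation over the trials — the guideline document is authoritative), the computation looks like:

```python
import statistics

def ranking_index(trial_scenario_fitness):
    """Ranking index: mean over trials of the scenario-averaged (expected)
    fitness, plus the standard deviation of that value over trials.
    (One reading of the rule; see the competition guideline document.)"""
    expected = [statistics.mean(trial) for trial in trial_scenario_fitness]
    return statistics.mean(expected) + statistics.stdev(expected)

# Three illustrative trials, each evaluated over two uncertain scenarios.
trials = [[10.0, 12.0], [11.0, 13.0], [9.0, 11.0]]
print(ranking_index(trials))  # → 12.0
```

Adding the standard deviation rewards algorithms that are both good on average and consistent across independent trials.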
CEC-C06 Competition on 100-Digit Challenge on Single Objective Numerical Optimization
- P N Suganthan
- K. V. Price
- Mostafa Z Ali
Scope and Topics: Research on single objective optimization algorithms often forms the foundation for more complex scenarios, such as niching algorithms and both multi-objective and constrained optimization algorithms. Traditionally, single objective benchmark problems are also the first test for new evolutionary and swarm algorithms. Additionally, single objective benchmark problems can be transformed into dynamic, niching, composition, computationally expensive, and many other classes of problems. It is with the goal of better understanding the behavior of evolutionary algorithms as single objective optimizers that we are introducing the 100-Digit Challenge. The SIAM 100-Digit Challenge was developed in 2002 by Nick Trefethen in conjunction with the Society for Industrial and Applied Mathematics (SIAM) as a test for high-accuracy computing. Specifically, the challenge was to solve 10 hard problems to 10 digits of accuracy. One point was awarded for each correct digit, making the maximum score 100, hence the name. Contestants were allowed to apply any method to any problem and take as long as needed to solve it. Out of the 94 teams that entered, 20 scored 100 points and 5 others scored 99. In a similar vein, we propose the 100-Digit Challenge. In contrast to the SIAM version, this 100-Digit Challenge asks contestants to solve all ten problems with one algorithm, although limited control parameter "tuning" for each function will be permitted to restore some of the original contest's flexibility. Another difference is that the score for a given function is the average number of correct digits in the best 25 out of 50 trials.
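The scoring rule can be sketched as follows (hedged: the precise definition of a "correct digit" is given in the competition technical report; here the error magnitude is used as a proxy):

```python
import math

def correct_digits(value, target, max_digits=10):
    """Approximate count of correct digits: roughly floor(-log10(error)),
    capped at max_digits (a proxy; the technical report is authoritative)."""
    err = abs(value - target)
    if err == 0:
        return max_digits
    return max(0, min(max_digits, math.floor(-math.log10(err))))

def challenge_score(trial_values, target):
    """Average number of correct digits over the best 25 of the 50 trials."""
    digits = sorted((correct_digits(v, target) for v in trial_values), reverse=True)
    return sum(digits[:25]) / 25
```

Averaging only the best 25 of 50 trials tolerates occasional failed runs while still demanding that the algorithm succeed at least half the time.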
Submission Instructions: Participants are asked to submit their papers to CEC 2019 according to the paper submission instructions, and to email their final results in the format requested in the associated technical report. The three top-performing algorithms will be made available online from the competition web pages.
Submission Deadline: 7th January 2019, 23:59 (GMT)
CEC-C07 FML-based Machine Learning Competition for Human and Smart Machine Co-Learning on Game of Go
- Chang-Shing Lee
- Yusuke Nojima
- Naoyuki Kubota
- Giovanni Acampora
- Marek Reformat
- Ryosuke Saga
Supported by: Task Forces on Competitions of IEEE CIS Fuzzy Systems Technical Committee
Scope and Topics: With the success of AlphaGo, there has been great interest among students and professionals in applying machine learning to gaming, and in particular to the game of Go. Several conferences have held competitions pitting human players against computer programs, or computer programs against each other. The goals of this competition are to: (1) use the OpenGo Darkforest (OGD) Cloud Platform for the game of Go, (2) understand the basic concepts of an FML-based fuzzy inference system, (3) use the FML intelligent decision tool to establish the knowledge base and rule base of the fuzzy inference system, (4) use the data predicted by the Facebook AI Research (FAIR) open-source Darkforest AI bot as the training data, (5) use the data predicted by the FAIR open-source ELF OpenGo AI bot as the desired output of the training data, and (6) optimize the FML knowledge base and rule base through evolutionary computation and machine learning methodologies in order to develop a regression model based on an FML-based fuzzy inference system.
Submission Instructions: The participants are invited to submit their results via the competition website (http://oase.nutn.edu.tw/cec2019-fmlcompetition/). Participants are also encouraged to submit the results to the competition held in FUZZ-IEEE 2019 (http://oase.nutn.edu.tw/fuzz2019-fmlcompetition/). We will announce the winner at both conferences.
Submission Deadline: 10th May 2019, 23:59 (GMT)
CEC-C08 General Video Game AI Single-Player Learning Competition
- Hao Tong
- Ruben Rodriguez Torrado
- Philip Bontrager
Scope and Topics: The General Video Game AI (GVG-AI) Competition explores the problem of creating agents for general video game playing. How would you create a single agent that is able to play any game it is given? Could you program an agent that is able to play a wide variety of games, without knowing which games are to be played and without a forward model?
The GVGAI Learning framework has been interfaced with OpenAI Gym and provides a fantastic, user-friendly environment for testing your reinforcement learning agents. The framework also allows users to easily create their own games to test their agents.
More about this competition can be found on the competition website (http://www.aingames.cn).
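Since the framework follows the OpenAI Gym step/reset convention, any standard RL agent can be plugged in. A minimal tabular Q-learning sketch is shown below (illustrative only; real GVGAI observations are screen-like and would need feature extraction or a neural network):

```python
import random
from collections import defaultdict

class QLearningAgent:
    """Minimal tabular Q-learning agent for a Gym-style environment
    (a sketch; GVGAI states would first need hashing or features)."""
    def __init__(self, n_actions, alpha=0.1, gamma=0.99, epsilon=0.1):
        self.q = defaultdict(lambda: [0.0] * n_actions)
        self.n_actions = n_actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        # Epsilon-greedy action selection.
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)
        values = self.q[state]
        return values.index(max(values))

    def learn(self, state, action, reward, next_state, done):
        # One-step temporal-difference update toward the bootstrapped target.
        target = reward if done else reward + self.gamma * max(self.q[next_state])
        self.q[state][action] += self.alpha * (target - self.q[state][action])
```

A driving loop would call `act` on each observation from the environment and `learn` on every `(state, action, reward, next_state, done)` transition.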
Submission Instructions: The participants are invited to submit their agent via the competition website (http://www.aingames.cn). Participants are also encouraged to submit papers about this competition to the Special Session on Games (CEC-04) via the CEC2019 website.
- Paper submission: 7th January 2019, 23:59 (GMT)
- Agent submission: 30th April 2019, 23:59 (GMT)
Remark: Paper submission is not mandatory; you are welcome to participate in the competition without submitting a paper.
CEC-C09 Strategy Card Game AI Competition
- Jakub Kowalski
- Radoslaw Miernik
Scope and Topics: The game is a small implementation of a strategy card game designed for AI research. Its advantage over real card game AI engines is that it is much simpler for agents to handle, which allows testing more sophisticated algorithms and quickly implementing theoretical ideas. Its goal is to encourage advanced research free of the drawbacks of working with a full-fledged game. This means, among other things, embedding deckbuilding into the game itself (limiting the usage of premade decks) and allowing efficient search beyond one turn of depth.
All card effects are deterministic, so nondeterminism is introduced only by the ordering of cards and the unknown opponent's deck. The game board consists of two lines (similar to TES: Legends), which favors deeper strategic thinking. The game is also based on the fair arena mode: before every game, both players create their decks secretly from symmetrical yet limited choices. Because of that, deckbuilding is dynamic and cannot simply be reduced to using human-created top-meta decks.
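The fair arena mode can be illustrated with a toy draft loop in which both players face the same offers each turn (a sketch only; function names and parameters are invented, and a real agent would evaluate the cards rather than pick at random):

```python
import random

def fair_arena_draft(card_pool, picks_per_turn=3, deck_size=30, seed=None):
    """Both players draft secretly from identical, limited offers:
    each turn the same picks_per_turn cards are shown to both sides
    and each keeps one (random here, where an agent would choose)."""
    rng = random.Random(seed)
    deck_a, deck_b = [], []
    for _ in range(deck_size):
        offer = rng.sample(card_pool, picks_per_turn)  # symmetric choices
        deck_a.append(rng.choice(offer))
        deck_b.append(rng.choice(offer))
    return deck_a, deck_b

pool = [f"card_{i}" for i in range(60)]
deck_a, deck_b = fair_arena_draft(pool, seed=42)
```

Because both decks come from the same offers, the mode stays fair while keeping deckbuilding a live, per-game decision rather than a fixed meta choice.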
Submission Deadline: 19th May 2019, 23:59 (GMT) (preliminary deadline)
CEC-C10 Nonlinear Equation Systems Competition
- Yong Wang
- Wenyin Gong
- Crina Grosan
Scope and Topics: Nonlinear equation systems (NESs) frequently arise in physical, electronic, and mechanical processes. Very often, a NES contains multiple roots. Since all of these roots matter in real-world applications, it is desirable to locate them simultaneously in a single run, so that the decision maker can select the final root that best matches his/her preference. Several classical methods, such as Newton-type methods, have been proposed for solving NESs. However, these methods have disadvantages: they depend heavily on the starting point of the iterative process, can easily get trapped in a local optimal solution, and require derivative information. Moreover, they tend to locate just one root rather than multiple roots.
Solving NESs with EAs is an important and challenging area of evolutionary computation with clear practical interest, yet systematic work in this area is still very limited. The aim of this competition is to facilitate the development of EAs for locating multiple roots of NESs.
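To make the task concrete, here is a toy NES with two roots and a multi-restart (1+1)-style search that archives distinct near-roots — a sketch of the "locate multiple roots in a single run" goal, not a competitive algorithm (the system and all parameters are invented):

```python
import math
import random

def residual(x, y):
    """Residual of a toy NES with two roots:
       x^2 + y^2 = 4  and  x = y^2."""
    return abs(x * x + y * y - 4.0) + abs(x - y * y)

def find_roots(n_restarts=80, iters=500, tol=0.05, min_dist=0.5, seed=1):
    """Multi-restart (1+1)-style hill climber on the residual; converged
    points far from every archived root are added to the archive."""
    rng = random.Random(seed)
    archive = []
    for _ in range(n_restarts):
        x, y = rng.uniform(-3, 3), rng.uniform(-3, 3)
        best, step = residual(x, y), 0.5
        for _ in range(iters):
            nx, ny = x + rng.gauss(0, step), y + rng.gauss(0, step)
            r = residual(nx, ny)
            if r < best:
                x, y, best = nx, ny, r
            else:
                step *= 0.97  # shrink the step on failure
        if best < tol and all(math.hypot(x - a, y - b) > min_dist for a, b in archive):
            archive.append((x, y))
    return archive
```

The distance filter (`min_dist`) is a crude stand-in for the niching and clustering mechanisms that serious multi-root EAs use to keep distinct roots in one population.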
- Paper submission: 7th January 2019, 23:59 (GMT)
- Agent submission: 30th April 2019, 23:59 (GMT)
CEC-C11 Competition on Large-Scale Global Optimization
- Daniel Molina
- Antonio LaTorre
Supported by: IEEE CIS Task Force on Large Scale Global Optimization
Scope and Topics: In the past two decades, many evolutionary algorithms have been developed and successfully applied to a wide range of optimization problems. Although these techniques show excellent search capabilities on small or medium-sized problems, they still encounter serious challenges on large-scale problems, i.e., problems with several hundred to thousands of variables. This is due to the curse of dimensionality: the size of the solution space grows exponentially with the number of decision variables, so more effective and efficient search strategies are urgently needed to explore this vast solution space within limited computational budgets. In recent years, research on scaling up EAs to large-scale problems has attracted significant attention, including both theoretical and practical studies.
This special session is devoted to highlighting recent advances in EAs for large-scale global optimization (LSGO) problems, involving single or multiple objectives; unconstrained or constrained; and binary/discrete, real, or mixed decision variables. More specifically, we encourage interested researchers to submit their original and unpublished work on:
- Theoretical and experimental analysis on the scalability of EAs;
- Novel approaches and algorithms for scaling up EAs to large-scale optimization problems;
- Applications of EAs to real-world large-scale optimization problems;
- Novel test suites that help researchers to understand the characteristics of large-scale optimization problems.
Submission Instructions: The competition allows participants to run their own algorithms on 15 benchmark functions, each of them of 1000 dimensions. Detailed information about these benchmark functions is provided in the following technical report:
X. Li, K. Tang, M. Omidvar, Z. Yang and K. Qin, "Benchmark Functions for the CEC’2013 Special Session and Competition on Large Scale Global Optimization," Technical Report, Evolutionary Computation and Machine Learning Group, RMIT University, Australia, 2013.
Source code is available on the website for C++, MATLAB, Java, and Python.
The technique and results can be reported in a paper for the corresponding special session. Authors must provide their results as shown in the aforementioned technical report (Table 2). To make it easier to obtain results in the requested format, the original benchmark source code has been modified to automate this task (except in the Java version). Additionally, several tools are provided to create an Excel file with the results recorded by the modified code, along with the LaTeX table, for easy inclusion in the paper.
To help researchers compare their proposals with previous winners, we have developed a website, https://tacolab.org, which allows researchers to compare the data of their proposal (provided as an Excel file) with those of previous algorithms. Several reports, both tables and figures, can be automatically generated by this tool and exported for inclusion in the manuscript, including, in the LSGO Competition report, plots of the criteria used in the competition.
- Paper submission (including the Special Sessions): 7th Jan 2019, 23:59 (GMT)
- Competition submission: 7th February 2019, 23:59 (GMT)
CEC-C12 Divide-the-Dollar Competition
- Daniel Ashlock
- Garrison Greenwood
Scope and Topics: The conventional divide-the-dollar game is a two-player game in which the players simultaneously bid on how to divide a dollar. If the bids sum to a dollar or less, each player receives their bid; otherwise they receive nothing. This contest is based on the generalized divide-the-dollar game, which has N ≥ 2 players. In this game, instead of dividing a dollar, a scoring set S ⊂ R^N is used. Each player bids one coordinate of a point and, if the resulting point is in the scoring set, the players receive their bids; otherwise they receive nothing. The players will be given several example sets, similar to the example problems in an optimization contest, to train a general-purpose agent that learns a generalized divide-the-dollar problem from feedback. Each participant will upload an agent that plays a generalized divide-the-dollar game.
The contest will use sets not seen by the players before and will be restricted to the two-player version. All sets satisfy (x, y) ∈ R^2 with x ≥ 0, y ≥ 0, and x, y ≤ 2. Sets will consist of one or more simply connected regions. Agents will participate in a round-robin tournament, with the score on each set recorded. During play, the players will be given feedback in the form of each player's bid and the outcome (score/no score). Agents will also have access to the history of bids each agent has made and whether each bid scored. Winners will be determined for each problem test set, along with an overall winner with the best average score over all of the problem test sets. Agents can be designed using any computational intelligence technique. Contest participants will upload their agent, built in the provided Java framework, through the competition website. The uploaded agent must be standalone, and each participant may submit only one agent. Each participant is expected to submit a short paper (2-3 pages) describing their agent's structure and the computational intelligence methods used to construct and train it. Papers will be orally presented during the special session on games and will appear in the conference proceedings. Winners will be announced during the special session.
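The payoff rule can be stated compactly (a sketch; `in_scoring_set` stands in for the hidden scoring sets the contest will supply):

```python
def payoff(bids, in_scoring_set):
    """Generalized divide-the-dollar: each of the N players bids one
    coordinate; if the joint bid lies in the scoring set S, every player
    receives their own bid, otherwise everyone receives 0."""
    return list(bids) if in_scoring_set(tuple(bids)) else [0.0] * len(bids)

# Classic two-player divide-the-dollar: S = {(x, y) : x + y <= 1}.
classic = lambda point: sum(point) <= 1.0
print(payoff([0.6, 0.3], classic))  # → [0.6, 0.3]
print(payoff([0.7, 0.5], classic))  # → [0.0, 0.0]
```

The tension is the same as in the classic game: bidding high raises your payoff when the point scores, but pushes the joint bid toward the boundary of S, risking a zero for everyone.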
This contest is intended as a successor to the contests for prisoner’s dilemma, with generalized divide the dollar being a more complex game with a far larger strategy space. The contest organizers have published at least one agent representation that can play this game, but adapting to unknown scoring sets is a challenge that is likely to spark research in agent representations and advance the theory and practice of mathematical games in evolutionary computation.
Submission Instructions: TBA
Submission Deadline: TBA
CEC-C13 Continuous Derivative-free Optimization Competition
- Olivier Teytaud
- Jérémy Rapin
Scope and Topics: Participants are invited to use the Nevergrad platform to implement an optimizer and evaluate it against the currently implemented algorithms. Five tracks are proposed: Noisy, Ill-Conditioned, Deceptive, Parallel, and One-Shot.
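As a starting point, the kind of algorithm one would wrap behind Nevergrad's optimizer interface can be as simple as a (1+1) evolution strategy with the one-fifth success rule (a self-contained sketch, not the Nevergrad API):

```python
import random

def one_plus_one(f, dim, budget, sigma=1.0, seed=0):
    """(1+1) evolution strategy with a one-fifth success rule:
    mutate the incumbent, keep the better point, and adapt the step
    size so that roughly one mutation in five succeeds."""
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    fx = f(x)
    for _ in range(budget - 1):
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fy = f(y)
        if fy <= fx:
            x, fx = y, fy
            sigma *= 1.5           # expand on success
        else:
            sigma *= 1.5 ** -0.25  # shrink slowly on failure
    return x, fx

x, fx = one_plus_one(lambda v: sum(t * t for t in v), dim=2, budget=2000)
```

The expand/shrink factors balance at a one-in-five success rate (1.5 × (1.5^-0.25)^4 = 1); the Noisy and Parallel tracks would each require modifying this basic loop (e.g., re-evaluation or batched candidates).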
Submission Instructions: Participants are invited to implement their algorithm(s) in the cec2019_optimizer.py file and then run a command line that tests them against different settings and plots result figures. The figures and implementation should then be sent to the organizers by e-mail.
Full instructions are provided here:
Submission Deadline: Competition submission: 25th May 2019, 23:59 (GMT)