Magazine

 

August 2019 CIM cover

The IEEE Computational Intelligence Magazine (CIM) publishes peer-reviewed articles that present emerging novel discoveries, important insights, or tutorial surveys in all areas of computational intelligence design and applications, in keeping with the Field of Interest of the IEEE Computational Intelligence Society (IEEE/CIS). Additionally, CIM serves as a medium of communication between the IEEE/CIS governing body and its membership. Authors are encouraged to submit papers on applications-oriented developments, successful industrial implementations, design tools, technology reviews, computational intelligence education, and applied research.

Contributions should contain novel and previously unpublished material. The novelty will usually lie in original concepts, results, techniques, observations, hardware/software implementations, or applications, but may also take the form of syntheses or new insights into previously reported research. Surveys and expository submissions are also welcome. In general, material that has been previously copyrighted, published, or accepted for publication will not be considered; however, prior preliminary or abbreviated publication of the material shall not preclude publication in this journal.



 

Impact Score

CIM impact scores 2019

Journal Citation Metrics such as Impact Factor, Eigenfactor Score™, and Article Influence Score™ are available where applicable. Each year, Journal Citation Reports© (JCR) from Thomson Reuters examines the influence and impact of scholarly research journals. JCR reveals the relationship between citing and cited journals, offering a systematic, objective means to evaluate the world's leading journals. Find out more about IEEE Journal Rankings.

 

Call for Special Issues

Featured Paper

Improving RTS Game AI by Supervised Policy Learning, Tactical Search, and Deep Reinforcement Learning
Authors: Nicolas A. Barriga, Marius Stanescu, Felipe Besoain, Michael Buro
Publication: IEEE Computational Intelligence Magazine (CIM)
Issue: Volume 14, Issue 3 – August 2019
Pages: 8-18

Abstract: Constructing strong AI systems for video games is difficult due to enormous state and action spaces and the lack of good state evaluation functions and high-level action abstractions. In spite of recent research progress in popular video game genres such as Atari 2600 console games and multiplayer online battle arena (MOBA) games, to this day strong human players can still defeat the best AI systems in adversarial video games. In this paper, we propose to use a deep Convolutional Neural Network (CNN) to select among a limited set of abstract action choices in Real-Time Strategy (RTS) games, and to utilize the remaining computation time for game tree search to improve low-level tactics. The CNN is trained by supervised learning on game states labeled by Puppet Search, a strategic search algorithm that uses action abstractions. Replacing Puppet Search by a CNN frees up time that can be used for improving units' tactical behavior while executing the strategic plan. Experiments in the μRTS game show that the combined algorithm results in higher win-rates than either of its two independent components and other state-of-the-art μRTS agents. We then present a case study that investigates how deep Reinforcement Learning (RL) can be used in modern video games, such as Total War: Warhammer, to improve tactical multi-agent AI modules. We use popular RL algorithms such as Deep Q-Networks (DQN) and Asynchronous Advantage Actor-Critic (A3C), basic network architectures, and minimal hyper-parameter tuning to learn complex cooperative behaviors that defeat the highest-difficulty built-in AI in medium-scale scenarios.

Index Terms: Games, Real-time systems, Search problems, Reinforcement learning, Neural networks, Supervised learning
IEEE Xplore Link: https://ieeexplore.ieee.org/document/8764630
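
To make the featured paper's core idea concrete, the sketch below shows, in rough outline, what "a CNN trained by supervised learning to select among a limited set of abstract action choices" can look like. It is an illustrative Python/PyTorch example only, not the authors' implementation: the 8x8 map size, the 8 input feature planes, the 4 abstract actions, the layer sizes, and the random stand-in labels are all assumptions made for this sketch; in the paper the labels come from Puppet Search.

```python
# Minimal sketch (not the paper's code): a small convolutional policy network that
# maps a grid-encoded RTS game state to one of a few abstract action choices, trained
# with supervised cross-entropy on labels produced by a strategic search algorithm.
# The map size, channel counts, action count, and random labels are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AbstractActionCNN(nn.Module):
    def __init__(self, in_channels=8, num_actions=4):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, 32, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=3, padding=1)
        self.head = nn.Linear(64 * 8 * 8, num_actions)  # assumes an 8x8 map

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        return self.head(x.flatten(1))  # logits over abstract action choices

# One supervised training step on game states labeled by a search algorithm
# (random stand-in labels here; the paper uses Puppet Search labels).
model = AbstractActionCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
states = torch.randn(16, 8, 8, 8)      # batch of grid-encoded game states
labels = torch.randint(0, 4, (16,))    # abstract action chosen by the search
loss = F.cross_entropy(model(states), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

At inference time, such a network can pick the strategic (abstract) action in a fraction of the time the search would take, which is what frees up the remaining computation budget for low-level tactical game tree search, as described in the abstract.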