The IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS (TCDS) focuses on advances in the study of development and cognition in natural (humans, animals) and artificial (robots, agents) systems. It welcomes contributions from multiple related disciplines including cognitive systems, cognitive robotics, developmental and epigenetic robotics, autonomous and evolutionary robotics, social structures, multi-agent and artificial life systems, computational neuroscience, and developmental psychology. Articles on theoretical, computational, application-oriented, and experimental studies as well as reviews in these areas are considered.

TCDS is co-sponsored by the Computational Intelligence Society, the Robotics and Automation Society, and the Consumer Electronics Society. TCDS is technically co-sponsored by the Computer Society.

Impact Score

Journal Citation Metrics such as Impact Factor, Eigenfactor Score™ and Article Influence Score™ are available where applicable. Each year, Journal Citation Reports© (JCR) from Thomson Reuters examines the influence and impact of scholarly research journals. JCR reveals the relationship between citing and cited journals, offering a systematic, objective means to evaluate the world's leading journals.
Find out more about IEEE Journal Rankings.

Featured Paper

A Reinforcement Learning Architecture That Transfers Knowledge Between Skills When Solving Multiple Tasks
Authors: Paolo Tommasino, Daniele Caligiore, Marco Mirolli, Gianluca Baldassarre
Publication: IEEE Transactions on Cognitive and Developmental Systems (TCDS)
Issue: Volume 11, Issue 2 – June 2019
Pages: 292-317

Abstract: When humans learn several skills to solve multiple tasks, they exhibit an extraordinary capacity to transfer knowledge between them. We present here the latest enhanced version of a bio-inspired modular reinforcement-learning (RL) architecture able to perform skill-to-skill knowledge transfer, called the transfer expert RL (TERL) model. The TERL architecture is based on an RL actor-critic model in which both the actor and the critic have a hierarchical structure, inspired by the mixture-of-experts model: a gating network selects experts that specialize in learning the policies or value functions of different tasks. A key feature of TERL is the capacity of its gating networks to accumulate, in parallel, evidence on how well each expert can solve a new task, so as to increase the responsibility for action of the best ones. A second key feature is the use of two different responsibility signals for the experts' functioning and learning: this allows multiple experts to be trained for each task, so that some of them can later be recruited to solve new tasks while avoiding catastrophic interference. The utility of TERL's mechanisms is shown in tests involving two simulated dynamic robot arms engaged in solving reaching tasks: a planar 2-DoF arm and a 3-D 4-DoF arm.

Index Terms: Autonomous robotics, bio-inspired modular neural architecture, catastrophic interference, cumulative learning, functioning and learning responsibility signals, mixture-of-expert networks, reaching tasks, transfer reinforcement learning (TRL)
IEEE Xplore Link:
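The core idea in the abstract — a gating network that weights experts with one responsibility signal for acting and a separate, softer one for learning, so several experts train on each task — can be illustrated with a minimal sketch. This is not the paper's actual TERL implementation (which uses an actor-critic RL setup on simulated robot arms); it is a toy supervised analogue with hypothetical linear experts, made-up learning rates, and softmax temperatures chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N_EXPERTS, STATE_DIM, ACTION_DIM = 4, 3, 2

# Hypothetical linear experts: each maps a state vector to an action vector.
experts = rng.normal(size=(N_EXPERTS, ACTION_DIM, STATE_DIM)) * 0.1

# Gating scores: accumulated evidence for how well each expert
# handles the current task (one scalar per expert).
gate = np.zeros(N_EXPERTS)

def softmax(x, temp=1.0):
    z = np.exp((x - x.max()) / temp)
    return z / z.sum()

def act(state):
    # Functioning responsibilities: a sharp (low-temperature) softmax,
    # so the expert with the most evidence dominates action selection.
    r_fun = softmax(gate, temp=0.2)
    outputs = experts @ state            # shape: (N_EXPERTS, ACTION_DIM)
    return r_fun @ outputs, outputs

def update(state, target, lr=0.5, evidence_lr=0.1):
    _, outputs = act(state)
    # Learning responsibilities: a flatter (high-temperature) softmax,
    # so several experts receive a training signal for the same task --
    # the feature the abstract credits with avoiding catastrophic
    # interference when old experts are reused for new tasks.
    r_learn = softmax(gate, temp=1.0)
    errors = target - outputs            # per-expert prediction error
    for i in range(N_EXPERTS):
        # Delta-rule update, weighted by this expert's learning responsibility.
        experts[i] += lr * r_learn[i] * np.outer(errors[i], state)
        # Accumulate evidence: experts with small error gain gating score.
        gate[i] += evidence_lr * (1.0 - np.linalg.norm(errors[i]))
```

Because the learning responsibilities stay flatter than the functioning ones, more than one expert improves on the task even while a single expert dominates the emitted action — the separation of the two signals is what makes the trained-but-unused experts available for later recruitment.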