IEEE Transactions on Neural Networks and Learning Systems


Scope

The IEEE Transactions on Neural Networks and Learning Systems publishes technical articles that deal with the theory, design, and applications of neural networks and related learning systems.

Impact Score


Journal Citation Metrics such as Impact Factor, Eigenfactor Score™, and Article Influence Score™ are available where applicable. Each year, Journal Citation Reports© (JCR) from Thomson Reuters examines the influence and impact of scholarly research journals. JCR reveals the relationship between citing and cited journals, offering a systematic, objective means to evaluate the world's leading journals. Find out more about IEEE Journal Rankings.

Call for Nominations / Applications for the position of Editor-in-Chief of the IEEE Transactions on Neural Networks and Learning Systems

The IEEE Transactions on Neural Networks and Learning Systems (TNNLS) publishes technical articles that deal with the theory, design, and applications of neural networks and related learning systems. Details about the current state of this publication can be found at https://cis.ieee.org/publications/t-neural-networks-and-learning-systems

The IEEE CIS Executive Committee has formed an Ad Hoc Search Committee to invite nominations/applications for the position of Editor-in-Chief for TNNLS. The Editor-in-Chief appointment is for a two-year term starting 1 January 2022. Nominees/applicants should be dedicated volunteers with outstanding research profiles and extensive editorial experience. The nomination/application package should include a complete CV along with a separate description (max. 300 words per item) on each of the following items:

  • Vision Statement;
  • Editorial Experience;
  • Summary of publishing experience in IEEE journals/magazines;
  • IEEE Volunteer Experience;
  • Institutional Support;
  • Current service and administrative commitments;
  • Networking with the Community;
  • Challenges, if any, faced by the publication, and how to deal with them;
  • Why the candidate considers themselves a good fit for this position.

The nomination/application package should be sent as a single PDF file through email to both Prof. Kay Chen Tan (kctan@polyu.edu.hk) and Jo-Ellen Snyder (j.e.snyder@ieee.org) by May 15, 2021.

  • Kay Chen Tan, Chair of the Search Committee
  • Pau-Choo (Julia) Chung
  • Pablo Estevez
  • Barbara Hammer
  • Haibo He
  • Jim Keller
  • Derong Liu
  • Marios Polycarpou

Call for Special Issues


IEEE TNNLS Special Issue on "Explainable and Generalizable Deep Learning for Medical Imaging," Guest Editors: Tianming Liu, University of Georgia, USA; Dajiang Zhu, University of Texas at Arlington, USA; Fei Wang, Cornell University, USA; Islem Rekik, Istanbul Technical University, Turkey; Xia Hu, Rice University, USA; Dinggang Shen, ShangheiTech University, China. Submission Deadline: April 15, 2022. [Call for Papers]

IEEE TNNLS Special Issue on "Explainable Representation Learning-based Intelligent Inspection and Maintenance of Complex Systems," Guest Editors: Zhigang Liu, Tongji University, Southwest Jiaotong University, China; Cesare Alippi, Università della Svizzera italiana, Switzerland and Politecnico di Milano, Italy, Hongtian Chen University of Alberta, Canada, Derong Liu University of Illinois at Chicago, USA. Submission Deadline: April 1, 2022. [Call for Papers]

IEEE TNNLS Special Issue on "Reinforcement Learning Based Control: Data-Efficient and Resilient Methods," Guest Editors: Weinan Gao, Florida Institute of Technology, USA; Li Na, Harvard University, USA; Kyriakos Vamvoudakis, Georgia Institute of Technology, USA; F. Richard Yu, Carleton University, Canada; Zhong-Ping Jiang, New York University, USA. Submission Deadline: March 1, 2022. [Call for Papers]

IEEE TNNLS Special Issue on "Stream Learning," Guest Editors: Jie Lu, University of Technology Sydney, Australia; Joao Gama, University of Porto, Portugal; Xin Yao, Southern University of Science and Technology, China; Leandro Minku, University of Birmingham, UK. Submission Deadline: December 15, 2021 [EXTENDED]. [Call for Papers]

IEEE TNNLS Special Issue on "Theory, Algorithms, and Applications for Hybrid Intelligent Dynamic Optimization," Guest Editors: Jun Fu, Northeastern University, China; Junfei Qiao, Beijing University of Technology, China; Kok Lay Teo, Curtin University, Australia; Rolf Findeisen, Otto-von-Guericke University Magdeburg, Germany. Submission Deadline: October 31, 2021 [EXTENDED]. [Call for Papers]

IEEE TNNLS Special Issue on "Causal Discovery and Causality-Inspired Machine Learning," Guest Editors: Kun Zhang, Carnegie Mellon University, USA; Ilya Shpitser, John Hopkins University, USA; Sara Magliacane, University of Amsterdam, Netherlands; Davide Bacciu, University of Pisa, Italy; Fei Wu, Zhejiang University, China; Changshui Zhang, Tsinghua University, Chinal Peter Spirtes, Carnegie Mellon University, USA. Submission Deadline: November 1, 2021 [EXTENDED]. [Call for Papers]

Featured Paper

The Boundedness Conditions for Model-Free HDP(λ)
Authors: Seaar Al-Dabooni, Donald Wunsch
Publication: IEEE Transactions on Neural Networks and Learning Systems (TNNLS)
Issue: Volume 30, Issue 7 – July 2019
Pages: 1928-1942

Abstract: This paper provides the stability analysis for a model-free action-dependent heuristic dynamic programming (HDP) approach with an eligibility trace long-term prediction parameter (λ). HDP(λ) learns from more than one future reward. Eligibility traces have long been popular in Q-learning. This paper proves and demonstrates that they are worthwhile to use with HDP. In this paper, we prove its uniformly ultimately bounded (UUB) property under certain conditions. Previous works present a UUB proof for traditional HDP [HDP(λ = 0)], but we extend the proof with the λ parameter. By using Lyapunov stability, we demonstrate the boundedness of the estimated error for the critic and actor neural networks as well as the learning rate parameters. Three case studies demonstrate the effectiveness of HDP(λ). The trajectories of the internal reinforcement signal nonlinear system are considered as the first case. We compare the results with the performance of HDP and traditional temporal difference [TD(λ)] with different λ values. The second case study is a single-link inverted pendulum. We investigate the performance of the inverted pendulum by comparing HDP(λ) with regular HDP, with different levels of noise. The third case study is a 3-D maze navigation benchmark, which is compared with state-action-reward-state-action (SARSA), Q(λ), HDP, and HDP(λ). All these simulation results illustrate that HDP(λ) has competitive performance; thus, this contribution is not only UUB but also useful in comparison with traditional HDP.

Index Terms: λ-return, action dependent (AD), approximate dynamic programming (ADP), heuristic dynamic programming (HDP), Lyapunov stability, model free, uniformly ultimately bounded (UUB)
IEEE Xplore Link: https://ieeexplore.ieee.org/document/8528554
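
To make the idea behind the featured paper concrete, the following is a minimal, illustrative sketch of action-dependent HDP(λ): a critic estimates the cost-to-go of a state-action pair, an eligibility trace propagates the temporal-difference error back over recent steps with decay γλ, and the actor is updated by descending the critic's gradient with respect to the action. The linear critic over quadratic features, the linear actor u = -k·x, and the toy scalar plant below are assumptions made purely for illustration; the paper itself uses neural-network critic and actor and establishes the UUB conditions analytically.

# Minimal sketch of model-free action-dependent HDP(lambda) with eligibility traces.
# Assumptions (not from the paper): a linear critic over quadratic features,
# a linear actor, and a toy scalar plant x_{k+1} = 0.9*x_k + u_k, used only
# for illustration; the paper employs neural-network critic and actor.

import numpy as np

rng = np.random.default_rng(0)

gamma, lam = 0.95, 0.7         # discount factor and eligibility-trace parameter
alpha_c, alpha_a = 0.05, 0.01  # critic / actor learning rates

def features(x, u):
    """Quadratic features of the state-action pair for the critic."""
    return np.array([x * x, x * u, u * u, 1.0])

w = np.zeros(4)   # critic weights: J_hat(x, u) = w @ features(x, u)
k_gain = 0.0      # actor parameter: u = -k_gain * x

x = 1.0
trace = np.zeros_like(w)  # eligibility trace over critic weights

for step in range(2000):
    u = -k_gain * x + 0.01 * rng.standard_normal()   # exploratory control
    x_next = 0.9 * x + u                              # toy plant (assumed)
    r = x * x + u * u                                 # quadratic cost (utility)

    # TD(lambda)-style critic update on J_hat(x, u)
    u_next = -k_gain * x_next
    phi = features(x, u)
    delta = r + gamma * (w @ features(x_next, u_next)) - (w @ phi)
    trace = gamma * lam * trace + phi                 # accumulate eligibility
    w = w + alpha_c * delta * trace

    # Actor update: descend dJ_hat/du, chain-ruled to the actor parameter
    dJ_du = w[1] * x + 2.0 * w[2] * u                 # d/du of w @ features(x, u)
    k_gain = k_gain + alpha_a * dJ_du * x             # du/dk = -x, so this step lowers J_hat

    x = x_next
    if abs(x) > 1e3:                                  # crude divergence guard
        x = 1.0

print("learned critic weights:", np.round(w, 3))
print("learned feedback gain :", round(k_gain, 3))

Setting lam = 0 in this sketch recovers a traditional one-step HDP update, which is the baseline [HDP(λ = 0)] the paper compares against.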