

The IEEE Transactions on Neural Networks and Learning Systems publishes technical articles that deal with the theory, design, and applications of neural networks and related learning systems.

Impact Score

Journal Citation Metrics such as Impact Factor, Eigenfactor Score™, and Article Influence Score™ are available where applicable. Each year, Journal Citation Reports© (JCR) from Thomson Reuters examines the influence and impact of scholarly research journals. JCR reveals the relationship between citing and cited journals, offering a systematic, objective means to evaluate the world's leading journals. Find out more about IEEE Journal Rankings.

Call for Nominations / Applications for the position of Editor-in-Chief of the IEEE Transactions on Neural Networks and Learning Systems

The IEEE Transactions on Neural Networks and Learning Systems (TNNLS) publishes technical articles that deal with the theory, design, and applications of neural networks and related learning systems. Details about the current state of this publication can be found at

The IEEE CIS Executive Committee has formed an Ad Hoc Search Committee to invite nominations/applications for the position of Editor-in-Chief for TNNLS. The Editor-in-Chief appointment is for a 2-year term starting 1 January 2022. Nominees/applicants should be dedicated volunteers with outstanding research profiles and extensive editorial experience. The nomination/application package should include a complete CV along with a separate description (maximum 300 words per topic) on each of the following items:

  • Vision Statement;
  • Editorial Experience;
  • Summary of publishing experience in IEEE journals/magazines;
  • IEEE Volunteer Experience;
  • Institutional Support;
  • Current service and administrative commitments;
  • Networking with the Community;
  • Challenges, if any, faced by the publication, and how to deal with them;
  • Why the candidate considers themselves fit for this position.

The nomination/application package should be sent as a single PDF file by email to both Prof. Kay Chen Tan and Jo-Ellen Snyder by May 15, 2021.

  • Kay Chen Tan, Chair of the Search Committee
  • Pau-Choo (Julia) Chung
  • Pablo Estevez
  • Barbara Hammer
  • Haibo He
  • Jim Keller
  • Derong Liu
  • Marios Polycarpou

Call for Special Issues

IEEE TNNLS Special Issue on "Causal Discovery and Causality-Inspired Machine Learning," Guest Editors: Kun Zhang, Carnegie Mellon University, USA; Ilya Shpitser, Johns Hopkins University, USA; Sara Magliacane, University of Amsterdam, Netherlands; Davide Bacciu, University of Pisa, Italy; Fei Wu, Zhejiang University, China; Changshui Zhang, Tsinghua University, China; Peter Spirtes, Carnegie Mellon University, USA. Submission Deadline: October 22, 2021. [Call for Papers]

IEEE TNNLS Special Issue on "Theory, Algorithms, and Applications for Hybrid Intelligent Dynamic Optimization," Guest Editors: Jun Fu, Northeastern University, China; Junfei Qiao, Beijing University of Technology, China; Kok Lay Teo, Curtin University, Australia; Rolf Findeisen, Otto-von-Guericke University Magdeburg, Germany. Submission Deadline: August 1, 2021. [Call for Papers]

IEEE TNNLS Special Issue on "Deep Neural Networks for Graphs: Theory, Models, Algorithms and Applications," Guest Editors: Ming Li, Zhejiang Normal University, China; Alessio Micheli, University of Pisa, Italy; Yu Guang Wang, Max Planck Institute for Mathematics in the Sciences, Germany; Shirui Pan, Monash University, Australia; Pietro Liò, University of Cambridge, UK; Giorgio Stefano Gnecco, IMT School for Advanced Studies, AXES Research Unit, Italy; Marcello Sanguineti, University of Genoa, Italy. Submission Deadline: July 31, 2021. [Call for Papers]

Featured Paper

The Boundedness Conditions for Model-Free HDP(λ)
Authors: Seaar Al-Dabooni, Donald Wunsch
Publication: IEEE Transactions on Neural Networks and Learning Systems (TNNLS)
Issue: Volume 30, Issue 7 – July 2019
Pages: 1928-1942

Abstract: This paper provides the stability analysis for a model-free action-dependent heuristic dynamic programming (HDP) approach with an eligibility trace long-term prediction parameter (λ). HDP(λ) learns from more than one future reward. Eligibility traces have long been popular in Q-learning; this paper proves and demonstrates that they are also worthwhile to use with HDP. We prove the uniformly ultimately bounded (UUB) property of HDP(λ) under certain conditions. Previous works present a UUB proof for traditional HDP [HDP(λ = 0)]; we extend the proof to include the λ parameter. Using Lyapunov stability, we demonstrate the boundedness of the estimation errors for the critic and actor neural networks as well as the learning-rate parameters. Three case studies demonstrate the effectiveness of HDP(λ). The first considers the trajectories of a nonlinear system with an internal reinforcement signal, comparing the results with the performance of HDP and traditional temporal difference [TD(λ)] for different λ values. The second is a single-link inverted pendulum, on which we compare HDP(λ) with regular HDP under different levels of noise. The third is a 3-D maze navigation benchmark, on which HDP(λ) is compared with state-action-reward-state-action (SARSA), Q(λ), and HDP. All these simulation results illustrate that HDP(λ) has competitive performance; thus this contribution is not only UUB but also useful in comparison with traditional HDP.
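The eligibility-trace mechanism the abstract builds on can be illustrated with a minimal tabular TD(λ) value-update sketch. This is not the paper's HDP(λ) actor–critic (which uses neural-network critic and actor); it only shows how a trace lets a single TD error credit more than one recently visited state. The function name and parameter defaults are illustrative assumptions.

```python
import numpy as np

def td_lambda_episode(V, episode, alpha=0.1, gamma=0.99, lam=0.8):
    """One episode of tabular TD(lambda) with accumulating eligibility traces.

    V       : value table, shape (n_states,), updated in place and returned
    episode : list of (state, reward, next_state) transitions
    """
    e = np.zeros_like(V)                        # eligibility trace per state
    for s, r, s_next in episode:
        delta = r + gamma * V[s_next] - V[s]    # one-step TD error
        e[s] += 1.0                             # mark the visited state
        V += alpha * delta * e                  # credit ALL recently visited states
        e *= gamma * lam                        # traces decay geometrically
    return V
```

With λ = 0 the trace vector reduces to an indicator of the current state and the update collapses to one-step TD, mirroring how HDP(λ = 0) recovers traditional HDP in the paper.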

Index Terms: λ-return, action dependent (AD), approximate dynamic programming (ADP), heuristic dynamic programming (HDP), Lyapunov stability, model free, uniformly ultimately bounded (UUB)
IEEE Xplore Link: