IEEE Transactions on Neural Networks and Learning Systems

Scope

The IEEE Transactions on Neural Networks and Learning Systems publishes technical articles that deal with the theory, design, and applications of neural networks and related learning systems.

Impact Score

Journal Citation Metrics such as Impact Factor, Eigenfactor Score™ and Article Influence Score™ are available where applicable. Each year, Journal Citation Reports© (JCR) from Thomson Reuters examines the influence and impact of scholarly research journals. JCR reveals the relationship between citing and cited journals, offering a systematic, objective means to evaluate the world's leading journals. Find out more about IEEE Journal Rankings.

Special Note:

To support the worldwide effort in fighting COVID-19, the IEEE Computational Intelligence Society (IEEE CIS) has set up a program, the COVID-19 Initiative. Under this initiative, IEEE TNNLS will expedite, to the extent possible, the processing of all articles submitted to TNNLS with a primary focus on COVID-19. The important information is as follows:

  1. We have set up a special Fast Track under IEEE TNNLS to process COVID-19-focused manuscripts. All papers submitted to this Fast Track will undergo an expedited review process, with a targeted first decision within 4 weeks. If a paper proceeds to the revision stage, the author(s) then have 2 weeks to revise, followed by another round of review within 3 weeks to reach a final decision. That is, we aim to reach a final decision on all Fast Track manuscripts within 9 weeks.
  2. When you submit to this special Fast Track, please make sure you select the paper type "Fast Track: COVID-19 Focused Papers". Also, please ensure that your manuscript is within the scope of IEEE TNNLS and has a research focus on COVID-19.
  3. If accepted, TNNLS will arrange to publish and print such articles immediately. Furthermore, all such articles will be published free of charge to authors and readers, with free access for one year from the date of publication, so that the research findings can be disseminated widely and freely to other researchers and the community at large.

We look forward to your submissions and support to TNNLS!

Call for Special Issues

IEEE TNNLS Special Issue on "Effective Feature Fusion in Deep Neural Networks," Guest Editors: Yanwei Pang, Tianjin University, China, Fahad Shahbaz Khan, Inception Institute of Artificial Intelligence, UAE, Xin Lu, Adobe Inc., USA, Fabio Cuzzolin, Oxford Brookes University, UK. Submission Deadline: November 30, 2020. [Call for Papers]

IEEE TNNLS Special Issue on "Deep Learning for Anomaly Detection," Guest Editors: Guansong Pang, University of Adelaide, Australia, Charu Aggarwal, IBM T. J. Watson Research Center, United States, Chunhua Shen, University of Adelaide, Australia, Nicu Sebe, University of Trento, Italy. Submission Deadline: November 30, 2020. [Call for Papers]

IEEE TNNLS Special Issue on "New Frontiers in Extremely Efficient Reservoir Computing," Guest Editors: Gouhei Tanaka, The University of Tokyo, Japan, Claudio Gallicchio, University of Pisa , Italy, Alessio Micheli, University of Pisa, Italy, Juan Pablo Ortega , University of St. Gallen Akira Hirose, The University of Tokyo, Japan. Submission Deadline: October 7, 2020. [Call for Papers]

IEEE TNNLS Special Issue on "Biologically Learned/Inspired Methods for Sensing, Control and Decision Making," Guest Editors: Yongduan Song, Chongqing University, China, Jennie Si, Arizona State University, USA, Sonya Coleman, Ulster University, UK, Dermot Kerr, Ulster University, UK. Submission Deadline: October 31, 2020. [Call for Papers]

Featured Paper

The Boundedness Conditions for Model-Free HDP(λ)
Authors: Seaar Al-Dabooni, Donald Wunsch
Publication: IEEE Transactions on Neural Networks and Learning Systems (TNNLS)
Issue: Volume 30, Issue 7 – July 2019
Pages: 1928-1942

Abstract: This paper provides a stability analysis for a model-free action-dependent heuristic dynamic programming (HDP) approach with an eligibility-trace long-term prediction parameter (λ). HDP(λ) learns from more than one future reward. Eligibility traces have long been popular in Q-learning; this paper proves and demonstrates that they are also worthwhile in HDP. We prove the uniformly ultimately bounded (UUB) property of HDP(λ) under certain conditions. Previous works present a UUB proof for traditional HDP [HDP(λ = 0)]; we extend the proof to nonzero λ. Using Lyapunov stability, we demonstrate the boundedness of the estimation error for the critic and actor neural networks as well as for the learning-rate parameters. Three case studies demonstrate the effectiveness of HDP(λ). The first considers the trajectories of a nonlinear system with an internal reinforcement signal; we compare the results with the performance of HDP and traditional temporal difference [TD(λ)] for different λ values. The second is a single-link inverted pendulum, on which we compare HDP(λ) with regular HDP under different levels of noise. The third is a 3-D maze-navigation benchmark, on which HDP(λ) is compared with state-action-reward-state-action (SARSA), Q(λ), and HDP. All these simulation results illustrate that HDP(λ) has competitive performance; thus this contribution is not only provably UUB but also useful in comparison with traditional HDP.
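
To make the eligibility-trace idea behind HDP(λ) concrete, below is a minimal Python sketch of classical tabular TD(λ), the mechanism the abstract builds on. This is an illustration only, not the authors' HDP(λ) method (which uses critic and actor neural networks rather than a value table); the env_step interface, state encoding, and hyperparameter values are placeholder assumptions.

    # Minimal sketch of tabular TD(lambda) with accumulating eligibility
    # traces. NOT the paper's HDP(lambda): that method trains critic/actor
    # neural networks, whereas this toy estimates a tabular value function.
    import numpy as np

    def td_lambda(env_step, n_states, episodes=100,
                  alpha=0.1, gamma=0.99, lam=0.8):
        """Estimate state values V under a fixed policy with TD(lambda).

        env_step(s) is a hypothetical interface assumed to return
        (reward, next_state, done) for the current state s.
        """
        V = np.zeros(n_states)
        for _ in range(episodes):
            e = np.zeros(n_states)          # eligibility traces
            s, done = 0, False              # assume episodes start in state 0
            while not done:
                r, s_next, done = env_step(s)
                # One-step TD error (no bootstrap on terminal states)
                delta = r + (0.0 if done else gamma * V[s_next]) - V[s]
                e[s] += 1.0                  # accumulate trace for current state
                # Spread the TD error over all recently visited states
                V += alpha * delta * e
                e *= gamma * lam             # decay traces by gamma * lambda
                s = s_next
        return V

At λ = 0 the trace decays immediately and the update reduces to one-step TD, mirroring the HDP(λ = 0) special case the abstract contrasts against; larger λ credits each temporal-difference error to more of the recently visited states, which is the "learns from more than one future reward" behavior described above.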

Index Terms: λ-return, action dependent (AD), approximate dynamic programming (ADP), heuristic dynamic programming (HDP), Lyapunov stability, model free, uniformly ultimately bounded (UUB)
IEEE Xplore Link: https://ieeexplore.ieee.org/document/8528554