
The IEEE Transactions on Neural Networks and Learning Systems publishes technical articles that deal with the theory, design, and applications of neural networks and related learning systems.

ATTENTION AUTHORS: Please read the Double-Anonymous Review Policy before submitting your manuscript.

Impact Score

TNNLS Impact Score 2023
The values displayed for the journal bibliometrics fields in IEEE Xplore are based on the 2022 Journal Citation Reports from Clarivate, released in June 2023. The values displayed for CiteScore metrics are from the Scopus 2022 report, released in June 2023.

Journal Citation Metrics

Journal Citation Metrics such as Impact Factor, Eigenfactor Score™, and Article Influence Score™ are available where applicable. Each year, Journal Citation Reports© (JCR) from Clarivate examines the influence and impact of scholarly research journals. JCR reveals the relationship between citing and cited journals, offering a systematic, objective means to evaluate the world's leading journals. Find out more about IEEE Journal Bibliometrics.

Special Issues

IEEE TNNLS Special Issue Proposal: Advancements in Foundation Models [Call for Papers]

Guest Editors: Tianming Liu, University of Georgia, USA; Xiang Li, Massachusetts General Hospital and Harvard Medical School, USA; Hao Chen, Hong Kong University of Science and Technology, Hong Kong, China; Yixuan Yuan, Chinese University of Hong Kong, Hong Kong, China; Anirban Mukhopadhyay, TU Darmstadt, Germany.

Submission Deadline: 15 August 2024

Featured Paper

Reinforcement Learning Control With Knowledge Shaping

IEEE Transactions on Neural Networks and Learning Systems (Volume: 35, Issue: 3, March 2024)

Abstract: We aim to create a transfer reinforcement learning framework that allows learning controllers to leverage prior knowledge, extracted from previously learned tasks and previous data, to improve learning performance on new tasks. Toward this goal, we formalize knowledge transfer by expressing knowledge in the value function of our problem construct, an approach we refer to as reinforcement learning with knowledge shaping (RL-KS). Unlike most transfer learning studies, which are empirical in nature, our results include not only simulation verifications but also an analysis of algorithm convergence and solution optimality. Also different from the well-established potential-based reward shaping methods, which are built on proofs of policy invariance, our RL-KS approach advances toward a new theoretical result on positive knowledge transfer. Furthermore, our contributions include two principled ways, covering a range of realization schemes, to represent prior knowledge in RL-KS. We provide extensive and systematic evaluations of the proposed method. The evaluation environments include not only classical RL benchmark problems but also a challenging task: real-time control of a robotic lower limb with a human user in the loop.
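The abstract contrasts RL-KS with potential-based reward shaping, the well-established baseline in which a potential function over states augments the environment reward while provably preserving the optimal policy. As background only, here is a minimal sketch of that baseline in tabular Q-learning; the 1-D chain environment, the potential function, and all hyperparameters are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Background sketch: potential-based reward shaping (the baseline the
# abstract contrasts with RL-KS), on a hypothetical 1-D chain MDP.
# Environment, potential, and hyperparameters are illustrative assumptions.

N = 10          # chain states 0..N-1; reaching state N-1 ends the episode
GAMMA = 0.95
ALPHA = 0.1
EPISODES = 500

def potential(s):
    # Prior "knowledge" encoded as a potential: states nearer the goal
    # get a higher value.
    return s / (N - 1)

def run(shaped, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros((N, 2))            # actions: 0 = step left, 1 = step right
    for _ in range(EPISODES):
        s = 0
        while s != N - 1:
            # epsilon-greedy action selection (epsilon = 0.1)
            a = int(rng.integers(2)) if rng.random() < 0.1 else int(np.argmax(Q[s]))
            s2 = min(s + 1, N - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == N - 1 else 0.0
            if shaped:
                # F(s, s') = gamma * phi(s') - phi(s): the shaping term that
                # leaves the optimal policy invariant (the property RL-KS
                # moves beyond, toward positive-transfer guarantees).
                r += GAMMA * potential(s2) - potential(s)
            Q[s, a] += ALPHA * (r + GAMMA * np.max(Q[s2]) - Q[s, a])
            s = s2
    return Q

Q = run(shaped=True)
# The learned greedy policy steps right (toward the goal) in every
# non-terminal state.
print(all(int(np.argmax(Q[s])) == 1 for s in range(N - 1)))
```

Because the shaping term is a telescoping difference of potentials, it changes every return by a policy-independent constant, which is why the greedy policy it induces matches the unshaped optimum; RL-KS, per the abstract, instead expresses prior knowledge directly in the value function.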

IEEE Xplore Link: