Scope

The IEEE Transactions on Neural Networks and Learning Systems publishes technical articles that deal with the theory, design, and applications of neural networks and related learning systems.

Impact Score

Journal Citation Metrics such as Impact Factor, Eigenfactor Score™, and Article Influence Score™ are available where applicable. Each year, Journal Citation Reports© (JCR) from Thomson Reuters examines the influence and impact of scholarly research journals. JCR reveals the relationship between citing and cited journals, offering a systematic, objective means to evaluate the world's leading journals. Find out more about IEEE Journal Rankings.

Call for Special Issues

IEEE TNNLS Special Issue on "Deep Integration of Artificial Intelligence and Data Science for Process Manufacturing," Guest Editors: Feng Qian, East China University of Science and Technology, China, Yaochu Jin, University of Surrey, United Kingdom, S. Joe Qin, University of Southern California, United States, Kai Sundmacher, Max Planck Institute for Dynamics of Complex Technical Systems, Germany. Submission Deadline: October 30, 2019. [Call for Papers]

IEEE TNNLS Special Issue on "Deep Representation and Transfer Learning for Smart and Connected Health," Guest Editors: Vasile Palade, Coventry University, UK,Stefan Wermter, University of Hamburg, Germany,Ariel Ruiz-Garcia, Coventry University, UK,Antonio de Padua Braga, University of Minas Gerais, Brazil,Clive Cheong Took, Royal Holloway (University of London), UK.Submission Deadline: September 30, 2019. [Call for Papers]

IEEE TNNLS Special Issue on "Robust Learning of Spatio-Temporal Point Processes: Modeling, Algorithm, and Applications," Guest Editors: Junchi Yan, Shanghai Jiao Tong University, China, Hongteng Xu, Duke University, USA, Liangda Li, Yahoo Research, USA, Mehrdad Farajtabar, DeepMind, USA, Xiaokang Yang, Shanghai Jiao Tong University, China. Submission Deadline: September 20, 2019. [Call for Papers]

Featured Paper

Selected article from IEEE Transactions on Neural Networks and Learning Systems

Multicolumn RBF Network

This paper proposes the multicolumn RBF network (MCRN) as a method to improve the accuracy and speed of a traditional radial basis function network (RBFN). The MCRN is built by dividing a dataset into smaller subsets using the k-d tree algorithm. The N resulting subsets are treated as separate training sets, each used to train an individual RBFN. These small RBFNs are stacked in parallel and combined into the MCRN structure at testing time. Because each individual network is trained on its own subset and is completely independent of the others, the MCRN is a straightforward, easy-to-use parallel structure. This parallelism reduces testing time compared with that of a single, larger RBFN, which cannot easily be parallelized due to its fully connected structure. The small, informative subsets give the MCRN regional expertise, so each column specializes on part of the problem instead of generalizing over all of it. Tested on many benchmark datasets, the MCRN shows better accuracy and substantial improvements in training and testing times compared with a single RBFN. (A minimal sketch of this partition-train-route pipeline appears after the citation below.)

IEEE Transactions on Neural Networks and Learning Systems, Apr. 2018
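The following is a minimal sketch of the MCRN pipeline the abstract describes: partition the training data with a k-d tree, train one small RBFN per leaf subset, and route each test point to the single network responsible for its region. The Gaussian units, randomly sampled centers, regularized least-squares output weights, and all function names and defaults here are illustrative assumptions, not the authors' reference implementation.

```python
# Illustrative sketch of a multicolumn RBF network (MCRN); names and
# hyperparameters are assumptions, not the paper's implementation.
import numpy as np

def kdtree_partition(X, y, leaf_size=200):
    """Recursively split (X, y) on the median of the widest dimension
    until each subset holds at most `leaf_size` samples."""
    if len(X) <= leaf_size:
        return {"leaf": True, "X": X, "y": y}
    dim = int(np.argmax(X.max(axis=0) - X.min(axis=0)))
    thr = float(np.median(X[:, dim]))
    mask = X[:, dim] <= thr
    if mask.all() or (~mask).all():  # degenerate split: stop here
        return {"leaf": True, "X": X, "y": y}
    return {"leaf": False, "dim": dim, "thr": thr,
            "left": kdtree_partition(X[mask], y[mask], leaf_size),
            "right": kdtree_partition(X[~mask], y[~mask], leaf_size)}

def rbf_train(X, y, n_centers=20, gamma=1.0, reg=1e-6):
    """Train one small RBFN: randomly sampled centers, Gaussian
    activations, output weights by regularized least squares."""
    rng = np.random.default_rng(0)
    centers = X[rng.choice(len(X), size=min(n_centers, len(X)), replace=False)]
    Phi = np.exp(-gamma * ((X[:, None, :] - centers[None]) ** 2).sum(-1))
    w = np.linalg.solve(Phi.T @ Phi + reg * np.eye(len(centers)), Phi.T @ y)
    return centers, w, gamma

def mcrn_fit(node, **rbf_kwargs):
    """Attach an individually trained RBFN to every leaf subset."""
    if node["leaf"]:
        node["model"] = rbf_train(node["X"], node["y"], **rbf_kwargs)
    else:
        mcrn_fit(node["left"], **rbf_kwargs)
        mcrn_fit(node["right"], **rbf_kwargs)
    return node

def mcrn_predict(node, x):
    """Route a query down the tree; evaluate only its leaf's RBFN."""
    while not node["leaf"]:
        node = node["left"] if x[node["dim"]] <= node["thr"] else node["right"]
    centers, w, gamma = node["model"]
    phi = np.exp(-gamma * ((x[None] - centers) ** 2).sum(-1))
    return float(phi @ w)

if __name__ == "__main__":
    # Toy regression demo on synthetic data.
    rng = np.random.default_rng(1)
    X = rng.uniform(-3, 3, size=(1000, 2))
    y = np.sin(X[:, 0]) + np.cos(X[:, 1])
    tree = mcrn_fit(kdtree_partition(X, y, leaf_size=200),
                    n_centers=25, gamma=0.5)
    x = np.array([0.5, -1.0])
    print(mcrn_predict(tree, x), np.sin(0.5) + np.cos(-1.0))
```

Because each leaf model is trained and evaluated independently, the per-leaf training and the per-query lookup could each be dispatched to separate workers, which is the source of the testing-time speedup the abstract reports over a single fully connected RBFN.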