Open Access Awarded Papers

IEEE Open Access

IEEE Transactions on Emerging Topics in Computational Intelligence now publishes its highlighted papers in Open Access for a period of three months, helping authors gain maximum exposure for their groundbreaking research and application-oriented papers across all reader communities.

The first highlighted paper made Open Access in TETCI will be available for three months, starting 1 January 2019.


Light Gated Recurrent Units for Speech Recognition

Authors: Mirco Ravanelli, Philemon Brakel, Maurizio Omologo and Yoshua Bengio

Publication: IEEE Transactions on Emerging Topics in Computational Intelligence (TETCI)

Issue: Volume 2, Issue 2 – April 2018

Pages: 92-102

Abstract: A field that has directly benefited from the recent advances in deep learning is automatic speech recognition (ASR). Despite the great achievements of the past decades, however, a natural and robust human-machine speech interaction still appears to be out of reach, especially in challenging environments characterized by significant noise and reverberation. To improve robustness, modern speech recognizers often employ acoustic models based on recurrent neural networks (RNNs) that are naturally able to exploit large time contexts and long-term speech modulations. It is thus of great interest to continue the study of proper techniques for improving the effectiveness of RNNs in processing speech signals. In this paper, we revise one of the most popular RNN models, namely, gated recurrent units (GRUs), and propose a simplified architecture that turned out to be very effective for ASR. The contribution of this work is twofold: First, we analyze the role played by the reset gate, showing that a significant redundancy with the update gate occurs. As a result, we propose to remove the former from the GRU design, leading to a more efficient and compact single-gate model. Second, we propose to replace hyperbolic tangent with rectified linear unit activations. This variation couples well with batch normalization and could help the model learn long-term dependencies without numerical issues. Results show that the proposed architecture, called light GRU, not only reduces the per-epoch training time by more than 30% over a standard GRU, but also consistently improves the recognition accuracy across different tasks, input features, noisy conditions, as well as across different ASR paradigms, ranging from standard DNN-HMM speech recognizers to end-to-end connectionist temporal classification models.
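For readers who want a concrete picture of the architecture, the following is a minimal sketch of one time step of the Li-GRU cell described in the abstract, written in PyTorch purely for illustration; the class name LiGRUCell, the layer names, and the example sizes are ours, not from the paper. The cell keeps only the update gate, applies batch normalization to the feed-forward projections, and uses a ReLU candidate activation in place of tanh:

import torch
import torch.nn as nn

class LiGRUCell(nn.Module):
    """One time step of a Light GRU (sketch).

    Update equations, following the abstract:
        z_t  = sigmoid(BN(W_z x_t) + U_z h_{t-1})   # single update gate
        h~_t = ReLU(BN(W_h x_t) + U_h h_{t-1})      # candidate state
        h_t  = z_t * h_{t-1} + (1 - z_t) * h~_t     # interpolation
    The reset gate of a standard GRU is removed entirely.
    """

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        # Feed-forward (input) projections for gate and candidate.
        self.w_z = nn.Linear(input_size, hidden_size, bias=False)
        self.w_h = nn.Linear(input_size, hidden_size, bias=False)
        # Recurrent projections on the previous hidden state.
        self.u_z = nn.Linear(hidden_size, hidden_size, bias=False)
        self.u_h = nn.Linear(hidden_size, hidden_size, bias=False)
        # Batch normalization on the input projections only,
        # leaving the recurrent path untouched.
        self.bn_z = nn.BatchNorm1d(hidden_size)
        self.bn_h = nn.BatchNorm1d(hidden_size)

    def forward(self, x_t: torch.Tensor, h_prev: torch.Tensor) -> torch.Tensor:
        # Single update gate (no reset gate).
        z_t = torch.sigmoid(self.bn_z(self.w_z(x_t)) + self.u_z(h_prev))
        # ReLU candidate activation instead of tanh.
        h_cand = torch.relu(self.bn_h(self.w_h(x_t)) + self.u_h(h_prev))
        # Convex combination of previous state and candidate.
        return z_t * h_prev + (1.0 - z_t) * h_cand

# Example usage with hypothetical sizes (40 acoustic features, 256 hidden units):
cell = LiGRUCell(input_size=40, hidden_size=256)
x = torch.randn(8, 40)    # one frame for a batch of 8 utterances
h = torch.zeros(8, 256)   # initial hidden state
h = cell(x, h)

In this sketch, batch normalization is applied only to the feed-forward projections, which is the coupling with ReLU that the abstract refers to; dropping the reset gate is what yields the single-gate model behind the reported reduction in per-epoch training time.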

Index Terms: Speech recognition, Deep learning, Recurrent neural networks, LSTM, GRU


Available in Open Access from 1 January 2019 to 31 March 2019 in the IEEE Xplore Digital Library.