Phase-sensitive and recognition-boosted speech separation using deep recurrent neural networks


Erdoğan, Hakan and Hershey, J. R. and Watanabe, S. and Le Roux, J. (2015) Phase-sensitive and recognition-boosted speech separation using deep recurrent neural networks. In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2015), South Brisbane, QLD


Official URL: http://dx.doi.org/10.1109/ICASSP.2015.7178061


Separation of speech embedded in non-stationary interference is a challenging problem that has recently seen dramatic improvements using deep network-based methods. Previous work has shown that estimating a masking function to be applied to the noisy spectrum is a viable approach that can be improved by using a signal-approximation based objective function. Better modeling of dynamics through deep recurrent networks has also been shown to improve performance. Here we pursue both of these directions. We develop a phase-sensitive objective function based on the signal-to-noise ratio (SNR) of the reconstructed signal, and show that in experiments it yields uniformly better results in terms of signal-to-distortion ratio (SDR). We also investigate improvements to the modeling of dynamics, using bidirectional recurrent networks, as well as by incorporating speech recognition outputs in the form of alignment vectors concatenated with the spectral input features. Both methods yield further improvements, pointing to tighter integration of recognition with separation as a promising future direction.
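The phase-sensitive objective described above compares the masked noisy magnitude against the clean magnitude scaled by the cosine of the phase difference between clean and noisy spectra, so that the error is measured in the complex domain rather than on magnitudes alone. A minimal sketch of this idea (the function name and array shapes are illustrative, not from the paper) could look like:

```python
import numpy as np

def phase_sensitive_loss(mask, noisy_stft, clean_stft):
    """Mean squared phase-sensitive spectrum approximation error.

    mask       : real-valued mask estimated by the network
    noisy_stft : complex STFT of the noisy mixture
    clean_stft : complex STFT of the clean speech
    """
    # Phase difference between clean speech and noisy mixture
    theta = np.angle(clean_stft) - np.angle(noisy_stft)
    # Phase-sensitive target: clean magnitude projected onto the noisy phase
    target = np.abs(clean_stft) * np.cos(theta)
    # Masked noisy magnitude vs. phase-sensitive target
    return np.mean((mask * np.abs(noisy_stft) - target) ** 2)
```

When the mixture phase matches the clean phase, the target reduces to the plain clean magnitude; as the phases diverge, the target shrinks, which is what makes the objective sensitive to phase.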

Item Type: Papers in Conference Proceedings
Subjects: T Technology > TK Electrical engineering. Electronics. Nuclear engineering
ID Code: 28858
Deposited By: Hakan Erdoğan
Deposited On: 24 Dec 2015 16:22
Last Modified: 23 Aug 2019 15:40
