Robust sub-band speech feature extraction using multiresolution convolutional neural networks

Document Type: Original Article

Authors

1 Computer Engineering Department, K.N.Toosi University of Technology, Tehran, Iran

2 Computer Engineering Department, K.N.Toosi University of Technology, Tehran, Iran

Abstract

Convolutional neural networks (CNNs), a class of deep neural networks, have recently been used in speech recognition systems both for acoustic modeling and for feature extraction combined with acoustic modeling. In this paper, we propose using a CNN for robust feature extraction from the noisy speech spectrum. In the proposed method, the CNN input is the noisy speech spectrum and its targets are the clean (denoised) logarithms of the Mel filter bank energies (LMFBs), so the network learns to extract noise-robust features from the speech spectrum. A drawback of the CNN in this setting is its fixed frequency resolution. We therefore propose using multiple CNNs with different convolution filter sizes to provide different frequency resolutions for feature extraction from the speech spectrum, a method we call Multiresolution CNN (MRCNN). Recognition results on the Aurora 2 database show that CNNs outperform deep belief networks (DBNs): the CNN achieves a 20% average relative improvement in recognition accuracy over the DBN, and the MRCNN achieves a further 1% average relative improvement over the CNN.
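To make the multiresolution idea concrete, the sketch below shows how parallel CNN branches with different frequency-axis filter sizes could map a noisy spectrum patch to clean LMFB targets. This is a minimal illustration only, assuming PyTorch; the filter sizes, layer widths, and input dimensions are hypothetical and are not taken from the paper.

```python
# Minimal sketch of a multiresolution CNN feature extractor (assumed setup,
# not the paper's exact configuration): several branches with different
# frequency-axis kernel heights process the same noisy spectrum patch, and
# their features are concatenated to regress clean log Mel filter bank energies.
import torch
import torch.nn as nn

class MRCNNSketch(nn.Module):
    def __init__(self, n_freq_bins=129, n_frames=11, n_mel=24,
                 kernel_heights=(3, 5, 9)):          # hypothetical filter sizes
        super().__init__()
        self.branches = nn.ModuleList()
        for k in kernel_heights:
            self.branches.append(nn.Sequential(
                # each branch convolves the (freq x time) patch with a
                # different frequency resolution
                nn.Conv2d(1, 32, kernel_size=(k, 3), padding=(k // 2, 1)),
                nn.ReLU(),
                nn.MaxPool2d(kernel_size=(2, 1)),    # pool along frequency only
                nn.Flatten(),
            ))
        feat_dim = len(kernel_heights) * 32 * (n_freq_bins // 2) * n_frames
        self.regressor = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(),
            nn.Linear(512, n_mel),                   # predict clean LMFB targets
        )

    def forward(self, noisy_spec):                   # (batch, 1, freq, frames)
        feats = torch.cat([b(noisy_spec) for b in self.branches], dim=1)
        return self.regressor(feats)

# Training would minimize the error between predicted and clean LMFBs:
model = MRCNNSketch()
x = torch.randn(8, 1, 129, 11)      # batch of noisy spectrum patches (illustrative shape)
target = torch.randn(8, 24)         # corresponding clean LMFB vectors (placeholder data)
loss = nn.MSELoss()(model(x), target)
```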

Keywords

