Recognition of Emotions Provoked by Auditory Stimuli using EEG Signal Based on Sparse Representation-Based Classification

Authors

Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz, Iran

Abstract

Emotions are important for the proper interpretation of actions as well as of relationships among humans. Recognizing emotions through the Electroencephalogram (EEG) makes it possible to assess emotional states without traditional instruments such as questionnaires. Automatic emotion recognition reflects the individual's emotional state without clinical examinations or in-person visits, and therefore plays a very important role in completing the Brain-Computer Interface (BCI) puzzle. One of the major challenges in this regard is to select and extract suitable features of the EEG signal so as to create an acceptable distinction between different emotional states. Another challenge is to select an appropriate classification algorithm to distinguish and correctly label the signals associated with each emotional state. In this paper, we propose Sparse Representation-based Classification (SRC), which addresses both challenges by operating directly on the EEG signal samples (no feature extraction/selection is involved) and then classifying the emotional classes based on class dictionaries learned to sparsely represent the data of each emotional state. The proposed method is tested on two databases: the first was experimentally recorded in our biomedical signal processing lab, where the subjects were excited by auditory stimuli, and the second was obtained from Shanghai University, China, where the subjects were excited by visual stimuli. The proposed method achieves more than 80% accuracy in recognizing the two emotions, positive and negative, suggesting that it classifies emotions with a higher degree of success while avoiding the complexity of feature selection/extraction.
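The classification scheme summarized above can be illustrated with a minimal sketch of generic SRC: a test signal is represented sparsely over the concatenation of per-class dictionaries, and the label is assigned to the class whose portion of the coefficients yields the smallest reconstruction residual. This is not the paper's exact pipeline; the function name, the use of orthogonal matching pursuit as the sparse solver, and all parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit


def src_classify(dictionaries, s, n_nonzero=10):
    """Generic SRC sketch (illustrative, not the paper's exact method).

    dictionaries: dict mapping class label -> (signal_dim, n_atoms) array,
                  each column being one training atom of that class.
    s:            test signal of shape (signal_dim,).
    Returns the label whose atoms best reconstruct s.
    """
    labels = list(dictionaries)
    # Concatenate all class dictionaries column-wise into one dictionary D.
    D = np.hstack([dictionaries[c] for c in labels])

    # Sparse coding step: solve s ~= D a with at most n_nonzero coefficients.
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                    fit_intercept=False)
    omp.fit(D, s)
    a = omp.coef_

    # Residual per class, keeping only that class's coefficients.
    residuals = {}
    start = 0
    for c in labels:
        Dc = dictionaries[c]
        ac = a[start:start + Dc.shape[1]]
        residuals[c] = np.linalg.norm(s - Dc @ ac)
        start += Dc.shape[1]

    # Assign the class with the smallest reconstruction error.
    return min(residuals, key=residuals.get)
```

A signal drawn (up to noise) from one class's dictionary should be labeled with that class, since the competing dictionaries cannot reconstruct it with a sparse coefficient vector.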

Keywords

