ANALYSIS OF THE MEL SCALE FEATURES USING ELECTROGRAPHY AND SPEECH SIGNALS BY PARAMETERIZED KNN AND XGBOOST

Authors

DOI:

https://doi.org/10.32689/maup.it.2021.2.7

Keywords:

machine learning, speech recognition, emotion recognition, MFCC, supervised learning, decision trees, MEL scale

Abstract

Recognizing human emotions and speech has always been an exciting research topic for scientists. In our work, we show how a parameterized feature vector is obtained from a sentence divided into an emotionally charged part and a part that carries only informational load. Several characteristics of speech distinguish one utterance from another: pitch, timbre, intensity, and vocal tone allow speech to be classified into multiple emotions. We supplement these with a new speech-classification feature, which consists of dividing a sentence into an emotionally charged part and a purely informational part. From this we conclude that the speech pattern changes under the influence of different emotional environments. Since the speaker's emotional state can be identified on the basis of the Mel scale, MFCC is one option for studying the emotional aspects of a speaker's utterances. In this paper, we implement a model that identifies multiple emotional states from MFCC features for two datasets, classifies the emotions based on MFCC characteristics, and compares the results. Overall, this work implements a classification model based on dataset minification that uses averaging functions to improve classification accuracy across various machine learning algorithms. In addition to the static analysis of the author's tonal portrait, which is central to MFCC, we propose a new method of dynamic analysis that treats a sentence as a new linguistic-emotional entity pronounced by the same author. By ranking the Mel-scale characteristics by importance, we can parameterize the coordinates of the processed vectors using the parameterized KNN method. Speech recognition is a multilevel pattern recognition task.
Here, acoustic signals are analyzed and structured into a hierarchy of structural elements, words, phrases, and sentences. Each level of this hierarchy can impose additional constraints, such as admissible word sequences or known pronunciation patterns, that reduce recognition errors at the lower levels. Analysis of the dynamics of voice and speech is suitable for improving the quality of machine perception and synthesis of human speech, and is well within the capabilities of artificial intelligence. The results of emotion recognition can be widely applied in e-learning platforms, automotive systems, medicine, and other areas.
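The two building blocks named in the abstract, the Mel scale and a parameterized (distance-weighted) KNN over averaged MFCC vectors, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the standard Stevens Mel-scale conversion formulas are combined with a toy KNN in which each feature coordinate is scaled by a hypothetical importance weight, and the training data are made-up 2-dimensional "averaged MFCC" vectors.

```python
import math

# Stevens' Mel scale (1937): perceived pitch (mel) as a function of frequency (Hz).
# These standard conversion formulas underlie the MFCC filter bank.
def hz_to_mel(f_hz):
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# A sketch of the "parameterized KNN" idea: a distance-weighted k-nearest-neighbour
# vote in which each coordinate is scaled by an importance weight obtained from
# feature ranking. The weights here are illustrative, not values from the paper.
def weighted_knn_predict(train, labels, query, feature_weights, k=3):
    def dist(a, b):
        return math.sqrt(sum(w * (x - y) ** 2
                             for w, x, y in zip(feature_weights, a, b)))
    # k nearest training vectors, closest first.
    nearest = sorted(range(len(train)), key=lambda i: dist(train[i], query))[:k]
    # Inverse-distance voting: closer neighbours count for more.
    votes = {}
    for i in nearest:
        votes[labels[i]] = votes.get(labels[i], 0.0) + 1.0 / (dist(train[i], query) + 1e-9)
    return max(votes, key=votes.get)

# Hypothetical data: one averaged MFCC-like vector per utterance.
train = [[1.0, 5.0], [1.2, 4.8], [6.0, 1.0], [5.8, 1.2]]
labels = ["neutral", "neutral", "angry", "angry"]
print(hz_to_mel(1000.0))   # ~1000 mel by construction of the scale
print(weighted_knn_predict(train, labels, [1.1, 4.9], [1.0, 1.0]))
```

In a full pipeline, each training vector would be the per-utterance average of MFCC frames, which is the dataset-minification step the abstract describes.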

References

Koolagudi, S.G., Rao, K.S. (2012). Emotion recognition from speech: a review. International Journal of Speech Technology, no. 15, pp. 99–117.

Marechal, C. et al. (2019). Survey on AI-based multimodal methods for emotion detection. High-Performance Modelling and Simulation for Big Data Applications, Springer.

Rao, K.S., Koolagudi, S.G., Vempada, R.R. (2013). Emotion recognition from speech using global and local prosodic features. International Journal of Speech Technology, vol. 16, no. 2, pp. 143–160.

Koolagudi, S.G., Barthwal, A., Devliyal, S., Rao, K.S. (2012). Real life emotion classification from speech using Gaussian mixture models. Communications in Computer and Information Science, no. 38, pp. 3892–3899.

Latif, S., Rana, R., Younis, S., Qadir, J., Epps, J. (2018). Transfer learning for improving speech emotion classification accuracy. Interspeech: Proceedings of the Annual Conference of the International Speech Communication Association, pp. 257–261.

Lee, C.M., Narayanan, S.S. (2005). Toward detecting emotions in spoken dialogs. IEEE Transactions on Speech and Audio Processing, vol. 13, no. 2, pp. 293–303.

Banse, R., Scherer, K.R. (1996). Acoustic profiles in vocal emotion expression. Journal of Personality and Social Psychology, vol. 70, no. 3, pp. 614–636.

Hozjan, V., Kačič, Z. (2003). Context-independent multilingual emotion recognition from speech signals. International Journal of Speech Technology, no. 6, pp. 311–320.

Ramakrishnan, S. (2012). Recognition of Emotion from Speech: A Review. Speech Enhancement, Modeling and Recognition-Algorithms and Applications, pp. 121–138.

Sebe, N., Cohen, I., Huang, T.S. (2005). Multimodal emotion recognition. Handbook of Pattern Recognition and Computer Vision, 3rd Edition.

Zhang, Q., Wang, Y., Wang, L., Wang, G. (2007). Research on speech emotion recognition in E-learning by using neural networks method. IEEE International Conference on Control and Automation, ICCA, vol. 167, pp. 114–177.

Jing, S., Mao, X., Chen, L. (2018). Prominence features: Effective emotional features for speech emotion recognition. Digital Signal Processing: A Review Journal, vol. 72, pp. 216–231.

Albornoz, E.M., Milone, D.H., Rufiner, H.L. (2011). Spoken emotion recognition using hierarchical classifiers. Computer Speech and Language.

Özseven, T., Düğenci, M., Durmuşoğlu, A. (2018). A Content Analysis of The Research Approaches in Speech Emotion. International Journal of Engineering Sciences & Research Technology.

Krishna Kishore, K.V., Krishna Satish, P. (2013). Emotion recognition in speech using MFCC and wavelet features. Proceedings of the 2013 3rd IEEE International Advance Computing Conference, IACC.

Yousefpour, A., Ibrahim, R., Hamed, H.N.A. (2017). Ordinal-based and frequency-based integration of feature selection methods for sentiment analysis. Expert Systems with Applications, no. 75, pp. 80–93.

Shu, L. et al. (2018). A review of emotion recognition using physiological signals. Sensors (Switzerland).

Oosterwijk, S., Lindquist, K.A., Anderson, E., Dautoff, R., Moriguchi, Y., Barrett, L.F. (2012). States of mind: Emotions, body feelings, and thoughts share distributed neural networks. NeuroImage.

Pessoa, L. (2010). Emotion and cognition and the amygdala: From “what is it?” to “what’s to be done?”. Reprinted from Neuropsychologia, vol.48.

Winkielman, P., Niedenthal, P., Wielgosz, J., Eelen, J., Kavanagh, L.C. (2014). Embodiment of cognition and emotion. APA handbook of personality and social psychology, vol. 1, pp.151–175.

Fernández-Caballero, A. et al. (2016). Smart environment architecture for emotion detection and regulation. Journal of Biomedical Informatics.

Guan, H., Liu, Z., Wang, L., Dang, J., Ruiguo, Yu. (2018). Speech Emotion Recognition Considering Local Dynamic Features. Computer Science.

Cen, L., Zhu, L.Y., Wu, F., Yu, Z.L., Hu, F.A. (2016). Real-Time Speech Emotion Recognition System and its Application in Online Learning. Emotions, Technology, Design, and Learning.

Shuman, V., Scherer, K.R. (2015). Emotions, Psychological Structure of. International Encyclopedia of the Social & Behavioral Sciences: Second Edition.

Ekman, P. (2005). Basic Emotions. Handbook of Cognition and Emotion.

Langner, O., Dotsch, R., Bijlstra, G., Wigboldus, D. H. J., Hawk, S. T., van Knippenberg, A. (2010). Presentation and validation of the Radboud Faces Database. Cognition and Emotion, no. 24(8), pp. 1377–1388.

Ekman, P. (1993). Facial expression and emotion. American Psychologist, no. 48(4), pp. 384–392.

Bourke, C., Douglas, K., Porter, R. (2010). Processing of facial emotion expression in major depression: A review. Australian and New Zealand Journal of Psychiatry. 2010, no. 44(8), pp. 681–696.

Van den Stock, J., Righart, R., de Gelder, B. (2007). Body Expressions Influence Recognition of Emotions in the Face and Voice. Emotion.

Banse, R., Scherer, K.R. (1996). Acoustic Profiles in Vocal Emotion Expression. Journal of Personality and Social Psychology.

Gulzar, T., Singh, A., Sharma, S. (2014). Comparative Analysis of LPCC, MFCC and BFCC for the Recognition of Hindi Words using Artificial Neural Networks. International Journal of Computer Applications.

Shrawankar, U., Thakare, V.M. (2013). Techniques for Feature Extraction In Speech Recognition System : A Comparative Study. International Journal Of Computer Applications In Engineering, Technology and Sciences.

Haamer, R.E., Rusadze, E., Lüsi, I., Ahmed, T., Escalera, S., Anbarjafari, G. (2018). Review on Emotion Recognition Databases. Human-Robot Interaction – Theory and Application.

Lalitha, S., Geyasruti, D., Narayanan, R., Shravani, M. (2015). Emotion Detection Using MFCC and Cepstrum Features. Procedia Computer Science, vol. 70, pp. 29–35.

Jackson, P., Haq, S. (2014). Surrey audio-visual expressed emotion (savee) database. University of Surrey: Guildford.

Liu, Z.T., Xie, Q., Wu, M., Cao, W.H., Mei, Y., Mao, J. W. (2018). Speech emotion recognition based on an improved brain emotion learning model. Neurocomputing.

Ekman, P. et al. (1987). Universals and Cultural Differences in the Judgments of Facial Expressions of Emotion. Journal of Personality and Social Psychology.

Zeng, Z., Pantic, M., Roisman, G.I., Huang, T.S. (2009). A survey of affect recognition methods: Audio, visual, and spontaneous expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence.

Koduru, A., Valiveti, H.B., Budati, A.K. (2020). Feature extraction algorithms to improve the speech emotion recognition rate. International Journal of Speech Technology.

Kumar, K., Kim, C., Stern, R.M. (2011). Delta-spectral cepstral coefficients for robust speech recognition. ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing – Proceedings.

Tiwari, V. (2010). MFCC and its applications in speaker recognition. International Journal on Emerging Technologies.

Dave, N. (2013). Feature Extraction Methods LPC, PLP and MFCC In Speech Recognition. International Journal for Advance Research in Engineering and Technology.

Yankayi, M. (2016). Feature Extraction Mel Frequency Cepstral Coefficients (MFCC), pp. 1–6.

Ananthakrishnan, S., Narayanan, S.S. (2008). Automatic prosodic event detection using acoustic, lexical, and syntactic evidence. IEEE Transactions on Audio, Speech and Language Processing.

Kinnunen, T., Li, H. (2010). An overview of text-independent speaker recognition: From features to supervectors. Speech Communication.

Wang, W.Y., Biadsy, F., Rosenberg, A., Hirschberg, J. (2013). Automatic detection of speaker state: Lexical, prosodic, and phonetic approaches to level-of-interest and intoxication classification. Computer Speech and Language.

Lyons, J. (2014). Mel Frequency Cepstral Coefficient. Practical Cryptography.

Palo, H.K., Chandra, M., Mohanty, M.N. (2018). Recognition of Human Speech Emotion Using Variants of Mel-Frequency Cepstral Coefficients. Lecture Notes in Electrical Engineering, vol. 442, pp. 491–498.

Yazici, M., Basurra, S., Gaber, M. (2018). Edge Machine Learning: Enabling Smart Internet of Things Applications. Big Data and Cognitive Computing.

Wang, Xia, Dong, Yuan, Hakkinen, J., Viikki, O. (2002). Noise robust Chinese speech recognition using feature vector normalization and higher-order cepstral coefficients.

Davis, S.B., Mermelstein, P. (1990). Comparison of Parametric Representations for Monosyllabic Word Recognition in Continuously Spoken Sentences. Readings in Speech Recognition.

Palaz, D., Magimai-Doss, M., Collobert, R. (2019). End-to-end acoustic modeling using convolutional neural networks for HMM-based automatic speech recognition. Speech Communication.

Passricha, V., Aggarwal, R.K. (2020). A comparative analysis of pooling strategies for convolutional neural network based Hindi ASR. Journal of Ambient Intelligence and Humanized Computing.

Vimala, C., Radha, V. (2014). Suitable Feature Extraction and Speech Recognition Technique for Isolated Tamil Spoken Words. International Journal of Computer Science and Information Technologies.

Dalmiya, C.P., Dharun, V.S., Rajesh, K.P. (2013). An efficient method for Tamil speech recognition using MFCC and DTW for mobile applications. 2013 IEEE Conference on Information and Communication Technologies, ICT.

NithyaKalyani, A., Jothilakshmi, S. (2019). Speech summarization for tamil language. Intelligent Speech Signal Processing.

Stevens, S.S., Volkmann, J., Newman, E.B. (1937). A Scale for the Measurement of the Psychological Magnitude Pitch. Journal of the Acoustical Society of America.

Mitrović, D., Zeppelzauer, M., Breiteneder, C. (2010). Features for Content-Based Audio Retrieval.

Caruana, R., Niculescu-Mizil, A. (2006). An empirical comparison of supervised learning algorithms. ACM International Conference Proceeding Series.

Kotsiantis, S.B. (2007). Supervised machine learning: A review of classification techniques. Informatica (Ljubljana).

Luckner, M., Topolski, B., Mazurek, M. (2017). Application of XGboost algorithm in fingerprinting localisation task. Computer Science.

Sutton, O. (2012). Introduction to k Nearest Neighbour Classification and Condensed Nearest Neighbour Data Reduction.

Deng, Z., Zhu, X., Cheng, D., Zong, M., Zhang, S. (2016). Efficient kNN classification algorithm for big data. Neurocomputing.

Okfalisa, Gazalba, I., Mustakim, Reza, N.G.I. (2018). Comparative analysis of k-nearest neighbor and modified k-nearest neighbor algorithm for data classification. 2nd International Conferences on Information Technology, Information Systems and Electrical Engineering, ICITISEE.

Skuratovskii, R.V. (2020). The timer compression of data and information. Proceedings of the IEEE 3rd International Conference on Data Stream Mining and Processing (DSMP), pp. 455–459.

Skuratovskii, R.V. (2019). Employment of Minimal Generating Sets and Structure of Sylow 2-Subgroups Alternating Groups in Block Ciphers. Advances in Computer Communication and Computational Sciences, Springer, pp. 351–364.

Gnatyuk, V.A. (2001). Mechanism of laser damage of transparent semiconductors. Physica B: Condensed Matter, pp. 308–310.

Zgurovsky, M.Z., Pankratova, N.D. (2007). System Analysis: Theory and Applications. Springer Verlag. Berlin.

Romanenko, Y.O. (2016). Place and role of communication in public policy. Actual Problems of Economics, vol. 176, no. 2, pp. 25–26.

Skuratovskii, R.V. (2021). On commutator subgroups of Sylow 2-subgroups of the alternating group, and the commutator width in wreath products. European Journal of Mathematics, vol. 7, pp. 353–373.

Published

2021-11-03

How to Cite

Skuratovskii, R., & Bazarna, A. (2021). ANALYSIS OF THE MEL SCALE FEATURES USING ELECTROGRAPHY AND SPEECH SIGNALS BY PARAMETERIZED KNN AND XGBOOST. Information Technology and Society, no. 2(2), pp. 58–74. https://doi.org/10.32689/maup.it.2021.2.7