A Review of Emotion Recognition Using EEG Data and Machine Learning Techniques

Using AI to help people handle their emotions and identify their stress levels amid today's stressful lifestyles can greatly help them manage their wellbeing. With deep learning techniques, this can be made possible by creating a virtual bot that observes and understands human emotions. In this paper, we review work in which comments from Reddit are collected, preprocessed, and used to train a deep neural network to learn the emotions of the user. An inference engine module, a hybrid network consisting of a convolutional neural network and a recurrent neural network, is also interfaced. The model provides highly accurate responses. The selection of frequency bands plays an important role in discerning patterns of brain-related emotions, and this review explores a method for selecting appropriate frequency bands adaptively instead of using fixed bands to detect emotions. A common spatial pattern technique and machine learning classifiers were used to classify emotional states. This document describes a number of possible technologies aimed at communication and other applications; however, they represent only a small sample of the extensive future potential of these technologies. We have also focused on relatively near-term breakthroughs in the discussion of sensory and BCI applications; but breakthroughs such as new portable sensor technology, which offers ultra-high-resolution spatial and temporal recording of brain activity, open the door to a much broader range of applications. A further challenge is interfacing with a person's emotional processing to achieve good accuracy in emotion recognition.


INTRODUCTION
Immersive human-computer interaction should engage all human senses, such as vision, hearing, touch, smell, and taste. In a human-computer system, a person receives information from the system through the eyes, ears, skin, and nose, and the brain performs the corresponding information processing. The user then enters information into the computer system through the interaction scenarios provided by the human-computer interface. To simplify the human-computer interface, information can also be entered implicitly: it can be captured with cameras, sensor-based tracking systems, and feedback systems, and then processed by the appropriate algorithms, depending on the system's application. In this document, we explore a new dimension of the human-computer interface based on real-time brain recordings and their recognition. Electroencephalography (EEG) is a non-invasive technique for recording the electrical potentials of the scalp, which are produced by the activity of the cerebral cortex and reflect the state of the brain [1]. EEG technology provides a simple and portable way to monitor brain activity using appropriate signal-processing and classification methods and algorithms.
We have designed new algorithms for brain-state recognition, including emotion recognition and concentration detection, along with innovative integrated methods and tools for implementing EEG-based user immersion and interaction. Algorithms capable of quantifying "inner" brain states, including emotions, would enrich human-computer interaction research by providing new objective quantification methods and algorithms, new research tools for medical applications, entertainment, and even new digital art methods, and would allow us to integrate brain-state quantification into human-computer interfaces. This would lead to applications such as EEG-based serious games, including neurofeedback games, emotion-aware personalized web search, experimental art animations, personalized avatars communicating with virtual objects and other avatars, and even social robots assisting older people.
Despite the existence of different methods of obtaining brain signals, the most commonly used is the electroencephalogram (EEG), as it is non-invasive, portable, and inexpensive and can be used in almost any environment [2]. In addition, cheap and very portable EEG devices have been developed in recent years. As explained in [3], BCI systems have been used in rehabilitation (such as spelling systems), in neuroscience (such as attention-monitoring systems), and in cognitive psychology, for example in the treatment of attention deficit hyperactivity disorder. BCI systems have recently been studied for recognizing emotions and are considered a promising technique in this area because emotions are generated in the brain. There are a number of challenges in using BCI systems for emotion detection, such as the selection of acquisition methods and channels that provide the best information about a person's emotional state, and of processing techniques that achieve good accuracy in emotion recognition.

Definitions
BCIs are systems that allow a user to exchange information with the environment and control devices using brain activity. Brain signals can be obtained by invasive or non-invasive methods: in the first case, electrodes are implanted directly into the brain; in the second, the signal is recorded from the scalp. Despite the existence of several acquisition methods, the most commonly used is the electroencephalogram (EEG), because it is non-invasive, portable, and inexpensive and can be used in almost any environment [4]; moreover, cheap and increasingly portable EEG devices have become available in recent years. As described in [5], BCI systems have recently been studied for recognizing emotions and are considered promising in this area because emotions are generated in the brain. There are several challenges in using BCI systems for emotion detection, such as choosing acquisition methods and channels that best capture a person's emotional state, as well as processing techniques that achieve good accuracy in emotion recognition. A BCI is a communication system in which the messages or commands a person sends to the outside world do not pass through the brain's normal output pathways of peripheral nerves and muscles; in an EEG-based BCI, for example, the messages are encoded in EEG activity. A BCI thus offers its users an alternative way of acting on the world. Because independent BCIs give the brain entirely new output pathways, they are theoretically more interesting than dependent BCIs.
Most current studies focus on acquiring sentiment features by analyzing lexical and syntactic cues. These features are explicitly expressed through sentiment words, emojis, exclamation marks, and so on. One line of work uses a word-embedding method, derived from unsupervised learning on a large Twitter corpus, that exploits contextual semantic relationships and the statistical co-occurrence characteristics of words across tweets. The project provided real-time data analysis, making implementation of the model more practical and realistic, and it also classifies emojis by emotion; however, it involves complex interpretation and higher overheads, and the response model is indirect, producing aggregated responses. [6] noted that emotion-aware mobile phone applications have grown in popularity. For such applications to be practical, the emotion diagnosis system must be real-time and very accurate; this article introduces high-quality emotion diagnostics for mobile phone applications. The dominant feature bins are fed into a Gaussian mixture model based classifier to classify the emotion. Experimental results show that the proposed system achieves high recognition accuracy in a reasonable time; its merits include enhanced recognition features, easier implementation, and quick response. [7] suggested that sentiment analysis of a company's social media provides a way of revealing public feelings about events or related products. [8] suggested that understanding people's emotions through natural language is a complex task for smart systems based on the Internet of Things (IoT); the main problem is a lack of background knowledge about how emotions manifest in different real-world contexts.
This article proposes Bayesian inference methods to learn hidden semantic dimensions, such as contextual information, in natural language and to learn how emotional expressions manifest along these semantic dimensions. By adding a finer-grained level of hierarchy to the document-level assumptions about how emotion spreads, the authors were able to balance emotion evidence across the document and achieve even better predictions of emotions at both the word and document level. The model is easy to implement and real-time data analysis was possible, but the model can only process text.
[9] noted that sentiment analysis of reviews, an applied problem, has recently become very popular in the fields of text mining and computational linguistics. Here the link between Amazon product reviews and customer ratings is explored. Traditional machine learning algorithms are used, including support vector machines, k-nearest neighbors, and deep neural networks such as the recurrent neural network (RNN); comparing these results helps to better understand the algorithms, and they may also complement other methods of detecting fraud. The project includes feedback-oriented analysis that is realistic and convenient for practical use, but the data include only customer reviews, so the model is biased in one direction.
[10] noted that emotion recognition plays an important role in affective computing. To address this, the iCV-MEFED dataset was released, which includes 50 classes of compound emotions with labels assessed by psychologists. The task is complicated because compound facial expressions from different categories can look very similar. Nevertheless, the proposed dataset may help pave the way for further research on recognizing compound facial expressions. The model provides accurate results for feature mining, and the data-processing rate was high; however, the project's data were not analyzed in real time, and a more stable algorithm is needed because the model is subject to change. [11] observed that with the rapid growth of online social media content and its impact on human behavior, many researchers have become interested in studying these media platforms; part of this work focuses on sentiment analysis and opinion mining.
They refer to the automatic recognition of people's emotions on specific topics from their speech and posts. The dataset was selected manually, and the results of the automatic analysis were compared with human annotation. Tests show the feasibility of this task, achieving an F1 score of 45.9%. The model classifies a wide range of data, allowing improved real-time analysis of emotions, and several datasets were analyzed; however, the project module has a hierarchical dependence on different algorithms, making it difficult to maintain. [12] noted that the rapid development of Internet and social networking technologies has created a large number of online comments. In the age of big data, it is useful to understand the emotional tendencies of comments through artificial intelligence technology in a timely manner. Sentiment analysis is part of artificial intelligence, and this research is very important for extracting sentiment trends from reviews. The main task of opinion analysis is text classification, and different words contribute differently to the classification; current approaches generally use the weights of segmented words. The method proved effective, with high accuracy on the comments, although the computing power required for accurate results is high. The model was built from simple algorithms, but its evaluation focuses on text rather than other elements such as images and emojis.

Innovative Systems Design and Engineering www.iiste.org ISSN 2222-1727 (Paper) ISSN 2222-2871 (Online) Vol. 11, No.4, 2020
Second, a Long Short-Term Memory (LSTM) recurrent neural network is used to determine the relationship between the evolution of facial expressions across image sequences and six basic emotions. The system is applied to a humanoid robot to demonstrate its practical ability to improve human-robot interaction (HRI). It was developed as a high-level modular design and was easy to work with, though it requires hardware and support across multiple platforms. [13] proposed a study introducing a facial-expression recognition method to determine students' comprehension throughout the distance learning process. This study proposes an emotion recognition model consisting of three phases: feature extraction, feature subset selection, and emotion classification. Experimental results indicate that the proposed model is consistent with students' expressions of their learning state in a virtual learning environment. The article shows that recognizing emotions from facial expressions is feasible in distance learning, making it possible to identify students' learning status in real time; it can therefore help teachers adjust their teaching strategies in a virtual learning environment based on students' feelings. [14] noted that facial expression recognition (FER) is an important task for machines seeking to understand people's emotional changes. The final recognition result is computed by a softmax classifier. Fine-tuning is effective for such tasks when a well-trained model exists but sufficient training data cannot be obtained. The model is built on a solid algorithm and is very stable, but it is computationally slow and requires substantial computing power. [15] noted that robots should be able to recognize human emotions to improve human-robot interaction (HRI), and proposed an emotion detection system for a humanoid robot.
The robot is equipped with a camera to capture users' facial images and uses this system to recognize users' emotions and respond accordingly. The emotion recognition system, which is based on deep neural networks, learns six basic emotions: happiness, anger, disgust, fear, sadness, and surprise. Convolutional neural networks (CNNs) are used to extract visual features by learning from a large number of still images.

Signals Processing, Characteristics and Classification
The processing of EEG signals in a BCI system is divided into two stages: extraction of signal features and classification based on those features. The choice of method in the first stage depends on whether the relevant characteristics of the signal lie in the time domain or the frequency domain.
Wavelets [16] are widely used to extract EEG signal features in emotion detection systems; they are defined as small waves of limited duration and zero mean value. These are mathematical functions that localize a function or data set in both time and frequency.
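As an illustration of the idea, the single-level Haar transform below is a minimal numpy sketch (not the specific wavelet family any reviewed system uses): it splits a signal into approximation and detail coefficients, and the energy of the detail coefficients at successive levels is one common way to summarize them as features for emotion classification.

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform.

    Returns (approximation, detail) coefficients; the detail
    coefficients capture short-lived, high-frequency activity,
    which is why wavelets suit non-stationary EEG.
    """
    x = np.asarray(signal, dtype=float)
    if len(x) % 2:                      # pad to even length
        x = np.append(x, x[-1])
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def wavelet_energy_features(signal, levels=3):
    """Energy of the detail coefficients at each decomposition
    level: a compact feature vector for an EEG epoch."""
    features = []
    approx = np.asarray(signal, dtype=float)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        features.append(float(np.sum(detail ** 2)))
    return features
```

A constant signal has no high-frequency content, so all of its detail energies are zero; this is a quick sanity check on the decomposition.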
The second stage of BCI signal processing is the classification of the extracted features into commands relevant to the application, using translation algorithms. Translation algorithms include, for example, linear discriminant analysis, k-nearest neighbors, support vector machines, and artificial neural networks [17].
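To make the role of a translation algorithm concrete, here is a deliberately simplified nearest-centroid classifier in numpy: a hedged stand-in for LDA or an SVM, not the classifier of any reviewed system. It maps a feature vector to the emotional state whose training-set centroid is closest.

```python
import numpy as np

class NearestCentroidClassifier:
    """Minimal translation-algorithm sketch: assign each feature
    vector to the class whose training centroid is nearest."""

    def fit(self, X, y):
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        self.classes_ = np.unique(y)
        # one mean feature vector (centroid) per class
        self.centroids_ = np.array(
            [X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        # Euclidean distance from every sample to every centroid
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None],
                           axis=2)
        return self.classes_[np.argmin(d, axis=1)]
```

Real BCI pipelines prefer LDA or SVMs because they learn decision boundaries rather than assuming spherical classes, but the fit/predict structure is the same.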
The artificial neural network is widely used as an algorithm for classifying different types of human information, such as human emotions. Artificial neural networks are computational learning models inspired by the biology of the human brain. These models consist of neurons connected by synapses.
From a functional point of view, neural networks imitate the brain's ability to learn: given an input data set, they can in principle learn any mapping by adjusting their synaptic weights. A well-trained network should be able to generalize its learned knowledge and respond appropriately to completely new inputs. The most common use of neural networks is supervised classification, which therefore requires separate training and test data. A neural network consists of an input layer, one or more hidden layers, and an output layer.
The input layer consists of neurons that receive the input stimuli. The output layer consists of neurons whose output is the network's output. A hidden layer, or intermediate layer, consists of neurons that process data within the network. BCI systems use two main methods for processing EEG signals: analysis of the power spectrum across different frequency bands, and event-related potential analysis. Since different bands reflect different brain functions [18], frequency-band analysis is a well-known technique used clinically in the quantitative EEG (QEEG) protocol. The QEEG protocol assesses the power in the different frequency bands of the patient's EEG signals and compares it with a QEEG reference database. A pathology profile and a related recovery protocol can then be created using a statistical model. Event-related potential (ERP) analysis, including slow cortical potential (SCP) and P300 analysis, is a method for analyzing potentials time-locked to an event. SCP training has been shown to be useful in ADHD treatment [19], and P300 component training has been studied in the context of substance use [20].
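The band-power computation at the heart of a QEEG-style analysis can be sketched as follows. This is an illustrative numpy implementation, not any clinical protocol; the band boundaries are conventional approximations.

```python
import numpy as np

# Conventional EEG frequency bands in Hz (approximate boundaries).
BANDS = {"delta": (0.5, 4), "theta": (4, 8),
         "alpha": (8, 13), "beta": (13, 30)}

def band_powers(signal, fs):
    """Absolute power of each conventional EEG band, summed from
    the FFT periodogram of one channel.

    signal: 1-D samples; fs: sampling rate in Hz.
    """
    x = np.asarray(signal, dtype=float)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)   # periodogram
    return {name: float(psd[(freqs >= lo) & (freqs < hi)].sum())
            for name, (lo, hi) in BANDS.items()}
```

For example, a pure 10 Hz sinusoid sampled at 128 Hz concentrates essentially all of its power in the alpha band, mirroring the dominant alpha rhythm of a relaxed, eyes-closed subject.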

Figure 2: BCI neural Network Output Layers
A Convolutional Neural Network (ConvNet/CNN) is a deep learning algorithm that takes an input image, assigns importance (learnable weights and biases) to various aspects or objects in the image, and can differentiate one from another. The preprocessing required by a CNN is much lower than for other classification algorithms: while in primitive methods the filters are hand-engineered, a CNN, with appropriate training, can learn these filters and features itself. The CNN architecture is analogous to the connectivity pattern of neurons in the human brain and was inspired by the organization of the visual cortex. Individual neurons respond to stimuli only in a restricted region of the visual field known as the receptive field, and a collection of such fields overlaps to cover the entire visual area.

A recurrent neural network (RNN) is a type of neural network in which the output of the previous step is fed as input to the current step. In traditional neural networks, all inputs and outputs are independent of each other, but when the next word in a sentence must be predicted, the previous words are required and must be remembered. The RNN solves this problem with a hidden layer: its most important feature is the hidden state, which remembers information about the sequence. The RNN thus has a "memory" that retains information about everything calculated so far. It uses the same parameters for every input, performing the same task on all inputs and hidden states to produce the outputs; this reduces the number of parameters, unlike other neural networks. An RNN converts independent activations into dependent activations by sharing the same weights and biases across all steps, reducing parameter complexity and remembering each previous output by passing it to the next hidden step. The repeated steps can therefore be viewed as a single recurrent layer with shared weights and biases.
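The weight sharing described above can be made concrete with a minimal numpy sketch of a vanilla RNN forward pass (illustrative only, not the architecture of any reviewed system): the same three parameter arrays are reused at every time step, and the hidden state carries the "memory".

```python
import numpy as np

def rnn_forward(inputs, W_xh, W_hh, b_h):
    """Unroll a vanilla RNN over a sequence.

    The SAME parameters (W_xh, W_hh, b_h) are applied at every
    time step; the hidden state h accumulates information about
    all inputs seen so far.
    """
    h = np.zeros(W_hh.shape[0])        # initial hidden state
    for x_t in inputs:                 # one step per sequence element
        h = np.tanh(W_xh @ x_t + W_hh @ h + b_h)
    return h                           # final hidden state
```

Because only one set of weights exists regardless of sequence length, the parameter count does not grow with the number of time steps, which is exactly the reduction in parameter complexity noted above.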

Emotion Recognition Algorithms
Recognizing emotions from EEG can reveal a user's "inner" feelings, which can then be used for therapy or to create an emotion-enabled avatar for the user or for other real-time applications. Emotion recognition algorithms consist of two parts: feature extraction and classification. In real-time applications, the goal is to develop fast algorithms that recognize more emotions with fewer electrodes. A number of emotion recognition algorithms have already been created, as shown in Table 1. EEG-based emotion recognition algorithms can be divided into two groups: subject-dependent and subject-independent. In [21], a real-time algorithm was designed with only 3 channels in total. Fractal dimension algorithms were used to compute fractal features, and the EEG-based emotion recognition algorithm ran in real time using predefined thresholds derived from analysis of training sessions. In this work, arousal and valence levels were recognized with accuracies of 84.9% and 90%, respectively, distinguishing satisfied, pleasant, happy, frustrated, sad, fearful, and neutral emotional states.
BCI research is a multidisciplinary endeavor. BCI input signals and their features originate in the brain and reflect its anatomy, chemistry, and physiology.
BCI signal processing depends on the computer's hardware and software, and includes adaptation routines that depend on learning principles and other human factors such as attention, motivation, and fatigue. BCI outputs drive devices with particular electronic and/or mechanical properties and provide feedback that engages the user's sensory and cognitive functions.
Finally, BCI operating protocols define how the system works, based on the input features, feature extraction methods, translation algorithms, and outputs. BCI studies thus span neurobiology, psychology, engineering, applied mathematics, and computer science; success depends on expertise and effective exchange of information across all these areas.
Although all BCI research programmes share the same purpose, rapid and accurate communication and control, they differ not only in their inputs, feature extraction methods, translation algorithms, outputs, and operating protocols, but also in their immediate objectives. Some focus on specific applications, such as word processing or neuroprosthesis control, while others focus on establishing general features of BCI design and use through prototype applications, such as cursor control. Regardless of the objectives, the hardware and software must be able to acquire the electrophysiological input with sufficient speed and accuracy, produce the output quickly enough to keep device delays acceptable, and accommodate the adaptation of both the user and the system. Because the purpose is research, they should also retain complete data for subsequent evaluation.
In addition, for their efforts to be productive and reliable, BCI projects must comply with established principles of experimental design, data evaluation, and the documentation and dissemination of results.

EVALUATION OF EMOTION RECOGNITION TECHNIQUES
The approach combines pre-trained features created using a word-sentiment polarity concept based on a sentiment lexicon, n-gram features of the Reddit comment vectors, and input features provided by a deep convolutional neural network. The model captures contextual information through its recurrent structure and builds a text representation through neural networks. We conclude that a deep convolutional neural network using pre-trained word vectors performs well on the emotion classification task. The chatbot used is developed as a complete module with virtual reality features, consisting of facial recognition plus voice and sentiment analysis through interactive conversation. The development process also provides cross-platform support and adds cloud features. The final product is a multi-purpose cross-platform system that can detect emotions through text, sound, and images. The processes communicate through a built-in interface, so each component is interchangeable and can be used in different languages. The aim is to make BCI2000, together with its associated data storage and analysis tools, available to groups involved in BCI research and development.
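To illustrate how lexicon polarity scores and pre-trained word vectors can be combined into one input representation, here is a toy sketch. The embedding and polarity values below are invented placeholders; the reviewed work would use a large pretrained embedding table and a real sentiment lexicon.

```python
import numpy as np

# Toy stand-ins: in a real system these would be pretrained word
# vectors and a sentiment lexicon (all values here are invented).
EMBEDDINGS = {"happy": np.array([0.9, 0.1]),
              "sad":   np.array([-0.8, 0.2]),
              "day":   np.array([0.0, 0.5])}
POLARITY = {"happy": 1.0, "sad": -1.0}

def text_features(comment):
    """Average the word vectors of known tokens and append the
    mean lexicon polarity, mimicking the kind of combined feature
    vector that is fed to the deep network."""
    tokens = comment.lower().split()
    vecs = [EMBEDDINGS[t] for t in tokens if t in EMBEDDINGS]
    pols = [POLARITY[t] for t in tokens if t in POLARITY]
    avg_vec = np.mean(vecs, axis=0) if vecs else np.zeros(2)
    avg_pol = float(np.mean(pols)) if pols else 0.0
    return np.append(avg_vec, avg_pol)
```

The resulting fixed-length vector is what a downstream classifier (here, the hybrid CNN/RNN) would consume, one vector per comment.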

4.1 Field Applications of BCI
BCI-based emotion recognition systems can be implemented in many areas, such as entertainment, education, medicine, gaming, and intelligent tutoring systems. An example of an entertainment application is an EEG-based music player [22]: the user's current emotional state is detected and the associated music is played. The songs are divided into six different emotions: fearful, sad, frustrated, happy, satisfied, and pleasant.

5.0 SUMMARY
In the long term, we see a more holistic approach to BCI, combining critical brain, behavioral, task, and environmental information from advanced multi-aspect sensor technologies, analytical approaches, and computing infrastructure such as cloud technologies. This approach can also benefit from studying the synergy between human-computer interaction and large-scale data collection covering both brain function (e.g. EEG, fMRI) and brain structure (e.g. diffusion-weighted imaging), from individual neurons to whole-brain maps. Such data can reveal how differences and changes in physical brain structure, both within and between individuals, lead to changes in functional brain data that can be detected in real time, creating a much greater potential for individualized BCI technologies. The widespread integration of neurotechnologies also allows the development of a wide range of opportunistic BCI technologies, which can have a significant impact on everyday quality of life if scientists and developers can overcome the obstacles related to detecting and interpreting neural signatures in relatively unconstrained environments.
In addition to detection systems based on affective facial expressions, spoken language, and body language, there are several applications in affective computing that focus on detecting emotions using learning techniques to identify patterns of physiological activity that correspond to expressions of different emotions. Studies use cardiovascular signals, electrodermal activity, electromyography, and peripheral temperature for affect detection. Electroencephalogram (EEG)-based systems have also been used to detect emotions, for example in a system that allows a robot to recognize human emotions. Emotions are elicited by images and divided into three categories: pleasant, unpleasant, and neutral. Brain signals are a reliable source of information because the processing of emotions begins in the central nervous system; moreover, a person cannot control brain activity so as to simulate a false emotional state.