SMART EMOTION-BASED SYSTEM FOR MUSIC
Abstract— This project presents a method for automatically detecting emotional duality and mixed emotional experience using a Linux-based system. The coordinates, distances, and movements of tracked points were used to create features from visual input that captured facial expressions, head and face gestures, and face movement. Spectral and prosodic features were extracted using the camera. Espeak, Pyttsx, and a Face API were used for feature computation. A combined feature vector was created by feature-level fusion, and a cascade classifier was used for emotion detection. Live participants and actions are used to record simultaneous mixed emotional experiences. Based on the computed result, the system plays songs and displays a book list.
Index Terms— Smart Emotion, Face Detection, Emotion Prediction, OpenCV.
I. INTRODUCTION
Emotion recognition has important applications in medicine, education, marketing, security, and surveillance. Machines can enhance human-computer interaction by accurately identifying human emotions and responding to them. Available research has mainly examined automatic recognition of a single emotion, but studies in psychology and behavioural science have shown that humans concurrently experience and express mixed emotions. For instance, a person can feel happy and sad at the same time. In this research, combinations of the six basic emotions (happiness, sadness, surprise, anger, fear, and disgust, along with the neutral state) were used. The main aim of this study is to develop features that capture data from facial expressions to identify multiple emotions. In a single-label classification problem, each annotated feature-vector instance is associated with only a single class label. Recognition of multiple concurrent emotions, however, is a multi-label classification problem.
In a multi-label problem, each feature-vector instance is associated with multiple labels, such as the presence or absence of each of the six basic emotions. Multi-label classification has been receiving increased attention as it is applied to many domains, including text, music, image- and video-based systems, security, and bioinformatics.
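To make the multi-label formulation concrete, the short sketch below trains one binary classifier per emotion label over toy fused feature vectors. It is a minimal illustration assuming scikit-learn; the paper does not name a library, and the data here is synthetic.

# Minimal multi-label sketch (scikit-learn is an assumption; data is toy).
import numpy as np
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression

EMOTIONS = ["happiness", "sadness", "surprise", "anger", "fear", "disgust"]

# Toy fused feature vectors (visual + prosodic) and their annotated labels.
X = np.random.rand(8, 16)                     # 8 samples, 16 fused features
y_raw = [["happiness", "sadness"],            # mixed emotion (duality)
         ["anger"], ["surprise", "fear"], ["happiness"],
         ["sadness"], ["disgust"], ["happiness", "surprise"], ["fear"]]

mlb = MultiLabelBinarizer(classes=EMOTIONS)
Y = mlb.fit_transform(y_raw)                  # binary indicator matrix

# One independent binary classifier per emotion label.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
print(mlb.inverse_transform(clf.predict(X[:2])))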
Human emotions play a very important role in human interactions. They can disclose the attentiveness, purpose, and mental state of an individual. The face, as one study notes, is the most "powerful", inherent, unspoken, and immediate way for humans to convey sentiments and reveal intentions. Facial expressions are not the only characteristics that pertain to emotion, but they may be the most discernible. Because of the diverse social and aesthetic settings humans inhabit, the number of emotions individuals use cannot be rigorously defined.
The assessment of emotions and opinions is a growing research field that aims to recognize the opinions, feelings, judgements, behaviours, tendencies, and emotions that people express in text. The increasing importance of emotion detection has coincided with the growth of social media, such as surveys, forums, blogs, Twitter, and social networks. Emotional analysis systems are used in almost all business and social fields, because opinions and emotions are important for all human activities and have a serious impact on people's behaviour.
Music, as a channel of expression, has always been a popular way to portray and recognize human emotion. Reliable emotion-based classification systems can go a long way in helping us grasp the emotional meaning of music. However, research in the field of emotion-based music classification has not yet produced optimal results.
This paper examines recognition of concurrent emotional ambivalence and mixed emotions. The study focuses on two concurrent emotions (emotion duality) to limit the scope of the research based on the availability of scenarios. This kept the experimental design realistic: subjects could express dual emotions with ease, and observers could annotate the data without ambiguity. The study implemented a multimodal emotion recognition methodology with multiple check-box inputs in the user-interface software to facilitate annotation of concurrent emotions. Previously, static systems played songs as simple music players through manual selection, with the user deciding which songs to play. In the proposed system, the process of choosing and playing songs is handled by the system itself by recognizing the user's facial expression (happy, sad, angry, surprised, or excited).
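The vision front end can be illustrated with OpenCV, which the paper names, using its bundled Haar cascade for face detection. The emotion predictor below is a hypothetical placeholder standing in for the trained fused-feature classifier; this is a sketch, not the system's actual implementation.

# Face detection with OpenCV's bundled frontal-face Haar cascade.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def predict_emotion(face_img):
    """Hypothetical stand-in for the trained fusion classifier."""
    return "happy"

cap = cv2.VideoCapture(0)                    # live participant camera feed
ret, frame = cap.read()
if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        mood = predict_emotion(gray[y:y + h, x:x + w])
        print("Detected mood:", mood)        # drives song/book selection
cap.release()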
II. LITERATURE SURVEY
S. Patwardhan [1] investigates the effect of transferring emotion-rich features between source and target networks on classification accuracy and training time in a multimodal setting for vision-based emotion recognition.
M. Liu [2] notes that emotional expressions of virtual agents are widely believed to enhance interaction with the user by utilizing more natural means of communication. However, with current technology, virtual agents are often only able to produce facial expressions to convey emotional meaning.
R. C. Ferrari and M. Mirza [3] present an initial implementation of a multimodal emotion recognition system using mobile devices, together with the creation of an affective database through a mobile application. The recognizer works within a mobile educational application to identify users' emotions as they interact with the device.
G. M. Knapp [4] likewise investigates the transfer of emotion-rich features between source and target networks and its effect on classification accuracy and training time in a multimodal setting for vision-based emotion recognition.
III. THE PROPOSED METHOD
User: uses the system.
Server: handles the connection between the user and the database.
Database: stores information related to facial characteristics and the uploaded songs and books (a hypothetical storage sketch follows).
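As an illustration of this storage layer, the sketch below uses SQLite; the paper does not specify a database engine, so the schema and table names are assumptions.

# Assumed SQLite schema for the database component described above.
import sqlite3

conn = sqlite3.connect("emotion_system.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS faces (user_id INTEGER, features BLOB);
CREATE TABLE IF NOT EXISTS songs (title TEXT, mood TEXT, path TEXT);
CREATE TABLE IF NOT EXISTS books (title TEXT, mood TEXT);
""")
conn.execute("INSERT INTO songs VALUES (?, ?, ?)",
             ("Track One", "happy", "/music/happy/track1.mp3"))
conn.commit()
print(conn.execute("SELECT title FROM songs WHERE mood='happy'").fetchall())
conn.close()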
Our system has three main modules: a user module, a mood-detection module, and a video-suggestion module. The processes involved in these three modules are:
User-Module:
The user operates the system and stores the song and book libraries in it.
Mood-detection-Module:
Based on the facial expression, this module recognizes the user's mood, shows the corresponding song list or book library, and also triggers video suggestions.
Video-suggestion-Module:
Based on the user's mood, this module suggests videos (a toy sketch of the suggestion step follows this list).
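The sketch below illustrates the suggestion step: a detected mood is mapped to stored song, book, and video lists. The catalogue contents and mood labels are illustrative assumptions, not the system's actual database.

# Toy mood-to-media mapping; entries are placeholders, not real catalogues.
SUGGESTIONS = {
    "happy":  {"songs": ["upbeat_01.mp3"], "books": ["Comedy Picks"],
               "videos": ["feel_good_clip"]},
    "sad":    {"songs": ["calm_02.mp3"], "books": ["Uplifting Reads"],
               "videos": ["motivation_clip"]},
    "angry":  {"songs": ["soothing_03.mp3"], "books": ["Mindfulness"],
               "videos": ["relaxation_clip"]},
    "normal": {"songs": ["mix_04.mp3"], "books": ["Top Picks"],
               "videos": ["trending_clip"]},
}

def suggest(mood: str) -> dict:
    """Return the song list, book list, and video suggestions for a mood."""
    return SUGGESTIONS.get(mood, SUGGESTIONS["normal"])

print(suggest("happy"))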
The main motivation of the system is to automatically identify the user's mood and, based on it, play the related music (happy, angry, stressed, normal) through a Linux-based system. Music mood describes the inherent emotional meaning of a music clip. It is helpful for music understanding, music search, and music-related applications. Nowadays, users expect richer semantic metadata for archiving music, such as similarity, style, and mood.
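Playback on the Linux host can be sketched as picking a track from a mood-specific folder and handing it to a command-line player. The folder layout and the mpg123 player are assumptions for illustration; any player installed on the host would do.

# Assumed layout: one folder per mood, e.g. ~/music/happy/*.mp3.
import random
import subprocess
from pathlib import Path

def play_for_mood(mood: str, library: str = "~/music") -> None:
    folder = Path(library).expanduser() / mood
    tracks = list(folder.glob("*.mp3"))
    if tracks:
        # mpg123 is one common Linux command-line player (an assumption).
        subprocess.run(["mpg123", str(random.choice(tracks))])

play_for_mood("happy")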
IV. CONCLUSION
To conclude, music is an important means of regulating mood in everyday situations. The proposed system is readily available to everyone and can be used almost anywhere. Since the system depends directly on the user's facial expressions, it is very effective.
REFERENCES
- S. Patwardhan, "Augmenting Supervised Emotion Recognition with Rule-Based Decision Model," arXiv, 2016.
- S. Patwardhan and G. M. Knapp, "Affect Intensity Estimation Using Multiple Modalities," Florida Artificial Intelligence Research Society Conference, May 2014.
- S. Patwardhan and G. M. Knapp, "Multimodal Affect Analysis for Product Feedback Assessment," in IIE Annual Conference Proceedings, Institute of Industrial Engineers, 2013.
- S. E. Kahou, C. Pal, X. Bouthillier, P. Froumenty, C. Gulcehre, R. Memisevic, P. Vincent, A. Courville, Y. Bengio, R. C. Ferrari, and M. Mirza, "Combining modality specific deep neural networks for emotion recognition in video," in Proceedings of the 15th ACM International Conference on Multimodal Interaction, 2013.
- F. D. Schönbrodt and J. B. Asendorpf, "The Challenge of Constructing Psychologically Believable Agents," Journal of Media Psychology, vol. 23, pp. 100-107, Apr. 2011.
- N. C. Krämer, I. A. Iurgel, and G. Bente, "Emotion and motivation in embodied conversational agents," presented at the AISB'05 Convention, Symposium on Agents that Want and Like: Motivational and Emotional Roots of Cognition and Action, Hatfield, UK, 2005.
- P. Viola, M. J. Jones, and D. Snow, "Detecting Pedestrians Using Patterns of Motion and Appearance."
- J. Jo, H. G. Jung, K. R. Park, and J. Kim, "Vision-based method for detecting driver drowsiness and distraction in driver monitoring system."
- A. Amditis, M. Bimpas, G. Thomaidis, M. Tsogas, M. Netto, S. Mammar, A. Beutner, N. Möhler, T. Wirthgen, S. Zipser, A. Etemad, M. Da Lio, and R. Cicilloni, "A situation-adaptive lane-keeping support system: Overview of the SAFELANE approach."