Document Type
Conference Proceeding
Date of Original Version
2016
Abstract
This paper presents a novel BigEAR big data framework that employs a psychological audio processing chain (PAPC) to process smartphone-based acoustic big data collected while the user engages in social conversations in naturalistic scenarios. The overarching goal of BigEAR is to identify moods of the wearer from vocal activities such as laughing, singing, crying, arguing, and sighing. These annotations are grounded in labels relevant to psychologists who intend to monitor and infer the social context of individuals coping with breast cancer. We pursued a case study on couples coping with breast cancer to understand how their conversations affect emotional and social well-being. In state-of-the-art practice, psychologists and their teams must listen to the audio recordings and make these inferences through subjective evaluation, which is not only time-consuming and costly but also demands manual coding of thousands of audio files. The BigEAR framework automates this audio analysis. We computed the accuracy of BigEAR with respect to ground truth obtained from a human rater. Our approach yielded an overall average accuracy of 88.76% on real-world data from couples coping with breast cancer.
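For context, an overall average accuracy of the kind reported above is commonly computed as the mean of per-category accuracies against the human rater's labels. The following minimal Python sketch illustrates that calculation; the labels are hypothetical examples for illustration only, and this is not the paper's implementation:

    from collections import defaultdict

    def average_accuracy(predicted, ground_truth):
        """Per-category accuracy against human-rater labels, plus their mean."""
        correct = defaultdict(int)
        total = defaultdict(int)
        for pred, truth in zip(predicted, ground_truth):
            total[truth] += 1
            if pred == truth:
                correct[truth] += 1
        per_class = {c: correct[c] / total[c] for c in total}
        overall = sum(per_class.values()) / len(per_class)
        return per_class, overall

    # Hypothetical rater and classifier labels (not data from the paper).
    truth = ["laughing", "sighing", "crying", "laughing", "arguing", "singing"]
    pred  = ["laughing", "sighing", "crying", "sighing",  "arguing", "singing"]
    per_class, overall = average_accuracy(pred, truth)
    print(per_class, f"overall average accuracy = {overall:.2%}")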
Citation/Publisher Attribution
Dubey, H., Mehl, M. R., & Mankodiya. (2016, June 27-29). BigEAR: Inferring the Ambient and Emotional Correlates from Smartphone-based Acoustic Big Data. 2016 IEEE First International Conference on Connected Health: Applications, Systems and Engineering Technologies (CHASE). Washington, DC, USA. doi: 10.1109/CHASE.2016.46
Available at: http://dx.doi.org/10.1109/CHASE.2016.46
Author Manuscript
This is a pre-publication author manuscript of the final, published article.
Terms of Use
This article is made available under the terms and conditions applicable to Open Access Policy Articles, as set forth in our Terms of Use.