My scientific approach
Studying emotion is about understanding why a person’s heart may race before a blind date, why someone can get goosebumps while looking over a sweeping canyon, and why we have daily ups and downs. These experiences vary in many ways, but scientists usually investigate only a few (two to six) emotional states or prototypical facial expressions at a time. This is a major reason why findings in emotion science have been ambiguous. By way of analogy, imagine trying to draw conclusions about the effect of brain size on intelligence by comparing a parrot, a dog, and an octopus.
To better understand emotion, we first document how it varies. We can then study how different dimensions of emotion correspond to variability in the situations we encounter, in the patterns of brain activity they evoke, in physiological responses like goosebumps, heart palpitations, panting, and sweating, and in expressions of the face, body, and voice. With that structure in hand, we can apply machine learning to recognize these emotion-related responses when they occur, as in the sketch below.
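To make the recognition step concrete, here is a minimal sketch of the kind of supervised model one might train, using scikit-learn on synthetic data. The four features and three emotion labels are illustrative assumptions, not the actual measurements or taxonomy from our studies.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic dataset: 300 episodes, each described by four physiological and
# expressive features (stand-ins for measurements like heart rate, skin
# conductance, a brow raise, or a smile).
n_episodes = 300
features = rng.normal(size=(n_episodes, 4))
labels = rng.integers(0, 3, size=n_episodes)  # three illustrative states

# Inject a weak signal so the classifier has something to learn.
for state in range(3):
    features[labels == state, state] += 1.0

# Standardize features, then fit a multinomial logistic regression, a simple
# stand-in for the recognition models described above.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, features, labels, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```

In practice the feature set is far richer (video, audio, peripheral physiology) and the label space far larger, but the logic is the same: learn a mapping from measurable responses to dimensions of emotion, and validate it on held-out data.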
Understanding the diversity of emotion isn’t simple. We’ve introduced new statistical methods to explore how many different dimensions of emotion are evoked by video and music, conveyed via facial and vocal expression, and described using words and emoji. Among the many products of this work are maps that visualize taxonomies of emotion-related response, within and across cultures. These visualizations capture complexities in emotional response that are otherwise difficult to grasp.
Figure: State spaces of emotion are defined by their conceptualization, dimensionality, and distribution.
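One way to ask how many dimensions of emotion are reliably shared across raters, in the spirit of the methods described above, is a split-half canonical correlation analysis: dimensions that replicate across two independent groups of raters count as reliable, while idiosyncratic noise does not. The sketch below uses synthetic ratings and scikit-learn's CCA; the numbers of stimuli, categories, and latent dimensions are arbitrary assumptions, and a real analysis would score held-out stimuli rather than the training data.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)

# Synthetic ratings: 200 stimuli judged on 12 emotion categories by two
# independent rater groups; 5 "true" shared dimensions are embedded in noise.
n_stimuli, n_categories, n_true_dims = 200, 12, 5
latent = rng.normal(size=(n_stimuli, n_true_dims))
loadings = rng.normal(size=(n_true_dims, n_categories))
half_a = latent @ loadings + rng.normal(scale=2.0, size=(n_stimuli, n_categories))
half_b = latent @ loadings + rng.normal(scale=2.0, size=(n_stimuli, n_categories))

# Canonical correlation between the two halves: shared dimensions yield high
# correlations, noise dimensions low ones. (A real analysis would compute
# these correlations on held-out stimuli to avoid overfitting.)
cca = CCA(n_components=n_categories).fit(half_a, half_b)
scores_a, scores_b = cca.transform(half_a, half_b)
corrs = [np.corrcoef(scores_a[:, i], scores_b[:, i])[0, 1]
         for i in range(n_categories)]
print("Canonical correlations:", np.round(corrs, 2))
print("Dimensions above an (arbitrary) 0.5 threshold:",
      sum(c > 0.5 for c in corrs))
```

The count of reliable dimensions recovered this way, rather than a number fixed in advance, is what defines the size of the resulting emotion taxonomy.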
Once we’ve documented the structure of human emotion-related responses, we’ll be better positioned to develop technology that accounts for our wellbeing, anticipates our implicit goals when we interact with computers and with people, and can optimize for the affective states we’re seeking out during a given interaction. Examples include selecting music that fits the mood, taking a picture to use as a professional headshot, or recalling conversation topics that got us excited the last time we talked to someone.
I developed this approach to emotion with Dacher Keltner at UC Berkeley. Much of this work would not have been possible without the help of my collaborators from around the world: at UC Berkeley (Samy Abdel Ghaffar, Joseph Ocampo, Maria Monroy, Bob Knight, Colin Hoy, Regina Lapate), Washington University in St. Louis (Hillary Anger Elfenbein), the University of Amsterdam (Disa Sauter, Xia Fang), Stockholm University (Petri Laukka), ATR in Kyoto (Yukiyasu Kamitani, Tomoyasu Horikawa), Google, and Facebook/Instagram.
Interested in collaborating?