
What Can (and Can’t) the Brain Tell Us About Our Digital Emotions?

In recent years, brain-based metrics have become increasingly popular across digital health, marketing, and immersive technologies. Neuro- and biometric techniques such as EEG, GSR, eye tracking, and facial coding are no longer confined to labs—they’re now being used to test everything from meditation apps to virtual retail experiences. But amidst this neuro-data boom, one fundamental question still lacks a clear answer:

How far can neurophysiological data truly go in helping us understand emotions—especially in digital environments?

Having worked on both the scientific and applied sides of this field, I’ve learned that while these tools are incredibly promising, their real power lies in how we design the research around them, not in the tools alone.

Emotions Are Patterns, Not Fingerprints

One of the most persistent myths in affective neuroscience is the idea that each emotion has a unique, universal physiological signature—like a fingerprint waiting to be read. But research consistently shows that emotional responses are far more context-dependent and variable than we once thought.

For example, frontal alpha asymmetry has been widely explored as a potential marker of affective valence (Davidson, 2004), and the late positive potential (LPP) in EEG is often linked to emotional attention or deliberation (Conrad et al., 2022). But both are modulated by variables like task difficulty, prior emotional states, and even cultural expectations.
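
To make the first of these markers concrete, the sketch below shows one common way to compute frontal alpha asymmetry from a pair of left/right frontal channels. It is a minimal illustration, assuming hypothetical F3/F4 signals, a 256 Hz sampling rate, an 8–13 Hz alpha band, and Welch's method for band power; none of these choices are prescribed by the studies cited above.

```python
import numpy as np
from scipy.signal import welch

def alpha_band_power(signal, sfreq, band=(8.0, 13.0)):
    """Estimate alpha-band power of one EEG channel via Welch's PSD."""
    freqs, psd = welch(signal, fs=sfreq, nperseg=int(2 * sfreq))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[mask], freqs[mask])

def frontal_alpha_asymmetry(left_channel, right_channel, sfreq):
    """ln(right alpha power) - ln(left alpha power).

    Higher values are often read as relatively greater left-frontal activation
    (approach-related affect in Davidson-style work), but the index shifts with
    reference montage, artifacts, and baseline, so it is not a context-free
    valence readout.
    """
    return (np.log(alpha_band_power(right_channel, sfreq))
            - np.log(alpha_band_power(left_channel, sfreq)))

# Toy usage: random noise standing in for hypothetical F3/F4 recordings.
sfreq = 256  # Hz (assumed)
rng = np.random.default_rng(0)
f3 = rng.standard_normal(10 * sfreq)
f4 = rng.standard_normal(10 * sfreq)
print(frontal_alpha_asymmetry(f3, f4, sfreq))
```

Even this simple index moves around with the reference montage, artifact handling, and baseline period, which is exactly why it cannot be treated as a context-free readout of valence.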

In a study I led at LabLENI (UPV), we used a combination of EEG, GSR, and eye-tracking alongside an Implicit Association Task (IAT) to explore consumers’ emotional and cognitive responses to food-related stimuli. The goal was to capture subtle, automatic associations that standard self-reports might miss. However, interpreting the signals wasn’t always straightforward—an elevated GSR response in one participant could indicate excitement, while in another it reflected mild anxiety linked to task ambiguity. The biometric data alone didn’t tell the full story. Instead, understanding the participant’s context—both psychological and experimental—was essential to avoid misleading conclusions.
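
To illustrate one small piece of that contextual work, the hypothetical sketch below z-scores each trial's GSR against the same participant's resting baseline before any comparison across people. The function name, the toy microsiemens values, and the choice of z-scoring are illustrative assumptions; normalization only removes between-person scale differences and still cannot, on its own, tell excitement apart from anxiety.

```python
import numpy as np

def baseline_corrected_gsr(trial_gsr, baseline_gsr):
    """Z-score a trial's GSR against that participant's own resting baseline.

    Raw skin-conductance levels differ widely between people, so identical
    absolute values can reflect very different internal states. Within-person
    normalization removes the scale difference; it does NOT, by itself,
    disambiguate excitement from anxiety.
    """
    mu = np.mean(baseline_gsr)
    sigma = np.std(baseline_gsr)
    return (np.asarray(trial_gsr) - mu) / (sigma + 1e-9)

# Hypothetical values (in microsiemens) for two participants.
p1_baseline, p1_trial = [2.1, 2.0, 2.2], [2.6, 2.8, 2.7]
p2_baseline, p2_trial = [8.0, 8.2, 7.9], [8.1, 8.3, 8.0]
print(baseline_corrected_gsr(p1_trial, p1_baseline).mean())  # large rise relative to own baseline
print(baseline_corrected_gsr(p2_trial, p2_baseline).mean())  # barely above own baseline
```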

This complexity is also echoed in the research of Valenzi et al. (2014). The authors investigated EEG-based classification of six emotional states (happiness, sadness, fear, anger, surprise, and disgust) using Support Vector Machine (SVM) algorithms. Although they reported high classification accuracy—up to 97.2%—this was achieved under highly controlled conditions using validated audiovisual stimuli. The results demonstrate that discrete emotional states can be detected with high precision, but only when contextual variability is minimized. In real-world applications, however, emotional states rarely present themselves in such neatly defined categories. This underscores the importance of using multimodal data and contextual awareness when applying biometrics to decode emotion.
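
For readers curious what such a pipeline looks like in code, here is a minimal scikit-learn sketch of SVM-based classification of EEG epochs into six emotion classes. It is not the authors' pipeline: the feature matrix is random placeholder data standing in for per-epoch band-power features, so cross-validated accuracy will sit near chance (about 1 in 6) rather than anywhere near the 97.2% reported under controlled conditions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: one row per EEG epoch, columns standing in for band-power
# features (channel x frequency band); labels are the six emotion categories.
rng = np.random.default_rng(42)
X = rng.standard_normal((120, 32))   # 120 epochs x 32 features (toy data)
y = rng.integers(0, 6, size=120)     # happiness, sadness, fear, anger, surprise, disgust

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean 5-fold CV accuracy: {scores.mean():.2f}")  # near chance (~0.17) on random data
```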

EEG brain activity patterns associated with different emotions: sadness, disgust, neutrality, and amusement (taken from Valenzi et al., 2014)

In short, emotions are not fingerprints—they’re mosaics. And if we want to decode them meaningfully, we need a framework that accounts for variability, nuance, and the richness of real-world experience.

Context Is the Missing Variable

Good research design is everything. Without it, neurophysiological data are little more than noise.

I’ve seen this firsthand in academic-industry collaborations. In a project developed at the University of Valencia for Bunge Loders Croklaan, we used EEG, GSR, and facial expression analysis to explore consumers’ emotional responses to various chocolate flavor profiles. EEG and GSR gave us reliable indicators of arousal, but without a clear experimental framework and multimodal interpretation, those signals could have meant anything—from cognitive load to sensory discomfort. Only by embedding the physiological data within a rigorous, behaviorally informed design could we extract specific emotional patterns and preferences—insights directly tied to product perception.

Similarly, a study by Wu & Li (2023) showed that combining EEG activity with facial expression information significantly boosted emotional state classification, highlighting how multimodal data, not EEG alone, is key to robust insights. This conclusion was reinforced in a more recent investigation by Wu et al. (2024), who explored how individuals interpret different types of product icons, both abstract and concrete. While EEG data indicated that abstract icons demanded greater cognitive effort, it was the integration of eye-tracking data that allowed the researchers to understand the real-time dynamics of attention and meaning-making.
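
As a rough illustration of the multimodal point, the sketch below performs early (feature-level) fusion, simply concatenating EEG and facial-expression features before classification and comparing the result against EEG features alone. The feature dimensions, the SVM classifier, and the random toy data are assumptions made only to show the workflow; the cited studies use their own fusion strategies and real recordings, and with random data no genuine gain should be expected here.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(7)
n_trials = 200
eeg_features = rng.standard_normal((n_trials, 40))    # e.g., band powers per channel (toy)
face_features = rng.standard_normal((n_trials, 17))   # e.g., facial action-unit intensities (toy)
labels = rng.integers(0, 4, size=n_trials)            # toy emotion classes

# Early (feature-level) fusion: concatenate modalities before classification.
fused = np.hstack([eeg_features, face_features])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
eeg_only = cross_val_score(clf, eeg_features, labels, cv=5).mean()
multimodal = cross_val_score(clf, fused, labels, cv=5).mean()
print(f"EEG only: {eeg_only:.2f} | EEG + facial expression: {multimodal:.2f}")
```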

This evidence makes it clear: when it comes to understanding how people process visual information, context and multiple data sources are essential.

💬 Author’s Note:
As researchers, we’re not just data collectors—we’re sense-makers. The responsibility lies not only in measuring what matters but in asking why it matters, for whom, and under what conditions. That’s where science becomes meaningful.

Neuroscience for Impact, Not Spectacle

Used wisely, neuro-tools offer a unique window into pre-conscious behavior. They can capture disengagement before a user clicks away, or detect subtle cognitive fatigue during digital therapy. But their value lies in translation: turning signals into actionable, human-centered insights.

In my consulting work, I often help companies design studies that go beyond “neurocuriosity.” As part of the process, I encourage them to ground their research in key questions like:
– What exactly do you want to understand about your users or consumers?
– How will the answers inform product design, user experience, or brand strategy?
– And perhaps most importantly: Can these results be replicated and turned into consistent insights?

Final Thought: Brain Data Isn’t Magic. It’s Methodology.

In a world hungry for emotional metrics, having access to high-tech tools isn’t enough. What we need is scientific rigor, contextual understanding, and thoughtful design. Because decoding emotional impact shouldn’t be about reading minds—it should be about asking better questions and building reliable, ethical frameworks for interpretation.

So next time someone says, “We’ll use EEG to track happiness,” ask them:
“Compared to what?”
That’s where science begins.

References

Davidson, R. J. (2004). What does the prefrontal cortex “do” in affect: Perspectives on frontal EEG asymmetry research. Biological Psychology, 67(1-2), 219–233. https://doi.org/10.1016/j.biopsycho.2004.03.008

Conrad, C. D., Aziz, J. R., Henneberry, J. M., & Newman, A. J. (2022). Do emotions influence safe browsing? Toward an electroencephalography marker of affective responses to cybersecurity notifications. Frontiers in Neuroscience, 16, 922960. https://doi.org/10.3389/fnins.2022.922960

Valenzi, S., Islam, T., Jurica, P., & Cichocki, A. (2014). Individual classification of emotions using EEG. Journal of Biomedical Science and Engineering, 7, 604–620. https://doi.org/10.4236/jbise.2014.78061

Wu, Y., & Li, J. (2023). Multi-modal emotion identification fusing facial expression and EEG. Multimedia Tools and Applications, 82, 10901–10919. https://doi.org/10.1007/s11042-022-13711-4

Wu, J., Liu, Y., Gan, L., Tong, M., & Xue, C. (2024). Understanding relations between product icon type, feature type, and abstraction: Evidence from ERPs and eye-tracking studies. International Journal of Human–Computer Interaction, 41(5), 3537–3556. https://doi.org/10.1080/10447318.2024.2338663
