
Notes on Melissa Littlefield's "Instrumental Intimacy"

The overarching theme of the book is stated in its title, "Instrumental Intimacy": the notion that machines that read physiological (specifically, neurological) signals can come to understand our feelings, moods, and states of arousal better than we understand them ourselves. For example, the book highlights companies that claim they can optimize one's mental state for peak athletic performance using neurofeedback from EEG. The idea stems from research showing correlations between specific brain states and task performance. Companies use these research claims to suggest that specific brain states produce peak performance, and that if one can train oneself to recognize and reproduce those states, one can perform at higher levels. While this slides straight past the fact that correlation != causation, it seems to be the business foundation of many of the cases in Dr. Littlefield's book.
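
To make that loop concrete, here is a minimal sketch of what such a neurofeedback system might compute. Everything specific in it -- the single channel, the sampling rate, the alpha band as the "performance" correlate, the reward threshold -- is an illustrative assumption on my part, not any company's actual method:

```python
# A minimal sketch of an EEG neurofeedback loop, assuming a hypothetical
# setup: one channel sampled at 256 Hz, and the (correlational!) claim
# that more alpha-band power corresponds to the desired mental state.
import numpy as np
from scipy.signal import welch

FS = 256          # sampling rate in Hz (assumed)
ALPHA = (8, 12)   # alpha band in Hz, one common convention

def alpha_power(window: np.ndarray) -> float:
    """Estimate alpha-band power in one window of raw EEG samples."""
    freqs, psd = welch(window, fs=FS, nperseg=min(len(window), FS))
    band = (freqs >= ALPHA[0]) & (freqs <= ALPHA[1])
    return psd[band].sum() * (freqs[1] - freqs[0])  # integrate the PSD

def feedback(window: np.ndarray, target: float) -> str:
    """Turn the measurement into the 'reward' signal shown to the user.

    Note the leap: hitting `target` is treated as *causing* better
    performance, which is exactly the correlation-to-causation slide
    the book criticizes.
    """
    return "reward" if alpha_power(window) >= target else "keep training"

# Usage, with one second of random noise standing in for a real recording:
rng = np.random.default_rng(0)
print(feedback(rng.standard_normal(FS), target=0.5))
```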

The translation of quantified body signals into subjective feelings, behavioral states, and habits is common practice in neuroscience -- we often correlate the activity and structure of parts of the nervous system with human characteristics we wish to understand: pain, addiction, disease susceptibility, learning ability, etc. I don't think there's much argument against the idea that the brain is central to all of these processes (although it alone is not sufficient to explain them), so I assume many neuroscientists will not be surprised by any business that uses this as a marketing angle. Nor do I think the notion that there are imperceptible things about ourselves that could be useful to know is too controversial. I personally believe feedback from outside sources is necessary for my own betterment, because I don't know everything, not even about myself. Take an obvious example, going to the doctor: I can't perceive my own cholesterol levels, and I may require a blood test from time to time to tell me when I should adjust my diet. Take a less obvious example, discussing race issues with your acquaintances: I, as a person who is not Black, can't always recognize how my actions uphold racist sentiments and policies, and participating in a discussion about race or reading the narratives of Black people can bring those things to my awareness. Getting feedback from outside ourselves, whether from people or from machines, can clearly be very helpful. We can't always depend on our gut. But can we depend on our own brain?

Scientists have long viewed the brain as the key explainer of individual behaviors and quirks. Introductory neuroscience classes often discuss Phineas Gage, one of the most famous cases in neurology, in which an explosion drove a tamping iron through Gage's skull, resulting in an accidental lobotomy and extreme changes in behavior that persisted long after the wound healed. Before direct recordings of brain activity were available, the since-debunked science of phrenology -- the idea that the shape of the skull (then a proxy for brain structure) could predict one's behavior -- was central to the medical narrative. But with the advent of the electroencephalogram (EEG), a machine that records the electrical potentials of neural activity through the scalp, came new avenues for studying the human psyche on a more intimate level, placing scientists (and their machines) in a position to offer the feedback we desire to better ourselves. And so Littlefield's book is a discussion of the history of EEG and its use in scientific and commercial settings for gaining access to those imperceptible things about ourselves, and of the trust we put in machines to help us understand... us.

The book focuses on a handful of companies that use EEG in their products and fleshes out the implications of each use case. The primary implication, to my reading, overlaps with Shoshana Zuboff's "The Age of Surveillance Capitalism": the technology can be a violator of space, extracting intimate information about ourselves and putting it on display, a dynamic Littlefield calls "extimacy." Moreover, as machine learning products often do, that information can then misrepresent us. Littlefield states, "There cannot be a problem of correspondence when the output is the only and natural result of the input; the exterior is the interior," and, given the risk of inaccuracy and misrepresentation, "the visualizations of EEG wearables offer the illusion that what you see is what you get." "Illusion" is a key word here, because a machine's representation of one's brain activity is just that -- a representation, filtered by a few people's interpretations and biases about what constitutes an internal state, and not necessarily the truth. In other words, just as with other machine learning products, it is quite easy to be misrepresented by EEG.
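
As a toy illustration of that filtering (and emphatically not any vendor's actual algorithm), consider how a wearable might reduce brain activity to the single "calm score" it displays. Every constant below is a hypothetical design choice, and each one is a place where someone else's idea of an internal state gets baked into the representation:

```python
# A toy "calm score": the number a hypothetical wearable displays is the
# output of hand-picked choices (which bands count, which scaling, which
# bounds) made by its designers -- not a raw window into the mind.
import numpy as np

def calm_score(alpha: float, beta: float) -> int:
    """Map two band-power numbers to a 0-100 'calm' score.

    Every constant here is an interpretive choice: the alpha/beta ratio
    as the 'right' summary, the log scaling, the clipping range. Change
    any of them and the same brain produces a different 'self'.
    """
    ratio = alpha / max(beta, 1e-9)      # designers' chosen summary statistic
    score = 50 + 25 * np.log2(ratio)     # designers' chosen scaling
    return int(np.clip(score, 0, 100))   # designers' chosen bounds

print(calm_score(alpha=2.0, beta=1.0))   # 75: "calm", by these choices alone
```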

A unique implication raised in the book is the normalization of a metric for self-control. In Littlefield's words, "you may be judged on your brain's ability to interface with or control the technology; you may accept this as a perfectly fashionable thing to do." In neurofeedback applications, for example, will your ability to hone the patterns of brain activity associated with task performance become a basis for judgment in hiring, academic admissions, or insurance? Clearly, the incorporation of this data into wearable technology and machine learning can be especially dangerous when mixed with sentiments from pop neuroscience: that the brain can be hacked to reveal information one is unable to express through language (or unwilling to share), and optimized to enable peak performance in business, athletics, or academics.

Closing notes: As with Zuboff's "Surveillance Capitalism," this book left me questioning my own role in perpetuating a narrative that hails brain data as an ultimate source of one's intimate information. My part in perpetuating unfair systems is something I sit with and question constantly. I do think there are benefits to studying and using brain data that outweigh certain risks, but those risks need to be continually evaluated, and the implications need always to be on the surface. I used to see brain data as a useful supplement to the signals captured by other wearables and as a tool for optimizing health- and self-care, with EEG an important component of that vision. "Surveillance Capitalism," along with Littlefield's notion of extimacy, put into words quite well the feelings I've had for some time.
