Posts

First preprint as an independent researcher

Well, here it is: a semi-finished version of my work with Keith Cross on the social significance of brain terminology in technology (and vice versa). It was a lot more work than I anticipated, and I learned a lot about the history of scientific metaphors, as well as about the many researchers and initiatives aiming to create more appropriate language for AI-labeled technology. On the advice of many who viewed earlier versions of the manuscript (thank you), I cut out a lot. But I feel there is much more to dig into -- it's the first time I feel like I have enough material to maybe write a short book about it. Hmmmm https://arxiv.org/abs/2107.14042
Recent posts

Still working... another excerpt about the computational metaphor

For decades, the bidirectionality of the Computational Metaphor has caused concern amongst prominent computer scientists, who warn that treating machines like people can lead to treating people like machines. Joseph Weizenbaum, the developer of the chatbot Eliza, was perhaps the most prominent critic of computer anthropomorphization for this reason (Weizenbaum, 1976). And Edsger W. Dijkstra, who coined the phrase “structured programming,” also voiced his concerns about reversing the Computational Metaphor: “A more serious byproduct of the tendency to talk about machines in anthropomorphic terms is the companion phenomenon of talking about people in mechanistic terminology. The critical reading of articles about computer-assisted learning... leaves you no option: in the eyes of their authors, the educational process is simply reduced to a caricature, something like the building up of conditional reflexes. For those educationists, Pavlov’s dog adequately captures the essence of Mank...

Excerpt from upcoming post on the brain-computer metaphor

Here's another snippet of the Brain-Computer Metaphor essay I'm working on. I'm finding it a little difficult to strike the right tone on this one. Nonetheless, below is the working intro, which I've edited down considerably from my previous version: Last year, OpenAI’s latest language model, GPT-3, was tested as a viable healthcare chatbot, and promptly suggested that a fake patient should commit suicide because they “felt bad” (Daws, 2020). While in this instance GPT-3 did not perform as hoped, large language models in general are fluent enough to give a false impression of language understanding and mental modeling (Bender et al., 2021), thus epitomizing the concept of artificial intelligence, or AI. Cases like the above, however, call into question whether the intelligence and brain-based terminology used to market AI technology poses risks to those on whom the technology is used. At the core of this terminology is perhaps the most debated metaphor in all of science, the...

Excerpt from upcoming post on the social implications of the brain-computer metaphor

I've been working on an essay in which I point out an overlooked implication of a very popular neuroscience debate -- whether the brain is a computer. In it, I focus on the idea that, by reinforcing the metaphor, the tech industry leads society to grant AI a special status in making important decisions that it shouldn't have. And because the neuroscience and computer science fields share a historically close relationship, neuroscientists may be inadvertently pushing pervasive forms of tech-solutionism which have been shown to be harmful to marginalized folk. It's taking longer to put together than I had planned, but here's a small piece (to be edited): If the computer is a brain, what’s missing from the metaphor? Input: To make data usable for a computer, a series of human interventions often needs to be performed first: collecting, cleaning, processing, and labeling are all crucial steps in getting a computer to “learn” patterns in the data. But what are ...

Columbia symposium on brain computer interfaces and neuroethics

I recently attended a symposium held by Columbia University entitled “Brain Computer Interfaces: Innovation, Security, and Society”, in which attendees gathered to discuss the social implications of neurotechnology -- a growing category of tech which reads from and writes to one’s nervous system. The purpose of neurotech ranges from clinical to entertainment applications, and much of it is available today, direct to consumer or client. The social implications vary according to the application. Neuroethics discussions frequently center on issues of agency or enhancement, but I see more immediate concerns in issues that are common to other data-centric technologies like facial recognition and fitness trackers: data misuse. The symposium covered a wide spectrum of the state of neurotechnology, and speakers were a mix of neuroscientists, engineers, and bio / neuro / AI ethicists, from both academia and industry. The “Innovation” aspect of the symposium was covered well, with ma...

Research proposal to study on-the-ground implications of DTC neurotechnology

With the rapid adoption of AI technology in business and healthcare, the demand for personal physical data is greater than ever. Information about one’s health, emotions, and psychological states and traits is increasingly valued for constructing marketable digital profiles (Schmidt et al., 2019; Stark, 2018), and the ways these data are extracted are expanding beyond fitness trackers and facial recognition into perhaps the most intimate space of all: the brain. Brain-machine interfaces (BMIs) allow for direct translation of the brain’s electrical activity into signals which can indicate one’s perceptions and intentions. And as BMIs have become more mobile and accessible, there is a growing market for direct-to-consumer (DTC) neurotechnology devices, software applications, and online services. With that also come growing concerns about how the data will be used. As of 2018, there were over 8000 active patents in neurotechnology, and a worldwide market of $8.4 billion, with products...

Notes on Ruha Benjamin's Race After Technology

Ruha Benjamin's Race After Technology has been circulating as a must-read for those wanting to learn more about how racism is encoded into everyday tech. But it's not for those looking for simple tips on how to de-bias data, or for those wishing to find distinct boundaries between ethical and non-ethical technology. Rather, this book is about the connections between overtly racist technology and that which is touted as "social good", how racism shapes scientific thinking and vice versa, and how race and racism are themselves an invented technology born from the scientific practice of classifying the things within our world. It is not a book that reveals answers for fixing tech; instead, it reveals the racist logic behind tech's development and marketing, and challenges the reader to question whether certain tech, broken or not, is good at all. Below are some of my notes on the book's main sections. Section 1: Engineered Inequity "Intention" seems to be the most fr...

Blog content update

As I continue to learn more about critical AI and STS, the views I have on my own work, and my personal and professional goals, have changed. My recent posts reflect that, while most of my earlier posts were created while trying to break out of a seemingly unsustainable academic situation, and were meant to demonstrate to employers that I could work with data outside of a neurophysiology lab. I did not think about the implications of using internet data for pain research, for example, in the way I do now. As such, some of the earlier posts do not necessarily reflect how I would approach the same problems today. I have left them up because they are part of my journey, and they serve as an example of how someone in tech who thinks their thought processes are innocuous may be mistaken.

Notes on Melissa Littlefield's "Instrumental Intimacy"

The overarching theme of the book is stated in the title, "Instrumental Intimacy": the notion that machines, by reading physiological (specifically, neurological) signals, can understand feelings, moods, and states of arousal better than the person themselves can. For example, the book highlights cases in which certain companies claim they can optimize one's mental state for peak athletic performance using neurofeedback from EEG. The idea stems from research showing correlations between specific brain states and task performance. Companies use these research claims to suggest that peak performance can result from specific brain states, and that if one can train oneself to recognize those states, one can perform at higher levels. While this clearly runs afoul of the correlation != causation principle, it seems to be the business foundation of many of the cases in Dr. Littlefield's book. The translation of quantified body signals into subjec...

Reflections on the priesthood of surveillance capitalism

Shoshana Zuboff's "The Age of Surveillance Capitalism" is a stirring read. The work is substantial, composed of over 500 pages of narrative, with an extra ~150 pages of notes and references. It is divided into 3 sections covering: 1) the relationship between surveillance and industrial capitalism, focusing mostly on the industrial revolution and the social implications of Ford's assembly line; 2) surveillance capitalism's components and sources of power, along with a correction of the "you are the product" metaphor frequently used to describe surveillance tech; and 3) the social implications and psychological transformations that are currently occurring (or will occur) under surveillance capitalism. Despite the literal weight of the book and the technical content on which it is based, Zuboff's writing is poetic and engaging, and it makes for an easy (but long) read. I found the third section of the book to be most interesting, in which she discusses the...