
Showing posts from 2020

Columbia symposium on brain computer interfaces and neuroethics

I recently attended a symposium held by Columbia University entitled “Brain Computer Interfaces: Innovation, Security, and Society”, which gathered attendees to discuss the social implications of neurotechnology -- a growing genre of tech that reads from and writes to one's nervous system. Neurotech's purposes range from clinical to entertainment applications, and much of it is available today, direct to consumer or client. The social implications vary with the application. Neuroethics discussions frequently center on issues of agency or enhancement, but I see more immediate concerns in issues common to other data-centric technologies like facial recognition and fitness trackers: data misuse. The symposium covered a wide spectrum of the state of neurotechnology, and the speakers were a mix of neuroscientists, engineers, and bio/neuro/AI ethicists from both academia and industry. The “Innovation” aspect of the symposium was covered well, with ma...

Research proposal to study on-the-ground implications of DTC neurotechnology

With the fast-paced adoption of AI technology into business and healthcare, the demand for personal physical data is greater than ever. Information about one's health, emotions, and psychological states and traits is increasingly valued for constructing marketable digital profiles (Schmidt et al., 2019; Stark, 2018), and the ways these data are extracted are expanding beyond fitness trackers and facial recognition into perhaps the deepest intimate space: the brain. Brain-machine interfaces (BMIs) allow for direct translation of the brain's electrical activity into signals that can indicate one's perceptions and intentions. And as BMIs have become more mobile and accessible, there is a growing market for direct-to-consumer (DTC) neurotechnology devices, software applications, and online services. With that come growing concerns about how the data will be used. As of 2018, there were over 8000 active patents in neurotechnology, and a worldwide market of $8.4 billion, with products...

Notes on Ruha Benjamin's Race After Technology

Ruha Benjamin's Race After Technology has been circulating as a must-read for those wanting to learn more about how racism is encoded into everyday tech. But it's not for those looking for simple tips on how to de-bias data, or wishing to find distinct boundaries between ethical and non-ethical technology. Rather, this book is about the connections between overtly racist technology and that which is touted as "social good", how racism shapes scientific thinking and vice versa, and how race and racism are themselves an invented technology born from the scientific practice of classifying the things within our world. It is not a book that offers answers for fixing tech; instead, it reveals the racist logic behind tech's development and marketing, and challenges the reader to question whether certain tech, broken or not, is good at all. Below are some of my notes on the book's main sections. Section 1: Engineered Inequity  "Intention" seems to be the most fr...

Blog content update

As I continue to learn more about critical AI and STS, my views on my own work, and my personal and professional goals, have changed. My recent posts reflect that, whereas most of my earlier posts were written while I was trying to break out of a seemingly unsustainable academic situation, and were meant to demonstrate to employers that I could work with data outside of a neurophysiology lab. I did not think about the implications of using internet data for pain research, for example, in the way I do now. As such, some of the earlier posts do not necessarily reflect how I would approach the same problems today. I have left them up because they are part of my journey, and they serve as an example of how someone in tech may think their thought processes are innocuous when they are not necessarily so.

Notes on Melissa Littlefield's "Instrumental Intimacy"

The overarching theme of the book is stated in its title, "Instrumental Intimacy": the notion that machines which read physiological (specifically, neurological) signals can understand one's feelings, moods, and states of arousal better than the person themselves can. For example, the book highlights cases in which companies claim they can optimize one's mental state for peak athletic performance using neurofeedback from EEG. The idea stems from research showing correlations between specific brain states and task performance. Companies use these research claims to suggest that peak performance can result from specific brain states, and that if one can train oneself to recognize those states, one can perform at higher levels. While this plainly violates the correlation != causation principle, it seems to be the business foundation of many of the cases in Dr. Littlefield's book. The translation of quantified body signals into subjec...

Reflections on the priesthood of surveillance capitalism

Shoshana Zuboff's "The Age of Surveillance Capitalism" is a stirring read. The work is substantial, composed of over 500 pages of narrative, with an extra ~150 pages of notes and references. It is divided into three sections covering: (1) the relationship between surveillance and industrial capitalism, focusing mostly on the industrial revolution and the social implications of Ford's assembly line; (2) surveillance capitalism's components and sources of power, and a correction of the "you are the product" metaphor frequently used to describe surveillance tech; and (3) the social implications and psychological transformations that are currently occurring (or will occur) under surveillance capitalism. Despite the literal weight of the book and the technical content on which it is based, Zuboff's writing is poetic and engaging, and it makes for an easy (but long) read. I found the third section of the book to be the most interesting, in which she discusses the...

Weapons of Math Destruction

A few months back I was introduced to the book "Weapons of Math Destruction" by Dr. Cathy O'Neil. A mathematician and former data scientist, she now writes about ethics in the tech industry. I got to see her in Milwaukee as she spoke on a major problem in data science: misguided metrics. Specifically, she discussed metrics that ignore the social implications of harmful models. The highlight of her talk was the "ethical matrix" -- a tool for understanding how an algorithm affects model stakeholders. It teaches data scientists to look beyond the usual performance metrics (e.g., accuracy) and to give more focus to other vital characteristics like data quality, perceived fairness, and transparency. Likewise, the ethical matrix accounts not only for the entities looking to use the model, but also for those subjected to it, the designers and sellers of the model, and the public at large. Overall, her talk was quite inspiring, and of...