
Columbia symposium on brain-computer interfaces and neuroethics

I recently attended a symposium held by Columbia University entitled “Brain Computer Interfaces: Innovation, Security, and Society,” which gathered attendees to discuss the social implications of neurotechnology, a growing class of technology that reads from and writes to the nervous system. Neurotech applications range from clinical treatment to entertainment, and much of the technology is available today, direct to consumer or client. The social implications vary with the application. Neuroethics discussions frequently center on issues of agency or enhancement, but I see more immediate concerns in issues common to other data-centric technologies like facial recognition and fitness trackers: data misuse.

The symposium surveyed the current state of neurotechnology, and the speakers were a mix of neuroscientists, engineers, and bio-, neuro-, and AI ethicists from both academia and industry. The “Innovation” aspect of the symposium was covered well, with many speakers, including representatives from Facebook, CTRL-labs, and Kernel, discussing the engineering challenges and current capabilities of neurotech. While the symposium had some good neuro- and AI-ethics speakers, I wish there had been more specifics on the “Security” and “Society” aspects. Here are a few points I would have liked to see discussed:

For existing industry neurotech data, what has been gathered, and how is it being used? 
How is that data stored, shared, and anonymized? (A sketch of what “anonymized” might mean in practice follows this list.)
Who handles the data and what experience and training do they have in working with brain data?
What models are being developed with this brain data to draw conclusions about individuals and groups?
What potential do data gatherers see specifically in brain data? Why is it so valuable to them?
How do data owners ensure data is used as intended? How do we define misuse?
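Since “anonymized” can mean many different things, here is a minimal sketch in Python of one common approach: pseudonymization, where a direct identifier is replaced with a salted hash before a dataset is shared. The record fields and the pseudonymize helper are hypothetical, purely for illustration. Note that the neural signal itself is left untouched, and brain data may remain re-identifiable from the signal alone, which is part of why these questions matter.

```python
import hashlib
import secrets

def pseudonymize(record: dict, salt: bytes) -> dict:
    """Replace the direct identifier with a salted hash; drop contact details."""
    hashed_id = hashlib.sha256(salt + record["subject_id"].encode()).hexdigest()
    return {
        "subject": hashed_id,         # no longer directly identifying
        "device": record["device"],   # kept: needed to interpret the signal
        "samples": record["samples"], # the raw neural signal, untouched
    }

# The salt is stored separately from the shared dataset, so a data
# recipient cannot recompute the hashes from known identifiers alone.
salt = secrets.token_bytes(16)

record = {
    "subject_id": "jane.doe@example.com",  # hypothetical identifier
    "device": "8-channel EEG headset",
    "samples": [0.12, -0.05, 0.33],        # toy stand-in for a real recording
}

print(pseudonymize(record, salt))
```

Even under a scheme like this, the answers to the questions above hinge on details: who holds the salt, who can join the shared records against other datasets, and whether the signal itself acts as a fingerprint.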
