With the fast-paced adoption of AI technology in business and healthcare, the demand for personal biometric data is greater than ever. Information about one’s health, emotions, and psychological states and traits is increasingly valued for constructing marketable digital profiles (Schmidt et al., 2019; Stark, 2018), and the means of extracting these data are expanding beyond fitness trackers and facial recognition into perhaps the most intimate space of all: the brain. Brain-machine interfaces (BMIs) directly translate the brain’s electrical activity into signals that can indicate one’s perceptions and intentions. As BMIs have become more mobile and accessible, a market has grown for direct-to-consumer (DTC) neurotechnology devices, software applications, and online services. With it come growing concerns about how the data will be used.
As of 2018, there were over 8,000 active patents in neurotechnology and a worldwide market of $8.4 billion, with products for gaming, device control, meditation, sleep improvement, depression treatment, and various forms of “brain training” (Ienca et al., 2018). Well-funded companies like Elon Musk’s Neuralink and Facebook have publicly announced investments in BMI technology aimed at “mind-reading” interaction with their products (Musk & Neuralink, 2019; Samuel, 2019). The implications of sharing one’s brain data with tech companies are serious: these companies frequently exhibit questionable ethical practices that have often harmed marginalized members of society. Because BMIs extract such a uniquely intimate and personal source of data, I am interested in how industry uses these data to build its products and how it navigates the privacy and bias issues embedded within them.
Brain signals are layered in many levels of abstraction. Their meaning depends on the method of extraction, the source location, how they are separated from noise, and the experiences and physiological state of the person from whom they are recorded. Even in the best recording environments, determining an individual’s complex mental states and traits from brain activity is not always reliable, especially without contextual knowledge of important social factors. Regardless of the reliability of brain signals, BMIs and neurotechnology produce vast amounts of biometric data that can be correlated with human behaviors and characteristics, and in turn can be used to perpetuate social biases and marginalize vulnerable people (Crawford et al., 2019). Technologists already embrace this practice: facial recognition data, for example, have been used to predict criminality (Vincent, 2020; Amjad & Malik, 2020), perceived trustworthiness (Safra et al., 2020), and hireability (Crawford et al., 2019), interpretations all rooted in the centuries-old, debunked science of physiognomy. Trends like these suggest that brain data collected through DTC neurotechnology would be used for similar purposes, perhaps with wider acceptance given the common perception that brain data legitimately contain “mind-reading” information.

[Figure: Screenshot from the Wall Street Journal video profile on BrainCo’s EEG use in Chinese elementary classrooms]
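To make concrete how many interpretive layers sit between raw voltage and a marketable claim, below is a minimal, hypothetical sketch of the kind of pipeline a DTC device might run. It is written in Python using only NumPy and SciPy; every function name, band edge, and the “focus” metric itself are my own illustrative assumptions, not any vendor’s actual method.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

FS = 256  # sampling rate in Hz, typical of consumer EEG headsets (assumption)

def bandpass(x, lo, hi, fs=FS):
    """One of many possible 'noise separation' choices: a 4th-order
    Butterworth bandpass filter applied forward and backward."""
    b, a = butter(4, [lo, hi], btype="band", fs=fs)
    return filtfilt(b, a, x)

def band_power(x, lo, hi, fs=FS):
    """Integrate the Welch power spectral density over a frequency band."""
    freqs, psd = welch(x, fs=fs, nperseg=fs * 2)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])

def focus_score(raw_eeg):
    """A crude, hypothetical 'focus' metric: beta power relative to
    theta + alpha power. The band edges and the ratio itself are
    interpretive choices; nothing here knows the wearer's context."""
    clean = bandpass(raw_eeg, 1.0, 40.0)  # discard slow drift and line noise
    theta = band_power(clean, 4.0, 8.0)
    alpha = band_power(clean, 8.0, 13.0)
    beta = band_power(clean, 13.0, 30.0)
    return beta / (theta + alpha + 1e-12)

# Ten seconds of simulated single-channel "EEG" (random noise stands in
# for a real recording).
raw = np.random.randn(FS * 10)
print(f"focus score: {focus_score(raw):.3f}")
```

Each step compresses the signal further: a filter decides what counts as noise, a band decides what counts as “theta,” and a ratio decides what counts as “focus.” Whether that final number measures anything real about a person is exactly the kind of interpretive claim this project would ask practitioners to justify.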
The intent of this project, therefore, would be to learn about the industry executives, data scientists, and governance specialists who work with and market DTC neurotechnology. Specifically, I would aim to learn how they interpret and understand the data, and how they approach issues surrounding privacy, consent, and bias:
Who are these individuals? What experience do they have working with sensitive personal and biometric data? What experience do they have working with brain signals? What do they think they can learn from a person’s brain? What potential do they see in integrating brain signals with existing AI-based technology? How do they source their data, and whom do those data represent? What ethical concerns do they have when working with such sensitive, personal data? What measures are they taking to mitigate harm from or misuse of their DTC products? These would be the central questions of the project.
Discussions of neurotechnology’s impacts are often devoted to theoretical issues, such as disrupting one’s sense of personal agency or augmenting one’s mental abilities, raised by technology that is not yet fully available to consumers (Roelfsema et al., 2018; Yuste et al., 2017; Oliver & Rotter, 2017). While these are important matters, the project I propose would focus on more immediate concerns, like privacy and bias. It would expand upon work by neuroethicist Marcello Ienca, who studies DTC neurotechnology in terms of 1) privacy, showing that neurotechnology can be hacked for “brain leaks” that reveal users’ private mental, financial, and demographic information; and 2) regulation, showing that the vast amounts of data produced by neurotechnology are not bound by oversight in their collection, use, or sharing (Ienca et al., 2018). It would also expand on work by Melissa Littlefield, who shows how DTC neurotechnology can create social biases and shift societal norms by quantifying and exposing one’s internal mental states (Littlefield, 2018). The project would cover these issues as they are dealt with on the ground, focusing on the perspectives of those who make the decisions.
Further, because technology serves to establish social and racial classes (Benjamin, 2019; Birhane & Guest, 2020), I would examine how DTC neurotechnology is used to categorize mental states and traits, and in turn how it could be used to shift social power, marginalize certain individuals, and perpetuate unjust systems. Neurodiverse individuals, for example, are particularly at risk of being unfavorably profiled by DTC neurotechnology: What happens when it exposes an individual’s depression, bipolar disorder, or epilepsy without their consent? Does it work as intended on people with neurological conditions like schizophrenia or chronic pain? And in the same vein that heart rate monitors and facial recognition malfunction for people with darker skin, EEG, the most prominent non-invasive neural recording method, does not work well for those with coarse, curly hair (Nadin, 2020). Is it, then, equally accessible to people of all hair types, particularly Black people?
I believe the social implications of DTC neurotechnology need more discussion, and this project would provide an important perspective. Consumer “mind-reading” devices are not science fiction; they are part of an expanding market in biometrics, personal health, and device control, and thus a growing and highly sensitive component of the tech ecosystem that raises concerns around data bias, privacy, and regulation. Overall, the proposed project is an on-the-ground look at how industry navigates these issues, and it would be a critical and practical supplement to the more popularized neuroethics discussions around agency, mental augmentation, and “mind-reading”.
References
Amjad, K., & Malik, A. A. (2020, September 25). A technique and architectural design for criminal detection based on Lombroso theory using deep learning. LGURJCSIT, 4(3).
Benjamin, R. (2019). Race After Technology. Polity Press.
Birhane, A., & Guest, O. (2020, September 29). Towards decolonizing computational sciences. arXiv. arXiv:2009.14258v1
Crawford, K., Dobbe, R., Dryer, T., Fried, G., Green, B., Kaziunas, E., Kak, A., Mathur, V., McElroy, E., Nill Sanchez, A., Raji, D., Lisi Rankin, J., Richardson, R., Schulz, J., Meyers West, S., & Whittaker, M. (2019). AI Now 2019 Report. AI Now Institute. https://ainowinstitute.org/AI_Now_2019_Report.html
Ienca, M., Haselager, P., & Emanuel, E. J. (2018, September 6). Brain leaks and consumer neurotechnology. Nature Biotechnology, 36(9), 805 - 809. doi:10.1038/nbt.4240
Littlefield, M. M. (2018). Instrumental Intimacy. Johns Hopkins University Press.
Musk, E., & Neuralink. (2019, July 17). An integrated brain-machine interface platform with thousands of channels. bioRxiv. doi:10.1101/703801
Nadin, D. (2020, May 5). EEG research is racially biased, so undergrad scientists designed new electrodes to fix it. Massive Science. https://massivesci.com/articles/racial-bias-eeg-electrodes-research/
Oliver, M., & Rotter, S. (2017, December 13). Neurotechnology: current developments and ethical issues. Frontiers in Systems Neuroscience, 11(93). doi:10.3389/fnsys.2017.00093
Roelfsema, P. R., Denys, D., & Klink, P. C. (2018, July 1). Mind reading and writing: the future of neurotechnology. Trends in Cognitive Sciences, 22(7), 598 - 610. doi:10.1016/j.tics.2018.04.001
Safra, L., Chevallier, C., Grezes, J., & Baumard, N. (2020, September 22). Tracking historical changes in trustworthiness using machine learning analyses of facial cues in paintings. Nature Communications, 11(1), 4728. doi:10.1038/s41467-020-18566-7
Samuel, S. (2019, December 20). Brain-reading tech is coming. The law is not ready to protect us. Vox. https://www.vox.com/2019/8/30/20835137/facebook-zuckerberg-elon-musk-brain-mind-reading-neuroethics
Schmidt, P., Reiss, A., Durichen, R., & Laerhoven, K. V. (2019, September 20). Wearable-based affect recognition - a review. Sensors, 19(19), 4079. doi:10.3390/s19194079
Stark, L. (2018). Algorithmic psychometrics and the scalable subject. Social Studies of Science, 48(2), 204 - 231. doi:10.1177/0306312718772094
Vincent, J. (2020, June 24). AI experts say research into algorithms that claim to predict criminality must end. The Verge. https://www.theverge.com/2020/6/24/21301465/ai-machine-learning-racist-crime-prediction-coalition-critical-technology-springer-study
Yuste, R., Goering, S., Aguera Y Arcas, B., Bi, G., Carmena, J. M., Carter, A., Fins, J. J., Friesen, P., Gallant, J., Huggins, J. E., Illes, J., Kellmeyer, P., Klein, E., Marblestone, A., Mitchell, C., Parens, E., Pham, M., Rubel, A., Sadato, N., … Wolpaw, J. (2017, November 9). Four ethical priorities for neurotechnologies and AI. Nature, 551(7679), 159 - 163. doi:10.1038/551159a