

Showing posts from 2021

First preprint as an independent researcher

Well, here it is: a semi-finished version of my work with Keith Cross on the social significance of brain terminology in technology (and vice versa). It was a lot more work than I anticipated, and I learned a lot about the history of scientific metaphors, as well as about the many researchers and initiatives aiming to create more appropriate language for AI-labeled technology. On the advice of many who viewed earlier versions of the manuscript (thank you), I cut out a lot. But I feel there is much more to dig into -- it's the first time I feel like I have enough material to maybe write a short book about it. Hmmmm https://arxiv.org/abs/2107.14042

Still working... another excerpt about the computational metaphor

For decades, the bidirectionality of the Computational Metaphor has worried prominent computer scientists, who warned that treating machines like people can lead to treating people like machines. Joseph Weizenbaum, the developer of the chatbot Eliza, was perhaps the most prominent critic of computer anthropomorphization for this reason (Weizenbaum, 1976). And Edsger W. Dijkstra, who coined the phrase “structured programming,” also voiced his concerns about reversing the Computational Metaphor: “A more serious byproduct of the tendency to talk about machines in anthropomorphic terms is the companion phenomenon of talking about people in mechanistic terminology. The critical reading of articles about computer-assisted learning... leaves you no option: in the eyes of their authors, the educational process is simply reduced to a caricature, something like the building up of conditional reflexes. For those educationists, Pavlov’s dog adequately captures the essence of Mankind...

Excerpt from upcoming post on the brain-computer metaphor

Here's another snippet of the Brain-Computer Metaphor essay I'm working on. I'm finding it a little difficult to strike the right tone on this one. Nonetheless, below is the working intro, which I've edited down considerably from my previous version: Last year, OpenAI’s latest language model, GPT-3, was tested as a viable healthcare chatbot, and promptly suggested that a fake patient should commit suicide because they “felt bad” (Daws, 2020). While in this instance GPT-3 did not perform as hoped, large language models in general are fluent enough to give a false impression of language understanding and mental modeling (Bender et al., 2021), thus epitomizing the concept of artificial intelligence, or AI. Cases like the above, however, call into question whether the intelligence- and brain-based terminology used to market AI technology poses risks to those on whom the technology is used. At the core of this terminology is perhaps the most debated metaphor in all of science, the...

Excerpt from upcoming post on the social implications of the brain-computer metaphor

I've been working on an essay in which I point out an overlooked implication of a very popular neuroscience debate -- whether the brain is a computer. In it, I focus on the idea that, by reinforcing the metaphor, the tech industry leads society to grant AI a special status in making important decisions that it shouldn't have. And because the neuroscience and computer science fields share a historically close relationship, neuroscientists may be inadvertently pushing pervasive forms of tech-solutionism that have been shown to be harmful to marginalized folk. It's taking longer to put together than I had planned, but here's a small piece (to be edited): If the computer is a brain, what's missing from the metaphor? Input: To make data usable for a computer, a series of human interventions often needs to be performed first: collecting, cleaning, processing, and labeling are all crucial steps in getting a computer to "learn" patterns in the data. But what are ...
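
Since that last point is about the mechanics of data preparation, here's a minimal sketch of what those human interventions look like in practice. It's purely illustrative -- the file name, columns, and labels are all hypothetical, and it assumes the common pandas/scikit-learn stack rather than anything specific from the essay:

```python
# Illustrative sketch of the human judgment calls hidden in "the computer learns."
# All file names, columns, and labels below are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# 1. Collecting: humans decide what data exists and where it comes from.
raw = pd.read_csv("survey_responses.csv")  # hypothetical dataset

# 2. Cleaning: humans decide what counts as "bad" data and discard it.
clean = raw.dropna(subset=["free_text"]).drop_duplicates()

# 3. Processing: humans choose which features the model is allowed to see.
features = clean[["age", "word_count"]]

# 4. Labeling: humans assign the "ground truth" the model will reproduce.
labels = clean["human_assigned_label"]

# Only after all of these choices does the computer "learn" anything --
# and what it learns is shaped by every choice above.
model = LogisticRegression().fit(features, labels)
```

Each of those four steps bakes human decisions into the result, which is exactly what the metaphor leaves out.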