Well, here it is: a semi-finished version of my work with Keith Cross on the social significance of brain terminology in technology (and vice versa). It was a lot more work than I anticipated, and I learned a lot about the history of scientific metaphors, as well as about the many researchers and initiatives aiming to create more appropriate language for AI-labeled technology. On the advice of many who viewed earlier versions of the manuscript (thank you), I cut out a lot. But I feel there is much more to dig into -- it's the first time I feel I have enough material to maybe write a short book about it. Hmmmm https://arxiv.org/abs/2107.14042
For decades, the bidirectionality of the Computational Metaphor has caused concern among prominent computer scientists, who warn that treating machines like people can lead to treating people like machines. Joseph Weizenbaum, the developer of the chatbot Eliza, was perhaps the most prominent critic of computer anthropomorphization for this reason (Weizenbaum, 1976). And Edsger W. Dijkstra, who coined the phrase “structured programming,” also voiced his concerns about reversing the Computational Metaphor: “A more serious byproduct of the tendency to talk about machines in anthropomorphic terms is the companion phenomenon of talking about people in mechanistic terminology. The critical reading of articles about computer-assisted learning... leaves you no option: in the eyes of their authors, the educational process is simply reduced to a caricature, something like the building up of conditional reflexes. For those educationists, Pavlov’s dog adequately captures the essence of Mankind.”