
Notes on Ruha Benjamin's Race After Technology

Ruha Benjamin's Race After Technology has been circulating as a must-read for those wanting to learn more about how racism is encoded into everyday tech. But it's not for those looking for simple tips on how to de-bias data, or for those wishing to find distinct boundaries between ethical and non-ethical technology. Rather, this book is about the connections between overtly racist technology and that which is touted as "social good", how racism shapes scientific thinking and vice versa, and how race and racism are themselves an invented technology born from the scientific practice of classifying the things within our world. It is not a book that reveals answers for fixing tech; instead it reveals the racist logic behind tech's development and marketing, and challenges the reader to question whether certain tech, broken or not, is good at all. Below are some of my notes on the book's main sections.




Section 1: Engineered Inequity 

 "Intention" seems to be the most frequently cited excuse for developing or perpetuating a bad system. A good fraction of my Twitter TL is arguments between people who study the interactions between technology and society, and technologists themselves, and whether the blame of developing bad systems really falls on the designer if it was not their intention to harm people. Ruha Benjamin describes this argument in such a way that technologists are not held accountable for bad tech because there is not a "giggling programmer" or "boogeyman" who is intentionally mislabeling data sets, or corrupting calculations with malice. That is what we tend to think of when we think about racists and bigots -- covert actions, physically harming, name-calling, and ridiculing people who are not like them, and doing it with the intention of maintaining their perceived superiority over another. Yes, this is not likely what is happening when these systems are designed. Yet the systems somehow end up being disproportionately harmful in some way to the more vulnerable, and thus the inequities of society are somehow still encoded in technologies which are often touted as objective and unbiased. So how does a technologist who is not aware that they may be perpetuating racism, still be responsible for racism? How does a technologist who is not one of the underserved address this problem? 

Ruha Benjamin suggests focusing first on the actions of a system, rather than the intentions behind it. Technologies can be seen as "frozen moments" or "formalizations of fluid social interactions" which help us view our own values. As she states,

"Biased bots and all their coded cousins could also help subvert the status quo by exposing and authenticating the existence of systemic inequality and thus by holding up a 'black mirror' to society, challenging us humans to come to grips with our deeply held cultural institutionalized biases." 

She then gives an example showing that her spell-check recognizes the word "underserved" but not "overserved", as if there could not be such a thing in society; as if there couldn't be a population that has far more than what is necessary and unfairly saps resources away from those who don't have enough. So technologists have to be more in tune with the outputs of their tech. Data scientists have to be more attuned to what their technology actually does, rather than defending what it was intended to do. Racism does not have to be consciously intentional to exist; thus, when a new system is touted as being fair, unbiased, or objective, we should be skeptical, to say the least. As she states,

"Lack of intention is not a viable alibi. One cannot reap the reward when things go right, but downplay responsibility when they go wrong." 

Section 2: Default Discrimination 

Here Benjamin proposes an interesting concept, apparently inspired by The Matrix, in which we can flip the meaning of "glitch" from an outlier annoyance in a system to a meaningful signal which reveals some designed inequity. In The Matrix, when one experiences a sense of déjà vu (a "glitch" in the matrix program), it signals that the machines have changed something in order to maintain control over humans, and that one should prepare for a fight. Likewise, Benjamin proposes that we can look at glitches in technology (e.g. when facial recognition can't actually recognize the faces of Black people) as signals which reveal inequities embedded within the systems that create the technology itself. Thus, even though many glitches seem innocuous on the surface, they uncover deeply buried biases which are born out of a means to dominate another group of people.

I really like this flipped version of "glitch", although I would also consider another term: bug. In the tech domain, "bug" and "glitch" are often used interchangeably. A bug is not only an annoyance; a single bug can also indicate a potential infestation. This seems appropriate, given the overused excuse of single "bad apples" spoiling the bunch -- it is never just one bad apple, and it is never just one bug. If we see a bug, it could be a sign of some structural flaw allowing them entrance, some blind spot, some forgotten corner in the overall design. Thus, when addressing technological bugs, we should not simply remove that one and go on with our day; we should examine the entire structure to understand where it got in, and attack the problem at the source.

Section 3: Coded Exposure 

Photographs are often thought to be an unbiased way of capturing the world. Here Benjamin shows that this is not the case, making the point that the default settings for camera technologies are optimized to capture lighter skin tones, often rendering Black people invisible or blurry. Photography has a long history of optimizing its output to work only for those with light skin -- film calibrated to Kodak's Shirley Cards, for example, would often render Black people's faces blurry, or sometimes invisible except for the whites of their eyes and teeth. The phrase "technology is always one step ahead of society" is shown to be false in this one example, as companies have only retroactively adapted their products to address the issue. Kodak started producing "multiracial" Shirley Cards, and facial recognition technologies even today are scrambling to more accurately recognize Black people's faces. One example popped up on Twitter in the last couple of weeks, in which Twitter's automatic photo-cropping centered the faces of White people, even when they were not at the center of the photo, while cutting out the faces of Black people. Additionally, Zoom's virtual-background feature, which hides the background while leaving one's face visible, often removed the entire heads of Black people, leaving them simply as a set of shoulders:

 
[Image: screenshots from a September 2020 Twitter post showing facial recognition removing the entire head of a Black man in a Zoom meeting]

[Image: an experimental image circulating on Twitter in September 2020, in which, no matter the position of the Black man in the photo, Twitter's photo-cropping preferentially centers the White man]



These examples of photography show that although we view captured images as "just so" and objective, there is a layer of human-made decisions underneath which determines how the image will be viewed and who the image is meant to reach. The "invisibility" of Blackness she refers to can be captured in the "glitches" of technology, frozen moments of social interaction, which in a way are also photographs. To be unseen by the makers and purveyors of technology, those who hold the most power, is a signal of the systemic racism (intentional or not) within tech.

In this section, Benjamin also exposes similar practices in science. Just as photography's history is rooted in the desire to objectify, sort, and classify people, science has (even today) similar motivations. Genetic studies claiming to understand the biological bases of intelligence can run dangerously parallel to eugenics, for example. It should be obvious that any time we claim to link objective biological features to social traits and metrics, we run the risk of ranking members of society based on personal qualities that are not under their control. It should be obvious -- but it doesn't seem to be. And admittedly it wasn't always obvious to me. This is something I think about quite a lot, as many of my past studies have been motivated by measuring pain perception and predicting who is likely to respond to certain treatments based on brain properties. Does this mean that only people with property X should be allowed treatment Y because of some arbitrary thresholds I set in my study? Of course, the decisions in scientific studies are not arbitrary, but the point is that questions like this were not a central point of discussion when designing these studies or communicating the results, when I feel they should have been. Clearly, as controversial studies continue to be published in biology, IRB oversight is not enough to address these questions. What are the social implications of assigning this biological correlate to this social characteristic / performance metric / subjective experience? What things outside of biology could explain it instead? Do we really need to explore this on the biological plane, or is this experiment just, as Benjamin puts it, "a perversion of knowledge" that is already known? These are not always easy questions to answer, but they should be discussed very carefully, and the discussion should include viewpoints beyond the typical representative on an IRB.

Section 4: Technological Benevolence

In this section, Benjamin discusses the practice of labeling certain technologies as "social good" when in fact they reinforce racial biases and perpetuate harm. HireVue, for example, is a company that automates the screening of job candidates, claiming to help HR departments overcome human biases in hiring decisions, all at a much faster rate. Racial and gender biases in job searches and hiring are well documented, so a technology which claims to remove human prejudice from these decisions and purports to decide entirely on qualifications seems like a good idea. It seems like a service which could potentially increase the racial and gender diversity at any institution. However, this is often not the outcome, as many of these algorithms have been trained on data which comes from the employees already working at the institution (i.e. mostly not Black, mostly not women). Amazon, for example, had to shut down its automated resume-screening tool in 2018 when it realized the tool was primarily selecting the resumes of white men. Widespread adoption of this process would be disastrous. As Benjamin quotes computer science professor Arvind Narayanan,

"'Human decision makers might be biased, but at least there is a *diversity* of biases. Imagine a future where every employer uses automated resume screening algorithms that all use the same heuristics, and job seekers who do not pass those checks get rejected everywhere.'" 

Another example of techno-benevolence is healthcare, in which "hotspotting" is a tactic aimed at targeting patient populations that disproportionately use and need healthcare resources. Using insurance claims data and geographic profiling, hotspotting often targets less-wealthy, Black and Brown communities as a sink of healthcare costs, stigmatizing these community members even further for the disadvantages they already face. In other words, they get targeted as healthcare sinks, not necessarily because they are biologically more prone to disease, but because of the social disadvantages they suffer, which cause them to need more healthcare in the first place. While some claim that the cost of stigmatizing patients may be worth it for the extra attention they need, the point is that after receiving their health service they will still be part of a community more vulnerable to disease and poverty, living in areas with higher pollution and less educational opportunity. Improving these environmental conditions instead would kill two birds, but addressing only the immediate medical need, without considering the need to improve housing, mental health, and substance abuse services, will only further entrench the cycle: racism -> lacking resources and needing healthcare -> stigmatized population -> racism.

Another interesting observation that Benjamin makes is that AI is often marketed on its ability to provide personalized or tailored services or information, yet the way it really works is to generalize and define boundaries between *groups* of data. I have always wondered why AI is framed as a tool for personalization when in essence what we tend to want from it is stereotypes. Recommendation engines, for example, provide options to you based on people who are mathematically similar to you in a space defined by some set of variables. So, for example, if a movie recommender has information about my age, sex, marital status, and racial identity, but has no data about what kind of movies I like, it will offer movies that other people similar in age, sex, marital status, and racial identity like. This is the very definition of stereotyping. Sure, the more data points it has, the less generalized the recommendations become, but in order to get those data points it either requires exposing and extracting more private data (e.g. social media activity, internet searches, location and movement data), or imputing missing data via ... generalizing.
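A toy sketch of that kind of similarity-based recommendation might look like the following; the data, the features, and the nearest-neighbors approach are my own illustrative assumptions, not any real service's algorithm. With no taste data at all, "personalization" collapses into recommending whatever demographically similar users liked.

```python
# Hypothetical demographic-only recommender: nearest neighbors in a feature
# space of demographics, then recommend what those neighbors liked.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Each row: [age, sex, marital_status, racial_identity], encoded numerically.
users = np.array([
    [34, 0, 1, 2],
    [36, 0, 1, 2],
    [22, 1, 0, 0],
    [61, 1, 1, 1],
])
liked_movies = [["Movie A"], ["Movie B"], ["Movie C"], ["Movie D"]]

new_user = np.array([[35, 0, 1, 2]])   # demographics known, taste unknown

# Find the most demographically similar existing users...
nn = NearestNeighbors(n_neighbors=2).fit(users)
_, idx = nn.kneighbors(new_user)

# ...and recommend whatever they liked: a generalization from group membership,
# i.e. a stereotype dressed up as personalization.
recommendations = {movie for i in idx[0] for movie in liked_movies[i]}
print(recommendations)   # {'Movie A', 'Movie B'}
```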

Having race and gender information on an individual can lead to greatly improved machine learning metrics. And when this information is not directly available, either because the individual does not disclose it or because it is explicitly prohibited from use in an algorithm, algorithms can generalize and make fairly accurate guesses based on other bits of information: zip code and name. Interestingly, Benjamin points out that, at least for one company which uses this data to guess one's race (Diversity Inc.), African and Filipino Americans are the hardest to identify based solely on name, because they share the names of those who enslaved or colonized them. Thus, zip code is crucial for generalizing their race. Nonetheless, the fact that zip code serves as an accurate proxy for race in these algorithms highlights that racial bias is inherent in the data, even when an explicit race variable is not present. Thus any claim to provide bias-free tech is very likely false, due to the effects of systemic discrimination embedded throughout the data.
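As a hypothetical illustration of that last point (my own construction, not Diversity Inc.'s method), a model given nothing but a one-hot-encoded zip code can recover race with high accuracy whenever residential segregation correlates the two, which is exactly why dropping the explicit race column does not make the data race-blind.

```python
# Hypothetical proxy demonstration: predict race from zip code alone,
# with no explicit race feature in the inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 10000

race = rng.integers(0, 2, n)
# Residential segregation: each group is concentrated in different (synthetic) zip codes.
zip_code = np.where(race == 0,
                    rng.integers(0, 50, n),    # zips 0-49: mostly group 0
                    rng.integers(40, 90, n))   # zips 40-89: mostly group 1

# One-hot encode zip code and try to recover race from it alone.
X = np.eye(90)[zip_code]
X_tr, X_te, y_tr, y_te = train_test_split(X, race, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"race recovered from zip code alone: {clf.score(X_te, y_te):.0%} accuracy")
```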

Section 5: Retooling Solidarity, Reimagining Justice

This last section begins laying out what may be done to combat racist technologies and prevent them from coming into existence. It does so, though, in a way that avoids exposing an "underground railroad", and requires the reader to think it through for themselves. Benjamin shows how technologies ranging from the explicitly racist to those masquerading as social good all help entrench racism further into our social systems, and she explicitly avoids drawing a line between good and bad tech -- perhaps so the reader can exercise these thoughts for self-education purposes, or perhaps because there is no such thing.

What is obvious from this section, however, is that tech-only fixes to racism and discrimination are not adequate. She highlights Jay-Z's investment in the app Promise, which claims "social good" status by saving the prison industrial complex money through phone-tracking people in pretrial detention rather than physically imprisoning them. Keeping people out of prison seems like a good idea on the surface, but as Benjamin argues, in reality it expands the reach of the prison industrial complex:

"Yes, it is vital to divert money away from imprisonment to schools and public housing, if we really want to make communities stronger, safer, and more supportive for all their members. But ... simply diverting resources in this way is no panacea, because schools and public housing as they currently function are an extension of the PIC [prison industrial complex]: many operate with a logic of carcerality and on policies that discriminate against those who have been convicted of crimes. Pouring money into them as they are will only make them more effective in their current function as institutions of social control... I am calling on all of us to question the 'do good' rhetoric of the tech industry,"

and further, 

"We also demand the power to shape the programs and institutions in our communities and to propose a new and more humane vision of how resources and technology are used. This requires us to consider not only the ends, but the means. How we get to the end matters. If the path is that private companies, celebrities, and tech innovators should cash in on the momentum of communities and organizations that challenge mass incarceration, the likelihood is that the end achieved will replicate the current social order."

Benjamin exemplifies the shortcomings of tech solutions with Facebook's VR "empathy machine", which was built on the proposal that empathy (rather than equity or justice) can be increased by immersing someone in the visual experience of another. Zuckerberg himself demonstrated the technology after the 2017 Puerto Rico hurricane disasters, a demonstration that later received heavy criticism. As Benjamin points out, however, just seeing the world from someone else's viewpoint does nothing to change our prior experiences and assumptions, which hold far more power over our version of reality. Ideas like this are, from the get-go, at the very least poorly informed, if not simply asinine, and seem designed more for entertainment and the exploitation of others' trauma and misfortune than for social betterment.

Keeping technology from cementing racism into society takes conscious effort and creativity on the part of its creators, and is something that technology alone cannot do. It requires human imagination and prioritizing equity over profit. However, as people are slow to learn, and as tech is dominated by the already powerful, waiting for tech to come around would only mean further destruction of Black and Brown communities. So to give the reader a start, Benjamin highlights tools for abolition such as data and algorithm audits, the European Union's General Data Protection Regulation, Data Nutrition Labels, the Digital Defense Playbook, and organizations like Data for Black Lives and the Stop LAPD Spying Coalition.
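As one small example of what an algorithm audit can involve (my own sketch, not a procedure prescribed in the book), a first-pass check might simply compare a model's selection rates across groups and flag large gaps, for instance against the "four-fifths" rule of thumb used in employment-discrimination contexts.

```python
# Hypothetical first-pass audit check: disparate impact ratio across two groups.
import numpy as np

def disparate_impact_ratio(decisions: np.ndarray, group: np.ndarray) -> float:
    """Ratio of the lowest group selection rate to the highest (1.0 = parity)."""
    rates = [decisions[group == g].mean() for g in np.unique(group)]
    return min(rates) / max(rates)

# Made-up yes/no screening decisions for members of two groups.
decisions = np.array([1, 1, 1, 0, 1, 0, 0, 0, 1, 0])
group     = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

ratio = disparate_impact_ratio(decisions, group)
flag = "  (below 0.8: investigate further)" if ratio < 0.8 else ""
print(f"disparate impact ratio: {ratio:.2f}{flag}")
```

A check like this is only a starting point, of course; the audits, playbooks, and organizations Benjamin points to go well beyond a single metric.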

How to apply these resources and this knowledge to my everyday approach to using and building technology is something I think about constantly. I am certainly more wary now of "social good" tech, which admittedly was a strong factor in bringing me into the tech space to begin with. I can see more clearly the racism-reinforcing feedback loop strengthened by tech (and by some scientific practices), and I feel quite foolish for not seeing it before. And because this book focuses so much attention on the effects of tech on Black people, I have certainly learned more about racist designs in tech in general. This book also has me questioning and re-evaluating my role in this space (critical AI / science / tech) as someone who is not Black. Specifically, I'm trying to monitor how I can be of value while being careful not to exploit the work of the Black researchers who have been the leading voices in the field. I have no interest in "replicating the current social order" which continues to be so harmful to so many.
