For the past few decades, there have been constant warnings that our technological developments are threatening to outpace our civilization’s ability to adapt to and contain them. Some of today’s loudest alarms are being rung by the likes of Peter Thiel or Elon Musk, billionaires who are quick to rattle off a long list of doomsday tech scenarios (killer artificial intelligence, digital or biological weapons). Who is going to tell them that these moments have already come to pass, especially for those of us who are not white?
Some of the more blood-stained hands need little reminder. Over the past few weeks of protests set off by ex-Minneapolis police officer Derek Chauvin killing George Floyd, waves of technology companies have offered pithy statements and gestures meant to distract from their role in enabling police violence, giving police powerful but often ineffective technologies, empowering federal authorities, and immiserating Black and brown communities for profit.
As debates rage over the best way to eliminate police violence, we should look at the way technology has been used to depoliticize crime while criminalizing marginalized groups and building a massive surveillance infrastructure that inevitably targets Black and brown people.
In her New York Times op-ed “Yes, We Mean Literally Abolish the Police,” Mariame Kaba explains that the origins of policing in strike busting and slave patrols should remind us that historically they’ve “suppressed marginalized populations to protect the status quo.” A century of reforms has failed to reduce police violence, especially against Black people, because when “a police officer brutalizes a black person, he is doing what he sees as his job.”
Defunding the police for terrorizing Black and brown communities is part of a larger ambition: the abolition of police (and prisons) to create “vital systems of support that many communities lack” instead of systems of brutalization, as prison abolitionist Ruth Wilson Gilmore puts it. The question remains, however: what does abolition mean for the companies and industries that empower police violence and profit from it?
The politics behind how crime is constructed and technology is deployed in response are a good place to start. We must, as political economist Jathan Sadowski writes in OneZero, become “Marie Kondo, but for technology. Does this thing contribute to human well-being and/or social welfare? If not, toss it away!”
How are crime and technology political?
For years, there has been a scramble for policing tools or methods to counter the growing public perception of policing as inherently racist, arbitrary, and gratuitously violent, with scientific narratives emphasizing objectivity and the statistical analysis of patterns. In one influential paper on the subject published in 2011, researchers David Weisburd and Peter Neyroud called for a “radical reformation of the role of science in policing” and argued that “the advancement of science in policing is essential if police are to retain public support and legitimacy.”
The great hope Weisburd, Neyroud, criminologists, and police departments espouse is the idea of remaking policing into an “arena of evidence-based policies” and convincing the public that it is driven by statistical analysis, not racism—by the reality of crime, not the dictates of officials informed by racial bias.
This cannot wipe away the fact, however, that criminalization has never been neutral; it has always capitalized on race, socioeconomic status, or a variety of proxies to determine who and what is criminal (and who is most often prosecuted for “crimes,” thus creating a statistical footing for algorithms to analyze). In their study From “brute” to “thug”: the demonization and criminalization of unarmed Black male victims in America, sociologists Calvin John Smiley and David Fakunle explore how historical “myths, stereotypes, and racist ideologies led to discriminatory policies and court rulings that fueled racial violence in a post-Reconstruction era and [have] culminated in the exponential increase of Black male incarceration today.” Historian Khalil Gibran Muhammad’s The Condemnation of Blackness: Race, Crime, and the Making of Modern Urban America argues that in “a rapidly industrializing, urbanizing, and demographically shifting America, Blackness was refashioned through crime statistics,” and that the 20th century saw great energy expended to forge seemingly empirical links between Blackness and criminality that would justify evidence-based policies terrorizing and brutalizing Black people.
Muhammad traces the origins of America’s racialization of criminality back to the 1890 census, part of a larger project to explain away racial disparities in incarceration as individual failings—criminality not as a social phenomenon but as a cultural trait inherent to supposedly inferior racial groups. Ironically enough, investigative reporter Yasha Levine also points to this moment as the origin point for the racism endemic to modern computer technology, and more specifically to Silicon Valley: “As the battle over the 2020 census makes clear, the drive to tally up our neighbors, to sort them into categories and turn them into statistics, still carries the seed of our own dehumanization.”
Technology, criminalization, and thus, crime statistics may be presented as race-neutral or apolitical, but historical evidence—and a little critical thinking—make it clear that this is bullshit. As Sadowski’s Too Smart argues, technology is best understood as “embedded with values and intentions…the result of decisions and actions made by humans, and it is then used by humans with motivations and goals.” Sadowski’s book focuses primarily on smart tech—integral to policing’s evidence-based rebrand—and fleshes out two major imperatives of smart tech to collect and control data: “The imperative of collection is about extracting all data, from all sources, by any means possible…the imperative of control is about creating systems that monitor, manage, and manipulate the world and people.”
The imperative to collect seeks to recast data “as an omnipresent resource right at the time when there is so much to gain for whoever can lay claim to that data and capitalize on its value.” The imperative to control “works through various, sprawling, connected, hidden systems, which monitor people by breaking them down into data points that can be recorded, analyzed, and assessed in real time” so that exclusion and inclusion can be finely tuned with checkpoints and passwords. Both of these imperatives are obviously political, especially given the racist origins of computing technology as a way to naturalize structural outcomes like poverty and incarceration.
Given this history, attempts to depoliticize such inherently political phenomena are worrying to say the least. In both cases, such efforts also betray a certain urgency that contradicts the supposed goal of improving public safety and health. According to Weisburd and Neyroud, depoliticizing crime will allow policing to “ensure its survival in a competitive world of provision of public services”—in other words, the institution is primarily concerned with preserving itself, not protecting us. And as technology critic Evgeny Morozov observed in his cutting review of Kevin Kelly’s What Technology Wants, apolitical and ahistorical conceptions of technology are useless unless you are interested in selling useless advice to corporations so they can sell a bullshit, apolitical narrative to the public. It’s not clear to me why we shouldn’t assume this is what they want.
Police-tech partnerships build a racist, yet legitimate, surveillance state
In their quest to regain legitimacy by depoliticizing crime as a category and technology as a tool, law enforcement have created the conditions for racist policing to be perpetually reproduced. Nowhere is this more clear than in the implementation of predictive policing and facial recognition software, two tools that have at best simply maintained the status quo and at worst have deepened the racial disparities behind criminalization, police violence, and incarceration.
Before policing began to flirt with predictive analytics, it was already widespread in the private sector. The insurance industry, for example, has for decades enjoyed an ability to “record, analyze, discipline, and punish people.” “Smart” systems—specifically surveillance tools that ensure a never-ending flow of data and analysis—have allowed insurance companies and financial institutions to maintain that “judgements about who is responsible, what things are worth, how society should be organized” are now objective, even when their outcomes replicate society’s own structural disparities.
Predictive analytics in policing has been no different. Consider William Bratton, who as New York City Police Commissioner between 1994 and 1996 championed broken-windows policing (a focus on “minor” crimes to prevent violent offenses) and introduced COMPSTAT (a “data-driven” system to track and respond to crime in real time), low-tech predictive policing precursors that resulted in incredibly racist outcomes. There’s scant evidence broken-windows policing was anything more than a new way to justify and expand racist and classist policing patterns. COMPSTAT led to the expansion of New York City’s unconstitutional stop-and-frisk program, a racial profiling strategy that terrorized non-white communities in exchange for lasting negative impacts on mental health, educational attainment, and civic engagement.
None of this deterred Bratton. After becoming LAPD chief in 2002, he enlisted Sean Malinowski—then a sergeant with a doctorate in public administration—to improve LAPD’s COMPSTAT and then, in 2007, put out a call to action: “I’m asking that more researchers begin to work with us and among us in the real-world laboratories of our departments and cities to help us prove or disprove the beliefs and practices that I, as a practitioner, and most of my colleagues deeply believe, espouse, and practice.”
Out of this, a flurry of data-driven experiments would emerge, the most important of which was PredPol. The software’s frontman, P. Jeffrey Brantingham, a UCLA anthropologist, approached Bratton and Malinowski and offered to adapt a model that predicted earthquake aftershocks to predict crime using historical crime statistics. Officers would be given maps with red 500-by-500-foot boxes where crime was predicted to happen and expected to patrol those boxes to deter crime or catch criminals in the act. As Motherboard has reported, software like PredPol is and has been racist from the start.
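PredPol’s actual model is proprietary, but the published idea it borrows from seismology is a self-exciting point process: each past crime recorded in a grid cell raises that cell’s predicted intensity, with the boost decaying over time, just as an earthquake raises the short-term likelihood of aftershocks nearby. Here is a toy sketch of that general idea—every function name, parameter, and value below is an illustrative assumption, not PredPol’s code:

```python
import math

def hotspot_scores(events, now, cells, mu=0.1, alpha=0.5, decay=0.2):
    """Toy self-exciting ("aftershock") crime scoring.

    events: list of (cell_id, time) pairs of past recorded crimes.
    Each past event adds alpha * exp(-decay * age) to its cell's score,
    on top of a flat background rate mu. Parameters are illustrative.
    """
    scores = {}
    for cell in cells:
        boost = sum(alpha * math.exp(-decay * (now - t))
                    for c, t in events if c == cell and t <= now)
        scores[cell] = mu + boost
    return scores

# Cells are abstract grid IDs; times are in days.
events = [("A", 1), ("A", 2), ("B", 0)]
scores = hotspot_scores(events, now=3, cells=["A", "B", "C"])
# Cell "A" scores highest: recent, repeated recorded crimes compound.
```

Even this toy makes the feedback loop visible: cells with more recorded crimes score higher, attracting more patrols and therefore more recorded crimes, so historical policing patterns are laundered into “predictions.”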
Legal scholars have warned that predictive policing systems threaten Fourth Amendment rights, namely the requirement of “reasonable suspicion” before a police officer can stop someone and the protection against “unreasonable searches and seizures,” by rendering anyone within a certain space at a certain time a potential criminal. Add to this the fact that location is one of many proxies for race, and that reliance on historical crime data will reinforce existing disparities in policing, which focuses on non-white communities in cities instead of, say, white communities in the suburbs. It will also further incentivize the expansion of “smart” systems to not only collect data through surveillance but, as Bratton put it in his 2007 call to action, “control behavior to such a degree that we can change behavior.”
Santa Cruz, one of the first U.S. cities to adopt PredPol, is now set to be the first U.S. city to ban the software, while dozens of other cities have secretly experimented with the program or still are. And even if every city were to ban predictive policing, it is likely that serious damage has already been done by the adoption of a depoliticized crime-and-technology paradigm that merely reinforced racist policing and re-legitimized our systemically violent police departments. For nearly a decade, concerns have been raised that this paradigm not only distorts crime data for its own ends, but makes it near impossible to accurately assess its impact, positive or negative, on crime.
This problem is not limited to predictive policing, however, and actually persists across all technologies provided to police departments. Time and time again, we see technology being deployed to simply legitimize racist policing or the targeting of marginalized communities in new and old ways.
Despite the fact that facial recognition technology is widely acknowledged to be rife with racial bias, Amazon has for years provided its own racist product, Rekognition, to a handful of police departments. The company declared a one-year moratorium on providing facial recognition technology to the police, but this does not extend to its impressive lobbying efforts to write the laws that will regulate facial recognition and it cannot undo the false positives that have already led to one man, that we know of, being wrongfully accused by an algorithm. Nor does the temporary ban extend to its home surveillance network, Ring, which Amazon has gone to great lengths to not only provide to over one thousand police departments, but position itself to profit from the racist suburban paranoia that drives communities to adopt the camera in the first place. Nor does it extend to the “heat maps” of package thefts that Amazon has provided to police.
In the wake of the George Floyd protests, aerial surveillance was used in at least 15 different cities to track protesters—a callback to when the FBI operated a fleet of surveillance aircraft that flew over U.S. cities. Ostensibly concerned about civil unrest, federal agencies have deployed powerful surveillance tools and gained expansive new executive powers to closely track the protests along with any associated social media activity—a throwback to the federal government’s long history of using powerful surveillance tools to surveil protesters and dissidents, especially Black activists. Calls for transparency have grown as a result, but what good is sanitizing this surveillance infrastructure with light if we won’t take it down?
Abolition demands we become Luddites
Most talk of abolition focuses on the police and prisons, institutions whose punitive vision of addressing harm within our communities has transformed society into one dominated by dragnet surveillance and criminalization motivated by biases, all in the name of solving crime without actually addressing the root causes of it. But what of the infrastructures—the carceral tech—enthusiastically built by companies like Amazon, IBM, Palantir, Clearview, Axon (the maker of Tasers and body cameras), and hundreds of other tech companies?
In a column for the Guardian, Ben Tarnoff argues that in order to avoid a climate apocalypse, we’ll need to decarbonize by halting Big Tech’s attempt to digitize everything by deploying computing technology everywhere. Such a position might draw accusations of Luddism, but to that Tarnoff says:
“Good: Luddism is a label to embrace. The Luddites were heroic figures and acute technological thinkers. They smashed textile machinery in 19th-century England because they had the capacity to perceive technology ‘in the present tense,’ in the words of the historian David F Noble. They didn’t wait patiently for the glorious future promised by the gospel of progress. They saw what certain machines were doing to them in the present tense—endangering their livelihoods—and dismantled them.”
For years, budget cuts for public goods and services have been matched with budget increases for police departments, prison facilities, and technology contracts to further bloat police and prisons. Couple this with a world where Silicon Valley companies have a near-infinite ability to raise capital regardless of profitability and what do we expect? A society dominated by institutions like Wall Street, monolithic tech companies, the Pentagon, police departments, and prisons is a society dominated by infrastructures that prioritize their interests and imperatives above all else.
“Rather than austerity for schools and services, we need austerity for surveillance and social control,” writes Sadowski. “More concretely, unmaking also means systematically ripping out cameras, ShotSpotters, and other spy tech filling our cities.” The vision Sadowski lays out in OneZero, and in his book, is worth considering as we figure out what police and prison abolition might mean for tech.
Part of that vision includes abolishing Silicon Valley by unmaking an industrial model that allows a narrow group of people to dictate the who, what, where, how, and why of technological development. We should be comfortable deciding that some forms of technology will never get built while others get destroyed and dismantled, or that some entities are never allowed to get their hands on various types of technology.
We’ll also need to liberate technological development from the dictates of capital—why should this or that tech be developed if it doesn’t prioritize human well-being or social welfare; why shouldn’t something be developed if it does but isn’t profitable to a narrow group of investors? Democratizing innovation is also necessary if we want to ensure our definitions of well-being or social welfare don’t look suspiciously similar to what is good for a status quo prioritizing the privileged and powerful. It makes sense, then, that democratization gives those who will be subjected to a technology serious, substantive roles in its creation and design, roles that go beyond consumption. And it follows that transparency means fairer access to the datasets and tools that shape our everyday lives so that, together, we can unmake systems of control and build up systems of support that prioritize public good, not private profit.
The emphasis needs to be on not simply reforming the status quo propping up our deeply broken technopolitics but radically altering the political economy so that we can finally address the rampant racism, classism, and sexism embedded in the tools building our society and mediating our daily lives. Public ownership of data to solve social problems without endless commodification, not data dividends that reinforce a one-sided and exploitative firm-consumer relationship. The dismantling of our planetary surveillance infrastructure, not a re-legitimization that threatens to make permanent some of its most pernicious elements.
For too long, we’ve allowed the same people who’ve historically used crime and technology as political weapons to convince us that these are ahistorical and apolitical—natural and neutral even. As long as we subscribe to those delusions, we will never be able to take the steps necessary to contest their control over how crime is constructed as a category that aids social control or how technology is deployed as a tool to further a narrow group’s interests. And as long as we are unable to see, let alone win, political battles that come with defining crime or designing technology, we will never be in a position to build the world we want.