Wednesday, June 26, 2019

The Flawed Logic at the Heart of the Automation Fantasy


A thought-provoking white paper on automation and digital learning by Michelle Shevin. Dec 19, 2018 · 14-minute read






“Tech doesn’t solve for trust, accountability, or labor — it shifts responsibility away from systems and onto individuals.”




Across the private, public, and non-profit sectors, a common recipe is being applied to growing stores of data: interoperability → integration → optimization → automation.

Promising to usher in an era of “smart cities,” “efficient services,” and “unlimited leisure,” automation is the fantasy driving the current revolution across business and bureaucracy.

Overwhelmed by the massive amount of information (personally identifiable and otherwise) generated by your operations in the digital age? Not to fear, the Age of Automation is here.

One Stack to Solve Them All

Automation promises (cheap) compliance. It worships at the altar of efficiency. It paints your wicked problems as debuggable and your complex systems as a set of linear causal relationships just waiting to be disentangled. It does not distinguish between the types of problems and processes faced by private vs. public institutions. It will not only mine your data for diamonds, but also cut and polish the stones. Put simply, it will “change the world.”

Organizations are not just mobilizing their own data. The best insights come from analyzing massive swaths of data from different sources. This is why public sector agencies are pooling data across services, and why consumer companies are gobbling up their customers’ personal information. According to the New York Times, American companies spent close to $20 billion in 2018 acquiring and processing consumer data.

The promised logic of that spend goes something like this:

In step 1, interoperability, data is made machine-readable and digestible. Nigh gone are the days of manually digitizing PDFs. A growing stack of tools including scanners, computer vision, and natural language processing algorithms is getting better at munching up even the messiest data substrates and regurgitating extracted and compiled data-cud, ripe for analysis. With graph analytics, even the most disparate data sets can be layered for mining.
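
To make step 1 concrete, here is a minimal sketch of the kind of extraction pipeline described above. It assumes the pytesseract OCR wrapper and spaCy's small English model are installed; the input file name is hypothetical.

```python
# Step 1 in miniature: a scanned page goes in, structured records
# come out. Assumes pytesseract (Tesseract OCR) and spaCy's
# en_core_web_sm model are installed; "case_file_0042.png" is a
# hypothetical input file.
from PIL import Image
import pytesseract
import spacy

nlp = spacy.load("en_core_web_sm")

# OCR: pixels in, raw text out.
raw_text = pytesseract.image_to_string(Image.open("case_file_0042.png"))

# NLP: raw text in, extracted entities out, ripe for loading into a
# database and layering with other sources.
doc = nlp(raw_text)
records = [(ent.text, ent.label_) for ent in doc.ents]
print(records)  # e.g. [("Jane Doe", "PERSON"), ("Little Rock", "GPE")]
```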

In step 2, integration, data from separate systems is joined up and made accessible through interfaces, dashboards, and databases. Longitudinal researchers, rejoice! Silos are spanned. Graphs are analyzed. Previously unseen relationships are modeled and visualized. Trends are heralded, their strength and directionality dissected and served up as so much “insight.”
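
A toy illustration of the silo-spanning join at the heart of step 2, using pandas. The agencies, column names, and the shared person_id key are all hypothetical.

```python
# Step 2 in miniature: joining records from two previously siloed
# systems on a shared identifier, so cross-system relationships
# become queryable. All column and agency names are hypothetical.
import pandas as pd

housing = pd.DataFrame({
    "person_id": [101, 102, 103],
    "housing_status": ["stable", "at_risk", "unhoused"],
})
health = pd.DataFrame({
    "person_id": [101, 103, 104],
    "er_visits_last_year": [0, 7, 2],
})

# The silo-spanning join: an inner merge keeps only people who
# appear in both systems.
joined = housing.merge(health, on="person_id", how="inner")
print(joined)
# Longitudinal "insight" is now one groupby away, along with every
# bias baked into how each agency collected its records.
```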

In step 3, optimization, algorithms are layered onto the stack that promise to do things like “recommend,” “personalize,” and “predict” much better than any mere human can or ever could. In lieu of complicated stakeholder engagement with its messy debates about values, these algorithms take their cues from the logic of our past decisions and dominant narratives — from capitalism and neoliberal institutionalism. They drive toward efficiency. They drive toward growth. If the predecessor systems that generated their historical data inputs were sustainable, equitable, or fair, the algorithms might be too. If not, onward — speed the collapse.
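
A minimal sketch of step 3's core move: fit a model to past decisions and replay their logic as "prediction." It uses scikit-learn; the features, labels, and applicant are all invented.

```python
# Step 3 in miniature: a model that learns the logic of past
# decisions and replays it as "prediction." Uses scikit-learn;
# all features and labels are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical inputs (income in $1000s, prior service denials) and the
# outcomes past gatekeepers actually chose (1 = approved).
X_past = np.array([[52, 0], [18, 3], [61, 1], [15, 2]])
y_past = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X_past, y_past)

# The score for a new applicant extrapolates the old pattern: if the
# past decisions were inequitable, so is this probability.
new_applicant = np.array([[17, 2]])
print(model.predict_proba(new_applicant)[0, 1])  # probability of approval
```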

In step 4, automation, new algorithms are given new responsibilities. Building onto a stack that already claims to better understand system dynamics and relationships, they now offer to reorganize accountability mechanisms and decision-making structures. They determine creditworthiness. They allocate healthcare benefits. They predicate access to public services — on previously orthogonal behaviors, or on one’s ability to prove one’s identity.
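
And a sketch of step 4, where the score is wired straight into the decision itself. The scoring rule and threshold are hypothetical stand-ins for a trained model; what matters is what the code omits: any path to appeal or human review.

```python
# Step 4 in miniature: a learned score wired directly into the
# decision structure, with no adjudication or appeal step. The
# scoring rule, threshold, and benefit are hypothetical stand-ins.
APPROVAL_THRESHOLD = 0.5  # chosen by whom, and contestable how?

def risk_score(income: float, prior_denials: int) -> float:
    """Stand-in for a trained model's probability-of-approval output."""
    return max(0.0, min(1.0, income / 60_000 - 0.1 * prior_denials))

def allocate_benefit(income: float, prior_denials: int) -> bool:
    """Grant or deny automatically; note there is no human in the loop."""
    return risk_score(income, prior_denials) >= APPROVAL_THRESHOLD

print(allocate_benefit(17_000, 2))  # False: denied, with no path to redress
```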

Alas. In the near term, at least, the promised land looks like trying to create a DHS digital account in Arkansas, and unfortunately it’s not pretty.


“The future is already here — just unevenly distributed.” — William Gibson


From my current perch in philanthropy, I see the shift happening everywhere — at different speeds in different places, altering the very ground for the work that we want to do across sectors and regions — sometimes insidiously, always inexorably.

In the past, I’ve worked in the public sector (DoD’s Cebrowski Institute), the private sector (syndicated research on technology futures), and in cross-sector consulting (developing innovation ecosystems for prize incentive competitions). In a relatively short time, I’ve come full circle from tech engagement enthusiast (at the age of 24, I described myself in a job application as a “wearables evangelist”) to cautious skeptic of technology’s capacity to intervene positively in human systems.

Last year, I was in a meeting on Sand Hill Rd., beating my drum about centering ethics and equity in the development of automated public sector systems, when a funder and data integration enthusiast asked very seriously, 

“Why do you keep bringing that up — like, what could go wrong?”

At this point, from my perspective, it’s less about what could go wrong and more about what already has. 

There’s something rotten at the center of the automation fantasy.

Automation goes by many names (“artificial intelligence,” “algorithmic decision-making,” etc.) but likes to hide its true nature. 

Here are a few of the many faces it wears.

Garbage in, Garbage out.

And most of it is garbage.

Sure, “big data” has revealed correlations and relationships that enable monetization, value creation, and improved service delivery (perhaps less than you’d think). But that’s largely in spite of data quality and veracity, not because of it.


To cite one of many misnomers, what we’re building is not actually artificial intelligence, it’s (flawed) human intelligence at the scale of industrial societal machinery — yes, the man behind the curtain is still twisting at the knobs of civilization. And yes, it is indeed a white man wearing cargo shorts with socks & sandals in Palo Alto.


Planning often disguises itself as prediction.

At scale, algorithms create the future they forecast.

When machines make an accurate prediction, it’s a triumph of status quo, not of foresight.

More often, as with humans, they make self-fulfilling prophecies. They’ll serve up more of the same, faster this time, more accurately, and with less of your input needed. For recommending what to watch next on Netflix, this is really great. I do not aspire to stop liking sci-fi films with a strong female lead. (In case it hasn’t penetrated your filter bubble, don’t sleep on instant camp classic The Pyramid.)

But when it comes to public sector service delivery and systems that have real impact on families and livelihoods, it’s a different story. Why would we intentionally model future decision-making on past patterns, which we know to have been systematically biased, unfair, inequitable, discriminatory, and in many cases ideologically irresponsible, if not dangerous? Sure, algorithms are pretty good at learning from past patterns to project future decisions. But in myriad systems, that’s the last thing we should want them to do.
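
To see the dynamic at work, consider a toy simulation, with entirely made-up numbers, of a system that sends more scrutiny wherever it previously recorded more events:

```python
# A toy feedback loop: attention is allocated in proportion to past
# records, so the district that starts with more recorded arrests
# keeps accumulating them, even though the underlying rate is
# identical in both. All numbers are invented.
arrests = [10.0, 12.0]   # historical counts in districts A and B
TRUE_RATE = [1.0, 1.0]   # identical real-world behavior

for year in range(5):
    total = sum(arrests)
    patrols = [a / total for a in arrests]            # "predictive" allocation
    observed = [r * p * 20 for r, p in zip(TRUE_RATE, patrols)]
    arrests = [a + o for a, o in zip(arrests, observed)]

print(arrests)  # the absolute gap widens every year: the model built its own proof
```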

Friction is the engine of stability — and of progress.

Healthy systems thrive at the edge of chaos.

Automation arrives under a banner of “progress,” but reveals itself as an agent of stagnation.


Friction — struggle — is theorized to be the driving force in biological evolution, tool use and technology development, a thriving immune system, and more. And yet in automation’s recipe book, it’s first on the list of ingredient substitutions.

Sick of having to do your own research? Algorithms will mine vast stores of information so you don’t have to. Sick of waiting in line? Algorithms can optimize your arrival time. Sick of composing responses as part of basic human communication? Algorithms can suggest a response that’s uncannily just so you.

But what’s at stake in this rush to lubricate our every (trans)action? What might be lost when we no longer have to wait, suffer boredom, struggle, think about it, or even try?

Optimization and influence are subtle forms of control.

Borrowing from ad-tech business models, automation’s end-game in a capitalist society is not just selling more stuff, but actually designing human behavior.

The data-mining infrastructure undergirding automation is the same that supports Surveillance Capitalism, and it wants to blunt our agency, rob us of our sanctuary, and erase our unpredictability. 

As Shoshana Zuboff puts it, 

“Forget the cliche that if it’s free you are the product — you are not the product, but merely the free source of raw material from which products are made for sale and purchase…You are not the product, you are the abandoned carcass.”


In “Algorithm and Blues: The Tyranny of the Coming Smart Tech Utopia,” Brett Frischmann describes some of the ideology at the heart of “smart tech” and automation:

“Supposedly, smart phones, grids, cars, homes, clothing and so on will make our lives easier, better, happier. These claims are rooted deeply in a smart-tech utopian vision that builds from prior techno-utopian visions such as cyber-utopianism as well as from economic-utopian visions such as the Coasean idea of friction-free, perfectly efficient markets and the Taylorist vision of scientifically managed, perfectly productive workers. 

In our modern digital networked world these visions creep well beyond their original contexts of idealized internet, markets and workplaces. Smart-tech can manage much more of our lives.”

There is no magic in machine learning.

Only ones and zeroes, graphs and correlations.

There’s no magic in machine learning, just a cascading flow of abdicated decision-making (and thus accountability). Sure, the power imbalances inherent in a world where some humans make decisions on behalf of other humans (to say nothing of nonhumans) are plenty problematic, but are we really so sure glorified mathematical equations are going to do a better job?

Speaking with the New York Times, the grandfather of computer programming, Donald Knuth, recently admitted,

“I am worried that algorithms are getting too prominent in the world. It started out that computer scientists were worried nobody was listening to us. Now I’m worried that too many people are listening.”


It’s clear that many decision makers have already bought into the fantasy that machines are better suited to make choices than we are. Code is being put in charge of important systems and decisions, in many cases without even a thought to processes for redress or adjudication. Who can afford a protracted legal battle to seek recourse after a buggy algorithm denies their healthcare? Ironically, only those whose income precludes them from needing access to public services at all.

Who wins — and who loses — in an automated world?

Automation promises to usher in wholly new forms of inequality.

Increasingly, access to services that put an explicitly human face on the automation of service delivery is sold at a premium. 

And in an automated world, privacy and sanctuary are privileges you pay for.


“Better never means better for everyone. It always means worse for some.” — Margaret Atwood

For a preview of who plans on winning the long game, take a peek at some of automation’s most vocal proponents:

+ The Inter-American Development Bank (IDB) is promoting the use of predictive analytics in the public sector, part of an ongoing fetish also known as “data for development.”

+ For IBM, data is the new oil. For enterprise software companies, automation is what’s for dinner, and the public sector is a massive emerging market.

+ As we heard from Facebook’s Mark Zuckerberg when (weakly) challenged by Congress on almost any problem with the platform that now mediates global information consumption (initially designed to reduce the friction associated with checking out freshman girls): algorithms will fix it.

+ Big consulting firms like Accenture stand to gain from what they call their “technology vision.” This week, McKinsey is under fire for aiding and legitimizing authoritarian governments.


Fundamentally, there are trade-offs implicit in an automated future. We are sold a bill of goods based on the assumed value of efficiency, but make invisible trade-offs in equity. We are promised freedom from friction, but end up losing serendipity. Our systems optimize resource allocation, but only by rendering us constantly surveilled and increasingly responsible for managing our interactions with the system. We look forward to a future where drudgery is machine-borne, but struggle to imagine holding on to human dignity and meaningful lives. We are seduced by the logic of straightforward measurement and evaluation, but forget that not everything that matters can be measured.

Structural inequality sits squarely in automation’s analytical blind spot.

Overreliance on data analysis functionally prioritizes the types of correlations that linear algebra is good at spotting — but not those arising from complex system dynamics.

By now, biased algorithms are a well-known problem. Because they rely on past data, they are prone to codifying bad patterns rooted in flawed data collection, in the inequitable historical distribution of services (and thus the oversurveillance of low-income and minority populations), and in pre-baked assumptions. We can see the evidence of this bias in the racist and sexist outcomes of automation efforts across sectors.

But with all of the focus on introducing fairness, accountability, and transparency in machine learning, we are still failing to see the forest for the trees. Specifically, attempts to correct for bias in algorithms typically fail to account for structural inequalities. Because it is steeped in and born out of historical data, automation knows only how to deepen the grooves of existing patterns, valuing only those variables that have been isolated for measurement and then made meaningful through their relation to other metrics.
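
A toy demonstration, on entirely invented data, of why the standard fix of dropping the protected attribute fails: a correlated proxy (here, a made-up zip code) lets the structural pattern back in. scikit-learn is assumed.

```python
# Why hiding the protected attribute rarely "de-biases" a model:
# a correlated proxy carries the structural pattern anyway.
# All data here is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2_000
group = rng.integers(0, 2, n)                    # protected attribute (hidden below)
zipcode = (group + (rng.random(n) < 0.1)) % 2    # proxy: ~90% aligned with group
label = (group + (rng.random(n) < 0.2)) % 2      # historically biased outcome

# Train WITHOUT the protected attribute; only the proxy is visible.
model = LogisticRegression().fit(zipcode.reshape(-1, 1), label)
pred = model.predict(zipcode.reshape(-1, 1))

# Predicted approval rates still split sharply along the hidden group.
for g in (0, 1):
    print(g, pred[group == g].mean())
```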

But it is precisely the structural ecosystem in which automation is being deployed that we’ll need to problematize and address if we want to harvest the promise of analytical tools. Legitimately resistant to statistical analysis, the water in which we swim — a rich stew full of narratives of dominance, ideologies of growth and consumption, fundamental false dichotomies, rampant othering, ubiquitous misinformation, and ecological fatalism — is something we can glimpse the edges of but scarcely transcend.

With automation, transcendence is not on offer. 

Optimization, yes. Mitigation, maybe. 

Solutions, in name only. Instead, the fantasy of automation carries the ethos of exceptionalism and the arrogant allure of the “end of history.” The fantasy of automation suggests deploying analytics to lock in the structures of the status quo. It’s a particular view of “progress.” Things could be so much better, it suggests, as long as the high-level distribution of power and resources stays pretty much the same.

Looking at algorithms that promise to revolutionize healthcare, Shannon Mattern writes:

What’s more, the blind faith that ubiquitous data collection will lead to “discoveries that benefit everyone” deserves skepticism. Large-scale empirical studies can reinforce health disparities, especially when demographic analyses are not grounded in specific hypotheses or theoretical frameworks. Ethicist Celia Fisher argues that studies like the Human Project need to clearly define “what class, race, and culture mean, taking into account how these definitions are continuously shaped and redefined by social and political forces,” and how certain groups have been marginalized, even pathologized, in medical discourse and practice. 

Researchers who draw conclusions based on observed correlations — untheorized and not historicized — run the risk, she says, of “attributing health problems to genetic or cultural dispositions in marginalized groups rather than to policies that sustain systemic political and institutional health inequities.” — Shannon Mattern, “Databodies in Codespace”

Automation shifts the burden of accountability away from systems and onto people.

The myth of unlimited leisure time through automation already rings false.

In an automated world, processes have been redesigned not to improve user experience, but to increase profit margins and/or reduce human capital expenditures.

But as Karen Levy’s research on trucking shows, automation doesn’t replace humans as much as it invades them. Like a violent ex-partner, it surveils, encroaches, polices, and manipulates, while requiring intimate access to the body and demanding increasing access to the mind.

Without intervention, those already at the margins will be further marginalized. And when automation is deployed in service of the status quo, value is extracted and/or invisible labor required from every person who interacts with automated systems.

The patient is now coordinator and advocate of her own care. The consumer is actively consumed in the ongoing cycle of consumption. The citizen is now arbiter of her own truth and curator of her own meaning. Across sectors, the invisible (and unpaid) labor now necessary to navigate the systems in which we are inextricably implicated reveals the individual as increasingly responsible and increasingly commoditized in the acts of consumption, citizenship, and the pursuit of health and wellbeing.

No such thing as neutral technology.

In the fractal hierarchy of automation technology, invisible values are embedded everywhere you look.

There are values — moral values — in every design choice, every implementation process, every organizational culture change, and in every impact on end-user decision-making.

The framing of automation as a “technical fix” or an inevitable application of technology obscures the age-old philosophical and moral underpinnings of the machine learning algorithms in the automation stack. Too often these run on auto-pilot in lieu of hard and inclusive conversations about values that resist quantification and measurement.

When it comes to automation technology, we should never assume neutrality, let alone positive progress. 

This is especially important when it comes to data integration and automation in the public sector. The same technical infrastructure built to support government transparency can be easily deployed for social control. The same analytics layers that promise to make criminal justice systems more just can also be used to fill private prisons with marginalized citizens. And the same surveillance mechanisms that promise to improve public safety can be mobilized to restrict citizens’ access to services.

China is promoting its social credit system, built explicitly on the government’s slogan “once untrustworthy, always restricted,” as a way to improve citizen trust in government. Chinese officials met with counterparts in at least 36 countries last year specifically to share their approach to “new media or information management” (read: digital control).

In Mexico, where already just 2% of citizens believe they are living in a full democracy, transparency speeds ahead of accountability, leaving in its wake not just truth, but also cynicism and disengagement. In Brazil, a renowned and expansive public integrated data system built to automate social service delivery is being connected to private sector employment data, just as a hardliner takes office who has waxed romantic about military dictatorship. 

In Kenya, the government has set out to catalogue each citizen’s genome and earlobe geometry. And in the United States, public integrated data systems are being built that will soon touch the majority of citizens.

To be clear, many of the dedicated civil servants who operate our public services rightly welcome data integration; even getting access to real-time data dashboards from within one’s own agency is still a compelling prospect in many districts. But there is a useful distinction to be made between data being used to improve outcomes through research versus data being used for individual case management, predictive analytics, decision-support, and automated service delivery. I’m worried that tech companies are selling the public sector on a vision of automation whose tools embed values of capitalism, not sustainability; efficiency, not equity; status quo, not justice. And note: no matter how many best practices are followed in design and implementation (as they have been in the integrated data system in Allegheny County, PA), there are at least two sides to every story of automation.

In every place you look, the fantasy of automation is finding purchase and fertile ground to plant its seeds. In spite of the blaring hype coming from enterprise tech companies, it most often does so quietly, insidiously, and strategically.

Impacted communities are left unaware until first contact with a buggy process or infuriating user experience. University IRBs are a thing quietly longed for, and yet long forgotten in the rush to ship. Systems are lifted wholesale from one context, white-labeled, and airdropped into another. Sold by the promise of modernization and progress, our leaders commit to procuring commercial off-the-shelf societal control.

I want to emphasize that none of this condemns data integration, graph analytics, or machine learning. These are valuable tools in a kit that must also include social science and stakeholder engagement. But the context in which these tools are deployed sets up path dependence. The fantasy that drives the purchasing and purposing of these tools merits careful scrutiny. The business models they support, the embedded values they encode, the degree of person-centeredness they reflect, the way they subtly shift responsibility between stakeholders, and the structural inequities they threaten to lock in — matter deeply. And the current context in which automation tools are sold and deployed is deeply flawed.

Commitments to community engagement, person-centered design methodologies and implementation approaches, rigorous and ongoing ethical review, default inclusion of social scientists and artists in development processes, algorithmic auditing, and the explicit and inclusive discussion of what values get embedded in tools (particularly ripe for revisiting: the unspoken social compacts between citizens and commercial/legal governance mechanisms) could go a long way toward ensuring automation isn’t flavored by authoritarianism, but not if we stay asleep at the wheel of this self-driving car.


Something is rotten at the center of the dream, and we urgently need to wake up — before we automate the broken promises of our past into the very fabric of our future.
