At this moment, among U.S. academics, there is a distinct anxiety about what we’re now calling Facebook’s “emotional contagion experiment.” In the years since the 2008 economic collapse, impresarios of a new data science in disciplines whose methodological foundations are only partly, often contestedly quantitative, or not quantitative at all (not to mention actively critical of quantitative analysis), have successfully claimed the attention of university administrators, even if mainly by recourse to ideology and innuendo, and even if such attention is fickle (as it certainly is). In disciplines where quantitative analytic approaches are historically contested, but active and accepted as legitimate, informing work all along a political-intellectual continuum (sociology, anthropology), less suspense has likely been felt than in disciplines where quantitative approaches had gone dormant after failed campaigns in the past (history, literary studies) and could only be revived through a politics of fear. Still, one might say that all along, university-based researchers who, even if only nominally and in their own eyes, earn their living providing a public good rather than goods and services, knew that their wager on a renascent Silicon Valley corporate venture-culture rooted in the economic and cultural power of Google and Facebook, in particular, was just that, a wager: on the value of what those companies produced, as a form of culture; on the acceptance of that value as a social good by a public, some public, that is non-reducible to “consumers” no matter how one tries; and on the acceptance of that value by educators, as knowledge worth knowing, preserving, and teaching.
In an essay entitled “Dark Facebook: Facebook’s Secret Experiment in Emotional Manipulation Provides a Fresh Glimpse of its Radical Politics and Absolutist Ambitions,” Shoshana Zuboff considers the question of what outcomes were and were not imagined, here:
The researchers, Adam D. I. Kramer from Facebook’s Core Data Science Team, Jamie Guillory from UCSF, and Jeffrey T. Hancock from Cornell, and their academic editor, Susan Fiske from Princeton, should have known better. They probably did know better. Indeed, Adam Kramer issued an apology of sorts — a strange document for many reasons. The grandiose claims of the article are repudiated and recast as weak results. But more interesting still is the obvious fact that he did not imagine the wave of outrage that greeted his triumphant publication in a respected academic journal.
I’m as much of two minds on this as Zuboff appears to be. I am quite certain that Kramer, Guillory, Hancock, and Fiske all knew better. I’m also more certain than Zuboff that Kramer, and certainly Guillory, Hancock and Fiske as well, did imagine such outrage as a possibility — and set it aside, in the uncomplicated, not overly premeditated hope that the course of events would run their way.
From what we know at the moment, at least, and generalizing from it — because Kramer, Guillory, Hancock, and Fiske are by no means unique or special here — we can describe the actions of such researchers or knowledge workers as a wager: a wager that publicity, and the forms of real even if temporary personal public power that come with it, would either obviate or outweigh the consequences of any criticism, either private (in the internally managed domain of professional ethics and behavior) or public.
Academics do make such wagers very frequently, though the specific form of partial autonomy they enjoy in their work should encourage us to hold them to the autonomous standard they attempt to evade. If you’ve ever had to manage a faculty colleague whose first and last goal in life is to earn the approval of an upper-level academic administrator whose job may very well require her or him to adapt university operations to the whims of banks, corporations, military and security agencies, and their elected representatives — all of whose relationships to the university, in a society that demands universal education and doesn’t conscript its youth, are fundamentally parasitic — then you know what I mean.
I hardly need to point out that this is the wager that companies like Facebook are making themselves: they’re not at all failing to imagine a tipping point at which they’re actually shunned, or merely abandoned, and begin to decline. It’s just that they’re trying to grab as much as they can, while they can. It’s a futureless effort, from beginning to end.
Careers in the balance?
The academic reaction to the controversy over Facebook’s “emotional contagion experiment” was noted immediately. “A major scientific journal behaving like a rabbit in the headlights. A university in a PR tailspin,” a lede in The Guardian suggested, dramatically but not incorrectly, and correctly intuiting what such behavior might portend. “In Backlash Over Facebook Research, Scientists Risk Loss of Valuable Resource,” opined The Chronicle of Higher Education, more patiently.
I don’t have the time, the capacity, or the will to respond to such events in real time. To know what I think, I have to wait, at least a little while — which in practice, these days, means watching as my colleagues in the academic profession rush into what we call public “engagement.” That might be why, by the time I’d finished starting to think about all this, the outrage driving the buzz seemed to have come and gone. It might also be why, from where I stand at least, despite claims that suggest otherwise, the loudest or at least the most articulate voices seem to belong to those who, while certainly not unconcerned about research ethics, whether for procedural or for genuinely human reasons, appear equally concerned, sometimes suspiciously histrionic, about the prospect of losing access to data — as if unable to conceive of pursuing their work otherwise. Michael Bernstein of Stanford (formerly of Facebook): “Regardless of the moral imperatives, let me start by saying as a designer of social systems for research that any such requirement [for informed consent] will have an incredibly chilling effect on social systems research.” Scott Robertson, University of Hawaii: “Facebook’s Going to Be OK, but Science Is Taking a Hit.” Matthew Salganik, Princeton University and Microsoft Research: “The public backlash has the potential to drive a wedge between the tech industry and the social science research community. This would be a loss for everyone: tech companies, academia, and the public.” Mary Gray, Indiana University and Microsoft Research: “Pointing the finger or pontificating doesn’t move us closer to the discussions we need to have.”
If we can’t freely and fully study our own manipulation, where are we? it is argued, in the most politically sensible and sensitive formulations of such a concern. That’s a compelling thing to say, and for as long as we’re merely arguing about it, who’d really disagree?
But the problem is that, in all imaginable practical terms, retaining access to Facebook (et al.)’s data will translate down to keeping one’s “likes” up: to staying on such a company’s good side or, if you prefer, in its happy face; at the very least, to publicly tolerating, and thus aligning oneself with, its priorities, in the end and in one way or another, despite all ambivalence and all professional provisos, qualifications, and, when those fail, self-defense. And that’s leaving aside for now — though we ought not to leave it aside for too long — the question of what is more at stake, in any particular case, in an academic researcher’s professional panic at the prospect of losing access to data: noble dedication to knowledge, or the sometimes less noble careerism it justifies?
Ever attentive to administrative perspectives, the CHE adroitly either elicits or manufactures that concern for career continuity:
“I’m definitely worried that’s going to be the upshot,” added David M.J. Lazer, a political scientist at Northeastern University and leader in computational social science. While the research that has come through Facebook has not fundamentally changed our view of the world, Mr. Lazer said, it’s been clever, and many have viewed it as a down payment on more work to come.
And the fact is that others have made such points far more crudely and problematically. “Transitional moments in science always breed new anxieties,” we are informed by Duncan J. Watts, also of Microsoft Research:
In the late 18th and early 19th century, the world went through a period of rapid scientific discovery. Previously mythical continents like Australia and Africa were explored for the first time, and thousands of new species as well as new peoples were discovered.
Indeed they were — and at a bit of a human and non-human cost, too, which we’re still reckoning. If you don’t understand what is crude and problematic about an argument framed as Watts frames it here, then the fact is that your views are as worthy of professional critical study — as data in themselves — as anything else. Perhaps more so. (See Alan Jacobs’s comments here.)
We don’t have to be Facebook professors
We don’t have to be Facebook professors. Let’s not be a Facebook profession at all. Danah Boyd has identified the real driver here as public “anger at the practice of big data,” and to that insight I think that Paul Ohm of the University of Colorado, as quoted in this article by Kashmir Hill, has added the necessary political-economic context:
“A statistician who lives in Silicon Valley is a ‘data scientist’,” says Paul Ohm, a law professor at the University of Colorado. “Lots of companies — Bitly, OKCupid — have this weird conflation of data research based on what their users are doing and corporate profit-making.”
Whatever else one might say about them, the editorial boards of major newspapers see a public issue here. So does at least one United States Senator thus far. Some academics with the capacity to make persuasive arguments in something other than scholarly or scientific self-interest or self-defense are now doing so. And if our business analysts know one thing and nothing else, they know it better than any of us: that nothing lasts.
But if you just can’t stomach any declinism whatsoever, at the moment, then let’s put it this way: the removal of access to data has the potential to become a growth industry in itself. Think of it as tattoo removal scaled up.
Can we imagine an equivalent growth area in the social sciences and associated technical fields? Of course. Start, just for example, with the work of researchers in Drexel University’s Privacy, Security, and Automation Laboratory on “adversarial stylometry,” among other topics. That is to say: refocus your research on the techniques of anonymization themselves, or on their cultural contexts — and so on. Can you imagine a researcher in such areas coming under public attack as the authors of the Facebook study have? And can we imagine such a researcher not seeing that crater of public opinion in her path, before tumbling into it?
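For readers outside these fields, the refocusing can be made concrete: stylometry attributes authorship from statistical regularities such as function-word frequencies, and adversarial stylometry studies how writing can be modified to defeat that attribution. The sketch below is a toy illustration of the attribution side only; the word list, the similarity measure, and the helper names are my own illustrative choices, not the Drexel lab’s actual methods.

```python
from collections import Counter
import math

# A classic stylometric feature set: frequencies of common function words,
# which authors use habitually and rarely think to disguise.
# This short list is illustrative only.
FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "it", "is", "was", "i"]

def profile(text: str) -> list[float]:
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two frequency profiles."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def attribute(unknown: str, candidates: dict[str, str]) -> str:
    """Name the candidate author whose known writing best matches the unknown text."""
    p = profile(unknown)
    return max(candidates, key=lambda name: cosine(p, profile(candidates[name])))
```

An “adversarial” rewrite, in these terms, is one that shifts a text’s profile away from its true author’s, which is why such research bears on protecting anonymity rather than on harvesting data.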
There is more to say here, particularly to my hapless academic colleagues in the humanities. For now, I leave them with this simultaneously glib and veracious observation by Gideon Rosen:
The Vietnam War, the civil-rights movement, and the social shake-up of the 1960s gave students reason to wonder about their culture and its values. The humanities were instantly “relevant,” and enrollments soared.
Following Julia Angwin, David Golumbia argues that “it isn’t possible to get off Facebook,” if by “Facebook” we mean “data brokers.” And he is right. Which means that the next step, beyond the legislative restraint of collection, will have to be the democratically planned, organized, and supervised destruction of data by the state — a process to be deliberated and to be legislated first of all (and emphatically not to be delegated to non-state actors). It’s hard to say what that would look like, especially (or even) in today’s United States — but it’s something I’ll be trying to imagine.
This essay also appears at Medium.