Art: Chaos, nr. 2, Hilma af Klint, 1906
Over the weekend, I was catching up on the piece in New York Magazine by model and actress Emily Ratajkowski, about how much it cost for her to buy her own image back. It’s a very good, very long piece with a lot of room for reflection, but what struck me most is that I would have never assumed she was going through something like this.
In the headlines and her photos, she is always glamorous and in control. In the piece, she is frustrated, angry, and, mostly, defeated. She writes:
I thought about something that had happened a couple of years prior, when I was 22. I’d been lying next to a pool under the white Los Angeles sun when a friend sent me a link to a website called 4chan. Private photos of me — along with those of hundreds of other women hacked in an iCloud phishing scam — were expected to leak onto the internet. A post on 4chan had compiled a list of actresses and models whose nudes would be published, and my name was on it. The pool’s surface sparkled in the sunlight, nearly blinding me as I squinted to scroll through the list of ten, 20, 50 women’s names until I landed on mine. There it was, in plain text, the way I’d seen it listed before on class roll calls: so simple, like it meant nothing.
It’s so surreal to think of someone as removed from mere mortals as Emily doing something as ordinary as checking her iPhone and panicking. After reading this piece, for me, she turned from a glossy magazine cover into a living, breathing human being.
Abstract You, Abstract Me
What I had in my head was an abstraction of Emily as portrayed to me by the media. Humans have been creating abstractions for hundreds of thousands of years, because there is simply no way to hold everything in our brains at once. A single human can’t know the inner lives of every single celebrity, how a car works from the engine up, what makes the weather happen, how pandemics work, how Nutella is made from scratch, and how the tax code works, all at once. There’s a reason we have generalists and specialists, and that generalists only know a little bit of everything: as an individual, it’s impossible to go deep on the entire universe.
One of the most basic examples of abstraction is the invention of writing. Writing is hard because we think in many dimensions: our minds connect different parts of different concepts, fragments of thoughts, feelings, colors, and smells, while writing is a one-dimensional medium in which we need to construct a reasoned argument or narrative.
Here is another fantastic example of an abstraction, one that a lot of working moms perform all the time:
This is the truest depiction of being a working parent with small children that I have ever seen. In my house, the floor is always covered with toys, I never have time to do anything more than brush my teeth, and my professional life exists in bursts between doctors’ visits, covid quarantines, sleepless nights, and the endless, endless task of washing the dishes and doing the laundry, every single day. But thanks to the wonder of technology, when I’m chatting on Slack or submitting pull requests, the messiness of my life is abstracted into the background. The digital realm is a room of my own. My work comes into focus.
Abstractions also exist in software, where an abstraction is any piece of code that hides the complexity of the code underneath it, so that you can use it more easily without getting tripped up in the details. A good software abstraction, like a map, doesn’t tell you every single thing about the land, but it generally gives you a good enough idea to get by.
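To make that concrete, here’s a small, entirely hypothetical sketch in Python of what a software abstraction looks like: a function that hides an HTTP request, JSON parsing, and the shape of an API response behind one readable name. The endpoint and field names are invented for illustration.

```python
import json
from urllib.request import urlopen

def get_current_temp(city: str) -> float:
    """Return the current temperature for `city` in Celsius.

    Everything the caller doesn't need to care about -- the HTTP
    request, the JSON parsing, the shape of the response -- is
    hidden in here. (The endpoint is hypothetical.)
    """
    url = f"https://api.example.com/weather?city={city}"
    with urlopen(url) as response:
        payload = json.load(response)
    return payload["current"]["temp_c"]

# From the caller's side, the abstraction is a single readable line:
# temp = get_current_temp("Philadelphia")
```

From the caller’s side, the whole mess collapses into one line; the tradeoff is that when the network fails or the response shape changes, the hidden details leak back out.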
Abstractions are neither bad nor good in and of themselves. They’re just ways we humans make sense of the immensely complex worlds around us. But you have to understand the tradeoffs of what you’re abstracting away.
It’s here that I think we’re in a pretty dangerous spot these days, because there are some abstractions we deal with without ever registering that they’re abstractions at all. Something the online world often does, not only with celebrities but with all of us, is flatten us into abstractions.
Flattening Facebook and Twitter
There are a couple of recent examples I can think of. The first is the Buzzfeed story about the data scientist at Facebook who wrote an in-depth memo about monitoring politically influenced bot activity. She was a member of Facebook’s Site Integrity Team who had been fired, ostensibly after writing about all the issues she’d dealt with and how heavily they weighed on her. Instantly, the narrative about her became that she was a whistleblower, yet another one of the heroic voices raising concerns against Facebook. And then, after her memo leaked, her name vanished into the ether.
The story got Buzzfeed major clicks and got its writers credit. But I wanted to know so much more. What did she actually do for Facebook? The story said she was a data scientist, but that can mean many things across an organization as large as Facebook. It said she was in charge of deleting content, but then that she only reported it. What was the true story? The story then said:
[she] said she turned down a $64,000 severance package from the company to avoid signing a nondisparagement agreement. Doing so allowed her to speak out internally, and she used that freedom to reckon with the power that she had to police political speech.
This also doesn’t make any sense, since severance packages are usually signed upon being fired or leaving the company and have nothing to do with speaking on internal message boards.
There were a lot of loose ends that just didn’t add up for me. I wanted to know more: why her org structure didn’t care about the reports, exactly what kind of reports they were, what else she worked on, what the internal politics of her organization looked like and how those decisions got made. But the story marched on, very eager to get to the part of the memo where the data scientist said she felt like she “had blood on her hands,” ultimately painting her in a single, flat light, a spark of sensation, once and gone.
What she illuminated was, to me, a very hierarchical organization focused on PR perception at all costs. This is important, because it means that PR is absolutely a lever to get Facebook to make specific decisions. How can we use this information to better understand how to influence Facebook? The article doesn’t get into that.
I haven’t seen her name in the news for a few weeks now, but she’s out there, a real human person who now has to somehow get another job in the tech industry with her name (involuntarily) attached to this huge controversy, used only for clickbait, once and gone.
The second example is the very recent controversy over Twitter’s photo cropping algorithm. Over the weekend, Twitter blew up: in cropping the preview image for a tweet, the algorithm cut out a man’s Black coworker, leading many to conclude that Twitter’s photo cropping algorithm was racist.
Someone else was able to replicate these results in a juxtaposed image of Barack Obama/Mitch McConnell, and, immediately, the internet was off to the races trying to figure out who to blame at Twitter. There were lots of very angry threads and very little context. People, frustrated and incensed by the results of the crops, tested out cropping on all kinds of posts to see what the algorithm would select. A lot of them were serious. Some were funny. None looked good.
But then, an interesting thing happened: the creators of the algorithm weighed in. They first provided some context by linking to the original post where they announced the algorithm, and then talked about how they did it.
Then, the developers and data scientists who worked on the algorithm, as well as Twitter’s chief design officer, responded on Twitter:
Vinay Prabhu @vinayprabhu
(Results update) White-to-Black ratio: 40:52 (92 images) Code used: https://t.co/qkd9WpTxbK Final annotation: https://t.co/OviLl80Eye (I’ve created @cropping_bias to run the complete experiment. Waiting for @Twitter to approve Dev credentials) https://t.co/qN0APvUY5f

And, one of the original researchers also responded:
And then, finally, Twitter comms weighed in:
If you had only been looking at a couple of tweets, which was entirely possible because they dominated the conversation, it was easy to conclude that Twitter had implemented an ignorantly biased algorithm that it had no intent to fix.
But if you were (somehow, miraculously) able to link all of the tweets together, what came out was this:
Twitter implemented an algorithm to do automatic cropping in 2018.
It replaced a previous algorithm that actually looked at faces but had not been successful.
The new algorithm used saliency:
A region having high saliency means that a person is likely to look at it when freely viewing the image. Academics have studied and measured saliency by using eye trackers, which record the pixels people fixated with their eyes.
Saliency is used across the industry, including at Apple and other companies, and it’s not necessarily the best way to go (see the sketch after this list).
It was actually tested for bias, but unfortunately, it looks like a number of things slipped through the cracks, which the researchers acknowledged and said they would work to address.
Both the original researchers and the CDO of Twitter weighed in multiple times in the conversation, confirming what they did originally and what they would do now to re-examine the algorithm.
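To make the mechanics a little more concrete, here’s a minimal sketch of saliency-driven cropping, in Python with NumPy. This is my own illustration, not Twitter’s actual code: it assumes some upstream model has already produced a per-pixel saliency map, and it simply centers a fixed-size crop on the most salient point. The sketch shows why the bias question lives entirely upstream: the cropping logic itself is trivially mechanical, and whatever the model decides is “salient” determines who stays in the frame.

```python
import numpy as np

def crop_by_saliency(image: np.ndarray, saliency: np.ndarray,
                     crop_h: int, crop_w: int) -> np.ndarray:
    """Center a fixed-size crop on the most salient pixel.

    `image` is (H, W, C); `saliency` is an (H, W) map of scores from
    some upstream saliency model (assumed here, not implemented).
    The crop size must be no larger than the image.
    """
    h, w = saliency.shape
    # Find the single most salient pixel.
    cy, cx = np.unravel_index(np.argmax(saliency), saliency.shape)
    # Clamp the window so it stays inside the image bounds.
    top = min(max(cy - crop_h // 2, 0), h - crop_h)
    left = min(max(cx - crop_w // 2, 0), w - crop_w)
    return image[top:top + crop_h, left:left + crop_w]

# Toy usage: a random "photo" with a fake saliency blob in one corner.
rng = np.random.default_rng(0)
img = rng.integers(0, 255, size=(400, 600, 3), dtype=np.uint8)
sal = np.zeros((400, 600))
sal[300:320, 500:520] = 1.0   # pretend the model fixated here
preview = crop_by_saliency(img, sal, crop_h=200, crop_w=200)
print(preview.shape)          # (200, 200, 3)
```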
It took about 48 hours for the cycle to go from outrage, to people manually trying out the algorithm, to people performing their own experiments, to full-on explanation, to the original researchers getting involved, to comms finally closing the loop, as much as it can be closed at this point.
In that time, the context was abstracted to a single idea: Twitter was maliciously ignorant. This was unsurprising to me, but disappointing. Once the whole story came out, it was obvious that, in theory, they’d done everything right (at least as far as the external conversation indicated; as with the Facebook story, it’s impossible to know exactly what happened inside the black box): they had vetted the algorithm, checked it for racial bias, and, when the controversy arose, engaged consistently on Twitter with lots of very angry people who didn’t always come to the reply box with the best of intentions.
The Twitter employees turned from individuals into platforms: conduits for all the rage against all of these social media networks, against all the massively messed-up things they’ve been doing since their earliest days. They were abstracted away, both the responsible parties and the platform as a whole.
The larger story here, of course, is two-fold. First, it’s clear that, in this case, the right thing to do is to test the algorithm even more rigorously and show some follow-up results in public.
The even more right thing to do, as the CDO says in one of his tweets, would be to revert to manual cropping. But the bigger story is that, of course, Twitter can’t promise to switch to manual cropping, because manual cropping would probably mean a drop-off in the engagement they so crave, including the engagement brought about by this very controversy.
So ultimately, what’s the bigger harm here: a single algorithm (it could be!), or the entire structure of an ad-driven revenue model that will always push toward less visibility across all the dimensions of an issue, less consideration of people as individuals, and a higher volume of engagement, rather than letting users exercise creative control and allowing for nuance in conversations?
We are all humans, online and off
The internet has always made us flat abstractions: text bubbles, DMs, Slack chats, blog posts, Buzzfeed articles, without any context around what we are, who we are, what we believe. We are all large, we all contain multitudes, but the more I live and work online, the more I realize that what we gain in being able to communicate across time and space, en masse, we lose in context, in gesture, in understanding a single person in the sea of humanity. It’s this, combined with the global scope of outrage, that’s a dangerous form of abstraction today.
What I’m reading:
99% of browser profiles are unique and a good summary of the paper
Opening up old KGB documents, including about Chernobyl
The Newsletter:
This newsletter’s M.O. is takes on tech news that are rooted in humanism, nuance, context, rationality, and a little fun. It goes out once or twice a week. If you like it, forward it to friends and tell them to subscribe!
The Author:
I’m a machine learning engineer. Most of my free time is spent wrangling a kindergartner and a toddler, reading, and writing bad tweets. Find out more here or follow me on Twitter.