In The Shadows
Shadow Work
When we read the words ‘unpaid labor’ a few things come to mind - women caring for children or aging relatives, forms of corporate wage theft like unpaid overtime, things of that nature.
‘Shadow work’ was coined by Austrian philosopher Ivan Illich in the ‘80s as a blanket term for all the unpaid and invisible labor in our daily lives. In 2015 the definition was updated to include tasks we once depended on others for, but now do ourselves:
This includes everything from banking to travel bookings, ordering food in restaurants to bagging groceries, not to mention downloading and navigating the apps we need to pay parking tickets or track our children’s school assignments or even troubleshoot our own tech problems.
This framing dovetails nicely with the endless drumbeat about automation coming for our jobs. Author Craig Lambert points out this type of automation has already come for quite a few lousy jobs, forcing us to book our own travel and bag our own groceries.
Many of us find the modern travel booking situation much improved - we can secure flights and hotels with a few clicks of the mouse or taps of the phone, rather than sitting on hold for half an hour with a bored-sounding airline employee searching fare options to Disney. However! When airlines do away with thousands of phone support reps and there is, say, a major weather event, chaos may ensue. Companies cutting staffing numbers in favor of websites and apps may improve the customer experience during ideal conditions, but if you’re stranded at an airport and need a human being to rebook you, a slick app is little consolation.
This has been taken to its extreme by the big tech firms whose software solutions drive much of the self-serve automation in daily life. If you’re a consumer of, say, Netflix, or you pay for Google’s business suite, or you’re subscribed to Spotify, it may be extremely difficult to get human support for your issues. Steam, the country’s largest online video game marketplace, does not offer phone support, despite the fact it processes billions of dollars’ worth of purchases a year.
Attend any consumer-facing industry conference these days and you’ll stroll past a dozen booths from companies as wide-ranging as Salesforce and Pitney Bowes assuring you their AI-driven chatbots will reduce or eliminate the need to hire pesky humans to interface with your customers. For any of us who’ve been trapped in Chatbot Hell, the prospect of increased adoption means we’ll be stuck all-caps shouting at robots to sort out an increasing number of daily problems.
Companies are so committed to loading us up with more shadow work they’re willing to tolerate waves of theft at self-checkouts:
Walmart CEO and President Doug McMillon told CNBC earlier this month that theft "is higher than what it has historically been" and there will be consequences "if that is not corrected over time."
He is clearly not referring to consequences for the company’s executives, or its stock price, which is up over the last year. McMillon has the gall to threaten to close stores and punish Walmart customers if they don’t…stop…other customers from stealing? What? We’ve talked before about Walmart’s penchant for using local law enforcement as a free ‘enhanced security’ option for its stores. Clearly, even the company’s attempts to turn minor shoplifting into jailable felonies haven’t stemmed the tide. Perhaps it’s the one instance of shadow work that can - if you’re willing to risk it - result in some form of compensation.
American companies’ thirst for profits means we’ll only be doing more and more shadow work. Everything will be done via app or website, we’ll be talking to ChatGPT for customer service, and companies will continue to slash employees anywhere and everywhere they can. What won’t happen as a result of computers taking more and more customer-facing jobs is a net improvement in society, working conditions, or anything other than fattening the wallets of investors, executives, and shareholders. Don’t hold your breath waiting on those hoverchairs.
Note: while looking for an image to grace this post, I came across a disturbing TikTok trend #shadowwork that promotes some warped self-help ideas. Not the same thing!
AI
Speaking of ChatGPT, the latest, greatest AI chatbot, let’s chat (sorry) about its meteoric rise and equally lofty expectations. Built by quasi-non-profit Elon-Musk-adjacent company OpenAI, ChatGPT made waves last November with its ability to carry out detailed instructions and write remarkably human-sounding responses. Contrary to the popular narrative, ChatGPT did not spring from the great minds at OpenAI fully formed, eager to write sonnets or edit your resume.
The software’s predecessor, GPT-3, was a fairly convincing chatbot, but it had problems:
ChatGPT’s predecessor, GPT-3, had already shown an impressive ability to string sentences together. But it was a difficult sell, as the app was also prone to blurting out violent, sexist and racist remarks. This is because the AI had been trained on hundreds of billions of words scraped from the internet—a vast repository of human language. That huge training dataset was the reason for GPT-3’s impressive linguistic capabilities, but was also perhaps its biggest curse.
We’ve heard this tune before - Microsoft, Facebook, and others have all tried to launch AI bots and failed in interesting ways. The core problem is that, much like the CAPTCHA image puzzles Google uses to train its image recognition - more shadow work we’re doing on their behalf - training an AI on words found on the Internet means, well, you’ve seen the Internet. It’s not great.
So what did OpenAI do? Before releasing the next iteration of their bot, they needed AI models to screen out offensive and bigoted language. They turned to our old friend Sama in Kenya:
The premise was simple: feed an AI with labeled examples of violence, hate speech, and sexual abuse, and that tool could learn to detect those forms of toxicity in the wild.
[…]
To get those labels, OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya, beginning in November 2021. Much of that text appeared to have been pulled from the darkest recesses of the internet. Some of it described situations in graphic detail like child sexual abuse, bestiality, murder, suicide, torture, self harm, and incest.
They paid the workers tasked with reading and tagging some of the most obscene, hateful speech…one to two bucks an hour. A few ended up with PTSD, no big deal:
One Sama worker tasked with reading and labeling text for OpenAI told TIME he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child.
From OpenAI’s point of view, this is simply a cost of doing business. And, credit where it is due, by all accounts ChatGPT’s tagging system works quite well. In English. The tool has been so vigorously adopted it’s making waves in academia, causing professors and administrators concern that students will use the bot to complete assignments, take exams, and write essays.
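The pipeline described in those excerpts - humans tag toxic snippets, a model learns to flag similar text in the wild - is garden-variety supervised classification. Here’s a minimal sketch of the idea in Python; the data, labels, and scoring are toy stand-ins for illustration, not anything resembling OpenAI’s actual system:

```python
# Toy sketch of a label-then-classify toxicity filter.
# All examples here are illustrative stand-ins, not real training data.
from collections import Counter
import math

# "Labeled examples" -- in the real pipeline, human workers produced these tags.
LABELED = [
    ("i will hurt you", 1),          # 1 = toxic
    ("violent hateful threat", 1),
    ("you deserve to be hurt", 1),
    ("have a lovely day", 0),        # 0 = benign
    ("thanks for the help", 0),
    ("what a lovely recipe", 0),
]

def train(examples):
    """Count word frequencies per label (a naive Bayes-style model)."""
    counts = {0: Counter(), 1: Counter()}
    totals = {0: 0, 1: 0}
    for text, label in examples:
        for word in text.split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def toxicity_score(model, text):
    """Log-probability margin: positive means 'looks toxic'."""
    counts, totals = model
    score = 0.0
    for word in text.split():
        # Laplace smoothing so unseen words don't zero everything out.
        p_toxic = (counts[1][word] + 1) / (totals[1] + 2)
        p_benign = (counts[0][word] + 1) / (totals[0] + 2)
        score += math.log(p_toxic / p_benign)
    return score

model = train(LABELED)
print(toxicity_score(model, "hateful threat") > 0)  # flagged
print(toxicity_score(model, "lovely day") > 0)      # not flagged
```

The cheap part is the forty lines of math. The expensive part - the part outsourced to Kenya for a couple bucks an hour - is producing the labels.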
The problem with ChatGPT, and AI more generally, is that while it can assemble sentences and paragraphs in ways that sound convincingly natural, there’s no guarantee the information is correct. It is good for some things - I’ve used it for brainstorming prompts, for instance. Friends have used it to answer statistics questions, solve coding riddles, and even write job descriptions. For relatively minor, time-saving tasks like this, AI can be great. We can finally force the damn machines to do the mundane tasks we loathe.
However! A cute brainstorming bot is not going to make anyone rich, and even non-profit tech companies fuckin’ love money. OpenAI is planning to raise more than $10 billion to do…stuff with ChatGPT and its other creations (they’re behind DALL-E as well). Microsoft, in the shadow of Google’s search dominance for years, hopes ChatGPT can revive its Bing search engine, and given how awful Google search has become I’m also hopeful it can provide a useful alternative. Google is worried enough to focus resources on beefing up its own AI bots in response.
Using AI to make Internet searches more useful is indeed a quality of life improvement for the English-speaking world, but what’s the catch? AI is finding new ways to help people like me, I discovered:
Is this…good? I guess? It would be nice if every contractor or freelancer out there could use AI to get paid more reliably, but I am skeptical. It is easy for software companies to roll out minor optimizations and slap an AI tag on it while providing little in the way of evidentiary proof. I doubt Quickbooks is going to share the internal research they’ve done testing email subject lines, and so I am forced to take their word for it. AI!
We may be on the precipice of AI being more openly used in search engines, work settings, and customer service centers. While there are certainly more efficiencies to be found around the edges, it’s important to remember that as cool as ChatGPT is to play around with, it’s an expensive tech project built on the backs of poor workers in Africa forced to read disgusting porn and violence fantasies. And, as always, it’s built on us, because every one of the millions of people typing silly things into the software is optimizing its engine. For free. Shadow work!
One of the most glaring limitations of modern AI is its inability to draw fingers. It’s become a running joke in tech circles because any time you ask an AI to draw fingers, the results are unsettling at best. AI knows what a finger looks like. It knows fingers attach to hands. But, for whatever reason, the most brilliant minds in computer science cannot train a freakin’ bot to output anything resembling a hand with five fingers on it. Is it a metaphor? Computers can mostly do the mundane jobs in place of humans, but they literally cannot replicate or replace us.
FTX
It has been a while since we checked in with the high-speed car crash that is the FTX bankruptcy and the resulting criminal indictment against its founder. Both are still very much ongoing, but more information has come out as the attorneys tasked with unraveling the exchange’s finances publish court filings.
One of blockchain’s promises is that all transactions are decentralized and immutable or whatever, but the reality behind every crypto rug pull, scam, and collapse is that the places where people buy and sell their tokens are very much not public, and so stuff like this happens:
In FTX’s code, most accounts had a “borrow” flag set to zero, meaning that they could not have negative balances, but some 4,000 accounts had the borrow flag set to some positive number, meaning that FTX would lend them the money up to some credit limit. Of those 4,000 accounts, 41 had credit limits of $1 million to $150 million. One — Alameda — had a higher limit. Alameda’s limit was $65 billion.
Alameda was able to ‘borrow’ about as much as the entire FTX enterprise was worth. Those funds came from existing customer deposits - customers who could not, of course, do any borrowing of their own. Not content to allow its hedge fund to trade on absurd margin, FTX coders added a second flag to allow insiders to withdraw funds:
There was another flag in the code, though, “can_withdraw_below_borrow.”
[…]
One account had that flag set, says the presentation: Alameda. To the tune of $65 billion. Setting the borrow flag to $65 billion and the can_withdraw_below_borrow flag to true is functionally equivalent to “Alameda can take as much of FTX’s customer money as it wants, remove it from the exchange, and spend it on whatever.”
Not only could Alameda’s balance dip to negative sixty-five billion dollars if necessary, but it could withdraw funds (in fiat!) from the exchange to be used on political donations, restaurants, real estate, whatever.
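To see why those two flags together amount to a blank check, it helps to sketch the logic the filings describe. This is a hypothetical reconstruction - the names, structure, and numbers are illustrative, not FTX’s actual source code:

```python
# Hypothetical sketch of the flag logic described in the court filings.
# Account fields and function names are illustrative, not FTX's real code.

def can_withdraw(account, amount):
    """Ordinary accounts can't go below zero. A borrow limit lets the
    balance go negative up to that limit, and one extra flag lets the
    account actually *withdraw* into negative territory, not just trade."""
    new_balance = account["balance"] - amount
    if new_balance >= 0:
        return True
    # Negative balance: only allowed up to the borrow limit...
    if new_balance < -account["borrow_limit"]:
        return False
    # ...and only withdrawable (vs. merely tradable) with the override.
    return account["can_withdraw_below_borrow"]

regular = {"balance": 1_000, "borrow_limit": 0,
           "can_withdraw_below_borrow": False}
alameda = {"balance": 1_000, "borrow_limit": 65_000_000_000,
           "can_withdraw_below_borrow": True}

print(can_withdraw(regular, 5_000))           # False: can't go negative
print(can_withdraw(alameda, 10_000_000_000))  # True: well under the limit
```

With a $65 billion limit and the override set to true, the check effectively never fails for the one account that matters.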
The problem with computer code is that unless you delete all of it there is an inconvenient, centralized record of what you built. In the early days of the FTX collapse various theories were floated, including in these very pages, speculating that its insiders were incompetent rather than malicious. It’s hard to argue you didn’t know what you were doing when you wrote what may as well have been labeled THEFT MECHANISM into your exchange’s source code, and only applied it to the one account you controlled.
Gmail
Have you noticed more political emails in your inbox recently? That’s because Google has cut a deal with politicians to allow their unsolicited, uh, solicitations into your inbox:
Gmail users may start seeing a lot more political emails in their inboxes, partly a result of Google bowing to pressure from conservatives who claimed the company marked Republican emails as spam more often than others.
Lest the blame be laid at the feet of the greedy GOP - whose wildly illegal fundraising tactics we’ve discussed at length - Dems signed on too. Both parties depend on a steady stream of suckers to donate their hard-earned money to candidates who will ignore them while in office.
It’s worth noting that candidates and campaigns have to be approved by Google to enter this program, which is yet another form of unaccountable gatekeeping by giant tech firms. Back in 2018, non-profit advocacy groups raised the alarm when Google began routing their emails away from inboxes. Four years later the company has caved to conservative threats and is picking and choosing which grifters get special, protected access to American eyeballs. Lovely.
Short Cons
ProPublica - “In emails, officials calculated what McNaughton was costing them to keep his crippling disease at bay and how much they would save if they forced him to undergo a cheaper treatment that had already failed him.”
NYT - “In a recent interview on [Joe Rogan], which has an estimated audience of 11 million listeners per episode on Spotify, a guest from Alaska presented an explosive discovery: There are tens of thousands of priceless woolly mammoth tusks lying on the river floor.”
Ars Technica - “According to a new study, a little over 70 percent of prescription drugs advertised on television were rated as having "low therapeutic value," meaning they offer little benefit compared with drugs already on the market.”
Know a CEO thinking of using AI to automate away key customer service elements of their business? Send them this newsletter!