House of Cards
AI
As if we weren’t swimming in enough breathlessly credulous coverage of AI writ large, industry darling Sam Altman got himself mixed up in a Succession-style power struggle last month. Apparently, one of the ‘key developments’ that led to OpenAI’s non-profit board briefly showing Altman the door was a letter from company researchers claiming they’d made a ‘powerful’ discovery:
Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one of the people told Reuters.
First off, can we not name multiple artificial intelligence projects after wacko conspiracy movements? Letter choices aside, what did this amazing Qbot do?
Given vast computing resources, the new model was able to solve certain mathematical problems…
Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.
That’s right, the discovery that shook the entire AI industry was OpenAI’s algorithms being able to solve grade-school math problems. If this seems underwhelming to you, it’s important to remember that as we have exhaustively detailed in these pages, current ‘AI’ is mostly good at putting combinations of words together in ways that sound convincing. Researchers believe math is different:
But conquering the ability to do math — where there is only one right answer — implies AI would have greater reasoning capabilities resembling human intelligence.
I mean, I guess? Don’t computers literally do math as, like, a core function of their programming? Am I going crazy? Coincidentally, as the OpenAI chaos was going on, actual scientists were publishing scathing reports on how bad the latest version of ChatGPT was at analyzing medical outcome data. In fact, the chatbot created an entire fake clinical trial dataset to support its wrong conclusions.
Should we, as a society, be panicking that AI might be smart enough to do algebra? Maybe. OpenAI’s board reportedly thought Altman was too focused on rapidly commercializing its products while researchers were raising concerns that the software was getting too smart. Another explanation is that much of the moral panic around AI seems to derive from the overactive imaginations of our tech illuminati.
In 2015, Elon Musk and Larry Page got into an argument:
Humans would eventually merge with artificially intelligent machines, [Page] said. One day there would be many kinds of intelligence competing for resources, and the best would win.
If that happens, Mr. Musk said, we’re doomed. The machines will destroy humanity.
It is a little funny that the guy who can’t run a website or build cars that avoid fire trucks thinks computers will grow smart enough to destroy us. Framed differently, however, it makes complete sense that Musk’s imagination conjures visions of AI that would immediately wipe out humans, because competitive annihilation is the default setting in Silicon Valley. Ted Chiang wrote a prescient piece on it half a decade ago:
The idea of superintelligence is such a poorly defined notion that one could envision it taking almost any form with equal justification: a benevolent genie that solves all the world’s problems, or a mathematician that spends all its time proving theorems so abstract that humans can’t even understand them. But when Silicon Valley tries to imagine superintelligence, what it comes up with is no-holds-barred capitalism.
An added layer of irony is that the people at the center of the debate over AI safety are also poised to become rich off it:
The people who say they are most worried about A.I. are among the most determined to create it and enjoy its riches. They have justified their ambition with their strong belief that they alone can keep A.I. from endangering Earth.
Yes, I definitely trust uhhhhh, the founders of Google, Facebook, Palantir, and Microsoft to protect humanity from dangerous technology. They can’t even keep us safe on the Internet they created.
Nor are they even true believers - the embrace of AI is more about riding popular trends than about any belief in the underlying tech:
Sundar Pichai was initially unimpressed with ChatGPT, given how much it got wrong. But OpenAI released the chatbot anyway and the public loved it — and he wondered whether Google also could release products that weren’t perfect.
[…]
Zuckerberg… had been obsessed with the metaverse. But Yann LeCun, his top A.I. scientist and a pioneer of the technology, warned that A.I.-powered assistants could make Meta’s platforms extinct.
Some of the richest, most powerful men on the planet are so easily swayed by people within their circles telling them ‘[X] will destroy our business’ that they’ve created a resource-devouring arms race to make chatbots that suck slightly less than their competitors’.
When gullible technocrats don’t happen to have a business under threat from AI, its proselytizers change tack, insisting computers will kill us all in a grimdark techno-future cribbed from sci-fi novels:
Mr. Musk explained that his plan was to colonize Mars to escape overpopulation and other dangers on Earth. Dr. Hassabis replied that the plan would work — so long as superintelligent machines didn’t follow and destroy humanity on Mars, too.
Mr. Musk was speechless. He hadn’t thought about that particular danger. Mr. Musk soon invested in DeepMind alongside Mr. Thiel so he could be closer to the creation of this technology.
The founder of what is now Google’s AI lab was literally going around Silicon Valley scaring the shit out of every billionaire he could snag a lunch date with, netting himself nine figures in a protracted bidding war. Normal stuff.
Despite the years and billions spent developing AI, we’re no closer to a race of evil computers taking over. It’s just a bunch of rich dudes gassing each other up over apocalyptic visions of the near future, playing the role of both villain and savior.
Spam
In lieu of superintelligence, the billions plowed into AI tech have been productive in at least one area. Turns out, if you feed chatbots immense datasets, you can create some really convincing spam:
The use case for AI is spam web pages filled with ads. Google considers LLM-based ad landing pages to be spam, but seems unable or unwilling to detect and penalize it.
The use case for AI is spam books on Amazon Kindle. Most are “free” Kindle Unlimited titles earning money through subscriber pageviews rather than outright purchases.
The use case for AI is spam news sites for ad revenue.
The use case for AI is spam phone calls for automated scamming — using AI to clone people’s voices.
The use case for AI is spam Amazon reviews and spam tweets.
AI is making every website you visit to search for or buy things much, much worse. The hardest part of selling anything is marketing it, but when you can ask a chatbot to write marketing material based on millions of websites it’s scraped, and it coughs up an approximation of human language, you can make up in quantity what you lack in quality. You used to need to hire a human with a modicum of critical thinking skills to write your ad copy, but now a free script can do it, and you can feed hundreds or thousands of pages full of chatbot sputum into Google or Amazon or Facebook and reap the rewards.
Here is an SEO spammer explaining exactly how he hijacked millions of Google search impressions from a competitor. A byproduct of AI companies having already turned their content scrapers loose across the Internet is that their chatbots are good at offering Stealing-as-a-Service to the wider public. You used to need someone with a programming background to scrape a website’s code and duplicate it; now a free AI service will do it for you, and use the output to refine its model in the process.
AI spam has taken on many forms - numerous once-prestigious publications and journalistic outlets are leaning on chatbot content as their corporate bosses slash human writing jobs to wring what little profit they can out of an industry in a self-imposed death spiral. Sports Illustrated outsourced filler articles to a third-party company and the results were low quality even by AI standards:
The AI authors' writing often sounds like it was written by an alien; one Ortiz article, for instance, warns that volleyball "can be a little tricky to get into, especially without an actual ball to practice with."
Arena Group, the latest vulture feasting upon dozens of legacy magazine brands, issued a bizarre denial claiming the authors and writing were real, despite evidence to the contrary. But, if you’re running a media company and a vendor promises you human-sounding writing at pennies on the dollar, why wouldn’t you take it? Would you bother to check if any of it makes sense? Editors are expensive! Chatbots don’t ask for raises or attempt to unionize.
The good news for companies relying on AI bots to write marketing or chum articles is that the billions pouring into AI startups are creating the same sort of cost deflation we saw in ride-sharing and food delivery in the early days - the costs of building and running an LLM are borne by investors, so the latest buzzy startup can offer its spam-generation services to spammers for next to nothing. Meanwhile, the vast quantities of garbage accumulate like fatbergs in search engines, on social media, and everywhere we go for information or entertainment.
Algorithms
CW: descriptions of child abuse, pedophilia
Another reason to be skeptical of AI claims by tech giants is that they can’t seem to get a handle on the algorithms they built to do things like not show people child pornography:
[Meta] set up a child-safety task force in June after The Wall Street Journal and researchers at Stanford University and the University of Massachusetts Amherst revealed that Instagram’s algorithms connected a web of accounts devoted to the creation, purchasing and trading of underage-sex content.
Five months later, tests conducted by the Journal as well as by the Canadian Centre for Child Protection show that Meta’s recommendation systems still promote such content.
First off, it should not require journalists scrolling through child abuse groups to call attention to a problem of this magnitude:
A Meta spokesman said the company had hidden 190,000 groups in Facebook’s search results and disabled tens of thousands of other accounts, but that the work hadn’t progressed as quickly as it would have liked. “Child exploitation is a horrific crime and online predators are determined criminals,” the spokesman said, adding that Meta recently announced an effort to collaborate with other platforms seeking to root them out.
Sure, okay, pedophiles may find sneaky ways to use Facebook to trade child porn (though, arguably, a company that claims cutting-edge video and image recognition should be able to combat this), but surely having CSAM keywords showing up in your fucking recommendation engine is a more easily fixable issue:
On a Journal Instagram test account, Meta wouldn’t allow search results for the phrase “Child Links,” but the system suggested an alternative: “Child Pornography Links.” After Meta blocked that term following a query by the Journal, the system began recommending new phrases such as “child lingerie” and “childp links.”
Nor is the problem confined to the ‘determined criminals’ trading awful content in secret groups. Simply following accounts of young women and influencers on Instagram caused sexualized underage content to show up in the platform’s suggested videos:
The Journal sought to determine what Instagram’s Reels algorithm would recommend to test accounts set up to follow only young gymnasts, cheerleaders and other teen and preteen influencers active on the platform.
Instagram’s system served jarring doses of salacious content to those test accounts, including risqué footage of children as well as overtly sexual adult videos—and ads for some of the biggest U.S. brands.
One of the many problems with large-scale machine learning algorithms is that, when trained on actual human behavior, they can do things like associate ‘young gymnasts’ with ‘sexualized children’ and automatically serve up content that, again, would not be allowed on Instagram in the first place if the company’s safety tools worked even remotely as well as they claim. Not only does Meta fail to flag or block it, it deems it brand-safe enough to put beside Pizza Hut and Disney ads.
I have said for years that Meta’s platform is simply too big, its software too much of a black box for even its seasoned engineers to fully understand. Zuck’s continued insistence on slashing risk and safety teams while pouring billions into pet projects means there are fewer people to diagnose these problems, and the company’s growth-focused approach means it will always choose more ads and profits over user safety.
It is telling that, when the Wall Street Journal presented concrete evidence that its platform was distributing the most heinous content, Meta’s PR team cried fake news:
Meta said the Journal’s tests produced a manufactured experience that doesn’t represent what billions of users see.
If this sounds familiar, it is because Elon Musk has trotted out the ‘if you trick our algorithms it’s actually defamatory’ gambit in his recent lawsuit against Media Matters, which found many examples of Nazi content running alongside brand ads. Tech firms insist they should be immune from both regulation and accountability for what their software shows people, even and especially when it’s hate speech or child porn.
Climate
The UN Climate Change Conference (aka COP28) is being held right now in the UAE for some reason, and the conference’s current president is also the CEO of the Abu Dhabi National Oil Company, because sure, why not. Despite some generally good-sounding news coming out of the conference, the contradiction of an oil company CEO running a climate summit has overshadowed whatever modest gains the rest of the world is working towards.
Before the conference even got going, a whistleblower leaked a trove of documents showing that Sultan Al Jaber and the UAE planned to use COP28 to cut oil and gas deals for the country.
Then, during an event called ‘She Changes Climate’ that Al Jaber was attending for, I assume, diversity reasons, he said this:
There is no science out there, or no scenario out there, that says that the phase-out of fossil fuel is what’s going to achieve 1.5C.
This week, Al Jaber backtracked a little, admitting that a phase-down or phase-out of fossil fuels was inevitable at some indeterminate future time. Anything ‘phase’-related has been controversial at COP, with Saudi Arabia refusing to sign on to any agreement that includes either word.
It is understandable that two despotic Middle East oil states do not want the world setting short-term deadlines to phase out the resource that allows them to cling to power, but the UN can surely do better than staging an event in Dubai where seventy thousand attendees fly in to listen to fossil fuel industry propaganda?
Oil is so vital to tyrannical leaders around the world that Vladimir Putin is making one of his rare trips outside Russia since the invasion of Ukraine to…the UAE and Saudi Arabia, to talk about ways their cartel can continue to prop up his flailing regime.
Whatever ostensible gains are being made as a result of COP and other initiatives like it, allowing countries like the UAE, Saudi Arabia, and Russia to dictate and influence the terms of agreements meant to help the rest of the world, slowly choking on the fumes of their empires, is counterproductive at best.
Short Cons
Rest of World - “When Rest of World tested ChatGPT’s ability to respond in underrepresented languages, we found problems reaching far beyond translation errors, including fabricated words, illogical answers and, in some cases, complete nonsense.”
WIRED - “Security researchers and technologists probing the custom chatbots have made them spill the initial instructions they were given when they were created, and have also discovered and downloaded the files used to customize the chatbots.”
Platformer - “Three days after Amazon announced its AI chatbot Q, some employees are sounding alarms about accuracy and privacy issues. Q is “experiencing severe hallucinations and leaking confidential data,” including the location of AWS data centers, internal discount programs, and unreleased features, according to leaked documents obtained by Platformer.”
Bellingcat - “A US artificial intelligence company surreptitiously collected money for a service that can create nonconsensual pornographic deepfakes using financial services company Stripe, which bans processing payments for adult material, an investigation by Bellingcat can reveal.”
Lever - “Yet unbeknownst to many patients, insurers can change their drug coverage throughout the year, thereby removing medications that enrollees were promised. When this happens, those who lose access to their medicines are usually barred from immediately moving to a different insurance plan.”
Grist - “Over the last five decades, Martin has made millions of dollars off this real estate boom, building a development empire on West Maui and turning hundreds of acres of plantation land into a paradise of palatial homes and swimming pools.”
Science - “But speaking to Science anonymously, four former members of Zlokovic’s lab say the anomalies the whistleblowers found are no accident. They describe a culture of intimidation, in which he regularly pushed them and others in the lab to adjust data.”
Forbes - “Sony has announced that certain TV shows that players may have bought through PlayStation will be deleted from their libraries. Yes, shows you paid for, just…deleted.”
NYT - “Netflix burned more than $55 million on Mr. Rinsch’s show and gave him near-total budgetary and creative latitude but never received a single finished episode.”
Know someone worried AI will follow humanity to Mars and wipe us out? Send them this newsletter!