Cruise Control
Quick note: Welcome to all our new subscribers, many of whom came from Ryan Broderick’s excellent newsletter Garbage Day. Also, as part of a long-overdue rebrand, I had a silly cartoon logo made, featuring some of our favorite recurring characters:
Of course, I did not measure any of the various Substack image requirements and it won’t fit anywhere in full, but you can enjoy it here, this one time.
Amazon
Antitrust is having a moment - the Department of Justice and various state Attorneys General are battling Google, Apple is fending off App Store monopoly claims, and Microsoft still faces scrutiny after its recent acquisition of Activision.
Another tech giant is staring defiantly down the barrel of a major lawsuit - Amazon was sued by the FTC in September over a raft of claims the company has used its monopoly to stifle competition and reap windfall profits for years.
Like many such lawsuits, Amazon fought hard to keep most of the filings redacted, claiming that revealing the government’s evidence would expose trade secrets and, ironically, create a competitive disadvantage. Significant portions of the filing were revealed last week, however, and while anyone with cursory knowledge of how Amazon does business might not be surprised by the details, the breadth is impressive.
Where to begin? In the early 2010s, Amazon developed ‘Project Nessie’:
The algorithm predicted the likelihood that another online store that was currently offering a lower price on an item would match Amazon’s raised price. That meant the company risked having a higher price for some time, but Amazon determined the risk was worth it if competitors followed their lead at least 20% of the time.
Even if it wasn’t always successful, the higher prices netted Amazon huge gains at scale. In 2018 alone, the FTC estimates Nessie made the company $334 million in profit. That profit had to come from somewhere, of course - the complaint alleges Amazon has extracted more than $1 billion from American households using Nessie’s price manipulation algorithms.
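The expected-value logic the FTC describes is simple enough to sketch. This is a toy illustration, not Amazon's actual code; the function name and numbers are mine, but the 20% follow threshold comes from the complaint:

```python
def raise_is_worth_it(p_follow: float,
                      margin_if_followed: float,
                      loss_if_not: float) -> bool:
    """Return True if the expected gain from raising a price is positive.

    p_follow           -- predicted probability competitors match the raise
    margin_if_followed -- extra profit per unit when they follow
    loss_if_not        -- profit lost per unit while the raised price is undercut
    """
    expected_gain = p_follow * margin_if_followed - (1 - p_follow) * loss_if_not
    return expected_gain > 0

# At a 20% follow rate, a raise pays off whenever the upside is more
# than 4x the downside: 0.2 * m > 0.8 * l  =>  m > 4l.
print(raise_is_worth_it(0.20, 5.00, 1.00))  # True:  1.00 - 0.80 > 0
print(raise_is_worth_it(0.20, 3.00, 1.00))  # False: 0.60 - 0.80 < 0
```

The point being: once the math works even 20% of the time, the algorithm keeps prices high by default, and the losses land on shoppers.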
Elsewhere, Amazon used software to crawl the Internet to track its third-party sellers, penalizing them if they were caught offering discounts on different platforms or their own websites:
Among those penalties, Amazon could take away the “Buy Box,” or the yellow button that allows a customer to purchase a product. Roughly 98% of sales on Amazon take place using the Buy Box, according to recently unsealed portions of the complaint.
[…]
Along with taking away the Buy Box, Amazon also punishes sellers by burying their products deep in the search results or erasing the product’s price from public view, even if it is the best price available on Amazon’s store, the FTC said.
Amazon held its sellers captive on the platform, punishing them if they dared sell their products anywhere else at competitive prices, while it was using other software to manipulate prices in its favor.
Then there’s Amazon’s fulfillment platform, linked to Prime eligibility badges:
Amazon ties Prime eligibility — a badge that lets customers know the order will come with the free, two-day shipping guarantee that is part of a subscription to Amazon Prime — to sellers’ use of Amazon’s shipping and fulfillment services, the FTC said. Without Prime eligibility, sellers were relegated to a “near-invisible” version of Amazon’s digital store, the complaint read.
Anyone who’s used Amazon Prime is familiar with how Prime-eligible products get bumped to the top of searches - I’ve personally used it as a filter when I need products in a timely manner. Sellers must opt into Amazon’s additional services to receive preferred treatment in the Prime system, which allows Amazon to tack on more fees for each item sold:
The company pockets about 45% of every dollar a seller earns on the platform, according to a September study from the Institute for Local Self-Reliance, a research and advocacy organization.
When Amazon tested removing the fulfillment requirement from Prime, this happened:
Amazon briefly relaxed its policies, the FTC continued, when it began allowing sellers to fulfill their own orders and still be eligible for the Prime badge. That ended when an Amazon executive “had an ‘oh crap moment'” and realized that was “fundamentally weakening [Amazon’s] competitive advantage,” the FTC alleged in the newly unredacted portions of the complaint.
It is not a great argument that your company is not engaging in illegal practices when it literally cites ‘losing our monopoly’ as a reason to roll back a seller-friendly change.
It is a miracle anyone makes money on Amazon’s marketplace, with the company taking nearly half of every sale. But the extraction doesn’t stop there - we haven’t even touched on Amazon’s ‘advertising’ platform yet. I’ve called it a payola scheme, but the tech and finance press lauds Amazon’s ‘ad business’ as a triumph for the company, bringing in billions a year in additional revenue. Except, as usual, Amazon has its finger on the scale:
Amazon flooded its search results with irrelevant “defect” ads at the direction of Founder Jeff Bezos, pumping Amazon profits while steering shoppers to higher-priced goods, the Federal Trade Commission alleged in a newly unredacted portion of its antitrust lawsuit against the company.
“At a key meeting, Mr. Bezos directed his executives to ‘[a]ccept more defects’ as a way to increase the total number of advertisements shown and drive up Amazon’s advertising profits,” the FTC wrote in a now-public part of the complaint. The agency said that defect ads referred to those that are irrelevant or only somewhat relevant to what a user is searching for.
Right, yes. Not only is Amazon forcing sellers to pay for preferred listings to have a hope of appearing at the top of results, it’s showing their ads in completely irrelevant searches to rack up more impressions, which it charges for.
It might be difficult for Amazon to dispute that its advertising model is actually improving the customer experience when it’s serving shit like this:
The proliferation of junk ads led to more relevant organic results being crowded out. In their place, shoppers were served up products that were “plainly not what the customer searched for,” such as an ad for an LA Lakers t-shirt in a search for a Seattle Seahawks t-shirt.
Other results were more puzzling. In one example collected by an Amazon executive, “Buck urine” showed up first in a search for water bottles.
And saying shit like this:
Amazon weighed placing guardrails on ads in search results, but senior executives at the company ultimately determined they shouldn’t be “constrained” by limitations such as how relevant the products were to what shoppers search for.
And building systems to track the bad ads:
Even though Amazon knew defect ads worsened the search experience, internal experiments showed the practice had no detrimental effect to its advertising revenue, and therefore its profits. The company went as far as incorporating a “cost of defect” into its ad auction system “to make the most money from its ad auctions.”
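Pricing the harm of a bad ad into the auction, rather than filtering the ad out, could look something like this. Everything below is a hypothetical sketch; the function, field names, and numbers are illustrative, not Amazon's internals:

```python
def auction_score(bid: float, relevance: float, cost_of_defect: float) -> float:
    """Rank ads by bid, discounted by how irrelevant they are.

    relevance      -- 0.0 (irrelevant) to 1.0 (perfect match)
    cost_of_defect -- estimated revenue harm of showing a bad ad
    """
    return bid - (1.0 - relevance) * cost_of_defect

ads = [
    {"name": "Seahawks tee", "bid": 1.00, "relevance": 0.95},
    {"name": "Lakers tee",   "bid": 1.60, "relevance": 0.10},
]

# If cost_of_defect is set low, the irrelevant but high-bidding ad still wins.
ranked = sorted(ads, key=lambda a: auction_score(a["bid"], a["relevance"], 0.25),
                reverse=True)
print(ranked[0]["name"])  # Lakers tee
```

Notice the design choice: relevance never disqualifies an ad, it just discounts the bid, so a big enough bid always buys its way into an irrelevant search.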
Obviously, if I were an Amazon executive with my ethics glands removed (mandatory), listening to my team explain that we can make more money by simply serving more ads, regardless of whether they piss off shoppers or waste seller ad dollars, I’d say ‘sure, of course, let’s continue to do that.’ And, because Amazon operates every piece of the advertising, sale, and fulfillment process in near total secrecy, it can charge sellers for the right to waste their money on shitty ads and pay half their earnings back in fees.
The entire point of regulators like the FTC is to serve as a check on the behavior of companies, so they don’t decide that serving deer urine ads in unrelated searches is the path to infinite profits. Or lock millions of advertisers into an abusive relationship, with their money and even their inventory hanging on the whims of executives whose only remit is to make their earnings lines go up and to the right.
Every time we talk about a big tech lawsuit with potentially wide-ranging repercussions we must remember that these cases take years, are subject to external forces like changing political control in Washington, and rely on a legal system with a long track record of upholding a pro-business status quo. Even with reams of seemingly smoking-gun evidence, the FTC has a long, uphill slog ahead of it to extract even mild concessions from a company like Amazon, which has functioned as an ecommerce nation-state for decades and has little motivation to make any meaningful changes.
That said, Amazon did discontinue Project Nessie in 2019, when it received notice the FTC was sniffing around. Companies can be shamed into changing some of their worst behavior, and if the FTC and DoJ continue to rack up wins against its brethren, Amazon might feel pressure to dial back its awfulness a touch, which is good for sellers and shoppers.
Cruise
We’ve talked before about Cruise, the robotaxi company burning a hole in GM’s pocket to the tune of $2.5 billion a year. That may seem like an astonishing amount of money for a company operating four hundred cars in a couple American cities, but, you know, tech visionaries and all that:
Each Chevrolet Bolt that Cruise operates costs $150,000 to $200,000, according to a person familiar with its operations.
[…]
Those vehicles were supported by a vast operations staff, with 1.5 workers per vehicle. The workers intervened to assist the company’s vehicles every 2.5 to five miles, according to two people familiar with its operations. In other words, they frequently had to do something to remotely control a car after receiving a cellular signal that it was having problems.
It is such a tired tech trope that Cruise is paying 1.5 expensive tech workers to babysit each one of its tiny fleet of EVs rather than, you know, paying a person to just drive them around. Those people make up less than half of the two thousand employees Cruise has, which helps explain why it’s billions in debt and bleeding billions more.
What do all the non-babysitters do? Many of them work on Cruise’s all-important software, which the company has repeatedly claimed drives more safely than humans could. If that were true, wouldn’t the company have thousands of cars on the roads a decade after its founding? Wouldn’t it be thriving, rather than retreating from driverless entirely while it deals with multiple regulatory investigations?
Well, it turns out that despite the company’s claims, its cars have major safety concerns that it may have been concealing from regulators:
Even before its public relations crisis of recent weeks, though, previously unreported internal materials such as chat logs show Cruise has known internally about two pressing safety issues: Driverless Cruise cars struggled to detect large holes in the road and have so much trouble recognizing children in certain scenarios that they risked hitting them.
Oh! Yeah, that’s not great:
In particular, the materials say, Cruise worried its vehicles might drive too fast at crosswalks or near a child who could move abruptly into the street. The materials also say Cruise lacks data around kid-centric scenarios, like children suddenly separating from their accompanying adult, falling down, riding bicycles, or wearing costumes.
[…]
“We have low exposure to small VRUs” — Vulnerable Road Users, a reference to children — “so very few events to estimate risk from,” the materials say. Another section concedes Cruise vehicles’ “lack of a high-precision Small VRU classifier,” or machine learning software that would automatically detect child-shaped objects around the car and maneuver accordingly.
Vulnerable Road Users is certainly one way to describe human pedestrians, but sure, let’s go with it. The problem with Cruise’s training approach was that it lacked enough collected data on children and their potentially erratic behavior. It could only get more data by…driving its cars around children, which was obviously not ideal. So instead, Cruise rolled out patches, but it’s important to remember it could only properly test these fixes by…sending driverless cars into the vicinity of children. That the cars might hit. Cool!
Then there were the holes:
Cruise has known its cars couldn’t detect holes, including large construction pits with workers inside, for well over a year, according to the safety materials reviewed by The Intercept. Internal Cruise assessments claim this flaw constituted a major risk to the company’s operations. Cruise determined that at its current, relatively minuscule fleet size, one of its AVs would drive into an unoccupied open pit roughly once a year, and a construction pit with people inside it about every four years.
So the existing Cruise fleet would drive into open construction pits once a year. Excellent. Holes are known as ‘negative obstacles’ in industry jargon, meaning there is nothing obvious about them for the software to detect - and not every construction pit will be ringed by bright orange cones, because humans are pretty good at detecting and avoiding giant holes in the road.
Again, the problem is that in trying to recreate their best approximation of a perfectly-behaved human driver, software engineers simply couldn’t account for the millions of different encounters that occur any time a person gets behind the wheel of a car. Our human brains spot a child skipping down the sidewalk and think ‘hey, I should slow down and give that kid a wide berth, because children are prone to fall down, or trip, or spot a shiny object in the street and beeline into a dangerous situation’. It’s an instinctual response that has proven extremely difficult to translate into a coherent set of instructions to give a computer-operated car. American humans do have a ludicrous rate of child pedestrian death compared to peer nations, but that’s due to our giant SUVs and trucks and lax traffic law enforcement - a different blog post entirely.
For Cruise, safety concerns were eventually overruled by financial incentives - with the company bleeding money and its techno-libertarian CEO pushing Growth Mindset, the choice was made to put its cars on the road and hope they could patch issues before anyone noticed. Unfortunately, people do notice when your robotaxis hit fire trucks and drag pedestrians down the street under their wheels.
It is a common refrain around these parts, but couldn’t the billions investors poured into Cruise be used to do, I don’t know, literally anything else for road and traffic safety? The company has been forced to retreat and reset, and GM is not SoftBank (Cruise’s prior investor, because of course it is) and can’t simply write off a few billion a year with no progress. Uber abandoned its AV dreams years ago, and only Google’s Waymo, with its near endless funding and more measured approach, is left in the space. Why are we still working so hard to make robotaxis a thing, when the tech clearly isn’t there?
AI
What if Cruise were able to take - without permission - a detailed record of every driver’s behavior going back thirty years? That would probably allow it to create the fabled safety model its CEO so badly desires.
That is, in fact, the model many AI companies are using to create their LLMs - emphasis on ‘large’ - and the people investing billions in this ‘tech’ are worried that if they’re forced to obey pesky copyright standards they might be in trouble:
"The bottom line is this," the firm, known as a16z, wrote. "Imposing the cost of actual or potential copyright liability on the creators of AI models will either kill or significantly hamper their development."
Ya think? The e/acc weirdos weren’t the only ones getting upset about pesky ‘laws’, pretty much every major tech firm agreed with their assessment:
Google, Microsoft, and OpenAI made similar arguments — that the amount of data used to train their models is so massive there is no way they could figure out a way to pay for it.
And played the forgiveness, not permission card:
None of the companies denied using copyrighted material without authorization from rights holders. Instead, they generally argued that putting copyrighted material on the internet makes it "publicly available" and therefore fair game for use. Using that data to train an LLM constitutes "fair use" under current copyright law, the companies added.
When tech giants and start-ups backed by billionaire investors hoover up all the words on the Internet to train AI models they hope to sell for billions more, that’s just ‘fair use’. Obviously!
There are two problems here - the most obvious being that these companies are already profiting off the work of others. Asking ChatGPT to write a business email in the style of Cormac McCarthy may be cute and/or upsetting to your coworkers, but OpenAI has already ingested McCarthy’s copyrighted work and is selling you access to a bot that can regurgitate it convincingly. AI is such an obvious threat to actual writers and creatives that two of the major Hollywood guilds went on strike (and won!) over it.
The second problem is that AI is dumb, and makes lots of mistakes, because it is basically complicated sentence completion software. AI doesn’t have to be dumb, but to put the L in LLM you need to scrape as much of the Internet as you possibly can, and lots of the Internet is fake and made-up. Humans are such a messy species it’s difficult to think of a language set that would train an AI to give correct, concise answers. Advances in language models might work great for, say, companies who want to train software on existing datasets, or research scientists trying to digest large volumes of data, but thinking anyone can build a chatbot that consistently delivers accurate data based on a vague approximation of ‘the Internet’ is pure fantasy.
Which brings us here - the AI race now consists of companies climbing over one another to deliver deeply flawed chatbots that will garner enough attention that they can sell their creations on to the next set of investors for a juicy multiple. One particular Dipshit has even created his own pet chatbot running on the world’s least reliable data source:
A couple of days later, [Musk] announced that his latest company, xAI, had developed a powerful AI—one with fewer guardrails than the competition.
[…]
The xAI announcement says that Grok is built on top of a language model called Grok-1 that has 33 billion parameters. The company says it developed Grok in two months, a relatively short amount of time by industry standards, and also claims that a fundamental advantage is its “real-time knowledge of the world via the X platform,” or the platform formerly known as Twitter…
Before you run screaming, hear me out - if there is anyone on the planet who can expose AI chatbots as half-baked technology with no real societal value, it is Elon. He cobbled together a team of people whose primary criteria were ‘AI on resume’ and ‘willing to work for Elon Musk’ and they built a chatbot tuned to his non-existent sense of humor in two months. It’s going to suck so bad, and become such a laughingstock outside of the Twitter echo chamber, that maybe, just maybe, people will realize that GPT, Bard, and Bing are just slightly less obnoxious - but no more accurate - versions. Who knows.
SBF
Well, that was easy:
A jury on Thursday convicted FTX co-founder Sam Bankman-Fried of fraud, conspiracy and money laundering, the culmination of a month-long trial that saw the former crypto mogul take the stand in his own defense after his inner circle of friends turned deputies provided damning testimony against him.
As it turns out, when all of your co-conspirators testify that you did, in fact, do fraud, it is a pretty convincing argument to a jury! Obviously, it helped that prosecutors also had reams of evidence including chat logs and whatnot, but the case was pretty straightforward.
Some of the people who helped SBF run FTX have copped plea deals and may go to jail for a bit, which is standard for this sort of situation. But! Conspicuously absent from criminal proceedings were a few key people who helped Bankman-Fried build his empire.
The first are his parents, who are being sued by FTX to recover alleged millions they were given by their son for providing…advisory services? It is also very funny that they are both Stanford law professors and his mother taught legal ethics while his dad taught tax law as their son ran a billion-dollar offshore crypto fraud?
Also, curiously free of scrutiny is the law firm currently unwinding FTX’s finances:
By contrast, the lawyers currently in charge of FTX convinced the bankruptcy judge that they could investigate any wrongdoing at FTX themselves. Which is interesting, since the law firm, Sullivan & Cromwell, was a key legal adviser to FTX for more than a year before the bankruptcy.
A partner from the firm was named FTX’s general counsel back in 2021, when SBF was waist-deep in doing fraud. He remained at FTX after the bankruptcy, and Sullivan & Cromwell were the ones who helped initiate the bankruptcy and named its new CEO to oversee the unwinding. They helpfully handed over much of the information to prosecutors that got SBF sent away.
SBF has claimed the firm were more than simply outside counsel:
SBF claims that Sullivan & Cromwell took control of the company by misleading him—and at the same time reporting him to authorities—all while he was still CEO of their client, FTX. He has also maintained that Sullivan & Cromwell’s lawyers pressured him into resigning last November, in part by promising he would get to name the new board chairman. As soon as he signed over control, however, Sullivan & Cromwell’s handpicked CEO shut out SBF, and immediately named Sullivan & Cromwell as the law firm to represent FTX in the bankruptcy.
A bankruptcy judge saw no problem with the firm that had represented FTX up to the point of its demise overseeing its dissolution. Which, okay, perhaps you could make the argument that the lawyers best suited to unwinding SBF’s notoriously sticky web of finances were the ones who worked with him for years. But does that explain the level of secrecy the firm has maintained around the whole thing?
For starters, the Sullivan & Cromwell lawyers have convinced the judge to seal the names of FTX’s customers permanently and go so far as to declare them a trade secret. They’ve also sealed many documents in the case that are routinely disclosed, having to do with the sale of FTX assets, bonuses to key employees, and the hiring of professionals. The law firm has cited cybersecurity concerns and the need to protect confidential business information.
And there’s perhaps the most significant secret: Early on in the case, they quietly convinced the judge to give them sole discretion to legally shield from liability any people involved with FTX whom they chose to protect—and to seal those names too.
Huh. It doesn’t look great that the firm in charge of selling off FTX’s assets has hidden information on those sales and who’s involved in them from public view, and has the right to shield ‘people’ involved with FTX from legal liability?
This unusual provision played out during SBF’s trial:
If you followed the case, you may have seen how SBF’s defense lawyers were stymied from mentioning the role of lawyers at FTX (except for one lawyer, Daniel Friedberg, who, amongst his other issues, had tried unsuccessfully to keep Sullivan & Cromwell from running the bankruptcy).
So SBF couldn’t even mention the names of the lawyers who represented FTX, because those lawyers shielded themselves from liability. And external oversight:
Sullivan & Cromwell convinced the bankruptcy judge not to appoint an outside examiner, arguing that it would cost too much.
Oh yes, we must consider FTX’s creditors, who are the real victims in this thi-
Some of FTX’s customers appreciated the irony, given the fact that the bankruptcy is costing them more than a million dollars a day in professional fees.
Ahhhh, there it is. The noble souls at S&C are paying themselves a million dollars a day to unwind the mess they had some hand in creating, while shielding themselves and their former (?) partners from any and all accountability.
It is perhaps fitting that FTX, a company that turned out to closely resemble SBF’s infamous ‘magic box’ analogy, has become a different sort of magic box full of an unknowable amount of money and crypto, now run by a bunch of lawyers.
Short Cons
Reuters - “Facebook owner Meta is barring political advertisers from using its new generative AI advertising products, a company spokesperson said on Monday, cutting off campaigns' access to tools that lawmakers have warned could turbo-charge the spread of election misinformation.”
NYT - “Many U.S. troops who fired vast numbers of artillery rounds against the Islamic State developed mysterious, life-shattering mental and physical problems. But the military struggled to understand what was wrong.”
American Prospect - ““This rule is about a very basic principle that retirees who save their whole lives for a secure retirement should be able to rely on financial advice that they get that’s in their best interest, not in the interest of bad advisers,” said [Labor Secretary Julie] Su…”
ProPublica - “The TCP levels in the Smithwick Mills system are alarming to those who study water contamination. As with many chemicals, there’s limited information on TCP’s long-term effect on humans. But research involving animals shows evidence that it increases cancer risks at lower concentrations than many other known or likely carcinogens, including arsenic.”
AP - “The CFPB found that Citi employees were trained to avoid approving applications with last names ending in “yan” or “ian” — the most common suffix to Armenian last names — as well as applications that originated in Glendale, California, where roughly 15% of the country’s Armenian American population lives.”
Verge - “Because this most recent proposal would allow studios to use digital scans of dead actors without the consent of their estates or the guild, however, SAG-AFTRA has refused and expressed its desire for changes that would require the studios to pay actors each time their faces are used and receive consent from those actors before doing so.”
ABC.au - “The post in the Facebook group No Offshore Wind Farm for the Illawarra referenced a University of Tasmania study purportedly published in his journal that predicted offshore wind turbines "could kill up to 400 whales per year". "That paper does not exist," Professor Hanich said.”
Reuters - “The trade commission is investigating whether Southern Glazer's has unlawfully given price preferences to large chains such as Total Wine that are not accorded to smaller retailers.”
Intercept - “Whereas the Biden administration released a three-page itemized list of weapons provided to Ukraine, down to the exact number of rounds, the information released about weapons sent to Israel could fit in a single sentence.”
Know someone trying to train an AI model on a bunch of Internet posts? Don’t send them this newsletter!