Ranked Choice
Google (Alphabet, whatever) is a giant, extremely profitable company with a virtual monopoly on web search, online videos, and display advertising. You would think that, given its vast resources and access to the world’s top tech talent, Google would be well positioned to take advantage of the recent AI hype cycle. Not so much!
The monopoly search giant has received criticism for being slow to react to new LLM chatbots and, in response to Microsoft’s partnership with OpenAI, has hustled to incorporate AI into its own search results. How’s that going?
Instead of highlighting links to content from expert humans, the “Search Generative Experience” (SGE) uses an AI plagiarism engine that grabs facts and snippets of text from a variety of sites, cobbles them together (often word-for-word) and passes off the work as its creation. If Google makes SGE the default mode for search, the company will seriously damage if not destroy the open web while providing a horrible user experience.
Oh, right, yes. Google’s brilliant idea - parroting Microsoft’s Bing rollout - is to use its powerful chatbots to aggregate a bunch of articles and sources into credible-sounding search responses. This is bad on both fronts - Google’s AI doesn’t perform any better than its peers at producing accurate data, and by cribbing the work of human authors it’s stealing traffic from the operators of real websites, many of whom depend on Google’s search traffic to stay alive.
Google claims its goal is still to drive visitors ‘off site’, but its foray into AI results continues a trend decades in the making. Since the early days of Sheryl Sandberg leaning in to clutter Google’s search results with the maximum tolerable number of ad units, the company has focused its efforts on making search more profitable at the expense of accuracy.
AI chat bubbles are designed to replace the ‘featured snippets’ Google rolled out years ago - the suggestions Google scrapes from sites its algorithm decides are credible. This strategy drives much less traffic to publisher sites, because if the excerpted phrase answers a searcher’s question, why click? At least the snippets are quoting from the source website though, rather than plagiarizing answers and still managing to get them wrong:
Google’s bot says that “the American Cancer Society recommends that men and women should be screened for colorectal cancer starting at age 50.” However, the American Cancer Society’s own website says that screenings should start at age 45, so this misleading “fact” probably came from elsewhere.
There’s also a bulleted list of “reasons to have a colonoscopy” that doesn’t include “routine screening,” implying that you should only get the procedure if you have symptoms. The bulleted list is copied word-for-word from an article on an Australian Government health site called BetterHealth. The article actually lists “screening and surveillance for colorectal cancer” as a reason, but Google’s bot decided not to copy that fact.
Google controls over 90 percent of the search market, so it’s probably fine that its super advanced LLM can’t correctly answer simple questions about one of the most common preventative medical procedures for half the populace? AI, baby!
Then there’s Google’s ad business, which relies on the company’s powerful algorithms (using AI/ML) to dominate both search and display, raking in a few hundred billion dollars a year. Surely, with such a massive advantage over the competition, Google does a good job policing where those ads show?
Google’s YouTube runs ads on its own site and app. But the company also brokers the placement of video ads on other sites across the web through a program called Google Video Partners. Google charges a premium, promising that the ads it places will run on high-quality sites, before the page’s main video content, with the audio on, and that brands will only pay for ads that aren’t skipped.
Google violates those standards about 80% of the time, according to research from Adalytics, a company that helps brands analyze where their ads appear online. The firm accused the company of placing ads in small, muted, automatically-played videos off to the side of a page’s main content, on sites that don’t meet Google’s standards for monetization, among other violations.
That’s right, eighty percent of the time Google violates its own standards for ad quality, when the ads run on third-party websites. It charges advertisers billions of dollars to appear on non-compliant pages. If Google’s ad safety algos worked eighty percent of the time, one might be inclined to cut the company a little slack, but twenty percent? Come on.
If that weren’t bad enough, Google plans on rolling out AI-generated ads to run on the junk websites it doesn’t police properly. Last year the company debuted tech to hash together images you upload to its ad platform into short videos, showing unsuspecting YouTube viewers 15 second nonsensical clips with ad text overlaid on them as part of its new Performance Max system which - you guessed it! - uses machine learning models to optimize ad spend.
One reason so many shitty websites exist to serve Google’s non-compliant ads is Google itself - the company’s search algorithms have been gamed for profit from the moment the search giant stepped onto the scene. Called ‘SEO’ in the business, the art of tricking search engines into funneling you free traffic is a lucrative industry. SEO is so important to digital retailers that they often create entirely different versions of their sites to retain ranking. The reason every recipe you find on Google has a thousand-word preamble before it gets to the directions? SEO.
All the SEOized sites then feed traffic back into Google’s ad ecosystem by running its AdSense tags, still the best way to make money off banner advertising. It’s an ouroboros of poor quality content, unnecessary fluff and filler, all in the name of protecting Google’s insurmountable monopoly. Unfortunately for everyone using the search engine to try to find anything, the experience has become a lot less useful as Google packs more of its self-optimized junk into precious browser space.
Not only is Google aiming to make its own search worse, AI is causing an explosion of the shitty sites that have plagued results for years. Google will have to tune its AI-powered algorithms to filter out the AI-created crap striving for SEO clicks and ad dollars from Google, who’s using a different AI to create ads for advertisers to run in secret hidden pockets on the junk AI websites. The real singularity will be when the search experience is just a bunch of language models talking to each other, a thousand monkeys with typewriters attempting to solve the puzzle of which pizza place is near me right now (and not a ghost kitchen).
Drugs
By any measure, we are living in an unprecedented age of medical discovery. The Herculean effort to produce COVID-19 vaccines was nothing short of remarkable. We’re tantalizingly close to groundbreaking vaccines and treatments for pancreatic cancer, breast cancer, Alzheimer’s, malaria, and TB.
The most competitive race in pharma these days, though, is weight loss. You’ve likely heard of Ozempic, the diabetes drug being used off-label for weight loss. There’s also Wegovy, which is just a larger dose of Ozempic. If you don’t like injections, Rybelsus is the pill form of, I repeat, a diabetes drug not approved for weight loss but administered for it nevertheless. If Novo Nordisk’s offerings aren’t your jam, never fear! Eli Lilly has an injectable called Mounjaro, and its latest offering, retatrutide, had a very successful phase 2 clinical trial, delivering a whopping 24 percent reduction in body weight at 48 weeks. Lilly’s making a pill too, because weekly injections to get skinny are a drag.
We are in the middle of a pharma gold rush to capture market share in a chronically overweight country. That is not to say that drug company greed should overshadow the benefits of drugs that prevent or slow the potential time bomb of a fifth of the world developing diabetes by 2050. It is worth asking why we need six or eight or ten weight loss drugs on the market, saturating our televisions with ads, racking up untold billions in health insurance costs while the FDA stubbornly refuses to regulate the unhealthy junk the food industry pushes on us.
Meanwhile! The country is suffering a major shortage of necessary drugs, including cancer treatments and antibiotics. Sometimes it’s manufacturing hold-ups:
The supply chain problems, particularly for generic drugs, are fairly well known: a concentrated set of ingredients coming mostly out of India and China, a system of bulk purchasing for hospitals that leads generic manufacturers to see no money in manufacturing certain products.
And sometimes it’s domestically profit-driven:
Another factor driving shortages: Medications like Adderall and amoxicillin generate thin profits so companies don’t have an incentive to make and store large amounts in case a shortage develops, [Erin] Fox said.
Many of these drug shortages go underreported or unnoticed, because most people aren’t seeking chemotherapy drugs for their child, or may not be aware if a doctor is forced to prescribe a less effective antibiotic. But rest assured there will be ample supplies of expensive weight loss drugs - Wegovy costs $1400 a month before insurance. Are they better than the snake oil being sold unregulated (thanks FDA) on store shelves now in the form of diet pills and powders? Sure, despite some side effects, the new diabetes-cum-weight-loss treatments are very promising and an important tool to curb the country’s obesity epidemic. Should they exist in lieu of hospitals and doctors having access to basic medicines and life-saving treatments? Of course not.
As we saw during the COVID-19 vaccine sprint, the government has many tools at its disposal to fix supply shortages of necessary drugs, but lacks political will. The diet drug craze is great news for celebs and those who seek to look like them, but it could put even more strain on a thinly stretched American drug supply.
IRL
Have you heard of an app called IRL? Me neither. You might be surprised to learn it had 20 million monthly active users at its peak, which would put it somewhere in the top 25 apps in the US by user base. Two years ago, the company was valued at a staggering $1.2 billion by Softbank. Pretty wild!
You probably know where this is going by now - down the slowly winding road of outright fraud:
Fast-forward to today, and the venture is shutting down, admitting that those claims had been incorrect—to put it mildly. The board of directors concluded after an investigation that 95% of the users were “automated or from bots,” as The Information reported on Friday.
Whoops! It turns out the CEO, who resigned in April presumably once he realized the jig was up, simply lied to everyone about IRL’s usage. Eventually, when the board found out, they had no real choice other than to close the company, fire everyone, and return whatever was left to investors.
Here is a photo of Abraham Shafi from the obligatory soft focus profile in Forbes:
The author describes IRL’s ascendance:
Among IRL’s accolades to date? It entered the top 10 social network apps on the app store. It’s worth over $1 billion and it boasts over 20 million active users, of which 75% are Gen Z – making it one of the fastest growing social networks for the next generation. It is disrupting the mostly toxic social media space with a product that’s nothing short of revolutionary.
Impressive! It is worth noting, I suppose, that when you are running 19 million bots on your app, it is probably not quite so hard to achieve meteoric growth and a hefty valuation from the notoriously scrupulous Softbank.
Last time we talked about phantom users it was Charlie Javice pulling a fast one on JP Morgan with her files of fake student borrowers (it’s not going great for her). It is probably a little easier to maintain the ruse if you’re raising funding rounds from overzealous investors instead of selling your company to a bank, but eventually someone is going to notice.
Lying
Francesca Gino, a prominent professor at Harvard Business School known for researching dishonesty and unethical behavior, has been accused of submitting work that contained falsified results.
It is very funny that a business school professor selling her services as an expert on honesty and success appears to have fudged data on at least four of the key studies her career was built on. A group called Data Colada found Gino’s pattern of tampering while they were examining papers from another discredited dishonesty researcher, who graced these pages back in 2021. To quote Data Colada’s intrepid team of truthsters:
It turns out that Study 1’s data were also tampered with…but by a different person.
That’s right:
Two different people independently faked data for two different studies in a paper about dishonesty.
Son of a! Who knew there was so much lying going on in the study of lying? Anyhow, credit to NPR’s team who capped their reporting with this dagger:
Gino has contributed to over a hundred academic articles around entrepreneurial success, promoting trust in the workforce and just last year published a study titled, "Case Study: What's the Right Career Move After a Public Failure?"
Someone should write a paper about these two.
Short Cons
AP - “More than $200 billion may have been stolen from two large COVID-19 relief initiatives, according to new estimates from a federal watchdog investigating federally funded programs that helped small businesses survive the worst public health crisis in more than a hundred years.”
Gallup - “The Hologic Global Women’s Health Index -- a global survey of women and men that asks questions related to women’s health -- suggests that billions of women missed out on getting tested or screened for high blood pressure, cancer, diabetes, and sexually transmitted diseases or infections (STDs/STIs) in both 2020 and 2021.”
New Republic - “The Republican Study Committee (of which some three-quarters of House Republicans are members) on Wednesday released its desired 2024 budget, in which the party boldly declares its priority to eliminate the Community Eligibility Provision, or CEP, from the School Lunch Program.”
Slate - “The existence of an entire manufacturing industry dedicated to producing expensive pieces of equipment that can all be easily and effectively replaced by “yourself and maybe some dirt” is an odd thing.”
Verge - “The biggest automaker in the world is reportedly working on an electric vehicle prototype that mimics the feel of driving a manual transmission, complete with a gear shift that’s not connected to anything and a floor-mounted speaker to pipe in fake engine noises.”
Bloomberg - “Businessweek reviewed more than 1,200 confidential complaint reports logged by Papa over the past four years and found dozens of allegations of sexual harassment and assault, as well as an allegation of unlawful imprisonment, among a broader range of issues including theft and dissatisfaction with the service’s quality.”
ProPublica - “The industry objections resulted in a remarkable concession from the department: It allowed trucking company lobbyists to review the researchers’ preliminary report and provide comments on it. By the time of its release in 2020, the report had been dramatically rewritten, stripped of its key conclusions — including the need to federally mandate side guards — and cut down by nearly 70 pages.”
NYT - “Facewatch, a British company, is used by retailers across the country frustrated by petty crime. For as little as 250 pounds a month, or roughly $320, Facewatch offers access to a customized watchlist that stores near one another share.”
Inquirer - “Some of Philadelphia’s richest business owners, sophisticated finance professionals, and the state-funded Ben Franklin Technology Partnership of Southeastern Pennsylvania fell for an investment scheme that the Securities and Exchange Commission now says was too good to be true.”
Know someone thinking of fabricating data to support their preconceived notions about the psychology of lying? Send them this newsletter!