Non-Verbal Cues
Lemonade
If you’ve never heard of the “AI-driven” insurance startup Lemonade, that’s okay. Insurance is boring, and most people avoid it like the plague. I spent about six years doing advertising for different types of insurance, and it was a tough job! The last thing people want to do with their free time is fill out long, confusing forms and pay money for something they may never use. I get it.
During my years in insurance - I am cringing writing this sentence - I went to a variety of “insuretech” conferences, short for insurance technology, because everything is “tech” now. Banking? It’s fintech. Anyhow, because insurance is an old, boring business, it is challenging to make “tech” in the space interesting. Enter Lemonade, which says it uses AI - not a thing! - to process insurance claims. Someone at Lemonade thought it would be a good idea to create a Twitter thread about how cool their AI was, and it went…poorly:
In a Twitter thread Monday that the company later deleted and called “awful,” Lemonade announced that the customer service AI chatbots it uses collect as many as 1,600 data points from a single video of a customer answering 13 questions. “Our AI carefully analyzes these videos for signs of fraud. It can pick up non-verbal cues that traditional insurers can't, since they don’t use a digital claims process,” the company said in a now-deleted tweet. The thread implied that Lemonade was able to detect whether a person was lying in their video and could thus decline insurance claims if its AI believed a person was lying.
Hoooooo boy. Where to begin. First off, insuretech companies spend a lot of time talking about fraud screening. I saw dozens of software products demoed over the years claiming to reduce fraud - some were pretty creepily invasive. It’s impossible to know how big of a problem fraud really is for insurers, but some estimates have the number between 5 and 10% of claims paid, which isn’t nothing. Any way to reduce that number would make insurance executives very happy.
So, part of Lemonade’s sales pitch to its investors is that its chatbots can help detect fraud. That’s not a sales pitch for Lemonade policyholders, because you don’t tell your customers you’re actively trying to catch them lying, but hey, insuretech!
Anyhow, Lemonade was immediately called out by scientists and engineers:
AI experts on Twitter immediately mocked and contested the claim, pointing out that the entire premise of so-called "emotion recognition" systems, which claim to detect a person's mood or mental state, is highly suspect. They also raised the well-established point that these systems are inherently biased.
“These kinds of physiognomic systems don’t work period. It’s increasingly embarrassing [for companies] to talk about them … and yet somehow they keep bubbling up,” Luke Stark, a professor at Western University who studies physiognomic AI, told Motherboard. “There always seems to be the temptation to brag that you’re doing some new fancy thing, as this company did in their tweets.”
Not great. Lemonade backtracked quickly:
On Wednesday, Lemonade deleted the Twitter thread, saying that it "caused more confusion than anything else." It also claimed that the company doesn't approve or reject insurance claims based solely on AI analysis: "We do not use, and we're not trying to build AI that uses physical or personal features to deny claims (phrenology/physiognomy)."
"The term non-verbal cues was a bad choice of words to describe the facial recognition technology we’re using to flag claims submitted by the same person under different identities. These flagged claims then get reviewed by our human investigators," the company wrote in a blog post after deleting its tweets. "AI is non-deterministic and has been shown to have biases across different communities. That’s why we never let AI perform deterministic actions such as rejecting claims or canceling policies."
Okay sure, but those are two different things? Lemonade went public last year as a B Corp, which means it is theoretically supposed to be creating both profit and “social good,” whatever that is. The company claimed in its stock filings that proprietary algorithms run the core of its business - with insuretech, automating expensive processes like claims is often a goal - but also that these algorithms could potentially be flawed and lead to bias. Huh.
Facial recognition technology has been shown to be biased, especially against Black Americans. So even the excuse Lemonade used in its apology is not a great one.
The insurance business, as I’ve said, is pretty boring - you take in money, invest it, and hope that when you have to pay out claims you have enough money to cover the payments. If your bean counters have done their jobs properly, you end up with modest profit margins. You’re never going to be cool or sexy, unless you tell everyone you’re using cutting-edge AI and act like a tech company instead of an insurance company. Investors don’t seem to care whether Lemonade is using racist software or not - its stock is up almost 25% since the Twitter meltdown.
ANOM
Authorities in Australia, New Zealand, the U.S. and Europe said Tuesday that they've dealt a huge blow to organized crime after hundreds of criminals were tricked into using a messaging app that was being secretly run by the FBI. Police said criminal gangs thought the encrypted app called ANOM was safe from snooping when, in fact, authorities for months had been monitoring millions of messages about drug smuggling, money laundering and even planned killings.
This is clearly another entry into the Do Not Post Your Crimes handbook. The FBI somehow (!!) sent out Google Pixel phones to crime syndicates in 100 countries (!!) with the ANOM app on them - which they claimed was completely secure:
Authorities in Australia said the app was installed on stripped-back mobile phones and its popularity grew organically in criminal circles after it was vouched for by some high-profile underworld figures, described as "criminal influencers."
I shouldn’t be surprised that there are criminal influencers. What was Criminal Instagram like?
"We have been in the back pockets of organized crime," Kershaw said. "All they talk about is drugs, violence, hits on each other, innocent people who are going to be murdered."
Was that really all they talked about? No water cooler talk between syndicates? Surely they had other things in common. Sports? TV shows? I guess not. The ANOM sting was conceived in 2018, after the FBI took down another app used by criminals, called, ironically, Phantom Secure.
The whole concept of this operation raises some…interesting? questions - the FBI and other law enforcement were clearly monitoring the criminals as they committed actual crimes. How did they decide when to strike? How much crime was the right amount to say okay, it’s time to take these 800 criminals down? How much crime did they let happen while collecting evidence? These investigations take time!
We’ll probably never get answers to these and the many other questions I have, because most of the details of these cases will be sealed, so the FBI and other law enforcement organizations can run new stings in the future. Let this be a lesson to you, criminals - if you get an encrypted phone in the mail, you might want to be careful what you post on it. Also, beware the criminal influencers.
Counterfeit Fish
Many articles have been written about cheap fish being substituted for more expensive varieties. While this can be a problem at sushi restaurants, seafood fraud can also cause problems for the environment - some fisheries use mislabeling to avoid environmental regulations. But! A clever scientist in Texas has figured out how to use a medical device designed to detect tumors to quickly determine a fish’s species:
[Abby Gatmaitan’s] research, published this spring in the Journal of Agricultural and Food Chemistry, showed that touching the tip of the “pen” to a sample of raw meat or fish could correctly identify the species it came from. The device was tested on five samples and took less than 15 seconds for each of them. Roughly the length of a typical ink pen, the tool provided answers about 720 times faster than a leading meat-evaluating technique called polymerase chain reaction (PCR) testing—and it was much easier to use.
Very cool! Seafood fraud is pretty widespread, with a 2019 study showing around 8 percent of seafood products are mislabeled. Lucrative fish like snapper and tuna are mislabeled at much higher rates.
Currently, FDA inspectors and scientists from various watchdog groups use much slower, more cumbersome DNA testing to determine whether seafood is authentic. The MasSpec pen used by Gatmaitan’s lab is small, easy to use, and doesn’t harm the food samples it tests. She thinks that with some more research her lab can even determine which fishery the seafood came from.
Apparently it also works for beef and other foods, which led me to the 2013 horse meat scandal in Europe. I have been unable to find how much one of these pens costs, but it might come in handy now that restaurants are reopening.
Car Wraps
I don’t recall where I first saw it, but I once ran across an ad for Dr Pepper Car Wraps. People are told they’ll be paid to drive around in a car wrapped with a Dr Pepper ad. It’s a scam - obviously - but it is so oddly specific I can’t help but wonder how the scammers came up with it, and how it’s had such staying power over the last few years. Anyhow here is the FTC press release about car wrap scams and it’s a total bummer:
If you have a car, you know how expensive the upkeep can be. Gas, maintenance, parking – the whole lot. So what if a company offered to pay you to drive around – which you were already doing – with their branding wrapped onto your car? It could sound like a good deal.
We’ve heard about some car wrap scams that have targeted college students, a group known to look for ways to make a few extra bucks. The gist of the scam is this: The scammers send emails with messages like “GET PAID TO DRIVE.” They offer to pay you $250-$350 a week if you’ll drive around with your car (or truck or motorcycle) wrapped to advertise a well-known product – or even an event like the 2020 Olympics.
Fellow kids! Driving is expensive! Scammers send a (fake) check and ask the prospective wrap driver to send some of it back for supplies or whatever, the check bounces, the victim is out fifteen hundred bucks or so. It’s not inventive - fake check scams have been around for as long as we’ve had checks.
I just wonder how, somewhere, someone was dreaming up a new scam and figured - Dr Pepper car wraps! Did they see one in the wild and get inspired? Do Dr Pepper car wraps even exist? Have they ever existed? I don’t know. Wikipedia says car wraps have been around since the ‘90s and Pepsi - a soft drink company! - used them to promote their products. Some websites claim you can actually get paid to wrap your car with ads for things other than soda. I just want to know how Dr Pepper, of all things, is the brand of choice for scammers. If anyone has seen a real Dr Pepper car wrap in the wild, please tell me. I’m dying to know.
Short Cons
WSJ - “In contrast to investors and CEOs, academics who study artificial intelligence, systems engineering and autonomous technologies have long said that creating a fully self-driving automobile would take many years, perhaps decades.”
WSJ - “The backlog, combined with dozens of customer lawsuits and inquiries from regulators about its practices, is eating up executives’ time and company resources while Robinhood works to prepare for an initial public offering later this year.”
NYT - “Bitcoin is also traceable. While the digital currency can be created, moved and stored outside the purview of any government or financial institution, each payment is recorded in a permanent fixed ledger, called the blockchain.”
Boston Globe - “For a culture conditioned to desire dogs, the stay-at-home orders that accompanied COVID-19 might as well have been Pavlov ringing a bell.”
Tips, thoughts, or car wrap sightings to scammerdarkly@gmail.com