There was a lot of news this past week about technology and safety. YouTube banned Senator Rand Paul for a week, Apple introduced a child safety feature that creates perceived privacy risk, Tesla is facing Autopilot investigations, and more.
I’m not going to cover all of them here. However, I wanted to share an observation about cost and motivation.
In three different cases, companies are using safety to also ensure they don’t face fines or disadvantageous rules. This isn’t evil. It may in fact demonstrate effective incentive design by government. While the companies may seek to do the right thing anyway, this creates a compelling forcing function.
However, it may cause them to go overboard.
Case #1
Pending legislation would loosen Apple’s control over app installation on iOS devices, thereby undercutting the App Store. However, there are provisions enabling Apple to work around this if its restrictions are done in the interest of digital safety. That’s a $50B+ incentive for Apple to demonstrate how the App Store furthers safety. To be fair, I do think it does, especially with regard to preventing the spread of malware. Nonetheless, it’s valuable for consumers to have a choice.
From Ars Technica:
A group funded by Apple and Google sent a statement to media claiming that the proposed law "is a finger in the eye of anyone who bought an iPhone or Android because the phones and their app stores are safe, reliable, and easy to use."
Case #2
As part of the coverage of Apple’s Child Sexual Abuse Material (CSAM) safety feature set, it came to light that Facebook reported 20.3 million instances of CSAM on its platform. While that’s only roughly one report per 150 Facebook users, it’s still 40X what Google reported and nearly 100,000X what Apple reported (per Stratechery).
On one hand, this may make it look like Facebook has a much worse problem than Apple and Google. In many ways, that would feed the populist narrative of blaming Facebook. However, it actually suggests Facebook is being much more proactive here. Part of this is because of incentives, and some of it may be because of capability.
On incentives, the regulation spells out a $150,000 fine for the first time there is a failure to report and a $300,000 fine for any subsequent failures.
$300,000 × 20.3M = $6.09 Trillion
Facebook’s potential liability, for not reporting, is $6.1 trillion! That’s more than Biden’s infrastructure package and the equivalent of Amazon’s, Apple’s, and Facebook’s market capitalizations put together. That’s a damn good reason to make sure you report what you detect. Which is a key reason why Facebook’s numbers are so much higher: Facebook is good at detecting.
Detection is a key capability of Facebook’s. It has to be, in order to handle the scale of person-to-person communication, and thereby abuse, that happens on its platform. Facebook’s Transparency Reports show a 98.9% proactivity rate. Not only does this demonstrate the strength of their models, it means only 1.1% of the offending content was first reported by users, and thus fewer users were exposed. But this capability, detecting offending material to prevent harmful exposure, is also what drives up the number of reports.
If you counted only what was reported to Facebook by users, not what Facebook detected itself, its liability falls by a whopping $6 trillion, to $67B. And the optics look a lot better too: it would be roughly 200K instances instead of 20.3M. This is where the incentives are elegant. Thus far, as long as companies are reporting and taking action, they aren’t being fined. Facebook can report everything it detects, without fear of punishment for reporting, thereby helping solve the bigger issue.
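The back-of-the-envelope math above can be sketched out directly. This is a simplification of the actual statute (which the post says sets $150,000 for a first failure to report and $300,000 for subsequent ones); for scale, a flat $300,000 per instance is assumed throughout:

```python
# Assumed inputs, taken from the figures cited in the post
FINE_PER_FAILURE = 300_000   # subsequent-failure fine, USD (simplified to flat rate)
DETECTED = 20_300_000        # CSAM instances Facebook reported
PROACTIVE_RATE = 0.989       # share Facebook's own systems found first

# Worst case: every detected instance goes unreported and draws a fine
full_liability = DETECTED * FINE_PER_FAILURE

# Counterfactual: only the reactive, user-reported 1.1% is on the books
user_reported = DETECTED * (1 - PROACTIVE_RATE)
reported_only_liability = user_reported * FINE_PER_FAILURE

print(f"Full detection:  ${full_liability / 1e12:.2f} trillion")
print(f"Reported-only:   ${reported_only_liability / 1e9:.0f} billion")
```

Running this reproduces the post’s figures: roughly $6.09 trillion of exposure on the full 20.3M detections, versus about $67 billion if only the ~223K user-reported instances counted.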
Case #3
YouTube banned Senator Rand Paul last week for stating that “masks do not work”. While his statement is untrue, it’s pretty remarkable to see free speech quite literally curtailed by YouTube.
Their rationale is that it’s an untrue statement, and I agree that it is. Masks have been shown to work, and I’ve personally put in a lot of effort to ensure they are worn. However, there are a lot of other things on YouTube that are untrue. And . . . where does it stop? Fiction, such as Game of Thrones, is by definition “untrue”. So is fiction off the platform?
One suspicion about YouTube’s motivation is that this will placate the Biden Administration, which is thankfully trying to contain COVID. But I think we need a better framework for making these calls. It would even make sense to me if part of the rationale were that he is a senator and thus should be held to a higher bar because he holds public office. But that’s not the reason cited.
One day’s truth is another day’s deep sin. History is filled with these examples. See the CDC’s guidance last March for us all not to wear masks yet.
This is the case that worries me the most.