Tuesday, April 10, 2018

Where Facebook is To Be Defended

I do not use Facebook, let us start with that.  I don't think anyone really cares what I had for breakfast, let alone needs a picture of it.  I have no old loves from the '60s whose doings I need to track down (I married mine), and I would not bother to do so regardless, given that I have a loving family.  I certainly would not voluntarily choose to get my news feed from a leftist-leaning organization with a history of censoring conservatives.

I'm also not particularly interested in having personal data of mine available to the rest of the world under someone else's control -- if 870+ essays online don't give someone enough personal information about me, I certainly don't need to make it easier for someone who just wants to sell me a car.  I will not defend Facebook for any of that, although I cannot imagine why they are (A) in California and paying the absurd taxes there, or (B) not all Republicans, given that the Democrats would confiscate their entire income if given free rein.

Right now, Facebook is about to have its CEO grilled by a congressional committee about its failure to protect personal data, and how it is sold -- excuse me, "made available" -- for a price to those people who want to sell you a car.  More particularly, the issue is going to be its vulnerability to Chinese and Russian hackers gaining access to all that data.

Well, that's their problem.  They built their company; they can defend it.  Or try to.

Nope, I'm going to defend them in a different forum, on a different topic.  I'm talking about what happens when someone posts a video or a message or some such thing relating to an upcoming criminal act.

Let's say one of the ISIS clowns still remaining posts a message that he or she is going to attack a target somewhere, and posts that specifically on their Facebook page.  I don't know the mechanics (I'm not on Facebook), but let's say something like that happens.

The attack then happens and people are injured or killed.  And the family of one of the victims sues Facebook, claiming that because a notice to the effect that the attack was going to happen got posted on their site, Facebook is civilly liable for damages for ... well, something.  Their pockets are absurdly deep, so it would be worth a try.

I'm here to tell you that I don't buy that one bit.

In such a case -- and mine may be hypothetical, but something akin to that scenario has definitely happened -- we have to ask what the responsibility is of the medium, when they have a billion users posting all the time, and their stated purpose is not as a news-filtering operation but to "connect people" so they can communicate.

I don't actually think there is a case.  I'm not a lawyer and don't have access to Lexis-Nexis to look up case law, but I can be at least minimally wizardly with logic.  Facebook created a platform by which people can communicate with each other.  They did not tell their users, and they did not promise their paid customers, that their role included protecting them from unreported threats and the like.

The contract is between the user and the medium -- the ISIS clown, in this case, and Facebook.  Whether the victim was a Facebook user is totally irrelevant; their claim would be that Facebook "should have known", meaning that they should have read the posting from the ISIS clown, deciphered the intent and done something, like notifying the FBI.

But that is changing the role of the medium, and I don't buy it.

Let us suppose that the terrorist called one of his buddies and used an iPhone on a Sprint network to make that call.  It's absurd to think that Apple should be held liable because they sold a product that was used to make a call that coordinated a terror attack.  It is equally absurd -- though another step closer to the Facebook scenario -- to hold Sprint liable because their network was used for that purpose, as if they are supposed to monitor calls on their network.

So what is the expectation in law that distinguishes Facebook and its peers, such that they should be expected to know what is being communicated over their medium, where the phone companies are not?  The selling of data doesn't even apply here, and although that is this week's news, that's a whole 'nother issue.  This is strictly about needing judicial precedent to establish that a communications medium is not liable for the normal manner in which it is used, whether it is a phone company or Facebook and its peers.

It gets a bit creepy when you start to ask whether the fact that Facebook does pull data from its users makes it a bit more liable, in that they can monitor actual conversations -- and do -- in order to mine them.  But I cannot fathom the impact of a ruling that applied liability to a medium for actions taken during the normal communications role of the medium and its users' normal communications -- even if those communications are used to plan harm.

It is reasonable to expect such media to be cooperative when they do recognize tangible threats.  But I fail to see how we should assign accountability for assisting in subsequent actions, unless they failed to take what could be deemed reasonable actions after recognizing the threat.

Of course, the better tack for Facebook would be to stop reading any messages on its platform.

Copyright 2018 by Robert Sutton
Like what you read here?  There's a new post from Bob at www.uberthoughtsUSA.com at 10am Eastern time, every weekday, giving new meaning to "prolific essayist."  Appearance, advertising, sponsorship and interview inquiries cheerfully welcomed at bsutton@alum.mit.edu or on Twitter at @rmosutton
