The ongoing controversy surrounding Facebook’s use of our data raises questions that run deeper than just regulation, Jose Marichal writes.
Facebook is having a bad month. The decade-long complaint that the company is too freewheeling with user data has re-emerged with unprecedented vigour.
This time, reports that the shadowy firm Cambridge Analytica improperly accessed tens of millions of accounts have put Facebook’s privacy policies under a more powerful microscope. While Facebook deserves a public flogging for its ‘data casualness’, we should make sure we understand why we’re angry at ‘Zuck and company’.
Facebook’s creators did something so profoundly grandiose yet so seemingly mundane that we fail to appreciate it. The Silicon Valley company took the deeply private and personal act of connecting through intimate disclosure and found a way to turn that communication into a commodity.
The effects of this commodification of personal relationship-building and maintenance are many. One is that Facebook has created a system of relationship-building that is new to human history, running in parallel with the existing means of forming relationships. Facebook does not replace face-to-face interaction; it runs alongside it.
This has produced many virtues. Facebook allows for a cost-effective, visually appealing way to stay in touch with friends and family. It has simplified the onerous task of cultivating individual bonds with old high school or university mates who otherwise would fall into the dustbin of personal history. Because of Facebook, I still have dear friends in my life who might not have stayed there had I needed to interrupt my busy life to call them for an hour every few months.
Along with this benefit, however, comes an entirely new form of communication that turns us all into our own publicists. Facebook is a one-stop public relations agency for crafting our personal brands for our friends and family. For friends we are not in regular face-to-face contact with, this is the only version of us they see.
This active cultivation of an online ‘public self’ through our private relationships is relatively new and hence, poorly understood. If Facebook is guilty of anything, it’s underestimating the profound effect it is having on human relationships.
This is what makes the Cambridge Analytica scandal (and what governments should do about it) so hard to pinpoint.
The fear is that Cambridge Analytica was and is using data to build a psychometric profile of users that will target their psychological vulnerabilities. But this is only a concern if users are their ‘real’ selves on Facebook. Undoubtedly some are, and some aren’t.
Some people carefully craft every post and photo they put on Facebook, but others are freewheeling and confessional. If Facebook is a user’s personal ‘brand management’ platform, then efforts to scan their posts for psychological tendencies might fall flat. However, if it’s understood by a user as a platform to disclose intimate hopes and fears with others, then Cambridge Analytica might have useful data.
Facebook obviously wants us to think of its platform as the latter. It wants us to see Facebook as a safe, intimate space for being our true selves. The fact that it hasn’t experimented with a ‘user pays’ model speaks volumes: it would feel strange to pay to connect with our friends. There is an inherent sense that ‘connecting’ shouldn’t be a commodified, market activity.
Because we’re only a decade into this great Facebook experiment, we don’t have a good framework for what we should expect from Facebook in its management of our data.
For the most part, we rely on our inherent sense of fairness. We know Facebook’s Terms of Service, at least before the company updated them in the wake of the scandal, were ridiculously long and opaque. However, we also sensed that we had entered into a contract: we use the service for free in exchange for something (benign) being done with our data.
The collective anger at Facebook and Cambridge Analytica stems from the fact that this episode seems to violate that sense of fairness. Cambridge Analytica got access to tens of millions of ‘friend of a friend’ accounts without their owners’ direct consent. For Facebook to justify its actions with the claim that “In the Terms of Service, it was stated….” would be too dismissive of the company’s responsibility to treat its ‘product’ in a safe manner.
Because of this, nation-states have every right to move in the European Union’s direction and give users more rights over their data. But even if states enact such laws, humanity still needs to reckon with the great disruption machine Facebook has created – a ‘public-looking’ friend-community that, in some ways, has replaced the public sphere.
The responsibility for managing this new ‘public-looking’ sphere must fall on Facebook. The company has an obligation to turn some of its machine-learning power toward cultivating a community that doesn’t flatter its users to keep them engaged or bathe them in their own ideological rectitude. It has an obligation to reintroduce elements of serendipity, discomfort, and contrariness into the human relationships taking place on the platform.
Unless Facebook follows basic rules of the road that promote the development of human wisdom and flourishing, our nascent experiment with the platform will end very badly.