Just how creepy is the Facebook - Cambridge Analytica story?

4/22/2018

By Jeremy Pincus, Vice President, Isobar US

If we think clearly about all the fears that have erupted since it was revealed that Cambridge Analytica improperly obtained data on tens of millions of Facebook users, there may actually be more smoke than fire.

The Facebook/Cambridge Analytica scandal confirms the worst fears of many about data mining. Unlike the seemingly weekly massive data breaches, this particular story has had huge traction because it blends two contemporary fears: First, the realization that big data tools can create models that use our Facebook likes to predict deeply personal truths about us, e.g., sexual orientation, political leanings, ethnicity, religion, drug use, personality, level of intelligence and even our level of happiness. Next, mix this with the biggest story of the Trump presidency – evidence that the 2016 election was manipulated by outsiders, using such models created from Facebook data.

A bit of context is needed: Internet anonymity has been dead for a very long time. We’ve come to accept that our laptops are loaded with cookies and spyware. The fact is that every corporation you interact with, whether it be Amazon, your bank or your wireless carrier, has a customer database, and you are in it. And, moreover, we expect these companies to keep detailed records about all of our interactions with them so they can provide us with better service and, ultimately, save us time and effort in decision making. We’ve known for a long time that the pile of mail at our front doors, or in our inbox, is all targeted to us based on our level of affluence, life-stage, history of charitable giving, etc. Most people just don’t think much about how this sausage gets made.

Giant third-party data brokers, like Experian and Acxiom, collect and maintain thousands of data points about each and every American (except for the few who truly live "off the grid"). There is so much data on you out there that Facebook likes are really just the tip of the iceberg. In this case, Facebook data that was shared with an academic institution for research was used to create predictive models that were then sold commercially in violation of the original agreement; in that sense, the breach is a contractual technicality rather than a new capability. Facebook, Cambridge Analytica, and every other data provider and modeler need to be vigilant about safeguarding personal data and adhering to their own self-imposed regulations on data sharing. In this instance, it appears those rules were breached – which is, of course, a legitimate issue that needs to be addressed.

Even if outrage over Cambridge Analytica is overblown, there is still something very wrong with our system, and the present scandal offers an opportunity to try to fix it. It's not the existence of the new tech tools that is the issue; it is how they are used, who is using them, and to what ends.

The root of the ethical problem here is the inherent asymmetry in control between data aggregators like Facebook and Google on the one hand and ordinary citizens on the other. So-called "notice and consent" procedures – those endless usage agreements, which no one reads or understands, that must be "agreed to" in order to access websites or apps – are a deeply flawed legal fiction: they pretend that consumers are in a position to understand the privacy implications and that they have viable alternatives to clicking "I agree." The status quo places all power and control with the data aggregators, who are then free to use personal data in any way they see fit, subject only to their own governance.

Big data is essentially a form of surveillance, and the implicit social contract holds that this data should only be used to make life better for us. Unfortunately, examples are accumulating of "well-meaning" big data projects that led to unintended consequences, e.g., the infamous case of Target inadvertently tipping off a teen's parents to her pregnancy, based on her pattern of purchases. Just this week, it was revealed that Grindr had been sharing the HIV status of its members with third-party companies that analyze mobile and Web apps (once the practice was exposed, Grindr said it would end it). And for those who thought government might lead by example, there's the mother of all big data projects: the NSA and FBI's indiscriminate, omnivorous collection of web searches, phone records, email content, credit card transactions, etc., in search of indicators of terrorism. These big data excesses, and the many more yet to be revealed, make it clear why the system needs to change, even if the Cambridge Analytica story isn't particularly unique.

In contrast to the current "all or nothing" approach, a more level playing field might involve presenting users with domains of interest (certain types of products, services, etc.), asking them to opt out of any they find objectionable, and continuing to update those preferences over time. This suggestion is a more lenient version of the European Union's data protection regime, GDPR, which comes into force next month. GDPR will require the likes of Google and Facebook to receive "opt-in" permission from users in order to use their personal data for advertising, and it will require firms to repeatedly ask for consent at different times for each unique purpose – a costly proposition that might have been avoided with a more proactive privacy policy. Alternatively, it may be time to take a closer look at John Taysom's "Three's A Crowd" proposal: the establishment of a central repository of personally identifiable information (PII), managed by a non-profit NGO, that provides lookalike clusters to advertisers but never personal information itself.

In the past week, Facebook announced that it will remove all third-party data providers from its platform in the interest of "privacy." It is far from clear how this move will improve user privacy, since Facebook is the world's top data aggregator; some have even labeled the move a cynical attempt to use the current scandal to consolidate power in the personal data market by shutting out competitors. Clearly, it will take more transparent and neutral policies to allay the fears of consumers and regulators.

There's an opportunity for advertisers and their agencies to lead by example in big data practices, given the obligation they hold as brand stewards. And, at the end of the day, we are all consumers who don't want our privacy needlessly violated.

This article was originally published in Campaign US.
