Data Fetishism, FIPPs and the Intel Privacy Proposal

fetish – ‘An excessive and irrational devotion or commitment to a particular thing.’

Basing its legislative proposal on the Fair Information Practice Principles (FIPPs), Intel looks to the past, not the future, of privacy.  The FIPPs were developed by the OECD in the 1970s to help harmonize international regulation of the protection of personal data. Though they have evolved and morphed, those basic principles have served as the basis for privacy frameworks, regulations and legislation world-wide. Intel’s proposal borrows heavily from them: collection limitation, purpose specification, data quality, security, transparency, participation and accountability. But the FIPPs’ age is showing. In crafting a new law for the United States, we need to address the privacy issues of the next 50 years, not the last.

When I started working several years ago for NCR Corporation, I was a bit miffed at my title of “Data Privacy Manager.” Why must I be relegated to data privacy? There is much more to privacy than data, and controls around data are often merely a proxy for combating underlying privacy issues. If the true goal is to protect “privacy” (not data), then shouldn’t I be addressing those privacy issues directly? The EU’s General Data Protection Regulation evidences the same tension between goals and mechanism. What the regulators and drafters sought to rein in with the GDPR was abusive practices by organizations that affected people’s fundamental human rights, but they constrained themselves to the language of “data protection” as the means to do so, leading to often contorted results. The recitals to the regulation mention “rights and freedoms” no less than 35 times. Article 1, Paragraph 2 even states: “This Regulation protects fundamental rights and freedoms of natural persons and in particular their right to the protection of personal data.” Clearly the goal is not to protect data for its own sake; the goal is to protect people.

Now, many people whose careers revolve around data-focused privacy issues may question why data protection fails at the task. Privacy concerns existed long before “data” and our information economy; amassing data merely exacerbated the power imbalances that are often the root cause of privacy invasions. For those still unpersuaded that “data protection” is insufficient, I offer four quick examples where a data-driven regulatory regime fails to address known privacy issues. They come from either end of Prof. Daniel Solove’s taxonomy of privacy.

Surveillance – Though Solove classifies surveillance and interrogation under the category of Information Collection, the concern around surveillance isn’t about the information collected. The issue with surveillance is that it invites behavioral changes and causes anxiety in the subject being watched. It’s not the use of the information collected that’s concerning, though that may give rise to separate privacy issues, but rather the act and method of collection. The mere awareness of unconsented surveillance (the unwanted perception of observation in Ryan Calo’s model) triggers the violation. No information need be collected. Consider store surveillance by security personnel, where no “data” is stored. Inappropriate surveillance, such as targeting ethnic minorities, causes real consequences in the surveilled population: unease, anxiety, fear, and the avoidance of normal activities that might merely invite suspicion.

Interrogation – Far from the stereotypical suspect in a darkened room under a single glaring light, interrogation covers any contextually inappropriate questioning or probing for personal information. Take my favorite example: a hiring manager interviewing a female candidate asks whether she is pregnant. Inappropriate given the context of a job interview; that’s interrogation. It’s not about the answer (the “data”) or the use of the answer. The candidate needn’t answer to feel violated; the mere asking of the question raises anxiety, trepidation, embarrassment or worse. Again, the act and method of questioning is the invasion, irrespective of any data.

Intrusion – When Pokémon Go came out, fears abounded about what information was collected about players, but one privacy issue hardly on anyone’s radar was the game’s use of churches as places for players to train their characters. It turned out some of the churches on the list had been converted into private residences, thus inviting players to intrude upon those residents’ tranquility and peaceful enjoyment of their homes. I defy any privacy professional to claim that asking a developer about the personal data they are processing, even under the most liberal definition of personal data, would have uncovered this privacy invasion.

Decisional Interference – Interfering with private decisions strikes at the heart of personal autonomy. The classic examples are laws that affect family decisions, such as China’s one-child policy or restrictions on contraception in the United States. But there are many ways to interfere with individuals’ decisions. Take the recent example of Cambridge Analytica. Yes, the researcher who collected the initial information shared people’s information with Cambridge Analytica, and that was bad. Yes, Cambridge Analytica developed psychographic profiles, and that was problematic. But what really upset the press, the politicians and the public was Cambridge Analytica’s manipulation of individuals: its attempt, successful or otherwise, to alter people’s perceptions and manipulate their decisions about whether, and for whom, to vote.

None of the above privacy issues is properly covered by a FIPPs-based data protection regime without enormous contortion. They deal with interactions between persons and organizations, or among persons, not with personal data. Some may claim that, while true, any of these invasions at scale must involve data, not one-off security guards. I invite readers to try a little Gedankenexperiment. Imagine a web interface with a series of questions, each reliant on the previous answers. Are you a vegetarian? No? What is your favorite meat: chicken, fish or beef? And so on. I may not store your answers (no “data” collection), but ultimately the questioning leads you to one specific page where I offer you a product or service based on your selections, perhaps at a discriminatory price. Here user interface design essentially captures and profiles users, but without the pesky data collection that would invite scrutiny from the privacy office, as the sketch below shows. Some companies may be advanced enough in their thinking to catch this, but in my years of practice most privacy assessments begin with “What personal data are you collecting?”
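To make the Gedankenexperiment concrete, here is a minimal sketch in Python using Flask (the question tree, routes and prices are all invented for illustration, not any real company’s code). Each answer is used only to choose the next URL, so the visitor is funneled to a profile-specific offer page while nothing is ever written to a database or log:

```python
# Hypothetical sketch: a stateless decision-tree interface that profiles
# visitors purely through navigation. No answer is ever stored -- the
# "profile" exists only in which offer page the user lands on.
from flask import Flask, redirect

app = Flask(__name__)

# Each node maps to (question, {answer: next node or offer page}).
QUESTION_TREE = {
    "start": ("Are you a vegetarian?",
              {"yes": "offer_veggie", "no": "meat"}),
    "meat": ("What is your favorite meat?",
             {"chicken": "offer_chicken", "fish": "offer_fish",
              "beef": "offer_beef"}),
}

# Different (possibly discriminatory) prices per inferred profile.
OFFERS = {
    "offer_veggie": "Veggie box: $10",
    "offer_chicken": "Chicken box: $12",
    "offer_fish": "Fish box: $15",
    "offer_beef": "Beef box: $18",
}

@app.route("/q/<node>")
def ask(node):
    question, answers = QUESTION_TREE[node]
    links = " ".join(f'<a href="/a/{node}/{ans}">{ans}</a>' for ans in answers)
    return f"<p>{question}</p>{links}"

@app.route("/a/<node>/<answer>")
def route_answer(node, answer):
    # Route on the answer, then forget it: no database, no log entry.
    _, answers = QUESTION_TREE[node]
    nxt = answers[answer]
    return redirect(f"/offer/{nxt}" if nxt in OFFERS else f"/q/{nxt}")

@app.route("/offer/<name>")
def offer(name):
    return OFFERS[name]

if __name__ == "__main__":
    app.run()
```

Ask this hypothetical developer “What personal data are you collecting?” and the honest answer is “none,” yet the pricing page you land on encodes everything you revealed along the way.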

Now, I’ll admit I haven’t spent the time to develop a regulatory proposal myself, but I’d at least suggest looking at Woody Hartzog’s Privacy’s Blueprint for one possible path to follow. Hartzog’s notions of obscurity, trust and autonomy as guiding privacy goals encompass more than a data-centric world. But Hartzog doesn’t just leave these goals sitting out there with no way to accomplish them. He presents two controls that would help: signaling and increasing transaction costs. Hartzog’s proposal for signaling is that, in determining the relationship between individuals and organizations and the potential for unfairness and asymmetries (in information and power), judges should look not to the legalese of the privacy notice, terms and conditions or contracts, but to the entirety of the interaction and its interfaces. This would do far more to determine whether a reasonable user fully understood the context of their interactions.

Hartzog’s other control, transaction costs, involves making it more expensive for organizations to commit privacy violations. One prominent example of legislation that increases transaction costs is the US TCPA, which bans robocalls. Robocalling technology dramatically decreases the cost of calling thousands or millions of households. The TCPA doesn’t ban solicitation, but it significantly increases solicitors’ costs by requiring a paid human caller to make each call. In this way, it reduces the incidence of intrusion. Similarly, the GDPR’s restrictions on automated decision-making increase transaction costs by requiring human intervention. This significantly reduces the scale and speed at which a company can commit privacy violations and the size of the population affected. Many commenters on any legislative proposal will counter that such measures chill innovation and burden small companies. True, increasing transaction costs in the way the TCPA does will raise costs for small firms. That is, after all, the purpose of increasing transaction costs. But the counter-question is: do you want a two-person firm in a garage somewhere adversely affecting the privacy of millions of individuals? Would you want a small firm without any engineers thinking about safety building a bridge over which thousands of commuters travel daily? One could argue the same of Facebook: it has made connecting billions of individuals so efficient that it simply doesn’t have the resources to deal with the scale of the problems it has created.

The one area where I agree with the Intel proposal is FTC enforcement. As our de facto privacy enforcer, the FTC already has institutional knowledge to build on, but its enforcement needs real teeth, not ineffectual consent decrees. When companies analyze compliance risk, if the cost of non-compliance is comparable to the cost of compliance, they are incentivized to reduce the likelihood of getting caught, not to actually comply with the regulation. The fine (impact) multiplied by the likelihood of being fined must exceed the cost of compliance in order to drive compliance. This is what, at least in theory, the GDPR’s fines of up to 4% of global turnover seek to accomplish. Criminal sanctions on individual actors, if enforced, may have similar results.
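To put the incentive in back-of-the-envelope terms (the numbers here are invented for illustration), a rational firm complies only when

$$ p \times F > C $$

where p is the probability of being fined, F the fine, and C the cost of compliance. If compliance would cost $10 million and the chance of being fined is 10%, a $5 million fine yields an expected penalty of only $500,000, so evasion is the cheaper strategy; the fine would have to exceed $100 million before compliance wins on the numbers. Turnover-based fines like the GDPR’s 4% are one way to push F into that range for large firms.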

There are other problems with the FIPPs. They mandate controls without grounding them in the ultimate effectiveness of those controls. I can easily comply with the letter of the FIPPs without meaningfully improving privacy. Mandating transparency (openness, in the Intel proposal) without giving judges the ability to consider the entirety of the user’s experience and expectations only yields lengthy privacy notices. Even shortened notices convey less about what’s actually going on than users’ reliance on their interactions with the company.

In high school, I participated in a mock constitution exercise in which we were supposed to develop a new constitution for a new society. Unfortunately, we failed and lost the competition: our new constitution was merely the US Constitution with a few extra amendments. As others have said, we don’t need GDPR-lite; we need something unique to the US. I don’t claim Hartzog’s model is the total solution, but rather than looking to the FIPPs, Intel and others proposing legislation should be looking forward, for solutions for the future, not the past.

Privacy Policy: Disclosure to Law Enforcement

Edith Ramirez, FTC Chair, speaks at the IAPP Global Summit.

At the IAPP Global Summit in Washington, D.C., which just ended, I didn’t get a chance to ask my question of newly appointed FTC Chair Edith Ramirez.  She had only been in office five days, and privacy is at the top of her agenda.  She had previously been scheduled for a Q&A, but because of her new appointment, the questions were posed by a moderator and the audience was not allowed to participate.

Had I been given the opportunity, I would have asked the following question: would the FTC consider action against a company, under its authority over unfair and deceptive practices, if the company turned over information to a government agency in violation of its own stated privacy policy?  Or is such an enforcement action verboten?

I suspect I know the answer.

To date, to my knowledge, the FTC has never brought such a complaint against a company. However, the potential is there.

I would like to examine two different clauses from privacy statements and their particular risks to users of those services. Here is one common clause I’ve found in many privacy statements.

We may disclose any subscriber information to law enforcement agencies without further consent or notification to the subscriber upon lawful request from such agencies. We will cooperate fully with law enforcement agencies.

Notice the phrase “lawful request.” Such a policy does not preclude the scenario where a law enforcement agency simply asks for the information, with no subpoena, warrant or national security letter. The request is still lawful: no law prohibits the agent from making the request, and no law prevents the company from disclosing the information to anybody (except the FTC’s enforcement of the company’s own privacy statement).  Could such a policy be deceptive? To the average consumer, the term “lawful request” seems to imply that the company will respond only to legal process such as the aforementioned court-recognized documents.  To a lawyer arguing before the FTC, however, the phrase could be read as I’ve described above: nothing made the request unlawful, therefore it was lawful.  The clause could be the result of sloppy draftsmanship or crafty lawyering.

Contrast that with the pertinent section of Facebook’s Data Use Policy:

We may access, preserve and share your information in response to a legal request (like a search warrant, court order or subpoena) if we have a good faith belief that the law requires us to do so. This may include responding to legal requests from jurisdictions outside of the United States where we have a good faith belief that the response is required by law in that jurisdiction, affects users in that jurisdiction, and is consistent with internationally recognized standards.

Notice that they include the requirement that they have a good-faith belief that the law requires them to comply. Not that it allows them to comply, but that it requires them to. This is a significant difference in function.  Under the previous construction, the company will comply with a request if the law allows it; under the Facebook policy, it will comply only if the law requires it. I also appreciate the caveat they give themselves for international requests, that the response must be consistent with internationally recognized standards, possibly providing a legal out to refuse some dictator’s decree.  However, it would be nice if it were stronger still.