Email confusions

It always amuses me when people don’t know their own email address. I can understand typos, or forgetting an overly complicated string of characters, but some people seem to consistently get their own email address wrong.

I’m currently dealing with an issue with Redbox whereby one of their customers consistently enters MY email address as their own, so I get their receipts (along with what they rented, when they rented it, where they rented it from, and the last 4 digits of their credit card number). This isn’t just a typo, because they do it consistently. I’ve called Redbox (3 times now) asking them to block my email address. At least the first two times, the customer service representative probably just “unsubscribed” me. The third time I asked to speak to a manager, who allegedly flagged the address so that if the customer attempts to enter it at a kiosk they will be presented with an error. It remains to be seen whether that works.

Curiously, the manager suggested I hit the “unsubscribe” button on the email, to which I pointed out there was none (see picture below). Even more curiously, the manager said that sometimes people have the same email address. Huh? I can only hope she meant something else, though I’m not sure what. I tried to explain that email addresses are unique and someone else couldn’t have the same one, though perhaps a similar one. She glossed over my explanation. We’ll see if they actually blocked my email address.

Unfortunately, this particular email address (I have nearly a dozen) is overly simple, so I can easily see someone mistaking theirs for mine. It reminds me of Steve Wozniak’s early acquisition of the phone number 888-888-8888, which proved completely useless because of the number of misdialed calls he received.

Apparently I’m not the only one who has this problem, as this Ars Technica article points out.


[Image: the Redbox receipt email, with no unsubscribe link]

The importance of location to data privacy

Intralinks' The CollaboristaBlog

As with many multi-national companies, Microsoft maintains corporate subsidiaries worldwide, often to optimize its operations under various legal regimes. While the justification for this is usually tax related, compliance with local data security and privacy regulations is increasingly a driving factor. In light of the Snowden revelations about the NSA, other countries are closely scrutinizing the activities of American companies within their borders. Germany, for instance, ousted Verizon in favor of local Deutsche Telekom, citing Verizon’s cooperation with the U.S. government as a determining factor.

Continue reading on my guest post on the CollaboristaBlog.

Theme Parks and the de-evolution of privacy therein.

I recently went to Universal Studios and Islands of Adventure with a friend. I usually go every few years and try to stay at one of the on-site hotels. Though they can be ungodly expensive, the benefit of being right there (and being able to return to your hotel midday to escape the Florida heat), combined with early park admission and unlimited Express Pass ride entrance, almost makes up for the cost.

I haven’t been to any of the Disney parks in quite some time, owing to a number of circumstances. I keep threatening to return, but haven’t been in almost ten years. Interesting, since I used to go annually as part of my summer family vacation. Back in the days of yore, Disney actually issued a booklet of tickets for each area of the park (Tomorrowland, Adventureland, etc.). Sometime before 1982, when Epcot opened, Disney began issuing whole-park passports which gave you admission to all the rides in the park, with no need to use up tickets for each ride. The modern-day equivalent is the Express Pass, which grants someone willing to pay more priority admission to rides.

[Image: Universal Studios Express Passes]

In those days, if you wanted to leave the park and come back, you got your hand stamped; that stamp, along with a ticket valid for that day, would suffice for re-admission. As the ticketing system continued to evolve, they eventually got rid of the paper ticket system and moved to an electronically read ticket, which eliminated the stamp since all the data was centralized. I still have one of these tickets today; it was last used in 2001 and still has 2 days left on it (I had to make notes on my ticket, otherwise I wouldn’t have a clue whether it still had any days left).

Back in the 90’s, Disney and other theme parks also began issuing yearly passes (mostly to state residents, in an effort to get them to come often, especially during non-peak times). The yearly passes, issued to an individual as opposed to the bearer, needed to be identifying: they included crude pictures and the person’s name. Eventually, the entire ticketing system transmogrified into one predicated on identification. Initially, the park attendants just had you sign the ticket when you first used it and allegedly validated that signature against some form of identification upon future uses. Now, the more common practice is to require you to state the name of the ticket bearer upon purchase, which is imprinted on the ticket. Upon initial entry, the bearer does a finger scan which is matched against future entry attempts. Somewhat sensitive to customer concerns, the parks let you opt out by showing your ID, which the attendant is supposed to match against the name on the ticket. In the 5-6 times I entered the park last weekend, only once did an attendant look carefully (too carefully, in my opinion). Most attendants realize that you’re one of the few people who won’t scan a finger, so you probably aren’t trying to skip the line by standing out like that. Interestingly enough, though I’m quite used to making a fuss about privacy, my friend who came with me said she felt like she was being treated like a criminal when she asked not to scan her fingers. Way to make people feel wanted, Universal!

The scanners are not, allegedly, fingerprint scanners but rather finger geometry scanners, which just take some statistically significant measurement to match you to your ticket. It’s unclear whether they match your name with your scan across multiple tickets or do anything else with the data. According to this old article, they purge the finger scans 30 days after the ticket expires, which in the case of my older ticket hasn’t happened yet. Then again, I never scanned my fingers, so they have nothing to purge.

In addition to the whole name/finger scan issue, I was irked during my recent trip to learn that I needed to have my picture taken for my Express Pass. The pictures are printed onto small Express Pass cards. I assume the intent was for the attendant to compare the picture against the person presenting the pass, to make sure someone else wasn’t using it. Two reasons why this may not be the case:

1) I never had an attendant look at the pass and then look at me. Many times I held my thumb over my picture just to see. They mainly wanted to scan the barcode to make sure the pass was valid and wasn’t one of the limited-use passes (once per ride; remember the OLD Disney ticket system?).

2) The pictures are of such low quality that you could barely use them to distinguish people. To demonstrate, I’ve even posted mine and my companion’s passes here with nary a worry that they will be used for facial recognition.

One of my major pet peeves was that there was very little (if any) disclosure at the point of collection about how they use this image, how long they store it, etc. It may be buried in their privacy policy, but if so it’s not clear and certainly not conspicuous.

I just found this article, which talks about the Express Pass system at the Universal hotels and the need to prevent “fraud.”

I’m certainly not the only one to recognize the theme parks’ failings at privacy. Bob Siegel over at Privacy Ref discusses his run-in with automated call centers that provide details about a person based on an entered telephone number.

FOLLOWUP 7/28/2014: I’ve been receiving solicitations from Universal (almost daily, it seems, since my trip). Interestingly, though not unexpectedly, clicking the unsubscribe link at the bottom of the email brings you to a page that a) requires you to enter an email address and b) requires you to check a box to affirmatively opt out of email marketing (for each of 4 different services). This is a far cry from the industry best practice of one-click unsubscribe. If one wants to know how to do privacy wrong, one need only look to the practices of the theme parks.


Passwords are only one piece of the puzzle

Something I’ve argued for in the past is the need not to treat passwords as all-or-nothing access. It’s not binary. It shouldn’t be grant or not grant, on or off. Passwords are just one piece of the authentication puzzle. Someone entering a password should be viewed with more confidence that they are who they say they are, but not with complete confidence. When analyzing access to information, a more nuanced analysis should be done to assess the sensitivity of the information and the needed confidence in authentication. If you need X confidence, require X level of authentication; if you need Y confidence, require Y level of authentication.

Many websites require you to log in again if you’re accessing sensitive information or making a significant account change (such as changing the email address on the account). This is because the fact that someone is logged in provides only a small level of confidence that they are who they say they are. After all, someone could log into a public terminal and walk away, allowing someone else to sit down at the computer. Or a hacker could be using a tool such as Firesheep to impersonate someone’s web session on a public network.
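To make the pattern concrete, here is a minimal sketch of step-up re-authentication in TypeScript. The session shape, time window, and helper names are all my own invention for illustration, not any particular site’s implementation:

```typescript
// Hypothetical sketch: require a recent password entry before sensitive changes.
interface Session {
  userId: string;
  lastPasswordEntryMs: number; // epoch milliseconds of the last password check
}

const REAUTH_WINDOW_MS = 5 * 60 * 1000; // accept a password entered within 5 minutes

function hasFreshLogin(session: Session): boolean {
  // A live session proves little on its own (abandoned terminal, hijacked
  // cookie), so sensitive actions demand a recent password entry.
  return Date.now() - session.lastPasswordEntryMs < REAUTH_WINDOW_MS;
}

function changeAccountEmail(session: Session, newEmail: string): void {
  if (!hasFreshLogin(session)) {
    throw new Error("Please re-enter your password to change the account email");
  }
  console.log(`Email for ${session.userId} changed to ${newEmail}`);
}
```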

Joseph Bonneau has a related post on Light Blue Touchpaper discussing that authentication is becoming a machine learning problem.

Multi-factor authentication is really a method for adding confidence to your authentication, but it still should never be viewed as 100% or 0%. If you’ve ever forgotten your password, you know that your failure to remember it is not proof that you aren’t who you say you are. Similarly, knowing your password is not proof that you are. It just tends to suggest one way or the other. Financial websites are the most common adopters of other authentication signals, such as your user agent or a cookie previously placed on your machine. Facebook also looks to see whether you’re in the country you’re usually in, or using a browser you’ve used in the past. If not, it places additional hurdles before you can access your account. And if you’ve ever phoned your credit card company from a phone they don’t have on record, you will have noticed they require additional verification.

As you develop your applications, you should identify critical information or controls that may require additional confidence. Then look for methods of increasing that confidence: requiring additional information from the user, verifying keyboard cadence, checking IP addresses and user agents, biometrics, fobs or dongles, etc.
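Here’s a rough sketch of what that signal-weighing might look like in TypeScript. The signals, weights, and thresholds are invented for illustration; a real system would choose and tune them empirically:

```typescript
// A minimal sketch of signal-based authentication confidence.
interface AuthSignals {
  passwordCorrect: boolean;
  knownDeviceCookie: boolean; // cookie previously placed on this machine
  usualUserAgent: boolean;
  usualCountry: boolean;
  secondFactorPassed: boolean; // TOTP code, SMS code, hardware token, etc.
}

function authConfidence(s: AuthSignals): number {
  let score = 0;
  if (s.passwordCorrect) score += 0.5; // a password alone is never 100%
  if (s.knownDeviceCookie) score += 0.15;
  if (s.usualUserAgent) score += 0.05;
  if (s.usualCountry) score += 0.1;
  if (s.secondFactorPassed) score += 0.2;
  return Math.min(score, 1);
}

// Match the required confidence to the sensitivity of the action.
const THRESHOLDS = {
  viewProfile: 0.5,
  viewStatements: 0.7,
  changeEmail: 0.9,
};

function authorize(action: keyof typeof THRESHOLDS, s: AuthSignals): boolean {
  return authConfidence(s) >= THRESHOLDS[action];
}
```

The point is not the particular numbers but the shape: access is granted when accumulated confidence crosses a threshold tied to the action’s sensitivity, not when a single password check flips from false to true.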


Desensitization as a privacy harm

“The Internet age means that a whole generation is accustomed to the idea that their digital lives are essentially in the public domain,” says Tyler Dawson in the Ottawa Citizen. While I’m not sure I agree with Mr. Dawson’s stereotyping of a generation, I get his drift in terms of what I call desensitization: the idea that increased scrutiny into one’s private life leads to an increased expectation of that scrutiny, and thus reduced moral outrage.

In risk analysis we often look toward objective, consequential harms or damages that may occur as the result of some action or inaction. The prototypical example is the risk of financial theft as a result of having one’s credit card or identity stolen. And while tangible harm is certainly important, it is not the only type of damage that may result. Courts are loath to recognize intangible harms, such as emotional distress, except in rare circumstances. But very few people would deny the intangible harm to one’s psyche if nude photos of oneself were circulated (unless, of course, one is in the business of circulating such pictures). Many privacy harms are ethereal. Very few of us would be comfortable with the notion of constant surveillance by someone without our consent, even if nothing could ever affect us in a tangible way. I remember being presented with a thought experiment at one point: if a space alien could use a telescope to follow your every movement, see everything you do, and inspect every thought in your head, does a privacy harm exist? If you knew about this observation, does that change your answer? Many people would feel judged in their behavior and thoughts, and might alter their routine to prevent adverse judgment. I, like others, would argue that is sufficient to rise to the level of a privacy harm. You are having to change your thoughts and behaviors as a result of the invasion into your personal space.

I return, then, to the idea of desensitization. Constant surveillance and invasion of privacy change our social mores. They alter our thoughts and feelings toward the very notion of privacy, and do so without our consent. To that extent, I would suggest that invasion without consent is itself a privacy harm. There need not be anything else.


2014 Privacy New Year’s resolution: dump Google.

For years, I was a big fan of Google. It just had some awesome services and generally seemed to be a good company, but I’ve lost most of my faith. It’s too big, too all-consuming, too powerful and ultimately too Evil. I’ve been moving away from Google for the past 2 years, but it’s been a slow migration. Most of my business mail now goes to @privacymaverick, @enterprivacy and @rjcesq.com addresses; I still need to get my personal mail off Gmail. Last year I also moved this blog, as well as a few others, off blogger.com. I’ve never really used G+, though my email address has an account attached to it that keeps causing me problems (don’t get me started).

I still have many other services to extract myself from. Luckily, Google isn’t evil about letting people leave. I still need to get off Calendar and Docs. The biggest challenge, however, is going to be Android. I certainly don’t want to go to Apple; I hate the closed ecosystem they represent. Windows Phone, perhaps? How is Firefox OS doing?

On another completely unrelated note, over at Enterprivacy Consulting Group‘s blog, I talk about the lessons from Snapchat and the perils of investing in technology without considering privacy.

Privacy implications of Local Storage in web browsers

Privacy professionals often have a hard time keeping track of technology and how it affects privacy. This post is meant to help explain the technology of local/web storage.

With their ability to track users across domains, cookies have earned a bad reputation in the privacy community. This became particularly acute with the passage of the EU Cookie Law, which requires affirmative consent when local storage on a user’s computer is used in a way that is not “strictly necessary for the delivery of a service requested by the user.” In other words, if you’re using it to maintain a shopping cart at an online store, you need not get consent. If you’re using it to track the user for advertising purposes, then you must get consent.

Originally part of the HTML5 standard, web storage was split into its own specification. For more history on the topic, see this article. Web storage is meant to be accessed locally (by javascript) and can store up to 5MB per domain, compared to cookies, which store a maximum of about 4KB. Cookies are natively accessible by the server; the whole purpose of a cookie is to be sent with each request and read by server-side code. Web storage is not automatically transmitted to the server, but javascript can send it there.
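A quick sketch of the difference, in browser-side TypeScript (the key names and values are made up for illustration):

```typescript
// Web storage stays on the client unless a script sends it somewhere:
localStorage.setItem("draftCart", JSON.stringify({ items: ["dvd-123"] }));
const cart = JSON.parse(localStorage.getItem("draftCart") ?? "{}");

// A cookie, by contrast, rides along on every HTTP request to its domain,
// so the server sees it without any script involvement:
document.cookie = "sessionId=abc123; path=/; Secure; SameSite=Lax";

// Local storage only reaches the server if code explicitly transmits it:
// fetch("/sync", { method: "POST", body: localStorage.getItem("draftCart") });
```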

CONS

The con here is that, as a privacy professional, you need to be aware of what your developers are doing with web/local storage. Simply asking your developers if they are using cookies may elicit a negative response when they are in fact using an alternative technology that isn’t cookies. Later revelations may be met with “Well, you asked about cookies, not local storage!” There are also proposals for a locally accessible browser database, though as of this writing none is an internet standard (see Mozilla Firefox’s IndexedDB implementation for an example).

Web storage is not necessarily privacy invasive, but two things need to be addressed. First, whether the local data is transmitted back to the server, or used in such a way that the results are transmitted back to the server. Second, whether the data in local storage is accessible to third parties and represents a risk of exposure to the user. Note that third-party javascript running in a first-party page executes with the page’s origin, so it can generally read that origin’s local storage; a Content Security Policy can restrict which scripts load, but it does not wall a loaded script off from storage. The other exposure risk is that a local user can read local storage through the javascript console. Ideally, data on the client should be encrypted.
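As a sketch of that last point, here is one way to encrypt values before they land in local storage, using the standard Web Crypto API in TypeScript. Key management is deliberately left out; where the key lives (derived from a login password, held only in memory, etc.) is the hard design question, and the function names are my own:

```typescript
// Hypothetical sketch: encrypt a value with AES-GCM before storing it locally.
async function encryptForStorage(key: CryptoKey, plaintext: string): Promise<string> {
  const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh IV on every write
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    key,
    new TextEncoder().encode(plaintext)
  );
  // Store the IV alongside the ciphertext, both base64-encoded.
  const toB64 = (bytes: Uint8Array) => btoa(String.fromCharCode(...Array.from(bytes)));
  return `${toB64(iv)}.${toB64(new Uint8Array(ciphertext))}`;
}

async function saveProfile(key: CryptoKey, profile: object): Promise<void> {
  // Anyone poking around the console sees only ciphertext, not the profile.
  localStorage.setItem("profile", await encryptForStorage(key, JSON.stringify(profile)));
}
```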

PROS

Local storage also has the potential to increase privacy. Decentralization is a key technique for architecting for privacy, and 5MB of local storage allows enough room to keep most, if not all, client data on the client. Instead of developing rich customer profiles for personalization on the server, keeping this data on the client reduces the risk to the user because the server becomes less of a target. Of course, care must be taken to deal with multi-tenancy (more than one person using the same end client), which may be especially difficult for machines shared widely, such as by library patrons, where one person could access the data left behind by another local user.
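One mitigation, sketched below in TypeScript with made-up key names, is to namespace stored data per account and wipe it at logout. Namespacing alone protects nothing, since any local user can enumerate all keys; the logout cleanup is what actually matters:

```typescript
// Hypothetical sketch: per-user key prefixes plus cleanup on logout.
function storageKey(userId: string, field: string): string {
  return `app:${userId}:${field}`;
}

function clearUserData(userId: string): void {
  // On logout, remove this user's keys so the next person at the machine
  // can't read what was left behind.
  const prefix = `app:${userId}:`;
  for (let i = localStorage.length - 1; i >= 0; i--) {
    const key = localStorage.key(i);
    if (key !== null && key.startsWith(prefix)) {
      localStorage.removeItem(key);
    }
  }
}
```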

Thoughts on the term “privacy enhancing technologies”

For the last two years I’ve been lamenting the lack of standardization around the term “privacy enhancing technologies.” In fact, as I see it, the term has been bastardized to mean whatever the speaker wants it to mean. Shortened to the moniker PETs, the term is used both in the privacy professional’s community and in the academic realm of cryptographic research. Newer incarnations, “privacy enabling technologies” and “privacy enhancing techniques,” do not even make the cut on Google’s Ngram service, which charts occurrences of terms in books (see chart below).

In 2008, the British Information Commissioner’s Office recognized the definitional problem in a paper on Privacy by Design and PETs:

There is no widely accepted definition for the term Privacy Enhancing Technologies (PETs) although most encapsulate similar principles; a PET is something that:

1. reduces or eliminates the risk of contravening privacy principles and legislation.
2. minimises the amount of data held about individuals.
3. empowers individuals to retain control of information about themselves at all times.

To illustrate this, the UK Information Commissioner’s Office defines PETs as:
“… any technology that exists to protect or enhance an individual’s privacy, including facilitating individuals’ access to their rights under the Data Protection Act 1998”.

The definition given by the European Commission is similar but also includes the concept of using PETs at the design stage of new systems:
Defining Privacy Enhancing Technologies
“The use of PETs can help to design information and communication systems and services in a way that minimises the collection and use of personal data and facilitates compliance with data protection rules. The use of PETs should result in making breaches of certain data protection rules more difficult and / or helping to detect them.”

The problem with such definitions is that they are broadly written, and thus broadly interpreted, and can be used to claim adherence to protecting privacy when in fact one is not. This also leads to the perverse conflation of privacy protection with data protection, which it is not. Privacy is as much about the risks of aggregation, intrusion, chilled freedom of association, and other invasions of the personal space that we designate and distinguish from the social space where we exist in a larger society.

I see the Privacy by Design (PbD) camp dancing around this. Ann Cavoukian, the Ontario Information and Privacy Commissioner and chief PbD champion, has promoted PETs for years, and this evangelism is evident in the PbD foundational principle of full functionality. However, even she has allowed the term to be applied loosely to make it more palatable to her audience. PbD and PETs thus become buzzwords attached to an effort as a marketing ploy to give the appearance of doing the right thing, often with minimal enhancement of privacy.

I thus suggest the following definition, the one I use in my own vernacular: a privacy enhancing technology is “a technology whose sole purpose is to enhance privacy.” Firewalls, something I too often see laypersons refer to as PETs, can enhance privacy, but their purpose is not necessarily to do so. A firewall is a security technology, protecting confidentiality and also securing the integrity and availability of the systems it protects. Data loss prevention, similarly, can actually be very privacy invasive, though it may enhance the privacy of data on some occasions. Its primary purpose, however, is to protect against loss of corporate intellectual property (be it personal information of customers or not), not to enhance privacy.

Technologies which would qualify include mixmaster networks (whose sole purpose is to obscure the sender and receiver identities of email) and zero-knowledge proofs and related secure multi-party computations (which allow parties to compute public functions on private data without revealing anything other than the function’s output).

Some technologies may be privacy enhancing in application but the technology wasn’t created for the purpose of enhancing privacy. My purpose here is not to split hairs on the definition, per se. My purpose is to expose the dilution of the term to where it becomes doublespeak.


Problem closure in privacy

While reading this article earlier today, I came upon the term ‘problem closure,’ which sociologists have defined to mean “the situation when a specific definition of a problem is used to frame subsequent study of the problem’s causes and consequences in ways that preclude alternative conceptualizations of the problem.” This is something I see time and time again in discussions of privacy.

Perhaps the most prominent example is the refrain “I have nothing to hide” in response to concerns about surveillance. The answer defines the problem, suggesting that the only issue with surveillance is for those engaged in nefarious acts who need to conceal those acts. It precludes other issues of surveillance. This has been addressed at length by Law Professor Daniel Solove and his discussion of the topic can be found here.

Turning to practical applications of privacy, I find that many organizations suffer from problem closure in their business models and system implementations. This makes it difficult to add privacy controls when the starting point is a system that is antithetical to privacy or precludes it. One of the many reasons for this is that companies view their raison d’être as their particular solution, not as solving the problem they originally set out to solve. This can best be illustrated by example. Ask many up-and-coming firms in the online advertising space and they’ll say they are in the business of ‘targeted behavioral marketing.’ Really, they are in the business of effective online advertising, but they’ve defined the company by the one solution they currently offer. This attitude is not only bad for privacy, it is bad for the business. The ability to adapt to changing customer needs and market conditions is the hallmark of a strong enterprise. Those stuck in an unadaptable business model have already sealed their eventual fate. This is especially true in industries driven by technology.

Are you in the business of providing gas powered automobiles or are you in the business of providing transportation solutions?

Are you in the business of printing newspapers or providing news?

Are you in the telephone business or the communications business?

Waste Management is a good example of a company that adapted to changing social mores about waste. Originally a trash hauling and dumping company, it has readily adapted its business model to recycling.

When trying to escape the “problem closure” problem, organizations need to define themselves not by the solution they are currently implementing but by the problem they are solving for their customers. Once they do that, they can open their eyes to potential solutions that solve the problem for their customers AND have privacy as a core feature.

This problem is most prevalent, IMHO, in smaller companies who have bet their socks on a particular solution they’ve invented. They don’t have the luxury of having an existing customer base and the ability to explore alternative solutions.

It is a problem I deal with often in trying to convince companies that privacy must be built in. You can’t build the solution and then come back and worry about privacy. It has to be part of the solution building process.