In re comment on Financial Privacy blog

This post is in response to a comment on my blog post about Financial Privacy. See


I use terms like unlinkability and anonymity in the academic vernacular, not with respect to any legal definition. After all, the law can define a word to mean anything it wants. The technique used to anonymize the transaction is similar to Anonymous Lightweight Credentials (see for more information on ASL). Breaking the anonymity would require solving the discrete log problem, and solving that problem would put in jeopardy much of the cryptography upon which the world relies today, so I'm reasonably confident of its security for the moment. Spending a token under the Microdesic system allows the user to prove they have the right to spend a token without identifying themselves as a particular person who owns a particular token.
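To give a flavor of the kind of math involved, here is a toy Schnorr-style proof of knowledge in Python. The parameters are deliberately tiny and insecure, and this is only an illustration of discrete-log-based proofs in general, not the actual Microdesic/ASL construction:

```python
import secrets

# Toy Schnorr identification: prove knowledge of x with y = g^x (mod p)
# without revealing x. Tiny demo parameters; real systems use ~256-bit
# elliptic-curve groups.
p, q, g = 23, 11, 2   # g generates the order-q subgroup of Z_p*

x = secrets.randbelow(q)      # the prover's secret key
y = pow(g, x, p)              # the public value

# Prover commits, verifier challenges, prover responds.
r = secrets.randbelow(q)      # one-time blinding value
t = pow(g, r, p)              # commitment
c = secrets.randbelow(q)      # verifier's random challenge
s = (r + c * x) % q           # response

# Verification succeeds, yet x is never revealed; extracting x from y
# requires computing a discrete logarithm.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```

The verifier learns only that the prover knows *some* valid x, which is the sense in which a spender can prove the right to spend without being identified.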

Now, as far as de-anonymization under fraud goes: if a user double spends the same token, they reveal themselves. If I were to offer a somewhat real-world analogy, it would go like this: I walk into a store. If I'm minding my own business, the store can't distinguish me from any other customer in the store. I can purchase what I want and remain anonymous (subject to the store taking other measures outside this scenario, like performing facial recognition). However, if I commit a crime (in this case, fraud), the store forces me to leave my passport behind. (It is sometimes hard to create real-world analogies for the strange world of cryptography, but this should suffice.)

In other words, prior to committing that fraudulent act, I’m anonymous. In the act of committing that fraud (in order for the store to accept my digital token/money), I’m standing up and announcing my identity and revealing my past purchases.
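The "leave your passport behind" mechanic can be sketched with a simplified Chaum-style construction (an illustration only; Microdesic's actual scheme differs): the spender's identity is split into two one-time-pad shares, and each spend reveals whichever share the merchant's random challenge selects. One share alone reveals nothing; two different shares combine to expose the double spender.

```python
import secrets

def mint_token(identity: int):
    # Split the identity into two one-time-pad shares; either share
    # alone is just a uniformly random number.
    a = secrets.randbits(64)
    return (a, identity ^ a)

def spend(token, challenge_bit: int):
    # The merchant's random challenge selects which share is revealed.
    return token[challenge_bit]

identity = 0xC0FFEE
token = mint_token(identity)

# Honest single spend: one share leaks nothing about the identity.
first = spend(token, 0)

# Fraudulent double spend: a second merchant happens to issue the other
# challenge, and the two shares recover the spender's identity.
second = spend(token, 1)
recovered = first ^ second
assert recovered == identity
```

Real schemes use many challenge bits, so two independent merchants select differing bits (and thus expose the cheater) with overwhelming probability.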

Returning now to the law, and specifically Recital 26 of the GDPR, it states: "To ascertain whether means are reasonably likely to be used to identify the natural person, account should be taken of all objective factors, such as the costs of and the amount of time required for identification, taking into consideration the available technology at the time of the processing and technological developments." There is clearly a temporal element. In other words, we need not account for a supercomputer in the distant future, or someone solving the discrete log problem. I also doubt the GDPR contemplates forcing the user to reidentify himself or herself as a reasonable means of reidentification. Surely they aren't saying that if you rubber-hose the user into telling you when and where they made a purchase, that's reidentification. The data subject always knows that information; the question is whether anyone else can ascertain it without the user's assistance. Under the Microdesic system, at the time of a non-fraudulent transaction, there is no reasonable means of reidentification (i.e., you must solve the discrete log problem).

The Middle Man

The subject of my previous post was financial privacy vis-à-vis decisional interference. The comment to which this post replies posed the question of whether Microdesic becomes the middleman with the ability to interfere in the decision-making (i.e., spending decisions) of the user. Let me first explain by counter-example. When a payment authorization request comes in to PayPal, it knows the account of the spender, the account of the recipient, who those parties are, how much is being transferred and some extra collected data (such as a memo, etc.). At that point, PayPal could, based on that information, prevent the transaction from occurring. Maybe they think the amount is too high. Maybe the memo indicates the person is purchasing something against PayPal's AUP. The point is they can stop the transaction at the point of transaction. The way Microdesic works is different. A user in the Microdesic system is issued fungible tokens. From the system's perspective, those tokens are indistinguishable from user to user. In fact, the system uses ring signatures, which mix a user's tokens with other users' tokens to reduce correlation through forensic tracing. The tokens are then spent "offline," without the support of the Microdesic server. All the merchant knows is that they are receiving a valid token. Microdesic has no ability to prevent the transaction at the time of transaction.

Now for a bit of a caveat. Because the tokens are one-time spends, the merchant must subsequently redeem them, either for other tokens or for some other form of money held in escrow against the value of the tokens. Microdesic could at this point require the merchant to identify themselves and prevent redemption. Merchants that weren't approved by Microdesic might therefore be excised from the system by virtue of being unable to redeem their tokens. However, the original point remains. Unlike PayPal or a credit card system, which authorizes each and every transaction, Microdesic has no ability to approve or disapprove of a particular transaction at the point of the transaction.

Financial Privacy and CryptoCurrencies

Financial privacy is most often conceptualized in terms of the confidentiality of one's finances or financial transactions. That's the "secrecy paradigm," whereby hiding money, accounts, income and expenses prevents exposure of one's activities, subjection to scrutiny, or revelation of tangential information one wants to keep private. Such secrecy can be paramount to security as well. Knowing where money is held, where it comes from or where it goes gives thieves and robbers the ability to steal that money or resource. Even knowing who is rich and who is poor helps thieves select targets.

Closely paralleling "financial privacy as confidentiality" is identity theft: using someone else's financial reputation for one's own benefit. In the US, the Gramm-Leach-Bliley Safeguards Rule provides some prospective protection against identity theft, while the Fair Credit Reporting Act (FCRA), the Fair and Accurate Credit Transactions Act (FACTA) and the FTC's Red Flags Rule provide additional remedial relief.

However, as I'm fond of saying in my Privacy by Design training workshops, "that's not the privacy issue I want to talk about now." All of the preceding examples are issues of information privacy. Privacy, though, is a much broader concept than information alone. In his Taxonomy of Privacy, Professor Daniel Solove categorized privacy issues into four groups: information collection, information processing, information dissemination and invasions. It's that last category to which I turn the reader's attention. Two specific issues fall under the category of "invasions," namely intrusion and decisional interference. Intrusion is something commonly experienced by all when a telemarketer calls, a pop-up ad shows up in your browser window, you receive spam in your inbox or a Pokemon Go player shows up at your house; it is the disturbance of one's tranquility or solitude. Decisional interference may be a more obscure notion for most readers, except those familiar with US constitutional law. In a series of cases, starting with Griswold v. Connecticut and more recently and famously in Lawrence v. Texas, the Supreme Court rejected government "intrusion" into the decisions individuals make in their private affairs, with Griswold concerning contraceptives and family planning and Lawrence concerning homosexual relationships. In my workshop, I often discuss China's one-child law as exemplary of such intrusion.

The concept of decisional interference has historical roots in US privacy law. Alan Westin's definition of information privacy ("the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others") includes a decisional component. Warren and Brandeis' right "to be let alone" also embodies this notion of leaving one undisturbed, free to make decisions as an autonomous individual, without undue influence by government actors. With this broader view, decisional interference need not be restricted to family planning but could be viewed as any interference with personal choices, say, the government restricting your ability to consume oversized soft drinks. I guess that's why my professional privacy career and my libertarian political persuasion are so closely tied.

But is decisional interference solely the purview of government actors, then? Until recently, I struggled to come up with a commercial example of decisional interference, owing to my fixation on private family matters. A recent spark changed that. History is replete with financial intermediaries using their position to prevent activities they dislike, and since many modern individual decisions involve spending money, the ability of an intermediary to disrupt a financial transaction is a form of decisional interference. A quick look at PayPal's Acceptable Use Policy provides numerous examples of prohibited transactions that are not necessarily illegal (illegality is covered by the first line of their AUP). Credit card companies have played moral police, sometimes at the behest of government but more often out of excessive caution, going beyond legal requirements. Even a financial intermediary's prohibition on illegal activities is potentially problematic: commercial entities are not criminal law experts and will choose risk-averse prohibition more often than not, leading to a chilling of completely legal, but financially dependent, activity.

This brings me to the subject of cryptocurrencies. Much of the allure of a decentralized money system like Bitcoin is not in financial privacy vis-à-vis confidentiality (though Zerocoin provides that) but in the privacy of being able to conduct transactions without interference by government or, importantly, a commercial financial intermediary. What I'm saying is not an epiphany. There is a reason that Bitcoin begat the rise of online dark markets for drugs and other prohibited items: not the confidentiality of such transactions (in fact, the lack of confidentiality played into the take-down of the Silk Road) but the fact that no entity could interfere with the autonomous decision making of the individual to engage in those transactions.

Regardless of your position on dark markets, you should realize that in a cashless world, the ability to prevent, deter or even discourage financial transactions is the ability to control a society. The infamous pizza privacy video is disturbing not just because of the information privacy invasions but because it attacks the individual's autonomy in deciding what food they will consume, charging them different prices based on an external intermediary's social control (here, a national health care provider). This is why a cashless society is so scary and why cryptocurrencies are so promising to so many: they return financial privacy of electronic transactions, vis-à-vis decisional autonomy, to the individual.

[Disclosure: I have an interest in a fintech startup developing anonymous auditable accountable tokens which provides the types of financial privacy identified in this post.]

Trust by Design

I've fashioned myself as a Privacy and Trust Engineer/Consultant for over a year now, and I've focused on the trust side of privacy for a few years, with my consultancy really focused on brand trust development rather than privacy compliance. Illana Westerman's talk about trust back at the 2011 Navigate conference is what really opened my eyes to the concept. Clearly, trust in relationships requires more than just privacy. Privacy is a necessary condition but not necessarily a sufficient one; it is a building block of a trusted relationship, be it inter-personal, commercial or between government and citizen.

More recently, Woody Hartzog and Neil Richards' "Taking Trust Seriously in Privacy Law" has fortified my resolve to push trust as a core reason to respect individual privacy. To that end, I thought about refactoring the 7 Foundational Principles of Privacy by Design in terms of trust, which I present here.

  1. Proactive not reactive; Preventative not remedial  → Build trust, not rebuild
    Many companies take a reactive approach to privacy, preferring to bury their head in the sand until an “incident” occurs and then trying to mitigate the effects of that incident. Being proactive and preventative means you actually make an effort before anything occurs. Relationships are built on trust. If that trust is violated, it is much harder (and more expensive) to regain. As the adage goes “once bitten, twice shy.”
  2. Privacy as the default setting → Provide opportunities for trust
    Users shouldn't have to work to protect their privacy. Building a trusted relationship occurs one step at a time. When the other party has to work to ensure you don't cross the line and breach their trust, that doesn't build the relationship but rather stalls its growth.
  3. Privacy embedded into the design → Strengthen trust relationship through design
    Being proactive (#1) means addressing privacy issues up front. Embedding privacy considerations into product/service design is imperative to being proactive. The way an individual interfaces with your company will affect the trust they place in the company. The design should engender trust.
  4. Full functionality – positive sum, not zero sum →  Beneficial exchanges
    Privacy, in the past, has been described as a feature killer, preventing the ability to fully realize technology's potential. Full functionality suggests this is not the case: you can achieve your aims while respecting individual privacy. Viewed through the lens of trust, commercial relationships should constitute beneficial exchanges, with each party benefiting from the engagement. What often happens is that because of asymmetric information (the company knows more than the individual) and various cognitive biases (such as hyperbolic discounting), individuals do not realize what they are conceding in the exchange. Beneficial exchange means that, regardless of any one party's knowledge or beliefs, the exchange should still be beneficial for all involved.
  5. End to end security – full life-cycle protection → Stewardship
    This principle was informed by the notion that sometimes organizations were protecting one area (such as collection of information over SSL/TLS) but were deficient in protecting information in others (such as storage or proper disposal). Stewardship is the idea that you’ve been entrusted, by virtue of the relationship, with data and you should, as a matter of ethical responsibility, protect that data at all times.
  6. Visibility and transparency – keep it open → Honesty and candor
    Consent is a bedrock principle of privacy and in order for consent to have meaning, it must be informed; the individual must be aware of what they are consenting to. Visibility and transparency about information practices are key to informed consent. Building a trusted relationship is about being honest and candid. Without these qualities, fear, suspicion and doubt are more prevalent than trust.
  7. Respect for user privacy – keep it user-centric  → Partner not exploiter
    Ultimately, if an organization doesn’t respect the individual, trying to achieve privacy by design is fraught with problems. The individual must be at the forefront. In a relationship built on trust, the parties must feel they are partners, both benefiting from the relationship. If one party is exploitative, because of deceit or because the relationship is achieved through force, then trust is not achievable.

I welcome comments and constructive criticisms of my analysis. I'll be putting on another Privacy by Design workshop in Seattle in May and, of course, am available for consulting engagements to help companies build trusted relationships with consumers.


Price Discrimination

This post is not an original thought (do we truly even have "original thoughts," or are they all built upon the thoughts of others? I leave that for others to blog about). I recently read a decade-old paper on price discrimination and privacy by Andrew Odlyzko. It was a great read, and it got me thinking about many of the motivations for privacy invasions, particularly this one.

Let me start out with a basic primer on price discrimination. The term refers to pricing items based on the valuation of the purchaser, in other words discrimination in the pricing of goods and services between individuals. Sounds a little sinister, doesn’t it? Perhaps downright wrong, unethical. Charging one price for one person and a different price for another.  But price discrimination can be a fundamental necessity in many economic situations.

Here's an example. Let's say I am bringing cookies to a bake sale. For simplicity, let's say there are three consumers at this sale (A, B and C). Consumer A just ate lunch, so he isn't very interested in a cookie but is willing to buy one for $0.75. Consumer B likes my cookies and is willing to pay $1.00. Consumer C hasn't eaten and loves my cookies but only has $1.50 on him at the time. Now, excluding my time, the ingredients for the cookies cost $3.00. At every uniform price point, I end up losing money:

Sale price $0.75 -> total is 3x$0.75 = $2.25
Sale price $1.00 -> total is 2x$1.00 = $2.00 (Consumer A is priced out as the cost is more than they are willing to pay)
Sale price $1.50 -> total is 1x$1.50 = $1.50 (Here both A and B are priced out)

However, if I were able to charge each consumer their respective valuation of my cookies, things change.

$0.75+$1.00+$1.50= $3.25

Now, not only does everyone get a cookie for what they were willing to pay, I cover my costs and earn some money toward my labor in baking the cookies. Everybody is happier as a result, something that could not have occurred had I not been able to price discriminate.
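The arithmetic above can be checked in a few lines of Python:

```python
# Revenue at each uniform price vs. per-buyer ("discriminatory") pricing.
valuations = {"A": 0.75, "B": 1.00, "C": 1.50}
cost = 3.00  # ingredient cost for the batch

def revenue_at(price):
    # Only consumers who value a cookie at or above the price will buy.
    return price * sum(1 for v in valuations.values() if v >= price)

for p in (0.75, 1.00, 1.50):
    print(f"uniform price ${p:.2f}: revenue ${revenue_at(p):.2f}")

# Charging each consumer their own valuation covers the $3.00 cost.
discriminatory = sum(valuations.values())
print(f"per-buyer pricing: revenue ${discriminatory:.2f}")  # $3.25
```

Every uniform price leaves revenue below the $3.00 cost, while per-buyer pricing yields $3.25.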

What does this have to do with privacy? The more I know about my consumers, the better I can discover their price points and price sensitivity. If I know that A just ate, or that C only has $1.50 in his pocket, or that B likes my cookies, I can home in on what to charge them.

Price discrimination, it turns out, is everywhere, and so are mechanisms to discover personal valuation. Think of movie discounts for students, seniors and military personnel. While some movie chains may mistakenly believe they are doing it to be good members of society, the real reason is that they are price discriminating. All of those groups tend to have less disposable income and are thus more sensitive about where they spend that money. Movie theaters rarely fill up, and an extra sale is a marginal income boost to the theater. This is typically where you find price discrimination: where the fixed costs are high (running the theater) but the marginal cost per unit sold is low. Where there is limited supply and higher demand, the seller will instead sell to those willing to pay the highest price.

But what do the movie patrons have to do to obtain these cheaper tickets? They have to reveal something about themselves: their age, their education status or their profession in the military.

Other forms of uncovering consumer value also have privacy implications. Most of them are very crude groupings of consumers into buckets, just because our tools are crude, but some can be very invasive. Take the FAFSA, the Free Application for Federal Student Aid. This form is not only needed for U.S. federal loans and grants; many universities rely on it to determine scholarships and discounts. This extremely probing look into someone's finances is used to perform price discrimination on students (and their parents), allowing those with lower income, and thus higher price sensitivity, to pay less for the same education than a student from a wealthier family.

Not all methods of price discrimination affect privacy; take bundling, for instance. Many consumers bemoan the bundling done by cable companies that don't offer an à la carte selection of channels. The reason for it is price discrimination. If they offered each channel at $1 per month, they would forgo revenue from those willing to pay $50 a month for the golf channel or those willing to pay $50 a month for the Game Show Network. By bundling a large selection of channels, many of which most consumers don't want, they are able to maximize revenue from those with high price points for certain channels as well as those with low price points for many channels.
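A toy model, with made-up numbers rather than the ones in the post, shows why a bundle can out-earn à la carte pricing when tastes are opposed:

```python
# Two channels, two subscribers with opposite tastes (illustrative numbers).
valuations = {
    "golf_fan":      {"golf": 50, "game_show": 1},
    "game_show_fan": {"golf": 1,  "game_show": 50},
}

def a_la_carte_revenue(prices):
    # Each subscriber buys only the channels priced at or below their valuation.
    return sum(p for subs in valuations.values()
                 for ch, p in prices.items() if subs[ch] >= p)

def bundle_revenue(price):
    # Subscribers buy the bundle if their combined valuation covers its price.
    return sum(price for subs in valuations.values()
               if sum(subs.values()) >= price)

print(a_la_carte_revenue({"golf": 50, "game_show": 50}))  # 100
print(bundle_revenue(51))                                  # 102
```

Each subscriber values one channel highly and the other barely at all; pricing each channel at $50 captures $100, while a $51 bundle captures $102 without the seller having to learn which subscriber is which.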

I don't have any magic solution (at this point). However, I hope that by exposing this issue more broadly, we can begin to look for patterns of performing price discrimination without privacy invasions. One of the things that has had me thinking about this subject is a new app I've been working on for privacy-preserving tickets and tokens for my start-up, Microdesic. Ticket sellers have a hard time price discriminating, and tickets often end up on the secondary market as a result.

[I’ll take the bottom of this post to remind readers of two upcoming Privacy by Design workshops I’ll be conducting. The first is in April in Washington, D.C. immediately preceding the IAPP Global Summit. The second is in May in Seattle. Note, the tickets ARE price discriminated, so if you’re a price sensitive consumer, be sure to get the early bird tickets. ]

Pokemon Goes to Church

In case you haven’t read enough about Pokemon Go and Privacy

In the past, you knew you'd arrived on the national scene if Saturday Night Live parodied you. While SNL remains a major force in television, The Onion has taken its place for the Internet set. Just as privacy issues have graced the covers of major news sites around the world, so too have they made their way into plenty of Onion stories. The latest faux news story involves the Pokemon Go craze sweeping the nation like that insidious game in Star Trek: The Next Generation that took over crew members' brains on the Enterprise.

"What is the object of Pokemon Go?" asks The Onion in its article. Its response: "To collect as much personal data for Nintendo as possible." That may or may not have been part of Nintendo's intent, but The Onion found humor in it because of its potential for truth. Comedians often create humor from uncomfortable truths. In a world of flashlight apps collecting geolocation, intentions for collecting data are not always clear, as with Nintendo's potential collection through its game. Much has already been written about this. So much attention has been focused on Nintendo that it stirred Senator Al Franken, a frequent privacy advocate, to write a letter. I'd like to focus, though, on something another news story picked up.

The privacy issue I'm talking about isn't the collection of information by Pokemon Go or even the use of the information collected. The privacy issue I want to relay is something even the most astute privacy professional might overlook in an otherwise thorough privacy impact assessment. As mentioned in Beth Hill's previous post on the IAPP site about Pokemon Go, a man who lives in a church found players camped outside his house. The app uses churches as gyms, where players converge to train. This wouldn't normally be problematic, but this particular church had been converted years ago into a private residence. The privacy issue at play here is one of invasion, defined by Dan Solove as "an invasive act that disturbs one's tranquility or solitude." We more commonly see invasion issues crop up in relation to spam emails, browser pop-ups or telemarketing.

This isn't the first time we've seen this type of invasion. In order to personalize services, many companies subscribe to IP address geolocation services, which translate an IP address into a geographic location. Twenty years ago, the best one could do was a country or region based on assigned IP address space in ARIN (the American Registry for Internet Numbers). If your IP address was registered to a California ISP, you were probably in California. The advent of smartphones and geolocation has added a wealth of granularity to these systems. Now, if you connect your smartphone to your home WiFi, the IP address associated with that WiFi can be tied to your exact longitude and latitude. Who do you think that "flashlight" application was selling your geolocation information to? The next time you go online with your home computer (without GPS), services still know where you are by virtue of the previously associated IP address and geolocation. Among the subscribers to these services are law enforcement agencies, lawyers and a host of others trying to track people down. Behind on your child support payments? Let them subpoena Facebook, get the IP address you last logged in from, and then geolocate that to your house to serve you with a warrant. Now that's personalization by the police department! No need to be inconvenienced by going down to the station to be arrested. But what happens when your IP address has never been geolocated? Many address translation services just pick the geographic center of whatever region they can determine, be that city, state or country. Read about a Kansas farm owner's major headaches, because he's located at the geographic center of the U.S., at
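The failure mode can be sketched as a lookup with a default (the table and coordinates here are hypothetical; real services work from large proprietary datasets):

```python
# Hypothetical geolocation table keyed by IP address.
GEO_DB = {
    "203.0.113.7": (47.6062, -122.3321),  # a previously harvested home-WiFi fix
}
US_CENTER = (39.8283, -98.5795)  # near Lebanon, Kansas

def geolocate(ip):
    # When an IP has never been paired with GPS data, many services fall
    # back to the center of the best region they can determine -- here,
    # the whole US, which lands on a real Kansas farm.
    return GEO_DB.get(ip, US_CENTER)

print(geolocate("203.0.113.7"))   # precise, previously observed location
print(geolocate("198.51.100.9"))  # unknown IP: the Kansas-farm default
```

The default looks harmless in code, but it maps every unknown IP in the country onto one physical address.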

Many privacy analysts wouldn't pick up on this type of privacy concern, for no fewer than four reasons. First, it doesn't involve information privacy but rather intrusion into an individual's personal space. Second, even when looking for intrusion-type risks, an analyst is typically thinking of marketing issues (spamming or solicitation) in violation of CAN-SPAM, CASL, telephone sales solicitation rules or other national laws. Third, the invasion didn't involve Pokemon Go users' privacy but rather that of a distinct third party. This isn't something that could be disclosed in a privacy policy or adequately addressed by app permission settings. Finally, the data in question didn't involve "personal data"; it was the addresses of churches at issue. If you haven't been told by a system owner, developer or other technical resource that no "personal data" is being collected, stored or processed, then you clearly haven't been doing privacy long enough. In this case, they would be more justified than most. Now, this isn't to excuse the developers for using churches as gyms. An argument could easily be made that people are just as deserving of "tranquility and solitude" in their religious observations as in their homes.

Ignoring the physical invasion of religious institutions' space for one moment, one overriding problem in identifying this issue is that it is rare. Most churches simply aren't people's homes. A search on the Internet reveals a data broker selling a list of 110,000 churches in the US (including geolocation coordinates). If the one news story represents the only affected individual, that means only about 1 in 100,000 churches was actually someone's home. If you're looking for privacy invasions, this is probably not high on your list under a risk-based analysis.

There are two reasons that this is the wrong way to think about it. First, if your company has millions of users (or is encouraging millions of users to go to church), even very rare circumstances will happen. Ten million users with a one-in-a-million chance of a particular privacy invasion means it is going to happen, on average, to ten users. The second reason these very rare circumstances matter so much to the business is that they are newsworthy. It is the one Kansas farm that makes the news. It is the one pregnant teenager you identify through big data that gets headlines. The local auto fatality doesn't make the front page, but if one person poisons a few bottles of pills out of the billions sold, your brand name is forever tied to that tragedy. Corporations can't take advantage of the right to be forgotten.

Assuming you can identify the issue, what do you do? Given the rarity of the situation, and the facts that it doesn't involve information, isn't about marketing, isn't about your customers or users of your service, and, on its face, doesn't involve personal data, is all hope lost? What controls are at your disposal to mitigate the risks? The Pokemon Go developers were clearly cognizant enough not to include personal residences as gyms; they chose locations primarily identified as public. At a minimum, though, they could have done more to validate the quality of the data and confirm that their list of churches didn't actually contain people's residences. Going a step further, they could have considered excluding churches from the list of public places entirely. This avoids not only the church-converted-to-residence issue but also the invasion of religious practitioners' solitude. Of course, the other types of locations chosen as gyms still need to be scrubbed for accuracy as public spaces. Even this isn't sufficient, however. Circumstances change over time: what is a church or a library today may be someone's home tomorrow. Data ages. Having a policy of aging information and constantly updating it is important even when the data may not be, on its face, personal. A well-integrated privacy analyst, or a development team that was privacy aware, could even have turned this into a form of game play: getting users to subtly report back, through an in-game mechanism, that something is no longer a gym (i.e., no longer a public space) would keep the data fresh and mitigate privacy invasions.
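A data-aging policy like the one suggested could be as simple as stamping each location with the date it was last confirmed public and re-verifying anything stale (the record format here is hypothetical):

```python
from datetime import date, timedelta

# Hypothetical records: each candidate gym carries the date it was last
# confirmed to be a public space (by the data vendor or by player reports).
gyms = [
    {"name": "Old North Church", "last_verified": date(2016, 6, 1)},
    {"name": "Converted Chapel", "last_verified": date(2010, 3, 15)},
]

MAX_AGE = timedelta(days=365)

def needs_reverification(record, today):
    # Stale records go back to the verification queue instead of the map.
    return today - record["last_verified"] > MAX_AGE

today = date(2016, 7, 20)
stale = [g["name"] for g in gyms if needs_reverification(g, today)]
print(stale)  # ['Converted Chapel']
```

The threshold is a policy knob: a shorter window costs more re-verification effort but shrinks the window in which a converted residence can remain on the map.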

No one ever said the job of a privacy analyst was easy, but with the proper analysis, the proper toolset and the proper support of the business, you can keep your employer out of the news and keep your customers (and non-customers) happy and trusting your brand.

Essentialism and Privacy

I first learned about essentialism while listening to an audiobook of The Greatest Show on Earth by Richard Dawkins. Essentialism has its roots in Plato's idealism, though I would suggest that our being drawn to it may be a result of the way the human brain functions. For those unfamiliar, essentialism, simply put, is the notion that "things" have an essential form behind them. Thus, in Plato's world, a circle is defined by a perfect ideal of a circle, and while real-world circles may have variations, bumps and such, a circle is essentially a line drawn around a point, at all times equidistant from that point.

There is a large variety of geometric shapes (triangles, squares, dodecagons) to which humans have assigned monikers. However, there are an infinite number of shapes that defy such simple definition. While a line equidistant from a point is the perfect circle, a random squiggle is the best whatever-it-is, despite our not having a name for it. Now, I don't claim to be a neurobiologist, but in my rudimentary understanding, our brains store things in a way that provides simple categorization. Language is built on defining things we can relate to. We see something round, and our brain fires off the neurons that represent a circle. We can also abstract by grouping things together: we see a 12-sided shape, and we may know it is a polygon but not a dodecagon. Our brains are really good at analogizing as well; we learn by analogy. We see something big and strong, with fangs, baring its teeth, and we may not know what it is, but we can recognize it's probably a predator.

Dawkins discussed essentialism in the context of evolution. Prior to Charles Darwin, living creatures were broken into a taxonomy. In 1735, Carl Linnaeus, in his seminal work Systema Naturae, started with three kingdoms of nature (only two of them, animals and plants, were living), divided into classes, then orders, genera and species. We still use a form of this taxonomy today when we talk about life, only now, thanks to Thomas Cavalier-Smith, we have six kingdoms. Dawkins's beef with essentialism is that categorization makes it more difficult to see evolutionary change. Take a rabbit, defined as a furry creature with fluffy ears, a bushy tail and strong hind legs. But that's the ideal; every rabbit is different, and if you go back through the ancestry of rabbits, when does an ancestor cease to be a rabbit? In the future, as generations are born, when does the descendant of a modern-day rabbit cease to be a rabbit? Humans have a hard time conceptualizing large spans of time, so we can analogize (again, using that great learning technique) to relatives and aging. My brother is clearly my relative, as are my first and second cousins. Though I don't know them, I know I have third cousins and beyond who are relatives. At what point, though, are we no longer "relatives"? One young girl even claimed to show that all but one of the presidents were related, tracing their lineage back to an English king. When I meet someone on the street, do I fail to put them in the "relative" bucket in my brain only because nobody has done the analysis? Aging provides a similar means of showing the continuity of life and the breakdown of our taxonomy of age. We are born as babies, grow to be infants, then toddlers, next children, then young adults, then adults; then we're labeled old, and perhaps elderly after that. But what defines those classifications? When do I become old? Do we wake up one day suddenly "elderly"? Isn't 60 the new 30?

Once I learned about essentialism, I started seeing the dichotomy everywhere: the breakdown between where people try to classify or categorize things and the reality that there is a continuous line. One of my first epiphanies occurred when I was trying to clean up my vast MP3 collection. Many of the songs had no associated genre, or the genre was way off. I set about correcting that and started labeling all my music. But then I ran into a clear conundrum. Was Depeche Mode “new wave” or “80’s pop”? Was Billy Bragg punk, folk, or some crossover folk-punk? Clearly the simplistic labeling system provided by Windows was the problem, as it only allowed me to pick one genre. I needed something more akin to modern-day tagging, where I could tag a song with one or more related genres. But was that really the problem?

I started realizing this problem (though not in the way I’ve characterized it now) about 20 years ago in relation to techno music. There seemed to be all sorts of subgenres: jungle, synth, ambient, acid, trance, industrial. It seemed every time I turned around there was a new subgenre: darkwave, dubstep, trap; the list goes on. Wikipedia lists over a hundred genres of electronic music. I couldn’t keep up and have trouble distinguishing between many of them. SoundCloud has millions upon millions of songs, many of which defy categorization. What we’re learning from this is that we can like a song without pegging it into a specific category, and with the power of suggestion, SoundCloud can find other songs we like without our needing to search the “Pop-Country” section of the local record store.

So now I come to privacy. You may be thinking that I’m going to talk about personalization and privacy and how, in order to suggest an uncategorizable song, I have to know about your musical taste. While that is a valid topic for conversation, I’ll leave it for another post. What I want to talk about today is privacy’s taxonomy. I’ve been a big fan of Dan Solove’s privacy taxonomy for quite some time. It really does a good job of pinpointing privacy issues that people don’t normally think about, and it gives me a framework to explore when talking with others. Going through the taxonomy allows me to illustrate types of privacy invasion that aren’t just about insecurity and identity theft. Talking about surveillance lets me discuss how it can have a chilling effect, even if you’re not the target of the surveillance or “doing anything wrong.” I can talk about how interrogation, even if the subject doesn’t answer, may make them uncomfortable.

But I’ve also been thinking about the taxonomy and essentialism. What are we missing in the gaps between the categories? I’ve been working on a book, hopefully, to be published later this year on a theory of privacy that I hope will fill those gaps. A unified field theory of privacy, I hope. Stay tuned.

International Data Privacy Day: The year ahead and in review.

2015 proved to be another banner year for data privacy issues, and 2016 is looking to be no different. In my International Data Privacy Day post last year, I predicted that 2015 would be the year for privacy. While that prediction has been partially vindicated, the steamroller continues to push forward into 2016 with no sign of abating.

2015: The year in Data Privacy

Data Privacy Day was celebrated for the 9th year this January 28th. Known as Data Protection Day in Europe, the date comes from the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data, which was opened for signatures at the Council of Europe on that date in 1981. A plethora of organizations, from regulatory authorities to cybersecurity organizations to industry trade groups to businesses across the globe, are getting involved. The goal is to raise awareness among consumers about data privacy issues and to encourage businesses to respect privacy in their operations and products.

Auto Privacy

From police planting GPS devices on automobiles to lawyers seeking black box data in vehicles, automobile privacy has never been a hotter topic. In fact, it’s so hot that auto manufacturers recently pledged to adopt new auto industry privacy guidelines.

Automobiles have never had the highest of Fourth Amendment privacy protections, and for years courts have struggled to draw the proper line. With the technology changes afoot, the automobile is positioned to become one of the forefronts of the privacy debate in the coming years. The issues are plentiful.

This, of course, doesn’t even begin to address the significant security issues at stake when combining a computer with a 2,000 lb hunk of metal that can move at 80 mph.

Unfortunately, the auto industry is woefully unprepared to tackle this problem. Having experienced firsthand a company that was transitioning from manufacturing to software, I know that the mental shift is huge. I’ve twice gotten into heated discussions with auto industry representatives about vehicle privacy issues, only to find the representatives clueless beyond belief. It’s the same tired old refrain: privacy versus security (or in this case, safety). Sure, there are anecdotal stories that showcase how privacy invasions save a life, but they don’t outweigh the societal interest in protecting privacy as a whole. The industry espouses the safety benefits of telematics to improve vehicle safety. Understanding what causes crashes and how crashes occur can reduce deaths and injuries. However, they won’t invest the time and resources to develop techniques for gathering statistical data without siphoning in reams of data about individual drivers and their driving habits. Ultimately this individual data can be used against the individual, whether in higher insurance rates, automated traffic citations, legal proceedings, or by nefarious ex-lovers. Technology like differential privacy, or similar techniques like the one recently employed by Google to improve Chrome’s performance, could deliver the aggregate insights without the individual exposure.
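To make that concrete, here is a minimal sketch of randomized response, the classic building block of local differential privacy (the general principle behind techniques like Google’s, though this is an illustration, not any vendor’s actual implementation). Each driver randomly perturbs their answer before reporting it, so no single report reveals anything reliable about that driver, yet the aggregate rate is still recoverable. The 30% “speeding rate” and the parameter p are made-up values for the demo.

```python
import random

def randomized_response(truth: bool, p: float = 0.75) -> bool:
    """Report the true answer with probability p; report its opposite otherwise.
    Any individual report is deniable, since it may be noise."""
    return truth if random.random() < p else not truth

def estimate_rate(reports, p: float = 0.75) -> float:
    """Recover the population rate from noisy reports.
    E[observed] = p * rate + (1 - p) * (1 - rate), so invert that."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p)) / (2 * p - 1)

# Simulate 100,000 drivers, 30% of whom actually speed.
random.seed(42)
true_rate = 0.30
reports = [randomized_response(random.random() < true_rate)
           for _ in range(100_000)]

# The aggregator never sees who actually sped, but the estimate
# comes out close to the true 30% rate.
print(round(estimate_rate(reports), 2))
```

The design point is that the privacy guarantee holds per driver (any single “yes” may be a coin flip) while the statistical utility holds in aggregate, which is exactly the property telematics data collection lacks today.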

What the auto industry should be investing in (and they are, but maybe not enough) is reducing the biggest risk and danger to driver safety: the driver and other drivers. Every year 1.2 million people die in car accidents; countless others are injured. Some 93 percent of accidents are caused by human error.

The win-win solution for privacy AND safety, then, is driverless cars that aren’t tied to the identity of the passengers. I hail the nearest car (à la Uber), it picks me up, and it takes me to my destination. Unfortunately, it isn’t a boon for the auto industry long term, because fewer drivers and fewer accidents mean fewer auto sales every year. One estimate says a shared autonomous vehicle may replace 11 individually owned vehicles. The auto industry doesn’t really have much choice, but privacy and safety may not be in its long-term interest.