Privacy, without free will

I recently had the distinct pleasure of reading Robert Sapolsky’s book Determined: A Science of Life Without Free Will. Throughout my reading, I kept pondering the implications for the privacy profession and the general field of privacy. This blog post is an outgrowth of those thoughts.

If you can’t tell from the title, the book provides a thorough debunking of the notion that we, as humans, have free will. Primarily written from a biological perspective, the book delves (note, this was not written by AI) into other subject areas; in fact, the author argues that a single-discipline approach to discussing free will can neither dismiss nor promote it. As with many concepts, a multidisciplinary approach is needed. The book does an excellent job of breaking down the biology of why our actions are predetermined. Each of our “decisions” is predicated on the neurochemical reactions in our brain immediately preceding it; those reactions were determined by the chemicals in us at the time and by neurons strengthened and weakened over our lifetimes; those neurons were shaped by the culture built up over the decades and centuries preceding our lives, into which we were born and grew; that culture was orchestrated by quirks in the development of social structure along our evolutionary path as social creatures; and that path was guided by the struggle of genes to survive in interaction with their natural environment. It’s turtles all the way down.

Sapolsky makes some excellent arguments, both biological and historical. On the biological side, take the sea slug (Aplysia californica), which, despite not having a brain, studies have demonstrated can learn behavior to avoid an electric shock. In demonstrating the “decision” to choose a path that avoids electric shock, researchers have shown how sensory neurons and motor neurons change through inhibitory and excitatory signaling (basically, neurotransmitter binding). This creates a complex decision system in simple neuron cells that causes the sea slug to “decide” when to withdraw or not withdraw its gills. The same neurochemical reinforcement that causes the sea slug to exhibit this behavior causes humans to make “decisions” based on prior stimuli and reinforcement. You didn’t decide to push that button; everything that preceded the button pushing led you to push it. On the historical side, Sapolsky demonstrates that as we’ve learned more and more about how the brain works, we’ve been traveling down a road toward the inevitable conclusion that free will is absent. For centuries, seizures were deemed an example of demonic possession, caused by moral turpitude or an invitation of such possession. Thankfully, we now know seizures are caused by misfiring neurons, themselves the result of stimuli, brain development, genetics, and a host of other factors independent of the afflicted. Beyond making reasoned arguments against free will, Sapolsky debunks many of the arguments for it. One such claim, that quantum uncertainty creates free will, seems particularly easy to knock down. Even if you get past the fact that quantum effects don’t influence neurons at the macroscale in any meaningful way, such uncertainty would only create random decisions, not free will as people understand the concept.
One further note: Sapolsky doesn’t argue that being deterministic biological creatures makes us predictable; rather, the complexity of the inputs makes us unpredictable (think Butterfly Effect).

Beyond arguing for a lack of free will, where is Sapolsky going with this? Just as we don’t punish epileptics for inviting demons into themselves, civilized society is coming to understand that the root causes of antisocial behavior are not people “deciding” to do ill. They lie instead in upbringing, childhood, exposure to cultural forces, genes, the chemicals people have been exposed to, poor nutrition in utero that contributed to irregular development of the prefrontal cortex, and evolutionary forces that cause people to defect from social goods to better themselves and cause social animals, like humans, to punish defectors for the greater good. Sapolsky goes on to argue that, rather than punish antisocial behavior, as our brains have been evolutionarily shaped to do, a more appropriate and just approach would be to recognize that no one decides to be morally corrupt, and that the best way for society to deal with antisocial behavior is not to punish but to alter the factors that lead to it (think early nutrition) and to deter and remove the ability of those with antisocial inclinations to act (incarcerate not to punish but to protect society). Whether or not my measly three paragraphs convince you we have no free will, I would highly encourage you to read Determined. For those of you whose brain is predetermined not to read a 400+ page book, Sapolsky has done numerous interviews, including the one with Neil deGrasse Tyson that led me to his book. Please take the time and pass it on.

Now, what the heck does this have to do with privacy, you may be wondering? In reading the book, I was struck by three thoughts. First, if we have no free will, where does this leave consent? You didn’t decide to consent. All of the influences, conditions, and history, basically everything you had no control over, led up to the moment of you either granting or denying authorization. What can it mean for consent to be freely given if you have no free will? First off, I’d argue that, just like the sea slug’s, our decisions need not be based on free will. Those decisions are based on precedent (through culture, evolution, or even personal history) that has built up in our brains’ neurons to protect us. This is why kids are risk-taking and adults tend to be risk-averse, and why children potentially need more protection (because we’ve learned and they haven’t). So, just because a decision is based on the strengthening of neurochemical transmitters built up over time, and not “free will,” doesn’t mean it is any less individualistic, owing to our personal circumstances, or any less deserving of individual respect.

However, this brings me to my second thought. Given that we know our decisions are based on this neurochemical buildup, the manipulation of that buildup seems even more problematic. It suggests that others, acting in their own interest, can tweak and hone their methods to train our brains to make the decisions that are best for them, not for us. I’ve long noted the problem of the normalization of privacy invasions changing social norms about what is considered a privacy invasion: Facebook’s introduction of the newsfeed, now the de facto mode of operation for social media sites, or the normalization of facial recognition for authenticating to one’s phone spilling over to hardly a soul disputing the use of facial recognition at airports and borders. The point is that we can, and will, be manipulated, and our autonomy degraded. This has ramifications for deceptive designs: not just those that deceive and lead us to decisions contrary to how our brains would otherwise respond to all the stimuli, but those that reprogram the brain to make different decisions. AI raises the frightening possibility that it won’t just deceive but, in a maximization effort to drive results, will find pathways that actually change our brains’ makeup.

My third and final thought on privacy and free will has more to do with our understanding of privacy and how it has been directed by cultural and evolutionary development over centuries and millennia. Privacy professionals have long recognized cultural differences in perceptions of privacy and of privacy-invasive acts. It turns out many cultural differences are actually exhibited in brain makeup, mostly in the prefrontal cortex. Similarly, brain composition, as shaped by your upbringing, determines your moral compass. Certain groups value obedience, loyalty, and purity, while others favor fairness and harm avoidance; the former tend to be deontologists, the latter consequentialists. Given privacy’s place as a social norm, people’s perception of privacy will necessarily be driven by everything that has come before. Ultimately, the question is how I can use that understanding to drive the discussion forward.