[Image: the Northampton Clown. Photo: spotnorthamptonsclown / Facebook]


What is creepy?

If businesses only knew the answer, they would surely steer clear of privacy snafus and consumer backlash. Unfortunately, there is no “creepometer” app (yet) to help marketing managers avert the next public-relations embarrassment. In an article recently published in the Yale Journal of Law and Technology, we provide organizations with tips and pointers to help detect “creep.”

Creepiness, defined in the Merriam-Webster dictionary as something “producing a nervous shivery apprehension,” has become a veritable term of art among privacy professionals.

Notoriously difficult to define, “privacy” has been conceptualized as a “right to be let alone” or a “right to informational self-determination.” Good luck operationalizing these concepts in a business environment. Creepiness is more visceral — a gut feeling that arises on the verge of a privacy fail — and may be easier to discern.

Consider Eleven Madison Park, one of the snazziest go-to places for foodies in New York City.

Recently, the restaurant, which offers just one menu option — a $225-per-person prix fixe — figured prominently in the press not because of its honey-lavender roasted Muscovy duck, but rather due to its maître d’s habit of Googling every guest who has a reservation, to “[search] for personal information — birth date, anniversary, profession — so he can give proper salutations when the party arrives. Once he discovers something concrete, he jumps on it. ‘If I find out a guest is from Montana, and I know we have a server from there, we’ll put them together,’ [he] explains.”

Is Eleven Madison Park’s personalization strategy cool or creepy? Suffice it to say that the story’s headline read, “This New York restaurant takes Facebook stalking to a new level.”

But why is Googling the names of prospective patrons less wholesome than using online search to find any other information?

After all, few people go to a job interview, business meeting or date these days without first Googling and sometimes “Facebooking” their counterparts. The lines blur quickly. Is it socially acceptable to Google someone right in front of them? Is it appropriate to use Zillow to explore the value of your neighbor’s house? Or to run an online background check on the parents of your child’s playdate?

Indeed, Eleven Madison Park is somewhat old-school in its use of technology. After all, Googling a name before a meeting is so 2004.

Always avant-garde, Virgin Atlantic takes it up a notch, arming its customer service agents with Google Glass to enable them to retrieve information about passengers even as they walk through the doors of the posh business-class lounge.

Much has already been written about “Glassholes.” Glass users, who have yet to figure out which pictures to share on Facebook or how to make sure they do not tweet while drunk, are now required to navigate a whole new map of social rules. Should you take off your Glass in a public restroom, lest other visitors think you are recording? Should one ask, “Mind if I post our conversation online? I think our friends would love to comment on it”? Will users of Glass manage to use the product while respecting existing social norms, or will they need to follow a newly invented code of etiquette?

As Jed Bracy put it, “Is this really the beginning of an era where companies must include a set of social instructions with a new product?”

Unfortunately, we have few tools at our disposal to address these real-life dilemmas. Even Google’s executive chairman, Eric Schmidt, speaking at Harvard University’s Kennedy School of Government, recently said, “People will have to develop new etiquette to deal with such products that can record video surreptitiously and bring up information that only the wearer can see. There are obviously places where Google Glasses are inappropriate.” On a previous occasion, Schmidt explained: “There is what I call the creepy line. The Google policy on a lot of things is to get right up to the creepy line and not cross it.”

In the recent past, community norms provided a clear sense of ethical boundaries with respect to privacy.

We all knew, for example, that one should not peek into the window of a house even if it were left open, nor hire a private detective to investigate a casual date or the social life of a prospective employee.

Yet with technological innovation rapidly driving new models for business and inviting new types of socialization, we often have nothing more than a fleeting intuition (read: creepiness) as to what is right or wrong.

The failure of social norms to evolve in lockstep with rapid technological change is the source of this creep. More than 100 years ago, Samuel Warren and Louis Brandeis wrote their landmark piece on “The Right to Privacy” as a response to a technological revolution: the invention of the Kodak camera. Mundane as it seems today, that invention led to quite a moral panic, with the New York Times decrying fiendish “Kodakers lying in wait,” and President Teddy Roosevelt outlawing Kodaking in Washington parks.

In our Yale journal piece, we argue that to mediate the market and achieve a desirable balance between the interests and needs of all parties, policymakers need to pursue a nuanced and sophisticated path. They should recognize that social norms are rarely established by regulatory fiat, and that laws that fail to reflect techno-social reality may not fare well in the real world.

Regulation should not be viewed as an obstacle to innovation and progress. Rather, it should be used strategically to incentivize companies to proceed with caution and educate users to act responsibly on the new data frontier.

Companies will not avoid privacy backlash simply by following the law. Privacy law is merely a means to an end. Social values are far more nuanced and fickle than any existing (and most likely future) laws and regulations.

In order to avoid creep, companies should resist the temptation to act with chutzpah, even though brazen and audacious behavior constitutes a hallmark of Silicon Valley culture. The challenge is for companies to set the right tone when seeking intimate relationships with consumers.

Companies should avoid technological determinism. Engineers should design technologies to mesh well with consumer expectations. Companies should be wary of privacy lurches, engaging their consumers in the evolution of products and carefully navigating shifts in context. As with all matters creepy, shining a light is the ultimate strategy: providing individuals with access to their information and insight into the data practices deployed.

Finally, individuals should be educated to treat their own data and that of their peers with respect, realizing that in a digital environment, prior prudence and restraint are far more effective than any ex-post cleanup effort.

Jules Polonetsky serves as executive director and co-chair of the Future of Privacy Forum, a Washington, D.C.-based think tank that seeks to advance responsible data practices. Founded five years ago, FPF is supported by more than 80 leading companies, as well as an advisory board composed of the country’s leading academics and advocates. FPF’s current projects focus on online data use, smart grid, mobile data, big data, apps and social media. Reach him @JulesPolonetsky.

Omer Tene is vice president of research and education at the International Association of Privacy Professionals (IAPP), where he administers the Westin Fellowship program and fosters ties between industry and academia. Before joining IAPP, he was vice dean of the College of Management School of Law, Rishon Le Zion, Israel. Tene is an affiliate scholar at the Stanford Center for Internet and Society, and a senior fellow at the Future of Privacy Forum. Reach him @omertene.



1 comment
Jake

"...social norms are rarely established by regulatory fiat..."

Some things to think about when trying to recognize 'creepiness' (a toy code sketch of this checklist follows the list):

Who is performing the data collection, a corporation or a corporeal being? 

Does the application of the data collected benefit the collector or the individual?

Is the collected data shared discriminately or indiscriminately?

What kind of data is being collected:

'Surface' data - data shared directly between the individual and the collector

'Public' data - data available through social media

'Deep' data - data obtained through forensic methods
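
Purely as an illustration, the checklist above can be read as a rough scoring rubric. The Python sketch below is a hypothetical "creepometer" in that spirit: the flags, the surface/public/deep weights, and the numeric scale are all assumptions invented for this sketch, not anything proposed in the article or in this comment.

```python
from dataclasses import dataclass

# Hypothetical "creepometer" sketch: encodes the checklist above as flags
# and tallies a rough score. All names, weights and the 0-6 scale are
# illustrative assumptions, not a method from the article or the comment.

@dataclass
class DataPractice:
    collector_is_corporation: bool  # a corporation, or a corporeal being?
    benefits_collector_only: bool   # does the use benefit only the collector?
    shared_indiscriminately: bool   # shared indiscriminately, or discriminately?
    data_kind: str                  # "surface", "public", or "deep"

def creepiness_score(practice: DataPractice) -> int:
    """Tally a rough 0-6 score from the checklist; higher reads as creepier."""
    kind_weight = {"surface": 0, "public": 1, "deep": 3}
    score = kind_weight.get(practice.data_kind, 0)
    score += int(practice.collector_is_corporation)
    score += int(practice.benefits_collector_only)
    score += int(practice.shared_indiscriminately)
    return score

# Example: a maitre d' Googling guests -- "public" data, collected by a
# business, arguably benefiting the guest, and not shared onward.
restaurant = DataPractice(
    collector_is_corporation=True,
    benefits_collector_only=False,
    shared_indiscriminately=False,
    data_kind="public",
)
print(creepiness_score(restaurant))  # 2 of 6 on this toy scale
```

On this toy scale, the restaurant example from the article scores low because the data is public and the use arguably benefits the guest; swap in "deep" data shared indiscriminately and the same sketch maxes out.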


"Regulation should not be viewed as an obstacle to innovation and progress. Rather, it should be used strategically to incentivize companies..."

The consequences of breaking regulations usually take the form of monetary fines, incarceration, or community service. What if the consequences of breaking data privacy regulations took the form of releasing data about the offenders, whether corporate data, board member data, or individual data? Might this be considered a moral or ethical consequence?


"... individuals should be educated to treat their own data and that of their peers with respect..."

There is a saying that defines comportment in Japanese public bathhouses:

"Nakedness is often seen, but never noticed."