We’re fast approaching the end of year holidays. If you’re like me, you do a fair percentage of your holiday shopping online for a diverse group of friends and family. This time of year, the oddest things start following me around the Internet. My shopping habits suddenly confound the recommendation engines and they almost seem to give up — pelting me with random products and services. But I must admit, when they have my preferences right, I’m far less annoyed. In fact, there have been times when the ads that followed me around knew my needs even before I did. While there is a part of me that finds that pretty creepy, there is also a part relieved that “the Internet Gods” anticipated I was running out of tea before the crisis actually occurred!

We face the same issues professionally. How much information do we want to give up to increase our productivity? How secure do we want our data? When and how do those preferences vary by situation? These are not easy questions to answer from the personal, professional, or service provider perspective.

So this month we asked the Chefs: Where is the balance between security, authentication, marketing, and privacy?

David Smith: Without security you cannot have privacy. Authentication doesn’t necessarily give you security, but you can’t have security and privacy without it… So this becomes a question of who is doing the authentication and what control you’re given over what they can do with the data they are privy to. At this point it’s probably worth noting that, to interact with any of the publishing or academic systems, a user has gone through multiple authentication (and therefore data broadcasting) steps before they hit our systems. Windows 10? Yup, you’ve logged in to the OS with a Microsoft account. Unless you are very diligent, you are broadcasting all sorts of data to them. Phone? Apple account/Google account. Facebook account. Any other app account. Browser? Facebook, Google, Yahoo (still a thing), all the cookies, then your browser/OS/geolocation/machine/voice/typing profiles.

So unless you are trying really hard, you are broadcasting an astonishing amount of information about yourself. And if you are cunning… well, they can track the suspicious-looking hole in the data. Sorry about that.

But there is a bit of good news if you live in Europe and have to abide by EU rules (The UK will, even with #brexit). There is the General Data Protection Regulation, which is a very good thing indeed, being worked on by some very smart people who are thinking things through carefully. It is worth reading even if you are outside the EU.

Finally, I’m reminded of the quaint days of the early 2000s. A time when Web 2.0 was a thing. One of its key messages was, “You control your own data.” It’s a principle that has been completely lost in the decade since. We should probably make sure that our users can control their data.

Joe Esposito: This is a very, very big question. I have written about a sliver of this on The Kitchen before, but the fact is that no one knows the answer to this one and whatever answer eventually evolves will encompass far more than the world of scholarly communications. Even assuming that the “security” issues involved here refer to the security of a service and not national security, the participants in scholarly communications — authors, publishers, librarians, institutions — will ultimately participate in a broader system not of their making. This is how it is with most baseline technology: we live in the world created by Google, Microsoft, Apple, Facebook, etc. and our concerns in scholarship are a thin layer that sits at the very top of the stack. Technology leads, policy follows. The balance we should seek is in having the needs of the scholarly community expressed in the larger policy forums.

Rick Anderson: At my institution we’re going through an extremely difficult and frustrating migration to a two-factor authentication (2FA) system. Normally, when I encounter a system that is frustrating and difficult to use, my first thought is “This has to change or we’re going to lose customers. It’s not up to them to learn our frustrating protocols; it’s up to us to make them less frustrating.” But with 2FA, I think we may have a situation where the difficulty is simply fundamental to the goal, which is to avoid unauthorized network access — and the stakes are very, very high. This really bothers me. I hate putting barriers between our patrons and the content they need, even when it’s necessary, and I worry that they will find ways to hack around them — and that by doing so they will endanger their own security and that of our students and our campus.

David Crotty: I’ll simply refer back to my post on privacy from earlier this year. There’s a strong tradition among libraries to protect the privacy of their users’ activities, but given the surveillance society in which we live, I question whether that ship has already sailed. If Google, Facebook and Amazon all know exactly which websites your patrons go to, which articles they read and what links they click on, is this battle already over?

Michael Clarke: Users, of course, want frictionless access to the information they use. They (naturally) want no paywalls but also easy access to the many submission and review systems they interact with (for both papers and conference abstracts and proceedings), grant review systems, data repositories, and the myriad other systems professional scientists and scholars interact with on a regular basis. While they want frictionless access they also want strong security. And while they want absolute privacy, they also want customization – relevant alerts and recommendations.

Publishers also want security, but of a different sort. They want their content to be secure and know that it won’t end up being pirated. They further want to know something about the user for purposes of improving product development, providing more relevant recommendations and other sorts of personalization, informing marketing, and making more money from advertising (bearing in mind that very few publishers make any real money from digital advertising). Finding a way to balance the desires of users (for frictionless access, security, personalization, and privacy) is challenging enough before factoring in the interests of publishers and other content providers to say nothing of institutions, funders and other stakeholders.

I will not pretend to know where the balance point is (while acknowledging that there is unlikely to be a single balance point). I do suspect that we are at a juncture where standards and expectations — for security, for authentication, and for privacy — are likely to undergo substantial change. Privacy is of increasing concern to users and security is of increasing concern to both users and publishers. The existing authentication protocols (which in the STM and scholarly publishing sector are largely based on IP authentication) are no longer sufficient (see Sci-Hub), but no alternative that is likely to generate widespread adoption has emerged. This problem has long been acknowledged — publishers and institutions developed Shibboleth many years ago to replace IP authentication, but it has not been as widely adopted as was hoped. A new authentication protocol is sorely needed.

Todd Carpenter: The fact is everything we do online is traceable and monitor-able at some level. There are connections created when we view a page that are far from obvious, and inferences discernible from our activity signal potentially private information even without our knowledge or explicit awareness. For example, when we view a library catalog page, data about that book search is often provided back to Amazon so that it can provide a cover image. As the network upon which we operate has grown more complex and the analytical tools more robust, the signals that we send to the world about our behavior become increasingly insightful about our collective and individual beliefs, proclivities, and interests. Another example was how Target calculated that a teenage girl was pregnant before her family did, based on her purchases of vitamins and unscented lotions. And this extends beyond our online activity into the physical world as well. Several years ago, I was at a technology meeting in which facial recognition software was demoed, and now that technology is ubiquitous, if not as good as what is available to governments and marketers. The potential privacy implications of these data collection and analysis possibilities are significant.

Where does this leave the scholarly communications community? The library community long ago staked a claim in the domain of intellectual freedom and privacy protections, and they continue to be fierce advocates. Publishers haven’t had to deal with many of these concerns in quite the same way as libraries, as they previously sold content and had little end-user interaction. This changed with licensed content online provided via publisher websites. Now content providers must focus on patron privacy issues for ethical, contractual, and legal reasons. Similarly, vendors who intermediate between publisher, library, and end user are also capturing data from which private information could be gleaned. Our world is much more complex with regard to privacy and security than it has ever been.

Privacy protection consists of a variety of elements. It begins with the security of the data itself, then extends through trust, respect, and sensitivity to users’ expectations of privacy. Protection requires trust because no system is perfect; it is important that reasonable efforts are put forward, not only to protect data but also to govern practice once data is collected — to not share it. Finally, sensitivity to privacy concerns is important because, while some activities might not be legally or technologically prohibited, customers deserve some measure of respect from content providers and intermediaries such that the proper thing is done regarding user data.

Secure authentication systems require a strong identity management system. A content provider needs to know that a person is part of a subscribed group in order to provide access, be that individually or as a member of an institutional group. However, securing systems doesn’t necessarily mean that those systems can’t provide basic privacy protections. The authenticating site does not need to know who is making the request, only that the requester is entitled to the content being served. Systems can be built that are simultaneously focused on security and sensitive to privacy; they need not be exclusive notions. While I don’t believe that blockchain is the solution to privacy problems, there are potential lessons to be learned from this technology, and other solutions similarly based on these dual principles can be advanced as well. Of course, simply because information is protected does not mean that privacy is provided. Data, once captured, requires respect and care in order to remain private. This includes data protection, limitations on sharing, and policies that support privacy for data that is collected. NISO’s Privacy Principles initiative provided guidance on this for publishers, software providers, and libraries.

A belief in both security and privacy need not exclude the idea that advertising and marketing are valuable services in the world. Being aware of potential offerings, deals, and products of interest need not be excluded from a world that protects information and respects privacy. For example, there are a variety of products I use regularly, and I value targeted offers about those things. I would also love the companies I am loyal to to respect that loyalty and reward it with discounts or special offers. That doesn’t mean that I want my inbox flooded with a daily “special, personalized offer”. It also doesn’t mean that I want that company to troll my network behavior to pinpoint the times when it can best serve me the best offer. That is creepy and will drive me away. There needs to be intentionality in my actions and the offers I receive. Companies shouldn’t presume they know me better than I know myself — even if through data analytics they could.

Ultimately, the balance between these four elements needs to reside on the side of the user or patron. Decisions taken by content suppliers — be they libraries, suppliers, publishers, software providers, or tertiary data providers — about whether data may be used should be based on the best interests of the user, not the institution serving the user. This doesn’t mean that a company can infer a benefit to the patron and use that inference to justify invading the user’s privacy.

Realistically, the line between these four things is not a bright, well-delineated barrier. It is a fluid line that is incredibly porous and vague. Some data may leak through, while other data may be fiercely protected. Some providers may draw the line at one point, while others draw it elsewhere, and it continues to shift, often at different places based on jurisdiction. Unfortunately, there is no obvious right or wrong place to draw the line between privacy, security, authentication, and marketing. Often, we will only know where the line was after it has already been crossed.

Jill O’Neill: I am not sure there is one. There is an ongoing tension with regard to control in the interaction between vendor and customer. And as long as both sides feel that their needs should take precedence over the needs of the Other, there will be a certain failure of respect.

Protecting a digital or physical asset (security) and ensuring that those who have paid are the ones actually entering within the bounds of that protection (authentication) is a legitimate concern for any vendor – whether it be Nordstrom or Elsevier. The intent of the vendor is not to preclude active engagement with the product, but to ensure that the seller is dealing directly with a legitimate customer base. It’s a commercial transaction and it’s not unreasonable that one expects to know the identities of all parties to the transaction.

But with regard to marketing, as the customer, if I don’t want or need your product or service at a particular moment in time, then your message pressing me to take advantage of its availability is a source of distraction, an invasion of mental space. It doesn’t matter if it is political advocacy (via robo-calls) or a slightly less intrusive marketing email. It is not the information I need and I resent your bid for attention. Marketing is a necessary activity from the perspective of a business trying to get the word out; that perception doesn’t hold if one is the recipient of an undesired message.

None of the above is confusing. It’s when we begin to discuss privacy (a personally- and culturally-defined social construct) that business demands create conditions conducive to rejection. There may well be individual experiences that transform benign marketing in one context (browser cookies, emails, etc.) into an uncomfortable reminder of a confrontation in a parking lot or a sad reminder of a loved one’s passing. The vendor cannot fairly be blamed for their ignorance of the pain evoked by the message; after all, how are they to know something which an individual has sought to keep private? There arises a new tension between control and respect. The best that either vendor or customer can hope for is that respect for the protection of “fences” will be accorded to both.

The current philosophy of business holds that sustainability resides in activities that scale to the highest degree of efficiency and that any activity impeding such efficiency (such as dealing with the individual consumer and his or her notion of privacy) is undesirable. It may even be irresponsible. Again, it’s an issue of perception. Do businesses see customers only as individual transactions fueling the momentum of the economic engine, or do they perceive their customers as individuals struggling to communicate with, or perhaps fend off, a faceless (and mechanistic) entity? Until a new philosophy emerges that respectfully subordinates the economic contract to the social human-to-human exchange, we’re not going to eliminate the tension or achieve the needed balance.

Lettie Conrad: There is no universal answer to this question; I think this needs to be negotiated at the individual / community level — meaning, the reader and the university, the corporation, the society, etc. The subjective trade-offs between information access and individual rights will fluctuate depending on the prevailing information experiences and societal priorities of a group. If access is scarce (for example, where economic resources are limited), our threshold for giving up liberties, security, and privacy increases. Personally, I’m more willing to give up personal data and tolerate daily marketing emails, some of which are neither relevant nor useful, so that I can maintain currency and privileged access via my favorite channels of information.

_________________________

Reading the responses from the Chefs, and even embedded in the question itself, it’s clear to see that this is a balancing act. How much privacy are you willing to relinquish for convenience? How much effort (authentication) are you willing to expend for security? Do you find it helpful when products and services follow you around on the web? Creepy? Somewhere in between? Does your assessment depend on how interested you are in the item that’s stalking you?

None of those questions can be answered in generalizations. As both Jill and Lettie point out, preferences vary both culturally and situationally. They vary at the group level and at the individual level. So now it’s your turn!

Where is the balance between security, authentication, marketing, and privacy?

Or should the question be – Is there a balance?

Ann Michael

Ann Michael is Chief Transformation Officer at AIP Publishing, leading the Data & Analytics, Product Innovation, Strategic Alignment Office, and Product Development and Operations teams. She also serves as Board Chair of Delta Think, a consultancy focused on strategy and innovation in scholarly communications. Throughout her career she has gained broad exposure to society and commercial scholarly publishers, librarians and library consortia, funders, and researchers. As an ardent believer in data informed decision-making, Ann was instrumental in the 2017 launch of the Delta Think Open Access Data & Analytics Tool, which tracks and assesses the impact of open access uptake and policies on the scholarly communications ecosystem. Additionally, Ann has served as Chief Digital Officer at PLOS, charged with driving execution and operations as well as their overall digital and supporting data strategy.

Discussion


Not mentioned in these comments is the special challenge that can exist for university presses as publishing houses doing business from within a university, whose policies and procedures, including in technology, may not be maximally beneficial for those presses operating as businesses. E.g., at Penn State when I was press director and we wanted to start an online order fulfillment system, the university’s own IT division was not set up to handle or authorize such transactions, and we ended up having to establish our own separate domain name in order to conduct our business efficiently.

Having failed to submit a response to this question (sorry Ann!) I will add my thoughts here instead.

It seems to me that a vital component of having any chance of balancing these four elements is transparency, i.e., allowing people to make informed decisions about where their own personal balance lies (and that will vary, as Lettie notes) because they know exactly what they’re signing up to (or not). This is very rare at present. Unless you’re one of the presumably tiny number of people who do actually wade their way through the pages and pages of fine print before accepting Google’s or Apple’s or whoever’s license agreements — which, to me, do NOT constitute transparency — then most of the time you really don’t know what you’re agreeing to or how your data is going to be used.

We at ORCID have been doing a lot of thinking about these issues this year as part of our ORCID Trust initiative (https://orcid.org/about/trust/home), which is intended to provide greater transparency around our principles. One of these is that researchers should maintain control of their own data, which means they get to decide what information to connect to their ORCID record, who’s allowed to access it and for what purpose(s), what data is publicly available, shared with trusted parties, or kept completely private – all of which can be changed by the researcher at any time. I’m not going to claim that we have it all worked out – we have a long journey ahead of us. But we are committing pretty significant resources to it, including a full-time privacy specialist (who started today, in fact!) because ensuring that our community has confidence and trust in us around these issues is vitally important.

My experience is that the balance point really depends on your context. Some publications derive their value from unique, high-value content that warrants the extra cost/friction of greater security. Other publications derive a competitive advantage from the features and functionality provided alongside the content, and value ease of access over security. Some users value the personalisation available from sharing their preferences, and others are horrified by it. And some institutions face major security concerns, while others simply want their resources to be used more. As context can also be dynamic, appropriate security may need to vary from one session/user to another depending on usage patterns, location, content served, etc.

As content is re-purposed across diverse platforms, products and markets, it’s no longer possible to rely on a single approach (balance point). But traditional authentication infrastructures struggle to support the flexibility needed to address these different contexts.

Like death and taxes, technological change is a certainty here and the industry needs more agile and cost-effective identity and access management solutions that allow stakeholders to select the balance that best meets their needs.

A belated addition here – I’ve just been reading about Citizen Me http://citizenme.com which seems pertinent to David’s points about “controlling your own data” – perhaps not such a lost cause after all?

It is a whole new world and, like so many other things that have led to the present, it is spiralling out of control. A necessary evil of sorts, unfortunately. Every time my husband goes to Target, since the security breach years back, he brings up their tracking and information storage of customers. This is also what has changed politics so quickly; I think people forget how much the media knows. Now that they can tap into our thoughts, wants, and desires, they can convince us of things in a way that we are unaware of. It is more important than ever to be cautious, mindful, and think critically to avoid being taken advantage of!

I really do think that we can’t do anything about that, but the one thing that can be done is to become aware of the things that we don’t want and don’t find necessary. Marketing is mass psychology, and at every moment we can put a wall between what we see, what we want, what we need, and what is ridiculous.
Authentication is for the ones that live inside the machine, so that data can be broadcast through marketing channels that in return create a psychological folder and files based on our wants and wishes; so here we return to the marketing factor.
Security is a big failure because our data is violated by people brainier than us, but I always wondered whether, if the security system were more intelligent and really good, there would still be so many bad people: drug dealers, criminals, pedophiles, all the people who commit fraud by robbing innocent people.
And as for privacy, we are free to do as we wish and google whatever we want, but it becomes a big problem when our privacy is violated and distributed to people who have no business knowing our private status; then it becomes disgusting.
So my conclusion about all that comes down to the real intentions of the ones who provide security and privacy: if they themselves are no good at anything, how will they be able to bring balance to all these factors?
From someone who has a big phobia about personal security.

Some of these smart people who keep determining our lives and telling us who we are and who we ought to be should pull up their socks and justify themselves.

In recent months my impression of internet marketing is how crude rather than how effective it is. It seems to always put things I’ve just bought in front of me as if I might like them so much I’ll immediately want to buy another one.

On reflection this may be a variation of the point that we object to internet marketing more when they get it wrong. When something appears that captures my interest I am by definition distracted and the question of how it got there is forgotten.

To an extent then, I’m happy when it works but there is a more sinister side to this which hasn’t really been touched on and focuses more on content served up on social media than straightforward adverts I know are trying to sell me something.

People are becoming increasingly shocked by the result of elections. To a degree this is because the political elites on both sides of the Atlantic are under fire and there are some lamentable populist protest vote options available. But it is also due to the fact that social media keeps us in an artificial bubble by serving us content that is likely to dovetail with our values and opinions. Algorithm politics makes for a more comfortable experience – fewer confrontations with people whose opinions jar with our own, but it paints a false picture where we start to believe everyone thinks like we do.

People making serious political points end up preaching to the converted. Our online experience is reassuring and affirmative but increasingly artificial. This has totalitarian implications that are perhaps more worrying than anything else.

I work in digital marketing, so I am one of those people who create the ads that follow you around the internet and are almost “creepy” in how much they know about you. The data that we collect about people is very secure and private, though, and I work for a large institution that has strict rules on these things. I often find myself thinking in depth about the philosophical implications of my work.

The worst part is that there are very few choices. On the other side, I am a business owner and so are my clients. We depend on the data collected to target the right viewer, so I am in a hard position.

It’s so creepy that when I search for something online, the Facebook ads are spot on with what I am looking for, but I guess when we’re online we really have no privacy.

Anything posted on Facebook changes the ads; it doesn’t have to be a search. I think even the people who “like” posts, and what they say, change the ads you see. The system is very sophisticated. I have long suspected that all of the cutesy “what kind of a princess are you?” or “what is your Christmas movie?” quizzes are collecting serious data about you for marketing purposes. It also occurred to me the other day that some of the games could be collecting an awful lot of information about how people think and react that could be used for manipulative purposes.

It sounds a bit paranoid, but we just saw an election that was purely about perception and manipulation of perception. The outcome was most likely determined by a foreign government skilled at doing precisely that.

Privacy isn’t possible with the internet. That doesn’t make the internet useless; it is like having the curtains to your living room open: if you don’t want your neighbors, the mail carrier, or whoever is driving by to see it, you’d best do it somewhere else. The real problem is that “somewhere else” is harder and harder to find… Have you paid cash at a brick-and-mortar store recently? Maybe we should all be looking for opportunities to do that so our options don’t all disappear.
