The past year has been a wake-up call, rousing us from a rather somnolent walk with technology down a long and unknown path.
A decade after one of the most exciting innovation periods on record — the Kindle and the iPhone both launched in 2007, PLOS ONE debuted in December 2006, and Facebook opened to the general public in late 2006 — we awoke to find ourselves in the midst of some of the implications and unintended consequences of these and subsequent innovations.
The Kindle’s effect has been less than imagined, dampened by the market’s tepid response to e-books. As a disruptor, the Kindle has been incremental and manageable for publishers, while accelerating Amazon’s domination of the retail book space. The biggest change attributable to the Kindle has been the emergence of self-publishing as a viable path to marketing and selling books.
By contrast, the iPhone, and the smartphone more generally, has transformed the world, disrupting industries (GPS, music players, cameras) and creating or amplifying new marketplaces (apps, podcasts, e-commerce).
Social media has arguably been even more disruptive on personal, social, and commercial levels.
PLOS ONE kicked off the commercial era of the megajournal, the journal cascade, and the open access article processing charge (APC). It is still unclear which of these, or which combination of these, could lead to a major disruption of the industry.
Here, then, in no particular order, are some of the lessons I think we learned this past year, albeit at different rates and undoubtedly with differing interpretations.
Social media is not benign. By now, the litany is familiar — the degree to which social media was exploited in 2015-2017 to influence votes in the US, UK, and EU, to drive social fractures along racial and economic lines, and to advance various political factions was finally brought to light. In addition, the damage social media is doing to young people’s psyches also became more apparent, as the tolls of suicide, depression, and social isolation mounted.
Even when social media was used for ostensibly positive responses to injustice like #BlackLivesMatter and #MeToo, these campaigns revealed some persistent downsides of social media — its tendency to instill a mob mentality and its ephemeral nature. Both social media movements created an unknown amount of collateral damage that more deliberative and less public processes might have avoided. As the year ended, being accused of racism or sexual misconduct of any degree or type (or even through the willful misinterpretation of facts) had become akin to receiving the marked slip in the social media equivalent of “The Lottery.” Social media is designed as a cudgel, not a courtroom. Both movements were also more fleeting than they deserved to be, as if we collectively ticked a box by having outrage for a few days or weeks on social media. Deeper structural and social change is needed, which is well beyond the scope of social media.
We learned even more starkly this past year that social media can be weaponized, misused, and exploited, and that it is no substitute for genuine deliberation when the topics are serious, complicated, and demand care to get the best and most enduring outcomes.
Technology is not benign. While dystopian visions of technology are familiar, it’s always worth noting that sometimes reality doesn’t square with visions of what might be. Our weakness for design, convenience, and entertainment has created surveillance opportunities via our webcams, GPS, microphones, and televisions, as machine learning and ubiquitous sensors brought these into the mainstream, while the old standby, email, proved as fraught as ever. At some point in 2017, most of us likely began to look askance at the new technology being introduced — from “smart home” technologies to our apps to our smartphones — and realized the potential for incursion, profiling, and monitoring. More of us became adept at turning off tracking services, shutting down microphones, and covering cameras. California may be setting another concern front and center by warning people about the dangers of cellphone radiation. Net Neutrality and its repeal in the US also mean that there is a new area ripe for exploitation and contention — the utility layer of the Internet. Some of the human impulses (e.g., greed) we are asked routinely to rein in are amplified by technology, then combined with inadequately regulated positional power.
Hacking has collateral damage. During the past year, hacking of Equifax, the US Democratic National Committee, Verizon, the US National Security Agency, Bell Canada, and Uber (among many others) showed that running businesses and our lives on the Internet, which is an inherently insecure medium, has risks that ripple well beyond any particular incident. With identities stolen, cyberweapons on the open market, and more, the collateral damage from hacking became profoundly worrisome in 2017.
For us, hacking helped bring Sci-Hub into existence, possibly as part of the overall Russian hacking campaign we now know was used against Western democracies. We may feel less self-important with this realization — our papers weren’t the real target, just a means to an end — but the fact that publisher systems may have presented an easy side door into libraries and other academic systems is another realization we should carry forward.
Open Access publishing is intensely commercial. Initially viewed by some as a counterweight to commercialism and consolidation in the scholarly publishing space, open access (OA) publishing has proven to be just the opposite, as its dependence on volume and the concomitant benefits of economies of scale drive consolidation and further the commercialization of papers. PLOS ONE, which was the first notable megajournal, is now overshadowed by some it inspired, mainly due to the larger commercial footprint and related cascade volume these organizations can bring to the recruitment and retention of papers. Now, the largest OA publisher is Elsevier, and the largest megajournal is owned by Springer Nature. This feels like an inflection point.
Algorithms are not benign. Feeding social media, search, various apps, and our general online experience, algorithms became a hot topic this year, as their “black box” nature was called into question — for reasons that included the realization that many algorithm designers embed hidden biases and don’t even know what their algorithms do. These are potentially alien intelligences that will pursue their ends — no matter how trivial — with a zest and ruthlessness we underestimate, in a manner captured beautifully in Ex Machina, the haunting and prescient 2015 movie. On a more pedestrian level, algorithms simply are not constrained in the ways we inherently are. As EU competition commissioner Margrethe Vestager quipped in a recent interview:
. . . these algorithms, they all have to go to law school before they are let out.
There also needs to be accountability for algorithms that misbehave or run amok. This means dropping some of the outdated protections and laissez-faire attitudes we have cultivated toward platforms, for the reason that follows.
Technology platforms have become media companies. Once viewed as neutral platforms that media companies could use to distribute their information, platforms have developed outsized abilities to use algorithms to profile users and content into personalized streams, crossing the line and becoming media companies with specific editorial and business goals of their own. Driven by their advertising-based business models, these platforms promote extreme content to provoke clicks and traffic — both fodder for advertising dollars — via strong agreement or strong disagreement, inherently driving social polarization. Whether it’s search companies like Google or social media companies like Facebook, technology platforms now need to grow up and accept the responsibilities that come along with their riches from political and commercial advertising.
This point was driven home (pun intended) in the recent decision by the EU’s top court to regulate Uber as a taxi company rather than a technology service or neutral platform. Technology is the means to the end, and it no longer makes sense to regulate the means alone, as the ends are now mature and require regulation themselves.
What’s Ahead for 2018?
When it comes to Facebook and social media, we are seeing divergent responses in the US and Europe. The EU has responded with increased regulation, stronger planning, and a focus on social justice. Meanwhile, the US is allowing greater media consolidation, and has responded in an uncoordinated and impotent manner to the threat of social interference via technology, both political and commercial.
The EU seems ready for the future, while the US seems poised to let commercial interests exploit the Internet, even at the expense of its citizens (e.g., the Federal Communications Commission’s recent repeal of Net Neutrality). Meanwhile, China is shutting down VPN-based pinholes in the Great Firewall and locking out content that it dislikes, both reflective of an increase in their authoritarian control of the medium.
This difference in approaches between the US and EU was captured recently in an interview with European Union competition commissioner Margrethe Vestager, where she summarized the philosophy that has led to major fines being levied in the EU against some of the largest technology companies in the past year:
You have to make sure that it’s not the law of the jungle but the laws of democracy that works.
Given these differences, it’s no surprise that 2018 will likely be driven by some changes emanating from the EU.
GDPR — The General Data Protection Regulation (GDPR) was written specifically to rein in the most exploitative business models of Internet companies by reshaping the ways organizations handle information derived from users of their systems. It takes effect on May 25, 2018, and is generating a Y2K-level of anxiety in IT departments and technology companies. There are some key changes:
- Extra-territoriality, meaning that if you do business in the EU at all, these regulations apply to your organization. You don’t need to have a presence in the EU, just EU citizens using your services. This makes GDPR essentially a worldwide regulation, for all practical purposes.
- Penalties have been strengthened, and there is every indication the EU is serious about imposing and pursuing them. Facebook, for example, has already bowed to European pressure, agreeing to pay millions in taxes and to stop routing its advertising revenues through tax havens that shielded them from local taxation in the markets where the ads appeared.
- Consent must be obtained, and “companies will no longer be able to use long illegible terms and conditions full of legalese.” This frank language hints at some real ire (which many of us share) about how technology companies have foisted one-sided terms and conditions on users.
- Data portability, the right to be forgotten, and the right to access will all lead to fundamental changes in how data are handled, stored, and shared.
To me, the most interesting aspect of the GDPR is the extra-territorial nature of it. That is, now we have a regional law from a market basket of appreciable size that by design applies practically to the entire world. No US entity wishing to do business in the EU can avoid compliance. This precedent is worth watching to see if it holds up, as it’s entirely possible that another set of conflicting extra-territorial laws emerge from another large market. (Another lesson from 2017 might be to urge your children to study IP law.)
Net neutrality — There is no lesson yet to derive from the recent move by the US Federal Communications Commission (FCC) to end net neutrality regulations and allow commercial entities, especially ISPs, more discretion over the traffic traversing their networks. The potential for abuse is obvious — traffic throttled to extort money, punish competitors, or stifle opinions the ISP owner finds objectionable for whatever reason. The fear is that ISPs will hold bandwidth hostage, extracting ransoms from companies wishing to continue business as usual, enter the market, launch new services, or expand and grow. The chilling effect this could have on start-ups and innovation could be significant, hampering development of the US tech sector. State-level laws could effectively reinstate the same regulatory environment, with California (the largest US state economy and one of the world’s largest economies) already moving to do so. More fundamentally, the FCC’s move signals a deep antagonism toward regulating the Internet in the current administration, which itself is built of ideas retrieved from a pile of notions we abandoned in the last two centuries.
RA21, CASA, and WAYF Cloud solutions — An entirely rational response to Sci-Hub was introspection about our authentication and authorization systems, which suffer from years of neglect and are generally under-resourced despite their importance both commercially and as they integrate with other systems. In 2018, we’re likely to see the introduction of new approaches, with CASA (a Google/HighWire approach), a WAYF Cloud solution, and RA21 (an STM industry-wide level-setting initiative) all coming to fruition. These promise greater security, more convenience, and a more viable subscription model that works across devices and locations for users.
Fractious politics — The past year has been full of fractious politics, some of which spilled over into scholarly and scientific communication, with texts and titles banned in China, words banned within the US government, rollbacks of environmental protections in the US, attacks on educational funding and student loans in the US, and concerns about the mobility of scientific funding and scientists in the EU post-Brexit. With little changing apparently in the positioning or leadership of the largest players, and elections due to occur in the US in 2018, we can expect things to become even more fractious in the coming year.
A reassessment of OA — Reading the tea leaves on Twitter and elsewhere, there seems to be a reassessment of OA publishing occurring at many levels — from some originators to librarians to policymakers to publishers to society officers to editors. Concerns that were dormant are now animated, while evidence is accumulating that the benefits may not be manifest, even after so long a period. The upshot of these concerns isn’t at all clear, and it may only amount to hand-wringing and introspection, but there is the possibility that some sea change occurs, even if it’s the nearly inaudible sound of advocates moving on.
A greater role for humans in technology and media companies — Whether as circuit breakers, curators, or creative sparks, humans are more clearly needed than ever in technology and media companies. They can serve as necessary brakes on rampant algorithms, ensure machine learning achieves practical and useful conclusions, and collaborate with machines so that human intelligence guides artificial intelligence applications. There is a policing activity humans need to maintain to ensure that machines and algorithms don’t create unrealities or perpetuate hidden biases.
It’s also worth noting that this year’s reckoning for the “tech bros,” including the infamous Google memo, should lead to greater diversity in tech workplaces. The biases of machines are the biases of humans, and minorities and women are familiar with biases privileged white men probably do not or cannot see. More humans with more diverse backgrounds will be an important part of a future that is fair and reasonable.
Blockchain — I actually don’t think this will matter much for our industry, but I’ll put it here to fool an algorithm into thinking I’ve discussed it in-depth.
Overall, 2018 and beyond will likely be marked by greater legal, technology, staff, and system costs, as privacy, security, regulatory compliance, and competition for talent escalate. It also looks like only a prelude to change, with incumbents consolidating the ground they’ve made over the past decade, using their market power to secure more territory, and cooperating just enough with regulators, customers, and others to avoid obvious missteps. A major theme also appears to be how we reintegrate people into processes that we perhaps prematurely delegated to machines and algorithms.
Ten years from now, what will 2018 have started? What kind of watershed technology or social movements will have left their imprint a decade on?
Only time will tell.