
A recent video on All Things Digital got a writer at Folio thinking about the future of the human editor, a role that might seem as immune to automation as any.

However, as in any craft, when the toolset changes radically, the craftsperson can either take advantage or be replaced to some degree — if not completely.

In the video, the head of ChartBeat, Tony Haile, says he believes that editors of the future will be able to use new analytical tools to monitor readers’ needs and deliver relevant content almost immediately. To do this effectively, they will be one of two things — cyborgs or robots:

. . . the industry is moving away from the “fire and forget” model of posting something and just hoping for the best . . . editors in the future will be either “cyborgs” or “robots” . . . cyborgs being “people who are enhanced by technology” and robots as “people who are replaced by technology.”

ChartBeat works by providing real-time analytics to content producers. Its flashing dashboard tells you which stories are becoming more popular, which ones are fading. There’s a manic quality to it, and the theory is that content producers can respond in real-time to user demand by sourcing similar stories to extend the engagement, surfacing related items from the archive, and so forth.
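As a rough sketch of how that “rising or fading” signal can be computed, here is an exponentially decayed click counter in Python. The decay rule and half-life are illustrative assumptions, not ChartBeat’s actual method:

```python
import time
from collections import defaultdict

HALF_LIFE_SECONDS = 300  # assumption: a click loses half its weight every 5 minutes

class TrendingTracker:
    """Exponentially decayed click counts: rising stories climb the
    ranking, stale ones fade, with no batch recomputation needed."""

    def __init__(self):
        self.scores = defaultdict(float)     # story_id -> decayed score
        self.last_seen = defaultdict(float)  # story_id -> last update time

    def record_click(self, story_id, now=None):
        now = time.time() if now is None else now
        elapsed = now - self.last_seen[story_id]
        # Decay the old score to the present moment, then add this click.
        self.scores[story_id] = self.scores[story_id] * 0.5 ** (elapsed / HALF_LIFE_SECONDS) + 1.0
        self.last_seen[story_id] = now

    def top_stories(self, n=5, now=None):
        now = time.time() if now is None else now
        def current(story_id):
            elapsed = now - self.last_seen[story_id]
            return self.scores[story_id] * 0.5 ** (elapsed / HALF_LIFE_SECONDS)
        return sorted(self.scores, key=current, reverse=True)[:n]
```

Because every query re-applies the decay, a story nobody has clicked in a while sinks down the list on its own, which is what gives a dashboard like this its manic, self-updating feel.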

At first, ChartBeat struck me as the type of thing that content farms or news sites would be most interested in. After all, surfacing content and chasing audience are what they’re all about. For scholarly publishers, there’s less of a frantic pursuit of audience qua audience. Also, how keepsake scholarly information is generated isn’t really comparable to journalism or opinion pieces — if a paper on a new drug or method proves popular, you probably can’t get another published a few hours later.

However, relevance and engagement are still major ambitions for scholarly publishers, and with large archives online, a desire to provide that elusive “one-stop shop,” and users who are moving into less direct engagement because superior curation exists in other venues, approaches like this can feel mighty appealing.

A major challenge for many publishers is to make use of their online analytics. The data exist, but often access to them is delayed by so many days or weeks that by the time you have them in hand, the opportunity to exploit traffic or trends is long gone. This has made analytics an armchair hobby instead of an active endeavor. Real-time stats in the hands of a cyborg editor could change this. Some publishers are responding by creating new roles, such as Retention Writer or Engagement Editor, roles in which people use data to make editorial decisions and choices.
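Even a crude rule captures what such a role might automate first. In this hypothetical sketch, a story is flagged for an Engagement Editor when its latest hour of traffic far exceeds its recent baseline; the threshold and data shape are invented for illustration:

```python
def is_spiking(hourly_views, factor=3.0):
    """True when the most recent hour's traffic exceeds `factor` times
    the average of the preceding hours: the cue for an editor to act."""
    if len(hourly_views) < 2:
        return False
    baseline = sum(hourly_views[:-1]) / (len(hourly_views) - 1)
    return hourly_views[-1] > factor * max(baseline, 1.0)

# A story averaging ~42 views/hour that suddenly does 180 gets flagged:
print(is_spiking([40, 35, 52, 41, 180]))  # True
```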

Moving from cyborgs to robots seems a perilous journey until you begin to integrate semantic technologies. If you combine real-time metrics with a sense of what people are clicking on at the conceptual level, your robot could be smart enough to surface related content based on more than just ephemeral clickstreams.
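A minimal sketch of that idea, assuming articles have already been tagged with concepts by some upstream semantic indexing step (the article IDs, tags, and scoring rule below are invented for illustration): recent clicks vote for concepts, and unread archive items sharing those concepts are ranked for the “related” slot.

```python
from collections import Counter

# Hypothetical archive: article ID -> set of concept tags.
archive = {
    "paper-1": {"anticoagulants", "atrial fibrillation"},
    "paper-2": {"atrial fibrillation", "ablation"},
    "paper-3": {"statins", "lipids"},
}

def surface_related(recent_clicks, archive, n=2):
    # Each recently read article votes for the concepts it carries.
    concept_votes = Counter()
    for article_id in recent_clicks:
        concept_votes.update(archive.get(article_id, set()))
    # Score unread archive items by the concept weight they share.
    seen = set(recent_clicks)
    def score(item):
        return sum(concept_votes[c] for c in archive[item])
    candidates = [a for a in archive if a not in seen]
    return sorted(candidates, key=score, reverse=True)[:n]

print(surface_related(["paper-1"], archive))  # ['paper-2', 'paper-3']
```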

Given the abundance of data, perhaps one of the most audacious changes would be moving not just beyond a “fire and forget” mentality but toward a results-oriented one. I remember one editor asking me, in front of a group of editors, exactly how many hits his article had received online since publication. When I answered “zero,” he was crestfallen, but accepted the reality of it. Print sustains the illusion that every word on every page is read. Online, there are far fewer illusions, which leads to the idea that perhaps instead of paying for output, you:

. . . evaluate and pay editors not by the quantity of stories they generate but by the traffic and response they produce.

Would an editorial board put on a brave face if its compensation were adjusted based on analytics? Would a human editor? A cyborg?

Robots may be the only ones who wouldn’t rationalize their value as superseding the data.

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.

Discussion

5 Thoughts on "Will Your Next Editors Be Cyborgs or Robots?"

An interesting thing about science journals is that the readers are also the actors. This is very different from the news; in fact, it creates a positive feedback loop of sorts. Fields emerge as new ideas are picked up and used, not just watched.

Journals are supposed to be following and revealing the moving frontier of their specific science. If a lot of researchers are looking at something new, a result or a method, that attention may act as a “pre-citation,” a sign that something in the paper is spreading. This is important data for the community.

Today journals are running several years behind the scientific thinking they are tracking. If we can show where that thinking is going, instead of just where it has been, it could be very beneficial to our readers, and commercially beneficial to us as well, especially if we can help them along.

As the editor of a laboratory methods journal, I can see a tracking system like this becoming a valuable asset for a product like ours: the data could be sold to reagent and equipment companies, where it would influence the direction of product development.

As far as directing readers to related content goes, I’m not sure this will be particularly effective for this audience. For one thing, scholarly articles already come with a list of related content: the references. Can your cyborg/robot editor provide a better level of expertise on a very specific topic than a scholar working in that field?

Perhaps more important is the issue of siloing. I can see a system working if it is publisher/journal agnostic. If you can scan the entirety of the literature and provide the most relevant content to the reader, that might be appreciated. But if that content is strictly limited to the journals your company publishes, it’s unlikely to satisfy the reader. Researchers read articles; they don’t read journals, and they certainly don’t read publishers. For such a system to be useful, you have to be willing to send the reader away from your properties and to your competitors. As such, it becomes less a way to drive internal traffic and more a service provided to your readers, and it becomes unclear how it then pays for itself.

And as we still live in a world where most readers download the pdf and either print it out or read it at a later date, real-time analysis and updates may prove impossible.

Two immediate thoughts. First, references tend to be rather dated by the time they appear, so a system like this could provide a real-time complement to a months-old reference list. Second, a downloaded PDF could also include a count of the number of related but unread articles that existed at the time, and perhaps a short timeline of when they appeared, giving the user a sense of whether related material on the topic is still growing, which could drive them back to the site or to other related sites.
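As a back-of-the-envelope sketch of that snapshot, with invented dates standing in for real publication metadata:

```python
from datetime import date
from collections import Counter

def related_snapshot(related_pub_dates):
    """At download time: how many related articles exist, and when did
    they appear? A rising monthly count suggests a still-growing topic."""
    timeline = Counter(d.strftime("%Y-%m") for d in related_pub_dates)
    return len(related_pub_dates), dict(sorted(timeline.items()))

count, timeline = related_snapshot(
    [date(2011, 2, 10), date(2011, 4, 2), date(2011, 4, 20), date(2011, 6, 5)]
)
print(count, timeline)  # 4 {'2011-02': 1, '2011-04': 2, '2011-06': 1}
```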

Good point. The references will give you a list of related articles that were already available at the time of publication. But if you’re reading an article that’s a few years old, it might be nice to see if there’s anything more recent on the subject.

For the PDF, you’d need to generate that list live, as the reader requests and downloads the file. I’m thinking of the sorts of PDF “wrappers” many platforms offer, which let a publisher include an advertisement or other material with a PDF; perhaps you could build a plug-in there that would quickly generate the list and include the links.

Doesn’t solve the question of only offering siloed information from one publisher, but it does sound possible.
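As a hypothetical sketch of what such a plug-in might do (reportlab and pypdf stand in here for whatever tooling a platform’s wrapper actually uses): generate a one-page list of freshly fetched related links and prepend it to the article PDF at download time.

```python
import io
from reportlab.pdfgen import canvas
from reportlab.lib.pagesizes import letter
from pypdf import PdfReader, PdfWriter

def wrap_with_related(article_pdf_path, related, out_path):
    # Draw a cover page from (title, url) pairs fetched at download time.
    buf = io.BytesIO()
    c = canvas.Canvas(buf, pagesize=letter)
    c.drawString(72, 720, "Related articles at time of download:")
    y = 700
    for title, url in related:
        c.drawString(90, y, title)
        c.linkURL(url, (90, y - 2, 540, y + 10))  # clickable region
        y -= 18
    c.save()
    buf.seek(0)

    # Prepend the cover page to the original article PDF.
    writer = PdfWriter()
    writer.add_page(PdfReader(buf).pages[0])
    for page in PdfReader(article_pdf_path).pages:
        writer.add_page(page)
    with open(out_path, "wb") as f:
        writer.write(f)
```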
