Publishing is an information technology business. If we look back at the history of our industry, a central theme is the application of technology to increase the speed at which scholarly information is disseminated. The operative word here is application. A desire and willingness to invest in technology is not enough; it has to be applied in thoughtful ways that solve specific problems that customers and end-users have.
Publishers struggle to apply technology, even when they can see the benefit. As both a publisher and a technology vendor, I’ve found that even when a technology is clearly a good idea, with a clear business case and little or no development needed, the organizational bandwidth to integrate it can be frustratingly elusive. It’s not for want of willingness to invest; many publishers spend a great deal of money on platforms of various sorts. Instead, the difficulty lies in creating the processes around the technology that are needed to make the best use of it.
The problem isn’t technology, it’s process.
My second job in scholarly publishing was running an editorial department. The company was quite young at the time, and a big part of my job was to make sure the department was as efficient as it could be. One of the first things I did was to figure out exactly how we were handling manuscripts. I had to understand and visualize the flow of documents through the system. So I did what most people would do: I asked everybody what they did and drew a flow chart.
I didn’t realize it at the time, but I was employing value stream mapping, a technique from lean management made most famous in manufacturing by Toyota. The basic principle is that any serial process can only go as fast as its slowest workstation, so if you want to go faster, you need to find that bottleneck and work out how to relieve it.
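The slowest-workstation principle can be sketched in a few lines of code. This is a minimal toy model, not anything from my actual editorial workflow; the station names and throughput figures are invented for illustration.

```python
# Toy model of a serial process: end-to-end throughput is capped by
# the slowest workstation, which makes that station the bottleneck.
# Station names and rates (manuscripts/day) are hypothetical.
stations = {
    "submission checks": 40,
    "peer review admin": 12,   # slowest station: the bottleneck
    "copyediting": 25,
    "typesetting": 30,
}

bottleneck = min(stations, key=stations.get)
throughput = stations[bottleneck]

print(f"Bottleneck: {bottleneck} ({throughput} manuscripts/day)")
# Speeding up any other station leaves overall throughput unchanged;
# only relieving the bottleneck makes the whole line faster.
```

The point the model makes is the counterintuitive one: investment anywhere other than the bottleneck buys you nothing.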
Publishing obviously contains workflows, as manuscripts move through submission, editorial, production, and dissemination activities. What’s less well understood, not only in publishing — and in libraries — but in many sectors, is that IT functions are also production lines. IT production lines are harder to see: they don’t have the advantage of having all the workstations laid out one after another as in a car factory, and there isn’t necessarily a document you can trace through the system as it gets transformed, but a process is definitely there.
Lack of visibility of the workflow harms technology delivery. It’s too easy to see technical operations as a number of functional units that each perform a task, like vendor management or server provisioning.
In reality, whenever a company wants to do something new, whether it’s a small business change or a new product line entirely, there is a process. These processes start with the original idea, move through a series of product or project management phases, solution design, sometimes development is involved, sometimes vendors are used, but ultimately a solution is deployed, usually involving IT operations staff.
Challenges occur when the work done at one stage in the process is either incomplete or inaccurate, or isn’t communicated in a way that the next person or group in the value chain can understand. In some cases, individuals may not even know who the next person in the value stream actually is, because no one has ever mapped it. Worse still, necessary workstations can be missing entirely; in those cases, everybody wonders why project after project grinds to a halt. Problems can go unnoticed, or be unintentionally passed on at one point in the process, only to cause much bigger issues downstream.
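The "no one has ever mapped it" failure can even be made mechanical once a value stream is written down. The sketch below is a hypothetical example — the stage names, owners, and structure are invented — showing how an explicit map immediately exposes a stage that doesn't know where its output goes next.

```python
# Minimal sketch of a mapped value stream: each stage records its owner
# and the stage that receives its output. All names here are hypothetical.
value_stream = [
    {"stage": "idea",            "owner": "product",    "hands_off_to": "solution design"},
    {"stage": "solution design", "owner": "architects", "hands_off_to": "development"},
    {"stage": "development",     "owner": "vendor",     "hands_off_to": None},  # unmapped hand-off
    {"stage": "deployment",      "owner": "IT ops",     "hands_off_to": None},  # end of the stream
]

# Every stage except the last should know its downstream recipient.
# A None in the middle is exactly the unmapped hand-off described above.
unmapped = [s["stage"] for s in value_stream[:-1] if s["hands_off_to"] is None]
print("Stages with unmapped hand-offs:", unmapped)
```

Even this trivial representation does what the flow chart on my whiteboard did: it forces every hand-off to be named, so the missing ones become visible.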
So what can be done about this?
In 2009, at the O’Reilly Velocity conference, the heads of operations and engineering at a growing photo-sharing site gave a joint presentation called “10+ Deploys Per Day: Dev and Ops Cooperation at Flickr”. That presentation spawned the term “DevOps”. In reality, it was the application of Lean methodology to the software development value stream, all the way through to delivery. It was a breakthrough in technical operations, and since then almost 70% of small to medium-sized businesses in the US have come to apply it to some degree.
While DevOps is a huge step in the right direction, it’s limited in that it assumes the process begins once the business decides what it wants to do with IT. This can leave organizations in a difficult position where traditional, waterfall-style project management techniques are employed right up to the point of hand-off to IT. The result: product managers (upstream of IT) struggle to understand what developers or vendors need to know to be successful, while IT teams look at an ever-growing to-do list and somehow carry the de facto responsibility for prioritizing, despite not being best placed to make prioritization decisions. In turn, this leads to management and organizational frustration while everybody wonders what all those IT people are doing all day.
The solution lies in understanding the value stream all the way from ideation to delivery, and managing the work through an end-to-end governance mechanism. There are multiple ways to achieve this. For smaller organizations focused on new and innovative products, Agile project management and techniques like flash builds immerse product managers in the build process, where they act as a direct proxy for customers, guiding development at every stage. For larger organizations, portfolio management offers a way to help business stakeholders better interface with IT, build a shared understanding of business goals, and guide prioritization.
These approaches sometimes require a shift in mindset that doesn’t come easily, and a willingness to expand people’s comfort zones. That said, as technology becomes ever more important to publishers and the pace of change continues to accelerate, publishers have to find ways to better integrate technology operations into their overall business strategy.