The “early adopters” among us have been in meetings where our appreciation of technology and what it can do is dismissed or downplayed as “geek” or marginal in some way. It happens less and less, but even if the boundary is now navigated in silence, the swell of conceits about technology is still palpable: computers aren’t print, there isn’t enough tradition behind them, they’re only for young scientists or physicians.
In one of my talks, I point out that while technologists have stereotypically been portrayed as “young, marginalized, and powerless,” it’s clear that technology users are now “middle-aged, central, and powerful.”
Yet, this bias against technology and what it can do continues to permeate the upper reaches of corporate decision-making and even the thinking of technologists themselves. Technology is still viewed as something “other,” a set of strange and opaque items foisted upon us just to make money, drawing us away from the age where our aspirations still reside.
As if there’s an alternative.
A new Kindle book, “Race Against The Machine: How the Digital Revolution is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy,” outlines in blisteringly concise detail why there isn’t any choice. Information technology represents a new general purpose technology, like steam, electricity, or the internal combustion engine. General purpose technologies not only create their own industries, but change all other industries by virtue of being extensible and beneficial across the board.
Industries and employees that have been affected are almost too numerous to mention — travel (Kayak and kiosks), books (Kindle and Amazon), banking (ATMs and online bill pay), and so on. And we are probably only at the beginning of a long cycle of rapidly growing computer capabilities.
The authors relate the story (perhaps apocryphal, and credited to Ray Kurzweil) of the inventor of chess, who, when asked by the delighted king what he wanted as a reward, said that he’d like a grain of rice on the first square of the chess board, two on the second, four on the third, and so on, doubling the grains per square all the way to the last square. This seemed to the king a small price to pay, and all was fine until about the halfway point of the board, when it became clear that the continued doubling would yield a pile of rice the size of Mt. Everest and make the inventor richer than the king. (The inventor was soon beheaded for his insouciance.)
To apply the analogy, the authors take 1958 (the year the government first used the term “Information Technology” as a professional category) and Moore’s Law (computing power doubles every 18 months), and find that in 2006 we crossed the figurative halfway point of the same chess board when it comes to computing power. Since then, we’ve seen successful real-world tests of autonomous cars, watched Watson win Jeopardy!, met Siri on our iPhones, and so forth. Each 18 months is another doubling on the chess board of technology.
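The arithmetic behind that crossover date is easy to check: 32 doublings (half of the board’s 64 squares) at 18 months apiece is 48 years, which lands exactly on 2006. Here is a quick back-of-the-envelope sketch in Python; the 1958 start date and 18-month doubling period come from the authors, and the rest is just exponentiation:

```python
# Back-of-the-envelope check of the authors' chessboard arithmetic.
DOUBLING_MONTHS = 18   # Moore's Law doubling period used by the authors
START_YEAR = 1958      # first official use of "Information Technology"

halfway_square = 32    # half of a chess board's 64 squares
years_elapsed = halfway_square * DOUBLING_MONTHS / 12
print(START_YEAR + years_elapsed)  # -> 2006.0, where the "second half" begins

# The rice version of the same curve: square n holds 2**(n - 1) grains.
print(2 ** (halfway_square - 1))   # ~2.1 billion grains at the halfway point
print(2 ** 63)                     # ~9.2 quintillion grains on the last square
```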
Tim O’Reilly recently talked about this from a different angle, noting that the Google autonomous vehicle of 2006 provided a slow and dicey ride — yet the autonomous vehicle fleet of 2010 drove thousands of miles. As the authors of “Race Against the Machine” note, the only accident this fleet suffered occurred when a human driver rear-ended one of the Google cars at a stop sign.
In less than five years, autonomous vehicles went from curiosities to the point where states are beginning to draft sample regulations to allow them on roads.
Robotics is poised to make huge leaps as well, as a recent demonstration of Honda’s ASIMO robot showed.
This is the power of doubling. But the power of general purpose information technology goes even further, through complementarity. As economists Timothy Bresnahan and Manuel Trajtenberg wrote:
Whole eras of technical progress and economic growth appear to be driven by . . . GPTs [which are] characterized by pervasiveness (they are used as inputs by many downstream sectors), inherent potential for technical improvements, and “innovational complementarities,” meaning that the productivity of R&D in downstream sectors increases as a consequence of innovation in the GPT. Thus, as GPTs improve they spread throughout the economy, bringing about generalized productivity gains.
One of the most significant consequences of these changes is that even higher-skilled jobs are being eliminated or chipped away as computers intrude quite naturally and capably.
As computers climb the ranks from calculator to artificial intelligence, jobs at various levels are lost or severely curtailed. The first sign of intrusion into our world was the disappearance of the mail, FedEx, or UPS guys. What used to be stacks of envelopes bulging with manuscripts became electrons flowing through tracking systems, and clicks sent the right manuscript to the right editor’s in-box. Artwork is now delivered, calibrated, sized, and placed electronically. And in the truly sophisticated systems, metadata keeps it all stored nicely in libraries, complete with logic around permissions, access levels, and rights.
Even in editorial ranks, you can feel the first gentle pressure of change as pre-processing routines check references, do initial styling, and introduce basic formatting. Full XML workflows push this even further. And if editorial decision-making is truly fair and unbiased, it’s certainly amenable to being replicated programmatically. Imagine a program not unlike IBM’s Watson that could check a manuscript submission for novelty (via CrossRef, iThenticate, and other APIs), statistical validity (mathematical processing), sufficient referencing (citation network analysis), author disclosures, and conclusions that flow from the hypothesis and data. A set of algorithms and processes like these could certainly make a reasonable first pass at filtering the flood of submissions.
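To make the thought experiment concrete, here is a minimal sketch of what such a first-pass filter might look like. Everything in it is hypothetical: the check functions are stubs standing in for real integrations (CrossRef, iThenticate, citation-network analysis), and the names and thresholds are invented for illustration, not drawn from any actual product or API:

```python
# Hypothetical first-pass manuscript screen. The checks below are stubs;
# in a real system they would call external services (CrossRef,
# iThenticate, etc.). All names and thresholds are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Manuscript:
    title: str
    text: str
    references: list = field(default_factory=list)
    disclosures: str = ""

def novelty_score(ms: Manuscript) -> float:
    # Stub: would query CrossRef/iThenticate for overlap with prior text.
    return 0.9

def statistics_ok(ms: Manuscript) -> bool:
    # Stub: would re-check reported tests, p-values, and sample sizes.
    return True

def referencing_score(ms: Manuscript) -> float:
    # Stub: would run citation-network analysis on the reference list.
    return 0.8

def screen(ms: Manuscript) -> dict:
    """Aggregate the checks into a simple pass/refer recommendation."""
    report = {
        "novelty": novelty_score(ms),
        "statistics": statistics_ok(ms),
        "referencing": referencing_score(ms),
        "disclosures": bool(ms.disclosures.strip()),
    }
    passes = (
        report["novelty"] > 0.7
        and report["statistics"]
        and report["referencing"] > 0.5
        and report["disclosures"]
    )
    report["recommendation"] = "send to an editor" if passes else "return to authors"
    return report

print(screen(Manuscript("A study", "...", disclosures="None declared")))
```

None of this replaces editorial judgment; it only suggests how the routine parts of a first pass could be encoded.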
It’s worth noting here that a computer program (SCIgen) has beaten human editors multiple times, generating realistic-looking but nonsensical scientific abstracts and even full papers, one of which was accepted by a conference as part of a notorious prank.
I can actually imagine a computer doing a better job (faster, more thorough, less biased) at some seemingly complex editorial tasks, simply because it can process more inputs more objectively. It’s really just a matter of applying technology that already exists to a different task. As William Gibson famously said:
The future is already here — it’s just not very evenly distributed.
The lesson of “Race Against the Machine” strikes me as, “Don’t stand still.” Technologists and their machines need a stationary target. By advancing what it means to create knowledge, we win in two ways: we push ourselves to do better, and we make it harder for the machines to catch up with what we’re doing.
However, I doubt we can double our capabilities as quickly as . . . [sentence to be completed when computer editor arrives.]