Symbolic image depicting the computer virus Stuxnet (Photo credit: Wikipedia)

We live in an increasingly computerized and connected world, with computers controlling millions of processes while often linked to one another via wireless and cellular networks. Bluetooth connects cars to cell phones. Some trains, some planes, and some automobiles are wi-fi hotspots. Pilots have recently started taking iPads into the cockpit. While the benefits of all these computers and connectivity are readily apparent, new and significant risks are also slowly emerging into the public’s consciousness.

In an interview that aired late last year, Jeff Bezos, CEO of Amazon, was asked by Charlie Rose about cyberwarfare and its looming reality. Bezos offered a sobering assessment:

Most people . . . think that computer viruses can mess with their data. They don’t think of computer viruses as things that can destroy physical infrastructure.

Bezos was referring to the Stuxnet virus, which was used to damage or destroy Iranian centrifuges. Stuxnet was designed to proliferate widely, but it carried a malware payload that specifically targeted Siemens equipment. When Iran procured embargoed Siemens equipment from dealers and attempted to use it as part of its nuclear program, the Stuxnet virus may have spun their centrifuges up beyond their limits, damaging or destroying them. Certainly, damage was done, and the Iranian nuclear program was set back by months or years. Stuxnet was widely believed to be the work of foreign governments seeking to retard Iran’s development of nuclear technology.

As Bezos notes, the same potential exists for malware or viruses to destroy other types of turbines, notably power plant turbines, which already operate near their structural limits much of the time. Spin them up another 20% with a computer virus, and you can cripple a city or a nation.

Other recent investigations show a high level of coordination in how such viruses are programmed and deployed, suggesting efforts by national governments. In one case, malware went to 5,000 unique IP addresses — 3,700 in Iran and 1,300 in Sudan. This kind of targeted deployment makes security experts strongly suspect the involvement of one or more nation states.

The targets of malware and other hackery can be more personal. A recent article by Charles C. Mann in Vanity Fair discusses the emerging threat that may exist given the number of implantable medical devices, use of the Web, and the proliferation of cell phones. The scenarios presented are rather concerning:

  • A type of “denial of service” attack on pacemakers, making them fire more than they should, draining their batteries and leaving the victim susceptible to death from the condition the pacemaker was designed to resolve.
  • Hacking into a Web site for people who suffer seizures and creating flashing backgrounds that induce seizures. (This one has already occurred.)
  • Reprogramming of implantable insulin pumps, so that insulin either floods the bloodstream or is blocked, killing the victim.

A security expert, Barnaby Jack, is profiled by Mann. Jack became concerned after he realized how vulnerable these medical devices were, as this vignette involving an implantable cardiac defibrillator (ICD) shows:

. . . with a custom-built transmitter, Jack had discovered how to signal an I.C.D. from 30 feet away. It reacted as if the signal were in fact coming from the manufacturer’s official I.C.D. programmer. Instructed by the counterfeit signal, the I.C.D. suddenly spat out 830 volts—an instantly lethal zap. Had the device been connected to an actual human heart, the fatal episode would likely have been blamed on a malfunction.

Jack’s concern is that smartphones will soon be able to launch undetected attacks on these implanted medical devices, with the result — death, disability — looking like a malfunction, its nefarious origin undetectable. In fact, the proliferation and complex abilities of smartphones make them ideal devices for such malfeasance. They contain multiple radios (cellular, wi-fi, Bluetooth) and can be easily programmed. Users can get within feet of intended targets without carrying bulky gear or appearing suspicious.

Implanted medical devices aren’t the only risk outlined in the article. Many computers in other settings can be attacked in a similar manner: the tire pressure monitoring systems (TPMS) in automobiles can be hacked so that cars can be tracked surreptitiously; automatic vehicle-disabling technology can be hijacked to stop cars remotely; and hands-free cell phone systems can be hacked so that conversations in the car can be recorded.

Over-the-air software updates for cars, delivered from the cloud, are also being introduced, increasing the risk of undetected hacking and viruses.

Your home isn’t your castle once it’s connected to the Internet through a “smart meter,” for instance. Smart meters are being installed by the thousands, recording the energy consumption in a household and transmitting this to the power company, sparing them from having to trek out to peer at your meter and allowing homeowners real-time monitoring of their power consumption. However, these meters also allow breaches of privacy, if you know what you’re doing:

By studying three homes’ smart-meter records, researchers at the University of Massachusetts were able to deduce not only how many people were in each dwelling at any given time but also when they were using their computers, coffee machines, and toasters. Incredibly, [the] group at the University of Washington was able to use tiny fluctuations in power usage to figure out exactly what movies people were watching on their TVs. (The play of imagery on the monitor creates a unique fingerprint of electromagnetic interference that can be matched to a database of such fingerprints.)
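
To make the fingerprint-matching idea concrete, here is a minimal Python sketch of how a recorded power-usage trace might be scored against a database of reference fingerprints using simple correlation. The traces, labels, and sampling assumptions are invented for illustration; the actual University of Washington technique is considerably more sophisticated.

```python
import numpy as np

def normalize(trace):
    """Zero-mean, unit-variance version of a power-usage trace."""
    trace = np.asarray(trace, dtype=float)
    return (trace - trace.mean()) / (trace.std() + 1e-12)

def best_match(observed, fingerprint_db):
    """Return the label whose reference fingerprint correlates best
    with the observed smart-meter trace, along with all scores.

    fingerprint_db maps a label to a reference trace sampled at the
    same rate as the observation (a simplifying assumption here).
    """
    obs = normalize(observed)
    scores = {}
    for label, reference in fingerprint_db.items():
        ref = normalize(reference)
        n = min(len(obs), len(ref))
        # Pearson-style correlation over the overlapping window
        scores[label] = float(np.dot(obs[:n], ref[:n]) / n)
    return max(scores, key=scores.get), scores

# Toy usage: two synthetic reference fingerprints and a noisy observation
rng = np.random.default_rng(42)
db = {
    "movie_a": np.sin(np.linspace(0, 20, 600)),
    "movie_b": np.cos(np.linspace(0, 35, 600)),
}
observed = db["movie_a"] + 0.3 * rng.standard_normal(600)
label, scores = best_match(observed, db)
print(label, scores)
```

The point is simply that a whole-house power signal can carry enough structure to distinguish one source of load from another.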

Children’s toys are also becoming a risk, as wi-fi connections and Web cams are integrated into them. For instance, a toy robot with wi-fi connectivity, a Web cam, and VOIP phone capabilities is currently available. Security consultants worry that such toys create even more security risks — risks that are invisible to parents.

And yet we move forward, with driverless cars, fly-by-wire airplanes, more and more cell phones, and more highly computerized cars, homes, and bodies, leading one expert to opine thusly:

In 10 years, computers will be everywhere we look, and they’ll all have wireless. Will you be able to compromise someone’s insulin pump through their car? Will you be able to induce seizures by subverting their house lights? Will you be able to run these exploits by cell phone? What’s possible? It’s more like ‘What won’t be possible?’

With the smartphone as one likely instrument of havoc in a world full of connected hostilities — personal, military, national — the benign face of technology is being remade. As Mann writes at the end of his Vanity Fair piece:

If Ridley Scott, say, were to attempt a remake of Dial M for Murder, I’m not sure he’d know where to begin.

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.

Discussion

9 Thoughts on "Risky Connections — How Ubiquitous Computing and Connectivity Are Creating New and Hidden Dangers"

Kent, do digital publishing and networked distribution have their own inherent risks, or is the infrastructure sufficiently redundant and resilient against such nefarious attacks?

There are plenty of system risks in digital publishing. Customer data is one of them. A few years ago, when credit card systems were being hacked, a lot of publishers had to go through their processes and upgrade their security, both technological and personnel-related. Customer data, especially transaction data, are probably the most valuable data sets digital publishers handle. The content itself, the articles, isn’t nearly as desirable or as hard to get to. So most of the security is around customer data, with a bit around one extension of that, which is email.

That said, the integrity of digital systems is important for digital businesses. Measuring advertising inventory and clicks accurately; guarding against misuse or abuse of institutional or individual accounts; and having the right data at the right place and time are all important.

As we get into things like workflow and device integration for our content, these things may become more important. A naked API might not be enough.
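
As a rough sketch of what “more than a naked API” might mean in practice, here is a minimal Python example of HMAC-signed requests, which let a server check both who is calling and whether the request has been tampered with or replayed. The header names, path, and shared secret are hypothetical, not any real vendor’s API.

```python
import hashlib
import hmac
import time

SHARED_SECRET = b"example-secret-issued-to-this-client"  # hypothetical credential

def sign_request(method, path, body=""):
    """Build headers for an HMAC-signed API call.

    The signature covers the method, path, body, and a timestamp,
    so a captured request can't simply be replayed later.
    """
    timestamp = str(int(time.time()))
    message = "\n".join([method, path, body, timestamp]).encode()
    signature = hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()
    return {"X-Timestamp": timestamp, "X-Signature": signature}

def verify_request(method, path, body, headers, max_skew=300):
    """Server-side check: recompute the signature and compare it in
    constant time, rejecting requests with stale timestamps."""
    if abs(time.time() - int(headers["X-Timestamp"])) > max_skew:
        return False
    message = "\n".join([method, path, body, headers["X-Timestamp"]]).encode()
    expected = hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, headers["X-Signature"])

headers = sign_request("GET", "/v1/usage-report")
print(verify_request("GET", "/v1/usage-report", "", headers))  # True
```

Signing is only one layer, of course; rate limiting, account-level entitlements, and audit logging would sit alongside it.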

Also, for some publishers with information that is sensitive — nuclear research, cutting-edge infectious diseases research, and the like — security around author submissions and production systems could also be important.

Basically, we have to remember that we have created a two-way pipe to the world with many of our systems. It’s not just the mailbox and the UPS guy anymore. It’s not just 9-5 availability anymore. It’s 24/7 accessibility from everywhere. Vendors should be grilled about their capabilities, and internal systems buttoned up as much as feasible.

What I find interesting is when organizations become more concerned about their workers watching dirty videos and clamp down on that, while they leave huge holes open in their e-commerce or authentication infrastructures. That’s poor prioritization.

How about correcting the obvious typo in the Bezos quote? The second sentence reads, “They don’t of computer viruses….”

There is also an increased risk of unapparent single points of failure in complex, interconnected systems. When Amazon Web Services goes down, it takes Netflix, Reddit, Flipboard, and other resources with it. An error in a load-balancing algorithm took down Google mail, calendar, sync, and other services, affecting millions of users. The Internet concentrates as well as distributes resources.

The only way to ensure reliability of complex systems is to test them. Complex systems, from the space shuttle to oil drilling platforms to electrical power grids, all fail with regularity. New jet engine designs fail more frequently than established and more rigorously tested designs. The problem with accelerating rates of innovation is that we have an increasing number of untested complex systems in widespread use.
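
One way to make that testing point concrete is fault injection: deliberately failing a simulated dependency at increasing rates to see where a simple recovery strategy stops being adequate. The Python sketch below uses an invented “flaky service” purely for illustration.

```python
import random

def call_with_retry(operation, attempts=3):
    """Retry a flaky operation a few times before giving up."""
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts - 1:
                raise

def flaky_service(failure_rate):
    """Simulated dependency that fails some fraction of the time."""
    def op():
        if random.random() < failure_rate:
            raise ConnectionError("injected failure")
        return "ok"
    return op

# Inject failures at increasing rates and see where retries stop helping.
random.seed(0)
for rate in (0.1, 0.5, 0.9, 0.99):
    failures = 0
    for _ in range(10_000):
        try:
            call_with_retry(flaky_service(rate))
        except ConnectionError:
            failures += 1
    print(f"injected failure rate {rate}: {failures} of 10000 requests still failed")
```

Real systems get the same treatment at much larger scale, but the lesson is the same: you learn where a complex system breaks by forcing it to break.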

And, we’re creating and deploying systems without recognizing that they might have risks. We’re being a little oblivious.

Dangers can lurk in hidden places even for libraries. I remember a few years ago when my browser stopped me from accessing the site of a major foreign library vendor because the site had been infected to give all visitors a piece of malware. I emailed the firm, and I had great difficulty convincing them that anything was wrong until I sent a copy of an article on how it could happen.

Another possible danger next year will be a major solar flare that might take down the electrical grid, communications satellites, and other important parts of the infrastructure. This is a real danger: parts of Quebec were without electricity a few years ago when a solar flare caused electric lines to burn up. (Quebec is more at risk because of its northern location.)

Traditional signature-based antivirus approaches are broken [1,2]. As centers for research and development activity, universities are targets for directed attacks, and libraries are inviting sites for the dissemination of malware because they are widely used and trusted. As a result, libraries need to be particularly vigilant in the maintenance of their servers and security protocols. To quote Leon Trotsky: “You may not be interested in war, but war is interested in you.”

[1] http://www.wired.com/threatlevel/2012/06/internet-security-fail/
[2] http://www.nytimes.com/2013/01/01/technology/antivirus-makers-work-on-software-to-catch-malware-more-effectively.html?pagewanted=all&_r=0
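
As a toy illustration of why signature-based detection is easy to evade, the Python sketch below flags an exact copy of a known-bad payload by its hash but misses a copy that differs by a single byte. This is a deliberately simplified model, not how any real antivirus product works.

```python
import hashlib

# Toy "signature database": hashes of known-bad files.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"malicious payload v1").hexdigest(),
}

def flagged_by_signature(file_bytes):
    """Return True if the file's hash matches a known-bad signature."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_HASHES

original = b"malicious payload v1"
mutated = b"malicious payload v1 "  # one trailing byte added

print(flagged_by_signature(original))  # True: exact match against the database
print(flagged_by_signature(mutated))   # False: a trivial change evades the signature
```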
