Back in 1963, Charles Schulz introduced a prescient character in the strip below:
Back then, with computers edging into mainstream life, numbers were replacing names as the main way we identified ourselves. Instead of fighting this, 5’s parents gave in. His sisters were 3 and 4, “Nice feminine names,” as Charlie Brown once noted.
Fast-forward 50 years, and now any one of us can be quickly sequenced into our basic genetic code, which can then be stored as numbers in a database somewhere. Or a few cells can be preserved for later exploration, again reducing us to data. Combined with vast networks, storage facilities, and computing power, we and the data derived from us can mix and mingle in ways we’ve only begun to contemplate in the era being dubbed “Big Data.”
Informed consent dates from the Nuremberg Trials, when Nazi atrocities led Western medicine and law to establish rules protecting patients from abuse by authorities. Yet problems continued: UK citizens learned in 2004 that physicians had been storing tissues — many from children and infants — without first obtaining patient or parental consent. In another case, in the US, blood gathered for one purpose was later studied for another, one the participants had never agreed to.
Samples are difficult to handle and transport. But data? They are easy to handle and transport, yet can be even more revealing and damaging, especially in the era of “Big Data,” when separate data-gathering initiatives can accidentally knit together, revealing secrets nobody agreed to share.
A recent article in Nature News covered emerging concerns around Big Data, informed consent, and medical testing. While the problems are abundant and worrisome, the answers are not at all clear.
The problems start with how the data are gathered. Are they opt-in? Or are they opt-out? The BioVu database at Vanderbilt Medical Center requires patients to opt out of inclusion in the biomedical database being assembled there. Some worry this takes advantage of people when they are ill and potentially desperate, and when they might fear that not complying could put them at a disadvantage.
Then, how the data are stored — who has access to what, and what controls are in place to prevent intentional or unintentional misappropriation — becomes a real issue. The BioVu example has plenty of safeguards: genetic and patient data are stored in separate databases; patient records are scrambled and discarded at random; and sample collection dates are sometimes altered. Yet even with all these precautions, the resulting level of protection only makes it “difficult” to match records to particular patients.
At a medical meeting I attended last week, a patient with a remarkable story talked about being invited to a party hosted by his physician, who also conducted research in the same area. Soon, the patient noticed eyes turning to him and whispers moving through the room. Then someone burst out with, “Hey, everybody, this is Patient 219!” Applause followed, but confidentiality was broken. The patient had no opportunity to consent to having his identity revealed in public, even though spouses and dates were likely in the room.
There are two sad aspects to the eager drive toward “Big Data” and the answer we seem to be arriving at for dealing with the consequences. First, there is the unbridled, almost reckless enthusiasm to push people into databases, consequences be damned. Second, the answer to these consequences seems to be an abdication: make everything “transparent” to patients, and let them control their own data.
So, as “Big Data” emerges from “Big Brother,” we end up working for them both?