Pandora. Image via Wikipedia

The institution of science thrives on indicators.  Public and private expenditures on research, PhDs granted, the size of the scientific workforce, publications, patents, and citations, among others, are regularly collected and made public for interpretation.  Which country is leading in genomic research?  Is China catching up to the United States in terms of engineers?  Should we encourage more women to study science?

While the sciences have a good handle on institutional trends, the humanities have largely avoided this model of tracking.

Until now.

The Humanities Indicators Prototype (HIP) is a recent development by the American Academy of Arts and Sciences.  The HIP is modeled after the Science and Engineering Indicators produced by the National Science Board.

The Humanities Indicators is still, as its name indicates, a prototype.  The current report is based on existing data, although the webpage explains that the group is in the process of gathering original data through surveys sent to 1,500 college and university humanities departments.

The current report is organized into five topics:

  1. Primary and Secondary Education in the Humanities
  2. Undergraduate and Graduate Education in the Humanities
  3. The Humanities Workforce
  4. Humanities Funding and Research
  5. The Humanities in American Life

What is most surprising to me about this endeavor is not the attempt to gather and report on institutional trends, but how the authors attempt to contain the meaning and interpretation of these data:

Indicators describe; they do not explain anything. They are factual and policy neutral. At best, they provide a “reality check” against which arguments about changes can be tested.

Uttered from the mouth of a scientist, this statement seems naïve; but from a humanist, it is downright puzzling.  Data are never just “factual” and “policy neutral” — they are imbued with meanings, values, and politics.  Data are selected for collection because it is believed that the object under investigation (like the number of faculty employed in the humanities or the number of students declaring a humanities major) says something meaningful.  They are indicators of how we, as a society, value the humanities.

Conversely, the decision to discontinue data collection can also reflect political values.  When one no longer knows how many women are in the workforce, or how many jobs have been lost to mass layoffs, meaningful political debate becomes impossible.

Perhaps the architects of the HIP know this.  Perhaps they are nailing Pandora’s box shut, knowing that opening the box at this time would allow critics to question the very notion of creating indicators in the humanities.  Perhaps they are trying so hard to emulate the sciences that they have become hyper-rationalistic.

As a statistics professor of mine once said in class, “all data tell a story.”

The absence of data tells one, too.

Phil Davis

Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006) and was trained as a life scientist.


1 Thought on "Metrics for the Humanities"
