Our Peer Review Week posts, this year and in years past, have often shown that peer review processes and expectations differ not only across disciplines and fields but also across types of publications. Peer review in the sciences is organized and conducted differently than in the humanities, and there are different forms of peer review, such as Open Peer Review, made easier by new online publications.

For today’s post we talked with Daryle Williams, Dean of the College of Humanities, Arts, and Social Sciences at the University of California, Riverside. Williams is a scholar of Latin America who specializes in modern Brazil, most recently histories of slavery and emancipation. He has been a co-PI on a series of grants funding Enslaved.org, an open access, open source, linked data tool for discovering the lives of millions of people enslaved in the Atlantic slave trade, and he is the Editor of its journal, the Journal of Slavery and Data Preservation (JSDP). The most recent issue of the journal includes, for example, a dataset on sales of enslaved people in New Orleans by one prominent slave trading company, Franklin and Armfield, and its agents and partners between 1828 and 1836. Another documents the baptisms of more than 1,000 African and African-descended people in Havana, 1590-1600. The sheer range of chronologies and American geographies represented in the datasets draws readers well beyond academia.

Given this year’s Peer Review Week theme of research integrity and trust in research, and the critical questions of data ethics that surround sensitive subjects of all kinds, Williams offered key insights into how a fairly traditional form of peer review functions for this new, fully online journal of humanities data that serves both academic and public audiences. The interview has been edited for length.


What is the specific need that JSDP serves for scholars and scholarship?

The Journal of Slavery and Data Preservation is the peer-reviewed journal for a larger project, “Enslaved Peoples of the Historical Slave Trade” (enslaved.org). It is a largely, but not exclusively, academic journal for data-driven research. Enslaved.org has resonated far beyond the academy with public audiences, including K-12 audiences, genealogists, and black family historians. Our core, though, at least at the origin, was university-based historians who have been working on slavery. So it was not remarkable that we would look for ways to demonstrate that their research process, which was historically hidden or undervalued or kind of set aside in favor of the published article or monograph, was very important. The humanities have really struggled with what to do with non-traditional forms of scholarly activity and scholarly output, such as a dataset, or with digital humanities more broadly. And so we were also trying to find a way to create something that would be legible for people who would be motivated to contribute in terms of their career progression.

What does it mean for humanists to preserve data, and why is data preservation particularly important for the histories of slavery and the enslaved?

What we probably think of as the most familiar form of data is standardized information: it is based on a series, has numbers associated with it, and is gathered and recorded, often in a ledger or a table or something we can make recognizable in a spreadsheet. There are vast amounts of information about the lives of the enslaved in various settings in this form, ledgers of sales, for example, most often written by and for the logic of enslavers, to structure information about individuals who were commodified, counted for labor of some form, for death, for birth. Then there are registries of status, especially of formerly enslaved people in transitional settings. So there is a lot of information, important sources for historians trying to understand experiences of labor, of childbearing and childrearing, of demographics, economic activity, or identities. And then of course we ask questions about who created these sources and their original context, which gives us some sense of the logic of that time and place, the enslaving author’s perspective.

So there’s a range of things that historians can look at, and social historians have been doing this kind of work for a long time, looking at both series and social experience. What we particularly focus on is the individual and how we can center their name, even within a spreadsheet format. We try to see how the named individual is related to all of these attributes, experiences, and characteristics, both imputed and self-declared. So that’s one way I think historians can really benefit from looking at information that already exists as a kind of data, produced by the logic of slaveholding societies, using modern tools that let you query, manipulate, visualize, and explore it.

So when we think about a spreadsheet, depending of course on what other information you have, those same cells within the spreadsheet can be linked to other cells about that person, or about someone with similar characteristics, or differences, and that then becomes something more like the arc of a life. And you may be able to use those cells to place that particular experience, whatever it is in that cell, in dialogue with the experiences of many enslaved people, or with various kinds of experiences of enslavement across space and time.
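
Purely to illustrate the kind of linking described here, and not Enslaved.org’s or JSDP’s actual data model, the following is a minimal sketch in Python with invented field names and placeholder values: rows from two hypothetical tabular datasets are joined on a shared person identifier and ordered into a rough life arc.

```python
from itertools import chain

# Hypothetical rows as they might appear in two separate tabular datasets;
# all field names and values are placeholders for illustration only.
baptism_records = [
    {"person_id": "P001", "event": "baptism", "year": 1595, "place": "Havana"},
]
sale_records = [
    {"person_id": "P001", "event": "sale", "year": 1612, "place": "Havana"},
    {"person_id": "P002", "event": "sale", "year": 1830, "place": "New Orleans"},
]

def life_arc(person_id, *datasets):
    """Collect every row mentioning one person and order the events chronologically."""
    events = [row for row in chain(*datasets) if row["person_id"] == person_id]
    return sorted(events, key=lambda row: row["year"])

for row in life_arc("P001", baptism_records, sale_records):
    print(row["year"], row["event"], row["place"])
```

The mechanical join is the easy part; establishing that two records actually refer to the same person is the underlying scholarly work.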

This goes back to contesting the dehumanizing logic of the records themselves, created by enslavers. We primarily publish data, but we also have what are called stories, more conventional narratives of a life arc: something was known about this person when they were born, or at least when we first encounter them in the historical record, and X, Y, and Z happened to them.

How does peer review function in this new (not traditional?) journal?

We use an anonymized review process, which is very comparable to peer review in the humanities, social sciences, or STEM. What is distinctive in our case for a humanities journal is that we are usually sending out a submission which is pretty light on text. It’s a general description of the research, a description of the methodology of the dataset, which is usually a CSV file, and then a data dictionary. Those three component parts of a submission go out for experts to review. We try to get a blend of experts who might be really invested in histories of enslavement, but not especially attentive to digital scholarship; they look at the integrity of the research questions and how they’re presented, then at the methodologies that inform the creation of this dataset.

And then we also usually try to get someone who is very invested in the digital scholarship, information sciences side. What is the structure? What is the composition? What’s made explicit, or is it adequately explicit, about the creation of that dataset? Some people are working from transcriptions, some people are doing other kinds of data extraction work. Some people are using original source materials that are already in tabular form. Some people are looking at very, very different kinds of things and creating that information in tabular form. So the peer review process looks at the quality of the original questions, the execution of those questions as a research enterprise, and the quality of the materials presented, both in prose and in data form in a spreadsheet.
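
To make the shape of such a submission concrete, here is a hypothetical sketch, not JSDP’s actual template or review tooling, of how a small data dictionary might describe the columns of an accompanying CSV file, and how an editor could run a basic consistency check between the two.

```python
import csv
import io

# Hypothetical data dictionary: column name -> short description.
# Column names and descriptions are placeholders for illustration only.
data_dictionary = {
    "person_id": "Unique identifier assigned by the dataset creator",
    "name": "Name as recorded in the original source",
    "event": "Type of documented event (e.g., baptism, sale)",
    "year": "Year of the event, as given or inferred from the source",
    "source": "Archival citation for the original record",
}

# Stand-in for the submitted CSV file (header row only, for the sketch).
csv_text = "person_id,name,event,year,source\n"
header = next(csv.reader(io.StringIO(csv_text)))

# Basic consistency check: every CSV column should be documented, and vice versa.
undocumented = [col for col in header if col not in data_dictionary]
unused = [col for col in data_dictionary if col not in header]
print("CSV columns missing from the data dictionary:", undocumented or "none")
print("Dictionary entries missing from the CSV:", unused or "none")
```

The reviewers’ real assessment, of course, concerns the research questions and the methodology behind the dataset; the sketch only suggests what the component parts of a submission might look like at the file level.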

We also have a couple of other forms of review that look a bit different. These include what we’re calling editorial reviews, where we use in-house editorial discretion and judgment for contributions that are historic in nature. These fall more on the preservation-of-data side: maybe the contributor of the original dataset is deceased, or has no interest in writing a full article, but we think it’s important to make sure that their research information is available. We also have pieces that might come from community contributors; community submissions are not peer reviewed but still go through an editorial review. Does it meet minimum standards? Does it center on the lives of the enslaved, for example?

Why was it important to institute some traditional practices like peer review?

When we think more broadly about knowledge quality, the peer review process, even for those people who are outside of the academy, does have a certain cachet of expertise. It suggests that it’s an expert creating this information based on standards, and that other experts have had an opportunity to weigh in, to say it’s good, or it’s not.

Even when we think about users of the information who are not academics, they also want to say, this is something that I can trust. I can have faith that there are standards. Whether it’s a transcription or the whole dataset, they can look at it and feel that there’s been a process. Because they may be asking a very specific question, like, is this my great-great-grandfather? I want to know if this spelling I have is the same spelling here, and is this the same person or not.

Karin Wulf

Karin Wulf is the Beatrice and Julio Mario Santo Domingo Director and Librarian at the John Carter Brown Library and Professor of History, Brown University. She is a historian with a research specialty in family, gender and politics in eighteenth-century British America and has experience in non-profit humanities publishing.
