A researcher’s core interests may lie in a specific set of areas, but effective discovery also helps that researcher stay aware of adjacent areas of interest, or even areas of potential interest as yet unknown. Personalized approaches to discovery can improve research efficiency without sacrificing serendipity.
The second public access plan from a US federal funding agency has been announced. Some first impressions…
The benefits of personalized discovery are already playing out in the consumer space, suggesting tremendous opportunities for using data to personalize the research process. Given the scale of data needed for effective personalization, the implications of changing discovery processes will cascade through the scholarly ecosystem.
Technology is great, but does it deserve top billing? Leon Wieseltier’s essay in the New York Times as well as articles by other academics raise a challenge to the information industry as a whole.
In the final part of a series on library publishers, Phill Jones explores the relationship between library publishing and institutional repositories against a background of funder data sharing mandates.
A short video on the art of data visualization, an increasingly important subject in the era of “big data”.
With increasing pressure from funding bodies and others for researchers to make their data, as well as their research articles, openly available, it’s important to understand who is already sharing what data, how, why – and why not…
The idea of “reanalysis” needs to be rethought, if recent examples are any indication of what this trend could do to science.
Three recently published items discuss the current state of thinking about discovery tools for research purposes. Which one captures the right mindset? What should content providers be doing to support discovery?
Public access to research data offers clear benefits for reproducibility in some fields. But in the world of cancer cell biology, complexity reigns, and replicating results is not so easy a process.
PLOS has set a new policy requiring authors to make all data behind their published results publicly available. The policy has proven highly controversial within the research community. Thoughts on why this policy, and why now…
Businesses are using more data than ever to inform decision making. While the truly large Big Data may be limited to the likes of Google, Amazon, and Facebook, publishers are nonetheless managing more data than ever before. While the technical challenges may be less daunting with smaller data sets, there remain challenges in interpreting data and in using it to make informed decisions. Perhaps the most daunting challenge is in understanding the limitations of the dataset: What is being measured and, just as importantly, what is not being measured? What inferences and conclusions can be drawn and what is mere conjecture? Where are the bricks and mortar solid and where does the foundation give way beneath our feet?
An expert on the Semantic Web, structured markup, and the emerging area of research data services talks about the current state of play.
The US government views data policy as an emerging area. A new National Academies report reveals the potential and the barriers, many of which are financial.
As scientific communications begin to incorporate data elements more routinely, the standards for describing, versioning, and preserving those elements have to be considered. And we will all have to learn how to use data labeling processes correctly.