Ask The Chefs: AI and Scholarly Communications
What is the future of AI in scholarly communications? How can applications of AI in scholarly communications effectively leverage research artifacts?
Despite increasingly sophisticated library automation, the data on books in libraries is often hard to come by.
Franklin Foer’s new book is a bracing account of the current information economy, the monopolies and motivations at its heart, and the weakening of democratized knowledge.
A few takeaways from STM Week, including London Information International — why publishers have to take security seriously, why OA may itself need to be disrupted, and why we might want to rethink our positioning as a "content business."
A new book reviews various instances of piracy in the media industry and proposes using Big Data analyses as a means to manage it.
The new book, “Weapons of Math Destruction,” calls out many worrying trends in the application of big data, with particularly salient entries on higher education rankings, for-profit universities, the justice system, insurance, and employment.
While it is certainly the case that scholarly publishing is a mature business, some of the companies operating in this industry have found new avenues for growth by expanding beyond the publication of content into data science. This opportunity is available only to the larger companies with enlightened management.
During the AAP/PSP annual meeting last week, a panel discussion with representatives of a consortium, a publisher, and a technology provider explored the topic of text and data mining.
Lack of information about how books are actually used has led to decisions that don't make solid economic sense. Now that more end-user information is becoming available, the book business is likely to adjust its practices.
At the opening of the Frankfurt Book Fair this year, a pre-meeting session called CONTEC was held. This follow-up to the much-beloved, but now defunct, O'Reilly Tools of Change conference brought together an interesting mix of leadership from traditional […]
“Big data” continues to draw attention, but will it ever amount to more than a hypothesis-generating engine and supplementary findings?
Businesses are using more data than ever to inform decision making. While the truly large Big Data may be limited to the likes of Google, Amazon, and Facebook, publishers are nonetheless managing more data than ever before. While the technical challenges may be less daunting with smaller data sets, there remain challenges in interpreting data and in using it to make informed decisions. Perhaps the most daunting challenge is in understanding the limitations of the dataset: What is being measured and, just as importantly, what is not being measured? What inferences and conclusions can be drawn and what is mere conjecture? Where are the bricks and mortar solid and where does the foundation give way beneath our feet?
Lessons learned from Mike Walsh’s keynote speech at the Special Libraries Association Meeting.
An electric car’s data versus a journalist’s experiences — and neither proves sufficient for the task of telling us exactly what happened.
“Big data” isn’t what the Nate Silver story highlights. It highlights data curation, management, analysis, publication, iteration, and integrity, none of which “big data” guarantees.