New approaches to quality control in publishing
There are four key components to publishing, and they’re all about to change.
Ten years from now, publishing will be done in ways that we are only beginning to envisage. Politics and profit will of course compel these changes. But the specific innovations coming our way will be driven by a generation of tweeters, bloggers, status updaters and Wikipedia editors.
Publishing starts with (i) an author who has (ii) something to say. It requires (iii) a system of quality control and then (iv) a way to produce and distribute the results.
These four core elements of publishing are the same whether we are communicating scientific results, writing for a newspaper, telling a story in a novel, or blogging. They are usually also the chronological stages of the process, particularly in that quality control precedes production and distribution.
What changes are we about to experience? Where is the system soon likely to do something differently?
As we learned from Charles Darwin, one key way to identify where big changes are about to happen is to look for variation. Innovation leads to variation, and variation leads to breakthroughs.
In publishing, a current example of this is in the way editors assure the quality of research results that are to appear in academic journals.
When a paper is submitted, editors send it out to a few experts who offer their anonymous opinions. This system of peer review is traditionally the sine qua non of the quality control system in science. But it is slow, expensive, dependent on the goodwill of our colleagues, and potentially discriminatory. In my field (linguistics), it is not uncommon for three years to pass between initial submission of a paper and its appearance as an article.
Furthermore, a growing body of research suggests that peer review is ineffective as quality control, and not only because of its costs. One major problem, among many, is that negative research results are effectively unpublishable. Some are even asking why we bother publishing in journals at all.
Many Open Access (OA) journals are at pains to demonstrate that they use the same quality control system as traditional journals. I think this is a mistake. Doing the same thing as the old, conservative system is not a good strategy for finding a competitive advantage.
You can’t win by doing only as well as your competition. If Open Access is to become the dominant model for scientific publishing, it has to offer something better than what we already have.
And something better is emerging. There is now variation in how peer review is carried out. Various models are succinctly described in Wikipedia’s article on Open Peer Review.
One model allows authors to post articles on electronic archives. The scholarly community can then engage in discussion, which may lead to publication in a journal.
Another model has authors submit their articles along with the names of potential reviewers. Consider this editorial statement from WebmedCentral.
We at WebmedCentral have full faith in the honesty and integrity of the scientific community and firmly believe that most researchers and authors who have something to contribute should have an opportunity to do so. Each piece of research will then find its own place in scientific literature based on its merit.
We have introduced a novel method of post publication peer review, which is author driven. It is the authors’ responsibility to actively solicit at least three reviews on their article. […]
[R]eaders would have full access to the entire communique. Our intention is to generate a healthy debate on each published work.
A different approach to post-publication peer review lets the traditional journals continue their work, but then adds another layer of evaluation. This is the approach of the burgeoning Faculty of 1000, which describes its process as follows.
Faculty of 1000 (F1000) identifies and evaluates the most important articles in biology and medical research publications. Articles are selected by a peer-nominated global ‘Faculty’ of the world’s leading scientists and clinicians who then rate them and explain their importance. From the numerical ratings awarded, we have created a unique system for quantifying the importance of individual articles and, from these article ratings, journals. […]
Launched in 2002, F1000 was conceived as a collaboration of 1000 international Faculty Members. The name stuck even though the remit of the service continues to grow and the Faculty now numbers more than 10,000 experts worldwide. Their evaluations form a fully searchable database containing more than 100,000 records and identifying the best research available.
On average, 1500 new evaluations are published each month; this corresponds to approximately 2% of all published articles in the biological and medical sciences.
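Taken at face value, the quoted figures let us estimate the scale of the literature F1000 is filtering. A quick sanity check (assuming, as the quote implies, that the 2% figure applies to the same monthly window as the 1500 evaluations):

```python
# Sanity check of the quoted F1000 figures.
# Assumption: the "approximately 2%" coverage refers to the same
# monthly window as the 1500 evaluations.
evaluations_per_month = 1500
coverage = 0.02  # evaluations cover ~2% of published articles

# Implied size of the biological and medical literature per month
implied_articles_per_month = evaluations_per_month / coverage
print(implied_articles_per_month)  # 75000.0
```

In other words, the quoted numbers imply on the order of 75,000 articles published per month in biology and medicine, of which F1000's Faculty evaluates a small, curated slice.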
The scientists behind these approaches are motivated in part by the spirit of the Web, which tells us to make information publicly available and, as Cameron Neylon would say, to eliminate filters. They are motivated in part by the conclusion that the current system puts profit into the wrong pockets while failing to assure quality.
Quality control is only one part of the publishing system. The way we create content, the way it is distributed, even our conceptions of authorship are sure to change soon, as I noted in Publishing in the Adjacent Possible.
Where do you see variation today? Where are breakthroughs likely? What do you think they’ll include?