I have lost sleep lately over ideas about the future of science publishing. I wrote before that I think an independent publishing platform, financed by an international group of public funding organizations, would be an answer to several financial problems in academia, but also to problems with internal academic evaluation systems.
So here is my sleepless-night concept of how I would construct an archive system that covers many, maybe all, of the wishes I currently have – it may sound a bit like F1000 here and there, but it is different, I promise. And it doesn’t take any new technology at all; we just need to take what’s out there and mash it together, like Jobs did with the iPhone (yes, guys, Apple just mashed together technology that had been invented and financed by public funding ;)). My idea has two main components: a publication side and a community side.
The basis of the system I’d like to propose is an archive that accepts articles as well as data. These would be interconnected, possibly going in the direction envisioned by Steven Bachrach. He proposes to take the traditional article apart and connect the pieces via database links, also allowing other authors to, for instance, connect their own study to your introduction. I personally am not sure to what extent studies can be compatible enough that one can just copy and paste parts of one study report into their own – with intact citations, of course. However, taking advantage of modern information technology and linking reports and original data of related research studies sounds like a great way to make science publishing open to big-data analysis. Of course, the human mind still wants storytelling, so a human-readable article should be there at some point. All contributors need to verify that they are researchers with academic degrees. There are many communities on the internet that already ensure the identity of their members. Probably the most feasible solution is to check whether the email address is hosted by an official institution and verify it. One could further use a crawler to identify email addresses belonging to corresponding authors… or even just make use of an existing service like ORCID. The database itself obviously also includes a search engine that allows finding what you are looking for, just as in PubMed, for example – and also with a comment section like the new PubMed Commons (I highly recommend using PubMed Commons to spread your opinions on articles! Alternatively, there is the independent and anonymous PubPeer).
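Just to illustrate the email idea, a first-pass check could look something like this little Python sketch. Note that the domain list here is purely made up; a real system would query a registry of institutions (or a service like ORCID) instead of a hard-coded set:

```python
# Toy first-pass check: is an email address hosted by a known institution?
# INSTITUTIONAL_DOMAINS is a hypothetical stand-in for a real registry lookup.
INSTITUTIONAL_DOMAINS = {"uni-example.edu", "example-institute.org"}

def looks_institutional(email: str) -> bool:
    """Return True if the address domain matches a known institution."""
    if "@" not in email:
        return False
    domain = email.rsplit("@", 1)[1].lower()
    # Accept exact matches and subdomains (e.g. dept.uni-example.edu).
    return any(domain == d or domain.endswith("." + d)
               for d in INSTITUTIONAL_DOMAINS)
```

This would only be the cheap first filter, of course – a confirmation email to that address would still be needed to verify the person actually controls it.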
Okay, we now have the compost heap of science where verified scientists dump all the knowledge they produced and which they think can be the soil on which science as a whole grows. The next step is to bring it onto the field and see if something really grows. (Maybe) for a small submission fee, authors can put their article up for an official review at the PLOS ONE level – if it is good science, it will be accepted. Don’t get me wrong, comments can be made on any entry in the database. The official review is the step that raises an article to the ‘accepted by peers’ status. The official review mechanism will invite an editor and reviewers based on the information the system has on its members (see more below). The editor acts as a supervisor who will judge whether the reviewers are doing their job and the authors are revising the manuscript appropriately. The review process is visible to the public – whether reviewers stay anonymous is still up for discussion – and registered members can add comments to the reviews (not anonymously). Everybody who has read the manuscript may rate its current quality after each revision to help the editor decide whether the article is now accepted by the peers who got involved. I envision a quick five-star evaluation with questions like ‘Do you like it?’ and ‘Is this study interesting to a more general audience?’ etc. The decision to finally accept or reject the paper could even be automated, but such systems can usually be tricked somehow.
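To give an idea of how the quick five-star ratings could feed into the editor’s decision, here is a tiny Python sketch. The minimum vote count and the four-star cut-off are numbers I made up for illustration, not a proposal for the actual thresholds:

```python
from statistics import mean

def peer_acceptance_signal(ratings, min_votes=5, threshold=4.0):
    """Advisory signal for the editor from quick five-star ratings.

    Returns None while too few votes are in, otherwise True/False
    depending on whether the average rating clears the threshold.
    The final call stays with the editor -- full automation can be tricked.
    """
    if len(ratings) < min_votes:
        return None
    return mean(ratings) >= threshold
```

The point of keeping this advisory rather than binding is exactly the caveat above: any fully automated accept/reject rule invites gaming.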
Peer-accepted articles are marked as such, and the search engine displays them as published in a journal. Up to this point the article was already available to the public, reviewed by peers and accepted as solid science. Now begins the hard part ;). Believe it or not, I am not against glam if it is done correctly: ranking based on the quality of the article. PLOS calls this the ‘Article-Level Metric’, there is also Altmetrics, and others have their own ideas. But how do we measure the success of this paper in a timely manner and increase the visibility of the most useful articles to a more general audience without waiting for it to be cited? What puzzles me the most is that none (?) of the metrics actually asks the readers whether they liked the article and whether they would consider it cite-worthy. Rather than just looking at how often an article was downloaded, I would continuously ask readers upon their next log-in how they would rate the articles they looked at during the previous session, with the same quick five-star evaluation system as before. I like the idea of FrontiersIn to then allow articles to rise in tiers according to their popularity, which gives them greater visibility. After a determined time period (for example, every three months) the system would score all articles published in the same month according to the member ratings and promote a certain percentile to the next tier. The score will, of course, take into account how big the field is, etc. Whenever an article is about to rise a tier, the authors will be given the opportunity to re-open peer review and make revisions that enhance readability for a more general audience, and maybe work in corrections or even add data and experiments – all previous ratings and versions of the manuscript will be kept for reference. Then the next evaluation round begins, where the system just waits for more (new) member ratings.
Three tiers, or journals, could represent for example the 100th, 50th and 10th percentiles of ratings in the ladder. However, an article cannot rise in tiers after it has been evaluated three times, so its tier is fixed after about 1.5 years – roughly the time span one would expect the review process at a top-tier journal to take. At some point one just wants a fixed metric, right? Also, by this point the article has probably been public for two years, and now citations come into play, which I would count as a second metric.
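The quarterly promotion mechanics above could be sketched in Python roughly like this. It is a toy version: the promotion fraction is arbitrary, the field-size correction is left out, and the data layout is just my assumption for the example:

```python
def promotion_round(cohort, promote_fraction=0.5, max_rounds=3):
    """One evaluation round for articles published in the same month.

    `cohort` is a list of dicts with 'id', 'score', 'tier' and 'rounds'.
    The top fraction by member rating rises one tier; after three rounds
    (about 1.5 years of quarterly evaluations) an article's tier is fixed.
    """
    eligible = [a for a in cohort if a["rounds"] < max_rounds]
    eligible.sort(key=lambda a: a["score"], reverse=True)
    n_promote = int(len(eligible) * promote_fraction)
    for article in eligible[:n_promote]:
        article["tier"] += 1          # rise to the next tier
    for article in eligible:
        article["rounds"] += 1        # one more evaluation round used up
    return [article["id"] for article in eligible[:n_promote]]
```

An article that has exhausted its three rounds simply drops out of `eligible` and keeps its tier forever – which is exactly the fixed metric one eventually wants.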
Will there be many different journals for every sub-field in science? No. All science would be accepted. I believe that we can build something more advanced than that, based on the information we have on the members: keywords. And here the social media part comes into play. This archive is not just a repository with a search engine; it is also a community that gathers around scientific interests, like Mendeley and also ResearchGate. After members register, the system will question them about their interests. Just as in PubMed, scientists will be able to set up automated searches, and the system will further be able to crawl their publications for keywords and follow their manual searches. If the members agree, the system will use this information to find fitting reviewers and editors – co-authorship will be used to exclude certain reviewers. Based on keyword combinations and direct ‘following’ among the scientists, the system can identify clusters of scientists that form a ‘field’ and suggest papers for them to read, just like Amazon does (people who read X also read Y).
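As a toy illustration of the reviewer matching with co-authorship exclusion, something like the following Python sketch could do the job. The field names and the overlap-count scoring are my own assumptions for the example, not a spec:

```python
def suggest_reviewers(manuscript_keywords, authors, members):
    """Rank members as reviewer candidates by keyword overlap.

    Members who are authors themselves, or who have co-authored with any
    of the submitting authors, are excluded (conflict of interest).
    `members` is a list of dicts with 'name', 'keywords' and 'coauthors'.
    """
    wanted = set(manuscript_keywords)
    ranked = []
    for member in members:
        conflicted = (member["name"] in authors
                      or set(member["coauthors"]) & set(authors))
        overlap = len(wanted & set(member["keywords"]))
        if not conflicted and overlap > 0:
            ranked.append((overlap, member["name"]))
    ranked.sort(key=lambda pair: pair[0], reverse=True)
    return [name for _, name in ranked]
```

The same keyword sets could then feed the clustering and the ‘people who read X also read Y’ suggestions, just with co-occurrence counts instead of conflict checks.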
A system like this can be advanced into a one-stop service for all. You could, for example, add services for science writers, who would be recommended certain successful articles and could have their articles proof-read by scientists. You could add a review journal that invites authors who have gathered relatively many citations or are followed by relatively many people – relative to the size of their field, that is. You could open the comment sections to the public and add a special membership for non-scientists. Their choices could be used to identify what sort of science the public is interested in. All kinds of things, actually.
My proposed system is envisioned as one big thing that does it all under one roof. However, since there are many solutions out there already, one could also think of connecting those to achieve the same functionality. The biggest problem in making this work is the relatively high number of members a system like this demands right from the beginning. I would therefore begin with only the archive and community part until we reach numbers that allow sufficient statistical power for all those scoring and rating systems. I also have some ideas for the front-end, the user interface if you will. But I think I’ll leave you with this for now ;).