Thursday, February 15, 2018

To what extent is evolution predictable, and why?

<Posted by Dan Bolnick on behalf of Patrik Nosil>

Evolution is often portrayed as a descriptive science, rather than a predictive one. Nonetheless, time series data on ‘evolution in action’ can be used to quantify the predictability of evolution. In this week’s issue of the journal Science we published an analysis of the predictability of evolution in the stick insect Timema cristinae (Figure 1), using a 25-year longitudinal study of morph frequencies, experiments, and genomic analyses.


Figure 1. The focal species (T. cristinae) used to study the predictability of evolution. Credit: Moritz Muschick.

We find that the evolution of both color morph (green versus melanistic individuals) and pattern morph (striped versus unstriped individuals) frequencies is strongly influenced by selection, yet the two traits differ in the predictability of their evolutionary dynamics. Color morph frequencies are only modestly predictable through time, because they are driven by multi-faceted and complex selective regimes that are still poorly understood. In contrast, pattern morph evolution is highly predictable, being driven by a better-understood process that causes consistent up-and-down fluctuations in morph frequency, namely negative frequency-dependent selection. Thus, evolution may often be only as predictable as our understanding of the mechanisms driving it: a good grasp of natural history and selective environments can lead to greater predictability.
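To make the mechanism concrete, here is a minimal sketch of negative frequency-dependent selection. This is my own toy model with invented parameter values, not the analysis from the paper: each pattern morph is fittest when rare, so whichever morph drifts away from the equilibrium frequency is pushed back toward it. In this deterministic version the frequencies simply settle at the equilibrium; with environmental noise or time lags they would fluctuate around it, giving the kind of up-and-down, yet predictable, dynamic described above.

```python
# Toy model of negative frequency-dependent selection (NFDS).
# Illustration only: parameter values are invented, not estimated
# from T. cristinae.

def step(p, s=0.8):
    """One generation of selection on the striped-morph frequency p."""
    w_striped = 1.0 + s * (0.5 - p)    # striped is fitter when rare (p < 0.5)
    w_unstriped = 1.0 + s * (p - 0.5)  # unstriped is fitter when striped is common
    w_mean = p * w_striped + (1.0 - p) * w_unstriped
    return p * w_striped / w_mean      # standard one-locus selection recursion

# Start the striped morph rare, then common: both trajectories are
# pulled back toward the intermediate equilibrium at p = 0.5.
for p0 in (0.1, 0.9):
    p = p0
    for _ in range(30):
        p = step(p)
    print(f"striped morph starting at {p0:.1f} -> {p:.3f} after 30 generations")
```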

In this post, I’ll offer a brief historical perspective on the making of the study (rather than give a detailed account of our paper, for which I direct interested readers to the article itself). My own interest in the predictability of evolution dates back to when I was a graduate student and read the now-classic 2002 paper in Science by Peter and Rosemary Grant about unpredictable evolution in Darwin’s finches. It struck me then that the real problem was predicting the weather, because if that could be done, then the subsequent effects on seed size distributions and the evolution of finch beaks might be predicted. My interest in limits on predictability was piqued, but I had to wait two decades to accrue the data required to publish on it (that is, a long-term time series comparable to that of the finch study).


Figure 2. Collecting stick insects. Credit: Moritz Muschick.

From the years 2000 to 2017 inclusive, I collated data on morph frequencies in T. cristinae in the hills around Santa Barbara, California (Figures 2 and 3), and Cristina Sandoval (who introduced me to the system) contributed data from the 1990s. I proofed the data and centralized it into a master database over a recent summer. When asked why I was doing this, my reply was ‘I don’t know, something will emerge’. Patterns consistent with the conclusions above emerged from the time series data. This was interesting, but not sufficient to establish causality. We thus used genomic analyses to bolster the evidence for selection, and experiments to test for sources of selection, such as negative frequency-dependent selection. The story was coming together, but we still did not have the focus required to write a compelling paper. After reading Jonathan Weiner’s book ‘The Beak of the Finch’ and having pub discussions concerning the role of deterministic versus random events in evolution, we were finally ready.

The manuscript was then written, many years after the seed of interest was sown. It’s now published, and we conclude that our constrained understanding of selection and environmental variation (i.e., limits on data and analysis), rather than inherent randomness, can limit our ability to predict evolution. In terms of eco-evolutionary dynamics, these limitations may affect our understanding of ecological processes, because to the extent that evolution can be predicted, perhaps so can its consequences for population dynamics, community structure, and ecosystem functioning. In T. cristinae specifically, changes in morph frequency affect bird predation, which in turn can affect entire arthropod communities and plant herbivory (e.g., Farkas et al. 2013 Current Biology). Thus, limits to predicting evolution within species may be rooted in limits on data, with far-reaching consequences for interacting species.

Posted by Patrik Nosil, from the Horseshoe Canyon Ranch in Arkansas.


Figure 3. Typical habitat in which T. cristinae is collected. Credit: Aaron Comeault.




Monday, February 12, 2018

Gaia, cancer, and the “holobiont”.


I just got back from a trip to Dalhousie University, to which I was invited by grad students to speak in the Department of Biology. Among the many interesting conversations was one in which Gaia and cancer somehow came together. My host, Sarah Salisbury, and I were speaking with Dr. Ford Doolittle and Andrew Inkpen about their ideas on how natural selection might act at the level of processes – as opposed to the “things” that generate those processes. I won’t spill the details, as they will be outlined in a forthcoming paper of theirs. However, I did want to relate how we got from Gaia to cancer and then, during the course of writing this post, how I ended up at the holobiont. (Get ready for a lot of “scare quotes” as I try to extend terminology in each of these areas to the others.)

Dalhousie is 200 years old - how cool is that. 

According to Wikipedia, the Gaia hypothesis “proposes that living organisms interact with their inorganic surroundings on Earth to form a synergistic and self-regulating complex system that helps to maintain and perpetuate the conditions for life on the planet.” One way of framing this hypothesis is that global ecology and the evolution of life feed back to one another in such a way as to facilitate and maintain life on Earth. Taken to extremes, one could say that natural selection favors the self-regulating feedback itself, as life would otherwise cease to exist – but how would we ever test this hypothesis?

Here is the top hit in a Google search for "Gaia". SOURCE

In group selection or “higher-level selection” arguments of this sort, one would normally need multiple entities with heritable traits influencing differential “survival” and “reproductive success”. In the case of Gaia, then, we would need life originating on multiple planets among which selection was then occurring – because it is impossible to have selection among things if you only have one thing. Of course, one might argue that selection has indeed acted in this way and that our planet (or others we can’t perceive) is the only one left – because life did not evolve synergistic self-regulation on other planets and therefore went extinct. Or one might argue that selection on our Earth has weeded out organisms that do not participate well in this synergistic self-regulating system. Any of these speculations could well be true, but it is impossible to study such processes in the real world because we only have the one world.

Or do we really only have one world? As we were discussing Gaia, I started to think that perhaps we could use cancer as a Gaia model. Consider the similarities. Cancer starts with a single cell that then proliferates into a great diversity of descendant cells, much as current life on Earth has proliferated and diversified from a single initial cell. Then, the cancer that proliferates in an individual “host” organism will (in most cancers) leave no descendants when that organism dies, just as we might expect that (barring colonization of new planets) life on Earth will cease to exist when the Sun has run its course. In addition, cancer can kill its host and thereby be the reason for its own extinction, as well as that of most other life occupying that host, just as some life – us perhaps – could severely damage the Earth and kill much of the life on it. Thus, perhaps we can view cancer as a lineage of organisms proliferating on its own finite planet: should the cancer disrupt the synergistic self-regulation of that planet, the cancer itself and much of the life on that planet will cease to exist. Considered in this way, perhaps we can – at least for the sake of argument – speculate on how cancer and its proliferation in individual hosts might yield insights into the Gaia hypothesis.

An awesome book on cancer.

When I started this post, I was thinking that much of the analogy from cancer would seem to parallel ideas supporting the Gaia hypothesis: for instance, “nicer” (as opposed to “meaner”) cancers should leave the host alive longer, nicer cancers should enable the persistence of more life on the host for longer, and so on. Indeed, all of these things are true. However, it now seems to me that one fact runs directly counter to the Gaia hypothesis: cancer exists, is common, and routinely kills its host. This fact might be taken to mean that selection does not generally favor a synergistic self-regulating system in the context of cancer and its “planet”. Why might this be? I speculate it is because cancers (the non-heritable kinds) are never transmitted to other hosts – that is, to other “planets.” In this case, selection would never favor cancer being nice to its host because, no matter how long the host lived, the cancer would never be passed on: as opposed to infectious/transmissible diseases of hosts, where selection can indeed favor reduced virulence. For this reason of non-transmissibility among hosts/planets, selection presumably cannot act among the cancer/life on different hosts/planets to favor synergistic self-regulating systems on those hosts/planets.

Ok, wait, you might say, cancer is actually nice to its host because it rarely strikes before reproduction: however, the reason here isn’t selection on the cancer but rather selection against alleles in the host genome that increase the chances of pre-reproductive cancers. That is, it isn’t selection on the inhabitants of the host that favors synergistic self-regulation; rather, it is selection on the host itself. Unlike hosts, however, planets do not have genomes that can be selected to “punish” lineages (species) that are not nice to the overall system. Or is that true? What if we consider all life on the planet as its genome – in this case, the “genome” of a planet perhaps could be selected to eliminate parts of that genome that do not promote self-regulation. That is, life could police itself through the elimination of non-cooperative life. Interestingly, even this analogy could perhaps be extended to the cancer scenario: all the genes in a host, including all genes in all species living on or in that host (the so-called “hologenome” or “holobiont”), could be selected to act against cancers that are detrimental to the host itself.


The top hit on Google for "hologenome". SOURCE

I am not sure if any of this is useful in any way, but it sure is interesting – to me at least. I have no intention of actually studying or testing these ideas, but I have certainly been interested in the evolution of cancer for some time. For instance, here is a previous post in which I speculated about the fundamentally different problem posed by cancer relative to other forms of life that are detrimental to humans. In addition, I have recently become very interested in the microbiome (a key part of the holobiont) as a regulator of fitness and adaptation. Indeed, just today my student Lotte Skovmand passed her qualifying exam (Congratulations Lotte!), in which she laid out her plan to examine the drivers of microbiomes in plants and howler monkeys. Perhaps that is how I got from Gaia and cancer last week all the way – today – to holobionts. Nothing like colleagues and students to get you thinking about new things!

Monday, February 5, 2018

Why have a gatekeeper, and who should it be?

Let's face it: scientific publishing is changing, fast. Open access journals. Online-only journals. Preprints. Post-publication peer review. For-profit redistribution services like Academia.edu or ResearchGate that siphon web traffic away from the original publishing journals, to their detriment. Predatory journals. Accelerated peer review. Double-blind peer review. Open peer review. Et cetera, and then some.

There are, without a doubt, good ideas in there. And there are ideas that are maybe utopian but impractical: open to abuse, or just not likely to work well in practice. So, when a new idea pops up on the landscape, it is worth taking skeptical notice. This past week, the President of the Howard Hughes Medical Institute, Erin O'Shea, co-authored a policy essay, "Scientific Publishing in the Digital Age", with the HHMI Chief Strategy Officer Bodo Stern. This is worth taking notice of, because these are two very smart people, and because they run the largest private non-profit biomedical research program in the world. HHMI helped kick-start the PLoS journals. Later, they helped kick-start eLife. So when HHMI's leaders say scientific publishing should change, you can bet they will implement the policy they propose, and implement it with gusto and deep pockets.

So, I'm going to take a skeptical read through their article, and write my reaction as I go. I want to emphasize a few things. The following is my personal opinion.  Also, the following is my knee-jerk reaction, which is risky to blog. So if you disagree with me on something, try reasoning me out of it rather than getting agitated. Because people are certainly getting agitated about the changing publishing landscape, or agitated because some of us are traditionalist sticks-in-the-mud.

Before I dive into Stern and O'Shea's article, let's see if we can agree on a few (almost) universally shared goals and values, just to help remind readers we are on the same team, aiming at the same goals, even if we disagree on how to get there.

Research Quality:
   Goal 1) Do not publish papers that are fraudulent, logically flawed, or technically incorrect, that misinterpret or misrepresent results, that are written in incomprehensible prose, and so on... This is the analogue of "do no harm" in medicine. We don't want to promulgate false conclusions, or fraud. We don't want scientific publishing platforms to give voice to unsound anti-vax papers, unscientific young-earth creationist articles, or racist or sexist rants.

   Goal 2) Take flawed papers and, if possible, improve them through (iterative?) review and revision until they are good enough to publish (e.g., to not violate Goal 1).
   Goal 3) Maximize the readability of the paper. This spans both the technical veracity of the research and the quality of the scholarship and its presentation (compelling prose, clear figures, etc.).

Speed:
   Goal 4)  Get new ideas, methods, data, results, insights into the public sphere as quickly as possible, without violating Goal 1, and hopefully also meeting Goals 2 and 3.  There is a trade-off here. Meeting Goals 1-3 typically requires in-depth review.  Hurried review is often cursory. So if we are to get good quality independent reviews, we need to be willing to sacrifice time. If we are to revise thoroughly and properly, we need even more time. So, we often must balance Goal 4 versus Goals 1-3.  Also, good copy-editing is valuable (helps mostly with Goal 3), and copy-editing takes time.


Financing:
Publishing costs money (e.g., see this earlier blog), so someone has to cover these costs.
 Goal 5) Make taxpayer-funded research freely available to taxpayers. Proponents of Open Access also argue that open-access papers will be read more, cited more, and thus have more impact all else being equal. Typically, this means the authors pay. As has been noted elsewhere, this creates a potential conflict of interest in which predatory journals benefit from selling authors the right to publish, leading to violations of Goal 1.
   Goal 6) Let researchers with limited funding publish. Graduate students doing independent work. Postdocs operating on a shoestring budget. Labs between grants. They all have valid ideas, good data, things to say. But an article in some top Open Access journals can cost upwards of $5,000. That's a huge barrier to entry into publishing. So, Goal 5 and Goal 6 pose a Gordian Knot of a trade-off. Personally, I prefer the American Naturalist's model, in which those who can pay to publish open access do, and those who can't get cheap page charges (or even a waiver), with operating costs met by subscription fees.


Reaching a target audience:
   Goal 7) It should be easy for scientific readers to find the highest-quality articles that interest them most, without having to wade through a thicket of irrelevant papers. It's also nice to be steered to the papers that are most likely to change how we think about, or how we do, our own research. This was a problem with PLoS One: too much chaff compared to the wheat (though there are good papers there, to be sure), and poor organization along conceptual themes. This goal is getting easier and easier with recommendations from software like SciReader, and from social media (though the latter can fall into groupthink and can amplify cultural biases).

   Okay, I'm sure you can add other Goals (post a message!), but that'll do for now. We'll use these in my comments below on the Stern and O'Shea essay.

One more thing before I dive in: I have a few disclaimers. First, I am Editor-In-Chief of The American Naturalist, which is a smallish non-profit society journal. It is still printed on paper and sent to libraries. So, I'm part of the traditional "System". Second, the following is my personal opinion and not the stance of the journal nor its publisher.

So here we go. This'll be sort of like a live twitter feed as I read. If you want to read along with me, here's the link again: http://asapbio.org/digital-age 

I've read but won't comment on the abstract. Presumably these ideas are all developed more below.

Introduction:

So the Royal Society started peer review in the 17th century? I wonder how that worked then. I know that The American Naturalist's editors in the 1860's through 1890's used a very informal review system at least sometimes, but that often involved showing a manuscript to the professor down the hall, who would write things like "it is one of the most miserable and inadequate things ever printed". Formal peer review as we know it now seems to have started later, maybe as late as the 1950's, when the instructions for authors printed in the journal mentioned the need to submit two copies of a manuscript, for review purposes.

Stern and O'Shea write: "It made sense for publishers to charge consumers subscription fees in exchange for hard copies of journals and to establish editors as the gatekeepers of publishing, when printing and distributing scientific articles was expensive and logistically challenging. These limitations no longer apply." I want to point out that many journals are still in print, and there are benefits to print journals (the serendipity of discovering something unlooked-for when leafing through, for instance; and I still retain info better from the printed page, but maybe that's just me). Also, journals still cost money to run. There's a website to maintain, staff to handle communications with authors and editors, copy editors: there's a lot that goes on behind the scenes, and that's not free. Still.

Next, S&O'S reiterate arguments for Open Access, but say it isn't enough. They repeat the claim that paywalls are a barrier that slows science and limits who can build on existing knowledge. True, to a point. For many journals (like AmNat), the large majority of universities have subscriptions. The University of Chicago Press gives away thousands of institutional subscriptions for free to universities in impoverished and middle-income countries. So more people have access than you might think. But it is true that you might not have access from home, which might stop you from downloading and reading my students' excellent AmNat papers if you didn't want to take the time to log in remotely or from campus. And high-school teachers can't easily read paywalled scientific papers for their classes. There is a problem, for sure. Then there are new journals like Nature Ecology and Evolution, which most libraries won't subscribe to for some time. The University of Texas library won't subscribe for at least 5 years, they told me, when I asked whether or how I could access my own publication there (Stuart et al. 2017).

Now S&O'S argue that "The subscription price that publishers charge is inflated because it is not based on the specific value that publishers add. By imposing a toll for access to scientific articles that were created and evaluated by scientists for free, publishers hold these scientists’ products “for ransom,” charging for the whole product instead of for the publisher’s specific contributions to that product." There's some truth to this, especially for many journals from commercial publishers. As the Editor of a not-for-profit journal, though, I can say that the cost we hand off to consumers covers the publisher's contributions, and that's it. So I do object to S&O'S stating this as a broad generality, painting us with the same brush as some journals from high-profit publishers you might name.

Ah, good, S&O'S do recognize the conflict of interest that Open Access creates, favoring a pay-to-play system where predatory journals and fake editorial boards can thrive (violating Goal 1). Their solution is to make the review process transparent: publish reviews, so that fake-review journals are exposed for the frauds they are. I agree that will help. But at the end of the day, an author who needs more lines of publications on their CV for promotion may still gladly pay to publish something shoddy at a journal that does half-hearted review. I'm not convinced this solution will fix the problem.

Wrapping up the section on Open Access: I agree with most of what they say here, even if they over-generalize a bit (in ways that directly concern the journal I edit). But they totally ignore Goal 6 (cheap publishing for authors), as you might expect from people with a history of great research funding. HHMI started eLife, which initially was free to authors AND readers. But it is no longer free to authors, sadly.

Now we are on to Impact factors and the academic incentive system.
"Journal name is used as ... an indicator of quality". That's mostly true, though we can all think of papers in Nature or Science where we thought "how did THAT get published?" - but maybe that's just sour grapes (more on that soon, I think). I totally agree with them here, that it would be nice if articles were judged on their own merits and not so much on the merits of the articles' neighbors. To use a personal example my 2003 AmNat paper is cited 10 times more than my 2001 Nature paper. But the latter is surely what got me my job interview at UT Austin as a finishing PhD student. Okay, so we should judge papers on their own merits. I don't think anyone disagrees in principle. But I can't read everything, and so I rely on someone (Editors, reviewers) to collate the things most likely to interest me into nice succinct tables of contents (that's meeting Goal 7).

Interesting point here: "the opinions of two to four peer reviewers... by chance [may] not be representative". Everyone with experience as an author knows this - things get published with a casual nod from someone who doesn't take the time. Or a great paper can be misunderstood and savaged by someone with an axe to grind or not enough coffee (though a reminder: if a reviewer misunderstands, it may be that the author wasn't clear enough). But in the context of this essay, this made me think about the role of sample size: the more people who read, rate, and comment on a paper, the more accurate the evaluation is. Let's imagine each paper has some 'quality' parameter. Sampling N=2 isn't really enough to estimate that parameter accurately; we have a high standard error. It is only with many reads and ratings/comments by readers that we converge on a high-confidence measure of a paper's quality. Do we need an Amazon 1-to-5-star rating system? But would it be used?
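To put a number on that intuition, here is a quick simulation. It is my own toy example with invented values (a hypothetical 'true quality' on a 1-to-5 scale and hypothetical reader-to-reader noise), not anything from S&O'S: each reader's rating is the paper's true quality plus noise, and the spread of the averaged rating shrinks roughly as 1/sqrt(N).

```python
import random
import statistics

# Toy example: each paper has a true 'quality'; each reader's rating is
# that quality plus noise. Both values below are invented for illustration.
random.seed(1)
TRUE_QUALITY = 3.6  # hypothetical quality on a 1-to-5 scale
RATING_SD = 1.0     # hypothetical reader-to-reader noise

def mean_rating(n_raters):
    """Average the ratings of n_raters independent readers."""
    return statistics.mean(
        random.gauss(TRUE_QUALITY, RATING_SD) for _ in range(n_raters)
    )

# How uncertain is the averaged rating? Repeat the 'experiment' many
# times and look at the spread of the estimates: it falls as 1/sqrt(N).
for n in (2, 10, 100):
    estimates = [mean_rating(n) for _ in range(10_000)]
    print(f"N = {n:3d} raters: SD of estimated quality = "
          f"{statistics.stdev(estimates):.2f}")
```

With two raters the estimated quality wobbles by about ±0.7 of a star; with a hundred raters it tightens to about ±0.1.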

S&O'S point out correctly that hyper-competition for high impact factor journals is creating an incentive system that can drive people to fraud (violating Goal 1).

Integration of peer review with the publishing decision.
On to the next section. Here, they take issue with the privacy of reviews. "Nontransparent". "Most journals keep peer reviews a confidential exchange among editors, reviewers, and authors, which gives editors flexibility to use their own judgement in deciding what to publish". I don't quite see the link between this and impact factor, as they claim, but maybe I'm missing something. This non-transparency is certainly true. Before I read on about why they dislike it, I'll mount a pre-emptive defense for the sake of argument. Submitted manuscripts contain omissions, mistakes, and other potentially embarrassing flaws. Many are minor, but some are big. A young scientist is nervous enough submitting a paper for the first time, exposing themselves to the criticism of strangers. How much more horrifying if that criticism were broadcast for all to see? I suspect many trainees especially, but also senior scientists, if asked, would really rather have a chance to quietly correct mistakes outside the limelight. We always tell our students and postdocs when they write, speak, or interview: "put your best foot forward first", or some variant on that. Public posting of peer review does the opposite. This may be a major disincentive to anyone with self-doubt or anxiety over their place in science. That's my guess, at least.

S&O'S write: "The main purpose of peer review should be to provide feedback to authors in order to improve a manuscript before publication. But, in service of the publishing decision, peer review has morphed into a means of assisting editors in deciding whether a paper is suitable for their journal." This is obviously true, especially for top journals. As Editor at AmNat, I can't publish everything that comes in. We have a limited page budget and limited copy-editing staff. So I have to be picky. And I feel I have a duty to my readers, to bring them the papers that I and my co-Editors think will be most likely to interest them. That's not a decision I take lightly, and I am keenly aware that "whether a paper is 'novel enough'" (as S&O'S put it) is subjective and the hardest criterion to use. But that doesn't mean I agree with their characterization of peer review as a means for making this cut. Usually that cut is made without peer review, by just me, or me and an Associate Editor. When I do so, I explain my logic. I handle a few papers every day, and when I make an Editorial decline because something isn't suited to our journal I often write a page, sometimes several pages, of my own comments and recommendations. Our goal, as a journal, is to leave every paper better than when it came to us, whether or not we publish it. In this regard, the intense efforts of the reviewers, Associate Editors, and Editors are very much focused on improving the papers, and so I disagree with the claim by S&O'S quoted above. At AmNat, review is still very much focused on helping the paper. If it weren't, the AEs and Editors wouldn't bother writing long and careful commentaries on papers we reject. The fact that we do sets our journal apart, to be sure. We are proud of that (and a bit exhausted).

"The intense competition for publication in high impact factor journ
als likely increases how often and to what extend [sic] scientific articles are revised before publication". Hm. That mistake might have been caught by a reviewer or copy-editor.

"While most papers are significantly improved through revisions suggested by reviewers and editors, there is a sense among scientists that a significant fraction of the time spent on revisions, resubmissions and re-reviews is not adding sufficient value and needlessly delays the sharing of findings" Maybe I'm just a crappy scientific author, but I consistently feel that my articles are improved by review and revision. I am always surprised by the sentiment in the quote above. So, about 6 months ago I did a totally unscientific poll. About half of the responses indicated they felt review greatly improved their paper. About half said it somewhat improved. And only about 5% (if I remember right) said review had no effect or negative effect. That 5% may be a very vocal minority.

"it is time to acknowledge that peer review before publication is just the initial step in scientific evaluation": Interesting. I don't disagree, but that's not a reason to water down peer review either, or change how publish / not publish decisions get made.

Now we hit the authors' recommendations. This is where it gets fun, I bet.

Improvements to peer review.
- Publishing reviews along with the papers. My main problem is what I noted above: the disincentive arising from authors' fear of having their mistakes aired. Would reviewers get credit? Named? Can that go on their CV? That might create an incentive to do more reviews, and more careful reviews. Certainly when I became an Associate Editor, and knew my name would be listed at the end of a paper, I became more cognizant of doing a thorough job.

- Consultative peer review: conversations among reviewers and editor before a decision. I like this idea. It gives everyone a chance to correct each other's misreadings. At present, AmNat's Editorial Manager web system isn't designed to do this in a way that would maintain mutual anonymity, but that's just a technical barrier. The other barrier is the extra time it takes. In another unscientific poll I did on twitter, it seemed most people would be okay with this as long as it didn't delay publication more than a couple of extra days.

- Peer reviews should focus on technical quality: are the conclusions warranted? I do often see reviewers commenting that they don't think the paper is suitable for our particular journal, though it would be fine elsewhere. Given that we have a constraint on how many pages we can publish, I find that slightly useful, but for the most part I reach that conclusion on my own based on the technical details. I rarely pay close attention to the 'suitability' comments, and sometimes override them.

- Ah, now they are saying 'Give recognition for peer review' (see my comment three paragraphs up). Specifically, they want reviews signed. I agree that recognition for good reviews is important (maybe AmNat should come up with some sort of award for great reviewing). But the objections are well known. When a reviewer is critical, being outed can create animosity that can hurt younger reviewers. There's an unrecognized flip side here: when a reviewer is positive and names herself, this creates a feeling of obligation / patronage. For instance, I know now that Joe Travis and David Reznick reviewed my 2017 Nature paper. And that feels more awkward for me than if I had known they reviewed and rejected it. Because now I feel like I owe them something as a thank-you.

Next up: "Put dissemination of scientific articles in the hands of authors".
This is weird. They argue that funders trust scientists to do the research, so we should trust them to choose what, when, and how to publish. "why don't we ask independent parties to oversee experimental design and execution as well?". Um, two things. First, we do: they are called grant panels. Unlike at HHMI, at NSF and NIH you need to get your experimental design past a critical panel. Second, we do: manuscript review serves this purpose.

So here's what they are arguing for, this is the crux: Authors submit a paper, it gets reviewed. Authors can choose to revise (or not), then decide whether or not to publish. It's sort of like putting something on BioRxiv, but after getting reviews. You can heed the reviews or not, then post on BioRxiv or not. Up to you. The barrier to publishing is not an editor, but your own self-respect: have you gotten enough feedback and done enough revision that you are comfortable posting it?

Okay, right away I have a problem with this. By this criterion, someone could go ahead and publish creationist rants and call it a scientific publication. You'd be really surprised what comes in the door to journals: creationism, offensively sexist or racist material, and so on. Heck, some people tried to publish the bigfoot genome. It wasn't until it was soundly rejected (with review!) from some respectable journals that the authors bought their own predatory journal and self-published in what they said was an open-access reviewed journal. As soon as you let authors be "the decider", I promise there will be bunk. And that bunk will inflame the creationist and intelligent design movements (the latter was smacked down by the judge in the Kitzmiller vs. Dover court case specifically because its proponents weren't publishing in scientific journals. Now we would let them decide?)

That's my main knee-jerk objection. Now let's see what S&O'S have to say in favor:

"Since authors have such a clear self-interest in publishing their own work, nobody would equate the author’s decision to publish with a stamp of quality. This stamp of quality has to come from elsewhere, including the published peer reviews and post-publication evaluations described below"  Okay, I can see that. But that means that newly published papers are not organized into batches of higher-quality articles that are more likely (on average) to be worth my very limited time. That is, this means that Goal 7 is set aside until papers have had time to develop a following, or not.

"the peer reviewers would direct their comments to the authors focusing their peer reviews on improving the manuscript as opposed to advising the editor on suitability for a journal." Yes please!  But actually, this is the standard way people review things, at least at AmNat, and at Evolution, and most society journals. I think this is mostly a problem if you are trying to mostly publish in Nature and Science and PNAS and Cell. At those journals, the reviewers have all had their own rejections. They then have sour grapes, and think "well if my paper wasn't good enough, neither is this". So I think S&O'S are right, but in a limited domain; more often for us publishing mortals we work with journals where reviewers already take this advice.

"the time and resource savings would be significant: authors wouldn’t have to perform experiments that they deem unnecessary; " Again, this is more of a Nature & Science problem, not so common in our fields.

"demanding revisions and multiple rounds of review": When revisions are cosmetic, Associate Editors shouldn't be sending things out to re-review anyway. When revisions are substantial (new data, new statistical analyses, substantially large chunks of new text), it is entirely appropriate to have that new material reviewed, which means another round of review to ensure quality (Goal 1). That's appropriate. To be sure, I've had papers sent out to re-review and thought "you're kidding me, this was cosmetic, just make the decision yourself and speed it up".

Now S&O'S are tackling possible objections. The first one is what I raised above as my knee-jerk reaction. I think I'm more cynical than they are. They write: "Few authors will knowingly want to put out poor-quality work." I'm not so sure. As long as promotion is based on counting papers, this will be a hard sell. And as long as there are crackpots out there with pseudo-scientific ideas, their proposal will be an open invitation. (By the way, is any of that crackpot stuff showing up on BioRxiv?)

Here's the most compelling part: "The peer reviews themselves will be a powerful restraint on the author, since they will be published together with the paper (see above). An author may, for example, prefer to withdraw a paper submitted to a journal if the reviews reveal fundamental flaws that cannot be addressed with revisions. And if an author decides to publish a paper despite serious criticism from reviewers, at least those criticisms will be accessible to readers, who can decide for themselves whether to side with the author’s or the reviewers’ point of view." In a utopian world, I totally agree this is a great model. Most scientists are conscientious, careful, and will use reviews as a source of feedback to improve, then publish or not publish their work. But wait: do we REALLY think most people would say "oh, that's a great point, there was a mistake, I'll just delete this paper that represents a year of my life"? That's a hard thing to do (I can attest personally).

So "where does this leave journals and editors", they ask. They envision a hybrid model, part way between tradition and the open wild-west of BioRxiv or F1000Research. They suggest that papers go to to journals (still organized by theme, to meet my Goal 7). Editors assign reviewers, as we have done. But, Editors stop making the publish/don't publish decisions. Instead, that is up to the authors, once they get reviews.

Ah! Here we go: near the end they write: "In rare cases, editors may need to step in and stop publication of an article when the peer review process reveals that publication would be inappropriate – for example, in cases of plagiarism, data fabrication, violation of the law, or reliance on nonscientific methods." So the Editor still does some triage to keep out the riff-raff.

And here's another nod to something I was objecting to: "At the moment, society journals are between a rock and a hard place. They can’t afford to switch to open access, since the open access fees required to replace their subscription income would be too high for readers. On the other hand, they feel considerable pressure from for-profit publishers who are launching competing journals at breakneck speed. Academic publishers risk becoming obsolete if they don’t adjust." That is a concern. The solution they propose is that society journals charge for peer review, then basically guarantee that authors can publish when they have received the reviews and feel ready to do so. They figure journal income goes up, reducing the per-article open access fee. Okay, but that assumes that the journal's costs are flat. In reality, copy-editing, data-archiving fees, and some other costs are incurred on a per-article basis, so the cost per article will be less sensitive to volume than S&O'S think, especially when most reviewed articles eventually get published. And of course this won't work for print journals, which wouldn't be able to keep page numbers within budget for the printer.

The last part of the essay is about post-publication processing. They argue that after an author decides their paper is ready, the paper and its reviews go online. Then the reviewers and/or subsequent readers can 'tag' the paper with various kinds of tags: for rigor, interest level, data sharing, code review, data downloads, citations, pdf downloads. This is crowd-sourcing the process of rating and ranking papers for my Goal 7. It's like going on Amazon and seeing that a product has lots of positive reviews, though more multi-dimensional in the kinds of metrics. What could possibly go wrong giving people the chance to comment & review & tag things online????? (Hint: read this Washington Post article on scientists posting reviews on Amazon if you haven't already.)



Now here's the interesting point where they start to back-pedal a little bit. They say that, although the editor gave up the role of gatekeeper, the editor could place a warning tag on a paper, basically saying the article was published by the authors against the reviewers' recommendations. A "read the reviews carefully & take this with a big grain of salt" tag. The Editor could place high-general-interest tags on the articles they would normally publish, and low-general-interest tags on articles they wouldn't usually touch.

Well, that's it. I really should have spent tonight doing some data analysis, but their essay was interesting to read and thought-provoking. And I, for one, benefitted from writing my thoughts as I read it.

So, will AmNat implement this? No, not soon.

Update: Based on a comment on their article, Bodo Stern shifted from "Tags" to "Badges". To which I must, in a fit of late-night infantile humor, respond:

