Tuesday, October 18, 2016

What if all my papers were reviews?

When I was a young professor, I looked down my nose a bit at professors who only published review papers, leaving all the empirical papers to their students. Now that I am a middle-aged professor, it feels like all I do these days is write review papers. Just these last few years, I have written (or helped to write) reviews for Heredity, Journal of Heredity, Philosophical Transactions of the Royal Society B, Evolutionary Applications (2), Science, Annals of the New York Academy of Sciences, Trends in Ecology and Evolution, and others. I haven’t written a true empirical paper since 2013, when I published two. Moreover, even my forthcoming book can be thought of as one long review paper.

A major reason for this shift is that “older” professors are better known and so are increasingly asked to write reviews. Indeed, most of the review papers listed above were invited by the journals – and the others were requests to participate from younger researchers. As all of the requests came from friends or colleagues, were for good journals, and gave us the opportunity to say pretty much what we wanted, I accepted them. Of course, I also enjoyed writing them and I hope they stimulate new ideas and research interest. Or is this all simply lipstick on a pig?

Review papers have been criticized for not generating new knowledge, the idea being that generating new knowledge would be much better than simply summarizing existing knowledge. Indeed – on this blog – one post criticized review papers in eco-evolutionary dynamics for being more frequent than empirical papers on the same topic. The basic argument is that people should stop spending their time writing reviews and should instead go out and collect new data and run new experiments. Otherwise, progress in science will be stifled – or, perhaps more appropriately, it will be like one of those “bubbles” they talk about for sub-prime mortgages. Or a house of cards. Etc.

So what are review papers good for then – and should you take the time to write one? I would argue that – while all of the above is true – review papers (some of them anyway) are very valuable and should be deployed early and often in your career.

1. One benefit of review papers is that they bring together and synthesize a large amount of literature. So many papers are being published these days that it is impossible to keep on top of all (or even most) of them. Review papers thus become great ways to see what is in the literature in a single reading and can help to identify empirical papers that you might not have known about.

2. Another potential benefit of review papers is that they often allow more subjectivity in interpretation and more speculation. Writing an empirical paper can constrain you to only asserting conclusions that are strongly supported by the data. This constraint is good, of course, because empirical papers are specifically claiming that original data support a conclusion. At the same time, the constraint can be limiting in that empirical data might inspire new ideas that are not strictly supported by the data, yet are nevertheless good ideas that can move the field forward. Reviews/syntheses/opinions are great places to get these bold new ideas out there even if they aren’t yet supported by (much) data.

3. More pragmatically, review papers are great ways for younger researchers to increase their exposure. In some cases, a student can write an excellent series of empirical papers that don’t gain much attention, simply because so many papers are out there. Review papers often gain more attention (or at least exposure through citations) and can thereby help a young researcher become known as an expert and an original thinker in a particular area, and they can also bring attention to that researcher’s empirical papers. This pragmatic benefit was certainly the case for me: a review paper I published just after my PhD (Hendry and Kinnison 1999, Evolution) helped to promote the importance of contemporary (rapid) evolution and remains one of my highest-cited papers (495 citations on Web of Science, 682 on Google Scholar).
Citations to all of my papers by year of publication (to 2010). Comments, responses, etc. are omitted.

But perhaps now that I am getting longer in the tooth, I should stop writing these things – or at least so many of them. Maybe if I stopped writing so many, I could write more and better empirical papers. Maybe it is all a big trade-off and I have shifted too far to one pole. These sorts of questions got me wondering: what would my CV look like if I subtracted all of my review papers? So I did precisely that. I downloaded Web of Science data for all of my papers, subtracted commentaries, and divided the rest into review papers and primary data papers.

Review papers are clearly boosting my stats or, more importantly, increasing awareness of my work. Yet the above table isn’t exactly a fair comparison given that writing review papers presumably reduces the number of empirical papers I can publish. So, on the charitable assumption that I could write roughly one empirical paper for each review paper, I simply replaced my 29 review papers with the average empirical paper (ignoring date of publication). That is, I assumed that my 29 review papers were replaced with 29 copies of my average empirical paper, yielding 149 total papers with 7181 citations and an h index of 48.
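For anyone curious about the mechanics of this thought experiment, the swap is simple arithmetic: replace each review with a copy of the average empirical paper and recompute the totals and the h index. A minimal sketch in Python (the citation counts below are made up for illustration, not my actual data):

```python
def h_index(citations):
    """h = largest h such that at least h papers have >= h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank  # this paper still clears the h threshold
        else:
            break
    return h

# Hypothetical citation counts for empirical papers and review papers.
empirical = [120, 90, 75, 60, 40, 30, 25, 10, 5, 2]
reviews = [300, 150, 80]

# Swap each review for a copy of the *average* empirical paper.
avg_empirical = sum(empirical) / len(empirical)
swapped = empirical + [avg_empirical] * len(reviews)

print(len(swapped), round(sum(swapped)), h_index(swapped))
```

The same three lines of bookkeeping, run on a real Web of Science export, produce the 149 papers, 7181 citations, and h index of 48 quoted above.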

To continue the absurdity: what if all my papers were review papers? Here I replaced all 120 empirical papers with the average review paper. The stats are shown below and – if all else was equal and if citations were all that mattered – I should simply publish review papers.

Of course all else is not equal. For instance, my tendency to be invited to provide reviews might require a firm empirical footing based on original data. Alternatively (or additionally), my empirical papers might be cited more heavily because of the exposure brought by reviews. Or both, which indeed is my point. Not only are reviews good for the reasons described above but they also form a nice conceptual and promotional complement to related data papers.

In short, I strongly recommend that advanced PhD students and postdocs should write review papers. Those papers can strongly influence your research field. They will be cited. They will stimulate your own thinking. They will enhance your empirical research.

As for what makes a good review paper, I have a couple of suggestions:

1. Meta-analyses are much better than conceptual thought pieces. Indeed, I have included meta-analyses in my empirical category above. Of course, the best reviews would be both – conceptual and meta-analytical.

2. Don’t just review the evidence, present new ideas and advance new hypotheses. Although you can make some hay from rehashing previous review topics, the way to make a real influence is to come up with new ideas and review new topics.

The day after writing the above, I (by coincidence) was a guest editor at the annual meeting of the board of the Annual Review of Ecology, Evolution, and Systematics, where I agreed to write another paper. And, of course, I have another paper in preparation for Trends in Ecology and Evolution, and several other review papers beyond those. So this trend will clearly continue for a while longer. I am starting to really miss those empirical papers!

Tuesday, September 27, 2016

Undergraduate Field Research: Making it Happen

A group of female Himalayan Tahr looking at us!
It’s really amazing how far you can get when you aren’t afraid to “just go for it”, no matter what point you are at in your career! My dream started while sitting next to a colleague and best friend of mine at our graduation from the Panama Field Study Semester, an amazing program offered to undergraduates at McGill University. We turned to each other that day and made a promise that one day we would work together on a project. A few months later, when I was back in school in the last year of my undergraduate degree in Biology, I got a phone call from this friend, who was all the way in India, with the exciting news that the local community he was working in had approached him with a project. Our golden opportunity was here! As intimidating as starting this project from nothing was, we began doing our research: what is a Himalayan tahr and a Himalayan serow? Where is Uttarakhand, India? How do you write a project proposal? Which grants do we apply to? Slowly but surely, everything started coming together and I learnt many lessons along the way (and am still learning today!).

And so we were off to India! 
One of the most important lessons I learnt is: do not be afraid to contact people to ask for help! Though experts in their field such as professors may seem intimidating at first, they are just people! Every single professor we contacted was more than willing to help us out whether it was through providing specialist knowledge on a species, teaching us parasitology techniques, writing a letter of recommendation or being a constant support and helping us fund our project (thank you Andrew for being one of those!).  

In the end, through dedication, support from researchers and professors we had contacted worldwide, and above all a positive attitude, we received the necessary funding to go out and do our project. And so the Wild Ungulate Research and Conservation Initiative was born! (Check us out on Instagram: @wurc_itindia).

Our two study sites: Rudranath and Sohkark (near Tungnath)
Our project had two main goals: (1) to assess the health impact of livestock grazing on two ungulate species in the Kedarnath Wildlife Sanctuary, Uttarakhand, India, and (2) to conduct an ecological review of these two ungulate species, as there has been little research done on them to date. The two species we were studying were Hemitragus jemlahicus (Himalayan tahr) and Capricornis thar (Himalayan serow). To tackle the first aspect of our project, we collected tahr, serow, and livestock faecal samples and used the FLOTAC method to count parasite eggs. For the ecological data, we took many focal and scan samples to record the behaviour of individuals and the group. At each point of collection we also took a GPS point, and we will map the species’ distributions, with special focus on the serow, as their habitat preference remains unclear. We split our time between two research sites. The first, Rudranath, is a site where tahr and serow are known to overlap with livestock herds, located between 3000 and 3800 m along a pilgrimage trail to Rudranath Temple. The second site, Sohkark, was expected to be devoid of livestock (but as so often happens in the field, this was not the case) and encompassed elevations between 2700 and 4000 m.

Me on the lookout for tahr - gotta get 
low to hide from their view 
Although the fieldwork is still in progress, we have some preliminary results. From scans, we found that at both sites Himalayan tahr spend the majority of their time foraging (58% and 65%). However, focal results indicated that adult females allocated more time to foraging and less time to resting in Rudranath than in Sohkark. Even more interesting, sub-adult and adult males showed seasonal differences in foraging activity. Furthermore, we observed a curious subdivision of groups by age and sex in the Himalayan tahr.

For prevalence and intensity of parasites, we found that parasite prevalence was higher in both livestock and tahr in Sohkark than in Rudranath; however, there was no significant difference in intensity. This is interesting because livestock were in closer proximity to the tahr in Rudranath than in Sohkark, so we had predicted the opposite.

Our lab set up in the field
As you might have noticed, the Himalayan serow was mentioned a lot less in our findings. This is because we were not able to obtain any direct observations, as we have yet to spot an individual. Nevertheless, we were able to infer some information from indirect observations and informal interviews. The most alarming finding was that Himalayan serow populations are suffering big losses this year because they are being affected by sarcoptic mange, a skin disease caused by a mite (species unknown). Affected serow have been found on both sides of the Mandal valley. This was further reinforced when we found a diseased and deceased serow near Mandal village (the base for the trek to Rudranath), as well as by the lack of serow spotted this year, which, according to our local guide, is very unusual. This is super interesting and a huge red flag, so we are hoping to keep exploring this issue and learning more about the species.

Deceased Himalayan Serow found near Mandal village

As fascinating as all the research was, what I will really take away from this time in the field are the lessons learnt. And by this I mean more than just learning how to improve our record-keeping skills and how to adapt our schedule to maximize and improve data collection. This project was deeply satisfying in two main ways. The first was the confirmation of my love for field biology and research, which has given me the drive and assurance that this is what I want to be doing for the next few years of my life. The second was personal growth, mainly by showing me that I am capable of more than I give myself credit for. A lot of the fieldwork was physically demanding, requiring us to trek for long hours at high altitudes every day, and I came to appreciate how many barriers you can overcome through mental strength. Furthermore, the fact that this project went from a dream to a reality was proof enough in itself that if you work hard and put your heart into something, results will show!

A great field example of the lesson we have all heard before, something like “don’t give up hope when things don’t seem to be going your way”, was our situation with finding male tahr. For days we had been searching for a herd of male tahr without success. Some team members even embarked on a grueling hike up to 5000 m in altitude in an attempt to find them. Then suddenly, we got word that they were a 15-minute hike from our camp! And one night, when we had set out to have tea with a local shepherd in the area, without any intention of collecting data at all, we spotted the group of male tahr!
The group of subadult and adult male tahr found in Rudranath

Perhaps what I developed the most appreciation for this summer is something that too often goes unmentioned: the essential role of local team members. Our local field guide was invaluable when it came to finding our way around the mountains, recognizing animal signs, establishing crucial local contacts, and even in what would appear to be simple tasks such as shopping for food supplies. Our field cook was another invaluable, and not-frequently-enough-mentioned, part of our team: because he took on the cooking duties, we were able to go out into the field for longer periods of time and really dedicate our efforts to the research. Our project would never have gotten as far without their collaboration, and the value of local knowledge is something I will always treasure.

Harsh Maithani, our local field guide,
posing for his picture
On a similar note, it was amazing to see how much a project like ours can do for a community. In addition to helping locals learn more about the species they treasure, it created well-paid jobs (alas, temporary for now) in a field they are passionate about and gave them the opportunity to do something they love. As an example, Prabhat, an 18-year-old, worked with us as a field assistant and relief cook during his summer vacation. Instead of hanging around the village with his friends, he was ecstatic about the opportunity to learn what it is like to be a field guide and discovered that it is possible to have a career that matches his love for the mountains.

All the girls at camp and Prabhat
posing in front of the beautiful
I could not be more grateful to have had this incredible experience. We are now working on finding additional funding to continue this project next year and hopefully expand our research initiative. It is astounding what can happen if you really put your mind and heart towards something and give it your best effort. If there is something you have always wanted to do, but have always thought is not quite feasible or is too much work, I encourage you to give it a try. You never know what will come of it!

Our field team for Sohkark, site 2!

Friday, September 16, 2016

No prize for finishing (or starting) your PhD first

The first time I came to Montréal was a couple of years ago. I was just finishing up my undergraduate degree at the University of Notre Dame and had the opportunity to attend the Genomes to/aux Biomes conference where I presented some research I was doing on speciation genomics of apple maggot flies out of Jeffrey Feder’s lab.

discovering the culinary delicacies of the north

It was a pleasant and sunny few days packed with science and poutine. For me, it was also a chance to explore the city and McGill University where I was to begin my Ph.D. in the fall and meet up with Rowan Barrett, my supervisor. I knew Rowan from years back when we were both working in Hopi Hoekstra’s lab and I had already gone out to the sandhills of Nebraska with him a few times to catch mice for the project that I would work on in my Ph.D. I had already been accepted to McGill and the funding was in place. The situation was ideal and everything was going according to plan. But sometimes life has other things in store.

corn thuggin with Rowan

Besides this Ph.D., the one thing I applied for was the MEME Erasmus Mundus Master Programme in Evolutionary Biology. It’s a joint 2-year master programme between four European universities (University of Groningen, Ludwig Maximilians University of Munich, Uppsala University, and University of Montpellier) and had been somewhat of a dream of mine ever since I found out about it in my freshman year.

our logo is pretty lit

The programme is set up such that students choose to take courses in either Groningen, Netherlands or Uppsala, Sweden in the first semester. Students then go to either Munich, Germany or Montpellier, France for more courses and a half-semester research project. In the final year, students conduct two separate thesis projects in any of the four universities, Harvard University (a partner of the programme), or basically any university or research institution in the world so long as a professor from one of the four universities is willing to supervise the project. In the end, students earn double or even triple M.Sc. degrees and often come out with multiple publications. Having never been to Europe before in my life and having been awarded a full scholarship, MEME was a once-in-a-lifetime opportunity I could not refuse. So I decided to take a detour to my Ph.D. I figured, if it was meant to be, I would find my way back eventually. Thankfully, Rowan agreed. :)

What I can say is that it was simply the best time of my life. We were 22 students representing 17 countries, each bringing something different to the table from our diverse cultural and educational backgrounds. Our discussions ranged from Dawkins and The Selfish Gene to the insanity of dealing with French banks to which new country was going to be our next adventure. And I won’t lie, it was fun to see a Syrian doctor interested in evolutionary medicine and bacteriophages struggle on the mudflats of the northern Dutch island of Schiermonnikoog doing field work, wondering out loud why the entire field of evolutionary ecology exists in the first place.

flatness can be beautiful too

Travel became life and life fit in a backpack. MEME took me from the Netherlands to France to California to Sweden to China, all within a span of 24 months. Each new country came with its own set of challenges, and trying to open and close entire chapters of your life within months wasn't easy. A Malaysian classmate of mine put it best: it was like going through breakup after breakup, but with each new relationship, you learn and become more experienced. The projects I worked on were equally diverse, from the genetics of starvation tolerance in European seabass with Bruno Guinand, to genetic mark-recapture of giant pandas with Per Palsbøll, Matt Durnin, Katja Guschanski and Jacob Höglund, to taxonomic assignment of metabarcoding data with Douglas Yu. It was an intense, crazy, unforgettable experience. A rollercoaster or a whirlwind... or a rollercoaster caught in a whirlwind. And don’t get me started on the parties. Oh the parties…

MEME graduation 2016 - Erken, Sweden

So a full 2 years later, I now have 3 M.Sc. degrees in evolutionary biology from 3 countries, 1 paper accepted, 1 submitted, and more to come. I have a deeper understanding of what it really means to be a global citizen and greater personal and scientific maturity to start my new life and Ph.D. at McGill. So if you’re reading this and this all sounds pretty cool to you, the next application cycle opens soon on October 15, 2016. My advice to any undergrads out there is to take your time. In fact, I almost wish I had taken more. The academic road is a long one and there is no prize for who gets their Ph.D. first. Of course, it's best to stay productive by becoming a research assistant or doing a masters, especially if you want a career within academia, but if you're not sure about your next step, it wouldn't be the wisest idea to jump straight into a Ph.D. Or perhaps this is just the European culture rubbing off on me (which isn't so bad!). In part because I decided to do MEME before starting my Ph.D. at McGill, I successfully applied for the Vanier Canada Graduate Scholarship, so it seems I made the right decision after all. The next big challenge for me will be to settle in at McGill, get used to living in one place for more than 5 months, and sink my teeth into some long-term projects, which I now gladly accept.

Tuesday, September 13, 2016

Is Prediction an Exquisite Fiction?

As I described in a previous post, a long-standing topic of discussion is the usefulness of a given scientific endeavor or study. Along these lines, science is often divided into BASIC and APPLIED. Applied science is – by definition – useful. It cures some disease. It improves crop yields. It saves some endangered species. Basic science is – at least classically – not obviously or immediately useful. Instead, it addresses a (hopefully) interesting question – interesting at least to the researcher. Sometimes called “curiosity-driven” science, basic research might one day have great utility but, at the time it is conducted, its uses aren’t obvious.

The motivation for my earlier post on basic vs. applied science.
Basic science was once considered an admirable pursuit – perhaps even preferable as an intellectual, university-based enterprise. More recently, however, universities and funding agencies want to hear how your research – whether basic or applied – will have “broader impacts” or “direct benefit to the people of ...” No longer is it enough for the science itself to be interesting and clever and well designed; it also has to have a clear utility. When justifying a research project, these pay-offs are expected to be clearly and forcefully presented, usually at the outset of a proposal and in an explicit section at the end.

For basic scientists in ecology and evolution, these applied justifications tend to involve conservation (e.g., saving some endangered species or place), management (e.g., of natural resources), discovery (e.g., new drugs), or ecosystem services (e.g., greater biodiversity generates greater productivity or resilience or whatever). In many cases, the specific link between the science and the proffered application is PREDICTION. For example, “we need to be able to predict what is going to happen, in the face of environmental change or management actions, if we are going to design effective strategies for conservation or management.” This sort of justification is a natural and easy one because we can always say “If we don’t understand the system well, we can’t predict it. My research will help us to understand the system better, which will improve prediction, which will be useful, right?”

Just last week I – along with 21 other scientists – published an opinion/review paper in Science amplifying this last point. Specifically, we need to predict what will happen with climate change and – to do so accurately – we need much more information about organisms, communities, and ecosystems than we currently have. In this post, I would like to play Devil’s Advocate to my own paper by arguing that prediction is often hopeless.

From our Science paper.
A first important distinction is whether we wish to make a prediction or whether we wish to make an ACCURATE prediction. It might seem obvious that we want the latter, but even the former is sometimes hard. That is, we might not have enough information about a given system to even speculate effectively as to whether or not some action (e.g., climate change) will have a particular effect on a particular species. Most of the time, however, we are able to make some sort of prediction based on intuition or similar systems or mathematical models or experiments or whatever. So the real concern becomes “how accurate (and precise) will our predictions be?”

The accuracy of prediction will depend on the type and precision of prediction. For instance, we might first want to predict simply WHETHER a given environmental change or management action will have an effect at all. Here we might be safe in many instances. Will climate change influence biological diversity? Yes! If the environmental change is large, something will respond to it. However, this isn’t the sort of prediction that we – or the public or managers or governments – care about.

We might next want to predict the DIRECTION of an effect. In some cases, this will work fairly well. For instance, we can safely say – based on many examples from nature – that climate warming will advance the timing of reproduction of many plants and animals and that commercial fisheries will lead to smaller body size in harvested populations. A few exceptions will certainly occur but these will tend to be of the type that “prove the rule.” In many other cases, however, predictions as to the direction of an effect will be incorrect. Will climate warming increase or decrease local biodiversity? Hard to say. Will fish harvesting increase or decrease productivity? It depends. In such cases, increased information – including from “basic science” – might improve predictions. 

Experience teaches, however, that expectations developed from theory, from related systems, and from detailed information are – not infrequently – incorrect.
At the most precise level, we might want to predict an effect size, such as a particular rate or endpoint state. How fast will species be lost with climate warming? How many species will be present 25 years from now – and where will they be? How small will harvested fish become and how quickly will they recover when fishing ceases? I suggest that – in many cases – predictions of this sort will be hopelessly inaccurate, except perhaps by blind luck. Each system (and year) has so much contingency that prior information will not be sufficient. Of course, this is precisely the logic that we invoke when seeking funding: “We can’t make accurate predictions unless we get more information, so give me some money to get it.” It is certainly true that if one had complete information on the driving forces in any given system and complete information about how those driving forces will change in the future, then accurate predictions of endpoints and rates might be possible. But this “complete” information is generally unattainable.

Another opinion in Science about prediction
In short, many of the arguments one reads in proposals that the particular basic science being proposed is critical for better prediction are really just smoke-and-mirrors or, perhaps more accurately, a bait-and-switch. Five years later: “Although I didn’t make better predictions, I did do some cool stuff anyway, no?” Of course, these studies can also weasel out of accountability by saying “Here is some new information that other people might find useful in making better predictions” or “Here are some new predictions.” – with the last being particularly disingenuous because the accuracy of those predictions won’t be known, sometimes for decades.

My point in this post isn’t that basic science should be abandoned in favor of applied science. My point instead is that it would be nice if we could all just drop the applied BS at the start and end of our proposals. That isn’t why we are doing the study – it is just what we think the reviewers want to hear. The reality is that science has made incredible strides in the past few centuries – and most of those advances, I will speculate, were made by basic rather than applied science. Think of all the ramifications of Darwin’s theory of natural selection and – coincidentally – all of its incredible and amazing applications. At the time, however, Darwin – and the people who read his book – didn’t focus on its potential applications but rather on its potential to explain how the world around us came to be.

I had better circle back to that Science paper for which I am here playing Devil’s Advocate. It is certainly true that we don’t have enough information to make good predictions of how biodiversity and species ranges will change with climate change. It is also true that getting more information about those species and environments has the potential to improve predictions – although we won’t know if we are correct for decades. Thus, I am not disputing the main arguments we made in the paper. Instead, I am using it as a jumping-off point to argue that additional information is probably even more useful simply in improving our understanding of the world around us, whether or not we attempt predictions. Sometimes this improved basic understanding will eventually have massive benefits for biodiversity and the humans that depend on it.

I think it cheapens, and potentially slows, progress in science to require it (or encourage it) to have obvious immediate applications. The best route to the best possible future applications is to simply turn researchers loose to study what they feel is most interesting, whether applied or basic. Basic research isn’t flawed and in need of an applied crutch to hold it up.


After I posted this, I was told about a similar post on Dynamic Ecology:

Wednesday, August 31, 2016

The Trouble with the Plankton

Earlier this summer, I went to a Gordon Research Conference on Ocean Global Change Biology at the invitation of Sinead Collins. I don’t typically work on oceans, and so the fit might not seem obvious, but the relevant part was that the field has taken on a distinctly evolutionary flavor. It turns out that many ocean biologists are now focusing on adaptive responses of marine organisms to climate change, especially ocean acidification. It was a wonder to sit through talk after talk of studies assessing the potential for (usually) plankton to adapt to either increased acidity or warmer water. Even the talks that didn’t focus on evolution almost always referred to it in an informed and considered manner. I had previously been to a similar conference 8 or so years earlier, and that time I saw only the barest hint of evolution – so this was an exciting change. Yet it isn’t in my nature to be complimentary without qualification (or critical without qualification) – and the same will apply here.

I couldn't decide my favorite photo so here are Google's favorites.
In 1961, G. Evelyn Hutchinson wrote a paper titled The Paradox of the Plankton, in which he discussed the apparent paradox that so many species of plankton coexist even though they compete for similar nutrients – ostensibly in contradiction to the principle of competitive exclusion. I wish to introduce here – by way of verbal analogy – The Trouble with the Plankton, which is somewhat related to the Paradox of the Plankton in its emphasis on variation.

Stated simply, I suggest that Ocean Global Change biologists should stop worrying about whether or not plankton will evolve in response to climate change – they will! Unlike for many other organisms, evolution is not normally going to be a problem for phytoplankton (or even zooplankton) – for four main reasons.

1. Most species of plankton are extremely abundant, which means that standing genetic variation will be huge, as will be mutational inputs. In short, genetic variation – the raw material for evolution – should be massive for essentially all marine plankton.

2. Most species (and indeed many populations and even individuals) of plankton experience dramatic fluctuations in environmental conditions across space (vertical and horizontal) and time (hourly, daily, seasonally). This past variation in environmental conditions means that past selection will have tested (and sometimes favored) adaptive genetic variants for a wide variety of conditions – again maintaining high genetic variation in adaptively-relevant variants.

3. The rate of abiotic environmental change in the ocean is very modest not only in relation to the above-noted past and present spatiotemporal variation in selection but also in relation to the generation time of phytoplankton (and zooplankton). As a result, the per-generation shift in the environment owing to climate change will be tiny in relation to the potential evolutionary speed of plankton.

4. Many plankton show adaptive plasticity in response to different abiotic conditions, including acidity and temperature. This plasticity should buffer the immediate negative effects of environmental change and thus allow further time for evolution.

Fitting these expectations, every study at the conference showed strong evolution in response to dramatically altered environments (often much more so than projections for climate change), despite often extremely limited starting genetic variation. Many studies of freshwater plankton have similarly shown that evolution in even small experimental populations can accomplish – in only a single summer – full adaptation to environmental changes projected to take place over decades. And “resurrection” studies that bring past zooplankton to life also show rapid responses to all sorts of environmental changes. So I suggest that we don’t need more studies asking “can plankton adapt to climate change” – they can – simple as that.

However, I do think that further evolutionary studies are critical for Ocean Global Change Biology – I merely suggest that their focus should be a bit different.

1. Studies could profitably ask “what are the consequences of the evolution of plankton for communities and ecosystems.” I imagine that the evolution of plankton in response to climate change could dramatically alter their relationship with other species in the community. Some of those species, especially those with longer generation times, such as planktivorous fish, might have trouble responding adaptively. Thus, it would be fascinating to take those experimentally evolved lines of plankton and see how they interact with other key species in the community.

Here you can find more arguments for considering the ecological effects of evolution.  
2. Although most (maybe all) plankton will have no trouble adapting to abiotic changes associated with climate change, they might have trouble adapting to some correlated biotic changes. For instance, planktivorous fish might dramatically change in abundance with climate change, which might then impact plankton populations in ways that are strongly modified by evolution.

Of course, the general statements above are not intended to imply that all marine invertebrates will easily adapt to climate change. Corals, for instance, seem to be near their physiological (and evolutionary) limits already and might have no suitable genetic variation to respond to selection. Of course, changing their symbionts might be another way to adapt – although that too will have limits. Also, species in already extreme conditions (e.g., the hottest or most acidic water) might not be able to persist locally as those conditions change. Indeed, acid rain caused the extirpation of many (but not all) plankton species – and very warm (or cold) temperatures could do the same.

Regardless of whether or not I am correct that plankton will have little trouble adapting, I do think evolutionary studies are extremely informative. I can’t wait to be invited back for the next Gordon Research Conference – or perhaps I won’t be, given this post.

Thursday, August 25, 2016

A story on competition. Competition and parasites, sort of.

I was recently talking about my PhD work with some new colleagues in the Department, when I realized I was getting a little bit nostalgic. Working in Trinidad was a great experience, and I had a lot of fun doing all the small and large experiments, but in reality one of the things I missed the most was one little fish – and no, it was not the Trinidadian guppy.

Fig. 1. I guess that they do look like they are smiling...

During one of my first trips to the island, I got to familiarize myself with the very charismatic jumping wabeen, or more formally Rivulus hartii. It is hard to explain why I got so fond of this little guy, but many people who have worked with Rivulus share the same feeling – although many others don't even want to hear about them, and you will soon know why. My love for Rivulus, like any other love relationship, went through ups and downs. Rivulus tend to jump (a lot!), which is why their local name is jumping wabeen. You cannot leave one inch of the aquaria uncovered, because one by one they will jump out, and you will end up with a parade of jumping little blobs on your lab floor. Those were the days I hated working with Rivulus. Then there were the days when they sat quietly in their aquaria, looking straight at me with their huge baby eyes and big smile (or at least that is how their mouth looks to me; see for yourself in Fig. 1). Those were the days when I loved working with Rivulus. But then there were the days when you needed to transfer two of them to a new facility and made the mistake of putting different sizes together. You would still arrive at your destination with two fish, but one would definitely be inside the other – in a way, it was similar to the painting "Big fish eat small fish" by Pieter Bruegel the Elder (Fig. 2), where pretty much every fish is munching on a smaller one, which is munching on a yet smaller one. Those were also the days I hated working with Rivulus.

Fig. 2. Big fish eat small fish by Pieter Bruegel the Elder. I really like the walking fish munching on a smaller one.

The fact that Rivulus are relatively small and voracious, and coexist with the Trinidadian guppy, has been of great interest to many ecologists and evolutionary biologists – among whom I can certainly include myself. In particular, I was very interested in how the interaction between guppies and Rivulus could be affected by a guppy-specific parasite, Gyrodactylus (you can see some of my previous blogs about guppies and Gyros). But before I go into the details of that experiment, let me talk a little bit more about Rivulus' ecology. Adult Rivulus are much larger than adult guppies (the largest Rivulus are almost three times larger than the largest guppies!) and are strict predators, foraging mainly on anything that fits in their mouth, such as invertebrates and small fish, including juvenile guppies. Juvenile Rivulus, on the other hand, are of similar size to guppies, and directly compete with guppies for shelter and food (i.e., aquatic invertebrates). Previous studies have also shown that the presence of guppies decreases the growth rate of size-matched juvenile Rivulus – through resource competition – but dramatically increases the growth rate of adult Rivulus, through guppy predation on Rivulus young and the release of adult Rivulus from intraspecific competition. Given these strong interactions between the various size classes of Rivulus and guppies, it was very conceivable that a guppy-specific parasite could tip the balance in Rivulus' favor. Or at least that was my initial hypothesis.

I designed an experiment that would allow me to break down the different effects that Rivulus, guppies, and the guppy-specific ectoparasite Gyrodactylus could have on both fish species' growth. The experiment consisted of five experimental treatments: guppies only (GO); guppies and Gyrodactylus (GG); guppies, Gyrodactylus, and Rivulus (GGR); guppies and Rivulus (GR); and Rivulus only (RO). I made sure that the total biomass was, if not equal, very similar among the different replicates, put the size-matched fish in mesocosms that replicate natural streams (i.e., lots of gravel with invertebrates and algae, flowing water, etc.), and came back 20 days later to see how much guppies and Rivulus had grown in the presence of each other and/or the parasite. (If you want to check out the full article with the detailed methods, you can do so here.)

The results were somewhat surprising. Remember that I said my hypothesis was that Gyrodactylus was going to tip the balance in favor of Rivulus. I found that the presence of Gyrodactylus parasites decreased female guppy growth, and this effect was much stronger than the effect of Rivulus (Fig. 3), but more intriguingly I found a very strong antagonistic interaction: Gyrodactylus reduced guppy growth in the absence, but not the presence, of Rivulus. In short, the relative effect of Gyrodactylus on the growth of guppies was much greater than that of the competitor (and potential predator), but the two effects were strongly interactive. In other words, Gyrodactylus did not significantly affect the interaction between Rivulus and guppies!!!

Fig. 3. Gyrodactylus did not influence Rivulus-guppy competition, but certainly had an effect on its own!

In the paper, my co-authors and I suggest two potential mechanisms that may have prevented Gyrodactylus from influencing the guppy–Rivulus interaction. First, the coexistence of guppies and Rivulus has commonly been viewed as a balance between predation and competition, with guppies being the better competitors but large adult Rivulus actively preying upon juvenile guppies. Although we did find a trend for a decrease in the growth of Rivulus in the presence of size-matched guppies, this was not significantly different from the Rivulus-only control. It is possible that, under these experimental conditions, competition was lessened by the relatively low fish density per mesocosm; an alternative possibility, however, is that Rivulus grow larger than guppies and shift their diet towards terrestrial prey that are too large for guppies to eat, releasing them from resource competition. Indeed, at the end of the experiment, Rivulus in the mixed-species treatment were almost three times larger than female guppies, despite being of similar size to the largest female guppies in the mesocosms at the beginning of the experiment. Second, as an apparently adaptive response to reduce Rivulus predation, juvenile guppies increase their growth rate when exposed to chemical cues from adult Rivulus. Guppies might thus show a phenotypic response to Rivulus as a potential predator, rather than as a competitor. Even though the Rivulus in our experiment were not large enough to eat the guppies, the presence of small Rivulus is presumably a reliable cue of the likely presence of larger Rivulus. If guppies increased their growth in response to chemical cues signaling the presence of Rivulus, this would have partially counteracted the negative effects of parasitism on guppy growth – consistent with our observation that female guppy growth in the presence of both Gyrodactylus and Rivulus was intermediate between the guppy-only and guppy–Gyrodactylus treatments.

I certainly think that these results generate several important insights into the nature of guppy–Gyrodactylus–Rivulus interactions and, more generally, food web interactions. But these results have also helped me better understand what is going on in the small and shallow tributaries of Trinidad, where I spent several nights and days exploring and collecting fish.

Rivulus are more easily collected at night; Pierson Hill took this great picture on one of the nights I helped them collect Rivs for David Reznick's FIBR project.

Our paper:  Pérez-Jvostov, F., Hendry, A. P., Fussmann, G. F. and Scott, M. E. (2016), An experimental test of antagonistic effects of competition and parasitism on host performance in semi-natural mesocosms. Oikos, 125: 790–796. doi:10.1111/oik.02499.

Saturday, August 6, 2016

On Failure

By Dan Bolnick

Sitting on the beach tonight playing chess and drinking wine with my postdoc Yoel Stuart, I couldn’t help but worry about tomorrow. Tomorrow morning is a crucial step in an experiment that colleagues and I have planned for years. The idea came in 2008, but took years to get all the pieces in place. One NSF grant was funded, and completed in three years, to get preliminary data to plan a second NSF grant that was also funded to do this experiment. We also had to convince the Howard Hughes Medical Institute to build a huge fish room in a remote field station (Bamfield Marine Sciences Centre). Then we had to get permits. Then we spent a year and a half breeding fish. Field trips to collect parents, personnel time to take care of F1s, a grad student RA position to live in Bamfield and cross fish to make F2s, then more personnel time to breed the fish. Then, weeks spent sewing a kilometer of netting into little cages, and cutting up and assembling two and a half kilometers of PVC piping into cage frames. A week of field work with six people to install the 160 cages in 4 locations on northern Vancouver Island.

Ready for stickleback - but will it work?
Years of preparation, multiple grants, and now a moment of truth: will the juvenile fish survive the 5-hour drive north over rough dirt roads, from their rearing facility at BMSC to their grandparents’ native lakes and streams? If not, the intended experiment will have failed before it really began. No data. No conclusions. (Admittedly, this upcoming transplant is just one of 8 planned transplants in this project, so we have the opportunity to learn, adjust, and recover.)

Naturally, I am thinking a lot about failure tonight. Not just the potential failure of this transplant experiment in particular, but broader questions of failure in science. Evening on a beach is a good time and place for broad contemplation. Pachena Beach, especially, with its slanting sunlight through drifting fog and tall trees.

So, I find myself wondering:
  • How many attempted experiments fail for logistical reasons and just never get reported?
  • What are the various reasons why we fail?
  • What do we learn from our various experimental failures?
  • When is failure a productive source of insight, versus a plain old flop?

I started to also write the question: “How do we best insulate ourselves from failure?”, then paused. The fact is, failure is not uniformly bad. Sure, too many high-risk projects may leave us empty-handed. But over-attention to failure can be bad too. Fear of failure can drive some people to paralysis. Others may take risks but falsify the results of failed attempts. Still others opt to rely exclusively on ‘safe’ projects that often cover well-trodden ground and thus teach us little that is new or interesting. This leads to the conclusion that we shouldn’t insulate ourselves from failure. Instead, we need to become good judges of scientific risk, choosing an intellectual ‘portfolio’ of projects that combine an appropriate range of risks. A mix of high- and low-risk.

So instead of asking how to avoid failed experiments, I would rather ask how we can teach aspiring students to judge risk in advance, and how to be brave but not foolhardy in taking on projects. This is surely fodder for an entire book. Such books probably even exist. I don’t know, because I am sitting on a beach without wifi (thank goodness). Lacking access to the web, I will attempt a much more modest goal with this blog post: a taxonomy of scientific failures. And I will illustrate these failures with vignettes from my own experience. Consider this my mea culpa of failed attempts at science. Hopefully it will be cathartic for me and entertaining for you, and get the right karma in place to keep my north-bound fish alive in the coming day.

Spoiler alerts: the following contains some references to events in Game of Thrones. If you don’t care, fine, you can just ignore the ‘literary’ references and focus on the ideas and biology. If you’ve read the books or seen the TV show, fine. If you haven’t yet read these but intend to, then you might want to skip down to the line saying . Sorry.

Failure category 1: The Viserys Targaryen. For “Game of Thrones” aficionados, Viserys is a minor but entertaining character: the child of a dethroned king who connived to reconquer his ancestral kingdom. He thought he had a plan to do so, but sort of bumbled along and didn’t implement things very deftly, with the result that his plan fell apart (and he had molten gold poured over his head). Had he thought a bit more clearly, he should have foreseen some of these problems. So, I’ll invoke Viserys to represent a very common category of failure, in which the basic plan seems reasonable at a cursory glance, but the details and implementation don’t live up to expectations. This is perhaps the most common and most avoidable form of failure. You come away empty-handed, except perhaps with a better understanding of how NOT to design an experiment (which is indeed useful).
Viserys and his golden crown.

Failure category 2: The Wise Masters. Continuing with our literary theme (if you choose to call it that), the Wise Masters really thought that they had a well-worked out path to their goals. They simply overlooked a colossal and totally unexpected fact: their adversary had massive pet dragons. Oops. Not really something you can plan ahead for. So, we’ll give the Wise Masters a nod in naming failures in which truly unforeseeable problems undermine otherwise well-thought-through plans. These may be more common than we like to think, but are inherently less avoidable than the Viserys Targaryen. Admittedly, these two kinds of failure are going to overlap a bit: an unexpected ‘dragon’ to one researcher might be foreseeable to another. This is why you should show your research plan to colleagues and mentors as much as possible – someone out there may anticipate your particular dragon.

The Wise Masters are about to meet Drogon the dragon

Failure category 3: The Eddard Stark. This one is simple: many beautiful hypotheses are slain by ugly facts – much like the idealistic Eddard Stark, who tried to govern but was undermined by the sad fact that political reality was different from what he naively believed. We could equally name this after his son, Robb Stark, King of the North, who went to a wedding of an aggrieved subordinate, incorrectly assuming that the rules of hospitality could be trusted. This is the kind of failure that philosophers of science have indeed written volumes about: we have hypotheses about how the world works. We design experiments or other kinds of studies to test these hypotheses (or their null alternatives). Sometimes we ‘fail to reject the null hypothesis’. This is a failure in the entirely constructive sense that we do indeed learn something. Unlike the previous two kinds of failure, we actually get data, we analyze them, and we were wrong about something. We learn something in the process.

Ned Stark pays the price for honor - or is it naivete?

Failure category 4. The Great Houses. The core of the book series, of course, is the battle for political dominion among several families (the ‘Great Houses’), who are so focused on their squabbles that they totally overlook the fundamental fact that their collective existence is threatened by semi-human magical winter beings. Kind of an important thing to know about, and they had some warnings thanks to the Night’s Watch. Likewise, every now and then we scientists get a hint of something really substantially new and surprising, and we are often so focused on our previous agenda that we overlook the hint, not recognizing the importance of what we just saw. This is perhaps the most problematic failure, because it represents lost opportunities for novel insight.

To summarize, our taxonomy of failures includes 1) poor planning leading to avoidable problems, 2) unexpected interference, 3) incorrect hypotheses, and 4) overlooking important things. I’ve probably failed to include something here – feel free to chime in on the comments.

Now, in the spirit of full disclosure I want to give a few examples of my own, in the hope these will help students or colleagues avoid similar mistakes, raise awareness that one’s career can survive failures (I think…), and perhaps even entertain.

Vignette 1: When I pulled a ‘Viserys Targaryen’, also known among my graduate students and postdocs as ‘Bolnick’s folly’. When I first started working on stickleback, I did an experiment in one half of an hourglass-shaped lake. I later returned to that lake to examine the other half in more detail, discovering that the stickleback in the larger, deeper basin and the small, shallow basin were dramatically different in diet (more so than the famed benthic–limnetic species pairs of stickleback). Yet despite this massive ecological difference, their phenotypes were only subtly divergent. Why not diverge as much as the species pairs?

Ormond and Dugout Lakes on Vancouver Island. The narrow marsh separating them can be clearly seen. The barrier to dispersal was built across that marsh.

Presumably because the two lake basins are connected by a narrow marsh (~20 meters wide) that permits free movement of migrants between the basins (Bolnick et al. 2008 Biol. J. Linn. Soc.). So, obvious experiment: create a barrier to movement, track the subsequent emergence of genetic and phenotypic differences, then remove the barrier and watch those differences collapse. All I needed was a barrier. So, in 2007 I found myself back in British Columbia with two field assistants and an extra week on our hands between other tasks. I had planned ahead and obtained permission to build a barrier and leave it in for a decade (~10 generations). All that remained was to make the barrier a reality. We installed sturdy steel 8-foot-tall fence posts in a transect across the entire neck of marsh connecting the two lakes. We attached chain-link fencing, carefully sunk into the substrate of the marsh all the way across (~30 meters wide, including semi-marshy habitat that probably rarely allows migration, but we had to block that just in case). We then attached a fine screen to this fencing – it had to have fine enough mesh to prevent the passage even of juvenile fish, so we used a sturdy type of coarse mosquito netting, one layer on either side of the fence. Then we installed another layer of chain-link fencing to sandwich the mosquito netting in between, for added strength. All of this was buried deeply into the substrate, which involved several days of lying face down in muck in our wetsuits, cutting into the peat with a saw. The end product looked satisfyingly sturdy (Fig. 2).

Building the barrier across the marsh.

Now, I knew all along that water flowed from the smaller basin into the larger one – imperceptibly slowly, but still flowing. And I knew therefore that the fence would get water pressure and sediment build-up. But I figured water would keep seeping through, maybe raise the water level a bit. I knew this might be wrong, and the whole thing might fall apart due to water pressure. But, it was a risk I was willing to take.

Ten months later, when we returned to the site, it was a mess. The barrier had clearly worked at keeping stuff from moving between the lakes – including fine sediment, which built up. The fence became a dam. And those 8-foot-tall fence posts were stuck firmly in the sediment (job well done!) but were not up to the task of holding back a 4-hectare lake. They bent over like straws. We found the whole fence lying on its side (Fig. 3), not because the posts came out but because the steel beams bent to nearly 90-degree angles to let the water over them. Experiment finished, no data, no biological lesson. I’d still love to do that experiment, but I just don’t know how to engineer it myself.

The Experimental outcome – no experiment

So, I took a risk, and my design did not work, so the experiment flopped, literally on its side. On the plus side, the total cost was maybe $1000 in materials and three people’s time for 5 days to build it. Low cost, high possible reward, high risk. Did I make the right decision to try this? Perhaps not, but it was exciting while it lasted and makes a fun story.

Vignette 2: When ‘dragons’ – specifically, trout – ate my graduate student’s experiment. My student Brian Lohman and I planned a study in which we would capture individual fish and collect detailed data on their microhabitat at the capture location – then mark and release them. We’d do that for a month, every day, all over a small 4-hectare lake (a different one than above). Hopefully we’d get multiple captures of many individuals, obtaining detailed measures of individual movement distances and habitat use. Then we could use a habitat map to evaluate the role of habitat choice in dispersal decisions within a single lake. Things went swimmingly for weeks – it was wet and windy and grey, but otherwise Brian was able to mark a large number of fish. But as time went on and the number of recaptures stayed at less than 10, he was puzzled. Then, on the first sunny calm day, he could finally see what was going on below the surface of the water. Some local trout had apparently learned to associate his small boat with the periodic arrival of momentarily disoriented stickleback. Fish after fish was released back at its capture site, only to be instantly eaten. Not something we had ever experienced before or thought to anticipate, but the end result was too few recaptured (surviving) fish to execute the intended study.

Vignette 3: My ‘Stark’ mistakes – or ‘misStarks’: hypotheses I thought would be correct, but ultimately proved to be unsupported. There are quite a few of these. And reassuringly, many are published – you can publish negative results. I’ll pick one example that I find most instructive. In 2009 I had dinner with Rob Knight, and over wine afterwards we compared our research projects (I talked about individual diet variation in natural populations; he talked about diet effects on gut microbiota in humans and lab mice). We conceived of a simple side-project collaboration: I take an already-existing sample of 200 stickleback from one day in one lake, and get stable isotope data from each individual to characterize their diets. I send Rob DNA extracted from their intestines, and he uses next-generation sequencing to characterize their gut microbiota. Then we ask whether among-individual diet variation in wild vertebrates correlates with among-individual variation in gut microbiota. We knew how to execute each lab step, and had done it before. We had the samples in hand. All systems go. Then, when we had the data, the first-pass analysis found no significant association between individual diet (carbon or nitrogen isotope ratios as the metric) and individual microbiota composition. To give you an idea of how odd this was, let me point out that there are tons of studies in humans and mice showing that diet changes the microbiota. This was such an accepted thing that everyone I talked to about this just said “well of course it’ll work, but it’ll certainly be cool to show this in a wild population for once” – or some variant on that sentiment. But, no significant effect.

After some head-scratching, the reason for our false expectation became clear: although sex alone had little effect on the microbiota, and diet alone had no significant effect on the microbiota, there was a strong sex × diet interaction. Basically, diet alters the microbiota in males and in females, but it does so differently in each sex, so that in a mixed sample (even keeping sex as a factor) the diet effect is obscured. So, our initial expectation failed because something more subtle was going on (Bolnick et al. 2014 Nature Communications).
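The way a crossing interaction can hide a main effect is easy to see with made-up numbers. This little sketch (entirely hypothetical group means, not our actual data) shows diet shifting a microbiota metric in opposite directions in each sex, so the pooled "diet" effect averages out to nothing:

```python
# Toy illustration of a crossing sex x diet interaction.
# All numbers are invented for illustration; they are NOT the study's data.

# Mean value of some hypothetical microbiota composition metric
# for each sex x diet group:
males   = {"diet_A": 1.0, "diet_B": -1.0}
females = {"diet_A": -1.0, "diet_B": 1.0}

# Within each sex, diet has a clear effect (of opposite sign):
diet_effect_males = males["diet_A"] - males["diet_B"]        # +2.0
diet_effect_females = females["diet_A"] - females["diet_B"]  # -2.0

# Pooled across sexes, the main effect of diet vanishes:
pooled_A = (males["diet_A"] + females["diet_A"]) / 2  # 0.0
pooled_B = (males["diet_B"] + females["diet_B"]) / 2  # 0.0
pooled_diet_effect = pooled_A - pooled_B              # 0.0

print(diet_effect_males, diet_effect_females, pooled_diet_effect)
```

This is why a model with only main effects (even one including sex as a factor) can report "no diet effect" when diet matters strongly within each sex; the sex × diet interaction term is what recovers the signal.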

This particular story illustrates the point that sometimes our failures are because we over-simplified, and if we dig a bit deeper we discover something even more interesting. That’s not to say every failure to reject a null hypothesis leads to some more interesting and subtle insight. Sometimes our alternate hypotheses are truly incorrect, or at least not supported in any way. I’ve put out substantial effort in some studies only to get ambiguous results or no significant support for a core hypothesis (Ingram et al 2011).

Vignette 4: My most embarrassing Great-House failure, however, is just now making itself clear. I’ve collected stickleback for almost 17 years now, and have dissected large numbers of wild-caught fish to determine sex, obtain stomach contents, or examine parasite loads. In all that time, I would frequently dissect fish whose internal organs were oddly fused together – like someone had injected glue inside. I didn’t really know what to make of it, so I ignored it. But now, that overlooked observation is turning out to be a key feature of a story my lab is building up at the moment and approaching publication. Rather than spoil the surprise, I’ll leave the details for another post when this paper is done and published – suffice to say, there are cool biological processes under our noses, and we sometimes just pass them by because we are so busy with our pre-planned agenda.

I suppose the moral of vignette 4 is to remain observant of the natural history of your system, to avoid missing the proverbial “White Walker in the Room”. Ask questions about oddities that you notice, even if they are not in your planned linear trajectory. Constant vigilance! Because it might be something really neat that you are just about to pass by. Let’s face it, we spend so much of our time meticulously planning our experiments to avoid Viserys- or Wise Masters-type mistakes, and we spend money and time pursuing large sample sizes so we minimize the risk of statistical errors. But the best-laid experimental design also generates some blinders that may stop us from noticing the things we never even thought to ask questions about.

To put this all together, I hope it is clear that there are many ways we can fail in science, and that some failures are to be expected – you just don’t know in advance which experiments will fail, and in what way. But sometimes you have a pretty good idea which might fail. That’s not a reason to abandon all hope – sometimes it is worth trying anyway, just in case. Just keep a broad portfolio of studies so you always have a variety of levels and types of risk of failure – that way something will pan out. Speaking of which, (this being the day after I started writing this post), I should be hearing momentarily from my crew whether the 690 fish survived the drive north. I’ll keep you posted. In the meantime, please feel free to respond to this post by putting in comments of any field or lab experiments of your own that just crashed and burned.