We strive, but unfortunately humans are not perfect. Some are further from it than others, and the odd person gets quite close on a good day, but ultimately we all have our flaws and foibles. For instance, we like to think we are good, rational beings, capable of judging the evidence presented to us and drawing the most reasonable conclusions, but this is not always true. Even when given decent evidence, and when applying ourselves to it (rather than ignoring it because it disagrees with our world view), we are susceptible to biases. Something as simple as the order in which evidence is presented can influence whether we rate an idea positively or negatively. This idea was brought to my attention by Daniel Kahneman’s book “Thinking, Fast and Slow”, which is really great and which I can recommend to anyone interested in how we make decisions.
In essence, the idea is that, when given the positive points about an idea first and the negative points second, we are more likely to judge that idea favourably. If the order of presentation is reversed, but the evidence kept the same, we are more likely to judge exactly the same thing negatively. We are primed by the initial evidence, either positively or negatively, to consider the idea in that light, even if we subsequently receive evidence of equal weight to the contrary. Of course, our minds can be changed by the weight of evidence in the opposite direction, but order can still play an important role.
This seems like an intuitive idea, much like confirmation bias (which, having learnt about it in the same book, I am now seeing everywhere…). And I’ve been thinking about it in terms of the scientific literature we consume every day (ish). In a paper, the cool and interesting bits supporting the hypotheses are the Results; they are the evidence for whatever argument the authors are making. On the other hand, the parts that make you doubt the findings are most likely to be in the Methods: whether their experiment tests the hypothesis they think it does, whether the analysis chosen does what they say it does, and so on. So grant me a degree of artistic licence if I class the Methods section of a paper as the minus points negating a paper’s thrust, while the Results are the positive points supporting it. How does this relate to the order effect outlined above?
Well, all journals tend to place the Introduction at the start of the paper, and all of them place the Discussion and conclusions at the end. But there is a degree of division about what to do with the Methods in relation to the Results. Many journals place the Methods squarely before the Results, so that you can understand where the findings are coming from. Another set of journals puts the Results directly after the Introduction, so you can find out the answer to the questions being posed immediately, and then peruse the Methods at the end to discover exactly how it was done. Finally, a third set of journals tends to relegate most of the Methods to online supporting information, rendering what remains in the paper largely useless for following exactly what the authors have done, let alone contemplating recreating their work yourself.
Hopefully you now see the link. Journals with the Methods before the Results are placing the negative points first, priming the reader to disagree with the findings. Conversely, journals that place the Methods after the Results are priming the reader to agree with the findings, even if the weight of evidence is the same. Finally, journals that banish the Methods to the supplementary materials are removing the negative points from public view altogether. Obviously, the latter is essentially skulduggery and should be ceased forthwith*, but is the second option devious as well? Are journals that place the Methods at the end of the paper intentionally or unintentionally taking advantage of the reader’s unconscious biases?
Well, it’s hard ever to prove something like that, and I am sure no editor currently working at these journals considers this explicit journal policy. It is instructive, however, to consider the journals that tend to place the Methods first, which within my own field primarily include society journals such as Behavioral Ecology, Animal Behaviour, Proceedings B, Ecology Letters, Evolution, and pretty much all journals at that level of “prestige” and lower. Which journals place the Methods last, or banish them from the pages completely? Nature, Science, Current Biology, PNAS… See a pattern? I will note that the BMC stable lets you choose, which disrupts my point somewhat (but possibly creates a dataset in which one could test this idea…), but no theory is perfect.
So, am I suggesting that it is the “glamorous” or “prestigious” (or “tabloid”…) journals that tend to take advantage of order effects to promote their publications, while the good, honest society journals do the decent thing and put the Methods first? Or perhaps it was this tactic of putting the Methods last that helped their articles, and so the journals themselves, gain popularity? Or maybe these things are totally unrelated and I have formulated a conspiracy out of nothing. I’m not sure, but it’s an interesting thought. Learning how we think and process information is always a useful endeavour, and something scientists should be aware of, given that we rely on our ability to do this every day. So be on the lookout for your own biases.
*I am currently reading Moby Dick, and perhaps Melville’s language is leaking through
2 thoughts on “The accepted order of things”
Fascinating argument! I wonder how you’d fit the Discussion into this, though. Lots of those “negatives” appear in the Discussion; in fact, I would have said that the Discussion, more than the Methods, is where those lie. (It’s certainly the way we teach undergraduates to write lab reports!) I wonder if the placement of the “study limitations” within the Discussion has the kind of effect you mention?
It certainly would influence people’s interpretation of a study if it had a section dedicated to limitations. I remember being taught not to spend the whole discussion talking about limitations, as that wasn’t interesting!
I would hope readers would pick up the limitations from reading the study design, rather than have to have them pointed out at the end?