Our Responses to Lewandowsky’s Reply

A joint post by Ruth Dixon and Jonathan Jones

We do not find Lewandowsky, Gignac and Oberauer’s Reply to our Commentary persuasive. Their Reply is paywalled, but a Summary is here.

Our responses are on the following pages:
Part I: Reversing relationships
Part II: Satisficing
Part III: Skew
Part IV: Heteroscedasticity and Skew


5 thoughts on “Our Responses to Lewandowsky’s Reply”

  1. The idea that you can reverse the independent and dependent variables, impose an arbitrary non-linear assumption, reanalyze the data using an inferior approach and conclude, bingo, the article is invalid is patently absurd. I am not a partisan of anything other than sound science. Your reanalysis is anything but.

    • Thanks for your comment. Have you read the pages linked above? They address most of the points you raise.

      There is no reason to treat CY (belief in conspiracy theories) as necessarily being a predictor variable and CLIM (opinions on climate change) as being the dependent variable. They are both just averaged responses to survey questions. Lewandowsky’s research design does not allow causality to be inferred (there’s no time-dependent or experimental component to his study). Therefore we looked at the relationships in both directions (see ‘Reversing Relationships‘).
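The symmetry of that point can be seen in a small simulation. This is a hypothetical sketch with synthetic data (not the actual survey responses, and the variable names are ours): when two variables are imperfectly correlated, regressing each on the other gives two different fitted lines, so the data alone do not privilege either direction.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins for two averaged survey scores (NOT the real data);
# "cy" and "clim" are used purely for illustration.
cy = rng.normal(size=1000)
clim = 0.4 * cy + rng.normal(scale=1.0, size=1000)

# Ordinary least-squares slope in each direction.
slope_clim_on_cy = np.polyfit(cy, clim, 1)[0]  # treat clim as dependent
slope_cy_on_clim = np.polyfit(clim, cy, 1)[0]  # treat cy as dependent

r = np.corrcoef(cy, clim)[0, 1]

# The two regression lines coincide only if |r| = 1; in general the
# slopes multiply to r^2, so each direction describes the data differently.
print(slope_clim_on_cy, slope_cy_on_clim, r**2)
assert np.isclose(slope_clim_on_cy * slope_cy_on_clim, r**2)
```

The identity in the final line (the product of the two slopes equals the squared correlation) is standard regression algebra: neither slope is "the" relationship unless the correlation is perfect.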

      Furthermore, there are examples in the academic literature of similar variables being used in both directions. For instance, a paper published in January 2015 treated belief in conspiracy theories as the dependent variable throughout (and, interestingly, detected a curved relationship): Political Extremism Predicts Belief in Conspiracy Theories.

      And in a book chapter published in 2009, Lewandowsky and colleagues developed a scale to measure scepticism (in this case towards information from the media and politicians), which they used as a predictor variable, the opposite of their treatment of climate scepticism in LOG13 and LGO13.

      There is therefore nothing fundamental about ‘scepticism’ or ‘conspiracy beliefs’ that determines their use as dependent or independent variables.

      On your other point, it’s amusing to see that you describe Loess as ‘imposing’ anything on the data. The strength of that method is that it assumes no particular functional form, but lets the shape of the data show through. A linear fit (which is what Lewandowsky et al.’s method imposes) is far more restrictive.
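To make the contrast concrete, here is a minimal sketch of the Loess idea: at each point, fit a weighted straight line to only the nearest fraction of the data. This toy smoother (tricube weights, no robustness iterations, so a simplification of real Loess) tracks a curved relationship in synthetic data that a single global straight line cannot.

```python
import numpy as np

def local_linear_smooth(x, y, frac=0.3):
    """Toy Loess-style smoother: at each x[i], fit a straight line to the
    nearest `frac` of the points, weighted by the tricube kernel.
    A sketch of the idea only; real Loess adds robustness iterations."""
    n = len(x)
    k = max(2, int(frac * n))
    fitted = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        idx = np.argsort(d)[:k]                       # k nearest neighbours
        w = (1 - (d[idx] / d[idx].max()) ** 3) ** 3   # tricube weights
        # np.polyfit squares its weights, so pass sqrt(w).
        slope, intercept = np.polyfit(x[idx], y[idx], 1, w=np.sqrt(w))
        fitted[i] = slope * x[i] + intercept
    return fitted

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(-2, 2, 200))
y = x**2 + rng.normal(scale=0.3, size=200)  # a curved relationship

smooth = local_linear_smooth(x, y)
linear = np.polyval(np.polyfit(x, y, 1), x)  # one global straight line

# The local fit follows the curvature; the global line cannot.
print("local MSE:", np.mean((y - smooth) ** 2))
print("linear MSE:", np.mean((y - linear) ** 2))
```

The "assumption" in Loess is only local smoothness; the shape of the fitted curve comes from the data, not from a formula chosen in advance.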

    • On your more general point, a relationship with practical significance should be obvious in the data, however it is analysed. As Roger Pielke Jr recently put it in the Guardian:

      “As researchers, we should recognize that meaningful relationships ought to be detectable with simple methods and robust to alternative methodological approaches. If the effect you are looking for requires a complex model, data transformed away from intuitive units or sophisticated statistics to detect, then the effect that you think you have found is probably not practically significant, even if you are convinced that it truly exists. Consider that the effects of vaccines or the consequences of smoking are easily seen with understandable data and simple statistics, under a variety of experimental designs.”
