Sophie Panel
Senior lecturer in economics at Sciences Po Grenoble and the Cesice laboratory
How would you define objectivity in the social sciences?
Let me start with what objectivity is not: objectivity does not imply the absence of a moral or political stance toward a phenomenon. On the contrary, it requires us not to deny the existence of a phenomenon simply because we don't like it. To take a topical example, Islamophobia, that is, the set of negative stereotypes and discriminatory behaviors directed at individuals because of their real or supposed affiliation with the Muslim religion, exists and has been documented by numerous studies. Everyone is free to think that these attitudes are justifiable, or that they do not deserve to be made a priority, but objectivity demands, at a minimum, that we not deny the fact.
How do we know whether a claim is a "fact"? First, there is the criterion of diversity of evidence: if qualitative, quantitative and experimental studies lead to the same result, our confidence in that result increases. This is all the more true when the consensus is interdisciplinary, when the work of economists, for example, confirms that of sociologists or historians. There is also a criterion of quantity: the more research converges on the same conclusion, and the fewer studies contest it, the more certain we can be that the conclusion is well founded.
The existence of a scientific consensus is sometimes interpreted - wrongly - as reflecting an absence of ideological diversity among social scientists, or a pressure on them to conform to mainstream thinking. But these fears are exaggerated. When the vast majority of climate scientists agree that rising temperatures are linked to greenhouse gas emissions, it is precisely their sheer numbers that lend credence to their position. There is no reason not to extend this reasoning to the social sciences: if all the specialists on a subject defend the same conclusion, it is perhaps because that conclusion is correct!
Moreover, there are safeguards against the development of ideological monocultures. The "publish or perish" culture, with all its shortcomings, is a good example: an article whose results go against the grain of the consensus is more likely to be published, and brings greater visibility to its author, than an article of similar methodological quality that merely replicates established results. There is certainly a status quo bias, following the principle that "extraordinary claims require extraordinary evidence", but this bias is more than counterbalanced by the premium placed on extraordinary results. The existence of a scientific consensus is therefore an extremely strong signal.
Is researcher neutrality possible and desirable?
Neutrality - which, as I understand it, implies an absence of ethical bias toward the phenomenon under study - is, I think, neither possible nor desirable. To take my own example, I work on armed conflicts and authoritarian regimes, and, like most of my colleagues, I think war is a catastrophe and have no particular sympathy for dictators. This lack of neutrality does not prevent me from being objective when analyzing the causes of conflict or the workings of non-democratic regimes. Similarly, we may disagree on how to explain (or fight against) poverty, corruption or domestic violence, but nobody thinks these are desirable phenomena.
Contrary to what we sometimes hear, this lack of neutrality is not specific to the social sciences. Just as most economists are "against" unemployment, most climatologists agree that global warming is a problem, and it would be hard to find an epidemiologist in favor of the spread of Covid. The often-repeated accusation of bias is therefore factually correct, but irrelevant. If social scientists confined themselves to subjects on which they had no bias, their research would be of no interest.
How important are methods to you as a researcher?
Methods occupy a central place in my research activities, whether in my own work or when I am called upon to evaluate that of my peers. "Methods" is to be understood here in the broadest sense: it is not just a matter of technical issues such as which estimators to use; it is also a matter of asking, for example, whether a given case is appropriate for testing a given hypothesis, whether the selected indicators correspond to the concept being measured, whether the results of a statistical analysis can be interpreted causally, whether those results can be generalized, and so on.

These questions form the core of scientific discussion and controversy, at least in my areas of specialization. When I have to evaluate an article for a scientific journal, it is on the basis of these criteria that I give my opinion. And conversely, when a journal refuses to publish one of my own articles, it's 99% of the time because of methodological shortcomings, more rarely because the question addressed does not correspond to the journal's editorial line or is considered to be of minor interest. I've never had my results challenged for their political implications.
These methodological considerations may seem tedious, overly technical and inaccessible to the uninitiated (and indeed to researchers who do not practice the method in question), but they are nonetheless essential. Firstly, because the reason we give credence to a researcher's claims is not their credentials or the institution where they work, but the rigor with which they back up their assertions. Secondly, because methods are the only valid criterion for contesting the results of an article: we can disagree with the conclusions of a study, but only on methodological grounds, not because those conclusions offend our convictions or displease us ideologically.
Could you present an example of research, ideally from your own work, to illustrate the issues and tensions surrounding objectivity and neutrality in the social sciences?
I'm rarely (if ever) tempted to sweep one of my results under the carpet because I don't like it for moral reasons. On the other hand, I may have a preference for a particular hypothesis for other reasons, for example because it's more convincing or because I'm the first to have put it forward, and this is where potential problems of objectivity can arise. Theoretically, methods should serve precisely to eliminate unfounded hypotheses. In practice, it sometimes happens that several models (or measurements, etc.) are a priori equally well suited to analyzing a particular question: if one of these models produces stronger results in favor of the hypothesis I favor, I'll be tempted to put it at the heart of the article, and relegate the others to an appendix.
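The inflation produced by featuring the best-performing of several equally defensible models can be sketched in a minimal simulation: each hypothetical paper tries several specifications of the same effect and reports the one with the largest coefficient. All numbers here (true effect, noise level, number of specifications) are arbitrary assumptions chosen purely for illustration.

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.1   # the (small) true effect being estimated
SE = 0.2            # sampling noise of each specification's estimate
N_PAPERS = 5_000
K_SPECS = 5         # equally defensible specifications tried per paper

# Honest reporting: one pre-committed specification per paper.
honest = [random.gauss(TRUE_EFFECT, SE) for _ in range(N_PAPERS)]

# Opportunistic reporting: run K_SPECS specifications, feature the largest.
featured = [max(random.gauss(TRUE_EFFECT, SE) for _ in range(K_SPECS))
            for _ in range(N_PAPERS)]

print(f"Mean honest estimate:   {statistics.mean(honest):.3f}")
print(f"Mean featured estimate: {statistics.mean(featured):.3f}")
```

On average the honest estimates cluster around the true effect, while the featured estimates are systematically larger, even though every individual specification is unbiased on its own: the bias comes entirely from the selection step.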
Added to this is the problem of "publication bias", i.e. the fact that scientific journals are far more interested in a conclusion such as "X causes Y" than in one such as "X has no effect on Y". On the one hand, publication bias encourages researchers to select their models based on the significance of their results; on the other, looking at scientific publications as a whole can give the illusion that a result is solid, when in fact there are plenty of analyses refuting it that have never been published.

There are, however, initiatives aimed at correcting these problems. To give one example, researchers are now encouraged to publicly register their research protocols before accessing the data. This amounts to committing upstream to adhere strictly to the pre-registered design when analyzing the data, and thus removes the temptation to make last-minute adjustments to the model, the measurements, etc., to "inflate" coefficients. In other words, the aim is to select scientific studies on the quality of their methods, not of their results. These procedures, which are commonplace in medicine, are now spreading to the social sciences, with some journals, for example, accepting submissions only if the study has been pre-registered. The interesting aspect of these initiatives is that they do not rely on the goodwill of researchers but seek to reform research institutions by strengthening incentives for objectivity.
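The distortion that publication bias introduces can likewise be illustrated with a minimal simulation: many hypothetical studies estimate the same small effect, but only the "statistically significant" ones get published. The parameters below are arbitrary illustrative assumptions, not data from any real literature.

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.1   # small true effect of X on Y
SE = 0.2            # standard error of each study's estimate
N_STUDIES = 10_000

# Each "study" draws a noisy but unbiased estimate of the true effect.
estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(N_STUDIES)]

# Journals publish only "significant" results: |estimate / SE| > 1.96.
published = [e for e in estimates if abs(e / SE) > 1.96]

print(f"True effect:                {TRUE_EFFECT}")
print(f"Mean across all studies:    {statistics.mean(estimates):.3f}")
print(f"Mean across published only: {statistics.mean(published):.3f}")
print(f"Share of studies published: {len(published) / N_STUDIES:.1%}")
```

The full set of studies recovers the true effect almost exactly, but the published subset overstates it severalfold, and most null results never appear. Pre-registration attacks this by fixing the analysis before the significance of the result is known.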