On the Role of Values in Science: A Pragmatist Deconstruction of ‘Impartiality’

Joint Session of the Aristotelian Society and Mind Association
University of Kent, July 2020

Abstract:

“Debates on the role of values in science have often revolved around the issue of the impartiality of scientific methods. They have therefore tended to hinge the accountability of scientific research programs on the question of whether scientific methods can provide us with an objective picture of the world. I suggest that this framing of the debate is due to a particular abstract perspective on what kind of practice science is in human life. I suggest an alternative, pragmatist perspective on science which does not need to make positive or negative claims about science’s impartiality in order to make claims about the intrinsic role of values in science. This is because, on such a view, science is an evolved set of habits leading to interventional success in improving lived human environments, which already brings with it intrinsic value-considerations.”

5 responses

  1. Thanks for a very interesting talk! I was wondering whether you think that the kind of pragmatist approach you outline provides us with any guidance on the tricky question of *how* to effectively critique scientific enquiry on value-based grounds? Although you make a persuasive case that value-based critique need not be ruled out in principle, we can still expect there to be other kinds of tension remaining. At one extreme, I’m thinking that science couldn’t be effective at doing what Dewey says it should do if it were not allowed to have *any* autonomy from whatever our most pressing concerns are at the moment of enquiry. It seems quite plausible that we’d miss out on various stable, reliable, and *useful* patterns if we always had our eyes on our values in this particularly extreme way. So I was wondering whether you think that there is anything interesting in general to be said about the different ways in which we might try to make room for autonomy here?
    (My question was prompted by thinking about one possible reading of Carnap,* according to which, *given* a set of concepts, enquiry is, as you put it, ‘mechanical’, but the choice of concepts itself is a largely pragmatic one; it seems as though that sort of view would encourage one very specific division of labour between scientific critique and value-based critique, whilst making some room for each. But it also seems like there are other possible ways of dividing this, and it seemed as though your preferred picture was slightly different.)

    [* I apologise if this is a bad reading of Carnap, it’s not really my area! But it seems like a view that someone might have, at least.]

    • Thanks for your question! You’ve actually brought up two things I write about in the paper this talk is based on, so thank you for giving me a place to elaborate on them. (Please forgive the time it took to reply; I have had a busy week).

      About Carnap, briefly: he and many of his contemporaries did tend to think that, once you settle on the methodological principles, the scientist is basically a detail-oriented human computer (i.e. the process is mechanical). But this was blown out of the water by Kuhn, who showed that, even if you have a set of principles like simplicity, scope, etc., there will be wiggle-room in how you apply them. An example used by Kuhn (and actually by Philipp Frank before him) was the question of what constitutes simplicity. Is it the variety and complexity of the mathematical apparatus required, or the difficulty of the procedural workings-out? Rooney and Longino (much later) separately showed that deciding what even counts as explanatory scope is non-mechanical in a similar way. There is a lot of literature on why such a division of labour between value-based and epistemic considerations in science isn’t clear-cut (Heather Douglas also talks about this, and helpfully summarizes much of the literature).

      This leads to the other thing I wrote about but didn’t mention: Dewey does say in Logic that the methodological principles (including rules of logic) guiding science are importantly different to more common-sense practices and procedures, in the sense that they are more abstract and general. Interestingly, he comes up with a few principles which sound a lot like Kuhn’s—scope, consistency, simplicity, fruitfulness—that generally distinguish scientific methods from other practices. And this is one major reason for science’s strength, success, reliability, etc.—it has abstracted away from the immediate problems of everyday life to focus on knowledge that possesses these qualities, and the methods that lead to its acquisition. The important point, for me, was that this does not imply that science is a purpose-detached, representatively-objective form of inquiry. It only implies that science is more purpose-generalized. Purposes, values, needs, and goals are still operating in the selective development and practice of scientific methods. Dewey invites us to think evolutionarily about this: where would these methods have come from if not to serve an adaptive purpose in negotiating our lived needs? Could logical principles possibly come to us ab extra? He says no. We’ve developed scientific methods of inquiry because they proved more successful in the long term at resolving a wide variety of problems we otherwise seek to resolve with haphazard methods. These problems and the values they involve have been part and parcel of the selection of methods.

      On your point about autonomy, you’re right that it is still an important principle in a reinterpreted sense (though I would hesitate to call it ‘autonomy’ because of its possible implications). We don’t, for example, want certain forms of political censorship, personal persecution of scientists, or uninformed popular input on science’s directions. And you’re also right that I was silent on that point! I do need to give this more thought, but I will try to answer it. The way I’d like to see science structured is with a feedback loop between scientific institutions and the communities that have to live with their effects. Firstly, at the risk of sounding self-interested, I think ethicists or sociologists should be part of any scientific research team (with allocated funding for such a role), to guide scientific research with their expertise in wider social concerns. Secondly, as Philip Kitcher has written about, boards of community members should have structured opportunities for becoming informed about scientific research and for having educated input into the directions scientific research takes, so that social values are aligned with scientific priorities. Now, that might mean adding methodological directives (as Longino has suggested) like accessibility and decentralization. Honestly, I think many scientists are naturally motivated and moved by social concerns, without prompting, but encounter barriers like lack of funding or job insecurity. So those structural barriers also need to go. I don’t have any interest in telling individual scientists which specific research questions to focus on, or how to conduct their investigations (barring unethical experiments, etc.). What I’d like to see are structural changes that encourage feedback and communication between institutions of research and wider society. I think these kinds of systems would still give scientists a lot of autonomy to decide their personal research focus, figure out their own research methods, and honestly interpret their results.

      • Thanks very much for this very interesting and detailed reply — a lot for me to go away and think about!

  2. I’m sympathetic to much of this (and I’m a Peircean pragmatist by inclination).
    I wanted to ask about objectivity and impartiality.
    It strikes me (and I think I got this off Putnam) that we can frame these concepts in a capital-letter way or a lowercase way: the ‘transcript of nature’ idea of Objectivity seems to be out (we can’t step outside the context of inquiry to contemplate a value-free nature to describe). But we can still distinguish between inquiry aimed at truth and what Haack calls ‘sham inquiry’, where we might come up with reliable means of getting people to agree with us without caring about whether what we say is *true*.
    Similarly for impartiality. I can’t be Impartial in the sense of operating in a vacuum of value, but I can make sure that the values I’m operating with are, by and large, values that transcend my own personal situation, and are ones that I might think could be, in principle, reasons that compel other inquirers too.

    Rather than a value-free and perspective-independent God’s-eye view of the world, we need to regulate inquiry by making a (value-based) distinction between appropriate and inappropriate evaluations. Am I engaged in motivated reasoning, or following the evidence where it leads? Am I in a context that makes genuine (rather than sham) inquiry possible, or one that makes it harder than it needs to be? Are the reasons I am investigating these things ones that are defensible given the values that science ought to embody? (Note here that the values running through all of this are themselves both objective and impartial in this lowercase sense.)

    We can still have objectivity and impartiality as regulative ideals (albeit ideals we can’t perfectly satisfy in practice) even if we reject Objectivity and Impartiality as incoherent. And hopefully pragmatism avoids the position Peirce criticised where we dismiss evidence we don’t like: ‘Oh, I could not believe so-and-so, because I should be wretched if I did.’

    Does that sound compatible with your view, or must we reject the objectivity and impartiality of values, on your view?

    • Hi Graeme, thanks for your comments on my talk. It’s always exciting to engage with another pragmatist! (Please forgive my tardiness; my dissertation is being finalized this week).

      To start, yes, I absolutely do think there are standards for distinguishing good from bad reasoning, and reliable from unreliable methods, in science. We can certainly criticize people who disregard or overlook these tried-and-tested ways of coming up with knowledge as doing, well, just shoddy work.

      In that sense, I agree with you about distinguishing between the lowercase guiding principles and the capital-letter concepts of Impartiality and Objectivity. My main goal in this talk was to reframe our understanding of what impartiality is capable of being (i.e. not absolute representative, capital-O objectivity), so as to defuse the persistent objection to criticisms of research programs that runs, “How can the objective representation of reality hurt anyone or be bad in any way?” The move is pretty basic from an internal, pragmatist point of view. It enables us to see the question, “What effects are being enabled by the development of this research project?” as legitimately central to the processes of all science. If the answer is, “Well, maybe some medical uses far in the future, but for the time being just a lot of conservative and fascist politics,” then we can make a strong argument for closing the door on that research project, and we’ve hopefully preempted representative-objectivity-based objections that “the truth must come out!”

      But in ways I was not able to elaborate in the talk, Dewey states in his Logic that the standards we have developed (most pertinently, scientific methodology) are there because they have served us well in a variety of historical circumstances. They got carried over into subsequent inquiries because they tended to get the results we wanted across a wide domain of situations. Although these methods can be very reliable, logic and methodology are still partially constituted by the interactive interests of human agents and their associated needs/values. There isn’t really a viable fact/value distinction here, if we consider the issue in the abstract. So we don’t have to encourage motivated ignorance in a simplistic way; we can just acknowledge that knowledge and facts are fundamentally related to use and practice, and be cautious about the kinds of knowledge we seek out because of their likely practical results.

      For that reason, I think attempting to move away from value-considerations is not the right way to go. This threatens to reproduce the understanding of science as somehow uninformed by values. On the contrary, even when we’re developing theoretical, non-interventional science, we are still seeking to satisfy a need or desire (to come to an interpretive understanding of something that we deem important). The questions then become, “Why do I find this so important?” and “Given the evidence that it harms people in significant ways, are my personal cravings for this information more important than the safety of others?” You can do “bad science” both by employing value-judgements wrongly and by ignoring values altogether. But to do “good science,” you must employ value-judgements appropriately. This might even involve (as Longino has suggested) altering our scientific methods to include new values. For example, as well as “simplicity,” we could have “accessibility” (i.e. capable of being utilized and implemented by various non-centralized communities—a clearly value-based consideration) as a guiding value. On the view explicated here, this would not be fundamentally at odds with doing good science.

      With regard to sham inquiry (as a specific subset of bad science), I think this should be treated like more general instances of manipulation, lying, cheating, etc. The values operating are oriented towards narrow interests (self-interest, usually), and can be criticized in much the same way as selfish interpersonal behaviours, undemocratic political processes, inequitable economic systems, and other value-based practices. For example, lying under oath for personal gain is an instance of perverting justice, but that is not to say that values are not also operating in the general structures of the justice system. I’d want to make a similar point about “sham science” and good science. Values are everywhere; it’s just a matter of considering and employing them appropriately and responsibly.
