Is science really what naturalism says it is?

Federico Laudisa

Department of Human Sciences, University of Milan-Bicocca

Piazza dell’Ateneo Nuovo 1, 20126, Milan, Italy

federico.laudisa@unimib.it

tel. +39-02-64484828




Abstract
In spite of the relevance of a scientific representation of the world for naturalism, it is surprising that philosophy of science is less involved in the debate on naturalism than expected. Had the viewpoint of philosophy of science been duly considered, naturalism could not have overlooked the established lesson, according to which there is no well-defined recipe for what science must or must not be. The present paper addresses some implications of this lesson for (some forms of) naturalism. First I will question the very significance of the distinction 'ontological vs. epistemic naturalism', by defending a conceptual priority of the latter over the former. Then I will focus on the implications of this priority for naturalization strategies, claiming that these strategies underestimate the normativity of scientific theories themselves. Finally, on the basis of the above points, I will have a critical look at an especially ‘aggressive’ variant of naturalism, according to which all epistemic facts are natural facts.





1. Introduction

As a first, general approximation, we can take naturalism to be the philosophical attitude that (i) accepts as genuinely existing entities of the world only the sort of things that scientific theories posit as objects of their inquiry, (ii) holds that the methods of scientific theories are the only methods that yield true knowledge, and (iii) denies any privileged role to philosophical, conceptual analysis in the justification of knowledge itself. The term ‘approximation’ is justified, since it is a commonplace in the philosophical literature on naturalism that there is no single core of assumptions that characterizes it uniquely. In a work devoted to the American origins of naturalism, for instance, Jaegwon Kim refers to the existing “plethora of naturalisms” (Kim 2003, p. 84). More mildly, Daniel Andler remarks that “philosophers have different views about the nature, structure and scope of naturalism, conceived as a very general stance towards human knowledge and the role played by the natural sciences” (Andler 2009, p. 284), whereas in her book Second Philosophy Penelope Maddy strikes an ironic note, observing that “the term ‘naturalism’ has acquired so many associations over the years that using it tends to invite indignant responses of the form ‘but that can’t be naturalism! Naturalism has to be like this!’ ” (Maddy 2007, p. 1).

Be that as it may, it can hardly be disputed that naturalism has been the Zeitgeist of analytic philosophy since the second half of the twentieth century. Dissenters are not totally absent (see Williamson 2011 for a notable example), but again Jaegwon Kim was simply describing an actual state of affairs when he wrote that “if current analytic philosophy can be said to have a philosophical ideology, it is, unquestionably, naturalism. Philosophical naturalism has guided and constrained analytic philosophy as its reigning creed for much of the twentieth century” (Kim 2003, p. 84).

But even if we set aside the extent of the divergences between the worshippers of naturalism and its enemies, it is useful to point out a factor that turns out to be constitutive, although not always easy to characterize precisely in every domain: naturalism essentially relies on a model of knowledge that derives directly from science and, more generally, on the role of paradigm of knowledge that science has played over the last three centuries. The central position of a certain image of scientific rationality in all variants of naturalism thus calls for special attention to one circumstance: the scientific revolution introduced into Western culture a sort of new category – that of having an existence according to science – that simply did not exist previously (Stein 1993) and that appears to decisively shape the whole subsequent philosophical investigation concerning ontology, epistemology and their complex relationships.

Given the extent to which naturalism relies on a scientific representation of the world, however, it is surprising that the perspective of the philosophy of science is much less involved in the analysis of the naturalistic outlook than one would expect, outweighed as it is by the perspectives, say, of epistemology or philosophy of mind. Had the viewpoint of philosophy of science been taken more seriously into account when discussing the philosophical foundations of naturalism, a basic fact could not have been overlooked: namely, that in spite of naturalism’s reliance on a scientifically oriented paradigm of knowledge, one of the few established lessons of twentieth-century philosophy of science is that the question «What is science, exactly?» is far from settled in abstract and rigorous terms. There seems to be no set of necessary and sufficient conditions that determines the boundaries of a scientific theory, as opposed to a non-scientific one, and this appears to have relevant implications for (scientific) naturalism, implications that the debate on naturalism does not always seem to take seriously.

In her above-mentioned book Second Philosophy, for instance, Maddy is among the few who recognize this circumstance as a potential difficulty that naturalism might have to face: Maddy aims to pursue a new project that is still naturalistic – a project she qualifies as ‘Second Philosophy’ – but since “there is no hard and fast specification of what ‘science’ must be, […] there can be no straightforward definition of Second Philosophy along the lines ‘trust only the methods of science’ ” (Maddy 2007, p. 1). The particular implications Maddy draws from this recognition, however, are controversial. Instead of developing an alternative set of arguments in the ordinary way, Maddy chooses to introduce a ‘character’, whom she calls ‘the Second Philosopher’, and to defend her philosophical claims by describing the actions – so to say – of this character. With particular reference to our main problem, Maddy’s Second Philosopher


uses what we typically describe with our rough and ready terms ‘scientific methods’, but again without any definitive way of characterizing exactly what that term entails. She simply begins from commonsense perception and proceeds from there to systematic observation, active experimentation, theory formation and testing, working all the while to assess, correct and improve her methods as she goes. (Maddy 2007, p. 2)
Granted that there is no formal recipe for what science must be, I doubt that a fruitful way to proceed from there is to melt everything – from perception all the way to theory construction and testing – together in a philosophical fiction, on the assumption that, in the sort of post-modern epistemology of our age, all we can do is observe Second Philosophers in action!

In the present paper, I will follow a different route. Trying to address directly some of the implications of the [no-recipe-for-what-science-must-be] result, I will start in section 2 by questioning the very significance of the distinction between ontological and epistemic naturalism. I will claim that, due to a conceptual priority of the latter over the former, there can in fact be only an epistemic form of naturalism, a conclusion that strengthens the idea that in order to characterize what naturalism is one must be clear about what the nature of scientific theories is. This point leads to section 3, where I will focus on the role of scientific theories in what looks like a typical move of scientific naturalism, namely the so-called naturalization strategy. As a matter of fact, a large part of present philosophical frameworks inspired by scientific naturalism implicitly assumes that when, in naturalizing strategies, we move from a notion-to-be-naturalized toward science, there is a corresponding decrease of normativity. This process is taken to be exactly one of the most desirable and sought-for aims of the strategies themselves: if the notion X to be naturalized is a highly normative one – hence a notion that for this reason might appear at first sight hard to integrate into a scientific view of the world – the naturalization treatment is often taken to be a sort of ‘de-normativization’ process. Part and parcel of my analysis will be to argue, on the contrary, that real scientific theories are far more normative than ordinary scientific naturalism is ready to accept, a circumstance that at a minimum is bound to force most naturalization strategies to re-define their significance. The import of (certain aspects of) this sort of normativity from the viewpoint of the philosophy of science will be addressed in section 4, including a critical look at substantive naturalism – an especially ‘aggressive’ variant of naturalism according to which all epistemic facts are natural facts. Finally, in the last section, I will draw some general conclusions.



2. On the mutual independence of ontological and epistemic naturalism
In the debates on naturalism, there is a customary distinction between ontological naturalism – concerning what there is – and epistemic naturalism – concerning how we are supposed to know what there is (for the distinction see e.g. De Caro, Macarthur 2004, pp. 3-6). Ontological naturalism identifies nature as self-sufficient and as identical to the totality of reality. Even before the well-known Sellarsian adaptation of Protagoras’ fragment – “in the dimension of describing and explaining the world, science is the measure of all things, of what is that it is, and of what is not that it is not” (Sellars 1963, p. 173) – Ernest Nagel had already characterized ontological naturalism in an interesting way:
In my conception of it, at any rate, naturalism embraces a generalized account of the cosmic scheme and of man's place in it, as well as a logic of inquiry [...] Two theses seem to me central to naturalism as I conceive it. The first is the existential and causal primacy of organized matter in the executive order of nature. This is the assumption that the occurrence of events, qualities and processes, and the characteristic behaviors of various individuals, are contingent on the organization of spatio-temporally located bodies, whose internal structures and external relations determine and limit the appearance and disappearance of everything that happens. That this is so, is one of the best-tested conclusions of experience. […] The second major contention of naturalism is that the manifest plurality and variety of things, of their qualities and their functions, are an irreducible feature of the cosmos, not a deceptive appearance cloaking some more homogeneous "ultimate reality" or transempirical substance, and that the sequential orders in which events occur or the manifold relations of dependence in which things exist are contingent connections, not the embodiments of a fixed and unified pattern of logically necessary links. (Nagel 1956, pp. 8-9, italics in original; similar statements can be found for instance in Armstrong 1981, p. 149, Armstrong 1983, p. 82, and Kim 2003, p. 90)
Against the background of modern science, formulations like these sound intuitive at first sight. There is a highly non-trivial problem underlying them, however. This problem is likely to affect any formulation of ontological naturalism, since it has to do with the largely presupposed view according to which the task of ontological naturalism is to flesh out the metaphysical implications of scientific theories. This means that, on this view, it is a scientific theory that leads naturalism to qualify the existence of objects. But if it is a theory that provides the structure through which we are supposed to access reality, it follows that the ontological commitments are mediated by the epistemic requirements of the theory itself, those requirements through which we are naturalistically entitled to say that there are really existing objects and structures. The objects of the theory – what the theory is about – are to a large extent constructs, let us call them C, characterized by a number of abstract conditions that the theory is supposed to assume in order to be truly a theory of the objects C. This feature largely goes along with the semantic view of theories typical of post-positivistic philosophy of science (van Fraassen 1980, Suppe 1989), and with what has been called a contextual theory of meaning for scientific terms: the way in which theoretical terms refer – even when they are assumed to genuinely refer – depends, often holistically, on the global structure of the theory itself (Holger 2013). Hence, if the ontological characterization of portions of natural reality essentially depends on epistemic constraints, what is known as ontological naturalism is likely to become ‘soluble’ into its epistemic counterpart: moreover, these epistemic constraints are not uniform, since different theories structure their relevant portions of reality according to possibly different standards.

Consider in this direction, for instance, a passage from the Nagel text quoted above, in which the author introduces the assumption that “the occurrence of events, qualities and processes, and the characteristic behaviors of various individuals, are contingent on the organization of spatio-temporally located bodies, whose internal structures and external relations determine and limit the appearance and disappearance of everything that happens”. The spatio-temporal constraint is again, at first sight, entirely plausible if we speak of natural reality, but it must also be stressed that in order to make sense it requires a precise and well-formulated theory of space-time, since – as Nagel claims – it is the set of structures and relations dictated by this theory that determines “the appearance and disappearance of everything that happens”.

According to another eminent naturalist philosopher, Hilary Kornblith, the task of a naturalistic metaphysics is “simply to draw out the metaphysical implications of contemporary science […] A metaphysics which goes beyond the commitments of science is simply unsupported by the best available evidence” (Kornblith 1994, p. 40). The wording of this formulation is especially apt to support the claim that the epistemic strand is in fact the only non-derivative strand of naturalism. For if scientific theories fix the commitments that a naturalistic metaphysics – whatever it might be – cannot afford to transcend, this means that it is the wide class of epistemic structures that has priority over the ‘world’ of ontological structures. This is further strengthened by the reference to the ‘available evidence’: under the hypothesis that metaphysics needs to be supported by scientific theories in order to be acceptable, the role that evidence might play in this support is highly theory-dependent, not to speak of the highly theory-dependent character of the very notion of evidence itself (a further lesson we have learned from the post-positivistic philosophy of science). In a vein similar to Kornblith, David Papineau claims that “the driving motivation for ontological naturalism is the need to explain how different kinds of things can make a causal difference to the spatiotemporal world” (Papineau 2007, emphasis added). That is, Papineau stresses that what ontological naturalism is about (the ‘content of reality’, as Papineau puts it) must be formulated in terms of what makes a causal difference to the spatiotemporal world. The condition of ‘making a causal difference in spacetime’ thus prescribes the kind of property that an x must exhibit in order for x to be really an object of the natural world: but this is a causal requirement, which implies that (i) it must presuppose some theory of causation in the background, and (ii) it must take into account the circumstance that different theories may have wildly different causal requirements. Finally, in a recent paper devoted to defending the compatibility of a certain form of naturalism with Husserlian phenomenology, Ramstead characterizes ontological naturalism as “the position that all things and their properties are natural things and properties, or supervene on natural things and properties” (Ramstead 2014). Once again, however, this metaphysically-tinted formulation hides a dependence on epistemic strictures, since it is a scientific theory – and not Nature – that tells us what is ‘natural’: as Ramstead himself claims two lines earlier, “natural stuff is the kind of stuff postulated by the ontologies of the natural sciences”, and it is the relevant theory that decides what belongs to such an ontology and what does not.1

It should be clear that the claim that any ontological naturalism is bound to be epistemic naturalism in disguise has no anti-naturalistic tone per se. Nevertheless, one might hold that for other reasons a place should remain in logical space for ontological naturalism, and might try to resist my above claim of the solubility of ontology into epistemology. How? Perhaps by assuming that – in deciding what is an object for the natural sciences – there can be a sort of ‘pre-comprehension’ of natural reality, something that tends to include certain items and to exclude others: this pre-comprehension would concern items like matter, space, time, causation and the like, and would be conceived of as intrinsically metaphysical in character, independent of the specific scientific formulations of these notions. In the words of Horgan and Timmons:

We take the naturalist outlook in philosophy to be at bottom a metaphysical view about the nature of what exists. The vague, pre-theoretic idea the philosophical naturalist attempts to articulate and defend is that everything – including any particulars, events, facts, properties, etc. – is part of the natural physical world that science investigates (Horgan, Timmons 1993, p. 182, emphasis added)
Even if we set aside the fact that it is the wide class of contemporary physical theories that tells us what ‘the natural physical world’ is, how could this approach still qualify as naturalistic? It is far from clear, since in a naturalistic framework a metaphysical hypothesis is hardly meaningful except through the mediation of a scientific theory (MacLaurin, Dyke 2012). Therefore the following dilemma seems inescapable: either we assume metaphysical hypotheses unrelated to science, with the consequence that we are likely to transcend the ordinary boundaries of naturalism, or we accept that the metaphysical hypotheses are in effect constrained by what scientific theories tell us concerning the items involved in those hypotheses, with the consequence that there is no genuinely ‘ontological’ naturalism but only an epistemic one.

3. The role of scientific theories in the naturalization strategy
Whatever the plurality of naturalisms, there is a typical move in the scientific brand of naturalism: the so-called naturalization strategy. Just as with the very definition of naturalism itself, we can interpret a naturalization strategy in several different ways, according to the different tasks such a strategy is supposed to perform, and also according to whether we think that such a strategy is advisable or not! Once again, however, a common feature can be discerned, all this variety notwithstanding: a naturalization strategy works as a sort of decrease-of-complexity tool, namely a tool adopted with respect to a certain philosophical notion or issue A, when A is assumed to be ‘intractable’ to a serious extent, that is, too dependent on subjective, contextual, normative factors and the like.

Although expressed in admittedly vague terms, this formulation of (the core of) a possible naturalization strategy is in line with ordinary descriptions of the effort in which a naturalistic attitude is supposed to engage in this or that area of investigation. In a review paper on the project of naturalizing semantics, for instance, Barry Loewer describes the crucial issue under discussion in these terms:


The semantic properties of mental states are what makes them intentional states. Thus the intentional content of e.g. the thought that the cat is on the mat is the truth conditions of the thought. The topic of this paper is the question: In virtue of what do intentional mental states/events possess their semantic properties? For example, what makes it the case that a particular thought is about the cat and has the truth conditions that the cat is crying? The answer cannot be the same as for natural language expressions since the conventions that ground the latter's semantic properties are explained in terms of the semantic properties of mental states. If there is an answer, that is, if semantic properties are real (and really instantiated) and are not fundamental, then it appears that they must be instantiated in virtue of the instantiation of certain non-semantic properties. (Loewer 1997, p. 108)

Still in Loewer’s words, the project of the philosophers he calls “Semantic Naturalizers” is to address the above issue in a naturalistic vein: “Semantic Naturalism's central contentions are a) that semantic properties, laws, causal relations involving them obtain in virtue of the obtaining of facts constituted entirely by naturalistic properties etc. and b) that semantic properties are kinds of the sort suitable for investigation by the methods of the natural sciences.”

By attempting to isolate the ‘naturalistic properties’ that should ground semantic properties, the naturalization strategy for semantics displays an element that seems common to most variants of naturalism and shows its directly scientific inspiration: it is essentially a simplifying strategy. That is, what a naturalization strategy usually tries or hopes to do, explicitly or implicitly, is to select a specific scientific domain that is supposed to play the role of the ‘simplified’ domain in terms of which the notion A might be fruitfully reformulated: after this naturalization treatment, clearly, we expect the complexity of A to have been drastically reduced.

Although at this stage I am not in a position to make this notion of complexity very rigorous, let us suppose for the moment that this simplification attempt suitably represents one of the most pressing ambitions of naturalism. In order for this sort of naturalization strategy to make sense, it seems that at least two general conditions should be satisfied. If ASD-Nat denotes the notion A after its naturalization in terms of the scientific domain SD, the conditions may be sketched in the following terms:

(1) Compl (ASD-Nat) < Compl (A)

namely, the complexity of the SD-naturalized A must be strictly less than the complexity of the original, non-naturalized A;

(2) Compl (ASD-Nat) ≈ Compl (SD)

namely, the complexity of the SD-naturalized A must be ‘comparable’ to the complexity of the notions that SD itself is able to control.
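
As a purely hypothetical illustration (the figures below are ordinal placeholders only, and presuppose no actual numerical measure of complexity): suppose Compl (A) = 10 and Compl (SD) = 3. If the naturalization yields Compl (ASD-Nat) = 4, both (1) and (2) are satisfied and the translation delivers a genuine gain in tractability. If instead Compl (ASD-Nat) = 9, condition (1) holds only marginally while (2) fails, since the naturalized notion still exceeds what SD itself is able to control. And if Compl (ASD-Nat) = 11, condition (1) itself fails and the ‘naturalization’ has added rather than reduced complexity.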

Clearly, under the hypothesis that we are able to master the complexity of the central notions of SD, if the SD-naturalization of A failed to satisfy (1) and (2) it would not result in a significant gain in making the notion A ‘more tractable’. Should (1) not hold, there would be no point in naturalizing A in the first place; if, on the other hand, we could reduce the complexity of A but only to an ‘insufficient’ extent – namely, should (2) not hold – the naturalization would be unsatisfactory anyway, since our original point was to ‘translate’ A in terms of notions that the scientific domain SD does master. As far as philosophy of science is concerned, it is this complexity-reducing role that – in my view – has been an actively operating factor in the attractiveness of naturalism. The complexity-reducing role leads one to hope that, for instance, with respect to issues like the mind-matter relation, intentionality, meaning and many others, it might be science that is called on to do the ‘dirty job’: deciding once and for all whether there might be an ultimate answer in scientific terms to thorny issues like <Is there anything to the mind but matter?>, <Can a computer really grasp meanings?>, <Can an artificial cognitive system extract semantics from syntax?>, and so on.

This emphasis suitably resonates with how naturalization is usually conceived. In his above-mentioned work, Ramstead characterizes naturalization as a true paradigm shift:


[…] to naturalize a thing entails that one mobilizes only those concepts that pertain to the ontologies of the natural sciences to explain a given phenomenon, and to abandon those concepts that were previously used to account for it which are not part of the lexicon of the natural sciences […] I propose, then, to read the expression ‘to naturalize a thing or property’ throughout as meaning ‘to give an explanatory account of a thing that is coherent with the ontologies of the natural sciences’. It is thus a manner of speaking about a change in our conceptual or semantic network with regard to a thing or property that was heretofore not conceptualized as a natural one. (Ramstead 2014)
Should the attempt to give ‘an explanatory account of a thing that is coherent with the ontologies of the natural sciences’ fail to reduce the complexity of such an account, or even keep it essentially the same (not to speak of increasing it), I strongly doubt that the naturalization strategies would have deserved the appeal they have in fact enjoyed (see also Wright 2007, pp. 585 ff).

If for the sake of argument we accept the above characterization of naturalization strategies, we must note a further element that their complexity-reducing procedures attempt to reduce: the amount of normativity inherent in the not-yet-naturalized notion A. In principle, this attempt looks perfectly reasonable within the overall strategy, since the difficulty of suitably integrating normative notions into a scientific view of the world is in itself an addition of complexity that a naturalistic outlook might wish to reduce, eliminate or confine. Although reasonable, however, this ‘de-normativization’ carries a problem with it, whose import is once again more apparent if we agree to consider it from the viewpoint of the philosophy of science. For the naturalization strategy, with all its promises of simplifying matters, is far too optimistic about the possibility that when we proceed from the not-yet-naturalized A to the SD-naturalized A (what we indicated above by ASD-Nat, for some scientific domain SD), we actually follow a path that starts from a highly normative domain and ends in a significantly less normative domain, or ideally a non-normative one: in other words, the scientific domain SD might be much more controversial than expected as to what it takes to explain satisfactorily the naturalized notion ASD-Nat. The problem is simply that the expression ‘having a scientific explanation’ is strongly theory-dependent and far from having a unique meaning. Different theories have different ways of taking the phenomena that pertain to them to be ‘explained’ within their frameworks: sometimes a theory may claim to have ‘explained’ a class of phenomena just because it possesses a predictively effective model or because a correlation with more familiar phenomena has been established, whereas other theories will require much more. In his Beyond Reduction (2007) Steven Horst voices a similar worry with regard to naturalization strategies in philosophy of mind. After defining a general ‘naturalistic schema’ about a domain D as “the view that all features of D must be accommodated within the framework of nature as it is understood by the natural sciences” (Horst 2007, p. 13), he aptly remarks that


even once we have pinned down what we mean by ‘accommodating’ the mind within nature, the expression ‘the framework of nature as it is understood by the natural sciences’ is still rather vague. Just what our naturalistic schema means will depend heavily upon what one considers to be central to how the natural sciences operate, and how they represent the natural world. (Horst 2007, pp. 14-15, emphasis added)2
Moreover, in addition to the fact that the scientific domain that is supposed to host the post-treatment notion ASD-Nat is significantly more normative than many die-hard naturalists hope, there is also the problem of how the naturalization strategy itself is supposed to work. No naturalization strategy is pursued in a vacuum: it proceeds according to a more or less well-defined set of criteria that shape the particular ways in which the notion A gets translated into the notion ASD-Nat. These criteria turn out to be to a large extent normative, and hardly naturalizable themselves.

4. Normativity, again
The tendency to underestimate the role of normativity within scientific theories lies at the very origin of twentieth-century naturalism vis-à-vis the philosophy of science, namely the Quinean naturalization of the theory of scientific knowledge. In a well-known passage of “Epistemology Naturalized”, Quine writes:
Epistemology, or something like it, simply falls into place as a chapter of psychology and hence of natural science. It studies a natural phenomenon, viz., a physical human subject. This human subject is accorded a certain experimentally controlled input — certain patterns of irradiation in assorted frequencies, for instance — and in the fullness of time the subject delivers as output a description of the three-dimensional external world and its history. The relation between the meager input and the torrential output is a relation that we are prompted to study for somewhat the same reasons that always prompted epistemology: namely, in order to see how evidence relates to theory, and in what ways one's theory of nature transcends any available evidence. (Quine 1969, pp. 82–3)
In the Quinean reflection on the nature of scientific theories, the meager input–torrential output relation is exactly the relation in which, malgré soi, under-determination becomes apparent, since in its being ‘torrential’ the output far outweighs what the ‘meager’ input strictly allows. It is exactly under-determination, however, that at the same time makes room for normativity even within the Quinean framework, which is the naturalistic project par excellence. In detaching himself from the Carnapian project and its view of the role of observation in science, Quine famously wrote that “the most modest of generalizations about observable traits will cover more cases than its utterer can have had occasion actually to observe. The hopelessness of grounding natural science upon immediate experience in a firmly logical way was acknowledged” (Quine 1969, p. 74).

There is a tension here. In the passage above on the meager input–torrential output relation, Quine seems to assume without further argument that scientific evidence is completely fixed – in a sort of functional way – by the several causal relations between perceptual input and theoretical output. But precisely because of the pervasive phenomenon of under-determination so vividly represented by Quine himself, the reason why a class of perceptual inputs works as evidence for a certain theory clearly exceeds the mere causal relations that the perceptual inputs may establish with bits of the theory, and depends also on the normative decisions taken by the subjects involved in evaluating whether that class can really be treated as evidence or not (Stroud 1984, BonJour 1998).

Quine did not altogether ignore the issue of normativity within the framework of his naturalistic epistemology, but proposed a sort of ‘weak’ version of it, according to which normativity is accounted for in terms of predictive efficacy.
Naturalization of epistemology does not jettison the normative and settle for the indiscriminate description of ongoing procedures. For me normative epistemology is a branch of engineering. It is the technology of truth-seeking, or, in a more cautiously epistemological term, prediction. […] The normative here, as elsewhere in engineering, becomes descriptive when the terminal parameter is expressed. (Quine 1986, pp. 664-5)
In Quinean terms, the normative side of epistemology lies in the selection of the tools that should turn out to be most effective in improving the predictive performance of science, with the aim of optimizing the above-mentioned meager input–torrential output relation. Although weak, this formulation is no less in need of explanation: what would be the normative source of this ‘engineering of knowledge’? Even if we admit that ‘normativity’ here reduces to a sort of checking procedure for the plausibility of certain predictive strategies, who or what legitimates that procedure? (Kornblith 2002, pp. 137-139; for a deeper analysis of the Quinean notion of normativity, cf. Houkes 2002).

But the other celebrated critic of logical-empiricist views, Thomas S. Kuhn, has also been included – in a somewhat surprising way – among the reference sources for a naturalistic approach to scientific knowledge. As is well known, in The Structure of Scientific Revolutions Kuhn conceives scientific theories not as observationally interpreted formal systems but rather as structures whose meaning is acquired within paradigms, conceptual frames of reference in which the notions, techniques and values of a scientific community turn out to make sense: it is with respect to a given paradigm that scientists construct their experience and their observational evidence. As a matter of fact, according to Kuhn the dynamics of scientific theories is the dynamics of paradigms, a process made of long and rather steady periods of normal work followed by short, revolutionary turning points that globally re-define the very nature and features of the new paradigmatic structure.


Each [scientific revolution] necessitated the community’s rejection of one time-honored scientific theory in favor of another incompatible with it. Each produced a consequent shift in the problems available for scientific scrutiny and in the standards by which the profession determined what should count as an admissible problem or as a legitimate problem-solution. And each transformed the scientific imagination in ways that we shall ultimately need to describe as a transformation of the world within which scientific work was done.

(Kuhn 1996, pp. ??)


According to this picture, the dynamics of science depends in a non-negligible way not only on intrinsic, formal aspects of the theories – as, to a large extent, in logical empiricism – but also and foremost on a possibly large set of normative criteria, criteria that contribute to constructing new theories or to selecting among existing candidate theories and that cannot but operate as genuine epistemic constraints.
