Program

September 22nd
Room: GA03/42

Time  Speaker and Topic
09:30 – 10:00  Coffee Break
10:00 – 11:15  Matteo Colombo (Tilburg): Bayesian Cognitive Science, Unification, and Explanation
11:15 – 12:30  Ulrike Hahn (Birkbeck): The Bayesian Boom: Good Thing or Bad?
12:30 – 14:00  Lunch
14:00 – 15:15  Jan Sprenger (Tilburg): A Bayesian No Miracles Argument
15:15 – 16:30  Igor Douven (Groningen): Explanation and Inference
16:30 – 16:45  Coffee Break
16:45 – 18:00  Moritz Schulz (Tübingen): Decisions and Higher-Order Knowledge
18:00 – 19:15  Peter Brössel (Bochum): Rethinking Bayesian Confirmation Theory
20:15  Dinner

September 23rd
Room: GA03/42

Time  Speaker and Topic
09:30 – 10:00  Coffee Break
10:00 – 11:15  Nick Treanor (Edinburgh): The Proper Work of the Intellect
11:15 – 12:30  Anna-Maria A. Eder (Duisburg-Essen): Rationality, Normativity, and Implication
12:30 – 14:00  Lunch
14:00 – 15:15  Martin Smith (Glasgow): When Does Evidence Suffice for Conviction?
15:15 – 16:30  Christian Feldbacher (Düsseldorf): On the Sociality of Epistemic Norms
16:30 – 16:45  Coffee Break
16:45 – 18:00  Dunja Šešelja (Bochum) and Christian Strasser (Bochum): The Normative Role of Evaluative Stances in Scientific Disagreements
19:00  Dinner

ABSTRACTS

Peter Brössel: “Rethinking Bayesian Confirmation Theory”

Standard Bayesian confirmation theory holds that an agent’s epistemic state is best represented by her degree of belief function alone, and that confirmation is defined in terms of these degrees of belief. In particular, according to the standard theory, the evidence confirms a hypothesis for the agent if and only if learning the evidence increases the agent’s degree of belief in that hypothesis.
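In the standard notation (a gloss added for illustration, not part of the abstract): where P is the agent’s credence function, evidence E confirms hypothesis H just in case P(H | E) > P(H), disconfirms it just in case P(H | E) < P(H), and is evidentially neutral otherwise.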
The first part of the paper argues that the standard Bayesian conception of confirmation cannot be used for its primary intended purpose, namely, saying how worthy of belief certain hypotheses are and thereby justifying the agent’s degrees of belief. In addition, standard Bayesian confirmation theorists have problems incorporating other possible purposes, such as deciding what experiments to conduct. We conclude: we should rethink Bayesian confirmation theory!
The second part of the presentation, the constructive part, introduces a new conception of Bayesian confirmation theory that can be used not only for its primary intended purpose, but also for purposes such as deciding what experiments to conduct. The new Bayesian conception of confirmation suggests that an agent’s epistemic state is represented not just by her degrees of belief, but by her reasoning or confirmation commitments, in terms of which confirmation is defined, and by her evidence. Finally, I unfold the implications this new conception has for Bayesian epistemology and cognitive science.

Matteo Colombo: “Bayesian Cognitive Science, Unification, and Explanation” (Joint work with Stephan Hartmann)

It is often claimed that the greatest value of the Bayesian framework in cognitive science consists in its unifying power. Several Bayesian cognitive scientists assume that unification is obviously linked to explanatory power. But this link is not obvious, as unification in science is a heterogeneous notion, which may have little to do with explanation. While a crucial feature of most adequate explanations in cognitive science is that they reveal aspects of the causal mechanism that produces the phenomenon to be explained, the kind of unification afforded by the Bayesian framework to cognitive science does not necessarily reveal aspects of a mechanism. Bayesian unification, nonetheless, can place fruitful constraints on causal-mechanical explanation.

Igor Douven: “Explanation and Inference”

In my talk, I present two new results regarding so-called Inference to the Best Explanation (IBE). The first result comes from a comparison of IBE with Bayes’ rule in a social setting, specifically, in the context of a variant of the Hegselmann-Krause model in which agents not only update their belief states on the basis of evidence they receive directly from the world but also take into account the belief states of some of their fellow agents. So far, IBE and Bayes’ rule have been studied only in an individualistic setting, and it is known that, in such a setting, both have their strengths as well as their weaknesses. I will show that in a social setting, IBE outperforms Bayes’ rule according to every desirable criterion. The second result concerns the descriptive adequacy of IBE. Experimental results are presented which show that people indeed attend to explanatory considerations in updating their degrees of belief.
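For orientation (a gloss on the underlying model, not part of the abstract): in the basic Hegselmann-Krause model each agent i holds an opinion x_i in [0, 1] and repeatedly replaces it with the average of the opinions of all agents j with |x_j − x_i| ≤ ε, for a fixed confidence threshold ε; the variant at issue here additionally lets agents update on evidence received directly from the world.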

Anna-Maria A. Eder: “Rational Belief, Normativity, and Implication”

The closure principle of rational belief states, roughly, that rational belief is closed under implication (of standard logics). Appealing as the principle seems at first sight, it faces some serious problems. Such problems have led Harman to conclude that “there is no clearly significant way in which logic is specially relevant for reasoning” (1986: 20). Given such problems, other philosophers have seemingly concluded that standard logics are mistaken. I shall argue that there is another way out.
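Stated schematically (one common formulation, added here for illustration): if it is rational for an agent to believe each of p1, …, pn, and p1, …, pn jointly entail q, then it is rational for the agent to believe q.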
Since objections to the closure principle are sensitive to its precise normative specification, I shall first introduce the specifications that receive the most attention in the literature. They are stated in terms of beliefs one ought to have or in terms of beliefs one is permitted to have. After presenting the serious problems that these specifications face, I suggest two new specifications. As it turns out, the specifications that I suggest do not face the mentioned problems. Finally, I argue for one of these specifications, on the basis that it fits very well with an evidentialist characterization of rational belief.

Christian Feldbacher: “On the Sociality of Epistemic Norms”

Epistemological investigations of belief in philosophy differ from such investigations in psychology. While psychologists focus on the question of how real agents actually form beliefs and gather knowledge, philosophers investigate normative questions about more or less idealized agents. The problem of how to interpret this epistemic normativity has led many authors to an instrumentalist point of view, claiming, as e.g. Quine (1986) did, that “normative epistemology is a branch of engineering. It is the technology of truth-seeking […] it is a matter of efficacy for an ulterior end, truth […]. The normative here, as elsewhere in engineering, becomes descriptive when the terminal parameter is expressed”. Analogous to a deontic means-end principle, which is often used to express normative instrumentalism in ethics, one may formulate an epistemic means-end principle for norms of knowledge and belief of the following form: if M is an optimal means of achieving epistemic goal G, then, since G itself is epistemically obligatory or rationally accepted, M is also rationally acceptable. This principle has at least two components that require further clarification: (i) the concept of an epistemic goal and (ii) the concept of an optimal means of achieving such a goal. In this paper we focus on a clarification of the second concept and show how optimality results from the theory of strategy selection allow for spelling out the normative part of rationality by means of social reliabilism.

Ulrike Hahn: “The Bayesian Boom: Good Thing or Bad?”

Bayesian models of cognition have been enjoying a huge boom within cognitive science, a boom that has sparked a series of high-profile critiques of Bayesian models. These critiques question the contribution of rational, normative considerations to the study of cognition. However, closer consideration of actual examples of Bayesian treatments of different cognitive phenomena allows one to defuse these critiques, showing that they cannot be sustained across the diversity of applications of the Bayesian framework for cognitive modelling. At the same time, the examples are used to demonstrate the different ways in which consideration of rationality uniquely benefits both theory and practice in the study of cognition.

Moritz Schulz: “Decisions and Higher-Order Knowledge”

One issue in Bayesian decision theory concerns assignments of probability 1, because they license betting our lives for a penny. There are various possible reactions to this problem, ranging from a ban on probability 1 to contextualist or sensitive invariantist solutions (Greco 2012). In response, Williamson (2005a, 2005b) has suggested that high-stakes decisions might require higher levels of knowledge. In this paper, I discuss how one might turn Williamson’s idea into a systematic theory.
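To see the worry (an illustrative calculation, not part of the abstract): if an agent assigns probability 1 to E, then a bet that gains her a penny if E is true and costs her life if E is false has expected utility 1 · u(penny) + 0 · u(losing one’s life) = u(penny) > 0, so no matter how great the finite loss, maximizing expected utility licenses the bet.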

Dunja Šešelja and Christian Strasser: “The Normative Role of Evaluative Stances in Scientific Disagreements”

The history of science is a rich source of deep disagreements. Many philosophers have emphasized that disagreements and a proliferation of competing research programs are vital for scientific progress. In contrast, we expect scientists to be competent, unbiased and objective, and hence to draw the same conclusions if they use the same body of evidence. This reveals a tension between these historical facts and philosophical views on the one hand, and the expectations we have of scientists on the other.

In this talk we will explore this tension by distinguishing between some of the central epistemic and heuristic norms that underlie the evaluative stances scientists take towards their opponents’ inquiries. Scientists form evaluative stances towards scientific hypotheses, theories, research programs, etc. by judging them to be acceptable, worthy of pursuit, worthy of consideration, etc. As such, evaluative stances include epistemic and heuristic, as well as non-cognitive concerns (such as social, ethical and political values). We will argue that, depending on the available object-level and higher-order evidence, a disagreement on a certain evaluative stance may very well be compatible with an agreement on a different evaluative stance towards the same inquiry. For example, a disagreement on the acceptability of a given theory may be compatible with an agreement that the same theory is worthy of pursuit. Of specific interest for our argument are the stances of pursuit worthiness and epistemic toleration, which are often neglected in scientific debates. We will show that these stances help disagreeing scientists to avoid premature dismissals of possibly fruitful inquiries by their opponents, without giving up their convictions regarding their own inquiries. In this way the two stances allow a disagreement to be restricted to the relevant evaluative stance, without necessarily spreading to other stances. We will illustrate our point with two case studies: the continental drift debate and the research on peptic ulcer disease.

Martin Smith: “When Does Evidence Suffice for Conviction?”

There is something puzzling about statistical evidence. One place this manifests itself is in the law, where courts are reluctant to use evidence of this kind, in spite of the fact that it is quite capable of meeting the standards of proof enshrined in legal doctrine. After surveying some proposed solutions to this problem, I shall outline a somewhat different approach – one that makes use of a notion of normalcy that is distinct from the idea of statistical frequency. The problem is not, however, merely a legal one. Our unwillingness to base beliefs on statistical evidence is by no means limited to the courtroom, and is at odds with almost every general principle that epistemologists have ever proposed as to how we ought to manage our beliefs.
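A standard illustration of the puzzle (not drawn from the abstract): if 990 of 1,000 spectators at an event are known to have gatecrashed, the bare statistic makes it 99% probable that any given spectator gatecrashed, comfortably above most proposed probabilistic thresholds of proof, yet courts are unwilling to convict any particular spectator on that evidence alone.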

Jan Sprenger: “A Bayesian No Miracles Argument”

This paper develops a Bayesian version of the No Miracles Argument (NMA) in the debate about scientific realism. Unlike previous versions of the argument, it incorporates considerations about the stability of scientific theories over the last decades. This strengthens the NMA against the objection that it commits the base rate fallacy (Howson 2000; Magnus and Callender 2004). However, we side with Howson (2013) in holding that the argument is essentially subjective and dependent on context-sensitive premises. In this sense, we can explain the persistent disagreement between realists and anti-realists from a Bayesian point of view.
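For context (a gloss on the base rate objection, not part of the abstract): in Bayesian terms the NMA moves from the high probability of predictive success given a theory’s (approximate) truth to a high probability of truth given success; by Bayes’ theorem that posterior also depends on the prior probability of truth, and neglecting this prior is the alleged base rate fallacy.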

Nick Treanor: “The Proper Work of the Intellect”

Much contemporary epistemology is either based on, or recoils against, a view of the place of truth in epistemology that first found expression in the Nicomachean Ethics. “The virtue of a thing is relative to its proper work … [and] the proper work of the intellect is truth,” Aristotle wrote. Both advocates and opponents of this picture of the roots of epistemic normativity, however, have misunderstood what it is for truth to be the proper work of the intellect. In this paper, I explain where they have gone wrong and offer a new model.