Konrad Urban
Abstract
The paper assumes that conformism is an appropriate response to peer disagreement in idealised cases. It develops a dynamic model that allows ordinary, rough cases to be approximated towards idealised cases (given equal cognitive ability) by expanding the agents' evidential sets through disclosure of evidence. This model is applied to four frequently cited cases that intuitively demand either a conformist or a steadfast response, and it adequately explains why the response in each case should be conformist or steadfast.
The Motivation
There is no universal response to all cases of disagreement. Traditionally, two responses to cases of peer disagreement have been given: conformism and non-conformism. Conformism (also called conciliationism or the equal weight view) holds that peers who disagree about a proposition should lower their confidence in it or suspend judgement (see Feldman 2007; Elga 2007; Christensen 2009). Non-conformism (non-conciliationist or steadfast views) holds that there is no such requirement (see Kelly 2005; Enoch 2010; Foley 2001). Arguably, conformism provides a good account in scenarios with ideal epistemic peers but fails in other cases. This paper will defend conformism in those other cases whenever it is possible to approximate rough peers to ideal peers.
In short, you have an epistemic duty to hold informed beliefs, so you have an epistemic duty to collect evidence. To be able to weigh the evidence provided by peer disagreement, you should understand the disagreement. Only once you understand the disagreement can you react appropriately.
Limitations
In this paper, I shall take a weak understanding of conformism. Conformism requires one to suspend a belief or lower one's confidence in it when faced with peer disagreement. In other words, one cannot rationally hold steadfast to one's belief (or one's confidence in it) when faced with peer disagreement.
The literature varies on the meaning of “epistemic peer”. Generally, epistemic peers have similar cognitive abilities and similar evidence for the relevant belief. Some scholars put strict limits on this, and a distinction between idealised and ordinary cases is often made [1]. In idealised cases, the evidence is identical and cognitive ability is equal [2]. Ordinary cases allow for vague identity and equality. Ideal cases are rare or non-existent (King 2012). A classic case involves two epistemic peers getting different results when trying to split the bill in a restaurant (Christensen 2009). Their relevant evidence (the bill) is the same and their cognitive ability is assumed to be the same, yet they have arrived at two distinct solutions to this simple arithmetic problem. Should they suspend or lower confidence in their belief (conformism), or should they hold onto it (steadfastness)?
In this paper, I assume that the uniqueness thesis is true: given a reliable method of reasoning for all rational doxastic agents, the same set of evidence must produce the same belief (Feldman 2007). It is arguably one of the strongest motivations for conformism [3]. Given the uniqueness thesis, peer disagreement (i.e. a non-unique solution given the same evidential set) is strong evidence that one of the agents has reasoned badly. Since this evidence is symmetric, it counts against both agents' reasoning, so each must suspend the belief or lower their credence in it.
Four Cases
I shall briefly discuss four scenarios that recur in the literature on peer disagreement. I will take the intuitions about the appropriate attitude in each as given. Conformism as it stands fails to generalise to some of these cases:
Restaurant with Mental Maths: Two epistemic peers want to split a restaurant bill. They arrive at different results using mental maths (Christensen 2009).
Restaurant with Calculator: Same as above, but a calculator is used (Christensen 2009).
Horse Race case: We are both at a horse race. I see the horse cross the finish line first; you see it cross second. We both paid equal attention (Elga 2007).
David Lewis case: David Lewis is my epistemic superior (he has better cognitive ability and has more relevant evidence). He believes in real possible worlds and I don’t (Frances 2010).
I have included the David Lewis case in the discussion about peer disagreement because it is an even stronger case than peer disagreement. If a peer's disagreement is evidence of unreliable reasoning (or of incomplete evidence, as could be the case here), then a superior's disagreement is even stronger such evidence.
In the literature, these cases were brought up to argue for either steadfastness or conformism. The Mental Maths case was brought up to argue for conformism: if the results of mental maths differ, then surely at least one agent has made an error, and thus belief should be suspended or credence lowered. The Calculator case was brought up as a counter-example to this, but it remains heavily debated; after all, I can rule out that I am being dishonest about my result, but I cannot rule out that you are, so I should trust myself more (cf. Christensen 2009). The Horse Race case is meant to support the steadfast attitude, on the claim that my sight is more direct evidence than your report of yours. The David Lewis case was brought up to show that extending the conformist stance to all beliefs is too demanding or counter-intuitive: one would have to lower credence or suspend belief about all controversial matters where peers hold incompatible views, e.g. in religion, philosophy or politics. Let us accept that these cases each intuitively support either the steadfast view or the conformist view.
In more recent literature, this opposition has been broken up into several mixed views that incorporate the above-mentioned intuitions into one ruleset or that deny universalisation (Goldman and Blanchard 2016; Matheson 2015). Instead of discussing their failings, I construct an alternative explanatory and normative model. It should fully explain the intuitions behind the above-mentioned cases. Its downside is that, although more unified, it is more demanding on the agents than some of the other solutions.
Towards a Dynamic Model
Some literature suggests that agents ought to fully disclose their evidence (Bergmann 2009). One reason is to evaluate whether they have reasoned from the same set of evidence (i.e. to assess peerage). Another reason is to locate potential errors in the reasoning itself. If the set of evidence can change through full disclosure, then a dynamic model of peer disagreement becomes possible. Of course, in such a case the agents were only rough (not ideal) peers, but they can move towards idealised peerage by sharing their evidence. My dynamic model will hopefully give a procedure for cases of disagreement and tell us whether (and when) conformism should be abandoned.
So far, almost all models of disagreement have been static; that is, they required or allowed for a static equilibrium without changing circumstances. Hopefully, it will become evident that this is why some of them fail to generalise. There have been some attempts at a dynamic model, notably by Douven (2010) [4]. I shall go in a similar direction, but with a stronger emphasis on locating the disagreement within the web of evidence and belief. The model will show when conformism is appropriate, controlling for what counts as peerage.
Disclosure and Entanglement
The concept that beliefs are entangled has been mentioned before [5], but it has not been properly applied to evidence. The literature on peer disagreement has so far assumed that a given set of evidence entails a belief, and has revolved around peerage, reliable entailment, or what counts as belief (Matheson 2015; Goldman and Blanchard 2016). Little attention has been paid to the entanglement of evidence and belief. Drawing a parallel from the holistic underdetermination thesis (Stanford 2013), evidence is not simple (see Moffett 2007 for an attack on conformism in this spirit). It can only point towards a belief if it is interpreted with the use of auxiliary beliefs. For instance, my evidence of the restaurant bill will point to my result only with the use of my auxiliary belief about basic arithmetic, for which I in turn have evidence. Such beliefs are entangled with further beliefs, and so on. In a simple case like the restaurant bill, both agents have reasons to believe that their auxiliary beliefs will be the same. For beliefs that are the result of very complex deliberation, it is unlikely that doxastic agents will share auxiliary beliefs, even in cases of agreement.
If this is indeed how evidence and beliefs work, then disagreement does not challenge a particular belief, e.g. the reality of possible worlds, but a whole web of entangled beliefs. The disagreement itself does not reveal its location in that web. Why does this matter? If two agents work under different auxiliary beliefs, then they are not ideal peers, and their disagreement reveals little about the proposition at hand. For instance, if I believe in solipsism and David Lewis believes that other people exist, then the fact that we disagree about the proposition that real possible worlds exist is no evidence about said proposition. Similarly, suppose we have a different understanding of addition and we disagree about the restaurant bill. If we don't know about each other's auxiliary beliefs, then all the disagreement does is show that our entangled webs of relevant beliefs are non-identical. If we know the auxiliary beliefs are different, then the discovery of disagreement seems to give no reason to modify the disagreed-upon belief or its credence. If anything, it invites an evaluation of the auxiliary beliefs.
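To make the entanglement picture more concrete, here is a minimal illustrative sketch, not part of the model's formal apparatus; the representation, including the names Belief, auxiliaries and shared_auxiliaries, is my own simplification. It represents a belief together with the auxiliary beliefs needed to interpret its evidence, and checks whether two agents' webs rest on the same auxiliaries before treating their disagreement as evidence about the proposition itself.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Belief:
    """A belief entangled with the auxiliary beliefs needed to interpret its evidence."""
    proposition: str
    evidence: List[str] = field(default_factory=list)
    auxiliaries: List["Belief"] = field(default_factory=list)
    fundamental: bool = False  # not based on further evidence or reasoning

def shared_auxiliaries(mine: Belief, yours: Belief) -> bool:
    """Do the two webs rest on the same auxiliary propositions (a rough peerage check)?"""
    return ({a.proposition for a in mine.auxiliaries}
            == {a.proposition for a in yours.auxiliaries})

# Restaurant example: the bill is shared evidence, but the result also depends on an
# auxiliary belief about basic arithmetic.
arithmetic = Belief("basic arithmetic", fundamental=True)
my_split = Belief("each owes 23", evidence=["the bill"], auxiliaries=[arithmetic])
your_split = Belief("each owes 21", evidence=["the bill"], auxiliaries=[arithmetic])

if shared_auxiliaries(my_split, your_split):
    print("Disagreement is evidence that at least one of us has erred.")
else:
    print("Disagreement only shows our webs of belief differ; locate it first.")
```

On this picture, the same surface disagreement carries different evidential weight depending on whether the auxiliary check passes.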
Rounds of Disclosure
We have shown that beliefs are entangled, and that disagreement is a challenge to the web of beliefs, not to the isolated belief that manifested the disagreement. This means that the precise location of the disagreement remains initially unknown. When agents disclose their evidence and their auxiliary beliefs, they see the relevant disagreement travel (the overall disagreement expands, but the relevant disagreement of each round travels). In the aforementioned example, it travels from the disagreement about the restaurant bill split to a disagreement about how arithmetic and addition work. Rounds of disclosing evidence and beliefs should continue, letting the initial disagreement travel and locating new disagreements (in which case the discussion branches off into a new series of rounds). Rounds continue until either agreement is reached or disclosure cannot continue because of fundamental beliefs (beliefs not based on evidence or reasoning, such as intuition-based beliefs). The conditions for agreement are dictated by more general rules of epistemology, not ones specific to social epistemology. Fundamental beliefs can of course be irrational, but, again, that is a matter for other areas of epistemology.
Many rounds can be required, as these webs of beliefs and evidence can be extremely deep. So far, we have held cognitive ability constant and equal for peers. This means that both peers will reach a similar level of depth before entanglement overwhelms their brainpower. When the web is too deep, it becomes easy to make errors, and there will be a point where an error is more likely than a furthering of understanding about the disagreement. Because the agents know this, they will be able to make a common judgement about when to cease disclosing. At that point, they can give up disclosing and hold steadfast to the belief from the most recent round.
The ever-increasing complexity of the system can be demanding in terms of time and brainpower. This means that other considerations will often override the epistemic duty to locate the disagreement at the fundamental level. For instance, perhaps I should not continue rounds of disclosure about a political belief at a dinner party, so as not to bore everyone. Other duties can override epistemic duty.
This model allows for modifying beliefs with each round, not because of conformism as such but because of expanded evidence (e.g. in the form of testimony). This means that the disagreement can travel in an ever-expanding web, but also that new disagreements can be uncovered. There can be disagreement about when to stop disclosing, owing to different beliefs about what counts as a fundamental belief. When one party reasonably claims that it has reached a fundamental belief, the rounds end. If the disagreement about the fundamental belief explains the disagreements from the previous round, then both parties can hold steadfast to their beliefs from the previous round. A crucial part of this model is that agents can often foresee the rounds, especially in simple scenarios.
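As a rough procedural illustration of the rounds just described, the sketch below is my own simplification rather than the paper's official formulation; the dictionary keys "p", "auxiliaries" and "fundamental", and the depth limit, are hypothetical. It lets the relevant disagreement travel down the two agents' webs until it is resolved, located in a fundamental belief, or becomes too deep for the agents' (equal) cognitive abilities.

```python
def rounds_of_disclosure(mine, yours, max_depth=3):
    """Let the relevant disagreement travel through two webs of belief.

    Each belief is a dict: {"p": str, "auxiliaries": [belief, ...], "fundamental": bool}.
    Returns the recommended attitude towards the originally disputed belief.
    """
    depth = 0
    while mine["p"] != yours["p"]:
        if mine.get("fundamental") or yours.get("fundamental"):
            # Disclosure cannot continue: the disagreement is located in a fundamental belief.
            return "hold steadfast (disagreement grounded in fundamental beliefs)"
        if depth >= max_depth:
            # Entanglement overwhelms brainpower: further rounds more likely to add error.
            return "hold steadfast (further disclosure judged unproductive)"
        # Disclose auxiliary beliefs and find where the webs first diverge.
        pairs = zip(mine.get("auxiliaries", []), yours.get("auxiliaries", []))
        diverging = next(((m, y) for m, y in pairs if m["p"] != y["p"]), None)
        if diverging is None:
            # Same auxiliaries but different conclusions: at least one agent has erred.
            return "conform (suspend judgement or lower confidence)"
        mine, yours = diverging  # the relevant disagreement travels one level down
        depth += 1
    return "agreement reached"

# Example in the spirit of the solipsism remark above: the disagreement about
# possible worlds travels to fundamental auxiliary beliefs, so both may hold steadfast.
solipsism = {"p": "only my mind exists", "fundamental": True, "auxiliaries": []}
people_exist = {"p": "other people exist", "fundamental": True, "auxiliaries": []}
mine = {"p": "possible worlds are not real", "auxiliaries": [solipsism]}
lewis = {"p": "possible worlds are real", "auxiliaries": [people_exist]}
print(rounds_of_disclosure(mine, lewis))
```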
Overall, the conformist attitude is required in scenarios of ideal peerage. In all rough cases (most, if not all, real-world examples), the requirement of dynamic disclosure should be seen as an approximation towards ideal peerage, except in cases where rounds cease because of fundamental beliefs. Generally, the principle can be understood as “you shouldn't use disagreement as evidence if you don't understand the (origin of the) disagreement”.
We generally have a feel for how entangled a certain belief is. For instance, we know that in areas where disagreement is common, e.g. politics or philosophy, beliefs are very entangled, so much disclosing must be done. In contrast, in areas where disagreement is rarer, e.g. arithmetic, there rarely is much additional information gained from disclosing auxiliary beliefs.
Now, let’s test this model against the cases.
Restaurant with Mental Maths: Here, the evidence, i.e. the bill, is disclosed to begin with. What needs to be disclosed are the auxiliary beliefs, e.g. about arithmetic. If they are identical, as they probably should be, then at least one of the parties must have erred in the calculation. The agent who erred (which must become obvious after disclosure) must modify their belief. Because the case is symmetrical and the agents can foresee that disclosure will locate the error, they ought to lower their confidence or suspend judgement. Conformism is applicable (for a different reason than in the literature so far).
Restaurant with Calculator: Same as above, but the auxiliary beliefs include how to use a calculator.
Horse Race case: There are few auxiliary beliefs here (e.g. that we are both sober and that our eyes work). If an agent believes that sense perception is fundamental, then steadfastness is permissible. If the agents disagree about these auxiliaries (e.g. one thinks the other is on drugs), then the disagreement travels to that belief, and conformism is applicable.
David Lewis case: The case of real possible worlds is too deeply entangled to make it possible to approximate ideal peerage. Because I can’t even establish whether we have similar auxiliary beliefs, I cannot locate the actual source of our disagreement. This lets me stay steadfast in my beliefs.
Conclusion
Hopefully, the dynamic model has given a convincing account of each of the four cases. Disagreements are often deeply entangled, and to take a rational stance on them, one ought to disentangle them to the most fundamental level. Because agents can often reasonably foresee how likely it is that disentangling a disagreement will reach a fundamental point, they can reasonably hold steadfast insofar as their fundamental belief entails the disagreed-upon belief. This will often be the case in philosophy or politics and rarely the case in simple mathematics. If one does not explore the entanglement of the belief, then one cannot treat the disagreement as good evidence. For instance, in cases where it is impractical to continue disclosing, the disagreement itself is not good evidence, because it is poorly understood.
What remains to be done is to try to generalise the model to more cases. The greatest challenge to this model is that its applicability is belief-laden: on the theoretical level it remains neutral as to its methodology (e.g. how to unravel entanglement or how auxiliary beliefs support the relevant belief), but on a practical level these questions may become central to applications.
Bibliography
Bergmann, Michael. 2009. “Rational Disagreement after Full Disclosure.” Episteme: A Journal of Social Epistemology 6 (3): 336–53.
Cevolani, Gustavo. 2014. “Truth Approximation, Belief Merging, and Peer Disagreement.” Synthese 191 (11): 2383–2401.
Christensen, David. 2009. “Disagreement as Evidence: The Epistemology of Controversy.” Philosophy Compass 4 (5): 756–67.
Douven, Igor. 2010. “Simulating Peer Disagreements.” Studies in History and Philosophy of Science Part A 41 (2). https://doi.org/10.1016/S0039-3681(10)00025-7.
Elga, Adam. 2007. “Reflection and Disagreement.” Noûs 41 (3): 478–502. https://doi.org/10.1111/j.1468-0068.2007.00656.x.
Enoch, David. 2010. “Not Just a Truthometer: Taking Oneself Seriously (but Not Too Seriously) in Cases of Peer Disagreement.” Mind 119 (476): 953–97.
Feldman, Richard. 2007. “Reasonable Religious Disagreements.” In Philosophers Without Gods: Meditations on Atheism and the Secular Life, edited by Louise M. Antony, 194–214. Oxford University Press.
Foley, Richard. 2001. Intellectual Trust in Oneself and Others. Cambridge University Press.
Frances, Bryan. 2010. “The Reflective Epistemic Renegade.” Philosophy and Phenomenological Research 81 (2): 419–63. https://doi.org/10.1111/j.1933-1592.2010.00372.x.
Goldman, Alvin, and Thomas Blanchard. 2016. “Social Epistemology.” The Stanford Encyclopedia of Philosophy.
Kelly, Thomas. 2005. “The Epistemic Significance of Disagreement.” Oxford Studies in Epistemology 1: 169–96.
———. 2010. “Peer Disagreement and Higher-Order Evidence.” In Disagreement, edited by Richard Feldman and Ted A. Warfield. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199226078.003.0007.
King, Nathan L. 2012. “Disagreement: What's the Problem? or A Good Peer Is Hard to Find.” Philosophy and Phenomenological Research 85 (2): 249–72.
Matheson, Jonathan. 2015. “Disagreement and Epistemic Peers.” Oxford Handbooks Online. http://www.oxfordhandbooks.com/view/10.1093/oxfordhb/9780199935314.001.0001/oxfordhb-9780199935314-e-13.
Moffett, Marc. 2007. “Reasonable Disagreement and Rational Group Inquiry.” Episteme 4 (3): 352–67. https://doi.org/10.3366/E1742360007000135.
Stanford, Kyle. 2013. “Underdetermination of Scientific Theory.” The Stanford Encyclopedia of Philosophy.
Footnotes
[1] See Goldman and Blanchard (2016) and Matheson (2015); see Enoch (2010, p. 996) for a reliability-based approach.
[2] Arguably, there is only one case where the evidence is truly identical: myself. It is irrational to hold a belief in p∧¬p; if I encounter such an irrational belief of mine, I should see what went wrong.
[3] For a counter-argument, see Kelly (2010).
[4] Furthermore, there is a wealth of literature in a similar spirit that focuses on truth approximation and belief merging (Cevolani 2014).
[5] Elga (2007) calls this entanglement “knots”. Enoch (2010) also argues against seeing disagreement as point-like and agents as “truthometers”.