The Title Piece of The Conflict of Mind by O.G. Rose

The Conflict of Mind

O.G. Rose
117 min read · Nov 20, 2020

On the Faulty Mechanics of Thought and Resulting Reasonableness of Moderation

1. If x is true but there is no evidence verifying x, then it is irrational, intellectually irresponsible, and correct to believe x. In this situation, it is correct not to believe what is correct, and the right thing to do is to not believe what’s right to believe.

1.01 This assumes something can be true without evidence that it is true (beyond unfalsifiable claims). Examples supporting this assumption and/or possibility:

a. I tell you that I plowed my sleigh into a tree when I was twelve.

b. Hemingway claims he is being monitored by US intelligence (before the intelligence community releases his file).

c. I claim that you will love me one day.

1.1 If x is false but there is evidence verifying x, it is rational, intellectually responsible, and wrong to believe x. In this situation, it is wrong not to believe what is false, and the wrong thing to do is to not believe what’s wrong to believe.

1.11 This assumes something false can be (seemingly) proven. Though it may ultimately be the case that this doesn’t hold, it can at least occur temporarily and/or ostensibly. Examples:

a. “The Replication Crisis.”

b. My father shows me tickets to a concert and says we’re going but knows he’s lying.

c. I dislike you, but for the sake of manipulating you, I buy you flowers and chocolates and praise you.

1.2 If x is true but there is evidence verifying and disproving x, it cannot be said for sure if it is epistemically moral or immoral to assent to x. Over x, there can be anxiety.

2. If x is true and will be verified in ten days, for ten days, it could be irrational, intellectually irresponsible, and correct to believe x.

2.1 If x is true but never verified, it is always irrational, intellectually irresponsible, and correct to believe x. The more rational the person, the more likely it is that the person never believes this truth.

3. To allude to W.K. Clifford, it is not the case that a person who is epistemically moral will necessarily be more correct than someone who is epistemically immoral. In fact, it is possible for epistemic responsibility to be what drives someone into error.¹

3.1 If Hemingway tells me that he is being monitored by US intelligence but cannot provide me with any hard evidence, it is reasonable for me to tell Hemingway to see a doctor and make sure he isn’t suffering from hallucinations. If he claims I don’t believe in him, I am justified and even epistemically obligated to tell him something Bertrand Russell said he would tell God: “Why don’t you have more evidence?”

4. In line with the thought of “Flip Moments” by O.G. Rose, if x will be verified in ten days, between now and then, it is epistemically irresponsible to believe in x; in ten days, it will (suddenly) be epistemically responsible to believe in x, as if it was always epistemically responsible to believe in x.

4.1 For ten days, there will be reason to believe those who believe in x are “poor thinkers”; in ten days, there will be reason to believe those who (always) believe(d) in x are “good thinkers.” Unless, that is, it is understood that “being epistemically responsible and wrong” isn’t to be a “poor thinker,” but rather to be caught in a tragedy of “being thoughtful” (as this paper hopes to establish).

5. For ten days, in regard to unverifiable and true x, to the degree a person is epistemically responsible and moral is to the degree the person will be wrong, while to the degree a person is epistemically irresponsible and immoral is to the degree the person will be right.

5.1 “Being right” isn’t (necessarily) the same as “being epistemically responsible,” as “being wrong” isn’t (necessarily) the same as “being epistemically irresponsible.” This point is similar to the idea that “being true” and “being rational” aren’t identical, as explored in “The True Isn’t the Rational” by O.G. Rose.

6. In ten days, it will be epistemically responsible and moral to believe in x. Hence, it is possible for what is epistemically irresponsible temporarily to be epistemically responsible ultimately (and vice-versa).

7. It is possible there are truths which will always be epistemically irresponsible and/or immoral to believe, as it is possible there are falsities which will always be epistemically responsible and/or moral to believe. Discerning if this was ever the case would require being God.

8. The ethics of rationality can conflict with truth (that rationality is “toward”); the ethics by which people know a thing can conflict with the truth of that thing.

8.1 People will inevitably find themselves in the middle of this conflict:

a. Someone will tell us about something that happened to them and there won’t be evidence for it.

b. The government will tell us things while being unwilling to reveal how it learned them, so as to avoid compromising its secret operatives.

c. Peer-reviewed articles will be published that are later falsified.

d. Government officials will propose lessening privacy in order to stop terrorist attacks, attacks which the officials claim are likely to happen.

9. The ethics of the mind can conflict with the rightness of the mind. Echoing the phrase “the life of the mind,” I will call this dilemma “the conflict of mind.”

9.1 The bigger and more complex the issue, the higher the likelihood there will be “conflicts of mind,” suggesting an advantage of smaller systems.

10. Democracies that fail to prepare for and educate people about “conflicts of mind”-situations are democracies that will prove more likely to fail than those that do.

II

1. Since absolute objectivity is impossible for humans, is it meaningless to worry about whether x is wrong in a situation in which it is epistemically responsible to believe in x? Likewise, is it similarly meaningless to worry about whether I’m wrong to believe what I’m epistemically responsible to believe, or to worry that what I’ve decided is epistemically irresponsible to believe is actually correct?

2. In line with the thought of “On Worry” by O.G. Rose, there is a difference between “worrying about x” and “being aware of x.” No, people should not worry that what they find epistemically responsible to believe is actually false, but they should be aware of this possibility, as they should be aware of the possibility that what is epistemically irresponsible to believe could be true. Though worry about the possibility could lead to anxiety, awareness could lead to intellectual humility.

3. Intellectual humility is important in our Pluralistic Age, when we live in the midst of people who think fundamentally and essentially differently than we do (see “Belonging Again” by O.G. Rose).

a. It is likely we will encounter people who we believe are wrong and those we will likely be quick to “verify” as wrong. Intellectual humility can help us combat this natural tendency to “ideology preserve” (as discussed in “The True Isn’t the Rational” by O.G. Rose).

b. When we encounter someone believing something that is epistemically irresponsible to believe, it might be the case the person is still right, and intellectual humility will increase the likelihood that we don’t dismiss the person outright (as if “the conflict of mind” didn’t exist).

c. Socrates was intellectually humble.

3.1 The more aware we are that epistemic responsibility can conflict with truth, the more we won’t be quick to judge those who are currently epistemically responsible to believe x as those who will (necessarily) always be epistemically responsible to believe x (though such may end up being the case). Likewise, we won’t be quick to judge those who are currently epistemically irresponsible to believe x as those who necessarily believe something that is false (nor will we act like we are objective like scientists when we turn out to be right, aware that “the right” isn’t necessarily “the epistemically responsible”).

3.2 The more aware we are that we can be epistemically responsible yet also wrong, as we could be epistemically irresponsible yet right, the more we will also develop valuable self-skepticism and intellectual “openness” to those who don’t think like us, us being “not too sure” that we are right.

3.21

‘The spirit of liberty is the spirit which is not too sure that it is right…’

- Judge Learned Hand

3.3 The less aware we are that epistemic responsibility can conflict with truth, the more likely we are to conflate “being epistemically responsible” with “being right.” If we conflate these two, then when we meet someone who is wrong, we will assume the person is (always) “epistemically irresponsible,” and hence we may not bother reasoning with that person. Furthermore, since no one believes what they believe thinking they are wrong, we will all necessarily and naturally think of ourselves as (always) “epistemically responsible,” protecting ourselves and our ideologies with a “moral layer” that will increase the likelihood we will not change our minds even when we should.

3.31 The conflation of “being right” and “being epistemically responsible” can also increase the likelihood that we will think of someone who is right as (necessarily) “committed to truth no matter where it leads” (to allude to Thomas Jefferson); similarly, we are likely to think of someone who is wrong as (necessarily) “not committed to truth” (all while we necessarily think of ourselves as so committed).

3.32 As of a “flip moment” when x becomes true and hence epistemically responsible to believe, it will be as if those who didn’t believe in x before the “flip moment” were “always” epistemically irresponsible, “ideological,” “willfully ignorant,” “oblivious to the facts,” etc. (even if at the time disbelief was the epistemically responsible thing to do). The converse also holds: it will be as if those who believed in x while it was epistemically irresponsible were always epistemically responsible.

3.33 As of a “flip moment” when x becomes false and hence epistemically irresponsible to believe, it will be as if those who believed in x before the “flip moment” were “always” epistemically irresponsible, “ideological,” etc. (even if it was the epistemically responsible thing to do). The converse also holds: it will be as if those who disbelieved x while it was epistemically irresponsible were always epistemically responsible.

3.34 Determining epistemic (ir)responsibility can be like the problem of “observational equivalence”: we observe output x, but not context y necessary for interpreting x.

4. Intellectually humble, we will be more likely to keep our ears, hearts, and minds open to “the other,” as is needed in our divisive age.

4.1 If there is a failure to realize that what constitutes “epistemic responsibility” is relative to what a person believes is true, the nature of the evidence for the truth, one’s relative standards of justification, and the like, there will be a lack of intellectual humility, and furthermore calls for people to be “intellectually honest” could very well worsen the problem.

III

1. Classically, philosophy has defined “knowledge” as “justified true belief” (JTB).
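
(To state the classical analysis compactly, using standard textbook shorthand rather than anything from this paper: a subject S knows that p if and only if S believes that p, S is justified in believing that p, and p is true; in symbols, K(S, p) ⇔ B(S, p) ∧ J(S, p) ∧ T(p).)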

1.1 Edmund Gettier famously devised hypotheticals in which I could meet the criteria of JTB and yet not have knowledge. However, I believe Lukasz Lozanski in his article “The Gettier Problem is No Longer a Problem” has provided a strong argument for why we should continue using the classic definition.

2. If I believe in something as knowledge that doesn’t meet the criteria of JTB, I am epistemically irresponsible. In many respects, what makes knowledge “epistemically responsible” is precisely that it is JTB.

2.1 If I believe it is knowledge that tomorrow afternoon I will be at the train station (versus “probably knowledge”) — when I could theoretically be in a car wreck, get sick, etc. — then I act epistemically irresponsibly. And yet tomorrow at the train station it will suddenly be JTB that “I will be at the train station on New Year’s Day, 2017, at 2PM,” as if it was always JTB.

2.2 If I believe it isn’t knowledge that “I will be at the train station on January 1st, 2017, at 2PM” while at the train station at that date and time — because I think reality might be an illusion, that I might be dreaming, etc. — then I act epistemically irresponsibly. And yet if at that date and time I’m not at the train station, it will be as if the premise was never JTB or knowledge.

2.21 But isn’t it possible that the world could be an illusion? Yes, but such a claim cannot be falsified, and considering Karl Popper, believing such a claim constituted “knowledge” would be incorrect — it’s only a possibility, which frankly means very little. For more on this argument, consider “Bridging the Kants” by O.G. Rose.

3. If at twelve years old I plowed my sleigh into a tree, this memory, which relative to me is a “justified true belief,” constitutes knowledge.

3.1 It is a belief because it is something that I think happened in the world (to me).

3.2 It is justified because I still have a scar on my arm from the wreck and a vivid memory of it happening.

3.3 It is true because I actually did hit the tree (and have no falsifiable reason to believe I am in The Matrix).

4. If I tell you that at twelve, I plowed my sleigh into a tree, what is JTB to me isn’t JTB to you. If anything, it is an “unjustified, maybe true, belief,” and yet if you believe me, you believe something that is true and something that is knowledge to me (and hence is knowledge). In this way, “unjustified, maybe true, belief” can be “relative knowledge” and/or “personal knowledge.”

4.1 It can be a belief because you believe what I am telling you.

4.2 It isn’t justified because you can’t know for sure that the scar I show you is actually from the wreck (though you can believe me), and furthermore you cannot share in my memory of the wreck.

4.3 You can’t know if it’s true or false because you can’t know if I actually did hit the tree (and there’s no evidence of it happening or not beyond what I say) (though again, you can believe me).

5. If you believe what I am telling you is knowledge — which it actually is to me — and it doesn’t meet the criteria of JTB to you, there is a sense in which you are epistemically irresponsible and yet correct.

6. Hence, though it is only epistemically responsible to consider JTB knowledge, it isn’t the case that everything which isn’t JTB to me can’t be knowledge to anyone, and furthermore it isn’t the case that what isn’t JTB or epistemically responsible is necessarily false. Conversely, it isn’t the case that what is JTB or epistemically responsible to me can necessarily be JTB or epistemically responsible to anyone (though it might be).

6.1 Could this problem of relativity be avoided by defining JTB exclusively in terms of scientific evidence, or ‘the experience of no one in particular?’² This means ‘particular persons are interchangeable’: scientific evidence can be observed as meaning what it means regardless of who observes it.³ Certainly, “liberal science” is necessary for any functioning society — Jonathan Rauch made this argument powerfully in Kindly Inquisitors — but liberal science will not be enough to save us from “conflicts of the mind.” Not all truths are scientific truths, and not all evidence is scientific evidence. Furthermore, all truth is experienced through and in an “I” (even scientific truth), so even when it comes to believing a scientific truth, I remember it and experience it “like” a personal memory: it resides in the same brain and flashes across my consciousness in a similar way. No, a scientific truth is not identical to a memory — other people can observe it, it’s not as easily manipulated, it can be falsified, etc. — but it is like a memory and any other “relative knowledge” I may hold within, and it is all I have when I’m not in the actual presence of the scientific evidence (which is often the case). Hence, a scientific JTB is something I will usually experience “like” a nonscientific JTB, and on this point, it is possible that I find myself in relative situations over a scientific JTB just as I can over a nonscientific JTB (say when a scientific study is proven to be wrong). Furthermore, science cannot provide me with “total certainty” (see “Ludwig” by O.G. Rose), and so though necessary, liberal science alone cannot save us from “conflicts of the mind” (especially not in daily experiences), nor even make us feel like we’re safe.

6.11 To comment further on Kindly Inquisitors by Jonathan Rauch (which I rarely miss an opportunity to do), I would add that though liberal science may help us distinguish “belief” from “knowledge,” it is unfortunately not the case that only what is knowledge is true (and Rauch does not claim such). There are true beliefs, and though liberal science may be invaluable in helping us determine what we should do with perhaps-true-knowledge over and/or against perhaps-true-beliefs, we are still faced with the problem of figuring out how to manage our perhaps-true-beliefs (which entails our fundamental axioms or what James Davison Hunter called “first principles”). Determining this is imperative for our Pluralistic Age, and I believe Hunter can help us solve this puzzling problem (his work is discussed at length in “Belonging Again” by O.G. Rose, as well as toward the end of this paper). Before the Shooting Begins by Hunter is particularly invaluable.

7. A world in which everyone assents only to JTB and is epistemically responsible isn’t a world in which everyone is necessarily correct. Failure to understand this could lead people to believe problems in democracy can be solved by teaching everyone to only believe what is JTB and epistemically responsible, when in fact a society full of people committed to JTB and epistemic responsibility will not necessarily be a society free of conflict. In fact, it could be a society more destined to be stuck in conflict, precisely because it believes it knows how to be done with conflict: just convince/force “the other” to change.

7.1 If you tell me that at twelve you plowed your sleigh into a tree, you tell me that which cannot be falsified (especially by me) but is nevertheless JTB (relative to you). Considering this, it isn’t the case that what cannot be falsified is that which necessarily isn’t JTB.

IV

1. In line with “Ludwig” by O.G. Rose, since total and/or absolute certainty is impossible — since I cannot know for sure that all the scientists aren’t lying to me about Global Warming, that the Vikings did in fact exist, that I don’t believe everything I do because of subconscious biases, that there aren’t angels in the room with me, etc. — is intellectual honesty and/or epistemic responsibility possible?

2. To have “total certainty” isn’t the same as being “epistemically responsible.” Though Karl Popper’s “falsification” cannot give me “total certainty,” it can give me “practical certainty” (or confidence), and as long as this certainty always remains open to falsification, my “practical certainty” becomes practically indistinguishable from “total certainty.” Hence, living according to Popper’s thought is perhaps to live as “epistemically responsible” as is possible.

3. But can everything that is true be falsified? Knowing this would require knowing everything true, which only God can know. It is very possible that there exists knowledge and/or JTB that cannot be falsified. This being the case, what would constitute “epistemic responsibility” would entail applying the right method of knowing to the right kind of truth: to apply falsification to truth that can be falsified, to apply aesthetic epistemologies to works of art, logical arguments to ethics, and so on. Hence, “being epistemically responsible” entails “applying the right epistemologies to the right kinds of truths.”

3.1 If I apply falsification to art, I act epistemically irresponsibly, but in a positivistic society that “practically believes” that “all truth can be scientifically verified,” I very well might be deemed “epistemically responsible.”

4. Determining if someone is epistemically responsible about x requires knowing which epistemological method is “right” for knowing x. Hence, a society that isn’t well-trained in epistemology is likely a society that lacks capacities necessary for discerning levels of epistemic responsibility accurately (and yet is a society no less likely to make these kinds of judgments).

4.1 If someone is using an aesthetic epistemology by which to judge a painting, and I am a positivist who believes that only scientific epistemology is valid for knowing truth, relative to me, the other person is (“objectively”) epistemologically irresponsible (and if I conflate “being epistemologically responsible” with “being right,” I will also think of the person as “wrong”).

4.11 Only using a positivist epistemology, I lack any method for knowing I am wrong (unable to “see” non-positivistic phenomena as “evidence”), and hence I have every reason to think I am right to think the other is “epistemologically irresponsible” (and “wrong”). In fact, my method for understanding reality actually makes it so that I have “evidence” that the other is “wrong,” and hence I have reason to believe I am “being epistemically responsible” for thinking such (and if I conflate “being epistemically responsible” with “being right,” I have reason to think I am also “right”). The conflation of “being right” and “being epistemically responsible” doesn’t create a bubble but an impenetrable fort.

4.2 Whether trained in epistemology or not, people will claim of others that they are “ignorant,” “uninformed,” “willfully ignorant,” and the like — claims about the epistemic responsibility and morality of others will fill any democracy, precisely because if a person thinks x and not y (in a society where some people do believe y), the person must convince himself or herself that he or she is justified to believe x instead of y. Hence, the more Pluralistic the society, the more likely (ideology justifying/preserving) claims about the epistemic responsibility of others and oneself will be made. Furthermore, this also means the more Pluralistic the society, the more imperative it is for people not to conflate “being epistemically responsible” with “being right.”

5. What constitutes epistemic responsibility is relative to the truth it is “toward” and the method(s) by which that truth can be known.

5.1 If we agree that how the truth of a memory is known is different from how the truth of chemistry is known, then we agree that there isn’t a single epistemology that we can use as a standard by which to determine epistemic (ir)responsibility. I believe this assumption is fair to make, seeing as people do have memories of things that really happened to them, but (real) “memories” are different in kind from (real) “rocks” — physical and observable phenomena that aren’t bound to a single subjectivity in their conceivability.

5.2 In regard to epistemology, in light of “Monotheorism” by O.G. Rose, a “monotheoristic society” — one that subscribes to one theory (of epistemology, in this case) by which to explain all phenomena versus multiple theories — is one that won’t be epistemically responsible.

5.21 If we say that “I am only epistemically responsible for investigating claims that meet the standards of epistemological method x” (“liberal science,” for example), then we are only epistemically responsible for investigating claims that fall under that method. On the other hand, if we say the opposite, we are epistemically responsible for much more, regardless of the complexity. This suggests why there is an extraordinary temptation to be “monotheoristic,” both in epistemology and generally.

6. Since “total certainty” is impossible, to define epistemic responsibility in its terms would be illogical and unrealistic; rather, it should be defined against “practical certainty” in terms of the “right” epistemological method for the truth “toward” which the epistemic responsibility is directed.

6.1 But how do we determine which epistemological method is the “right” one? This is the question which the field of epistemology is devoted to: it exceeds the scope of this paper by many books. That said, it should be noted at least that mastering epistemology will increase the likelihood that we align “being epistemically responsible” with “being right,” though there is no guarantee.

6.11 If for truth x one person uses epistemological method y and another uses epistemological method z, by what standard can we say y is right and not z for x? If the study of epistemology cannot provide us with this answer (one that must arguably involve ontology), then the first person is as justified as the second, and both will be equally justified in being “locked in” to whatever conclusion they reach about x, regardless of how wrong that conclusion might be. If epistemological methods can all be equally subscribed to, “conflicts of the mind” are even more likely, for there will be no standard against which people can determine if the method they are using to define and practice epistemic responsibility is the correct and/or “fitting” method for a given truth claim, increasing the likelihood that epistemic methodology, epistemic responsibility, and truth all come into conflict. In other words, if it cannot be determined that Thomist logic should be used to determine an ethical claim about murder, for example, then a person’s intellectual commitment to neuroscience will keep the person from determining “the right thing to do” about murder, a commitment the person will have no reason to move beyond.

6.2 But how do we determine at what point we achieve “practical certainty” about premise x? This leads us to questions of justification that shall be explored in section VII.

V

1. Let us explore a situation that could help flesh out the arguments of this paper and highlight why “the conflict of mind” is a very real and pressing problem.

As of January 2017, allegations emerged that President-Elect Donald Trump was propped up by the Russian government. Supposedly, Russia could blackmail Trump, and for this reason they helped Trump become President, hoping they could then control America in line with Russian interests. As of the writing of this sentence, Trump’s inauguration is in seven days, and the claims against Trump are currently unverified.

If the allegations are true, it is rational, morally imperative, and vital for America’s safety to keep Trump from becoming President. However, if Trump were kept from becoming President until the reports are verified, this would be a threat to American democracy itself, call into question the legitimacy of its elections, raise the possibility of future Presidents being kept out of office via unverified allegations, risk social unrest as Trump supporters potentially riot, and it would also give the CIA extraordinary power over the Executive Branch (power that wasn’t “checked and balanced” by other branches of government).

If Trump were to become President and ten days later the allegations against him were verified, the very fact that someone whom Russia aided and could control became President (for any amount of time) would risk world war, increase paranoia and anxiety about all future candidates, raise questions about whether the National Election should be redone (for surely a Russian operative’s VP shouldn’t become President), and also potentially launch a witch hunt against all government officials to assure no one else worked for Russia.

In this situation, what should the epistemically responsible and intellectually honest individual do? It should be noted that the failure of our Pluralistic and democratic society to acknowledge the difficulty of this question has contributed to our sociopolitical failures and tensions (as has our failure to be aware of “tragedy,” as discussed in “The Tragedy of Us” by O.G. Rose and in The Fragility of Goodness by Martha Nussbaum).

1.01 Consciously or unconsciously, it is conceivable that a way in which people seize totalitarian control, manipulate, deceive, confuse, etc. is by setting up epistemic responsibility in conflict with truth — by creating “conflicts of the mind” for the public which benefit them and their agenda. On this line of thought, it could be precisely our epistemic responsibility that keeps us from stopping something like the rise of a Third Reich, as it could be precisely our lack of epistemic responsibility.

1.02 Perhaps it is the responsibility of the press to avoid unnecessary “conflicts of the mind,” to keep people in power from (intentionally or unintentionally) creating and/or using “conflicts of the mind”-situations to their advantage and the advantage of their agendas, and to keep the public from (intentionally or unintentionally) doing the same. If so, a press that is unaware that this is its responsibility will be conflicted about what exactly constitutes “journalistic integrity” in a democracy, and furthermore will likely create “conflict of mind”-situations unnecessarily (which is perhaps especially likely if the press is increasingly decentralized across the internet and among millions of “reporters” who don’t agree on their role).

1.1 If I decide not to believe Trump is being used by Russia until the reports are verified and the reports are indeed verified, though I am epistemically responsible (following a more positivistic method), in the eyes of history, I will be one of the many who failed to try to stop Trump from becoming President. I will have failed to stand up against corruption and failed to fight to keep America safe, and yet I will have been epistemically honest.

If I decide to believe the reports, stand against Trump becoming President, and the reports are later proven to be false, in the eyes of history, I will be an ideologue who tried to hide his partisanship behind claims like “better safe than sorry.” I will be someone who acted unjustly toward democratic results with which I didn’t agree.

If I decide to believe the reports, stand against Trump becoming President, and the reports are verified, in the eyes of history, I will be someone who “appears” to always follow the facts and who is always epistemically responsible, and I won’t be an ideologue, but rather one of the few who stood up against the impending threat that everyone else was too afraid to see clearly. I will be one of the few whom skeptics didn’t deter; I will be one of the few who did the right thing.

If I decide to not believe the reports and the reports are later disproven, I will have been epistemically responsible and correct about a story that, in the eyes of history, was just another of the thousands of conspiracy theories and “fake news”-stories generated daily.

No one knows what will happen over the next seven days, let alone the next year.

History only sees what’s history.

No one knows what the future holds: no one knows about today what in the future will be obvious.

No one knows about today what in the future will seem as if it couldn’t have been thought about in any other way.

We must choose now what we will believe, and we must choose now according to what standard we will believe what we believe — a standard which will orientate and define our epistemic responsibility. Our society’s failure to acknowledge how future events reach back and ostensibly change what constitutes epistemic responsibility, and its failure to acknowledge that we cannot always say for sure what posture (actually) constitutes epistemic responsibility until a “flip moment” in the future, reduce the chances that our society handles well the great responsibility of democracy.

1.11 Perhaps I could have avoided this dilemma by defining epistemic responsibility less empirically? But that would mean that I demand less evidence and less verification for believing that Trump was being used by Russia, which would mean that I lowered my intellectual standards, not raised them. If I decide I don’t require as much proof or evidence, then I decide I will increase my trust in those who wrote the reports and the reports themselves. But without evidence, in what way is it epistemically responsible to increase my trust in the reports, and how could I be justified to increase my trust? There seems to be no way.

Perhaps, though, I should just lower my standard for the amount of evidence I need to believe it is justified to accept the reports, which is different from not demanding any evidence at all? Hence, I can avoid the dilemma by shifting my standard of justification (from, say, requiring ten pieces of convincing evidence to five). But is the decision to lower my standard of justification a decision I make based on evidence (or is it justified in and of itself)? Is there evidence I should lower my standard of justification, or just a feeling I should? Is it epistemically responsible to make such a decision based on emotion? If not, why not; if so, why so; and is there an objective standard?

And is it epistemically responsible to lower (or even raise) my standard of justification (in order to avoid the dilemma Trump poses or even to overcome it)? Isn’t that the height of epistemic irresponsibility? Yes/no: my standard of justification is what defines my epistemic responsibility, and if I change it, I change what for me constitutes epistemic responsibility. Perhaps relative to my former standard, I am epistemically irresponsible, but relative to my new one, I am not (as if I was never epistemically irresponsible). And in fact, if my previous standard was a bad one, my change is in fact an act of epistemic responsibility, and so it cannot even be said for sure that changing standards is epistemically irresponsible (it depends, but not according to an easy-to-determine framework). And who can say for sure that my previous standard of justification wasn’t a bad one (or that it was a good one)? By what (epistemically responsible) standard?

And how do I determine what should constitute my standard of justification in the first place (or ever), and hence determine my baseline for epistemic responsibility?

We are beginning to glimpse a shaking center and approaching the question of justification.

But not yet.

1.2 Note that time is an important dimension of the dilemma involving the President-Elect, and it is important that time be recognized as paramount to the question of epistemic responsibility. Imagine that someone called and told you that a terrorist attack was about to happen near your home. What would be the epistemically responsible thing to do? You don’t have evidence that the person is telling the truth, and the person who called is someone you’ve never heard from before. Two main choices come to mind:

a. Ignore the person: if you do this and the terrorist attack doesn’t happen, the call will be a nonevent relative to which you (seemingly) acted epistemically responsibly; but if you do this and the attack does happen, the call will be the event for why you (seemingly) acted epistemically irresponsibly; after all, you had evidence a terrorist attack was going to happen (the call itself), and you ignored it.

b. Trust the person: if you do this and the terrorist attack doesn’t happen, the fact you considered a random call from a random stranger as worth trusting will be evidence that you are easily manipulated and epistemically irresponsible; but if the attack does happen, you will be epistemically responsible and provide proof that a random call from a random stranger can be valid evidence (potentially shifting your entire epistemology).

Note that the imperative to make a decision is determined by the sense that the bomb will go off soon — that time will soon unveil the “truthfulness” of the warning and hence determine the epistemic (ir)responsibility of listening/ignoring it (though no bomb may go off, keeping you “stuck” in (a feeling of being) a moment-before-a-bomb-might-go-off until so many such moments have passed they begin losing their emotional edge — which might be precisely when the bomb goes off). It is precisely this imperative that will perhaps “lean” you in the direction of running from the house, despite the lack of evidence. “The cost/benefit analysis” weighs heavily in favor of running: ignoring the warning risks your life, and by ignoring it, you merely avoid a temporary inconvenience. Because the risk of being wrong is death, you have reason to “lower” your standard of justification/evidence (and redefine it as epistemically responsible to believe the stranger on the phone). So you run, only to find out an hour later that it was a prank: you were wrong to lower your standard of justification, even though it was the right thing to do according to your “cost/benefit analysis.” You were wrong and right, and though you didn’t act on a JTB, only a “justified belief” (JB) (someone warned you of an attack and you believed the stranger), in order to know if you were acting on a JTB versus JB, you would have had to wait for time to unveil the case (potentially killing you).
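
(To make the “cost/benefit analysis” concrete with purely hypothetical numbers: suppose evacuating costs the equivalent of 1 unit of inconvenience, dying costs 1,000,000 units, and p is your credence that the warning is genuine. The expected loss of staying is p × 1,000,000, while the expected loss of running is 1, so running is favored whenever p > 1/1,000,000. Even a one-in-a-million trust in the stranger tips the scales, which is precisely how the stakes, rather than the evidence, end up setting your effective standard of justification.)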

But were you actually epistemically responsible (or only relatively so)? Determining this would require determining at what point (of threat) is a person actually justified to shift his or her standard of justification for believing a claim. If it is the case that if claim x is true, the world will end, am I justified to believe claim x even if I have only unverified evidence? On the other hand, if it is the case that if claim y is true, the world won’t end, am I unjustified to believe claim y when I only have unverified evidence? In other words, does threat level change standards of justification? If so, a dictator could easily exploit and manipulate a public by making all his or her claims ones that if true the world would end. But surely we should take big threats more seriously than small ones? Yes, but what does it mean to take something “more seriously?” To hold it up to a lower or higher standard of justification? If lower, by what standard? If higher, by what standard?

If it is the case that the world will end if case x is true, should we always act as if case x was and/or will be true? Is that the only epistemically responsible way to act?

It would seem so.

It would seem epistemic responsibility would have us be easily manipulated, and in a manner that we could only resist by being epistemically irresponsible.

Perhaps this is what all the worst dictators, manipulators, exploiters, and cult leaders know.

1.21 For more on this problem, please see “Death is the Event Horizon of Reason” by O.G. Rose.

1.22 Is it epistemically irresponsible to act on a JB versus JTB? If so, in the above example, to be epistemically responsible, you would have to risk dying and/or die (at which point you wouldn’t know if you were epistemically responsible, so what would it matter?). Your cost/benefit analysis would make “being epistemically responsible” the same as “being irresponsible.” But to live purely by cost/benefit analysis could quickly make a person OCD: if I believe not touching a doorknob will get my parents killed, it is always rational to do something crazy (even a million times, given that I love my parents limitlessly). So shouldn’t I think in a manner that combines cost/benefit analysis with epistemic responsibility? Yes, but how? If there’s an answer, it’s situational. So, there’s no objective standard? No, and a democracy that fails to realize this is a democracy that will apply countless subjective standards upon its own people — benefiting no one.

The question of justification continues to approach.

1.23 Does a claim over a phone in favor of premise x function as evidence for premise x? Why or why not? In a situation in which we had more time to make a decision, we may say “not if it cannot pass the test of liberal science,” and then turn to the work of Jonathan Rauch. However, would Rauch’s criteria work when it comes to determining if x is evidence that the painting in front of me is “good art?” No, but does that mean there is no such thing as “good art,” only subjective opinion? If we say “yes,” on what grounds could we say that Picasso’s work is better than the work of my toddler?

“Evidence” is “phenomenon toward a case (justifying or disproving it)” (as discussed in “Self-Delusion, the Toward-ness of Evidence, and the Paradox of Judgment” by O.G. Rose). When should x fall under the category of “evidence for y?” That x can be evidence is because within us we carry a “case” relative to which x can function as evidence, but whether it should be treated as such is a different question. In other words, when are we justified to consider “x as evidence for y” versus just consider “x as x?” This is another question of justification which we will address shortly.

Considering the terrorist hypothetical, how do we face the question “What is evidence?” in a situation in which a decision must be made now? It would seem that a cost/benefit analysis can change what we are willing to consider evidence, and this hints that what we treat like evidence is relative to the situation (though our textbook definition for “evidence” may stay the same). What is practically evidence is situational, relative to cost/benefit analysis, and hence so also is epistemic responsibility (even if “the actual truth” will later unveil that we were epistemically irresponsible). Relative to the moment in which we acted, we acted epistemically responsibly; relative to what ultimately turned out to be the case, we may have acted epistemically irresponsibly. But our cost/benefit analysis couldn’t let us worry about that at the time.

So it goes with determining if Trump should be kept out of office.

Sometimes it seems that there isn’t a good or bad decision, just a decision that we later color in terms of good or bad, seemingly unable to help conceive of it any other way (because of the nature of history and time to make what “turns out to be the case” seem “as if” it was “always (obviously) the case”).

There’s just us.

1.231 Do note that the periods during which epistemic responsibility and truth are in conflict (before verification) tend to be quick — a few hours, a few days, a few months — but increasingly we are pressured in our democracy to take positions now (in the name of justice, national security, etc.), thanks mostly to the internet and modern media (and how people on social media will know if we fail to take a position, observing our lack of posts, etc.).

Also, it should be noted that not every situation involves a “conflict of the mind,” but because such situations perhaps occur less often than others, we can be trained not to recognize an actual “conflict of the mind”-situation as one at all: when we are used to x, we are likely to see y as x. This tendency sets us up to be unprepared for when a “conflict of the mind”-situation actually occurs.

1.232 It is also important to note that if we decide to “wait until we get more evidence” to decide if the reports about Trump are true, for example, we have taken a position that history (and others) may judge as epistemically (ir)responsible. Likewise, if I were to “wait for more evidence” before believing the phone call that warned me about a terrorist attack that was about to happen, I would have to practically believe that the terrorist attack wasn’t going to happen.

Agnosticism isn’t escape.

In withholding judgment — in being what I’ll call “intellectually agnostic” — though I don’t take a position of intellectual belief, I cannot avoid taking a position of practical belief which is practically identical with an intellectual assent. So be it, but perhaps intellectual agnosticism can still be more epistemically responsible? Perhaps, but it will be practically identical with either epistemic responsibility or irresponsibility (relative to what turns out to be true), and hence the perhaps greater epistemic morality of intellectual agnosticism will be practically irrelevant. The possibility is solipsistic and practically useless for the question of how democracy should function if it is to be epistemically responsible.

Lastly and theoretically, even if there were a time when intellectual agnosticism was more epistemically responsible, there can come a point where continuing to be intellectually agnostic suddenly becomes epistemically irresponsible, such as when new evidence comes out in favor of the premise that Trump is being blackmailed by Russia (say if Trump were to admit it). At this point, it would be epistemically irresponsible to continue being intellectually agnostic. But couldn’t it be the case that Trump was forced to confess something that wasn’t true? Indeed: the question arises of at what point a person is no longer justified to continue being intellectually agnostic; at what point must a “maybe” shift into a “yes” or “no?”

The question of justification continues to approach.

1.24 To offer additional examples similar to being called and told a terrorist attack was about to happen (moments that could be considered some of the purest existential experiences possible):

a. Imagine Ben Bernanke walks into your office and tells you that if you don’t immediately bail out the banks, another Great Depression will occur.

b. Imagine that a top scientist tells you that we have to lower greenhouse emissions quickly or Global Warming will destroy the world.

c. Imagine someone tells you that millions of babies are being slaughtered every year and every minute we do nothing another dies.

1.25 Like particles, we constantly exist in a state of unstable possibilities, and democracies that fail to acknowledge this are democracies that will struggle. We exist in this state because we exist in (space)time, and since we are always temporal, epistemic responsibility is always temporal (it is always defined in and provided being as itself within (space)time). Thinking is always against time, and time is its background: thinking is always upon time and over time. And yet we often treat thinking as “above” time and “transcendent” of it, when in fact thinking never escapes time: it is always in and shaped by it. Time portrays thought as epistemically responsible, as correct, as rational, as always “seeming” correct, and as always “seemingly” on the right side of history or the wrong. A democracy that fails to realize the power of time to portray thought is a democracy that will fail to value thought in terms of temporality, and it will rather likely judge thought in terms of “the eyes of history” or what time unveils to be true. But we thinkers never exist above time knowing what time will hold: to judge thought in terms of “the eyes of history” is to judge it in terms of what we never live in or experience. This is likely to contribute to us approaching thought wrongly, and in fact make us nervous and even paranoid to think in an age that requires thinking and philosophical thinking like never before (as argued throughout the works of O.G. Rose).

Thinking is a race against time — not only in that thought seeks to think before it fades into death, but also because thought attempts to “know” what truth lies in the future before time arrives at the moment when it unveils that truth (to think, for example, is to try to know if the reports against Trump will be verified or not before they are verified or not). Time (un)veils truth and thinking strives to know what is behind the curtain before the curtain is raised, like a spectator trying to figure out how a magic trick works before the magician unveils the secret. But time always has the upper hand, for even when it is slower than thought to determine that x is true, time has the power to seemingly reach back in time and make it “as if” it was always “obviously” the case that x was true; it has the power to make it seem as if all thought against the validity of x was thought that was “always” epistemically irresponsible; and so on. Time portrays thought, but thought can also impact how time manifests by determining and/or influencing in which direction of spacetime we thinkers travel. However, when it comes to the question of unveiling truth to us and influencing how “the eyes of history” fall upon those who questioned the “obvious” truth, time always has the upper hand.

Time profoundly determines how truth is portrayed, and hence it has power over what we consider epistemically responsible and irresponsible, and for how long and when. Epistemic responsibility isn’t a “solid” doctrine, as appeals to “pursuing truth to wherever it may lead” imply; rather, epistemic responsibility exists in a constant state of instability, like a particle that changes upon being observed (into a state that (suddenly) seems “as if” it was always in).

The observation of time changes the reality of epistemic (ir)responsibility.

Democracies that fail to realize the power of time are democracies that are likely to struggle. The same goes with the study of epistemology, which for too long has discussed “intellectual honesty” as if what we are epistemically (ir)responsible “toward” isn’t unstable and isn’t relative to (space)time like a particle.

VI

1. What is the epistemically responsible way to respond to someone who always lies? If we know that everything the person says is false, to investigate every claim the person makes would seem to be epistemically irresponsible — or would it? Can it ever be epistemically responsible not to investigate a claim? Perhaps: we already know ahead of time what our results will be (we’ll find out the person lied), and to find out something we already knew, we have to spend/waste a lot of our time, our mental resources, the time of those who will listen to our findings, and so on. Hence, there’s good reason to argue that gathering evidence to confirm that the claims of someone we know always lies are false is epistemically irresponsible.

Once it is established that a person always lies, it seemingly becomes epistemically irresponsible to investigate all the claims of that person, but how does one decide someone “always lies” in a manner that is epistemically responsible? To decide someone “always lies” is an extraordinary act, freeing us of any epistemic obligation to investigate (all) the claims the person makes, to gather evidence against/for the person, and so on. Considering this, the standard must be very high for someone to be justified to claim “(s)he always lies,” and it’s fair to wonder how someone could ever be epistemically responsible in making this claim.

If someone does in fact always lie and I fail to determine this, and hence fail to discern that it is epistemically irresponsible to investigate all of the person’s claims, I have ultimately acted epistemically irresponsibly, even though relative to me, if I don’t know yet that the person always lies, I act (seemingly) epistemically responsibly. Hence, determining if a person always lies is of the utmost importance, but how exactly do I do it?

How many times must a person lie before it becomes epistemically irresponsible to not assume that the person “will always lie,” hence making it epistemically responsible to not investigate every claim the person makes? Must the person lie nine out of ten times? Or ten out of ten times? Or five out of ten times? If the person lies ten out of ten times for five years, am I justified to assume the person will always lie, even though the person in five years may suddenly never lie again?

If a person lies nine out of ten times, it is perhaps fair to say that I would be rational to assume that the person will “always lie” and hence epistemically responsible to not investigate everything the person says, but it doesn’t follow that I will always be right (say the one time the person tells the truth). In this situation, due to a reasonable and rational probability assessment, “being epistemically responsible” can conflict with “being right.”
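
(A purely illustrative bit of arithmetic: if someone lies nine out of ten times and makes a hundred independent claims, the blanket policy of “dismiss everything this person says” will be correct roughly ninety times and wrong roughly ten times. The policy is rational on the probabilities, and yet those ten misses are built into it, which is the sense in which “being epistemically responsible” and “being right” come apart.)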

1.1 What is the epistemically responsible way to respond to someone who doesn’t lie all the time, but who lies a lot? Or what is the epistemically responsible way to respond to someone who doesn’t intentionally lie, but who accidentally and unintentionally spreads false information every now and then? Person one perhaps relays false information three out of ten times, while another relays it six out of ten times — is it epistemically responsible to assume both are people we can ignore? If we determine we are justified to ignore the first person and not the second, by what (objective) standard do we make this judgment? By what standard do we determine we should investigate and trust in what someone tells us up to four lies but not after five? Or by what standard do we decide that we should begin taking claims (gradually) less seriously?

1.11 It can be epistemic responsibility itself that can moralize our susceptibility to manipulation; it can be our very commitment to intellectual honesty that can contribute to the rising up of a dictator or the spreading of a conspiracy theory. A democracy that fails to realize this is a democracy that is more likely to fail than thrive.

1.12 We are all at different times someone who tells falsities like truths, but not necessarily because we are lying. I could knowingly or accidentally change or embellish a story I tell about wrecking on my sleigh (for the sake of entertaining others); I might not remember all the details due to the event being so long ago; I might accidentally mix in a dream with what actually happened (in which case I wouldn’t even realize I wasn’t relaying something true); and the like. Still, the basic element of a “sledding accident” would remain consistent, and if you believe what I tell you, seeing as there is no other way to investigate and/or be epistemically responsible about the story, it is because you “take my word for it.” You have no outside reason to either believe or disbelieve what I tell you, and in this unique situation, perhaps you are epistemically responsible to believe my (consistent) “macro-point” but not the corresponding details? Perhaps you should just let my story “wash over” you and react to my emotions and enjoy the entertainment, but not think about it further once the story is over? Seeing as in the grand scheme of things how you decide to “believe” my story is ultimately inconsequential, the question of epistemic responsibility in this situation isn’t a dire one, but while this might be the case between friends, the situation dramatically changes if it’s between the NSA and the citizens of the United States of America (for example).

What is the epistemically responsible thing to do in a situation in which you don’t have reason to either disbelieve or believe what I tell you? To take my word for it (perhaps only in my “macro-point”)? If we agree to this, then a leader or dictator can manipulate us by making claims that we have to “take his or her word for it”; to avoid this, we must agree that epistemic responsibility shifts relative to the level, power, and authority of those making the claim. But if it is epistemically irresponsible to “just take the NSA’s word for it” whenever they tell us something that we have no outside reason to either believe or disbelieve, what would it be epistemically responsible for us to do (the same could be asked about the media, the academic community, the corporations, and the like)? Surely if we do nothing we are likely to be controlled and manipulated, but then what must we do? Investigate the NSA? But that’s impossible without breaking into the NSA, and there’s no guarantee that what’s uncovered will be useful. Where should we begin looking and when should we stop searching? It seems that to avoid manipulation, we should do something that we cannot exactly determine. Furthermore, it seems that in order to avoid being controlled, we must take “The Pynchon Risk,” but that possibility is almost too much to bear (as we will discuss).

If Americans don’t have enough evidence to either believe or disbelieve the claim that the Kremlin actively conspired to arrange for Trump’s election, antagonizing a nuclear-armed power with false claims carries consequences different from — but just as dire as — those of allowing foreign intervention into our elections without repercussion. Considering these options, what is the epistemically responsible thing for an American citizen to do? Investigate the claims that could antagonize a nuclear-armed power or leave uninvestigated the claims that warn that our national sovereignty has been compromised? Whose word do we take for it? Those who claim the government doesn’t have good evidence that Russia has actively conspired in Trump’s favor or those who claim there is good evidence? How can we know without taking a “Pynchon Risk?” The existential anxiety of this situation is only worsened by our technologies (as discussed in “The Grand Technology” by O.G. Rose), and it seems to be the case that the internet and social media have “thrown” us (to allude to Heidegger) into a world where we have to face and realize the problem of justification, perhaps causing us to suffer more “conflict of the mind”-situations than ever before.

Under governments and powerful institutions, it is likely we will often find ourselves in this “gray zone of uncertainty,” and if there are powerful people who want to control us, it is in their interest to keep us in a place where epistemic responsibility cannot tell us which direction is the right one. Though perhaps no one controls us and the powerful are in a “gray zone of uncertainty” like the one they put us in (for they can do nothing else). Perhaps we are all ultimately controlled by the nature of reality (“there is no Big Brother”), where epistemic responsibility must conflict with truth because of the ontological structure of reality itself. But that possibility is almost too heavy to think on.⁴

1.2 What is the epistemically responsible thing for the media to do when faced with reporting on someone who lies all the time or often? If they don’t report on and investigate all of the liar’s claims, not only will they be accused of partisanship (by those who support the liar), but won’t they fail in their commitment to investigate truth? But if they investigate every claim, their time and resources will be sucked up, they’ll create continual static, and they won’t be able to get to the real stories (perhaps benefiting the liar). However, if they report on the lies, don’t they risk implying these countless claims are legitimate? Similar questions can be asked of the scientist when faced with the question of debating someone over something that is outlandish (say that “the world is flat”). But how can the media know which stories are lies without investigating them ahead of time?

(It should be noted that it is likely that those who think the claims of the liar should be investigated and those who think otherwise will be divided along ideological lines.)

1.3 In “Self-Delusion, the Toward-ness of Evidence, and the Paradox of Judgment” by O.G. Rose, it was argued that we should never say “a person is a liar,” only “that person lied”; in other words, we shouldn’t judge, only assess; we shouldn’t define a person’s essence by a person’s accidents (which, following Aristotle, the paper argued was a logical fallacy). But if this is so, are we not stuck investigating every claim made by a person who always lies (and at what point do we know we’ve investigated enough)? If so, couldn’t a manipulative person who wanted to establish a totalitarian regime easily control the people by declaring it epistemically irresponsible to decide that everything he says is a lie without investigation, resulting in the people never standing up to him, caught up in an endless investigation (one the dictator could ensure remained endless by continually making new claims)?

We shouldn’t judge a person “as a liar,” but it doesn’t follow from this that we cannot make the assessment that a person “is more likely to lie than not” or “the person will probably lie when he or she speaks.” Based on an assessment, we can determine that it is rational (based on probability) for us not to investigate every claim a person makes, but in this situation, it should be noted that we could end up acting epistemically irresponsibly if the person ends up saying something true. There’s no way to escape possibly ending up in a situation where “being rational” and “being epistemically responsible” come into conflict: we have to accept the possibility of tragedy or blindly end up forever investigating what is epistemically irresponsible to forever investigate. Democracies that fail to realize or acknowledge this kind of predicament are those most likely to end up investigating forever the claims of a person who always lies.

All well and good, but at what point is a person justified to assess (versus judge) that a person will “probably lie whenever he/she speaks?” And for that matter, when are we justified to assume our assessment about a person is correct?

The question of justification still looms.

2. What is the epistemically responsible way to respond to a computer that never lies? If we know everything the computer told us is true, it would seem that investigating every claim of the computer would be epistemically irresponsible, for wouldn’t it be a waste of time? We already know ahead of time what our results will be, but if we just accept what the computer said, don’t we “fail to think for ourselves?” In this situation, it would seem that it is epistemically irresponsible to “think for ourselves,” and yet isn’t it epistemically responsible to “think for ourselves” (as countless graduation speakers have preached)? In this situation, isn’t it epistemically irresponsible to be epistemically responsible?

This situation is similar to the one with the person who lies constantly, and how I approach it should be similar. At what point am I justified to claim/assume that a given computer (or anything for that matter) will never lie or never be wrong? After it has been right a thousand times? A million? If I decide a thousand times, by what standard am I justified to make this decision versus at a million (seeing as I must suddenly declare insufficient the very logic by which I decided to make my standard of justification a thousand versus a hundred)? Wouldn’t I be more justified at a million versus a thousand? But wouldn’t I be more justified at a billion than a million — ad infinitum? I seem to have fallen into a “runaway train fallacy,” into which all questions of justification may end up falling.

If a computer is right a thousand times, it is perhaps fair to say that I would be rational and reasonable to assume that the computer will “always be right,” and hence epistemically responsible to not investigate everything the computer claims in order to “make sure for myself that the computer is right” (and hence “think for myself”). In this situation, due to a reasonable and rational probability assessment, I find myself in another “conflict of the mind.”
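
To make the “runaway train” concrete, here is a small illustrative sketch (my own toy example, not anything established above), using Laplace’s old “rule of succession” to estimate the chance that the computer’s next answer is right after n correct answers. The estimate climbs with every success but never reaches certainty, so no count of confirmations hands us a principled place to stop:

```python
# A toy sketch (illustrative only): Laplace's "rule of succession" estimates the
# probability that the next output is correct after `successes` correct outputs
# in `trials` attempts. The estimate rises with every success yet never reaches 1,
# which is one way to picture why no tally of confirmations settles the question
# of when trust is "justified."

def rule_of_succession(successes: int, trials: int) -> float:
    return (successes + 1) / (trials + 2)

for n in (1_000, 1_000_000, 1_000_000_000):
    p = rule_of_succession(n, n)  # the computer has been right every single time so far
    print(f"after {n:>13,} correct answers: P(next answer correct) is about {p:.12f}")
```

Whether I stop counting at a thousand or at a billion, the arithmetic itself never tells me where to stop; that decision remains mine, which is precisely the problem.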

2.01 This question is similar to “What is the epistemically responsible thing to do if God tells you to build an ark because it’s going to rain for forty days and forty nights?” This question is explored in “The Eclipse of Faith and Reason” by O.G. Rose.

2.1 What is the epistemically responsible way to respond to a computer that is wrong only once out of every thousand computations? This problem is similar to that of the person who doesn’t lie all the time but only every now and then, as has already been discussed: we will find ourselves having to choose between “being reasonable and epistemically irresponsible” and “being unreasonable and epistemically responsible” — a paradox rarely discussed at colleges dedicated to cultivating “the life of the mind.”

2.2 The idea of an “expert” or “scholar” is to be like a computer that never lies, and arguably this is the hope of the media as well. The ideal is that an expert would be someone who would tell the public something that it would then be epistemically irresponsible for the public not to believe, even without investigating for themselves (which is arguably impossible for an average person to do to the same degree as can a professional scholar). In fact, the hope of an expert is to make it so that investigating what the expert says is in fact epistemically irresponsible (except perhaps by other experts), as it would be to investigate the claims of a computer that never lies (except perhaps by other computers that never lie).

Arguably, a society that lacks experts like this (who are assumed to be speaking truths far more often than speaking falsities and who are assumed to never intentionally portray falsities as truths) is a society that lacks a desperately needed group of people. The public simply cannot investigate everything, and yet in a democracy, the public finds itself having to make decisions about nearly everything. We require access to claims that we can trust are true without investigating them ourselves: we must be able to believe that “x is true even though I personally haven’t been epistemically responsible regarding x.” We require a group who can represent us in being epistemically responsible, a group through whom we can be epistemically responsible without being so directly. It’s similar to how in a functioning economy we require other people to do jobs we cannot do: I cannot be a truck driver, a teacher, a pencil maker, a coffee grower, etc. — I require others to fill functions I cannot fill, as they require the same of me. And if we don’t trust one another to fill these functions, the society will suffer. Likewise, if we don’t trust our experts, scholars, and media to be epistemically responsible for us, the society will suffer.

All this points to why “the legitimation crisis” which Jürgen Habermas warns about is so dire: once we cease to trust our experts, scholars, and media to be epistemically responsible for us — once we all feel we have to be epistemically responsible for ourselves about everything (an impossibility) — democracy will suffer, our capacity to avoid (unnecessary) “Pynchon Risks” will diminish, the likelihood we fall into radical doubt and cynicism will skyrocket, and the likelihood will diminish that we ever return to a day in which we don’t suffer from a “legitimation crisis.”

2.21 To emphasize a point: if there is no “expert class” we can trust, we cannot be/feel robustly epistemically responsible without studying everything, which is impossible. Even if we acknowledge that “we cannot study everything,” without an “expert class,” we will feel uncomfortable with this truth, precisely because we will feel more responsible for our failure to do enough.

2.22 “On Trust” by O.G. Rose argued that trust is something that must be given until a person has reason to withdraw it: trust cannot be earned. Do note that the question of justification is a question of “when are you justified to trust x” versus “when are you justified to give x a chance to earn your trust” (the second is hopeless, as argued in the paper). “On Trust” does not argue that we should trust everyone and everything, but it does argue that it should be our default to trust until we have reason not to, and that we shouldn’t think of trust as something earned or unearned. Problematically, we do have reason to withdraw our trust from the media, politicians, etc. (we don’t know the people directly who make it up and, on occasion, some members of the press and government have attempted — intentionally or unintentionally — to manipulate and deceive us), and yet we can’t investigate everything. This suggests a problem.

3. What is the epistemically responsible way to approach a conspiracy theory, myth, or the like? If I decide “Big Foot doesn’t exist” without first looking for Big Foot, have I acted epistemically irresponsibly? But if I begin looking for Big Foot, at what point of investigation would I slip from being epistemically irresponsible for believing “Big Foot doesn’t exist” to being epistemically responsible for believing such? After checking ten mountains? After checking fifteen? By what standard can I say ten is enough and not fifteen? Is there no objective standard?

Is it epistemically responsible or epistemically irresponsible to investigate the claims that 9/11 was an inside job? On the topic of investigating conspiracies in general:

a. If we say “sometimes it’s epistemically responsible,” by what standard do we determine it is one time versus another (keep in mind that people are likely to disagree about which conspiracy theories are plausible and which aren’t)?

b. If we say “it’s always epistemically responsible,” then we have set ourselves up to not only be easily controlled and manipulated by a dictator, leader, or “other” who will just continually throw conspiracy theories, lies, etc. our way, but we have also set ourselves up to fall victim to every “Pynchon Risk” we come across (as will be discussed). The only way to stand against this would be to face an existential anxiety caused by a feeling of epistemic irresponsibility, which is something by definition we will feel like we shouldn’t stand against.

c. If we say “up to a point it is epistemically responsible,” then how do we decide when it has ceased being epistemically responsible to investigate a given conspiracy theory? By what standard? When we feel we have gone on long enough? Do emotions carry such authority? If we agree with the work of Daniel Goleman, perhaps they should, but by what standard can we say emotions should? Because perhaps emotions have been right for us nine out of ten times? Then in this situation, perhaps we can be rational to follow our emotions, but not necessarily epistemically responsible. And why is nine out of ten times good enough versus nineteen out of twenty?

d. If we say “it is never epistemically responsible,” by what standard can we make this claim, seeing as some conspiracy theories have turned out to be true (such as Hemmingway’s claim that the CIA was tracking him)? Unless we’ve investigated every conspiracy claim to know they all lack any truth, we cannot say “they are never epistemically responsible to investigate” without in that very act being epistemically irresponsible (though perhaps we act reasonably and/or correctly).

It would seem that there is no objective standard by which we could establish that it is “sometimes,” “always,” “up to a point,” or “never” epistemically responsible to investigate a conspiracy theory: it doesn’t seem that anything we decided to do would be the right or wrong thing to do. It would just be a choice that was deeply consequential, but it would perhaps never be possible to say that it was a good or bad choice.

(Keep in mind that the points listed above would also apply to investigating myths and religions: for example, at what point are we epistemically responsible to say Christianity is true or that Christianity is false? And does anyone actually/practically come to believe in a religion by this kind of questioning, or do they just find themselves “in it,” almost mysteriously? The same could be asked of ideologies in general.)

Let us pause and meditate a little more on the possibility that it is right to say, “it is never epistemically responsible to investigate a conspiracy theory.” Even if this were true, it wouldn’t address the problem of when we are justified to say x is “a conspiracy theory” versus “a theory.” Certainly, it would be helpful if we could establish that once we (perhaps justly) categorize x as “a conspiracy theory,” we are then justified and epistemically responsible not to investigate it.

If we define “a conspiracy theory” as “that which would require taking the Pynchon Risk to (dis)prove,” then perhaps I am not only reasonable but also justified to assent to and live by the premise “it is never epistemically reasonable to investigate a conspiracy theory” (though I still face the problem of addressing by what standard I am justified to define “a conspiracy theory” as such (which points to the problem of justification in deciding that x be the signifier for (signified) y, though this is admittedly a technical concern more so than a practical one)).

What is “The Pynchon Risk” (a notion inspired by The Crying of Lot 49 by Thomas Pynchon)? That is addressed fully in “The True Isn’t the Rational” by O.G. Rose; here, I will say that it is “to investigate if x is true when there is no guaranteed point at which we’ll be able to say for sure that x is true/false.” For example, if we begin investigating the question “Does God Exist?” there is no guarantee that if we read every theology book, every religious text, every New Age Atheist book against religion, and study the question for eighty years, we’ll be able to say in the end for sure whether or not God Exists. To investigate questions about God’s Existence and God’s Identity is to take a “Pynchon Risk.”

A good image for understanding “Pynchon Risks” can be found in Made in Abyss by Akihito Tsukushi. In the series, there is an abyss which, once entered, inflicts a “curse” that makes it hard to come out. At a certain depth, it’s nearly impossible to climb out without losing either your life or your humanity. Certain characters enter the abyss in hopes of adventure and of achieving certain goals, and if they accomplish their dreams, then perhaps entering the abyss was worth it (seeing as they might never be able to return home). But if they don’t, not only does the quest prove to be a waste, but now they are stuck, and so the deeper they travel, knowing they can’t go back, perhaps the more desperate they become to find evidence that the journey was worth it, that in the end, it will all add up. The only way characters could have ever known if entering the abyss would be worth it was by entering the abyss, and standing outside the abyss at the top, before any journey ever started, longing, curiosity, and perhaps even epistemic responsibility ate at them.

It is not the case that answering any given question requires taking “The Pynchon Risk,” but some do, and faced with those questions, would epistemic responsibility have us take “The Pynchon Risk?” Is the only way to avoid “The Pynchon Risk” to be epistemically immoral and intellectually dishonest? Is the nature of epistemic responsibility to “push us” into “Pynchon Risks” (all of them, I might add)? Is epistemic morality so cruel that it would have us enter the abyss? Is that not where Nietzsche warned we could become a monster?

If we say that epistemic responsibility doesn’t require us to investigate every “Pynchon Risk,” by what standard do we make this claim? It would seem by the practical knowledge that a “Pynchon Risk” could forever consume us (without any guarantee of successfully determining truth) that we are justified to avoid them, even if there is perhaps truth to be found by taking such a risk. Perhaps we are justified to avoid “Pynchon Risks” because of the practical fact that we cannot know if a “Pynchon Risk” is worth taking except by taking it (it is circular)?

Personally, I think this practical reality is legitimate grounds for avoiding investigating the validity of x if determining its validity requires taking a “Pynchon Risk.” But how can we be so sure that x actually requires taking that risk versus only appears to require taking that risk? Furthermore, if we agree that a person is justified to avoid “Pynchon Risks,” couldn’t someone just claim that “x is a Pynchon Risk” to avoid investigating it (and to avoid being compelled by epistemic responsibility to do so)? And to scare people away from x and remove the epistemic imperative to investigate it, couldn’t a leader (or dictator) claim “x is a Pynchon Risk,” aware that if people found out the truth about x, his or her power would be threatened? And wouldn’t the above logic mean Atheists aren’t intellectually obligated to investigate God’s Existence? Wouldn’t the above logic mean Theists aren’t intellectually obligated to investigate God’s Nonexistence? (Please note that people naturally “absorb” their first worldview versus “think their way into it,” as discussed in “Compelling” by O.G. Rose.)

If we accept that “we aren’t epistemically obligated to investigate Pynchon Risks,” that “we are justified to avoid them on practical grounds,” then we must also concede that the Christian isn’t epistemically obligated to investigate Islam (potentially worsening tribalism in the midst of Pluralism), we must leave open a way we could be controlled and manipulated by people in power, we must make it possible for a tyrant to not answer or feel obligated to address critics, and furthermore give ourselves the power to avoid investigating anything we don’t want to investigate by claiming “it’s a Pynchon Risk” even when it very well might not be (seeing as there is no way to tell what only “appears” to be such a risk versus actually is one without investigating and taking a “Pynchon Risk,” perhaps without knowing it).

But perhaps this trade-off is better than the (“thoughtless”) alternative?

That may require a “Pynchon Risk” to determine.

3.1 When am I epistemically responsible to claim x is “a conspiracy theory” versus “a theory?” Are the allegations against Trump involving Russia “a conspiracy theory” or “a theory” (do note that the severity of the claims increases the urgency of answering this question quickly)? By what standard do I determine x is “a conspiracy theory” as opposed to “a theory?” Perhaps “a conspiracy theory” is “a theory that is internally consistent but lacks hard evidence in its favor,” while “a theory” has more hard evidence backing it up? So be it, but does that mean only theories can be true? Not necessarily: ten days from now, hard evidence could appear, transforming a conspiracy theory into a theory (as if it were always a theory).

Is “a theory” a chain of contingent deductions no longer than ten, while “a conspiracy theory” is a chain of contingent deductions longer than ten, say? Does “a theory” consist of deductions of which at least sixty percent can be falsified, while the majority of deductions that make up “a conspiracy theory” cannot be falsified? By what standard can we say which? According to what standard of justification for defining x are we justified to say, “we are justified to define x as this or that?”

3.11 At what point does “a conspiracy theory” garner enough evidence to become “a theory”; “a theory,” enough to become “a truth?” Again, the question of justification lurks.

3.2 Intentionally or unintentionally, a leader (or dictator) could manipulate, control, confuse, or turn the public cynical by constantly appealing to conspiracy theories that could only maybe be falsified by taking “The Pynchon Risk.” Lacking a clear, objective standard by which a given person could determine if he or she is justified to not investigate the claims of the leader, the public would likely be at a loss to know what to do (especially if the people don’t trust their experts or media due to a “legitimation crisis” which the leader very well may have caused).

3.21 Since a given conspiracy theory may only be disproven by taking “The Pynchon Risk,” it is possible that the leader who is manipulating the public is also manipulating himself or herself without realizing it: the leader may not even realize that the truth claim to which the leader appeals is “a conspiracy theory” versus “a theory” or “a truth.” It is possible in this situation for everyone to be manipulated, including the manipulator — a Kafkaesque, emergent result for which everyone and no one is responsible (see “There is No Big Brother” by O.G. Rose).

3.3 Epistemic responsibility entails a commitment to resist confirmation bias, something we engage in constantly (much more than we realize — it could be argued that the common advice for writers to “know your audience” means “know whose confirmation bias you will appeal to”). But if this means we must take a “Pynchon Risk,” is this necessarily a reasonable thing to do? Perhaps the role of confirmation bias is to help us avoid taking that very risk, to existentially stabilize us.

3.4 If the Atheist claims he doesn’t want to investigate religion because it is a “Pynchon Risk,” we should be understanding. If the Theist tells us something similar, we should also extend sympathy.

And yet.

4. What is the epistemically responsible way to respond to someone who says, “what I am saying is a lie?” In other words, considering Gödel, what is the epistemically responsible way to approach a Liar’s Paradox? If what the person is saying is true, it isn’t a lie, but if it’s a lie, then it is true — a paradox. Hence, if we investigate to determine if the claim is true, we must investigate to determine if it is a lie, which would result in it being a lie; if we investigate to determine if the claim is a lie and determine it is, we determine the claim is true. In this kind of situation, epistemic responsibility would compel us to lock ourselves into a paradox, and realizing this, it would be reasonable for us not to investigate the claim in the first place. But would this be epistemically responsible? It would be like avoiding a “Pynchon Risk”: reasonable, but perhaps epistemically irresponsible. It’s hard to say.

Perhaps sometimes decisions are just decisions, neither epistemically responsible nor irresponsible. Perhaps accepting that decisions are sometimes just what they are is our only hope to avoid being led into destruction by epistemic responsibility. Perhaps, but according to what standard of justification?

And so, we face the question.

VII

1. We now turn to the question of justification: if someone presents me one piece of irrefutable evidence in favor of case x, am I justified to believe x, or do I need two pieces of irrefutable evidence? Why not three? Why not four? Certainly, it would depend on the quality of the evidence (and being convinced is ultimately a combination of evidential quality and quantity), but by what standard do I measure quality? And when I am convinced by quality, is it according to an emotional, subjective, or objective standard?

Consider these questions:

a. At what point are we justified to trust x?

b. At what point are we justified to consider x evidence for/against case y?

c. At what point do we have “practical certainty” about x?

d. At what point am I justified to accept axiom x versus axiom y (which will function as a baseline against which I can define epistemic (ir)responsibility)?

e. At what point do I cease to be epistemically responsible to be skeptical and begin to be a nihilistic doubter (who ultimately negates himself)?

f. At what point do we decide (based on assessment) a person has lied or told false information as truth enough to justify assuming the person will always mislead and/or to justify not talking with that person again?

g. At what point have we investigated a case enough to be justified to claim the case is true or false?

h. At what point are we justified to trust that our experts, scholars, and media are being epistemically responsible for us?

i. At what point does “a conspiracy theory” garner enough “hard evidence” to become “a theory”; “a theory,” enough to become “a truth?”

j. At what point of plausibility do we decide that one conspiracy theory is epistemically responsible to investigate and not another?

k. At what point of investigation are we justified to claim, “this is endless” and/or “I am taking a Pynchon Risk” (and hence are justified to “turn back” before it’s too late)?

l. At what point are we justified to claim “x is actually a Pynchon Risk” versus “x only appears to be a Pynchon Risk?”

m. If we say that epistemic responsibility would obligate a person to investigate “all the questions of life” up to point x but not point y (lest the person go crazy), how would we be justified to make this claim?

n. At what point would we be justified to say “I am only epistemically responsible for investigating claims that meet the standards of epistemological method x” and/or “epistemological methods x, y, and z?”

o. At what point of time are we justified to say “it is probable that x will be true” after x has been true thus far (in other words, at what point are we justified to say “it is probable that (insert)”)?

p. At what point are we justified to be “reasonable though not epistemically responsible” (in other words, at what point are we justified to not be justified)?

q. At what point are we justified to claim, “x is a racist, sexist, person, cup, etc. (any description or identity)?”

r. At what point are we justified to say, “I don’t think like everyone else?”

s. At what point are we justified to think issue x is more important than issue y?

t. At what point are we justified to change our standard of justification (hence changing the standard against which we define epistemic responsibility, hence perhaps making it seem like we were “always epistemically responsible” when such might not be the case)?

Perhaps it is better not to ask about justification: the question may lack an answer. Every potential answer may fall victim to a “runaway train fallacy,” or the answer might be so complex that only few if any will understand it. Or whatever answer there is, it may be one involving “high order complexity” (as discussed in “Experiencing Thinking” by O.G. Rose), rendering it impractical (especially over multitudes). If we ask about justification and find there is no answer, we cannot go back, like finally seeing the hidden image of a “scrambled picture — Find What the Sailor Has Hidden — that the finder cannot unsee once it has been seen.”⁵ Cursed/True.

Perhaps we are only justified to believe there is no justification — should we find out? Wouldn’t it be epistemically irresponsible to turn back now? But if we make the choice to move forward — to choose to see what we didn’t know we would see — we couldn’t turn back (as perhaps it goes with all reading). But even now you cannot turn back without being epistemically irresponsible (though perhaps still sane). Now, we must move forward to be epistemically responsible (though perhaps insane).

In the end, we might conclude there is no objective standard of justification, and because we’ve read this far — because we cannot unread what we have read — epistemic ethics would compel us to perhaps risk the sense of sanity necessary for any epistemic activity to be possible at all. It’s like a joke but not.

God help us all.

1.01 Yes, I’m intentionally being dramatic here, but what I describe is what it is like to take a “Pynchon Risk” (which this paper has speculated epistemic responsibility would often compel us to take).

1.02 The question of justification is like the question of identity brought up by “The Ship of Theseus” thought experiment: if I continue removing parts from a ship, at what point does the ship stop being itself? Conversely, if I started putting pieces together, at what point do the pieces become the ship? Similarly, if parts of me were removed, at what point would I cease to be myself? If the parts of me were separated and began to be assembled, at what point would I begin to be myself?

On this line of thought, it should be noted that the question of justification is like a number of serious theological and philosophical questions that we often confront:

a. At what point does a person properly balance “practice” and “philosophy?” Theology and praxis?

b. At what point should we cease studying justice and begin acting to bring about justice? At what point should we cease trying to understand the world and move to change it?

c. At what point do we cease being responsible for doing something? When we receive 51% of help versus 49%?

d. At what point does a person become moral versus immoral? When 51% of their acts are immoral or 70%? According to what standard?

e. At what point of discussing x have I emphasized it too much in comparison to y? At what point have I failed to be “balanced” in my discussion?

f. At what point does x cease to only mean x and rather (also) mean y?

g. At what point does “free speech” become “hate speech?”

Perhaps there is a dialectic here we cannot avoid. Perhaps this question of “balance” isn’t so much for individuals but for the whole society: perhaps we should be more concerned about if the society as a whole is “balanced” versus a given person. We seem capable of everything but balance, especially to others.

1.03 Do note that even if it could be established that x constitutes justification in one situation, it doesn’t follow that x will necessarily constitute justification in a different situation.

1.04 The problem of justification is part of the question “How do we know something is true?” and it should be noted that we must experience truth as something “out there” — beyond our subjectivity — in order for it to strike us as plausible and hence able to provide us with stability through our daily lives. Truth that isn’t “out there” cannot be a “plausibility structure” (as Hunter and Peter L. Berger discussed), yet the justification by which we provide truth a sense of being objective and “out there” is necessarily based on a subjectivity that is “within.” Considering this, all truths are always on the verge of failing to provide stability for our daily lives, a notion that will be discussed extensively throughout “Belonging Again” by O.G. Rose. If there is no objective standard of justification, there is no way to ever escape the possibility of any given “plausibility structure” failing, meaning we are always susceptible to the existential anxiety that Berger warned is likely to make us rationalize totalitarianism to stabilize (as perhaps some dictators know).

1.05 The problem of justification is why ideology often proves more powerful than facts: it enables people who want to believe x to find a single piece of evidence for x and believe it without any sense of intellectual dishonesty, as those who want to disbelieve x can discard it upon finding a single piece of evidence against x (and arguably there is always at least one piece of evidence for and against any given premise, even when the premise is true). If there is no objective standard of justification, it is likely that this sort of thing is what people will do and continue to do, contributing to costly tribalism in the midst of Pluralism. Considering that we require “a hinge” for “the door” of our thoughts to turn (to allude to Wittgenstein), perhaps we are biologically and neurologically wired for accepting or discrediting x as quickly as possible; perhaps the problem of justification helps us with our biological and neurological needs.

1.06 Like the problem of justification, there is also a similar problem of credibility: with how many instances of person x telling the truth am I justified to believe person x is credible? Likewise, with how many mistakes am I justified to believe person y is not credible? The bar is relative to the person, and thus people who are Liberal are likely to have a “lower bar” for deciding when a Conservative pundit has “lost credibility,” and a “lower bar” for what a Liberal thinker must do to be credible (and vice-versa). This can contribute to tribalism and to helping people think they are rational and justified to not take seriously those with whom they disagree.

1.07 It is perhaps the function of thought to provide us with a “sense” of certainty and justification in the face of the truth that true certainty and justification are impossible. Thought thinks to stabilize an ever-instability — a chaos that thought cannot even let us catch a glimpse of lest we never (fully or robustly) believe in its stability again (even if we want to — unless that is we can turn off our brains).

(Do note that it is perhaps the case that we are “toward” authoritarianism because we are “toward” establishing existential stability.)

1.1 The question of justification permeates every sinew of “the life of the mind,” and yet it is a question I believe we rarely identify or recognize as lurking in the background. It also might be a question that we are unable to answer, threatening to raze the whole temple of thought to the ground. If we cannot establish an objective standard of justification, to keep the temple up, we still must live like we do: we must live like we have certainty about that of which we cannot be certain, for without any certainty, we have no standard against which even to measure (un)certainty.

Wittgenstein claimed ‘it belongs to the logic of our scientific investigations that certain things are in deed not doubted.’⁶ ‘[…] We just can’t investigate everything, and for that reason we are forced to rest content with assumption. If I want the door to turn, the hinges must stay put.’⁷ In other words, if I want to function, even if I cannot truly justify what I think, I must act like I can truly justify what I think. And this is exactly what we all do (whether we ultimately must or not), without thinking about it (precisely because perhaps we can’t think about it, especially not in deed).

If we cannot establish an objective standard of justification, then we can only establish subjective standards, which means there will always be room for debate between people — one who believes “four pieces of evidence is justification to believe x” and another who believes “five pieces of evidence is justification to believe x” (regardless of the severity of the case, the situation x is situated in, and the like). The first person will be “justified” to believe four pieces is enough, while the second person will be “justified” to believe one more piece of evidence is needed. The first person will likely think the second is in denial and perhaps an ideologue who doesn’t want to believe x, while the second is likely to think the first is too easily persuaded and perhaps even an ideologue who wants to believe x. Both, relative to themselves, will be “correct and justified” to think this of the other, and problematically both will have reason to believe “the other side” cannot be reasoned with, threatening democracy.
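
As a small illustration (a hypothetical sketch of my own, not a claim about how anyone actually reasons), imagine reducing the two disputants to nothing more than different thresholds applied to the same evidence; each is “justified” relative to its own standard, and the disagreement survives untouched:

```python
# A hypothetical sketch: two subjective standards of justification applied to the
# same body of evidence. Relative to their own thresholds, both parties count as
# "justified," and neither standard can adjudicate between them.

from dataclasses import dataclass

@dataclass
class Standard:
    holder: str
    pieces_required: int  # evidence needed before belief counts as "justified"

    def justified_to_believe(self, evidence_pieces: int) -> bool:
        return evidence_pieces >= self.pieces_required

shared_evidence = 4  # both parties see exactly the same four pieces of evidence for x

for standard in (Standard("first person", 4), Standard("second person", 5)):
    if standard.justified_to_believe(shared_evidence):
        verdict = "justified to believe x"
    else:
        verdict = "not yet justified to believe x"
    print(f"{standard.holder} (requires {standard.pieces_required} pieces): {verdict}")
```

Nothing in the shared evidence settles which threshold is the right one; that is the whole trouble.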

We have to believe something (axiomatically) (in order to have a standard by which to determine what we should believe and what we shouldn’t), and yet if what we believe can only be based on subjective justification, then it would seem we have to believe in something ultimately arbitrary in order to believe in anything at all (such as the belief “x is arbitrary”). It would seem an ultimately arbitrary belief is needed for us to have a standard against which we could distinguish arbitrary beliefs from necessary beliefs, one that we will necessarily not truly believe is (equally) arbitrary with other possible standards (for if we pick x instead of y, we cannot “practically believe” x is equal to y, though we may provide such lip-service). Likewise, we cannot truly believe our subjective view is in fact (equally) subjective (we all necessarily feel our subjectivity is “more objective” than other subjectivities). Hence, we must all ascribe to an ultimately arbitrary belief that we must practically believe everyone ought to believe as well (though we know to claim otherwise when asked directly) — unless that is we can believe something is true that we know is false. As will be discussed, this poses a major problem for Pluralism: the problem of justification is unfortunately abstract and yet practically necessary to solve or learn to live with.

1.2 Considering “The True Isn’t the Rational” by O.G. Rose, do note that the question of justification entails the question, “At what point are we justified to accept x truth instead of y truth?” and keep in mind that it is relative to what we believe is true that we determine what we believe is rational. If I think it’s going to rain today, it is rational to bring an umbrella; if it doesn’t rain, I still acted rationally: “being true” determines the “toward-ness” of “being rational.” Considering this, if I accept x truth, I transform what constitutes “being rational” to myself (which is especially dramatic if previously I accepted y as my truth and defined rationality relative to y). Hence, the question of justification is the question of “At what point am I justified to accept a truth that will change my entire structure of rationality?” (And keep in mind that if I change from y to x, how can I do so without potentially invalidating the entire rationality which perhaps helped me decide I should accept x instead of y? When am I justified to perhaps engage in this contradiction and even hypocrisy?)

2. To be epistemically responsible is to not believe something until we are justified to believe it. Hence, if there is no objective standard of justification, there is no objective standard against which to define epistemic responsibility (according to the right method, evidence, logic, and so on, as has already been discussed). Do note that if we cannot determine when a person is justified to believe x, then we cannot determine when it is epistemically responsible for a given person to believe x except by subjective and relative measures and standards. If this is the case, justification is not only helpless to stop tribalism and Pluralistic collapse in our Age of Pluralism, but it is also the case that justification will precisely make the problem worse, seeing as there will be no way to stop people from ascribing to their own personal standards of justification against which they define themselves as epistemically responsible and others as epistemically irresponsible. This will lead people to become increasingly tribal and intellectually self-segregated, all while thinking of themselves as “intellectually honest” and “open to a convincing argument.”

2.1 Is a justification an “ought?” If we ask, “At what point is x justified?” are we asking “At what point ought x be believed?” Likewise, if we ask, “At what point are we justified (and epistemically responsible) to leap from x to y?” are we asking, “At what point ought we leap from x to y?” It would seem so: if x is justified, it would seem we ought to believe, trust in, assume, and/or live according to x; furthermore, if x is justified, it would seem we ought to be/feel compelled to believe x.

The point at which a Conservative is justified to become a Liberal would seem to be the point at which a Conservative ought to become a Liberal, for if a Conservative is justified to become a Liberal, it would suggest there is reason to be Liberal over being Conservative. If I am only justified to say Liberalism is (equally) “as justified” as Conservatism, then I’m not really justified to change from Conservatism to Liberalism: if I do change, it would be a (neutral) choice, but not really a justified choice (I would be justified to “be” a Liberal, but not so much to “leap” into Liberalism). If anything, it seems I would be more justified to stay a Conservative, seeing as it has worked for me up to that point, and seeing as I don’t yet know if Liberalism will be as “practically justifiable” as Conservatism — I would need more evidence on the side of Liberalism to justify making “the leap” despite the lack of “practical evidence.”

A justification implies an ought; after all, justification orientates epistemic responsibility, and what is ethics if not an “ought?” Hence, if there is no objective standard for justification, there is no way to establish an objective “ought”: it is not possible to establish that everyone ought to believe x. If there are only subjective standards of justification, then there are only subjective “oughts” in regard to what people should believe, and if that is the case, there is no way for me to make it epistemically responsible for you to believe x, as there is no way for you to make it epistemically responsible for me to believe y. Likewise, there is no way for the State, church, community, etc. to make it epistemically responsible for the people to believe x, and if that is impossible, it would seem incredibly difficult for a government to keep a civilization stable for long.

If there are only (or predominantly) subjective standards of justification, then what people believe is what they will never be obligated not to believe (and in this way, “the map is indestructible”), for the standard can always be shifted before it obligates us to disbelieve, discredits what we think, or the like: what people believe is what people cannot necessarily be compelled to disbelieve. This doesn’t mean people can’t change their minds or choose to submit themselves to a given standard of justification, but it does mean that no one must do so (and that it’s unlikely a given person who is emotionally invested will).

If we believe that x justifies believing y, we must think everyone should be compelled by x to believe y — that because x is true, people ought to believe y (and we’re even prone to go so far as to think it is “self-evident”; the degree to which it is (un)obvious to us is the degree to which we think it should be (un)obvious to others). If people don’t believe x and y are true, our very belief in y through x would justify us to think of those others as failing to be rational, intellectually honest, and epistemically responsible. The act of believing in y is an act which inescapably frames others.

To believe is to color.

Belief always portrays.

If we believe x is justified at point z, we will necessarily believe that everyone should believe x at z, unless that is we believe that what justifies x to us shouldn’t be what justifies x to everyone. If we believe the Pro-Life position is “proven” with five strong arguments, we will necessarily believe that the Pro-Life position should be proven to everyone who hears/sees those same five arguments, unless that is we believe what convinces us shouldn’t convince everyone else (which would be for us to believe something nonsensical, something only a minority (if anyone) is likely to accept). If we believe that five pieces of evidence justifies belief in case x for us, but seven pieces of evidence justifies belief in x for others, we believe that who a person is changes the logicalness or “truthfulness” of a given justification or case. This is a kind of ad hominem fallacy, and such illogic is likely to eventually become apparent to us. Sure, people are convinced at different amounts of evidence, but acknowledging this is different from believing people should be convinced by different amounts of evidence.

To be intellectually coherent, we must believe that if we are convinced by z to believe x, everyone ought to be convinced by z to believe x, for otherwise we wouldn’t think z was convincing. But paradoxically, if standards of justification are ultimately subjective, then what we must believe that everyone should think is that which is ultimately arbitrary: we must necessarily think what is arbitrary isn’t arbitrary; we must think everyone ought to believe x because we believe x, as others must think everyone ought to believe y because they believe y. The nature of thought itself forces us into this paradox.

If we believe x is justified by z, we must project onto everyone we encounter that, relative to them too, x is justified by z. Since when we experience z we experience it as justification for x, we must think that everyone who experiences z also experiences it as justification for x. Hence, if people experience z and don’t believe in x, we must think they are being intellectually dishonest (ideological, biased, etc.); if people experience z and aren’t compelled by it to believe x, we must think they are being epistemically irresponsible. And the nature of reality is such that we cannot not have standards of justification like z: the nature of reality is such that we cannot not (eventually) be in a situation where if we are being logical (according to the standard of justification we (arbitrarily) accept), we must think of others as epistemically irresponsible (who relative to themselves are being epistemically responsible and we the epistemically irresponsible ones).

The nature of reality itself primes us not only for misunderstanding, but for misunderstanding that we cannot even experience as misunderstanding.

2.11 If we encounter someone who does agree with our standard of justification of z for x, then we will likely believe that the person is intellectually honest and epistemically responsible (when this may not be the case or perhaps is only the case for this instance: it doesn’t follow that a person who is epistemically responsible in one instance is always such). Like the nature of thought described in “The Phenomenology of (True) Ignorance” by O.G. Rose, it would seem the nature of justification primes us for “confirmation bias” (including the confirmation bias that we don’t fall victim to confirmation bias).

2.12 To allude to “Compelling” by O.G. Rose, at what point ought people be compelled to believe x (instead of y)? Do note that to change a standard of justification is to change at what point people ought to be compelled to believe a given premise. If there are only subjective standards of justification, there will be no way to establish objectively that “x is that which everyone ought to be compelled to believe” and/or “x is compelling,” and consequently everyone will necessarily believe that their truth and its corresponding standard of justification ought to be the truth and standard of justification for all people. People who fail to agree are those who — according to everyone’s standards of justification — necessarily fail to be “reasonable,” “intellectually honest,” and the like.

It might ultimately be the case that because of varying standards of justification, we cannot ultimately compel anyone to believe anything (especially if they are creative and smart enough to endlessly rearrange the variables of their ideology: especially if “the map is indestructible,” as argued in “The True Isn’t the Rational” by O.G. Rose). If this is so, then at the end of the day it is ultimately going to be action and/or “the will to act” that will change the world versus argument.

In a world with subjective standards of justification, action is more likely to compel, “justify,” and change than argument, though it is not the case that those “changed” by action will be those who accept or understand the change, raising the probability of social unrest.

This hints at the possibility that violence, war, revolution, protesting, and the like are necessary and unavoidable for y to come to be accepted over x, regardless if y is “better” than x (though if y is accepted over x, it will necessarily “seem” as if y is “better,” given that y doesn’t give rise to obvious bloodshed, mass poverty, bigotry, etc.). This being the case, there is also more incentive to act than to think, though it is still the case that thoughtless action is more likely to cause error than is thoughtful action (though it doesn’t follow that “thoughtful action” is necessarily “good” or “without error”).

2.121 If action is what is more likely to compel than argument, then it is probable that the ideology which is most likely to stimulate action is the ideology most likely to compel (and “flip”) civilization in its favor. As discussed in “The Heart/Mind Dialectic and the Phenomenology of View(s)” by O.G. Rose, this is likely the ideology which emotion favors, for the majority is more likely to act for emotional reasons than intellectual reasons (which isn’t always a bad thing) (perhaps precisely because standards of intellectual justification shift). Unfortunately, if this is always the case, the likelihood is great that “the shooting begins” and Pluralism collapses into social unrest (to allude to Before the Shooting Begins by James Davison Hunter, as will be discussed later and in “Belonging Again” by O.G. Rose).

2.122 This all points to the brilliance of Martin Luther King’s “nonviolent resistance,” which prompted “existential reflection” in white supremacists, often leading the supremacists to change themselves. MLK’s efforts didn’t force people to change but forced people to see themselves in a way that horrified them and made them run from themselves. By resisting non-violently, the resistor lets the actions and will of the aggressor run uninterrupted, which robs the aggressor of the ease of rationalizing and avoiding existential reflection. By physically resisting, a person allows the aggressor to rationalize his actions: he can say, “I was defending myself,” or “I was just doing my job,” etc. — by not resisting, no such rationalization can occur (or at least not easily). This makes existential reflection all the more difficult to avoid, along with the realization of one’s own monstrosity, immorality, and cruelty. This doesn’t mean there is never a place for direct resistance, but it does mean that if we want to change hearts and minds, we must resist in a manner that makes it difficult for hearts and minds to avoid seeing themselves.

A fuller exploration of “existential reflection” is found in “Equality and Its Immoral Limits” by O.G. Rose.

2.123 If it is the case that no one must ever shift from x to y, then it is likely that those who do shift do so for emotional versus intellectual reasons (thanks to action), and if this is the case, art is more likely to change what people believe than is philosophy. Furthermore, empathetic genius is more likely to change people than is intellectual brilliance.

Lastly, it should be noted that it is unlikely that we will be able to keep history from repeating itself, seeing as we don’t learn from history how the people of history “felt”: we only learn from history the facts of history (if those). Those who in the past made a grave mistake often if not always found themselves in an ideology that contributed to that mistake, and if “the map is indestructible,” it was emotions that were needed to help people escape that ideology. Presumably those feelings never came about (or came too late), and if we learned from history how people “felt” (versus merely what they thought), we might prove ourselves better equipped to stop repeating the mistakes of history.

If minorities were mistreated in the past because for whatever reason it became epistemically responsible to mistreat minorities or to do nothing as others mistreated them, then, failing to understand “the conflict of mind,” we will likely fail to stop this from happening again. It is very possible that it is epistemic responsibility which continues to make history repeat, and since we don’t learn to “feel” from history, we don’t learn to break the cycle.

It seems it would be unnatural for us to be intellectually and emotionally equipped to avoid repeating history.

2.13 What constitutes our standard of justification is a standard which we must necessarily think everyone who is epistemically responsible should ascribe to; otherwise, we are guilty of a kind of existentially-unnerving ad hominem fallacy. Our standard tends to be “self-evident” to us — if we believe case x thanks to five pieces of evidence, it tends to be “self-evident” to us that these five pieces of evidence should convince anyone of case x — and hence we innocently think that it is also “self-evident” to everyone else; otherwise, they must be unintelligent, epistemically irresponsible, or ideological. This costly, tribe-creating phenomenon is what could be called “the self-deception of the self-evident,” and it is similar to how when we speak, what we are trying to say tends to strike us as obvious — how when we write, what we mean is crystal clear — and so it is hard for us to truly understand why others don’t understand us. Since our words seemingly “wear their meanings” to us, it is easy to forget that to others, our words appear bare. Steven Pinker refers to this phenomenon as “the curse of knowledge” in his brilliant The Sense of Style, and I think his “curse of knowledge” applies well to the problem of justification.

2.14 Whatever truth we believe ought to be the truth is likely to be in line with whatever ideology we were born into, seeing as we can shift standards of justification and hence avoid ever feeling compelled to change our ideology (and hurt our family or tribe, for example), and seeing as our first ideology is the “hinge” thanks to which “the door” of our worldview functions (Wittgenstein), such that to change it is to risk breaking. Also, a given person’s ideology will necessarily feel like something he or she ought to keep ascribing to, and so it is likely that people — trying to be intellectually honest — will conclude that they ought to keep believing their (first) ideology.

(Birthrate seems likely to be the best indication of the future mindset of the globe.)

2.15 What ought we do if someone calls us and tells us a bomb is about to go off, to allude to the earlier example? Ought we to gather more evidence, or would the severity of the claim and the limitedness of time make it so that we ought to take the claim seriously? What we “ought” to do isn’t clear.

(A dictator who wanted to control a country would be demonically wise to put his or her citizenry in this kind of situation as often as possible.)

2.2 Hume’s “Is/Ought Problem” could be relevant here: perhaps as we cannot “leap” from “x is x” to “x ought to be x,” so perhaps we cannot move from “I believe x” to “I am justified to believe x.” Solutions to “Hume’s Guillotine” could help with the problem of justification.

2.3 If we are justified to take a “Pynchon Risk,” is it the case that we ought to take it? If so, it would seem epistemic responsibility could be a curse and yet also what makes possible rational civilization.

2.31 Perhaps it is because epistemic responsibility compels the taking of “Pynchon Risks” that so many geniuses go mad?

3. It was established earlier that “total certainty” is impossible, only “practical certainty that is practically indistinguishable from total certainty.” Hence, what is being searched for in this section isn’t “Platonic total justification,” but (the possibility of) a standard of “practical justification” for a fitting situation (do note that there can be a different standard per situation, assuming there is one for any situation).

To offer some positive news, I do believe there is at least one standard of justification that can be used in certain (if not many) situations and in regard to certain (if not many) kinds of truths. That is the standard of falsification and “liberal science” discussed by Jonathan Rauch, C.S. Peirce, and Karl Popper: falsification, in my opinion, is the crown jewel of epistemology. For it, we owe these great minds much.

Falsification is so useful and wonderful that I personally am tempted to become a “falsification monotheorist” (a temptation perhaps many scientists have given in to, understandably), but I’m afraid I cannot be justified in making the claim that “all truths are falsifiable.” As was discussed earlier, it is not so, but how much easier would “the life of the mind” be if it were? We wouldn’t have to think about religions like Christianity, theories like intersectionality, or difficult works of art like Ulysses. And how much more likely would it be that Pluralism didn’t self-destruct (a truth that perhaps the New Atheists understand all too well)?

If someone called me and told me that a bomb was about to go off, I (probably) couldn’t falsify it until the bomb went off; if a man who lied often told me something, by the time I finished falsifying the last two claims, there would already be another claim I needed to falsify; if a report came out claiming the President was being blackmailed by Russia to start a war in two days, even if I could falsify it, I wouldn’t have the time; if a young Mother Teresa claimed God told her to serve in Calcutta, I couldn’t falsify her claim (and arguably shouldn’t); and so on. Indeed, falsification could be used to determine if Hemmingway was right about the CIA spying on him, but it couldn’t while Hemmingway was still alive.

Unfortunately, falsification is often practically useless, even though it is a legitimate standard of justification (and perhaps the only legitimate one). Falsification is often useless in real time; it is much better suited for the laboratory, and the laboratory is like a tomb that on the third day we try to push open, but the stone proves too heavy.

Falsification is a tool we can use to help us be epistemically responsible and so justified to believe x, but it only applies to truths that can be falsified, and it is often practically useless (in real time). Still, it’s better than nothing, and falsification does help us manage our “perhaps/likely”-true-knowledge over and/or against perhaps-true-beliefs, even though we are still left with the problem of figuring out how to manage our perhaps-true-beliefs.

3.1 In many ways, the problem of justification is identical with the problem of verification: the question of at what point we are justified to believe x is like the question of at what point we have verified x. It is not surprising then that Popper’s response to Hume — who most famously noted the problem of induction and verification — would also be a proper response to the problem of justification.

3.2 Falsification makes possible the establishment of some universal “oughts” on grounds of “practical certainty,” contributing to the possibility of Pluralistic unity.

3.3 But does falsification actually help with the problem of justification? For at what point has x avoided falsification long enough for us to be justified to believe x? How could I be justified to claim “at point y” versus “at point z”? What is the number of tests that x must pass before it is epistemically responsible to believe x? Wouldn’t three tests be better than two? Isn’t it more justified to believe a theory that avoids falsification three times versus a theory that only avoids it twice?

It would seem that falsification works by first deciding if x could be falsified. Then, once it has passed at least one serious attempt to falsify it, there is reason to believe “x is true” — x has earned it. Before the experiment, our relation to x should be one of neither belief nor disbelief: we should withhold our judgment until after the experiment. Then, once x has passed the test, we have reason to believe “x is true” and hence to make “x is true” our new default. The more times we try to falsify x and x passes those attempts, the more justified we are to believe x.

For x to pass a single test of falsification is for a person to be justified to believe “x is true,” but at this point the person is only “weakly justified”: to increase the robustness of the justification, the person should have other people try to falsify x (to be sure that subjectivity isn’t playing a role, and seeing as the person is justified to ask others to attempt, considering he or she has reason to believe “x is true” thanks to previous and personal falsification attempts). The more people who attempt to falsify x and the more x continues to be true, the more justified people are to believe “x is true.” The more times x avoids falsification, the more people are “strongly justified” to believe in x.

At what point are we justified to stop attempting to falsify x and just assume “x is true” (forever forth)? More evidence is better, obviously, but if we cannot establish a standard by which we are justified to say “that is enough,” we’ll find ourselves forever having to retest premises such as “we need air to survive,” “gravity exists,” and so on (and be epistemically responsible for doing so). Unfortunately, theoretically, it doesn’t seem to me that falsification can avoid a “runaway train fallacy”: if we establish that it is better for x to survive six falsification attempts versus five (by different people), it follows that we should test x ten times versus six, twenty versus fifteen, and so on. Fortunately, practically speaking, I do think there is hope (based perhaps on Hume’s “natural belief,” as discussed in “Deconstructing Common Life” by O.G. Rose).

Falsification is a unique epistemological tool in that the method makes possible a “practical certainty that is practically indistinguishable from total certainty” about x. If x stands after a hundred attempts to falsify it by a hundred different people, though it would be better for x to stand up to testing a billion times more, at this point, we have reason to be “practically certain” about x — to live like “x is true” (especially since the attempts were done by diverse people). If one day as we are “living” something occurs that gives us reason to believe “x is false,” then yes, we should retest x, but after a hundred or so tests (there is no specific number), we are justified to begin “living as if x is true” (to adopt the premise into our default mode for engaging with life). No, we can never be fully justified to say “x is true” thanks to falsification (Popper, considering Hume, acknowledged such), but we can be justified to “live as if x is true,” and that is practically good enough.⁸
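To make the procedure just described a little more concrete, here is a minimal sketch in Python. The thresholds, the idea of counting distinct testers, and the labels are my own placeholders for the sake of illustration, not anything Popper or this paper prescribes:

```python
# A toy model of the falsification procedure sketched above: a claim is
# "weakly justified" once it survives a single serious attempt to falsify it,
# "strongly justified" once diverse testers have tried and failed to falsify it,
# and "practically certain" after roughly a hundred such failed attempts.
# The numeric thresholds are arbitrary stand-ins, not prescribed values.

from dataclasses import dataclass, field

@dataclass
class Claim:
    statement: str
    falsifiable: bool
    survived_attempts: int = 0
    testers: set = field(default_factory=set)

    def record_attempt(self, tester: str, falsified: bool) -> None:
        """Record one serious attempt to falsify the claim."""
        if falsified:
            # A single successful falsification removes the justification entirely.
            self.survived_attempts = 0
            self.testers.clear()
        else:
            self.survived_attempts += 1
            self.testers.add(tester)

    def status(self) -> str:
        if not self.falsifiable:
            return "outside falsification (no practical justification available)"
        if self.survived_attempts == 0:
            return "withhold judgment"
        if self.survived_attempts >= 100 and len(self.testers) >= 10:
            return "practically certain (live as if true, but keep it open to testing)"
        if len(self.testers) >= 3:
            return "strongly justified"
        return "weakly justified"

claim = Claim("Objects near Earth fall when dropped", falsifiable=True)
claim.record_attempt("me", falsified=False)
print(claim.status())  # "weakly justified"
```

Of course, the worry raised above still stands: nothing in the sketch can justify the particular thresholds of a hundred attempts or ten testers over any others; they are stand-ins for the “practical” judgment the paper describes, not a solution to the “runaway train” problem.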

Having earned “practical certainty that is practically indistinguishable from total certainty” about x, we have reason to make x one of the premises that functions as a “hinge upon which the door of our thinking swings” (to allude to Wittgenstein again). If something occurs that gives us reason to believe x should no longer be such “a hinge,” then we should retest x, but thanks to falsification, it is possible to establish a “practical certainty” about x that justifies us in “living as if x is true” (insomuch as we don’t live in a way that “closes off” x from the possibility of falsification; we must always keep that possibility open) (and keep in mind, as Wittgenstein said, that we must have some assumptions ‘that we are forced to rest content with’).⁹ ¹⁰

Falsification makes possible the achievement of a “practical justification that is practically indistinguishable from objective justification,” which puts us in a much better situation with what we can falsify than with what we cannot, hence making it tempting in our Pluralistic Age to only consider premises that can be falsified as “possibly true.” In many respects, it would seem epistemically responsible to be so exclusive, but seeing as epistemic responsibility is relative to method and the kind of truth that the epistemic responsibility is “toward,” this is not the case (but do not be quick to fault those who think it is: Pluralism would seem to have a better chance of holding together if all truths were falsifiable).

Falsification can provide us with “practical certainty” that justifies “living as if” certain premises are true, but is this kind of “practical justification” possible with what cannot be falsified?

It doesn’t seem so.

4. If there are no objective standards of justification for what cannot be falsified, then there is no way to make it epistemically responsible for someone to believe something that the person doesn’t already believe. This doesn’t mean that people can’t change their minds, but it does mean that people don’t have to change their minds. This is especially unlikely if the argumentation isn’t accompanied by action or there is action that undermines the argumentation (for example, if Christians don’t act like Christians). Realizing this could incentivize people to be sure their intellectualism is accompanied by action, but at the same time, it could be used to rationalize actors not taking seriously “the life of the mind”: the realization that action convinces us more than argument could be used to justify not only abstaining from participating in democracy, but also overriding it with action.

If there are no objective standards of justification for the unfalsifiable, there are no objective standards by which it can be established that one ought to be convinced and/or compelled; consequently, no one ever ought to change their minds in regard to what cannot be falsified. At the same time, without such objective standards, we cannot establish that we ought to have certainty about x and not y, nor that we ought to use x instead of y as one of the “hinges” upon which “the door of our thinking should swing” (to allude to Wittgenstein again). We require a certainty that we cannot establish we ought to have in order to have a standard against which even to measure (un)certainty, to even say we are in fact (un)certain, and to even establish what ought to be believed or disbelieved.

What makes possible escape from intellectual nihilism is that which we cannot intellectualize.

In regard to what can be falsified, we can achieve “practical certainty” and “practical justification” and hence it is possible to be “practically, epistemically responsible” at least to some degree. But again, what are we to do in regard to unfalsifiable truths?

Moving forward, this paper will assume there are no objective standards of justification that can be established beyond falsification, and unfortunately it is not the case that we can establish that everything that is true is falsifiable. Accepting this Kafkaesque reality, we must now turn to the question of what we are to do about the problem of Pluralism (a dilemma whose gravity the works of O.G. Rose never seem able to escape).

If there is a greater thinker out there who can establish objective standards and/or methods of justification, we as a civilization should be forever grateful to that individual. Hopefully, even if that thinker arises, the closing sections of this paper, which will be greatly indebted to the works of James Davison Hunter, will still prove useful.

VIII

1. The problem of justification is a critical problem for civilization, and it is especially a problem for Pluralism. It is a problem of democracy that becomes ever-pressing as a society becomes ever-Pluralistic. Democracy has proven useful for mitigating (and arguably hiding) the problem by giving everyone a sense of solving it through (the possibility of) democratic debate, rational discourse, and the like — all of which tragically are ultimately (or at best often) fruitless (especially without force and/or action) due to the problem of justification and the possibility of (ideological) people forever maintaining a sense of being epistemically responsible by continually shifting their standard(s) of justification. Democracy helps provide a way for people to feel as if societies can change their minds, when a change in leadership isn’t necessarily a change in views: changing leadership provides a sense that views can change more easily than they actually can. And yes, societies and people can change their minds, but the point is that they never have to change their minds, and it is unlikely that argument (alone) will ever bring about this change. And yet there is no guarantee that action will change society for the better, precisely because argument is so powerless to make people realize what is “best” (it is likely action will be blind). This doesn’t mean everything action brings about will be bad — much of it will be good — but it does mean that it is unlikely the actors will be convinced of the goodness or badness of what they do by argument. (Perhaps this subtly hints at why freedom is so rare in human history.)

If there are no objective standards of justification in regard to what is unfalsifiable, then no one must believe anything that they don’t want to believe, and they will (still) be epistemically responsible (within their ideology). Admittedly, even in regard to beliefs that can be falsified, a person can just avoid tests (of action, for example), and then the person will be able to (genuinely) convince himself or herself of being “intellectually honest” (as is likely for us “ideology preserving”-humans, as discussed in “The True Isn’t the Rational” by O.G. Rose).

“The True Isn’t the Rational” by O.G. Rose argues that “the life of the mind” is necessary for us to survive Pluralism, but it seems that “the mind” cannot take us as far as we need to go: what is necessary cannot go as far as is necessary; it can take us to the finish line but cannot cross it. Realizing this, it is tempting to throw up our hands and ask “Why bother?” but to give up is to fail far from the finish line, where the consequences are worse than failing right at the end (though perhaps still dire). Without “the life of the mind,” Pluralism will likely collapse into radical skepticism and civil unrest, but without an objective standard of justification, perhaps the same will befall us. We seem forced to succeed at the impossible, like trying to see a mirror behind our face.

There is no such thing as “total certainty,” as there is no such thing as “total justification,” only “relative justification.” There is the possibility of “practical certainty” and “practical justification,” but an objective justification seems impossible. This being the case, we in this Pluralistic Age must figure out “What are we going to do?” and in hopes of finding an answer, we will turn to the works of James Davison Hunter and thought explored in “Belonging Again” by O.G. Rose.

2. Rarely thinking about it (if ever), everyone lives according to a subjective standard of justification against which they define themselves as epistemically responsible and hence believe what they (and everyone) “ought” to believe. Since it is impossible to establish an objective standard of justification for what cannot be falsified, it is not possible to establish “everyone ought to believe x,” and this being the case, the more Pluralistic a civilization becomes, the less likely it is to remain stable, especially if that society fails to take seriously the admonishments and advice Hunter offered throughout all his works (but most notably in his book Before the Shooting Begins). It could be argued that the whole point of this paper on justification has been to help establish Hunter’s profound correctness.

As discussed in much more depth in “Belonging Again” by O.G. Rose, Hunter argued that we must change our focus in democracy from trying to prove “the other side wrong” and instead focus on finding “a middle ground.” By this Hunter didn’t mean so much a “compromise” as he did “a space between different first principles about the nature of reality.” According to Hunter’s work, to use Aristotle’s language, the challenge of Pluralism is that people begin living together who don’t merely think differently accidentally but essentially: they don’t share the same axioms or “first principles” about the world. In the ever-Pluralistic democracy, Hunter was increasingly concerned about the kinds of debates and dialogues that emerged: he considered them attempts by groups to get one another to betray their “first principles” — a doomed enterprise. What Hunter wanted was what he called a “substantive democracy,” which is a democracy focused not so much on changing people’s “first principles” as on looking for a “middle ground” which all groups can agree to without betraying their most deeply held beliefs. To quote Hunter at length:

‘Let me be clear here; the common ground to which I refer is not ‘dialogue’ in the vacuous sense often invoked by some ministers, marriage counselors, and conflict-resolution specialists. It is, rather, robust and passionate and utterly serious civil reflection and argument. In this sense, it builds upon an agreement about how we should contend over our moral and political differences — a public agreement over how to disagree publicly. As George Weigel has put it, when an agreement is realized at this plane, genuine disagreement becomes an accomplishment, and authentic debate becomes a virtue. Only in this context can there exist the possibility of forging politically sustainable solutions to the conflicts that divide us.’¹¹

If we are to survive Pluralism, we must learn to find “common ground” between people of different “first principles,” versus learn to “compromise” with them — a seemingly insignificant change of focus that changes everything. As long as democracy continues in people’s minds to be about whose “first principles” will reign and whose will be debased if not erased, every election will be a divisive election of great turmoil. If democracy demands that people sacrifice “first principles,” then the winner of the election is the group that must sacrifice “first principles” the least if at all, while everyone else feels deeply dissatisfied, possibly to the point of violence (and do note that the people who win can still feel dissatisfied too). One part of civilization will have hope, while the other will lose it: the existential tension will be dire and likely burst. Hunter wrote:

‘The problem, of course, is that when politics is framed as the only (or primary) solution to public disagreement [because there is no substantive democracy], the only practical question people ask is about who is in control. Is it ‘our’ party, ‘our’ candidate, ‘our’ side — or ‘theirs’? […] Indeed, the question of political solutions permits people to avoid or deny altogether the human side of controversy, as though cultural controversy could be formulated as a technical problem permitting a purely technical solution.’¹²

I have borrowed thoughts from “Belonging Again” to help trace out Hunter’s thought: please see that work for a fuller description; better yet, read all the works of James Davison Hunter. Considering the problem of justification, I see no other solution than to put into practice what Hunter described; alongside it, I also believe we should practice the “liberal science” of falsification Rauch discussed in Kindly Inquisitors. Both men are geniuses.

2.1 But is this “middle ground” Hunter discussed possible? Are we sure there is always a “middle ground” between “first principles” that will not require the abandoning of “first principles?” Knowing this would require being God (and note we will naturally feel there is no “substantive middle ground,” since it will demand of us not to get everything we want), but considering the practical benefits of assuming a middle ground is possible (until we have reason to think otherwise, and keep in mind that we must start somewhere), we have reason to believe a “middle ground” is possible. If we find that it actually isn’t after the utmost attempt — God help us all.

2.2 Hunter’s work is not only needed because of the problem of justification; it is also rational — far from a “compromise” of “the life of the mind” or a given person’s “first principles.” Often, people react to any talk of “middle ground” as “intellectual betrayal” — it is taken to be a lessening of the authority of beliefs for the sake of achieving (inauthentic) agreement that people’s beliefs, if followed to their logical end, would not allow. Finding “common ground” is often taken to be epistemically irresponsible, which is exactly to be expected (as I hope this paper has made clear), seeing as a person’s “first principles” are what define a person’s epistemic responsibility. The nature of thought is to be against “common grounds”: it is epistemic responsibility itself which makes us skeptical and even antagonistic toward them, and failure to realize this “trick” of epistemic responsibility will make it unlikely we follow Hunter’s invaluable advice.

“Seeking a middle ground” is rational both because the nature of thought is such that it never has to be convinced to change (“the map is indestructible”) — so trying to convince others to abandon “first principles” is irrational from a practical standpoint — and also because not trying risks civil unrest, collapsing politics, worsening tribalism, and even war. Furthermore, seeing as shifting standards of justification make it so that no one must change their minds over “first principles,” finding a “middle ground” isn’t betrayal (perhaps it would be more so if “the map wasn’t indestructible”), but rather practical, rational, and necessary (that is unless a person finds increasing social unrest, risking war, etc. “rational,” and perhaps it would be for the time that those representing one’s own “first principles” were in power — but that time would not last, and that period in and of itself would threaten social order).

2.21 Ayn Rand once said that ‘[i]n any compromise between good and evil, it is only evil that can profit’ — it is the nature of epistemic responsibility and thought to make us feel that a middle ground is necessarily an instance in which “evil profits” (even if this is not the case). There is truth to what Rand said, but that truth is likely lost if we fail to realize how epistemic responsibility will likely always make us think of ourselves as “the good” in a given “compromise” and furthermore turn us against Hunter and his necessary guidelines for Pluralism.

It should be reiterated that Hunter wasn’t advising a betrayal of “first principles” (and their corresponding “epistemic ethics”), but rather a change in focus to finding a ground upon which people can meet without betraying or watering down their first principles (except perhaps as an utter last resort). Yet it should be noted that thought and epistemic responsibility are such that any talk of “middle ground” is immediately taken as epistemically irresponsible and threatening (like a mouse reacting to the sight of cheese that it has been shocked every time it has touched). We must learn not to be tricked by this “knee-jerk” reaction into turning against Hunter’s work.

Because of the nature of thought, we all must necessarily feel as if it isn’t us who need to “move toward the middle with our first principles,” but those who don’t think like us, as it is the nature of thought for those others to necessarily think the same of us (realizing this can help cultivate valuable humility). Because there is no objective standard of justification, we can always avoid being compelled to think otherwise, always continue to make it epistemically responsible to maintain our position, and always make “moving to the middle” that which we “ought” to refrain from doing. Realizing all this can increase in us the sense that it is rational and even epistemically responsible to accept “middle grounds” in the face of a sense of epistemic irresponsibility. Recognizing “the conflict of mind” can help us overcome it.

2.3 In an article titled “Moderates are the Real Tough Guys,” Oliver Burkeman writes:

‘The problem with moderation, Peter Wehner argued in a recent New York Times essay, is that it’s seen as intrinsically lily-livered, a lukewarm compromise between more resolute extremes — “a philosophy for tender souls,” as Jean-Paul Sartre said of liberalism. I don’t want to be moderately opposed to dishonest, misogynistic, quasi-fascist politicians; when extremists run the world, we should be extremely committed to their defeat. Yet on the other hand, deep down I know that a victory for My Team over Their Team at the next election or referendum won’t solve much in the long term, either: humiliating the other side simply ensures they’ll come roaring back, more furious than ever, until they regain sufficient power to humiliate me.’¹³

This paragraph is one I believe Hunter would resonate with (though Hunter may point out that “substantive democracy” doesn’t require us to be “moderate” so much as it requires us to step into a middle zone with our “first principles” — though that is certainly what can be meant by the word “moderate”). If we understand the nature of thought, justification, and epistemic responsibility, we will better understand why it is we feel “moderation” is weak. Consider what Peter Wehner wrote:

‘In such a poisonous political culture, when moderation is precisely the treatment we need to cleanse America’s civic toxins, it invariably becomes synonymous with weakness, lack of conviction and timidity […] This is quite a serious problem, as Aurelian Craiutu argues in his superb and timely new book, Faces of Moderation: The Art of Balance in an Age of Extremes, in which he profiles several prominent 20th-century thinkers, including Raymond Aron, Isaiah Berlin and Michael Oakeshott. Mr. Craiutu, a professor of political science at Indiana University, argues that the success of representative government and its institutions depends on moderation because these cannot properly function without compromise, which is the governing manifestation of moderation.’¹⁴

Epistemic responsibility will always lead us to feel that moderation is intellectually dishonest unless we understand that it is rational because of how epistemic responsibility works, because “the map is indestructible,” because of what failing to listen to Hunter risks, and because democracy cannot work otherwise.

Until we truly get “in our bones” that finding a “middle ground” is rational, it will be deeply unlikely that we will ever be able to overcome (our) epistemic responsibility which will necessarily turn us against such moderation: we require (a sense of) epistemic responsibility to combat (a sense of) epistemic responsibility; otherwise, “intellectual honesty” will get the best of us; otherwise, “the conflict of mind” will win. It helps to think about moderation ‘as a disposition, not as an ideology,’ to understand that “looking for middle grounds” isn’t the same as “believing something in the middle.”¹⁵ It also helps to believe deeply something Raymond Aron once said:

‘Freedom flourishes in temperate zones; it does not survive the burning faith of prophets and crowds.’¹⁶

Lastly, it greatly helps to combat the epistemic responsibility against moderation by deeply understanding that it is the only rational option given that “the map is indestructible” and given the problem of justification — that it is epistemically responsible to fight for “a middle ground” against the (sense of) epistemic responsibility which opposes moderation. To close with Peter Wehner:

‘Moderation is a difficult virtue for people to rally around, since by definition it doesn’t arouse fervor or zealous advocates. But in a time of spreading resentments and rage, when truth is increasingly the target of assault and dialogue is often viewed as betrayal, moderation isn’t simply a decorous democratic quality; it becomes an essential democratic virtue. / In this immoderate age, moderation must become America’s fighting faith.’¹⁷

We desperately need moderates — those who will act kindly toward those they believe contribute to the murdering of the unborn, the spreading of racism, the warming of the planet, etc. — and yet a moderate is likely if not necessarily someone who is inconsistent with his or her beliefs. If we believe racism is destroying America, to find a “middle ground” with those who don’t believe in institutional racism is to contribute to injustice; if we believe abortion is murder, to allow abortion in any circumstance to any degree is to fail to fully stand up against a genocide against the unborn; if we believe Global Warming is destroying the planet, to “compromise” with Big Oil is to flirt with the Apocalypse. Considering these examples, moderation is existentially disturbing, can feel morally corrupt and epistemically irresponsible, and yet without moderation, civilization will fall deeper into existential anxiety and tribalism, be unable to agree on what constitutes justice, freedom, and the like, and be unable to participate in “substantive democracy.”

Like Ulysses lashed to the mast, we must sail against the winds of the epistemic responsibility which blow us away from “middle grounds.” We have for too long been drunk on the Enlightenment and assumed by faith that the mechanics and motives of thinking would naturally harmonize upon an undeniable truth (making our only job to increase rationality in society), and so save us from an existentially disturbing “moderation,” but the mechanics and motives of thinking do not ultimately harmonize. Thinking works, but it also works against itself.

(Do note though that if this paper is correct and “the conflict of mind” is in fact inevitable due to the imperfections of rationality itself, then accepting “imperfect” moderation becomes more rational as a necessary tragedy, possibly easing existential anxiety.)

2.31 Another reason why it is difficult to accept moderation is because if we believe “x is true” and we are logically consistent (and who thinks they aren’t?), we should believe that everyone should believe that “x is true” (this will especially be the case within ideological systems that stress evangelism, salvation, and the need to convert). Yes, when it comes to personal beliefs, we may be more understanding if people don’t think like us, but if we believe x (unless we believe x is true just for us, which is likely an ad hominem fallacy), we will believe everyone should believe x. Considering this, logical consistency and epistemic responsibility both oppose “searching for a middle ground” — a responsible thing to do in our Pluralistic Age.

If we believe in x, to not stand for x and not insist that people should believe x versus “find a middle ground between x and y” is to do something that x would have us feel is intellectually incoherent and existentially unnerving, even though “searching for a middle ground” isn’t the same as “watering down one’s beliefs” (as Hunter noted).

Furthermore, such moderation can make us feel like our beliefs are less authoritative, special, binding, and true, in the same way “tolerance” can lead to communities having less authority and power to provide belonging, as described in The Long Truce by A.J. Conyers and as discussed in “Belonging Again” by O.G. Rose. What can help us combat this feeling is again by understanding deeply that we can “look for a middle ground with our deeply held beliefs” — we don’t have to abandon them — and furthermore understanding that in our Pluralistic Age, we have no other option. Even if we must risk the authority of our beliefs, to not risk them is to risk our civilization.

2.311 Ideologies that stress the need to convert others, considering that no one must be convinced of x versus y, are ideologies that are likely to cause great frustration as members (often) fail to convert others democratically, precisely because of the problem of justification. This could lead to people using violence or various kinds of State force to convert, and frankly this is probable given that it is ultimately “action” which seems to change people (as this paper suggests). This thought applies not just to religions, but also political ideologies, philosophies, etc., and also hints at the accuracy of what James K.A. Smith discusses throughout his work, most notably in You Are What You Love.

2.32 Another reason why it is difficult to accept (any sense of) moderation is because everyone necessarily believes that what they believe contributes to justice, freedom, beauty, goodness, and the like better than any other possible belief. As Augustine taught us, right or wrong, no one believes what they think doesn’t contribute to justice, freedom, beauty, goodness, and the like. Furthermore, no one believes what they believe contributes to injustice and slavery: even the white supremacist can believe separating the races is “just” and contributes to “freedom” by helping the races “be themselves.” Twisted logic, of course, but the Augustinian point is that if we do x, we believe x is good (for/to us).

A moderation of justice is to contribute to injustice; a moderation of freedom, to contribute to slavery. If we ascribe to x, then we believe that to “moderate x” or to even risk such is to fail to contribute in the world to justice, freedom, beauty, goodness, and/or the like, and to do this is to fail to act morally and epistemically responsible. Yes, if we believe y, it is “self-evidently” absurd to believe x contributes to justice, freedom, etc., but remember, what constitutes “absurdity” is relative to what we believe. (For more on this topic, please see “Truth Organizes Values” and “The (Trans)values of Justice and Love,” both by O.G. Rose.)

The non-contingent values of justice, freedom, etc. strengthen the sense that moderation is not only epistemically irresponsible, but immoral, and all of us must necessarily feel that what we believe is true is that which best spreads these non-contingent values around the world. What I mean by “non-contingent” is that “in no circumstance should these values be reduced”: there should always be as much justice, freedom, etc. in the world as possible. Yes, this means we still have to debate what exactly constitutes “freedom” — if freedom is possible without limits, the difference between “freedom” and “nothingness,” etc. — but whatever we believe constitutes a given, non-contingent value is that which we believe should be unrestricted.

But this again poses a problem for moderation: all beliefs necessarily carry with them ideas of non-contingent values and how best to spread them, and hence all beliefs will necessarily make moderation feel epistemically irresponsible and immoral. And worse yet, moderation will feel like a non-contingent trespass — a violation and immoral act — in all circumstances and to any degree. And this feeling is intrinsic to the structure and being of thought and belief themselves. If we think and believe, as all must, this feeling will be inescapable.

All of this hints at the destruction and trouble that can be caused by our non-contingent values, which are necessary for humanity to have humanity. However, again, knowing the shortcomings of thought and its mechanics can make accepting moderation rational.

2.33 We are likely driven by ideology and epistemic responsibility to accuse those (perhaps only in our heads) who claim “we need moderation” of being secretly ideological and of wanting us to “moderate toward Progressivism” or “toward Conservatism.” This thinking is likely to impede our willingness to moderate.

2.34 Another reason we are naturally against moderation is because we are naturally in favor of certainty — it is structured into our very thinking, considering Wittgenstein — and moderation can feel like a threat to certainty (as can empathy, encountering “the other,” etc.). All humans desire a degree of “solidness” to life, and where there is uncertainty and (in-finite, “not in (our) finitude”) possibility, there is a lack of “solidness.” Philip Rieff argued in The Triumph of the Therapeutic that a life of total freedom and hence total uncertainty is psychological hell: though we want possibility, we don’t want too much of it. Rieff was also concerned about society entering into an age of psychological hell (and/or existential anxiety) due to what he called “the triumph of the therapeutic,” and what Rieff meant by this is explored in “Belonging Again” by O.G. Rose. Here, I only want to note that a reason that thinking is naturally “toward” certainty and against moderation is because thinking is naturally against falling into psychological torture. To ask people to “moderate” is likely taken by them (consciously or unconsciously) as asking them to risk “psychological hell,” and hell is always rational to avoid (at all costs).

2.35 Considering how difficult it is for people to overcome temptations to not moderate in the name of epistemic responsibility and truth, it is doubtful that more than a minority will do so (and in the way Hunter described). Considering this, it is doubtful any given democratic society will survive.

2.4 To reiterate, please don’t misunderstand my use and support of “moderation” to mean that Hunter or I want people to be “in the middle” in their thinking: a centrist in Nazi Germany would be part of the problem, and it is not the case that being a centrist is necessarily a good thing. By “moderate” I mean more so “to moderate how one acts and thinks, and to be willing to accept compromise.” The idea of a “substantive democracy” is precisely that people don’t have to abandon or “move to the center” from their deeply held beliefs, and it is my view that appeals to “move to the middle” have accomplished little and confused much (of course, it depends on what is meant by the phrase).

3. Seeing as objective standards of justification are impossible, perhaps we should instead establish “rules of justification” that we can all agree on, even though they are ultimately arbitrary? The reason we have juries in court cases is precisely because different people have different standards of justification. However, there is no jury in an individual: just one standard of justification. But if we were to establish “rules of justification” (a kind of “logical social contract”), it might help us achieve a “substantive democracy” by making democracy more like a court case.

The rules of chess are ultimately arbitrary — there is no objective reason why it is the bishop that can move diagonally across the board instead of the knight — but the rules are what make the game possible. Yes, there is still a problem of justification here — I cannot objectively justify why the bishop should move diagonally and not the knight — but because everyone who plays chess agrees to the rules, this problem is “practically irrelevant” and the rules “practically justified.” Perhaps if we could establish similar rules when it came to democratic debate, we could practically overcome the problem of justification? Perhaps we could better manage our inherently unstable situation?

Changing our focus to look for “middle grounds” is a valuable “rule” we could establish, and we have Hunter to thank for his insights about “substantive democracy.” But this still leaves us with problems of justification (especially regarding what cannot be falsified), and it is in regard to this problem that “democratic rules” could be helpful.

Why should anyone agree to play by rules or one set of rules versus another? Couldn’t everyone just come up with their own rules and impose them upon everyone else? Indeed, but if everyone did this in regard to chess, the game couldn’t be played, and it is because people want to play chess that they agree to the rules of chess established by those who invented and modified the game. If we as a society want to democratically debate, we should likewise want to debate according to rules that will help make “substantive democracy” and the overcoming of problems of justification more possible.

Below is a list of rules that could be used, and regardless of whether we accept them, I hope the point is clear that before a debate, it would be helpful to establish “rules of justification.”

a. A belief that is falsifiable but whose adherents refuse to have it tested is a belief with which parties are not responsible for finding a “middle ground,” for otherwise we could find ourselves having to find “middle grounds” with absurd and even insane ideas.

b. Everyone should bring at least five pieces of strong evidence for a case; otherwise, the case will not be considered.

c. A theory that consists of more than five unfalsifiable deductions that are contingent upon one another will be considered a “conspiracy theory,” and though not necessarily false, it will be treated with less seriousness than an explanation reliant on a smaller chain of deductions (especially if those deductions can be falsified).

d. All “Pynchon Risks” should be left outside the debate as much as possible, and to the degree they must be included, nobody should be demanded to take them.

e. Those entering the debate should have at least a basic understanding of epistemological methods, seeing as “conflicts of mind” are especially likely where epistemology is lacking.

f. A person can only assert “x is true” to the degree a person can explain why x is true, for people cannot be persuaded that x is true unless it is explained to them, and where there isn’t persuasion, there isn’t democracy. More likely, there’s violence.

And so on — this is just a sample list — people can make up their own rules to make possible their own debates (we as a society could even come up with and publish a “Robert’s Rules of Justification,” so to speak). Yes, I am aware that these rules need a lot of clarification — “What constitutes evidence?” or “Why five pieces of evidence and not seven?” etc. — but my point here is only to provide an example of the kind of rules that could be practiced to help us overcome the problem of justification. One debate could follow one set of rules; another, a different set; and so on.
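To make the chess analogy a little more concrete, here is a minimal sketch in Python of how a few of the sample rules above might be written down and checked before a formal debate, the way a tournament checks legal moves. The field names, thresholds, and rule selection are my own placeholders, not anything the paper or any particular debate format prescribes:

```python
# A toy "rules of justification" check for admitting a case into a formal debate.
# The thresholds (five pieces of evidence, five chained unfalsifiable deductions)
# mirror the sample rules above but remain arbitrary, like the rules of chess.

from dataclasses import dataclass
from typing import List

@dataclass
class Case:
    claim: str
    evidence_count: int               # rule (b): pieces of strong evidence offered
    unfalsifiable_chain_length: int   # rule (c): chained unfalsifiable deductions
    has_explanation: bool             # rule (f): can the claimant explain why it's true?

def admit(case: Case) -> List[str]:
    """Return the rulings a debate moderator might issue under the sample rules."""
    rulings = []
    if case.evidence_count < 5:
        rulings.append("Rule b: fewer than five pieces of evidence, so the case is not considered.")
    if case.unfalsifiable_chain_length > 5:
        rulings.append("Rule c: treated as a 'conspiracy theory' and given less weight.")
    if not case.has_explanation:
        rulings.append("Rule f: cannot be asserted beyond what can be explained.")
    return rulings or ["Admitted under the agreed rules."]

print(admit(Case("Policy x reduces harm", evidence_count=6,
                 unfalsifiable_chain_length=1, has_explanation=True)))
```

As with chess, nothing objectively justifies these particular numbers; the value lies only in everyone entering the debate having agreed to the same check beforehand.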

No, I don’t think we should go up to random people and “throw” a set of rules upon them in everyday conversation, nor worry about such rules when having an everyday conversation, nor request that rules be established when someone isn’t trying to impose something they believe. Rather, the rules should be used for official or consequential debates, say in determining which beliefs should be passed into law, and the like.

I am aware that it is unlikely that everyday people in their everyday lives will debate with one another and intellectually live according to some list of arbitrary “rules of justification.” However, just knowing about the problem of justification may inspire people to force themselves to live according to some firmer set of criteria for deciding what to believe and disbelieve — to create a “liberal science test” for themselves, so to speak — making them less likely to fall victim to natural tendencies to preserve ideology.

If it is the case that action is more likely to compel than argument (as has been discussed), it is unlikely that we will be able to prevent “the shooting from beginning” (to allude to Hunter), especially if we don’t arrange rules that could help us overcome the problem of justification. Yes, the rules will ultimately be arbitrary like the rules of chess, and some will see the rules as authoritarian and have every reason not to follow them (especially if the rules threaten what they believe, as arguably they will threaten everyone). Speed limits and tax procedures are also arbitrary, but it is easier to convince people to follow rules that can be seen as helping keep them safe versus rules that can be seen as threatening what they believe. But we need some kind of rules or “social contract” we all agree to (perhaps it could be called an “intellectual contract,” alluding to Rousseau). Because “the conflict of mind” is built into reality, it seems we require arbitrary rules (as some may argue that we require God).

Arguably, many works by O.G. Rose have attempted to provide various rules, procedures, and skills for “the life of the mind” and to help us survive Pluralism. Here, I have meant only to highlight example rules in regard to the problem of justification specifically. There is no guarantee the rules will save us (seeing as they are still subjective and people know they are ultimately arbitrary, so people can dismiss them if they don’t like the results), but I think there is hope. To close with the words of James Hunter:

‘Despite his own seemingly intractable pessimism, Alasdair MacIntyre himself has suggested that one of the critical ways in which rival and incompatible positions on moral issues can be rationally resolved is by recognizing that different positions are not autonomous but are rooted within different traditions of rational justification.’¹⁸

IX

It is time to conclude and arrive where we started — to find out if we will ‘know the place for the first time.’¹⁹

A “conflict of mind”-situation is one in which we can believe something (and even be right about it) while lacking intellectual ethics to support that belief. Any situation in which intellectual ethics and intellectual assent come in conflict is a “conflict of mind”-dilemma, and I’m afraid for far too long we have assumed that “intellectual ethics” and “intellectual assent” cannot conflict and must always align. But this notion is a leftover of a belief in “autonomous rationality,” the idea that it’s possible for worldviews to be rational “all the way down,” a notion that hopefully “Deconstructing Common Life” addresses well enough.

It is comforting to believe that being epistemically moral and responsible will make us right, as it is comforting to believe that those who are epistemically immoral and irresponsible are those who will be wrong. It helps us sleep to believe our world is a world of epistemic justice, but it is when we sleep that we dream.

There is no guarantee that believing only what we have evidence to believe, diligently avoiding all logical fallacies, and other practices of intellectual honesty will necessarily contribute to us being correct or avoiding (conscious or unconscious) manipulation and/or self-deception. In fact, these practices may be precisely why we end up (ultimately) being incorrect.

Being epistemically responsible can be precisely what leads us to being wrong, failing to stand against an injustice, and ending up on “the wrong side of history.”

Being epistemically irresponsible can be precisely what leads us to being right, standing against an injustice, and ending up on “the right side of history.”

The world isn’t epistemically just. Sometimes it is, but who can say?

“The conflict of mind” is the reality that epistemic responsibilities can demand epistemic errors, impracticalities, and/or impossibilities. What calls us to higher standards is why we may end up lowly.

Do not mistake me as claiming here that the answer is giving up on epistemic ethics — we would be worse off — my point is that if we do what is epistemically responsible, we will not necessarily be more right for it. My point is that there are problems embedded into the nature and structure of thought itself which we for too long have overlooked, for it is comforting — perhaps especially to philosophers, academics, and those who prize thinking — to believe thought, if done right, will necessarily make the world a better place. Because of Pluralism, we cannot run from the truth any longer. We have seemingly run around the world back to the thing from which we ran.

The existence of schools is a testament to our belief that a commitment to truth benefits our society, and certainly society is doomed without this commitment. However, we have come to forget that this commitment will not necessarily make us truer or better. There are situations in which the commitment to truth could conflict with truth, and failure to acknowledge this makes us unaware and ignorant of the tragedy of our being. We will continue to think the problem is simply a lack of knowledge, a prevalence of willful ignorance, a stronghold of ideology, and the like, and though certainly these things can contribute to and even be the problem, there is another possible cause that is existentially terrifying: the commitment to truth itself. Perhaps our saviors can be those from whom we need saving.

Knowledge is power, but knowledge is no more powerful than the reality which grounds it. Because of the nature of reality, spacetime, and finitude, our desire to be intellectually honest can come in conflict with truth. “The conflict of mind” is built into the world.

By how we treat those we disagree with, by how we think of our own worldview, by how we debate to defeat “first principles” versus find “middle grounds” between them, and by how (our) truth is self-evident to us (as epistemically responsible), it seems that we don’t honestly believe this “conflict of mind” exists.

How could we?

Thought is essentially incomplete, yet thought still proves “actual enough” to help us get through life — it proves practical even though non-axiomatic — and hence we have reason to believe thought is “enough like” reality (considering Popper). The same criterion proves enough for us to live with our worldview even if it lacks “stitches,” for if our worldview (or dimensions of it) proves “actual enough,” we can get by. But what works for an individual is not necessarily what will work for a nation, and a given “pragmatic solution” to “the conflict of mind” cannot be universalized — we cannot establish that everyone ought to believe x because x is practical — and hence we need Hunter’s work. This may seem like a very abstract philosophical problem, but the consequences are very real and practical: we must decide what we ought to do about accusations against Trump now, how we ought to live with our Hindu neighbor now, and so on.

Especially in regard to what cannot be falsified, because of the problem of justification, no one ever ought to believe what they don’t want to believe (and considering the arguments presented in “Defining Evidence” by O.G. Rose, people can even “define themselves out of” believing what prevails through tests of falsification — but acknowledging this might be too much to bear right now). Because there is no objective standard of justification, no one ever ought to be convinced of x instead of y: everyone can always just continue to be compelled by what they already think — unimpeded.

No one ever has to change their minds to continue to be epistemically responsible. This isn’t to say they shouldn’t or won’t be viewed as absurd, but to say that the final say always belongs to us. No one ever ought to change their minds — this is the problem of Pluralism.

To be is to believe — beli(e)ve — for right or wrong, as Wittgenstein has taught us, I cannot be without (a sense of) certainty. I must believe something in order to function, and yet how I achieve this certainty is necessarily thanks to a subjective standard of justification that I cannot be certain about and that I will necessarily think everyone ought to share if they and myself are indeed epistemically responsible, in part if not entirely in order to hide myself from the uncertainty upon which my “(sense of) certainty” is necessarily founded and to avoid the existential anxiety that this realization would cause.

We are “set up” by the nature of reality to “objectively” conclude — according to our subjective standard of justification which we require and necessarily don’t practically think of as “subjective” — that everyone who doesn’t think like us is epistemically irresponsible and intellectually dishonest, as they are “set up” to think the same of us. Taking seriously the arguments of this paper will hopefully help prevent the conflicts this “setup” is likely to cause us, especially in Pluralism — a vast collection of “exclusive truth claims” (as Timothy Keller argues) that must define epistemic responsibility each in their own terms against one another. In this environment, appeals to “pursue truth wherever it leads” or to “be intellectually honest” will only worsen the tension.

It is not “fake news” or “post-truth-ism” that is our greatest threat today, but failure to realize “the conflict of mind” and the problem of justification (of which “fake news” and “post-truth-ism” are symptoms, and our focus on the symptoms hides the cause). Even our lack of empathy — the unwillingness of people to see the world through the eyes of others — is a symptom of our failure to understand how epistemic responsibility necessarily makes empathy (seemingly) epistemically irresponsible to us (and so also “critical thinking,” as discussed in “On Critical Thinking” by O.G. Rose).²⁰ Realizing this is our problem will take a society that is willing to engage in what it can never escape but rarely realizes it engages in — philosophy (for if I believe something, as all must, I have engaged in philosophy) — but there is no guarantee a society of philosophical geniuses will be one that overcomes “the conflict of mind”; in fact, it may be the case that the greater the genius, the more likely there is failure.

The genius we need in order to be saved could be what increases the likelihood that all is lost; especially without the gift of articulation, it is hard to say.

Because we cannot establish that x ought to be believed at point y versus z, we must become a “substantive democracy” that searches for “middle grounds” and that understands this is epistemically responsible. Otherwise, epistemic responsibility (against moderation) will drive us to tear our democratic civilization apart.

Will realizing this problem of justification necessarily lead to our salvation?

No.

By accepting the problem of justification, the possibility of epistemic responsibility tragically compelling a person to be wrong (and the truth that “being right” and “being epistemically responsible” don’t necessarily follow one another), our need for “substantive democracy,” the essential problems of thought due to the nature of reality itself, and knowing that we are all in this together (unified), perhaps we can find a “common ground” upon which we can make Pluralism work (as could accepting our “profound ignorance” and how it constantly deceives us into “objectively” thinking we’re much less ignorant than we actually are, as discussed in “The Phenomenology of (True) Ignorance” by O.G. Rose). Awareness can change the world, for it can change how everyone is “toward” the world, and the world is changed by how we think about it.

But if “the map is indestructible” — if ideology is invincible — perhaps awareness is nothing.

In closing, the hope of this paper has been to help readers take seriously the problem of justification so that they take seriously the problem of Pluralism — failure to take seriously one is failure to take seriously the other. Yes, I believe many have understood that Pluralism presents a challenge, but I don’t believe many have grasped that the problem “goes all the way down.” Pluralism brings to light essential problems with thinking (and epistemic responsibility), not merely accidental problems, and if we fail to figure out how to live with these essential shortcomings indivisible from thought itself, it is unlikely we will be able to stop “the shooting from beginning” (to allude to Hunter).

Civilization is threatened by the thought it requires.

Thought is fire: it can keep us warm and burn us alive.

We require certainty which can only be established upon uncertainty, a certainty which will entail an epistemic responsibility to which we will necessarily think others ought to ascribe, all while they think the same of us. To avoid this problematic situation, we should accept what this paper has argued — that people ascribe to different standards of justification that entail conflicting epistemic responsibilities — but if we do that, we must acknowledge that our standard of justification is ultimately arbitrary and that our certainty is founded upon uncertainty. In other words, to accept this paper, we must suffer existential anxiety, which is unlikely the majority will do, and hence it is unlikely Pluralistic democracy will survive, let alone succeed. And that’s the rub.

Could pragmatism work? Individually, perhaps, if we hide ourselves away in communities of people who think like us — if we hide from “the other” — though such a world is increasingly impossible and undesirable. But collectively, no, and even if pragmatism could work, if we ascribed to pragmatism in response to realizing the arbitrariness of our standard of justification, we would experience our pragmatism as a “running away,” as a “drug” — like how the space of a missing arm becomes a constant reminder of the explosion which took it away. And this would make our pragmatism a source of existential anxiety.

Reading this paper and realizing the problem of “the conflict of mind” might be the very act which makes overcoming that problem through pragmatism impossible.

God help us all.

But I would feel irresponsible if I ended on such a pessimistic note. Today, there is a push for “middle grounds,” “compromise,” and the like, and although these efforts are valid in their goals, without an explanation for why they are essentially necessary rather than merely emotionally satisfying, they are not likely to last. If we tell people they need to be “moderate” but lack a good reason why, it is not likely people will stay moderate for long, if at all; after all, we can’t say for sure that moderation is always best.

Today we seem to want “cheap moderation” versus “earned moderation,” to allude to Bonhoeffer. But if rationality entails essential limits, then moderation becomes a necessary and rational tragedy for democratic and intellectual life. To moderate, then, is not a betrayal of ourselves, but an acceptance of our finitude.

Thus, if this paper forces us to accept the essential limits of mind, it might at the same time succeed at making us realize that compromise and moderation are rationally justified. If this is the case, perhaps the journey was worth it.

.

.

.

Notes

¹For more, please see “Epistemic (Ir)responsibility” by O.G. Rose.

²Rauch, Jonathan. Kindly Inquisitors. The University of Chicago Press. Paperback Edition, 1994: 52.

³Rauch, Jonathan. Kindly Inquisitors. The University of Chicago Press. Paperback Edition, 1994: 53.

⁴This entire point was inspired by and thanks to Nicholas Cummins.

⁵Nabokov, Vladimir. Speak, Memory. New York: First Vintage International Edition, 1989: 310.

⁶Wittgenstein, Ludwig. On Certainty. New York: First Harper Torchbook Edition, 1972: 44e.

⁷Wittgenstein, Ludwig. On Certainty. New York: First Harper Torchbook Edition, 1972: 44e.

⁸Consider “Bridging the Kants” by O.G. Rose on how maturing is a continual process of falsification.

⁹Wittgenstein, Ludwig. On Certainty. New York: First Harper Torchbook Edition, 1972: 44e.

¹⁰Do note that certainty about what can be falsified is different than certainty over what cannot be falsified — the second is in regard to which the term “certainty deterrence” is more likely to apply meaningfully (as discussed in “The True Isn’t the Rational” by O.G. Rose).

¹¹Hunter, James Davison. Before the Shooting Begins. New York, NY: The Free Press, 1994: 35.

¹²Hunter, James Davison. Before the Shooting Begins. New York, NY: The Free Press, 1994: 222.

¹³Allusion to “Moderates are the Real Tough Guys” by Oliver Burkeman, as can be found here:

https://www.theguardian.com/lifeandstyle/2017/jan/13/moderates-are-real-tough-guys-oliver-burkeman

¹⁴Allusion to “One Way Not to Be Like Trump” by Peter Wehner, as can be found here:

https://www.nytimes.com/2016/12/17/opinion/sunday/one-way-not-to-be-like-trump.html?_r=0

¹⁵Allusion to “One Way Not to Be Like Trump” by Peter Wehner, as can be found here:

https://www.nytimes.com/2016/12/17/opinion/sunday/one-way-not-to-be-like-trump.html?_r=0

¹⁶Allusion to “One Way Not to Be Like Trump” by Peter Wehner, as can be found here:

https://www.nytimes.com/2016/12/17/opinion/sunday/one-way-not-to-be-like-trump.html?_r=0

¹⁷Allusion to “One Way Not to Be Like Trump” by Peter Wehner, as can be found here:

https://www.nytimes.com/2016/12/17/opinion/sunday/one-way-not-to-be-like-trump.html?_r=0

¹⁸Hunter, James Davison. Culture Wars. New York, NY. Basic Books, 1991: 319.

¹⁹Allusion to The Four Quartets by T.S. Eliot.

²⁰As discussed in “Self-Delusion, the Toward-ness of Evidence, and the Paradox of Judgment” by O.G. Rose, a “case” is that “toward” which phenomena become evidence. To engage in empathy is to move between cases, and hence to experience different sets of phenomena as evidence. If it is the case that art, relationships, and/or emotions are more likely than philosophy and debate to make a person see the world through another case (versus their own), then it is more likely that art, relationships, and/or emotions will change what people believe.

.

.

.

Objective

The hope of this work is to provide a rational defense for compromise and “middle grounds” based on the essential incompleteness of rationality, versus a claim that none of us are as rational as we think. In my view, numerous academics argue that humans are fundamentally irrational (due to cognitive biases, emotional influence, etc.), and that, as a result, we should expect less of rationality and be willing to compromise, expecting it to feel wrong. My intention is not to argue that this is false; rather, I hope to point out that this is a negative argument for why we should compromise, and I’m a strong believer that people don’t respond well to negative arguments.

If I tell you that you should compromise with people you disagree with because you’re not as rational as you think, you’ll probably not respond well. After all, it’s the people you disagree with who are the real irrational ones — who am I to tell you to be humble? Indeed, this kind of reaction isn’t hard to imagine: it’s almost inevitable. In my view, what has a much better chance of compelling people to open up to “middle grounds” is convincing them that reasoning itself falls short even when done brilliantly. That way, I’m asking us to compromise with people we disagree with not because “we’re not as rational as we think” (which is likely true), but because, even if we are incredibly rational, rationality itself entails essential limits. The problem is not with us, but with rationality; if anything, we are the victims. Rationality seems to be playing games with us, and regardless of our political leanings, we should join forces to overcome its trickery.

Note the change in tone?

By arguing that rationality on rational terms entails essential limits, the need for compromise is grounded not on a failure of ours, but on an incompleteness embedded in rationality itself. The case for compromise then rests on accepting the nature of reality, which isn’t an act of defeat, but of discernment. It doesn’t require humility or humiliation to accept that humans cannot fly without an airplane: it’s just the nature of things. Instead, our inability to fly became a challenge that we put before our minds and overcame; similarly, accepting the essential limits of rationality will present us with a challenge that I have no doubt we can rise to meet.

When I claim that we are “less rational than we think” due to emotions, cognitive biases, etc., I suggest (perhaps indirectly) that we should be more rational than we actually are: the dream of “autonomous rationality” still seems to lurk in the background. Frankly, my main target is “autonomous rationality” (as expanded on in “Deconstructing Common Life” by O.G. Rose), and though it may seem that democracy dies with this dream, I believe it is exactly the opposite: it is by continuing to pursue and trust in an impossible ideal that we set ourselves up for discouragement and disillusionment. Democracy dies when cynicism pours in. Furthermore, if the dream of democracy is for everyone to be unified on grounds of “autonomous rationality” and such grounds don’t exist, then democracy must fail. If democracy is to thrive, the goal must change.

Compromise is not innately good, but I fear that today the arguments justifying compromise imply this claim. It’s suggested that we “ought” to find a “middle ground” with people who think differently than we do, but how is it good to compromise with Nazis? What about people trying to destroy America? It just doesn’t seem true to people that compromise is “innately good,” and certainly sometimes it is not good. But when it is, if we will only compromise through “autonomous rationality,” then we won’t compromise at all.

The hope of this paper is to establish a strong rational foundation for compromise by arguing that, since “autonomous rationality” doesn’t exist, rationality can create multiple “internally consistent systems,” based on different fundamental axioms, within which there is never a “rational” reason to compromise. However, if there is an acknowledgment that rationality is essentially limited (and if it is successfully argued that it is rational to accept these limits that emerge between systems), then it becomes rational to act and think as if these limits exist and to accept compromise for social functionality. On these grounds, compromise between rationalities is not a betrayal of rational life but utterly necessary. To reiterate the point, my main concern is the dream of “autonomous rationality,” which is the idea that it is possible for rationality to ground itself “all the way down” to its very foundations. This is not possible: ultimately rationality must be grounded on something “arational” or “(un)rational” (not “irrational”), as discussed throughout the work of O.G. Rose.

I do think the case can indeed be made for compromise by arguing that rationality is always influenced by the unconscious, emotions, etc., but my concern is that this often seems to be the only argument around. My hope is to make a similar argument on rationality’s own terms (to argue regarding a “pure rationality” rather than claim rationality is always “infected,” which I do think is true but find less existentially satisfying). In other words, if we do rationality well (and even try to assume that “autonomous rationality” is possible), rationality eventually fails itself (precisely at the point when it is most nearly complete, the point of “establishing axioms” or “grounding”). It’s not that we’re irrational; it’s that rationality is incomplete.

We tend to think today that if everyone were rational, everyone would share view x, and therefore if there is disagreement on an issue, it must be because one side is not being as rational as the other. It’s nearly a dogma of democracy that rationality can eventually get us all to the same point of view (assuming everyone is indeed committed to being rational and/or epistemically responsible). And certainly, I think this often holds true: many opinions that people hold are not thought through and should be deconstructed, and indeed, in many debates, one side is in fact less rational than the other. When this is unveiled, if all parties are debating in good faith, all sides move closer to holding the same position.

My concern, though, is with any belief that disregards the possibility that different people could be fully rational and yet still disagree due to differences in fundamental axioms (and conflicting epistemic responsibilities). Axiomatic differences won’t be the core problem in every debate, but if they aren’t a category in our minds at all, then when we encounter them, we won’t have the eyes to see them. If axiomatic differences are real and cannot be solved by our “being more rational,” then compromise is rationally justified (on terms of rationality’s “essential incompleteness”). Compromise then is not a betrayal of rationality, but a rational solution to rationality’s own incompleteness: it is a fulfillment of rationality, not an abandonment.

In today’s terribly partisan age, arguments that bring people together are desperately needed. My concern is that claiming “compromise is good” is not nearly good enough: there needs to be an argument that rationally justifies compromise positively rather than negatively. Speaking personally, knowing about “the incompleteness of rationality” helps me find middle grounds without feeling like I’m betraying what I believe: in knowing about axiomatic differences and “the conflict of mind,” I feel much more existentially stable. The hope is that others could feel the same way, but I’ll leave it up to you to decide if the paper succeeds.

.

.

.

For more, please visit O.G. Rose.com. Also, please subscribe to our YouTube channel and follow us on Instagram and Facebook.
