The Exit of References
Is it a problem when our ideas can’t easily be linked to established thinkers or schools of thought? Or is it a problem when we feel like ideas must always be associable and linked?
Should our works be filled to the brim with references? It can feel like we’ve lost our minds if we make an argument without pulling out supporting quotations from Kant or Keynes — I mean, who do we think we are? Well, if we put too much stock in the thought that we are “no one,” that too could be overly self-focused. I’ve always liked Keller’s “self-forgetfulness” myself…
It’s stressed in “Basic Math,” but if a girl, a boy, a Republican, a Democrat, a monster, a saint — I could go on — says, “2 + 2 = 4,” then the statement is true. What is true is true regardless of who says it. We’ve all heard that a thousand times, but goodness is it hard to remember, and it seems so natural for us to say, “Yeah, but (fill in the blank).” Why? Why is it so hard to avoid falling into logical fallacies, either on the side of believing x is false because an idiot said it, or on the side of assuming y is true because a genius said it? None of that follows, and we “know” that directly, but then we proceed to go back to “practically believing” it is true. Why?
I think we can start to understand why “ad hominem” fallacies are so common when we understand the divide between “truth” and “certainty.” As argued in “On Certainty” by O.G. Rose, certainty is basically impossible except in very general and rare circumstances, but that’s fine: the impossibility of certainty doesn’t necessitate the impossibility of confidence. Mostly, we are all in the business of gaining and living according to confidence, and we tend to call “certainty” an emotional state where we feel we have enough of a sense of “how things are” to feel “existentially stable” about our world and life. But this isn’t certainty; it’s confidence (and this technical distinction is important, given that certainty entails a lot of unintended consequences…)
Anyway, the point is that we tend to fall into logical fallacies not so much because we “genuinely” believe these fallacies lead to truth — it’s far worse than that. Rather, we consciously know these fallacies are fallacies, but then practically act like they are not so that we can organize who we should listen to and who we shouldn’t (something we can do precisely because certainty is mostly impossible), all so that we can gain existential stability. We discuss “logical fallacies” as if we use them as shortcuts to “truth,” but I would submit that we mostly use them to create “senses of certainty” so that we can gain existential and psychological stability. We often associate “fallacies” with “intellectual efforts,” and though there is truth to this, we need to associate them more with “emotional efforts,” as matters which arise in response to “the problem of certainty.”
Most of us “know” logical fallacies are indeed logical fallacies; what we tend to miss is how they are “emotional reliefs” that we practice while intellectually acknowledging the shortcomings of logical fallacies. We just “happen” to always listen to the commentators who agree with us; we just “happen” to read the books which share our views; and so on. There is no “direct” fallacious discounting of people who don’t think like us, but we are driven by our emotions “toward” only reading people we agree with, which means we are “practically” committing a logical fallacy even if we are not intellectually.
Our “reference culture” is a litmus test of how prone we are to falling into “ad hominem” fallacies. A society which assumes something is false without support from references is a society which is likely lacking in its capacity to handle existential anxiety. Not necessarily, no, but an overemphasis on the need for references and allusions can suggest a people who are existentially untrained (it’s a “high order”-relationship, as discussed in “Experiencing Thinking” by O.G. Rose), which would suggest a society trying to hide from itself the “break” between “truth” and “certainty,” a self-concealment which also hides from people their “practicing” of logical fallacies emotionally even if not intellectually.
Why do I say this? Well, because there is something existentially comforting about only believing what “smart people” support, as there is something comforting about dismissing what “smart people” dismiss. In this way, our culture of requiring references could be evidence of a culture which is trying to avoid existential anxiety. Now, a balance is needed here, because if x person says something and lacks any support, then there would be “good reason” to doubt it or question it (and without such heuristics, we’d be prone to “conspiracy theories” and the problems of “Pandora’s Rationality” in general). However, we have gone too far if we “practically act” as if it’s “necessarily the case” that references, allusions, and support are needed. References, associations, and the like are fine as “directions” and means to ground an argument, but they are problematic when they become ways to avoid existential anxiety, as I think is natural for us to do. Sartre referred to existential anxiety as a state of “no exit,” hence why we can think about the problematic “exit of references.”
An idea is good, beautiful, and/or true regardless of its associations, backing, academic support, and the like. Indeed, perhaps universities and “great minds” help us locate great ideas, but ideas are not great because they are located by “great minds”: they are what they are because they are what they are. And we all know this, but “practically” speaking, for the sake of gaining an emotional and stabilizing sense of “certainty,” we seek the ideas which others agree are good, for we derive from others a sense of stability. If all the professors at the university believe “x is true,” though we know it doesn’t follow that “x is necessarily true,” it is still easier to “feel like” x is true and that we can reliably believe in it. This is described throughout the sociological works of geniuses like Dr. James Hunter and Dr. Peter Berger, and basically the idea is that it’s incredibly difficult for us to believe x if everyone around us believes y, even if x is undeniably true. This is how “ad hominem fallacies” slip into our lives even when we know better: because believing something true doesn’t mean we will necessarily “feel like” it is true, we are always seeking emotional support, and our “emotional ad hominem fallacy” is gravitating toward beliefs that grant us emotional and social support (all while we intellectually agree that something isn’t true or false because x person believes or disbelieves it).
Theoretically, if no one on earth believed “2 + 2 = 4,” but we knew the formula was true, we’d likely deemphasize the belief when talking with other people. We might not outright declare everyone else wrong and ourselves right, but we’d likely avoid the topic in conversation as much as we could, agree that “math was probably created, not discovered,” and also emphasize how “it was just our opinion.” In other words, in social settings, we’d do everything in our power to “downplay” our position even as we held it, well aware that we are “alone” in our belief. It’s simply too existentially difficult to believe something that no one else believes, precisely because certainty is impossible. If certainty were actually possible, perhaps believing something by ourselves, without social support, would be doable, but the impossibility of certainty only increases the likelihood that it proves impossible to believe something alone (we’d likely have to live in an isolated cave).
Because certainty is mostly impossible, beliefs are almost always social. We “absorb” most of what we believe versus being “convinced” into our beliefs (following “Compelling” by O.G. Rose), and if we are “convinced of x,” it is likely because people around us have likewise been “convinced of x.” And since certainty is mostly impossible, there will always be “space” and “reason” to be “convinced” of something. As we cannot be “certain” that “x is true,” we also cannot be certain that “x is false” (except in obvious circumstances, perhaps, such as how we can be certain that “We are not on Mars” while visiting Detroit), and that means we can always “reasonably” entertain x. And if everyone around us begins considering and believing in x, we will not have “certainty” that we shouldn’t do the same. And thus we likely will.
Truth is not social — it is what it is — but existential stability and confidence are strongly influenced by social forces. Since we cannot know with certainty what is true, we are always vulnerable to being influenced by social forces, for what we ascribe to as “true” is held through an intellectual assent which we can always find reason to change. And so what we believe is true is practically social even if it is theoretically noncontingent and nonsocial — and this break between “the practical” and “the theoretical” precisely functions to hide from us how vulnerable we are to “emotional ad hominem fallacies.”
Ultimately, due to existential anxieties, we naturally “outsource our judgment” to society regarding “what is true” (which in “intellectual fields” manifests as a reliance on references), and it makes sense that we would outsource our judgment to experts, renowned thinkers, and the like, for they are supposedly “the best the society has to offer.” And this is not inherently wrong: after all, we can’t think of everything. But there can be a problem if, instead of “outsourcing our thinking” due to practical and necessary limitations, we begin outsourcing it to avoid existential and psychological tensions, for this means we are “outsourcing our thinking” to hide from ourselves the reality that we have not cultivated our minds and epistemological skills enough to handle the uncertainty of life. We have likewise probably not learned to exercise the “epistemological humility” we need to thrive under Pluralism versus contribute to Pluralism becoming tribalism. When people feel anxious, they tend to react strongly and viciously, unless, that is, they have experience with the feeling and can “deescalate themselves.” Unfortunately, a culture which is obsessed with references and “sources” may not be a culture which is able to exercise those skills. It is at risk.
To emphasize, we naturally “outsource our judgment” to social units, which is to say we assume an idea is good because it can be associated with x group and bad because it isn’t associable at all. This can create an incentive for people not to cultivate their capacity to think for themselves, and it also provides people a way to avoid the existential anxiety caused by taking a stand and judging something as good or bad by themselves. If others say x is good, we feel more comfortable saying x is good too, but all of this can function as escapism which hurts the life of the mind.
If x is true, it is true regardless of who thinks it, but without social support, it is unnatural for us to think x ourselves. This is an inescapable tendency of human nature, but if we don’t learn to resist it to healthy degrees, we will probably not develop the capacities to handle the inevitable existential anxiety of Pluralism (as described throughout “Belonging Again” by O.G. Rose). Existential escalation then seems inevitable.
For more by Johnny R. O’Neill, please see: