Is it a problem when our ideas can’t easily be linked to established thinkers or schools of thought? Or is it a problem when we feel like ideas must always be associable and linked?
Should our works be filled to the brim with references? It can feel like we’ve lost our minds if we make an argument without pulling out supporting quotations from Kant or Keynes — I mean, who do we think we are? Well, if we put too much stock in the thought that we are “no one,” that too could be overly self-focused. I’ve always liked Timothy Keller’s “self-forgetfulness” myself…
It’s stressed in “Basic Math,” but if a girl, a boy, a Republican, a Democrat, a monster, a saint — I could go on — says, “2 + 2 = 4,” then the statement is true. What is true is true regardless of who says it. We’ve all heard that a thousand times, but goodness it is hard to remember, and it seems so natural for us to say, “Yeah, but (fill in the blank).” Why? Why is it so hard to avoid falling into logical fallacies, either on the side of believing x is false because an idiot said it or on the side of assuming y is true because a genius said it? None of that follows, and we “know” that when asked directly, but then proceed to go back to “practically believing” it is true. Why?
I think we can start to understand why “ad hominem” fallacies are so common when we understand the divide between “truth” and “certainty.” As argued in “On Certainty” by O.G. Rose, certainty is basically impossible except in very general and rare circumstances, but that’s fine: the impossibility of certainty doesn’t necessitate the impossibility of confidence. Mostly, we are all in the business of gaining and living according to confidence, and we tend to call “certainty” an emotional state where we feel we have enough of a sense of “how things are” to feel “existentially stable” about our world and life. But this isn’t certainty; it’s confidence (and this technical distinction is important, given that certainty entails a lot of unintended consequences…).
Anyway, the point is that we tend to fall into logical fallacies not so much because we “genuinely” believe these fallacies lead to truth — our problem is arguably far worse. Rather, we consciously know these fallacies are fallacies, but then practically act like they are not so that we can organize who we should listen to and who we shouldn’t. We discuss “logical fallacies” as if we use them as shortcuts to “truth,” but I would submit that we mostly use them to create “senses of certainty” so that we can gain existential and psychological stability. We often associate “fallacies” with “intellectual efforts,” and though there is truth to this, I think we need to associate them more with “emotional efforts,” as matters which arise in response to “the problem of certainty.”
Most of us “know” logical fallacies are indeed logical fallacies; what we tend to miss is how they are “emotional reliefs” that we can practice while intellectually acknowledging the shortcomings of logical fallacies. We can just “happen” to always listen to the commentators who agree with us; we can just “happen” to read the books which share our views; and so on. There is no “direct” fallacious discounting of people who don’t think like us, but we are driven by our emotions “toward” only reading people we agree with, which means we are “practically” committing a logical fallacy even if not intellectually.
Our “reference culture” is a litmus test of how prone we are to fall into “ad hominem fallacies.” A society which assumes something is false without support from references is a society which is likely lacking in its capacity to handle existential anxiety. Not necessarily, no, but an overemphasis on the need for references and allusions can suggest a people who are existentially untrained (it’s a “high order”-relationship, as discussed in “Experiencing Thinking” by O.G. Rose), which would suggest that the society is trying to hide from itself the “break” between “truth” and “certainty,” a self-concealment which also hides from the people their “practicing” of logical fallacies emotionally even if not intellectually.
Why do I say this? Well, because there is something existentially comforting about only believing what “smart people” support, as there is something comforting about dismissing what “smart people” dismiss. Now, a balance is needed here, because if x person says something and lacks any support, then there would be “good reason” to doubt it or question it (and without such heuristics, we’d be prone to “conspiracy theories” and the problems of “Pandora’s Rationality” in general). However, we have gone too far if we “practically act” as if it’s “necessarily the case” that references, allusions, and support are needed. References, associations, and the like are fine as “directions” and means to ground an argument, but they are problematic when they become ways to avoid existential anxiety, as I think is natural for us to do. Sartre referred to existential anxiety as a state of “no exit,” hence why we can think about the problematic “exit of references.”
An idea is good, beautiful, and/or true regardless of its associations, backing, academic support, and the like. Indeed, perhaps universities and “great minds” help us locate great ideas, but ideas are not great because they are located by “great minds”: they are what they are because they are what they are. And we all know this, but “practically” speaking perhaps, for the sake of gaining an emotional and stabilizing sense of “certainty,” we seek the ideas which others agree are good, for we derive from others a sense of stability. If all the professors at the university believe “x is true,” though we know it doesn’t follow that “x is necessarily true,” it is still easier to “feel like” x is true and that we can reliably believe in it. This is described throughout the sociological works of geniuses like Dr. James Hunter and Dr. Peter Berger, and basically the idea is that it’s incredibly difficult for us to believe x if everyone around us believes y, even if x is undeniably true. This is how “ad hominem fallacies” can slip into our lives even when we know better: because believing something true doesn’t mean we will necessarily “feel like” it is true, we are always seeking emotional support, and the “emotional ad hominem fallacy” is gravitating toward beliefs that grant us emotional and social support (all while we intellectually agree that something isn’t true or false because x person believes or disbelieves it).
Theoretically, if no one on earth believed “2 + 2 = 4,” but we knew the formula was true, we’d likely still deemphasize the belief when talking with other people. We might not outright declare everyone else wrong and ourselves right, but we’d likely avoid the topic in conversation as much as we could, agree that “math was probably created, not discovered,” and also emphasize how “it was just our opinion.” In other words, in social settings, we’d do everything in our power to “downplay” our position even as we held it, well aware that we were “alone” in our belief. It’s simply too existentially difficult to believe something that no one else believes, precisely because certainty is impossible. If certainty were actually possible, perhaps believing something by ourselves, without social support, would be doable, but the impossibility of certainty only increases the likelihood that it proves impossible to believe something alone (we’d likely have to live in an isolated cave).
Because certainty is mostly impossible, beliefs are almost always social. We “absorb” most of what we believe versus being “convinced” into our beliefs (following “Compelling” by O.G. Rose), and if we are “convinced of x” it is likely because people around us have likewise been “convinced of x.” And since certainty is mostly impossible, there will always be “space” and “reason” to be “convinced” of something. As we cannot be “certain” that “x is true,” we also cannot be certain that “x is false” (except in obvious circumstances, perhaps, such as how we can be certain that “We are not on Mars” while visiting Detroit), and that means we can always “reasonably” entertain x. And if everyone around us begins considering and believing in x, we will not have “certainty” that we shouldn’t do the same. And thus we likely will.
Truth is not social (it is what it is), but existential stability and confidence are strongly influenced by social forces. Since we cannot know with certainty what is true, we are always vulnerable to being influenced by social forces, for what we assent to as “true” is held through an intellectual assent which we can always find reason to revise. And so what we believe is true is practically social even if it is theoretically noncontingent and nonsocial — and this break between “the practical” and “the theoretical” precisely functions to hide from us how vulnerable we are to “emotional ad hominem fallacies.”
Ultimately, due to existential anxieties, we naturally “outsource our judgment” to society regarding “what is true” (which in “intellectual fields” manifests as a reliance on references), and do so because (for example) to speak for ourselves risks saying something mistaken that we will then have to own, but if we speak for Aristotle, then there is a buffer: the mistakes we utter are the mistakes of Aristotle. Furthermore, if we voice Aristotle’s opinion, we cannot readily be accused of thinking that we know what’s best, which others can interpret as prideful and assumptive. References mediate, which is to say references can be used in service of “escaping” anxiety.
Still, referencing isn’t “always good” or “always bad” — life is more complex. After all, it makes sense that we would outsource our judgment to experts, renowned thinkers, and the like, for they are supposedly “the best the society has to offer.” And this is not inherently wrong: after all, we cannot think of everything. Yes, we might employ references to win “glass bead games,” but we might also strive to honor the work of others to which we are indebted, as it can also help people we are speaking with know what “language game” or “mode of discourse” we are using for our conversation (Aristotle uses terms like “essence” and “substance,” and by referencing him, I can signal that I will be using the terms similarly). Furthermore, since certainty is mostly impossible in this life (we must settle with confidence), we are always looking for “reasons to believe” things, and if Aristotle said x and y, then there is “more reason to take seriously” x and y than if a random person said it. No, correctness is not guaranteed, but we at least have a guide to know where to take conversation.
Thinking without authorities would be almost impossible (there are arguments to be made that the phrase “think for yourself” should be replaced with “interpret for yourself”), and yet authorities also threaten thinking (as discussed in “The Authority Circle” by O.G. Rose). Indeed, there can be a problem if instead of “outsourcing our thinking” due to practical and necessary limitations, we begin outsourcing it to avoid existential and psychological tensions, for this means we are “outsourcing our thinking” to hide ourselves from the reality that we have not cultivated our minds and epistemological skills enough to handle the uncertainty of life. We have likewise probably not learned to exercise the “epistemological humility” we need to thrive under Pluralism versus contribute to Pluralism becoming tribalism. When people feel anxious, they tend to react strongly and viciously, unless, that is, they have experience with the feeling and can “deescalate themselves.” Unfortunately, a culture obsessed with references and “sources” may not be one able to exercise those skills. It is at risk.
To emphasize, we naturally “outsource our judgment” to social units, which is to say we assume an idea is good because it can be associated with x group and bad because it isn’t associable at all. This can create incentive for people not to cultivate their capacity to think for themselves, and it also provides people a way to avoid the existential anxiety caused by taking a stand and judging something as good or bad by themselves. If others say x is good, we feel more comfortable saying x is good too, but all of this can function as escapism which hurts the life of the mind.
If x is true, it is true regardless of who thinks it, but without social support, it is unnatural for us to think x ourselves. This is an inescapable tendency of human nature, but if we don’t learn to resist it to healthy degrees, we will probably not develop capacities to handle the inevitable existential anxiety of Pluralism (as described throughout “Belonging Again” by O.G. Rose). Existential escalation then seems inevitable.
For more by Johnny R. O’Neill, please see: