The Conclusion of The Conflict of Mind
The End of Volume One of The True Isn’t the Rational Trilogy
Truth organizes values — if it is true that my child is upstairs reading, then it is an expression of honesty to say "my child is upstairs reading" rather than "my child is upstairs playing videogames" — but we can only be confident, never certain, about whether our ideas are rightly organized. At the same time, there is a moral obligation on intellectual grounds ("epistemic responsibility") compelling us to do the best we can to rightly organize our values, which requires us to determine a truth we can never be completely sure we have determined correctly.
We have tools like falsification to help us determine truth, without which we seem doomed to end up "thinking for ourselves" more like a madman than like a genius, but unfortunately not everything that is true is falsifiable (how much easier our situation would be if this were so). There are many truths that require numerous epistemic and intellectual models to grasp, but it is rarely clear which models we should use, and thinking ultimately entails essential limitations we cannot transcend, rendering thinking incomplete if not downright useless in some situations (with the likelihood of uselessness increasing the larger and more complex the subject). But pushed by epistemic responsibility, political problems, and the like, we still have to make decisions about what is true, and to establish values upon those decisions, even regarding what thinking is ill-equipped to understand.
Thanks to the internet and Globalization, we are presented today with infinite information (practically speaking), and over enough time, any ideas or "systems of ideas" that "could" be true — that lack internal contradiction — will be discovered and seem plausible, and consequently it will be epistemically immoral to outright dismiss them. But it is practically impossible for us to investigate all ideas and all systems of ideas, and yet that is what we will be compelled to do, especially if we are democratic voters who must make decisions about various issues, and especially if the State is large and elections entail very real and practical consequences. It would be one thing if elections didn't matter, but the twin developments of State growth and internet growth place us in a double bind that only seems to tighten every year. This isn't to say we would necessarily be better off if the State grew and we had no way of knowing about it, or that we'd be better off if the State were small and the internet didn't exist, but it is to say we suffer a difficult reality, whether it's the best of all possible worlds or not.
(Please note, moving forward, that arguments made about a large State could just as well apply to large Corporations or large systems in general. Additionally, in my mind, the State today is a “Corporate State,” a mixed market: at this point, to criticize the State is to also criticize the market.)
As the internet expands the scope of information, so expands the scope of information we feel epistemically responsible to learn (after all, if we could learn about x, why don't we learn about x?). Likewise, as the State expands, so grows the range of the State's responsibilities, which strengthens the imperative for us to be involved in politics and vote well. But the bigger the State becomes (regardless of the ideology or party running it), the more complex its responsibilities, and so the lower the probability we will understand what we are voting for, regardless of whether we are voting on individual issues or for a representative to handle those issues on our behalf. As it becomes more important for us to be involved democratically, it becomes less likely that we will be involved well.
If our ideas and worldviews feel too uncertain — which all of them are vulnerable to feeling, because certainty is impossible — then we begin to feel existentially anxious (a condition in which totalitarianism can become appealing, as discussed extensively in "Belonging Again" by O.G. Rose). And the larger the State becomes and the more the internet expands, the more we can feel overloaded both with information and with a responsibility to understand that information well so that we are good voters, which we want to be because what happens with the ever-growing State does impact us in very real and tangible ways. But we want the impossible — the responsibility is too great — and yet failure to do the impossible impacts us: we are penalized for failing to accomplish what could not be accomplished. We want to participate in something that impacts us, but we cannot fully understand the thing impacting us (as it probably can't fully understand itself), and so both alienation and helplessness can set in. Suffering unfairness that is a product of no one in particular, but rather of a system, the citizenry is not likely to respond well.
If we are aware of the limits of thought, acknowledging those limits is integrated into our epistemic responsibility, so learning the limits of thought can help us avoid having our epistemic responsibility compel us to attempt things that are epistemically wrong, impractical, and/or impossible. But just because we know we cannot understand x does not mean the State cannot be responsible for x and cannot make decisions about x on uninformed grounds: our situation would be much better if it were impossible for the State to make decisions about what transcends knowability. Unfortunately, the larger the State is, the more the State will be responsible for (and have to make decisions about) issues which are "Pynchon Risks" and impossible to fully grasp (which may include the impossibility of grasping that we don't fully grasp them). And thus, even if we know the limits of thought and know we can't know everything, we will still feel compelled by necessity to try to do something, risking paranoia and mental breakdowns (perhaps also on a collective level). This may cause grave existential anxiety and possible catastrophe, in the midst of which a totalitarian dictator might become appealing. Yes, knowing the limits of thought can help us cope, but knowing the limits probably won't make us feel completely better. After all, our lives are being shaped right under us.
Even though we cannot possibly investigate all dimensions of the State to be epistemically responsible about opposing its growth, can we oppose State growth and still be epistemically responsible on grounds of avoiding "conflict of mind"-situations and the "double bind" described above? Perhaps, but it won't be easy, because we always "might" be better off due to State growth. Just because we cannot fully comprehend everything the State does, it does not follow that what the State does is bad. We can't know for sure either way: all we know is that we are vulnerable (and perhaps the State knows that the more complex it becomes, the less we can ever be sure that it acts poorly, and thus it has an incentive to grow its size to maintain "plausible deniability" regarding its value). Even if we might be epistemically responsible in some ways to oppose the State on grounds of avoiding "conflict of mind"-situations, we might at the same time be directly immoral, for we could oppose programs that help the poor, the marginalized, and so on. We'd have to determine whether the State actually helped different communities, for truth organizes values, and yet determining this might be impossible, or the State might help some communities while hurting others (and a cost/benefit analysis would be impossible), etc. The larger the State, the more it transcends knowability and judgment.
Also, determining whether the State would be good or bad in, say, healthcare would require a "flip moment" of the State actually getting involved in healthcare (and that's even assuming the "flip moment" is something we could ultimately comprehend enough to judge its consequences). Despite the possibility of more "conflict of mind"-situations (couldn't we avoid those by turning our minds off and just listening to our leaders?), we truly always "might" be better off for a given example of State growth, and so there will always be reason to "give it a try."
There will likely be internally consistent arguments for why State expansion will work and internally consistent arguments for why it won't, and all things being equal, why not try to help the vulnerable or fix something broken? There will likely be a moral imperative for the State to act (which could justify the State acting without democratic approval, do note). Thus, "flip moments" will aid State growth, and if we recall "Ascent Landing," where truth is indeterminable, it is rational to focus instead on what program could help other people. Between internally consistent systems, truth cannot be determined, and thus the default will rest on the side of letting the State grow to possibly help the vulnerable. But this assumes State growth will actually help the vulnerable, which it easily might not (and recall that truth organizes values). What if we can't tell whether the program will actually help others or actually hurt them? The State may remind us then that it should try, unless it could be argued successfully that it shouldn't because trying might make the situation worse (do note that after the State grows, it is practically impossible for it to shrink again without a revolution, because the State becomes entrenched and always "might" be about to work). But that argument would require democratic debate, which currently seems broken (and in being so broken, may leave the decision up to the State and representatives — pure power).
As growth of the State makes "conflict of mind"-situations more likely and yet simultaneously and paradoxically increases our need to be involved in the State, the essential limits of thinking will hinder democratic exchange itself. Even if democracy worked perfectly, under a large State, we would be facing an impossible task, but with democracy weakening, our task is all the more impossible, increasing our existential anxiety (and the likelihood of giving in to the temptation of totalitarianism) all the more.
Why is democracy weakening, as polls in 2020 suggest average people believe? I would argue a reason for this is that democracy is always weak in the same way that thought itself is weak (considering its essential limits, its influence compared to experiences, etc.), but that this weakness can be hidden and lived with, generally unnoticed, when most people in society subscribe to basically the same "first principles" and "axioms." Under Pluralism, this ceases to be the case, and the inability of rationality to overcome ideological differences ("truths") becomes all the more unignorable. It is nearly impossible to "compel" people out of their truths, first because there is no objective standard of justification to make it so that people "ought" to believe x instead of y (especially where falsification does not apply), and second because "ideas are not experiences," and it is very difficult to motivate people out of their ideology without making them undergo certain experiences (which, not sharing our ideology, they have no reason to follow us into, and which, even if they did, they have no reason to interpret the same way we do). Furthermore, changing worldviews can cost a person friends, family, and opportunities — there are incentives to keep the worldviews and values we "absorb" (and do note another reason democracy is weak is that — despite what we think — most of us were never "reasoned" into our "absorbed" worldviews in the first place: democracy demands something that is unnatural).
Democracy is weak, but perhaps when systems are small, “weak democracy” is all we need, because the demands on it are so much less. Even if “weak democracy” is the only kind of democracy possible, that doesn’t mean democracy can’t work, only that it might be best on a small scale. Unfortunately, the larger the State becomes, the more we will feel that we must make democracy work, and it will likely not prove up to the challenge, compounding our existential anxiety all the more.
But could we escape "conflict of mind"-situations by avoiding the question "What is true?" and instead just asking "What helps people?" Perhaps, but how often can we really be sure that truth is indeterminable? How do we determine truth is indeterminable except by engaging in deep thought, at which point we've already entered into the problem from which there seems little if any hope of escape? Also, it is impossible to "help" others outside of a belief about what is actually helpful — a truth — and so even if truth is indeterminable, we must ultimately act on behalf of a truth. And if we give up on truth, we will still have values, but we will forgo the possibility of systematically shaping, controlling, and containing those values, putting us at risk of not simply chaos, but a chaotic whirlwind supercharged with values and passions which can sweep anybody up (as mentioned in "Truth Organizes Values").
That said, if we focused on helping people and got it wrong on a local level, the possible unintended consequences would be much smaller and more contained. Additionally, if David Hume was correct that "autonomous rationality" is impossible and that rationality must instead ultimately defer to and ground itself in a "common life," then for the sake of our minds and for our political lives, we need to keep our State systems small. System size is not merely a matter of making sure the people feel represented but an epistemological necessity if we are to best manage — not solve, for they cannot be solved — our epistemological quandaries.
We are compelled by epistemic responsibility to understand and participate in the world with thinking that is inherently incomplete (an act which seems epistemically immoral unless we understand this incompleteness of thinking to be essential). We will make mistakes, so the question becomes how best to manage those mistakes. The greater the complexity and scope of the subject, the greater the chance we will make mistakes (and consequential mistakes at that). Thus, small States and systems are more epistemically responsible, and it is best for us to resist large systems even in the face of "flip moments" and experiences that suggest their superiority.
On the topic of free will, Noam Chomsky argues that knowing whether agency exists may easily fall beyond what human rationality is capable of determining. We know that pigs, for example, can understand only so much due to the natural limitations placed on their reasoning by their biology, but when it comes to humans and rationality, we seem to "practically" assume that rationality is not biologically limited, despite the evidence in nature that all other creatures possess minds that are limited (compared to us). Why should we assume that, if we were compared to angels, we too would not be so obviously limited? To higher beings, the dream of "autonomous reasoning" is perhaps beyond laughable, and yet to us the myth seems sacred. But that sense of holiness is perhaps due to the very limitations on our thinking that make the myth of "autonomous rationality" in fact a fairytale.
So, in closing, let us ask: what do we do when we think? It feels like the main function of the brain is to help us understand the world, but the brain is less interested in discovering truth than we might think. The brain is more interested in helping us feel safe.
Feeling safe entails feeling certain and right, and so the brain is prone to make us tribalistic and intellectually arrogant. Worse yet, the brain then works to make us feel like this tribalism and arrogance are a result of us "knowing the truth" and others rejecting it, rather than a result of the brain's efforts to make us feel safe. But could any of us function if we didn't feel safe? It wouldn't be easy, and so there is something to be said for the necessity of the brain gradually introducing us to difficult truths. If we all started out with a sense of the difficulty and size of the challenge, we might all lose our minds before our minds could start to form.
Our brain is a frenemy: it is what makes understanding truth possible but simultaneously what makes knowing the truth so hard. It’s limited, prone to confuse and bind us with epistemic responsibility, and has a tendency to make us believe that rationality can be its own axiomatic grounding. But we have no other tool than the brain: it’s as if we need to tighten a screw and only possess a hammer. If the screw is not tightened, a pillar in our house could fall and bring down the whole roof. Then again, it might be best to leave the screw alone and live with risk. After all, if we’re always at risk, nothing manifests.
The brain wants to simplify the world, translate the dynamic into the linear, and bend things that are crooked into things straightforward. In so doing, the brain can make things harder than necessary, say in personal relationships, where the brain's efforts to understand can create drama. "Harder" and "complex" are not synonyms (though one can follow the other), and the brain can increase difficulty by trying to make things simple. Certainly, the truth might be ultimately simple, but that wonderful simplicity should be found on the other side of complexity (to allude to Oliver Wendell Holmes). Unfortunately, the brain wants simplicity now and would rather not go on a journey (especially for something it already thinks it has, and especially considering that, during the journey, life will start to feel worse and more complex before it starts to feel better and more manageable). And the brain especially doesn't want to go on a journey with no guarantee of an arrival, as is increasingly possible with larger systems. The brain is already prone to mislead us in small societies and small systems, let alone when faced with "Pynchon Risks." But complex things are often indeed complex, and once we let systems become that way, the brain's natural tendency to (unjustly) simplify must be especially fought. It would perhaps be best to never let systems grow, but system growth seems natural and historically inevitable (and can indeed prove advantageous). If history repeats, the brain will be its own enemy, and the brain has yet to stop history from repeating. We only have one tool with which to change things, and it has yet to work for long.
If faced with a large State or system, the brain will likely attempt to simplify complexities that cannot be simplified, and work to avoid “conflict of mind”-situations that can only be avoided through self-deception and the empty promises of totalitarianism. The larger the State becomes and the greater the complexity the brain faces, the greater these temptations will prove. And yet if we try to face them, we may only find impossible problems, and so suffer existential anxiety all the same.
In this book, we have explored ways that the brain and thinking are essentially limited. Knowing these problems, we will be better able to wrestle the brain and stop it from drawing us into self-deception. This is no easy fight, but it is a necessary one. It's tempting here to discount thinking and rationality entirely and consider them broken and/or corrupt beyond repair, but this would leave us vulnerable to totalitarianism through "the banality of evil," general manipulation, and the inability to think for ourselves. The dream of the Enlightenment was an "autonomous rationality" that could reliably totalize society for the sake of evolving it, but if we decide to discount rationality entirely, we will be making the same mistake as the Enlightenment from the other direction: we will not have learned that rationality is essentially limited, and we will still be treating it like either God or the Devil. Instead, if we abandon the dream of "autonomous rationality" and accept an essentially limited version of rationality, then, having placed reason within its proper bounds (a Kantian dream), we will find it all the more useful to us. A world in which we think of rationality as entirely good or entirely useless is a world in which we are irrational about rationality. The truth is, our minds are human.
The brain is the frenemy, but if we know the games it plays, we might be able to treat it more like a friend and avoid being crushed between epistemic responsibility and epistemic impossibility. Then again, at the end of the day, maybe this whole book is just another example of “an internally consistent system” that lacks any truth. Perhaps “the conflict of mind” isn’t real. Perhaps “autonomous rationality” will save us. Perhaps. Always perhaps.
.
.
.
For more, please visit O.G. Rose.com. Also, please subscribe to our YouTube channel and follow us on Instagram and Facebook.