As Featured in The Map Is Indestructible
Pluralism, Indestructibility, Stickiness, Management, and Democracy
Imagine that there were equally rational and true reasons to believe in Christianity, Judaism, Islam, Atheism, Jainism, Sufism, Scientism, and let’s say one hundred other ideologies and/or worldviews — which ideology should we choose? More than likely, we would choose the ideology we were already most associated with, which would probably be deeply linked up with the community into which we were born. And “choose” might not be the right word: rather, we more so just “absorb” the ideology of our environment, which doesn’t mean the ideology is false, please note (everyone to some degree “absorbs” a worldview). But let’s pretend we were born behind Rawls’ “veil of ignorance” and had no idea that ideologies even existed, that we didn’t know the difference between secular and religious systems of belief, and that we at eighteen showed up on a college campus completely and utterly free of any ideological leanings — and then suddenly we were presented with a hundred ideologies that were all equally rational and plausible. How would we pick a story? Could we stick with not having an ideology? Such could still be an ideology, and furthermore it seems impossible to never “absorb” a worldview at all.
“The Map Is Indestructible” by O.G. Rose argued that an ideology is entirely internally consistent, to the point where (critically) even its incompleteness proves justified within the system (to incorporate Gödel). In other words, in an ideology, no essential contradictions are found, only “apparent” ones, and considering this, ideologies and/or “maps” are indestructible: all criticisms can be countered, avoided, and/or overturned. Thus, assuming Christianity, Islam, and Atheism are all indeed true ideologies (which I think we have reason to believe considering how old they are and how much “historic testing” they have persevered through), relative to their assumptions and axioms, there is equally rational reason to be a Christian as there is to be a Muslim or an Atheist, so why be one versus the other, or why stay in one as opposed to leave it? Will time and history help us deal with this problem? That’s a question which deserves attention, but considering what Nassim Taleb told us in Antifragile (that the older a book is, the greater the likelihood it will be around in the future), the older an ideology is, the greater the chance it is an “indestructible map.”
Personally, after years studying the apologetics of various religions, I have come to think that at least the major religions of the world are all “internally consistent systems” (though ultimately I’ve only read a very small percentage of all the books there are to read). Somewhere in the mass canon of works written supporting Christianity (for example), there seems to be an answer to every objection, whether in C.S. Lewis, James K.A. Smith, Timothy Keller, N.T. Wright, St. Augustine, St. Aquinas — the apologetic response seems out there. It takes time to rationally conclude “Christianity is an internally consistent system,” yes, and we also must take a Pynchon Risk — but ultimately, this seems to be the case. Problematically though, so is my experience with Atheism, Hinduism, Islam — the systems are all “internally consistent,” and so has been my experience with Conservatism, Libertarianism, Liberalism, Unitarianism, Empiricism, Stoicism — the systems consist of no essential contradictions, even if entirely wrong.
Well then, what are we to think? What are we doing when we do democracy? Is it more so a way of social and psychological management (or distraction) than it is debate and persuasion? Is unity in Pluralism impossible? Is anyone rational for ascribing to any worldview, or does the category of “rational” itself not really apply to any of us? Let’s put it another way: in my experience, I have “no reason not to think” that most worldviews are “internally consistent” (as possible because “reality is situational”), and that leaves us with a number of severe problems which, if not addressed, could make Pluralism prove our downfall (and, with a nod to Hegel, there’s no going back from Pluralism without dire consequences). Are we doomed? “A Little Fable” by Kafka? “No exit?” Perhaps, but perhaps it’s rather that thinking plays a different role from what we have often thought.
I
We seem prodigious at ideology preservation and geniuses at figuring out how our ideologies are internally consistent, and thanks to this genius, we can experience ourselves questioning and examining our (indestructible) ideologies, and so have reason to assure ourselves that we are critical thinkers and not ideological. We easily find our views “internally consistent,” a conclusion we can take as a testament to our genius, our willingness to be critical of our beliefs, and the legitimacy of our beliefs. And if others think differently from us, how could they? Our beliefs lack “essential contradiction,” yes? The others must not have examined themselves like we have…They must not be as responsible or educated…And they want political influence? Are we fools?
Geniuses at creating internal consistencies, perhaps it is no wonder the world is full of indestructible maps, but then again perhaps the maps are indestructible because they are true. Can we say either way? Again, we are geniuses at ideology preservation, and this is perhaps why humans have been capable of creating so many worldviews that consist of no essential contradictions: perhaps our greatest example of ideology preservation is our construction of ideologies that are indestructible. Then again, perhaps the ideologies are indestructible because they are true: we have equal reason to think the indestructibleness is evidence of mere ideological preservation as it is of truth being present. If it was possible to determine that the indestructibleness of Christianity was due to ideology preservation while the indestructibleness of Islam was thanks to its actuality, we might gain a means for rationality to be justified in favoring Islam over Christianity. But this is unlikely; if it wasn’t, a way to break this “tie” would have likely already emerged in history.
Why are we geniuses at map-making and ideology preservation? Perhaps because there is great incentive to avoid anxiety, confusion, unknowing, “The Real,” and “Pynchon Risks,” the latter of which is a possible problem and obstacle because of how rationality and thinking operate in and of themselves. The very structure of rationality seems to entail this risk, which means that if we use rationality, we are at risk of various forms of autocannibalism. And yet without rationality it seems impossible for humans to function (distinct from animals), to use language, to coordinate the construction of cities, etc., and so we must ask: how can we best live with this risky rationality? Well, by constructing an “indestructible block” that can help stop us from ending up in “Pynchon Risks,” suffering unending mental uncertainty, and the like, and it is precisely an “indestructible map” which can best carry out this function.
A critical quality of an “indestructible map” is its “internal consistency,” for this is what keeps it from being deconstructed by rationality and instead something which benefits from rationality, seeing as rationality might deconstruct everything else, while at the same time providing justification to the ideology (at least to itself), precisely in finding it “internally consistent.” And these are the critical stakes: what is “internally consistent” is that which rationality serves and defends, while what isn’t so consistent is that which rationality deconstructs. But the dream of rationality deconstructing all “maps” except one, to which everyone would then “rationally” assent — “the dream of autonomous rationality,” the dream of the Enlightenment — has arguably proven false under Global Pluralism (precisely because “the true isn’t the rational”). Rationality deconstructs many notions (unveiling them not to be “maps”), but not all of them — perhaps we are left with say twenty-two — but that is more than enough for “the problem of internally consistent systems” to prove significantly dire (and a major concern of Belonging Again).
II
As some might call it, “the rational life” must always be situated within ideologies, which presents us with a few problems:
1. If ideologies x, y, and z want to work together to find a “middle ground,” should the rationality of x, y, or z be employed? Or should a new rationality be used that emerges in combining x, y, and z? But if this was done, a new ideology would be created (“w”), and now a “middle ground” would have to be found between w, x, y, and z — on and on. This means of solving the problem would only expand it.
2. In “the space between” ideologies x, y, and z, which rationality should I use to determine if I should enter into system x, y, or z? Being outside an ideology and yet to choose one, it would seem I lack any rationality to determine which ideology I should adopt. And if I choose the rationality of ideology x, this would of course bias me to pick ideology x.
3. If I am in system x and thus bound to the rationality of x, how can I ever leave or meaningfully “choose” to believe in x?
4. If the rationality of system y is “better” than the rationality of system x, how could those in x ever use rationality y? It would seem they would be stuck with mediocrity, and not only that, those in x would be incapable of even realizing that rationality y was better, lacking the very rationality needed for realizing the superiority of y (assuming rationalities can be “better” than others, whatever that means, which might not be the case — we’d likely have to take a “Pynchon Risk” to know).
5. If rationality is relative to truth, mustn’t every rationality be circular?
6. When rationality finds our map “internally consistent,” we decide if this is evidence of our genius or something we need to “critically think” about, and rationality is poised by itself to see little evidence of critique. After all, what else is “the internally consistent” but that which shouldn’t be critiqued? (Evidence of entrapment could be evidence of justification.)
And so on. It would seem that if some “objective rationality” existed, it would be in the space between ideologies, but it is precisely the truths of ideologies that make rationality itself; thus, “objective rationality” doesn’t seem possible (however, this doesn’t mean all rationalities are equally good even if they are all equally (likely) true, though of course then we must ask, “How do we determine the good?” — to answer which we seem stuck using the rationality of whatever ideology we have absorbed). Rationality cannot readily help us solve the problem of internally consistent systems; in fact, rationality is at least partially to blame for why the problem is a problem in the first place (as will be expanded on in “The True Isn’t the Rational” by O.G. Rose).¹
Alright, but what about “personal knowledge?” Could that help? Distinct from social and more communal “empathy” and/or “critical thinking” (as we’ll discuss), might “personal knowledge” and practice help us overcome “the problem of internally consistent systems,” which is to say, “the problem of indestructible maps”? In our time, there has been an effort to overturn dualistic thought and help us think of knowing in new ways, and thinkers as varied as Heidegger, James K.A. Smith, Michael Polanyi, Julian Hartt, Austin Farrer, Nassim Taleb, and many more have contributed to this project. Heidegger suggests that we are not “brains on sticks” but beings “thrown” into a world and “groping” around in the dark: “pure objectivity” is a myth, and we rather know by “feeling around.” Dr. Smith argues that we think the way we do not because of the best arguments we’ve heard but because of the best stories, and that what we love is who we are, not what we believe, because what we love is what shapes our habits. As explained well in Longing to Know by Esther Lightcap Meek, Michael Polanyi profoundly connects personal experience to the life of the mind, and as Meek puts it, knowing God is no different than knowing your auto-mechanic or anything else in the world. As William Wilson outlines in “A Different Method; A Different Case,” Austin Farrer and Julian Hartt point out that foundations for knowing are inseparable from the ontological situations which that knowing claims to participate in; in other words, no knowing is ultimately grounded in premises that are theologically or philosophically neutral or self-evident (“the self-evident” is always itself within a worldview), but this being the case in no way means the resulting knowledge is necessarily invalid. 
For Farrer, entities are activities, lacking “form” and/or regularity of their own which only “appears” when the entities/activities encounter one another: the world is composed of “determining actions,” and thus if the world is to be known, epistemology must not be based on an indirect conviction that “things are the way they look” but on participation in the “determining actions” of the world (and “appearance” an indication of that “unfolding,” suggesting Hegel). In his book Skin in the Game, Taleb attempts to tear down our traditional understanding of “rationality” and argue that we can only meaningfully discuss “the rational” as “that which increases survival” (risk management), and his arguments are rather convincing, even if I might want to stress “unfolding” versus mere “survival” or something.
Regardless, the point is that there has been a profound effort to connect thinking with acting, which is to say there have been profound innovations in epistemology that have transformed how we understand knowing from something grounded and seeking objectivity to something that emerges out of — and cannot be known outside of — subjectivity and practice. But personally, I don’t believe any of these innovations help us solve away the problem of internally consistent systems; in fact, I don’t believe they even sidestep the problem. I could be wrong, and if so, I would be delighted, for an answer to the problem of internally consistent systems might be at hand — a problem that, considering Pluralism, strikes me as dire.
Practice is always orientated by ideas, and always occurs in and is interpreted through an ideological lens. If I believe that the point of life is to increase beauty, then I will act in a manner that increases beauty, and when I succeed, I might both feel good and see evidence that I am correct. Because of my beliefs, I will not do what I would have done had I believed the point of life was to serve my country, and thus I will not “bring unto me” a world-path in which I experience myself feeling good about serving my country and seeing evidence that I did what was right. I can only experience one world-path at a time, and the path I end up traveling down will be dramatically shaped by my beliefs. At the same time, as my beliefs will shape what I do, what I do will shape my beliefs, but all this transformation will be “bracketed” and “contained” by the world-path I head down because of my beliefs/practices. Down world-path x, the possibilities of transformation will be different from those down world-path y, and thus thanks to my beliefs/practices, I will have set for myself a certain “range of possibilities” that my beliefs/practices will not be able to transcend. Though I don’t entirely shape myself particularly, I can influence myself generally.
My beliefs/practices are bound to the system in which my beliefs/practices are orientated by and occur in: if I believe the true meaning of life is to increase beauty, I will act/believe in a manner that sends me down world-paths in which the possibilities available will be “ranges” that are extensions of my initial truth. Perhaps down path x which leads to y and z, I will find in z a way to transition from the truth “the point of life is beauty” to “the point of life is goodness,” but then this new premise will still be an extension of the first, because I “came to it” thanks to my first truth (which will continue to live on in my second truth). Through practice/beliefs, I seem to have free range but not freedom (which might overwhelm with anxiety).
If in system x the “best thing to do” is increase beauty, then what increases beauty even if it involves self-denial is “most rational.” In system y, the “best thing to do” might be “to increase the realization of self-will,” and in this system, what helps us realize our will even if it decreases beauty is “most rational.” Humans naturally think/practice “toward” what they believe is “most rational,” and so the system people exist in will “predestine” the directionality of their thinking/acting in a manner that stays within “range” of the system’s truth. They’ll be able to move around, but not really escape (which we cannot even say is necessarily a “bad thing,” needing a system to even say such a thing).
Modern innovations in epistemology have successfully shown that rationality is indivisible from practice, but this might only mean that practice can suffer “the problem of internally consistent systems” just as much as does rationality: the body and the mind are prisoners together. This doesn’t mean that people thinking/practicing in system x, y, or z are wrong, only that practice can’t help us necessarily determine if x entails “truth more like actuality” than y or z (practice can’t help us necessarily touch the ground either). Overall, this suggests that it is not given that modern advances in epistemology help us determine how to move between ideologies, only help explain how we exist and justify ourselves in an ideology. The advances could actually help justify ideologies, and in this way, modern advances in epistemology could contribute to the problem of internally consistent systems insofar as they make the problem seem avoided. Advances in epistemology do not necessarily help us keep Pluralism from devolving into chaos, only perhaps help rationally justify the state of Pluralism, for good and for bad, through the body. Something else is needed (mainly, “the encounter”).
Is there really no solution to “the problem of internally consistent systems”? Well, let’s say there was one: how would we find it? Perhaps there is only one ideology out there that is truly internally consistent and actual: the only way we could figure this out and solve our problem (as far as I can tell) is to investigate every ideology in the world, which strikes me as impossible. We would have to take Pynchon Risks potentially thousands of times over, with no guarantee of “reaching bottom,” and at the end of the day, perhaps we would deconstruct two thousand ideologies, but three hundred would still be left standing as internally consistent: the problem would shrink but not go away. The Professor and the Madman by Simon Winchester in mind, the task could require assembling a team as expansive as the one put together to assemble the Oxford English Dictionary, and even with thousands of people, the task could take a hundred years. And if at the end of all that work a huge book was released with the findings, there would be no guarantee that the public would accept it. We are currently suffering the “legitimation crisis” Habermas warned us about, and by the time the book came out, authorities may very well have even less legitimacy. Looking at the truth, the public may dismiss it as propaganda.
Thus, even if an answer existed to the problem, at the end of the day, there is no guarantee the public could accept it: there would still be an element of management that would have to be employed. The answer wouldn’t feel “given,” at least, precisely because the problem itself destabilizes the stickiness of truths, and thus the very question would cause existential anxiety for the public, decreasing the likelihood that they would accept the answer. Overall, all this can help us see why “personal knowledge” helps us understand what thinking is, but this realization alone cannot help us solve “the problem of internally consistent systems”; rather, the inevitability of “personal knowledge” is precisely why the problem is one we must deal with in the first place. Ultimately, I don’t think the problem can be really addressed by individuals at all: we must think environments, mediums, aesthetics, habits, and the like, as will be a focus of Belonging Again II.2.² ³ ⁴
III
Does this mean everyone is brainwashed? Perhaps in a sense, but that angle also offers a temptation: we might disqualify everyone and maybe thinking more generally (especially that with which we disagree). All of our brains are “washed” by our truths, but then there is no such thing as a brain that isn’t so washed, which begs the question of what value the term “brainwashed” even offers.⁵ What we find is a problem that we might associate with issues that have arisen in quantum mechanics, at least according to Karl Popper, who wrote that ‘[the] early interpretation of Heisenberg’s implied that the particle had a sharp position and momentum; but owing to our interfering with it, we could never measure both sharply’ (“coherence” and “correspondence” exist in conflict).⁶ Put another way, Popper wrote that ‘the crisis [in Physics] is, essentially, due to two things: (a) the intrusion of subjectivism into physics; and (b) the victory of the idea that quantum theory has reached complete and final truth.’⁷ So it goes with thinking: rationality is always organized by a subjectively-influenced truth, forming an ideology and/or (indestructible) “map,” and this seems to be the final and complete truth of human knowledge. And that’s our “epistemology crisis,” which we can associate with our “crisis in physics” and with “the problem of internally consistent systems.”
Why is it that we often stay devoted and married to one worldview? If multiple systems are equally true and valid, why do we stick with one? Why couldn’t rationality have us float “in the space between” ideological systems? Why do people continue to believe in worldview x and not often switch to y then z and so on? If ideologies are “internally consistent systems” and ultimately it is just as rational and true (as far as we can know) to believe in x as it is to believe in y, why do people keep believing in x? What keeps them (there) believing? Obviously, truths have a way of “keeping people attached,” and of course it’s just easier to keep believing what you’ve always believed rather than change up your entire worldview and life. But is that all there is to it? As discussed in “Compelling” by O.G. Rose, it seems to me that “truths are sticky”: once we believe in one, we are stuck to it like a bug that has been tricked into landing on a trap — we can’t pull away — which makes sense if all rationalities are ultimately “practically predestined” by their truths. I don’t mean to use such a negative image, for it implies truths are traps, but the metaphor conveys the “stickiness” I want to imply.
What causes “stickiness”? Perhaps ideas themselves, or to touch on Belonging Again, perhaps sociological forces? Ideas themselves are impersonal and unmoving: even if somehow true, it seems strange to suggest that they themselves can be sticky; rather, their stickiness comes from communal and personal forces that organize around those truths. It doesn’t seem so much that “2 + 2 = 4” has stickiness as it is the case that everyone agreeing it is true creates a stickiness over the fact: the stickiness is created by the society, which then in a sense “squirts” it over the fact. For example, Christianity has been sticky in the West: even when people encountered strong arguments against it, they kept believing. Arguably no Atheist philosopher has been as successful at converting believers away from the faith as has the phenomenon of Pluralism itself, because arguments and rationality aren’t responsible for the stickiness of a worldview, but rather more so the society and “plausibility structures” organized around it. If no society was organized around the most objectively true ideology in the world, perhaps it would have no stickiness; if a society was organized around the most objectively false ideology, it would be the stickiest ideology around.
People have known worldviews are subjective, but not until more recently have worldviews, in my view, felt subjective and lost their stickiness. When philosophical arguments against religion were at their strongest, the religious landscape did not change; when religions have become diverse, they have become wider and arguably more beautiful, but also emptier. Considering this, it seems that the stickiness of a truth is not necessarily related to its validity, rationality, or the like, but rather due more to its societal context. And if it is true that religion didn’t decline until sociological forces changed, we have evidence that ideologies are indeed “internally consistent systems”: reasoning has been incapable of moving people en masse out of ideologies; it’s been the loss of “stickiness” that has made all the difference. Or at least for a time, until the failures of “givens” fell back onto the presence of (indestructible) “maps.” Now, religion seems back on the rise, at least a map-based form of it (which though might be less sociological — time will tell, but “maps,” even if “practically similar” to “givens,” might not prove able to provide “belonging”).
What do we mean in saying “givens failed into (indestructible) maps?” We’re alluding to Belonging Again by O.G. Rose, where a “given” is something that’s true to us not because we’ve thought about it but because it just “is” the way of the world (“thoughtlessly” thus). In the past, where Christianity was “given,” Christianity was true because it wasn’t so much thought about; today, where Christianity is “an indestructible map,” Christianity is true because its incompleteness can’t be shown to necessarily falsify it — and furthermore every other view is in the same boat, so we can’t dispute Christianity on grounds that there are better options. Sure, our option to us is better than Christianity, but to the Christian? It doesn’t have to be, and so everyone can be like that “Buridan Donkey,” immobile and unable to make a choice, precisely because all choices are equally rational/incomplete (enough so, by their own terms) — which is a state of affairs that holds among the “memes,” religions, worldviews, etc. which exist now. Where everything is relatively frozen like that donkey, everything continues to survive (evolutionarily logical). But do they live? Perhaps. Perhaps not.
To be clear, I don’t think the “stickiness” of beliefs due to sociological forces (like “plausibility structures” in Peter Berger) is the only reason why we traditionally tend to stay in our worldviews: I think sociological “givens” might have actually helped us avoid facing the deeper reason why. Practically, “givens” and “maps” function similarly, but “givens” can help us avoid seeing the truth of maps, which is their “internal consistency,” as we are suffering now (and as is manifesting in conspiracies, etc.). Maps can work enough “like givens” to make them function in their place and something to which people can readily turn (noticing little change), but though “practically similar,” there is a critical technical difference between “thoughtless givens” and “indestructible/inescapable maps,” a difference that comes out more and more with time (as “Pandora’s Rationality” multiplies, as we’ll discuss, as time, history, and the internet test views, as anxiety develops).
Though “practically identical” sociologically, there are differences between “maps” and “givens” that should be noted: “givens” tend to generate “thoughtlessness,” while “maps” tend to generate anxiety. We know that maps (which we can overlap with “scripts”) cannot prove themselves, and yet their very internal consistency provides us “reason to stick” with them, but this relationship can be thought-tormented and anxious (alluding to “The Dead” by Joyce). “Givens” work precisely by stopping thought, but that is why manipulation is possible. And without something like either “maps” or “givens,” we run the risk of confronting the Real, denying the relational and situational quality of reality, and ultimately proving dysfunctional. We are in danger, or we are no more.
IV
We have already discussed why there is incentive to realize an “indestructible map” over a mere notion, because what is indestructible is a better guarantee for avoiding “Pynchon Risks” and existential anxiety, but another reason is because it is practically better to base civilization on an “indestructible foundation” than something more fragile. A destructible notion isn’t as useful as an indestructible one, and so it is only a matter of basic existential and psychological incentives for us to test and find the “(indestructible) maps” around which we can then organize. It also makes sense practically because if we poured a lot of resources into building schools, governments, churches, community centers, etc. which were based on and justified by weak and destructible notions, we’d run the risk of having all our work undermined and deconstructed, as we’d also risk the loss of social cohesion, authoritative beliefs which might compel reproduction, ethical schemas that everyone shares, and so on. In this sense, it’s smart and “a good investment strategy” to base our lives and civilizations on “indestructible maps,” and frankly the civilizations and people who don’t are likely to suffer significantly for it, providing further “forces of natural/ideological selection” which favor determining “indestructible maps.”
A house that is built on a bad foundation can suffer greatly for this mistake; likewise, humans who construct and orbit their lives around fragile notions that aren’t internally consistent are likely to pay a high price in the future. Hence, as history develops over time, it is likely that civilizations which last are constructed upon more “indestructible maps” than mere notions (and also find ways to avoid “losing stickiness,” say through insulation from the broader world, for good and for bad), even if there is a great “nova effect” in the civilizations where religions and philosophical expressions multiply and grow (as Charles Taylor discusses). Sure, there are arguably more ideologies today than ever, but my point would be that what civilizations, institutions, and the like are based on and justified by are increasingly likely through history to be “indestructible maps,” while in the past they perhaps were more likely to be based on notions which weren’t “internally consistent.” Regardless of whether there is a possible explosion of notions, ideologies, and the like, the main governments, countries, economic institutions, schools, etc. are likely to gravitate more around “indestructible maps” than around others, whatever they might be. Earlier civilizations, perhaps not so much, but the longer humanity lasts, the greater the chance that our “great powers” will be based on maps. And this means that the longer we last, the more likely it is we will have to face “the problem of internally consistent systems,” and do so through and between “great powers.” Our moment might be one in which history has transpired long enough that we can no longer avoid facing “the problem of internally consistent systems” (“the saturation point,” as Alex Ebert discusses, seems to have been reached).
For existential and practical reasons, humans create notions/ideologies that then through time are historically selected for “internal consistency,” and the maps which pass this test are likely to arise and prove foundational to institutions, governments, etc., for reasons of making the foundations of these entities as strong as possible, for assuring our foundations can help us avoid psychological turmoil (in encountering the Real), for increasing the probability that a greater diversity of people can be included in the institutions (thanks to the very consistency of the ideologies), and so on: there are many benefits to realizing and organizing around “(indestructible) maps.” Furthermore, it is suggested that with time “the problem of internally consistent systems” cannot be overlooked. We will have to learn to live with it, or else we will not “belong again” — and all this suggests that we must learn “the art of faithful presence.” (Does this explain the Fermi Paradox? Is “the problem of internally consistent systems” the Great Filter?)
The more time passes and history “unfolds,” the more “the problem of internally consistent systems” must be faced, a problem which perhaps could only be faced once no plausible deniability was left, precisely because we are masters of ideology preservation and can always avoid changing maps. Now, the moment when we can glimpse that our map is indestructible not necessarily because of its correctness but because of its structure is precisely the moment when we might no longer be able to deny that a strategy of dealing with maps by trying to deconstruct them is inadequate and unfitting for addressing Global Pluralism.⁸ This can be a moment of madness, but it can also be a moment of hope and change. We don’t tend to change until we must, after all.
“Givens” hid us from the always-present “problem of internally consistent systems,” and glimpsing this can help us understand how “maps” today “practically function” like “givens” in their absence, while at the same time differing qualitatively and in critical ways. Facing maps now, there is an opportunity for negation/sublation, but that Absolute Choice will be up to us. Alfred Korzybski wrote that in his book Manhood of Humanity ‘it is shown that the canons of what we call ‘civilization’ or ‘civilizations’ are based on animalistic generalizations taken from the obvious facts of the lives of cows, horses, dogs, pigs…and applied to man.’⁹ Due to these mistakes, humans have been ‘forced to develop under the un-natural (for man) semantic conditions imposed on them. In turn, they produce leaders afflicted with the old animalistic limitations.’¹⁰ Animals can be territorial, violent, seeking of survival at social expense, and the like: where we are stuck trying to destroy indestructible maps, under A/A, we can revert to similar behaviors. This is the price of lacking a ‘theory of sanity,’ yet it is precisely when “the problem of internally consistent systems” is no longer deniable and all hope seems lost that this theory might finally prove possible.¹¹
Under “givens” and failing to face “maps,” we have lived according to A/A-logic, which is an inadequate condition for the human subject, but it is precisely now when we seem to “see” that hope is lost that hope arises. Yes, what is increasingly “vivid” to us is a problem we cannot solve, but perhaps that doesn’t mean there isn’t hope, but that the problem is one to be managed instead. It can be. Well then, what should we do? Indeed, we have perhaps read a lot, debated with ourselves, shifted our views here and there, but has our “overall worldview” changed that much? Has our initial reference point really shifted? Surely we’ve come nowhere close to equally studying all possible worldviews, and though we’ve perhaps tried to focus on practical outcomes versus intentions, employed falsification, tried to think through brilliant voices we disagree with, and so on, it’s unlikely these practices have entirely justified our choosing x worldview over equally consistent y. Yes, accepting and justifying the essential limits of rationality and the problem of internally consistent systems can help me live with my intellectual shortcomings, precisely because I can tell myself that these shortcomings are unavoidable: as it is not my fault that I cannot fly, it is not my fault that I cannot equally consider all internally consistent systems. And yet even if I understand this is the case, it may still not feel like I’m justified or that I’m being epistemically moral. By definition, I never can feel justified: the loss of “givens” necessarily entails the loss of existential stability. But can existential stability be regained through knowing that the loss of existential stability is inherent in the problem of consistent systems (and that the fact that my ideology is equally consistent with other ideologies doesn’t mean that it is necessarily false)? What does it feel like?
In conclusion and to review, Globalization is a Pluralistic world in which historic “trial and error” has helped sort maps from notions, moving maps increasingly toward the centers and foundations of societies, and as Pluralistic societies interact, that means maps are also interacting (while surrounded by the multiplication of notions/maps thanks to the internet). Since maps are indestructible yet aren’t “given,” and so can feel more destructible and chosen (especially the maps of others), these Pluralistic interactions can be more psychological and anxious; in the past, when “givens” interacted, I would wager there was an acknowledgement of difference, but not one that caused the same anxiety and existential doubt (for good and for bad), especially seeing as most of the interactions were economic, temporary, and military. Suffering this anxiety, relations can be threatened, yet at the same time it could be argued that diverse relationships are more possible today than ever before in history. With opportunity comes challenge.
“The loss of givens” might accelerate the process by which “notions/maps” can be sorted into (indestructible, consistent) “maps” and (destructible, inconsistent) “notions,” meaning deconstructive processes like debates, skeptical examinations, and the like, which perhaps defined intellectual life for most of history, have been made more efficient. But this could mean maps are “rising to the top” as the main intellectual challenge we need to consider, a challenge which intellectual technologies like “debates” (as traditionally understood) are inadequate for addressing, even though those technologies were historically necessary for us to arrive where we are today. Pluralistic Globalization is when a major problem of ours is that “the maps are interacting,” and if all we have are the intellectual technologies of previous generations which cannot help us deal well with maps, then we are in trouble.
All maps entail notions, but not all notions make up maps, and while notions compose ideologies, not all notions can quilt themselves with other notions into the consistency of an indestructible ideology. But it is also not self-evident which notions are which, meaning an experimental process of trial and error is needed, a process which must be ongoing given that new notions/maps arise all the time, and a process that before the internet was necessarily slow and inefficient. For most of history, we have been dealing with notions/maps within nations and usually under a general system of belief (for example, we have dealt with debating within America between Methodists and Baptists under Christianity), and for that problem a slower process following traditional debate structures was (even if not great) adequate and manageable. Also, most of our concern, say under Christianity, was distinguishing “core beliefs” like creeds which were “essential” from more debatable and “accidental” topics like infant baptism, which made space for unity and difference all at once. Now, our problem is the interaction of (essential) “creeds,” per se, and how in that interaction can “space for unity and difference” be found? It was hard enough to find under Christianity within America: to accomplish the same between Christianity and Islam within America, let alone the world, is an entirely different matter. We need better social and political technologies (say “the Antidebate” of Jonathan Rowson or the “converdictions” of Theory Underground), which the internet might afford, but isn’t the internet also causing a mass spread of conspiracies and “Pandora’s Rationality”? Yes.
Under Global Pluralism, our main problem is not just sorting “maps” from “notions” but the global interaction of indestructible maps (between people made anxious by “the loss of givens”), an interaction occurring within and alongside a mass multiplication of maps/notions thanks to the internet. That accompanying explosion of notions can perhaps make it seem like our main issue is deconstructing inconsistent falsities, but all of that, though a problem, is not for me the central problem, which is the rise and interaction of indestructible maps. This situation is what we need to think about and innovate new mediums and expectations to address, from our politics all the way down to our schools. Debate perhaps was fine as a socio- and psycho-technology for dealing with ideas, but in an age where our main problem is “map interaction,” we need new ways to bring ideas into interaction. This is the work of the Liminal Web, the social coordination system — a work of “faithful presence” — but this will be elaborated on in II.2 (especially Book 4).
Discomfort, uncertainty, existential anxiety — is there any hope of escaping these feelings which haunt us today? Perhaps not, but if anxiety might empower us somehow, perhaps that’s a good thing. Perhaps discomfort, uncertainty, and existential anxiety are good for us and will help us into humility and Nietzschean Childhood? Perhaps, but these feelings could also drive us mad and into making totalitarianism appealing — it seems up to us to negate/sublate these feelings more positively. Might our age ultimately prove blessed by anxiety? I’ve heard it countless times that “the truth will set us free,” and perhaps if I were to live a million years and have time to deeply study every ideology, I would in the end find a truth that would set me free from doubts, unhappiness, uncertainty and pain. But if that is not “practically possible,” can it be “the truth”? If not, is anxiety the truth? Will anxiety set us free?
.
.
.
Notes
¹So what is it that makes people shift ideologies? Of course, irrationality could be behind the switch, because if we don’t realize x is equal to y due to a failure of thinking, we could wrongly conclude y is better than x and switch for unjustified reasons (and precisely because thinking can help us move from “irrationality” to “rationality,” it can generally keep us busy long enough that we fail to realize there is also the problem of maps to consider). At the end of the day, perhaps all ideological shifts are due to shortcomings of thinking, but “assuming the best” compels me to assume otherwise. Even if it were so, I’m not eager to conclude the solution to the problem of internally consistent systems is irrationality, seeing as rationality seems necessary for Pluralism to not devolve into tribalism and totalitarianism.
²How should we practically respond to the 2008 Financial Crisis? The answer to this question will be bound up in the question of how we interpret the 2008 Financial Crisis, and how we interpret history will be tied up with what we have experienced in our lives and what we’ve undergone through our practical experiences, all of which will at least partially result from our ideas about how we should live, who we are, and so on. How we interpret the world is shaped by the sum of who we are, which is not a result merely of practice, but also the ideas which lead us to practice x instead of y, and since we practiced x instead of y, our ideas were shaped by x, which in turn impacted how we chose to act in the future, which ultimately shaped our interpretable lens, and thus orientated what we would learn from the 2008 Financial Crisis on how we should act in the future.
³Do note that even if management is needed, there is no guarantee humanity would be practically capable of it, as there is no guarantee that in the name of mitigating the existential anxiety that makes totalitarianism appealing, the management itself wouldn’t become a means to totalitarianism. To exist is to be at risk; even the Liminal Web offers no guarantees.
⁴To put this another way, we’ve already established that “truth organizes values” (from The Conflict of Mind), but it is also the case that values bracket our truths. “Values shape what we experience,” and experience shapes what we think is true, and all of this exists in a feedback loop that is easily “predestined” by our maps. Once our (likely absorbed) truth organizes our values, our values shape what we experience to align with that truth. We might say a “deadlock” emerges where we seem “stuck” in our map (“brainwashed,” perhaps), so how do we break out? Thinking? Feeling? Likely not — our hope would have to be something “radically outside” ourselves: Otherness? Yes, but through “encounter” and, critically, surprise — hence the profound need for a social coordination system, a Liminal Web — the focus of II.2.
⁵The word “brainwashing” has an unclear meaning (distinct from “learning” and/or “changing views”), and it tends to be a subjective debasing of someone whose views are changing into something the person using the word doesn’t like. Furthermore, we could really only call something “brainwashing” if we knew “the truth” relative to which, and away from which, the person was being “brainwashed.” But even then, calling someone “brainwashed” isn’t nearly as productive as proving to the person the “truth” (which we need to know to use the word “brainwashed” legitimately), which could motivate the person to choose to step out of “the loop” in which they are (in our eyes) “being brainwashed” — what value does the description and judgment add, then?
The following thoughts once composed a paper called “On Brainwashing,” but I decided instead to transform the paper into a note for here (which helps suggest why “brainwashing” is likely not a helpful construct for addressing “indestructible maps”). To start, what does it mean to be “brainwashed,” exactly? Often, I find that the word is used to refer to someone who has been indoctrinated into believing a falsity as truth, but if this is the case, it would seem that knowing if a person is brainwashed requires first knowing reality. If someone has taught me that the object-toothbrush is the object-rabbit, to know I have been tricked, I would have to know that the two entities were not the same. In these “little matters,” determining if a person is brainwashed seems much more possible than in “big matters” (like overall ideology): if I believe you are brainwashed because you are a Capitalist, I would have to know that the theory of Capitalism was false (which requires an incredible amount of work). Paradoxically, the claim “you are brainwashed” is usually used against people because of their overall worldview, not in “small matters” — exactly opposite of what could increase the probability that the term “brainwashed” is used meaningfully.
In a sense, all learning is “brain-washing”: learning “cleans the mind” of falsities or else perhaps washes the mind away. Everyone seems in the business of “brainwashing” to some degree, not just the propagandists. How do we identify a “propagandist” from a “teacher” then? Conservatives could think colleges are full of liberal propaganda, and Liberals can think rural areas are Conservative bubbles. “Propaganda” often refers to teaching that we don’t agree with (even if we’re actually the propagandists), and the word “propaganda” can imply “intentional brainwashing,” that the teacher knows the truth and is intentionally directing students toward a particular ideology of the teacher’s choosing. This is problematic, for even in Nazi Germany, the teachers easily believed in Nazism and weren’t directing students toward an ideology they knew was false. Perhaps people could argue that I am wrong about what “propaganda” implies, which I believe is similar to what “brainwashing” implies, and perhaps they are correct. Even so, I believe most people perceive these words as implying “an intentional and conscious directing away from the truth toward a falsity (ideology),” and this being the case, I think the words should rarely if ever be used (they cause more trouble than they are worth). At the very least, they shouldn’t be used casually — a lot is at stake.
Maybe the concept of “brainwashing” isn’t prevalent in our society today directly, but I would argue that indirect appeals to brainwashing are widespread. If someone says, “You can’t think for yourself”; “You are too close to the situation to see it clearly”; “You as an x can’t understand what it’s like to be a y”; “You’re blinded by love”; “You’re just an academic”; “You’re unconsciously biased”; “You’re manipulated”; “You aren’t listening to God”; “You live in an urban/rural bubble”… — these kinds of statements are degrees of saying, “You are brainwashed,” and they disqualify certain people from having a view about certain situations. Is there a time for such disqualification? Perhaps, but I think such times are rare.
I
If someone unintentionally teaches us a falsity as truth (because the person believes it is true), is that really “brainwashing” or just a mistake? If “brainwashing” and “mistake” are synonyms, the word “brainwashing” lacks distinction and perhaps should be discarded to avoid confusion. To be distinct, the word “brainwashing” must mean something like “the intentional and conscious teaching of falsity as truth.” Perhaps this isn’t what people mean when they use the word, but if not, then the word not only fails to add distinct value, but it also tends to confuse. Moving forward, I will use “brainwash” to signify “the intentional and conscious directing away from truth,” and personally, given this definition, I’m not sure if the word and concept are ever worth using, for two main reasons. First, if someone is brainwashed, the person lacks the capacity to know he or she is brainwashed, and so little can be gained by claiming this of someone, for even if it is true, the person likely cannot believe it is true lest he or she believe that which means he or she is incapable of telling what is true. Second, I think it’s fair to say that only evil people truly brainwash others, and even if evil in a deep sense exists, I believe most people are a mixture of good and bad versus bluntly evil. “Pure evil” is rare even if possible, which means “true brainwashing” is rare as well.
The claim and concept of brainwashing are rarely useful, to any degree, and in the situations where the concept may accurately describe what has happened, even then, I would argue that the construct is likely to cause more trouble than it is worth. If someone has been brainwashed somehow, telling the person this likely won’t make a difference (the person has been brainwashed into thinking he or she isn’t brainwashed, as all brainwashing seems like it must or else we would change). We likely have to show the person the truth indirectly, through argument, testament, or something else. Similarly, if we are dealing with an evil person who is knowingly brainwashing others, telling the evil person this is to tell the person something he or she is aware of (little will change). Hence, when the word “brainwashed” rightly applies, even then, using the word or thinking of the situation through its conceptual lens will add little if not negative value.
Often, I think when what we may call “brainwashing” occurs, it is actually unintentional, and those doing the supposed “brainwashing” genuinely believe in the message they are sharing; in other words, what we call “brainwashing” is actually “the unintentional spreading of a falsity as a truth” (assuming we know the truth). This is certainly not a good thing, but stopping this isn’t a matter of “stopping brainwashing” so much as it is learning to think and discern clearly and rightly about ideology and competing worldviews. In my view, the notion of “brainwashing” can hinder us from mastering this art and threaten our capacity to handle our Pluralistic Age. The construct should be deconstructed; again, even when the word is accurate, it seems useless.
II
Whenever we meet someone who thinks differently from us, it can be difficult for us not to think of that person as an ideologue. Likewise, whenever someone is taught to believe something that we believe is false, we can easily think that the person has been brainwashed. We don’t tend to think of the teacher as guilty of a genuine mistake, or of the teacher as sharing an ideology we don’t agree with but can respect, but rather we can naturally think the teacher is “indoctrinating” or worse. It is hard for us not to think in extreme terms: it seems unnatural for us to respectfully disagree and maintain dignity. Why? Perhaps it is because there is something within us that cannot help but think the worst of those who teach that which we believe is false. Necessarily, it seems, if someone teaches what we don’t agree with, we think the teacher is spreading lies, and if the teacher is mistaken, we still can’t seem to help but think of her as guilty of a moral injustice or at best immoral carelessness.
If we believe x, we can naturally believe that reason leads people to believe x (often not realizing that “reason” is relative to “truth,” as discussed in “The True isn’t the Rational” by O.G. Rose), and hence if a person doesn’t believe what we believe, that person mustn’t be reasonable. And why else would a person refuse to be reasonable unless he or she was an ideologue? Hence, whenever we encounter someone with whom we disagree, we can almost naturally come to see the person in “brainwashed”-terms. Using “brainwashing” seems natural — suggesting why we should be skeptical of it.
To be clear, I don’t mean to suggest that we necessarily think of everyone who thinks differently from us as “brainwashed,” but I do mean to imply that we are always very close to taking that incredibly dangerous step. We seem naturally poised for it, always “one step away” from it, and considering how useless “brainwashing” is as a term even when accurate, that means we are all “one step away” from using a construct that can have incredibly negative consequences and very little to show for it. But why exactly is “brainwashed” such a dangerous construct? A few reasons:
1. If we think of someone as brainwashed, we think of that individual as incapable of reasoning and thinking for him- or herself, and yet in need of a change of mind. Only force can square this circle.
2. If we think of ourselves as brainwashed, we can think of ourselves as incapable of judging reality accurately: we can’t trust our own thoughts, discernment, confidence, convictions, etc. We become incapable of believing we aren’t brainwashed, for brainwashed, we are incapable of trusting what we believe, potentially leading to a mental breakdown.
3. If we think of someone we disagree with as “brainwashed,” we necessarily think of the view that person supports as having little and even no truth to it, hence freeing us of the need to investigate it or criticize our ideology in light of it.
4. Whether or not a person is brainwashed seems unfalsifiable unless we can prove a person has intentionally been led to believe x (by another or his or her self) when knowing all along that x was false. Since intention is very difficult to prove, the construct can set people up to engage in an endless investigation that leads nowhere.
5. The construct can turn a relationship from one between equals to one between a savior and someone needing saving, which can lead to alienation and destroy the relationship (“for good”).
To be brainwashed seems to mean we are incapable of knowing we are brainwashed, and to call others “brainwashed” might hurl people into incredible existential tension, excuse ourselves from needing to understand their worldview, and/or frame us as “the knower of truth.” To claim that another is “brainwashed” can create a hierarchy, one in which we are at the top, and it can also be used to establish a “normalcy” which everyone must follow or “be brainwashed” (which might suggest we are actually brainwashed…). We have learned from Postmodernism and Derrida that we need to deconstruct structures that push out “the other,” and yet the concept of “brainwashing” can precisely be used to exclude and disqualify others and new ways of thinking, impoverishing us all.
III
In line with “Self-Delusion, the Toward-ness of Evidence, and the Paradox of Judgment” by O.G. Rose, one who is worried about being “brainwashed,” being in a cult, not being able to understand reality clearly, etc., is a person who will easily start suddenly seeing lots of evidence confirming such to be the case. So it goes if we start worrying about someone else being brainwashed, in a cult, unable to discern, etc. Considering how useless the concept of brainwashing is, this is especially tragic, for the idea traps people in a prison that they might gain nothing for being stuck in. Considering how calling someone “brainwashed” can prime them to make this mistake, considering how it can impact how we see ourselves, and considering how rarely useful the concept is, we should take the utmost care before using any degree of the concept of “brainwashed” in our thinking or daily lives (even if we don’t say anything and only use the term inside our minds). Frankly, we probably should never use it at all (not even privately, to ourselves).
If we really wanted to, all learning could be called “brainwashing,” which means the learning we dislike can always be critiqued. When someone learns that which goes against what we believe, our entire worldview can be threatened, and such a threat can bring out the worst in us. It is tempting at such times to accuse the other of being “brainwashed,” but this is one of the most destructive acts people can do to one another. It turns a person’s worry against him- or herself, and since worry is a self-justifying system (as discussed in “On Worry” by O.G. Rose), that person could begin seeing evidence everywhere that leads that person (“objectively”) toward his or her own self-entrapment, self-disqualification, and even self-destruction through overwhelming existential destabilization. To say, “You are brainwashed” can be to arrange another to use his or her mind as a weapon of self-destruction, and though this doesn’t mean a person can’t be brainwashed, it does mean that only under the most extreme circumstances should the word be used (especially considering how rarely the word is appropriate or even useful).
To conclude, the concept of “brainwashing” seems to do more harm than good, and it is better to show people their errors through rational debate, changing “conditions of possibility,” etc. than to hurl the claim, “You are brainwashed,” at them. The notion can threaten Pluralism, and again for this reason it should be discarded. Even if dealing with a member of a cult, calling a person “brainwashed” will probably only cause the individual to put up defenses: we should instead present the person with a view, work of art, etc., create a “clearing” in which the person can see things differently for themselves (a change in “conditions of possibility”), and then come what may. Furthermore, if “the map is indestructible,” all of us are prone to “self-brainwashing,” and calling others “brainwashed” might increase the likelihood that we fail to be self-critical (the claim necessarily entailing in the background a belief that we know the truth to know who, relative to that truth, is brainwashed).
We might argue that we have never called others “brainwashed,” and that’s great, but please note that this paper is focused not just on the direct and explicit word but the construct: we shouldn’t even think of people as brainwashed, or use language which suggests it. Even if hypothetically others actually are, being aware of this isn’t helpful: it can hinder how we relate. “Brainwashed” is a concept and way of thinking about others and oneself that is very dangerous: we either disqualify ourselves from living well or disqualify others from thinking. If the word “brainwashing” applies fairly anywhere in any capacity, perhaps it is in claiming that, “We have all been ‘brainwashed’ by ‘brainwashing’ as a construct,” which is to say in us undergoing social training to treat “brainwashing” as a useful construct. It is not. Let us wash the brainwashing away.
⁶Popper, Karl. Quantum Theory and the Schism In Physics. Totowa, NJ: Rowman and Littlefield, 1982: 17.
⁷Popper, Karl. Quantum Theory and the Schism In Physics. Totowa, NJ: Rowman and Littlefield, 1982: 1.
⁸A realization which is further assisted when we realize all the strategies we employ to preserve ideology, like “the game theoretics of conversation,” how “death is the event horizon of reason,” intention as justification, and other topics discussed in The Map Is Indestructible.
⁹Korzybski, Alfred. Science and Sanity. Fifth Edition (Second Printing). Brooklyn, NY: Institute of General Semantics, 2000: 40.
¹⁰Korzybski, Alfred. Science and Sanity. Fifth Edition (Second Printing). Brooklyn, NY: Institute of General Semantics, 2000: 41.
¹¹Korzybski, Alfred. Science and Sanity. Fifth Edition (Second Printing). Brooklyn, NY: Institute of General Semantics, 2000: 46.
¹²As expressed in and through not primarily teachings, but “a medium condition” like the Liminal Web and social coordination system.