A Short Piece Featured in The Map Is Indestructible by O.G. Rose
Under the wrong metrics, there’s incentive for conversationalists to take a “dominant strategy” quickly, which causes a Nash Equilibrium and a suboptimal result. Problematically, democracies rise and fall based on the quality of “the civil discourse,” which it is rational for players to “dominate” and thereby ruin.
Have you noticed that most conversations don’t go well? I’m not talking about “small talk” — I mean conversations, about pressing issues, decisions, and that kind of stuff. Someone disagrees, someone gets angry, someone gets offended, someone accuses everyone else of trying to destroy America — you get the drift. Why does this happen? Why is it so hard to have a good conversation? Well, I think it has a lot to do with the fact that there is an incentive to be the first to establish “dominance” in a conversation, and we can do that by disagreeing, getting angry, and the like. In other words, once I’m upset, I have framed others as needing to make me happy again, which is to say that I grant myself “the high ground,” per se. It stinks being stuck on the “low ground” and having to “play defense,” so after this happens to us a few times in conversation, we can start seizing “the dominant strategy” ourselves. And so, Nash Equilibria begin ruining all our conversations and contributing to the collapse of society, which is to suggest Game Theory gives us a way to understand why democracies are failing. This isn’t to say Game Theory provides the only explanation — we’d have to explore economics, for one, to begin sketching out a full picture — but I think it’s at least a useful start.
Wait, hold on: what exactly is a Nash Equilibrium, and what is Game Theory? Well, getting into the details on those topics exceeds the scope of this paper, but fortunately the work of Lorenzo Barberis Canonico can get anyone up to speed (as featured in “Neurodiversity Overcomes Rational Impasses and Stops Eugenics” by O.G. Rose). That paper also argues that a “Nash Equilibrium” can be understood as a “Rational Impasse,” which is defined as ‘a situation in which rationality keeps itself from reaching its overall best outcome.’ In other words, it’s when, if everyone acts rationally, the result is suboptimal and/or negative.
Neurodiversity Overcomes Rational Impasses and Stops Eugenics
Is neurodiversity the best way to escape Nash Equilibria?
“The Game Theory of Conversation,” as will be the focus of this paper, will use the language of “Nash Equilibrium,” and ultimately reiterate the truth that we only escape Nash Equilibria with “nonrational” action, versus “rational” or “irrational” action — but that’s getting ahead of ourselves. For now, we will resume our focus on “conversation structures,” and please note that I am going to refer to conversations as “games” to stick to the “game theory theme,” as I will likewise refer to people involved in the conversation as “players.” I’m also going to use the language of “strategies” to describe ways people go about talking.
Anyway, the first “player” in the “conversation game” to take a “dominant strategy” quickly becomes who everyone else in the conversation must cater to, both in order for the game to continue, and in order for “the dominant player” not to win. If the game ends, the “dominant player” arguably “wins” (at least in his or her own eyes), so the other players must figure out how to make the dominant player “happy again.” Of course, the dominant player decides when he or she is appeased, so the dominant player indeed has all the power. With a single move (“getting upset”), the player who seizes a “dominant strategy” becomes the player who “takes the lead” (according to the metric of “defending, proving, and/or maintaining his/her position,” which is the metric most people unfortunately ascribe to); additionally, “the dominant player” also changes “the goal of the game” into “making the dominant player happy,” a goal “the dominant player” decides when is met, meaning “the dominant player” becomes both the leader and the referee. How can such a player lose the game from that kind of position? Not easily.
The first person to establish a dominant strategy often seizes total control, and finding themselves under that power is not a feeling the other players will like. To avoid it, in future conversations, the other players may themselves seek “dominant strategies” first, thus spreading the prevalence of “bad conversations.” Worse yet, the “dominant players” often do in fact get their way, precisely because once a player establishes a “dominant strategy,” “the game” is then arranged in such a way that the “dominant player” practically must get their way for “the game” to continue. If “the game” ends, the “dominant player” either wins or loses nothing (because the conversation “just ends” without any conclusions); however, if no player takes a “dominant strategy,” then it’s possible for players to either win or lose. “Losing” isn’t really an option for the person who takes “the dominant strategy” (unless “a different metric” is used, as will be explained), and so it is rational for all players in a conversation to seek “the dominant strategy” first. Thus, a Nash Equilibrium arises, and the quality of conversation and democracy diminishes. Over a long enough time, if this “game theory problem” isn’t recognized, the consequences could be dire for families, coworkers, and entire countries.
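The payoff structure just described can be sketched as a small two-player game. The numbers below are illustrative assumptions of mine (the essay assigns no values): dominating an engaged partner “wins” (4), being dominated is worst (1), mutual domination is a fruitless draw (2), and mutual engagement yields the best conversation (3). A simple best-response check then shows why mutual domination is the lone Nash Equilibrium even though it is collectively suboptimal:

```python
from itertools import product

STRATEGIES = ("dominate", "engage")

# Illustrative payoffs (my assumption, not from the essay): a
# Prisoner's-Dilemma-like structure for the "conversation game."
PAYOFFS = {
    ("dominate", "dominate"): (2, 2),  # fruitless draw
    ("dominate", "engage"):   (4, 1),  # dominator "wins," engager loses
    ("engage",   "dominate"): (1, 4),
    ("engage",   "engage"):   (3, 3),  # best conversation for both
}

def nash_equilibria():
    """Return strategy profiles where each player's choice is a
    best response to the other's (i.e., no one gains by deviating)."""
    equilibria = []
    for a, b in product(STRATEGIES, STRATEGIES):
        best_a = max(STRATEGIES, key=lambda s: PAYOFFS[(s, b)][0])
        best_b = max(STRATEGIES, key=lambda s: PAYOFFS[(a, s)][1])
        if (a, b) == (best_a, best_b):
            equilibria.append((a, b))
    return equilibria

print(nash_equilibria())  # [('dominate', 'dominate')]
```

Note that mutual engagement pays (3, 3), strictly better for everyone than the (2, 2) equilibrium, yet each player individually does better by dominating no matter what the other does, which is exactly the “rational impasse” the paper describes.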
Welcome to 2022.
Why do I claim that “getting upset,” “voicing disagreement,” and the like are “dominant strategies” which grant a player “the high ground?” Can I elaborate? Sure: imagine a conversation where three people are discussing Keynesianism. They explore circumstances around when the theory was produced, the influences on Maynard Keynes himself, and then begin exploring what Keynes meant by “investment multipliers.” And suddenly someone yawns. “You don’t know what you’re talking about.”
The conversation stops.
How do you reply? What do you say?
You reiterate your point and elaborate, and the person just shakes his head. This person is implying that he knows Keynesianism better than you. You point out that the phrase “money multiplier” is not used in The General Theory, just “investment multiplier,” and the person says you’re reading too much into the word choice. You explain why you’re not. The person says fine.
Did you feel how suddenly you were on the defensive? Did you feel how the person who claimed that you were ignorant granted himself “the high ground” in the conversation? Suddenly everything became about proving to that person that you did in fact know what you were talking about, but the person simply kept saying you were getting it wrong. And how could you reply? Did you have a copy of The General Theory on hand? Did you know the subject so well that you could reply adequately enough to counter the criticism? Probably not — even if you’re entirely right, it’s unrealistic that you would be that prepared in the middle of a conversation where you didn’t expect to be called out like that (as the person you’re speaking with may very well have known).¹
This is the dominant strategy of “explicit disagreement.”
Let’s take on another scenario.
You’re talking with family members about where you should go on vacation. You mention the beach while your sister says she would like to visit the mountains. You note how last year both of you were sunburned and how the family never tried a trip somewhere else. Your sister reminds you that grandmother liked the beach and that the family should honor her memory. You say something about how grandmother wouldn’t want everyone to be afraid to try something new, and your sister says that you’re being selfish. You tell her you weren’t trying to be selfish, but your sister just looks out the window. You apologize, and you tell her that it doesn’t really matter where the family vacationed, just so much as the family was together. Your sister doesn’t reply.
This is the dominant strategy of “using emotions.”
And so on — I will also elaborate shortly on “logical fallacies” and “appealing to authorities” as other dominant strategies. Here, I want to start by simply articulating how dominant strategies work. No doubt everyone reading this work has been on the receiving end of a dominant strategy before and knows how terrible they can feel, how quickly they stifle conversation, and the like. Additionally, we’ve all probably seen how “the dominator” in the discussion usually gets their way, creating incentive to use dominant strategies. Unfortunately, what is “individually rational” and optimal in one situation can lead to a collective outcome which is suboptimal and possibly even destructive. As this paper will explore, the collective result of dominant strategies in conversations could contribute to the end of democracy.
Anyway, it is rational to be the first person in a conversation to get upset. Why we’re upset and what we’re upset about varies, and certainly there are times when it’s justified to be upset, but the likelihood of us being “upset for justified reasons” is very low compared to the likelihood that we are upset to give ourselves a dominant strategy (consciously or unconsciously). Only we can know, but I would encourage everyone to be very skeptical of themselves before voicing in a conversation their anger, disagreement, and the like. Once we voice offense or anger, once we use guilt, once we say, “I disagree” explicitly versus implicitly through questions, counterpoints, etc., the entire conversation will likely be reorganized to address the “dominant” statements. Unless, that is, players in the conversation simply know about “The Game Theory of Conversation” and the necessity of “nonrationality” (as will be explained), but I fear that is unlikely.
Generally, if we disagree that abortion is bad (for example), we should voice our reasons and arguments for why, while simultaneously concealing our feelings that the Pro-Lifer hates women. If we think it’s stupid that our brother wants to eat lunch before visiting the store, we should not explicitly say, “That’s stupid” (which would be to take a dominant strategy), but instead give our reasons for why we think we should visit the store first. Likewise, in response to our reasons, our brother should give his reasons, and the debate proceed accordingly. Arguments, questions, and reasons should be made explicit, all of which indirectly make clear a person’s “disagreement,” but the “disagreement” itself, and corresponding emotions, should be kept concealed. This is the only way to keep conversations from self-destructing. “Players” must simply agree to these rules and agree that no one will seek a dominant strategy: otherwise, the conversation is likely to fall into a Nash Equilibrium and lose quality.
Okay, fine, but why doesn’t this happen? Because it’s hard, and it seems right to do otherwise. When we disagree with someone, we want them to know it and we want to get our way, not waste time discussing it. We also feel like they should just know they are wrong and that we shouldn’t have to explain it to them, thus making us feel justified to be disagreeable.² Similar things can be said when we use guilt and emotions (they should “just know” that they are being inconsiderate, selfish, etc.), as similar things can be said when we appeal to authorities (they should “just know” the experts are against them). Generally, “dominant strategies” are not only rational to seize first in a “game,” but they also feel right to employ, and so we feel justified to use them.
Better yet, “dominant strategies” tend to be easier to use than listing out all our reasons for thinking a certain way, explaining ourselves, fighting the emotions we want to voice against people who disagree with us, and so on. All of this is hard, and why in the world should we do it if it’s rational for us to use a “dominant strategy?” It seems doubly irrational. And indeed, with the wrong metrics and no category of “nonrationality,” it is: thus, our crisis, which points to how the presence of rationality could increase the likelihood of “dominant strategies” being used. This point should be repeated: far from improving our situation, rationality could worsen it, precisely because Nash Equilibria are caused by players being “rational actors.” Considering this, we should not assume that resisting “dominant strategies” will correlate with intelligence and education level: in fact, the opposite could be the case.
To list out a few reasons why “dominant strategies” are tempting:
1. We take “the high ground” in the conversation and organize the “game” in our favor, which is to say we’re less vulnerable.
2. It requires a lot less work and time.
3. We position ourselves as the most intelligent, considerate, etc. and thus less likely to lose social status.
4. We maintain the stability of our views and don’t have to revisit their justifications: we are less likely to lose “fixed belief” (as C.S. Peirce describes), which basically means I can keep “confidently” thinking the way I do (there’s no risk of an existential crisis and questioning everything I believe, which could occur if I let myself be “vulnerable” by resisting a “dominant strategy”).
5. We are less likely to “lose,” per se, and “a fear of losing” is perhaps a subconscious and instinctual bias leftover in us from evolution and natural selection.
Those five reasons immediately come to mind, though there easily might be more. And please note that these are strong reasons to use a “dominant strategy” in conversation: resisting these reasons, without our metrics first changing regarding the purpose of a conversation, is highly unlikely.
Considering “The Game Theory of Conversation,” I feel sympathies for thinkers who have argued that “democratic debate” favors the powerful. This critique has been made by various Postmodern thinkers, and indeed, if dominant strategies aren’t resisted, then “practically speaking,” debate, conversation, and democratic exchange are indeed just matters of power. But it doesn’t follow from this that ending democracy is the answer: rather, the answer is “nonrationally” ending the use of dominant strategies. Hopefully, understanding “The Game Theory of Conversation” can make it easier to so act.
A person reading this paper might think that I’m making a simple case complex, arguing, “People seek dominant strategies because they’re egotistical jerks,” and no doubt that can be true, but I also think that explanation is too simple and can keep us from fully understanding the dynamics of game theory which operate in conversations. Indeed, dominant strategies can be egotistical, but it should be noted that “the rationality” of them can conceal the egotistical dimension. I am of the opinion that if dominant strategies were clearly and obviously egotistical, they would not be so widespread: for them to have become a dominant feature of modern discussion, debate, and the like, there must be a way the egotistical dimension can be concealed. This is arguably a kind of “self-deception,” and to me that dimension is necessary to explain the prevalence of dominant strategies.
It is my hope that the “Game Theory of Conversation” traced out in this paper helps explain how the noted “self-deception” works. Additionally, I worry that without this understanding, we will simply assume that people who use dominant strategies are jerks, idiots, ideologues, and the like, brandings of which I think will contribute to tribalism and the collapse of democracy. “Game theory” could help us solve this problem without worsening our separation.
Below is a list and elaboration of dominant strategies. Each category will include some clarifying and elaborating remarks, and I cannot say for sure that these are the only “dominant strategies” around. If there are more, please bring them to my attention.
We naturally dislike people being upset around us, so it’s only natural once someone becomes unhappy that we try to make them feel better. It’s in our tribal natures, as it’s also the nature of our “lizard brains,” which biases us to avoid risk. When people are upset, there is risk, so we naturally and quickly work to correct it. But of course, if we are engaged in this “personal work,” we are no longer working to “advance the conversation,” and so naturally the quality of the conversation dwindles, giving us “the suboptimal result” of a Nash Equilibrium.
Please note that “voicing disagreement” is not the same as “disagreeing”: we are allowed to disagree all we want, but that’s entirely different from explicitly saying, “I disagree,” “that’s stupid,” “that’s Anti-American,” or the like. The moment we make explicit our disagreement or emotions, we are likely seeking a “dominant strategy” to control the conversation. Not always, and certainly “voicing emotions” with a counselor is very different from “voicing emotions” with coworkers, but I think it generally holds that explicitly disagreeing is a “dominant strategy.” In creative writing, there is the general principle that we need to “show, not tell,” and I think something similar applies here: show disagreement with arguments, reasons, and the like versus tell people, “I disagree.”
Anger, guilt, disappointment, excitement — every emotion can be used to establish a dominant strategy. I can get mad and suggest that you must have done something wrong to make me mad; I can use guilt to make you feel like you’re ignoring my feelings and make you start apologizing (and thus “give up ground”); I can reply to something you said with, “I’m disappointed,” and thus make the situation suddenly be about making me feel better (which likely involves you changing your position); I can show excitement and happiness to do something that I want to do so that everyone else can feel bad not to do it; and so on. All of these are examples of how emotions can be used for dominant strategies, which isn’t to say that emotions are bad, but to say that emotions can be manipulative when made explicit and/or used in such a way as to change the direction of a discussion in our favor.
Where there is no logic, there can be no argument, and so what is proposed cannot be argued against. Logical fallacies cannot be answered, only identified, which perhaps few are capable of doing, thus making logical fallacies incredibly powerful. If I tell you that we need to reduce the speed limit of a nearby highway from 80 to 70 because it will save a thousand lives, and you tell me that we could save more lives by reducing the speed limit from 70 to 60, then 60 to 50 — all the way down to zero, you’ve made a strong point and suggested that I’m committing “the runaway train fallacy.” Yes, we can save lives by reducing the speed limit from 80 to 70, but if that’s my only argument, the logic would easily have us reduce the speed limit all the way down to zero, which means I need additional reasons for why the speed limit needs to be reduced from 80 to 70. But if theoretically you didn’t identify my logic as “a runaway train fallacy,” how could it be countered? Reducing the speed limit would indeed save lives, and don’t we want to save lives? The board is basically set in my favor: I will win the argument unless someone simply identifies the fallacy I am making (intentionally or unintentionally), which is to say they identify something structurally problematic with what I’m saying. As a baseball team that uses aluminum bats in the pros will have an almost undefeatable advantage unless the team is reminded that only wooden bats are allowed, so it goes with people who use logical fallacies: they must simply be told that they, intentionally or unintentionally, are not following the rules. They cannot be played into admitting metal bats shouldn’t be used, only informed.
If you tell me that we need to lower the minimum wage to increase youth employment, and I call you a heartless Capitalist, how could you reply? I have suggested that your argument is invalid because you are “a heartless Capitalist,” which by extension means you cannot advance your argument until you prove to me that you are not “a heartless Capitalist.” And how could you prove this to me? Well, ultimately the only way will be by abandoning your position, because even if you showed me numbers suggesting the unintended consequences of “minimum wage,” I could simply shrug and say that kids shouldn’t be employed before they’re eighteen, that they need to avoid “the system” as long as possible. I could then add that the fact you want kids working “so young” offers further proof that you are, indeed, “a heartless Capitalist.” With an attack on your person, I’ve put you on the defensive and seized a “dominant strategy” for myself.³
Appealing to Authority
If we are discussing what we should eat this evening, and you suggest carrot soup, and I bring up a new study which found that carrot soup is bad for digestion, how could you respond? You ask me which study, and I tell you that it came out a few months ago. You ask to see it, and I check my phone, sigh, and let you know that I can’t remember the title, but that “I remember reading something about it.” This suggests that the study exists and that the point I’ve presented is valid and “scientific,” but by not being able to find the study, we can neither prove nor disprove what I’ve claimed. And do you really want to risk eating something that’s bad for digestion? Better safe than sorry, right? So let’s skip the carrot soup tonight…
We are discussing the national debt, and you point out that you are worried about inflation. I claim that the data shows there is no inflation, and you claim that you heard an economist claim the inflation data is unreliable. We both look at one another and change topics. A day later, we are discussing something, and I tell you that “the facts prove” my position. I do not present the facts, and I do not make it clear why my interpretation of the facts is the right interpretation. All the same, I have suggested the facts are on my side, and furthermore suggested that if you look up the data to see if I’m right or wrong, you are trying to “deny the facts.” By appealing to facts, I have suggested that looking into evidence against my case is “searching for alternative facts.” By claiming facts first, I have framed everyone else as only appealing to “facts” in response to me, making myself the most scientific and “epistemically responsible” thinker around.
To keep a conversation from stalling, if I am going to appeal to an authority to “be the deciding vote” in a discussion, I need to have the data, the references, and the like on hand. To allude to it but not have it available for examination is problematic, in the same way that it’s a problem to make disagreement explicit versus have the disagreement implicitly motivate my arguments against a different position. Similarly, if I say, “Hume disagrees” or “experts disagree,” I need to present directly the exact lines and points that “show” Hume and/or experts disagreeing. In other words, I should “show, not tell” when it comes to incorporating and appealing to authorities; otherwise, I am using a dominant strategy to shut down disagreement.⁴
In conversation, why does gaining the dominant strategy first matter so much? Well, if I become a “dominant player,” others cannot enact the same strategy without seeming to “react to me,” which means I still have “the high ground.” I am still the one who is primarily in control, and then if other players enact “dominant strategies” too, there is a good chance that they can be framed as simply doing that because “they cannot address my complaint,” and thus they will paradoxically be seen as in the wrong instead of me. If, however, I cannot succeed with this framing, then all the players who enact “dominant strategies” will simply be tied, and no one will lose anything. It’s a draw. Considering this, if I can seize “the dominant strategy” first, I have a chance of winning and basically no chance of losing (worst case scenario, there’s a tie), whereas if I seize it second or later on, I could be framed in such a way where I could lose. Speed then is important in “the game,” which, assuming the whole argument about “The Game Theory of Conversation” is accurate, suggests not only that conversations will more often than not be bad, but that they will likely falter quickly. Not only will progress feel like it can never be made, but it will also feel like it can hardly even start to be made.
The sooner I take the dominant strategy, the sooner I cease to be vulnerable. Generally, nobody likes being vulnerable, and nobody likes losing, and so it makes sense that dominant strategies would prove so appealing. That said, simply identifying and warning against “dominant strategies” will not be enough if we do not also change our “metrics” for what constitutes “a successful conversation.” What constitutes a “winning strategy” is relative to what we believe constitutes “winning,” which means the next step of this discussion is framing “the goal of conversation.” And generally speaking, the goal needs to change from “being right” to “determining what’s right.”
If I am viewed in a conversation as “being right,” but actually I’m wrong about a subject, I’ve gained nothing but social status, clout, and the like. In my rightness, I’ve actually lost correspondence with reality. But who cares if I have more power in my social circles, right? Well, there we go: we’ve returned to the great debate between Plato and the Sophists, haven’t we? This raises the question: What’s our goal in conversation and democracy? Is it to be Sophists who socially benefit from conversation, or is it to be more like philosophers who increase correspondence with “the truth?” Now, that second goal might sound lofty, and how do we even determine the truth anyway? Not easily, that’s for sure, but I want to note that there are basically no other options than these two choices: we either converse to determine “what is the case,” or we converse “to win our position and socially benefit as a result.” If we don’t try our best, however imperfectly, to “determine the truth,” then the Postmodern critique of democracy and “democratic debate” simply being matters of power will be more right than wrong. Additionally, there will be no way to stop rationality from causing conversations to usually be bad. Dominant strategies will prevail.
If we decide our goal is “determining what’s right” versus “being right,” then even what constitutes a “winning strategy” will change, which is to say what constitutes “rational action” will shift. If the goal of a conversation is to determine “what’s right,” then I will not necessarily “win the game” if I end up “being right”: I could “be right” and still lose. However, if the goal of a game is just to “be right,” then achieving this is what matters, and quickly appealing to dominant strategies will be rational and what I will likely do (unless I don’t want to win). Under these metrics, it only makes sense that conversations prove fruitless and democracy fails.
When the goal is “determining what’s right,” then a “winning strategy” will be to learn mental models, read extensively, explore worldviews systematically, converse with people smarter than me, and so on; but if the goal is “being right,” then the best strategy is to appeal to dominant strategies as soon as possible, to ignore counterarguments and books that might upset my worldview (and/or “fixed belief”), to converse with people stupider than me, and so on — “the winning strategy” for “being right” is basically the exact opposite of “the winning strategy” for “determining what’s right.” For this reason, listing out all the problems with dominant strategies will not prove useful if we don’t, at the same time, change our metrics for determining what constitutes “a good conversation.” Strategies and goals are profoundly linked.
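The claim that strategies and goals are linked can be made concrete with a small sketch. The payoff numbers here are my own illustrative assumptions, not from the essay: under the “being right” metric, dominating pays no matter what the other player does, so the game settles into mutual domination; under the “determining what’s right” metric, a dominated conversation yields little truth for anyone, so engaging becomes the rational strategy and the equilibrium flips:

```python
from itertools import product

STRATEGIES = ("dominate", "engage")

# Payoff tables keyed by (my strategy, your strategy); the numbers are
# illustrative assumptions chosen to reflect the two metrics discussed.
# Metric 1, "being right": seizing the dominant strategy always pays more.
BEING_RIGHT = {
    ("dominate", "dominate"): (2, 2),
    ("dominate", "engage"):   (4, 1),
    ("engage",   "dominate"): (1, 4),
    ("engage",   "engage"):   (3, 3),
}
# Metric 2, "determining what's right": domination produces little truth
# for anyone, so engaging pays regardless of the other player's choice.
DETERMINING_RIGHT = {
    ("dominate", "dominate"): (0, 0),
    ("dominate", "engage"):   (1, 2),
    ("engage",   "dominate"): (2, 1),
    ("engage",   "engage"):   (3, 3),
}

def nash_equilibria(payoffs):
    """Profiles where each strategy is a best response to the other."""
    equilibria = []
    for a, b in product(STRATEGIES, STRATEGIES):
        best_a = max(STRATEGIES, key=lambda s: payoffs[(s, b)][0])
        best_b = max(STRATEGIES, key=lambda s: payoffs[(a, s)][1])
        if (a, b) == (best_a, best_b):
            equilibria.append((a, b))
    return equilibria

print(nash_equilibria(BEING_RIGHT))        # [('dominate', 'dominate')]
print(nash_equilibria(DETERMINING_RIGHT))  # [('engage', 'engage')]
```

The code changes nothing but the payoff table, yet “rational action” shifts entirely, which is the point: arguing against dominant strategies without changing the metric leaves domination rational.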
This might all sound and seem easy enough, but I submit to you that “getting our metrics right” is a much harder challenge than it seems, because by definition we must believe that what “we think is right” is right, for otherwise we wouldn’t think it. For us, the categories of “being right” and “determining what’s right” profoundly blur and mix, and that means ascribing to the right metric demands a radical level of self-skepticism and resistance to self-deception. How can we be sure we’re ever doing enough? Not easily, and the effort will require the whole gamut of mental models, intellectual tools, and the like offered up throughout the work of O.G. Rose. However, I will say this: if we find ourselves using any of “the dominant strategies” warned against above, it is likely that our metric is “being right” more so than “determining what’s right.” Perhaps not — nobody can enter the minds of another person to say for sure — but I hope this is at least a good heuristic by which we can examine ourselves.
We all have work to do.
What constitutes a dominant strategy is always relative to the metric by which success is measured, and generally what we measure is always going to be what we get more of (“metrics are destiny,” I’ve heard it said). For conversations to prove intellectually fruitful versus simply exercise power dynamics, all players of a “conversation game” need to agree that “the best argument wins.” Whether I like the results or not, if it’s “the best argument,” we go with it. If this rule is not agreed to, we will likely reach an impasse which only power and force can overcome. The details of why exactly are sketched out in The Theory of Communicative Action by Jürgen Habermas, but basically it’s because if two people disagree and don’t agree that something external to them is the standard against which the disagreement must be resolved (material reality, logic, “truth,” etc.), then the disagreement can’t be resolved except by one party forcing the other party to abandon their position. In this environment, dominant strategies are basically inevitable, precisely because they are the only way to end disagreement. That, or the disagreement never ends, and the two sides simply learn to live with that disagreement or never talk to one another again. In my experience, the latter is what tends to occur, worsening tribalism.
I repeat, metrics are a big deal. Even if everyone in a conversation agrees not to seek a dominant strategy, if there’s no agreed-upon goal, then this will accomplish little (and “the wrong goal” could have people end up in similar problems as those caused by dominant strategies). Basically, everyone must agree that “winning a conversation” isn’t about “getting our way,” but determining “what is in fact best.” If the majority of people follow the wrong metric, which is “getting my way” or “being proven right,” then this problematic metric will favor people seeking dominant strategies in conversation. As a result, power will indeed define discussion and democracy, which will easily motivate a reaction against democracy. And what alternative will that leave us with that will seem to be “a winning strategy?” Not one I’m so sure will constitute “winning,” I fear.
Please note, yet again, that it’s not easy to determine when we are “trying to be right” versus “trying to determine what’s right,” because these categories blur in our minds. How we determine “what’s right” is an extensive topic explored throughout the works of O.G. Rose, but I will note that the very use of dominant strategies is, to start, rarely and likely never a good idea (even if we can’t determine the truth), and second that the very use of them is evidence that we are seeking to “be right” more than “determine what’s right.” In this way, we can start to understand how recognizing dominant strategies is a good skill to have, even if there is ultimately more work left to be done.
In review, the two main rules which conversations should follow to avoid Nash Equilibria:
1. No player will seek a dominant strategy.
2. The standard by which “the best argument” is determined will be external to all players.
Today, it can feel random which people we can have good conversations with and which people we cannot, but I bet there is actually a recurring pattern: our good conversations are likely defined by players never taking dominant strategies, while our bad conversations involve “dominant players” and power dynamics. This isn’t to say people consciously are aware of the Nash Equilibrium problem sketched out in this paper, but in my experience my best friends “have a sense of it” and simply intuit that dominant strategies must be avoided, as they also intuit “the right metrics” for advancing and organizing discussion. Similarly, I would wager that the people we stop trying to communicate with are those who often employ dominant strategies, gradually training us to stay away (even if we don’t have the language of “dominant strategy” by which to understand what’s going on).
The hope of this paper has been to help us articulate something we have perhaps sensed, to move knowledge into understanding, so that we can better and more consciously avoid the Nash Equilibria which cause conversations to fail. If we don’t, I think democracy is probably doomed and totalitarianism likely, making “The Nash Equilibrium of Conversations” perhaps the most pressing Game Theory problem of all. For those who believe the claims of this paper are ridiculous, they might have a point, but alternatively, they might be exercising “a dominant strategy” of the very kind this paper admonishes against.
If democracy and society are to be saved, I believe it will be thanks to people resisting the temptation of dominant strategies in “conversation games,” which will be for people to act “nonrationally” (wherever the metrics of success haven’t changed, at least, which will likely be in most places for a long time). The same may hold true for saving personal relationships, families, and friendships: we all must act “nonrationally” and do what requires more vulnerability, more time and work, risks social status, and threatens our “fixed beliefs.” If we cannot learn to abandon dominant strategies, which means we must make ourselves vulnerable to others seizing them, I do not think democracy has much of a chance. Vulnerability is existentially terrifying, but if the average person cannot learn to live with that terror, then we will have to learn to live with the terror of power instead. Choose life.
¹For more on this, please see “Critical People Aren’t Critical Thinkers” by O.G. Rose
²This point is expanded on in “Persuasion Keeps Correctness From Feeling Totalitarian, but What if Persuasion Is Impossible?” by O.G. Rose.
³For more on logical fallacies, please see “Basic Math” by O.G. Rose.
⁴This point is expanded on in “Ludwig and the Authority Circle,” an O.G. Rose Conversation, which also argues for the necessity of authority, suggesting that the temptation to use this “dominant strategy” will always be with us.