An Essay Featured in (Re)constructing “A Is A” by O.G. Rose
On David Hume and Binding Ought to Such-ness
We all want to know how to live better lives, but that question is multifaceted. Earning some passive income will help us in one area of life, but we could be making all the passive income in the world and we won’t necessarily have a better relationship with our wife. We could be a good friend in terms of loyalty, but we might not be able to help our friends solve and address their deepest existential problems. Sometimes, it seems like “living the good life” is discussed like a monolith, like it is a matter of solving “a single area” of life, and then we’re good to go. In truth, “living the good life” involves figuring out relationships, work, money, and other important problems, presented by and in “such-ness” (as will be expanded on).
As has been argued throughout the work of O.G. Rose, our brains are frenemies: they want to save energy and rest on simple answers. As a result, our brains naturally want to think that solving life in a single area will solve life in all areas, and additionally our minds want our problems to be “solvable” versus only “manageable.” Manageable problems are ones we can never leave behind, ones that we may have to address daily.
We are a problem that can be managed but never solved, and seeing as “ethical situations” involve people, ethics are also subjects which cannot be solved once and for all. If we determine in one situation that “x is wrong,” it won’t necessarily follow that we never have to worry about x again or that x is always wrong. In c situation, x could be wrong, while in f situation it could be good, and yet tomorrow x could be wrong in f situation — it depends. It will not do for us to say, “x is wrong,” and by that mean always and/or unconditionally, for that is too A/A in an A/B world: it is to take an idea (“x is wrong”) and press it down and over the world, flattening the world. Instead, we need to form a dialectic between our ideas and the world, which would be A/B: perhaps “murder is wrong,” but it would not necessarily be the case that every instance of “ending a life” was murder; it could be the case that some instances of “ending a life” were only “killing.”
“A/A Ethics” generally assumes the stability, universality, and consistency of selves and situations, but even if there is something to be said about the stability of “Absolute Moral Categories” (as will be discussed), it is not the case that reality is stable and flat. “A/A Ethics” avoids a dialectic between idea and experience — or moral premises and situations — and instead “presses down” onto the world a premise which doesn’t take the world into account. There is no “murder” in the world, only “the ending of lives”: the moment there is a value, there is something “unnatural.” That doesn’t mean values are wrong or bad — assuming “unnatural is bad” would itself be an “unnatural value” — but it does mean that values are something we add to the world (for good and/or for bad). All ethics are applied ethics, and this hints at why “dialectical ethics” are necessary. More importantly, as I think Hume understood, there might not be any other way to stop ethics from becoming an oppressive force.
Moving forward, this paper will assume readers have been convinced that “A is A” is incomplete in favor of A/B. If you are not convinced of this case, please review (Re)constructing “A is A,” and the papers focused on ontology particularly. This paper will also assume readers are familiar with the language of A/A versus A/B.
“(Im)morality” by O.G. Rose argued that many ethical questions ultimately come down to ontology. It is always murder to kill humans (especially against their will), so it’s critical to determine “What constitutes a human being?” If x falls under the category of “human” and the life of x was ended against its will, then murder was committed. Considering this, the fact humans are A/B versus A/A may entail ethical consequences, for that entails ontological consequences. To start, it means we aren’t simple syllogisms or linear systems: it means we are dynamic and capable of creating dynamic situations. This is obvious enough: I am capable of wanting to kill someone, and so make the situation one of “murder,” while I’m also capable of accidentally killing someone, so making the situation one of “unintentional manslaughter” (perhaps). Since I am not A/A (and thus not “always willing” or thus “always unwilling”), the ethical meaning of a situation can shift even if the situation itself remains constant. Considering this, we cannot establish ethics on grounds of situation, which means we cannot say “x situation is good/wrong”; instead (especially considering that values are “unnatural”), ethical premises must be created in the realm of ideas first and then determined in “the space between” ideas and relevant situations. Hence, to be ethically skilled and accurate, a dialectic between “experience” and “thinking” is required (between “situation” and “idea”).
Is this “moral relativism?” It sounds like it, for we could be suggesting that “what is right and wrong” is relative to the situation, which would suggest there are no moral absolutes. But this is a mistake: to say that for x to be murder is contingent upon x wanting to kill is not to say that what constitutes “murder” is completely up in the air. As argued throughout O.G. Rose, “contingent” and “relative” are not synonyms (James K.A. Smith is right on this). Instead, we need a middle ground between “moral absolutism” and “moral relativism,” one I call “Absolute Moral Conditionality.” There is a short piece with that title featured in Thoughts by O.G. Rose; here, I will review a few sections which capture the main point:
Murder is always wrong, but admittedly, it is not always clear what is murder versus killing. Killing seems like it is not always wrong (say in self-defense, in stopping a rabid animal from attacking a child, etc.), so if x is “ending a life,” the question is if x always falls under the category of murder (y) or if it sometimes falls under the category of killing (z).
“Murder” (y) is a morally absolute category: it is always wrong.
“Killing” (z) is not always wrong…
…For the Christian, the teachings of Jesus or the Ten Commandments might be considered “practically moral absolutes,” but I’m not so sure. Certainly, “love your neighbor” is a morally absolute category — it is always good to love your neighbor — but it is not clear what acts fall under the category of “loving your neighbor.” (We may think we are loving them when in fact they interpret our actions as egotistical, for example.) Perhaps “going to Church” is always good for the Christian, but is “going to Church” merely attending, or do I also have to be mentally present? If mental presence is also required, then the physical act of “sitting through a Church service” would not necessarily fall under the category of “going to Church.”
The short piece then argues that determining if x falls under y or z becomes increasingly difficult the bigger and more complex the system, suggesting that “ethical living” is easier to navigate when we are embedded in a “common life” and/or particular situation. In other words, it’s easier to tell if I’m killing or murdering my cow than it is to tell if cows are being murdered or killed five states over. No, that doesn’t mean I’ll always make the right choice, but it does increase the likelihood that I will be right. This “increased probability” of being right hints at why “dialectical ethics” are so important, for if at best all we can achieve are “moral probabilities” versus “moral absolutes,” then we always have to be paying attention to what’s happening in the world: having the right ideas won’t be enough even if they really are the right ideas, because I still have to figure out when to apply them. I could have the right idea that “murder is always wrong” but apply it wrongly, thus making all “my ethical actions” ultimately wrong. Since there is the question of “applying the moral absolute,” I need to figure out when to apply it versus not, and since I will necessarily think I am applying it rightly even when I’m wrong, I need to “check and balance” myself. And this brings to the forefront the role of dialectical thinking, of checking my ideas against the situation, then checking the resulting ideas against the situation again, then again, and then again until it’s time for actualization, the decision.
If “thinking” the right ethics resulted in ethical outcomes, then dialectics would not be needed, as such would also be the case if “thoughtlessly acting out the right ethics” resulted in ethical outcomes (both A/A schemas). But the right ideas must be applied to the right situations, as the right actions must manifest the right ideas — there is a “two-step move” here that treating ethics as an A/A easily misses. Perhaps it is always true that “I should take care of animals,” but in order to determine how a given animal is best taken care of, I need to pay attention to my particular circumstances: it won’t matter how accurate my thinking is if I misapply the thought because I’m not paying attention to what I perceive. Unfortunately, since ethics has traditionally focused on “moral absolutes” and “moral universalities,” we’ve been trained to think that paying close attention to particular circumstances is unnecessary; worse yet, we’ve been trained to worry that this is an attempt to escape ethics into moral relativism. And certainly, moral relativists have used “particular circumstances” to deconstruct “moral absolutes,” but this has been a mistake which “Absolute Moral Conditionality” works to correct.
Perhaps it is “always good to take care of animals,” but if we put a dog in a metal cage for ten years and define that as “taking care of an animal,” then I am using a perhaps “morally absolute premise” in order to justify “immoral action.” This very possibility suggests the importance of dialectical thinking and regularly “checking and balancing,” an action which both “moral relativism” and “moral absolutism” could make us think we don’t need to do (either because we don’t believe in morality, despite the fact we must operate according to an idea of the good, or because we assume we are acting morally). Without perception, we won’t check to make sure that what we’re actually doing embodies a “morally absolute premise”: we’ll run the risk of being blinded by the thought, of the idea that “we are doing good” keeping us from realizing that “we’re not doing good” at all. The road to hell is paved with ethics, but a dialectic between “thinking” and “perceiving” can help us realize where we’re headed before we pass under the gate where all hope must be abandoned.
When we’re not paying attention, ideas will be our eyes, and ideas only see what’s inside our heads. Since thinking is incredibly flexible, if we are trained to give thinking “the final vote” over perception, it is likely that our ideas of “what’s right” will never be stopped. Most of us know someone who believes x is good and can’t be talked out of it — this is what happens when perception has no say. David Hume realized this, and so worked to argue that “oughtness” can only be determined from a place of involvement and commitment (as we’ll soon explore).
In conclusion, for too long, we’ve acted like either “killing is always murder” (thus meaning there are “moral absolutes”) or “we can’t determine when killing is murder” (thus meaning there are “no moral standards”) (the first brings to mind the mistake of Homo Ego, the second Homo Nihil, as discussed in “Homo Egeo” by O.G. Rose). No, there are no “moral absolute values that apply themselves,” and it’s indeed possible for us to misapply them (and perhaps we often do), but it doesn’t follow that therefore there are no moral values. Our lives must be organized according to “Absolute Moral Categories,” and living well requires a dialectic between “thinking” and “perceiving” — “Dialectical Ethics.”
To live dialectically is to always consider thinking in light of perception, ideas in light of awareness, parts in light of wholes — it is to live refusing to settle on “one side.” This can create anxiety, for dialectical living means we can never absorb into ourselves something “stable” and “complete” — we always have to be moving “back and forth” — but fortunately there is such a thing as “constructive anxiety,” assuming we choose to use our anxiety to grow. And if “On ‘A is A’ ” is correct, the only alternative to “dialectical living” is to settle upon a false stability or “unity,” which would be for us to be defeated and even metaphorically killed (unities are death — A/A is not for us — as we learn from Dr. Cadell Last). To live dialectically is to be open to a whole we cannot access: as discussed in “Homo Egeo” by O.G. Rose, we are like a video camera which can’t turn its lens on itself, but since we are part of “the whole,” if we can’t include ourselves in our schema, then our schema cannot be “the whole.” To fix this “incompleteness,” we can look at ourselves, but if a camera tries to record a screen onto which it is projecting, an infinite regress appears (like two mirrors facing each other). And so we can’t be “in” our schema: we must always be “lacking”; our schema must always be “incomplete.” “The whole” is always consuming or “withdrawing” itself the moment it “might” appear (to allude to Heidegger), but the hope is that we can catch “glimpses” of “us-world/wholeness” perhaps just before the “withdrawing” occurs. If we gather up all these moments into memories we cling to, perhaps we can live according to a “sense of the whole,” however imperfectly.
To return to the topic of ethics, I am never right/wrong, only “maybe right/wrong now.” In one way, again, this creates anxiety, for it means I can never rest in “being right” (by finding the right ethical formulation that I can apply in every situation), but in another way it creates hope, for it means I don’t have to worry about always being wrong (damned, per se). “Dialectical Ethics” never assumes “I am (always) right” or “I am (always) wrong”: it will assume “I am right/wrong,” a fluctuating mixture. At the same time, “Dialectical Ethics” understands that I must, by definition, make decisions based on what I reason is good (for/to me): if I am making choices, I am making choices “toward” the good (even if I’m wrong). Here, we see a repeat of the ontological formula presented in “On ‘A is A’ ” by O.G. Rose: as it is the case that we are “ ‘A/(A-isn’t-A)’ is ‘A/(A-isn’t-A)’ (without B),” so we are “Right/Wrong is Right/Wrong” “toward” Rightness (which we are without). This is not critical to fully grasp, but the main point is that there is a “likeness” between our ontological being and ethical being, one that sets us up for irony, paradox, and contradiction. To avoid error, we must be constantly active.
What do I mean that we must always be “toward” what we believe is good? Well, this is elaborated on in “Assuming the Best” by O.G. Rose, but ultimately it means that, following Augustine, there is no such thing as a bad motive: if I am doing x, I must believe x is good to do (even if x is suicide). All thinking and action is ultimately organized by the good, and yet there is no guarantee that I even have the faculties to accurately determine the good. Still, I must be motivated by it, and since I am ontologically A/B in an unstable world, the good I determine today is not something I can necessarily rest on tomorrow — the situation can change — and so I must always be active and aware. Perhaps I was motivated yesterday to achieve the good of making my wife happy, and perhaps I succeeded, but it would not follow that what I did yesterday to make my wife happy would make her happy today. In fact, a source of joy yesterday could be a source of pain today: “the good” can shift, and so I must be ready to shift with it.
Recently, Guy Sengstock hosted a discussion with John Vervaeke on “listening to reason-itself,” and the subject of “reason” strikes me as useful for understanding why “the good” should always be a central concern of intellectual life. To start, “reason” is a funny word: it has something to do with thinking but also purpose, which might suggest to us moderns that the meaning of life is to think about it. But perhaps we’re too drunk on the Enlightenment (and the “autonomous rationality” which concerned Hume): the “Dialectical Thinker” asks, “Why couldn’t the reverse be the case? Why can’t the meaning of thinking be to live it?” Following Sengstock and Vervaeke, indeed, that is how Classical Thinkers thought: they believed “the reason we reasoned” was that there was goodness out there worth finding.¹ The fate of reason and the fate of goodness were tied together, suggesting that there was wisdom in considering “truth, goodness, and beauty” closely related, a wisdom today we seem to have lost.
Reasoning is always needed to determine the good, both because “goodness” is a value which cannot exist in nature, and because “ideas of goodness” must be applied to situations to determine when they fit and when they don’t (it is not possible for us to achieve a “general framework of goodness” that we can live by like road signs that all we need to do is read). How goodness manifests is always particular and specific, and identifying goodness in a specific situation requires reason if we are to do it well. Just knowing that we should (generally) “be good” isn’t enough for us to know “how to be good”: that requires discernment, discernment which is always specifics-dependent. Paradoxically, did the focus of Western philosophy on “universality,” “objectivity,” “non-contingency,” and other similar categories contribute to the mutation of reason into logic, divided from considerations of the good? Hard to say.
Classically, reason that wasn’t motivational wasn’t “good” reason — it didn’t work; it was broken, dysfunctional. Today though, in conflating “reason” with “logic,” we don’t suspect there is something wrong with reason when reason strikes us as passionless. In fact, we’ve been led to believe a lack of passion is good, for that means we are “more objective.” Not only have we divided reason from what makes it itself, we’ve moralized the mutation, which of course means we’ve moralized lacking the capacity to identify the occurrence of the mutation. Our “depravity” is total, I fear, and for “logical reason.”
Logic doesn’t have to motivate us, so if reason is merely “logic” now, reason cannot provide reason (leaving us with a paradox we lack the capacities of reason to untangle). Personally, I think there is no clearer sign that reason is dying than the spread of boredom (which is a “death of sentiment,” noteworthy for our coming discussion on Hume), and it’s not by chance that boredom also corresponds with “the meaning crisis” Vervaeke discusses. Boredom is not a state of having nothing to do (for there is always something to do), but a state in which we don’t see significance in what we could do. If nothing is significant to us, then nothing is worth reasoning about, which means nothing is worth creating motivation “toward.” And so we are stuck in a circular trap: we need to reason to determine what is significant, but if nothing is significant, there’s no reason to try. And so the falcon cannot hear the falconer: the falcon is too enlightened. And the falcon certainly can’t hear music, for that would require listening, and the falcon has decided life is too dull for tuning in. It’s only logical.
There is no such thing as a “bad motive”: if we are motivated to do something, we believe it is good. Reasoning, then, to be motivated (to reason), must be “toward” the good; if reasoning isn’t, then “reason” becomes “logic,” and “logic” lacks a “good” to be “toward” beyond “the good of being logical in and of itself.” And certainly, it’s good to be logical, but when logic is its own end, it doesn’t take long for us to get bored with it. Once that happens, we have ceased to see significance in being logical, and so we will cease to be logical/reasonable. In this way, “the loss of reason from reason” — the loss of a “vision of the good” and purpose from thinking — is the death of reason. To allude to “Deconstructing Common Life” by O.G. Rose, if it is the case that “autonomous” and/or “instrumental” rationality cannot ultimately provide itself with a “vision of the good” (that feels justified and grounded), then “autonomous rationality” must ultimately lead to a spread of boredom and, by extension, “the meaning crisis.”
But wait: isn’t “pure goodness” impossible? Didn’t we say earlier in this work that A/A is death and A/B reality? How can we be motivated by a “goodness” that isn’t “fully good?” A “lukewarm goodness” seems unable to sustain reason. A great question, but this is the trick: “Dialectical Ethics” are A/B not in terms of “moral degree” but “ontological source.” An “A/B good” is just as good as a possible “A/A good” — one is not necessarily more or less “good” than the other — the difference is how they arise, the process which generates the (supposed) goodness. Whereas “A/A Ethics” emerges in the realm of ideas or the realm of experience exclusively and/or predominately, “A/B Ethics” emerges from the dialectic between ideas and experience. To be more straightforward, the difference between “A/A Ethics” and “A/B Ethics” is deep commitment and participation in a “common life.” This brings us to the work of David Hume, Adam Smith, and the Scottish Enlightenment in general.
Before moving on, note that the “frenemy” brain seems to be in the business of preferring “the pure” to “the dialectical”: it wants us to be A/A and will gradually, slowly, carefully, and subconsciously make us such if we are not actively aware of our “being in the world.” We could avoid this risk if we didn’t worry about ethics and “the good” at all, but as this section has hopefully made clear, to abandon “the good” is to ultimately abandon reason itself (for it will inevitably devolve into logic and lose its significance). Hence, we must take the risk (for we require a good to be “toward”), as we need philosophy despite the risk of “bad philosophy” (as Hume discusses). But it will be unnatural for us to make the risk work out in our favor: if we succeed, it will be a great victory, and for Hume, that victory was ultimately impossible without “common life.”²
When reason devolves into logic, we become a strange paradox. We must do what we believe is “good” to think and act, but we no longer have a “reason” by which to be “toward” “goodness,” which means we have to act “toward” a “goodness” we cannot articulate, grasp, or comprehend. We certainly can’t find reason to think this “idea of the good” is “actually good,” and yet we still need this “idea,” and faced with the resulting existential anxiety, we can stop trying to understand what’s going on (because, under these conditions, we can’t understand: there is an unresolvable paradox). At the same time, we lose the sense of significance needed to care that we stopped trying. Alright, perhaps this is what occurs on the individual level, but what about the social and government level? Well, governments must still operate according to what they think is “good” too, but if they only have logic to use, the goodness they implement and put forth will likely lack “reason.” And a “reasonless good” could be a terror that causes suffering and spreads boredom. Without reason, even if the ethic which the State supports is “actually good,” it will not feel “actually good” to the people, and they will easily feel oppressed by it, which could lead to revolution. Where “reason lacks reason,” it’s unlikely State “ideas of the good” will actually be good, and even if they are, they probably won’t “feel good” to the people.³
If we take seriously our “A/B Ontology,” then an “A/B Ethics” is “fitting” and necessary. The right ideas do not apply themselves, nor do “the right ideas” automatically recognize when the “fitting situations” emerge to which they should be applied. To elaborate: when I am in y situation where “x ethic” should be applied, it is not the case that “the idea of x ethic” will force itself into my head and force me to apply it to y situation. I have to recognize that x is fitting for y, and such recognition requires a blending of perceiving the experience I am in and recalling the best ideas for it. It is not the case that my experience of a tree will force me to act “the most fitting way” toward that tree: I have to “bridge the space” between idea and action. I have to think up the idea for “how trees should be treated,” and then I have to embody, realize, and/or enact that idea; otherwise, the tree may never be treated in the way “that is best.” I function as a mediator between things and ideas (between one A/A dimension and another A/A dimension), suggesting why I am A/B and why that might be good.⁴
For us to be A/B versus A/A means we must always be a little “off.” We can never be completely unified or stable; there must always be something about us that is “restless.” Similarly, if we ever feel like we aren’t “restless,” we might want to start being concerned about what we believe is “good” and “wrong.” Certainty in our ethical life is dangerous, not because “moral absolute categories” are dangerous (they are in fact necessary), but because a lack of “restlessness” might suggest that we have become certain about how and when we should apply ethics, and we should never be certain about that, precisely because we aren’t God and can’t see the future to know what situations might arise tomorrow. Furthermore, we can’t fully know situations we aren’t involved in (a Hayekian point), so though perhaps we can access them (and say “y seems to be z”), it would be erroneous to judge them (“y is z”) (this is a distinction elaborated on in “Self-Delusion, the Toward-ness of Evidence, and the Paradox of Judgment” by O.G. Rose). Quickness to say “that’s right” or “that’s wrong,” a comfort in making judgments about distant people or places — all of these are signs that a person is engaging in “A/A Ethics” versus “A/B Ethics,” that they are too “unified” and not “dialectical.” They are acting as if “thought” and “being” are one and the same, as if when I think “x is wrong in y situation,” the very existence of the thought verifies that being is such (and so the thought is valid). On this point, we now turn to David Hume and his break from Parmenides to suggest wisdom in Hume’s effort to divide “is” and “ought,” for the division isn’t as “total” as it seems.
To oversimplify a complex debate, Parmenides establishes a unity of thought and being. In other words, thinking is being and being is thinking: generally, there’s a “one to one” relationship. If I think about a cat, then I am thinking about the actual cat: the thought that “the idea of a cat” and “the cat itself” are different entities is something Parmenides would debate. But after Hume, and certainly after Kant’s noumenon, we now know that “things” and “ideas” aren’t identical, and furthermore we realize that even if we have “the right ideas about things,” we can’t be certain we even possess those ideas, and so even if in some way we have “more being” in this situation, we cannot meaningfully say such is the case (only be confident, perhaps). It is mostly thought to be Hume’s fault that being and thought were divided (not to say there weren’t forerunners), and, for our purposes, a focus on the “Is/Ought Problem” will help us understand why.
Hume argues that we cannot, from identifying what a “thing is,” ever determine what a “thing ought to be.” This doesn’t mean we don’t try, but if we think we can establish with any certainty or absoluteness “what is the case,” we’re deeply mistaken. As already established though, we must have an “idea of the good” if we are to be motivated, make choices, and act, so where does that leave us? We require “goods” but cannot meaningfully establish “good?” What a mess that would leave us, a possibility which Hume seems to be leading us into, which seemingly confirms the sentiment that Hume is a “Derridean trickster.” Is he saying we shouldn’t establish ethics? Not at all: Hume understands that we cannot think there’s no “right way to live”: an “unethical life” is impossible. We must live somehow, and that means we must live according to some idea of what’s right and what’s wrong. The problem today is that we seem to think it’s possible to reason and live without a vision of “the good,” but that just means we make decisions according to a “good” we never systematically consider, own, or consciously practice. Reason thus becomes mere logic and reasonless.
To start, Hume wants to deconstruct “absolute moral certainty,” not “ethics” in general (please note I don’t make a hard distinction in this paper between “ethics” and “morals,” even if it might be warranted). Hume is concerned about those who feel certain that they know what is right and what is wrong, and by extension feel like they can “judge” how people live their lives. Generally, Hume is always after “judgment” (what some thinkers have associated with “the final intellectual act”): Hume wants to point out that we cannot “jump” from an experience of balls always falling or the sun always rising to some “natural law,” as he wants to argue we cannot “leap” to a “stable identity” from a constant and regular stream of impressions, memories, and the like. But it is not “judgment in general” that Hume wants to reject, for he understands we cannot avoid forever making an assessment between this and that: the key is that Hume wants us to understand that ultimately abstract reasoning cannot make a justified move into judgment, that instead the foundation for judgment must be “common life.”
When we judge that “balls always fall when we throw them in the air,” it seems like this judgment is founded primarily on our ideas and that it mostly comes from out of our minds, but really the judgment is founded on consistency of experience and only translated into terms of thought. This is similar to how thinking is always rushing in and consuming perception, as discussed in “On Thinking and Perceiving” by O.G. Rose: thinking and philosophy in general are always trying to take credit for “creating/discovering conclusions” that are primarily created and discovered by “common life.” But since to recognize conclusions derived from “living” we must “think about them,” it always “feels like” thinking is primary, and certainly thinking plays a critical role (for otherwise we’d hardly be different from the animals). But ethical truth emerges ultimately from out of a dialectic between thinking and perceiving, not just from thinking, and not just from perceiving. Where there is no dialectic, there is a lack of proper understanding for how ethical conclusions should be formed and applied.
If a dialectic between the world and ourselves is needed to make (more likely) accurate judgments, then “moral certainty” is impossible, especially for those who are uninvolved in the relevant situations: Hume is empowering those involved in situations to make ethical judgments about them, which basically means that he is empowering people to direct and organize their own “common lives.” But even those people are not allowed to have “moral certainty” according to Hume (which could breed “the banality of evil” which concerned Arendt), for if there is a “dialectic,” there must be awareness, thought, and application, all steps where mistakes could be made. Hence, even involved parties only have a claim on “moral confidence,” but “certainty” is off the board. For Hume, that decreases the possibility of political and social oppression.
Deeply inspired by Livingston, for me, Hume is always playing the same game: he wants to deconstruct the very possibility of thinking (“with moral certainty”) about lives in which we aren’t embedded. Where there isn’t embodiment, commitment, sacrifice, and participation, there ought not to be thinking, and especially not moralizing. Hume is after “the academic philosopher” in favor of “the farmer philosopher,” per se, because he has seen how “academic philosophers” can control people whose lives they have no business even being involved in. Unfortunately, since “academic philosophers” tend to control “the official philosophical narrative,” per se, in their minds all philosophy is “academic philosophy,” so the distinctions Hume wants to make between “good philosophy” and “bad philosophy,” “common philosophers” and “academic philosophers,” etc., have all been lost. If all philosophy is “academic philosophy,” for Hume to attack “academic philosophy” was for Hume to attack “all philosophy,” and that’s often how Hume has been taught: he is considered an enemy of rationality itself. And most of us laugh at this mission, because we generally understand that rationality and philosophy play a necessary role, and so we all tend to think Hume must be doing something irrelevant and foolish just to show off (“What’s the practical use of disproving natural laws anyway?” “Why is he telling jokes?” — such questions on Hume I’ve heard). As a result, we remember Hume mostly as the person who woke Kant up from his “dogmatic slumber,” and ultimately Kant receives credit for being the better thinker, because Kant “saves academic/all philosophy” from Hume (which we don’t even believe Hume could have really threatened: it just “seemed like” Hume posed a threat because we didn’t have the arguments yet that Kant would discover, which we all knew had to be there somewhere, because there was no way rationality could be cast aside and disregarded, as we tend to think Hume wanted to do).
And so Kant tends to be viewed as putting a trickster back in his rightful place, and in “saving us,” Kant taught us to tear down the protections Hume erected to save us from totalitarian and political oppression. The 20th century is then a story of the consequences of that choice.⁵
Far from a trickster, for Hume to argue against the possibility of linking “is” and “ought” is not an argument suggesting we are permitted to do whatever we want; instead Hume is arguing that we cannot philosophically ground ethics, but instead must ground it in sentiment and experience. In other words, we find “oughtness” in “such-ness,” not our raw ontological ideas of existence. This doesn’t mean ontology can’t play a role, and as argued in “(Im)morality” by O.G. Rose, ethics must ultimately depend on ontology, but an ontology based on “is-ness” for Hume is problematic: Hume wants our ontology to be based on “such-ness.” Where Hume deconstructs “is/ought,” he erects something more tangible and real.
(As a technical note for those reading all of (Re)constructing “A is A,” please note that in this paper I make a hard distinction between “is-ness” and “such-ness,” but in previous papers, such as “Truth Organizes Values” or “(Im)morality,” I use the term “is-ness” interchangeably with “such-ness.” I’m sure this is confusing, but this paper (as you’ll see) needed a clear distinction between “ideas of what things were” versus “experiences of what things were” (the paper “On Is-ness/Meaning” signified this distinction with the difference between “is-ness” and is-ness (“such-ness”) which is notably confusing). Still, hopefully all my papers that discuss “is-ness” make it clear that, when I use the term positively, I mean it as “such-ness” — an experience of actuality that can only be determined in particularity.)
Alright, but what were Hume’s views on ethics? That needs to be elaborated on, but to give a quick answer: Hume felt “sentiments” were the foundation of “moral life,” and “sentiments” emerged from direct experiences of people (which links “sentiments” with “such-ness,” as will be explained). On this topic, Adam Smith and David Hume had much in common.
Hume claims that the moral life primarily results from “sentiments,” which brings to mind the work of his dear friend Adam Smith in The Theory of Moral Sentiments (which was inspired by Hume). For Hume, we don’t primarily treat our neighbors well because we read a moral argument on how ethics is rational or because we are persuaded by the Bible (not to say moral arguments play no role), but because our encounter with our neighbor creates an emotional response. We see their face; we hear them laugh; we recognize their struggles. For Hume and Smith, “sympathy” and/or “empathy” were the foundation of moral life — the ability of humans to feel the lives and emotions of others. We could construct the most perfect moral system in history, but without sentimental attachment, it would not matter. Even a perfect cathedral can be empty.
Is sentiment really that important? It might not seem so (“Probably Cause” by O.G. Rose inspects the question more closely), but ask yourself: do you treat strangers nicely because you read that you “ought to” in a book, or because you see their face? Sure, it can be a little of both, but doesn’t the very “presence” of a person motivate you to act and speak differently? And isn’t it the case that the more you know a person, the more natural it becomes to treat that person morally? Is it the case that the more you know a moral philosophy, the more natural it becomes to treat people morally? I’ll go out on a limb here, but I think the first matters more than the second: reading all of Kant will not motivate me to treat someone nicely as much as will my deepening relationship with them (though paradoxically loved ones can bicker, though this can suggest a comfort to bicker, and still even bickering loved ones will die for each other at a moment’s notice). If I were emperor of the world and wanted to increase overall morality, the choice between making everyone read moral philosophy and making everyone increase their empathy would be an easy one. Yes, moral philosophy could help increase empathy, but moral philosophy without empathy will achieve little. When it comes to ethics, empathy is more important than ideas.
Have you ever had to fire someone or tell someone something they didn’t want to hear? Have you ever had to stand on a stage and make a speech? Have you ever disappointed your coworker? All of these examples hint at how emotionally invested in and committed we are to the people around us (and sometimes to a fault, as is the case when “status anxiety” creeps into our lives, as Alex Ebert discusses). Yes, we can get into how these emotional connections can manipulate us, contribute to mental health problems — I’m not saying these “connections” are necessarily always good — my point is that Hume and Smith are more correct than not that the foundations of moral life are found in emotions, empathy, and embeddedness. Arguably, it is this lack of “embeddedness” that most concerned Hume, for “unembedded ethics” was not only “practically impossible” but a possible source of oppression and control.
In a way, by emphasizing sentiments, habits, and the like, Hume is suggesting that we need a fuller sense of how our minds work. When asked, people often associate “the mind” with “the brain,” but there’s a sense in which Hume wants us to associate “the mind” with “our whole body.” Our whole bodies are our minds, per se; our brains are part of our bodies, yes, but our bodies are also so much more. Our minds are our bodies, and our bodies form sentiments, absorb experiences, empathize, carry out manual labor, participate in communities — Hume wants us to understand that all of this is “material” which composes our minds. Our minds are not just composed of the thoughts that live and dwell in our brains; our minds are basically our souls.
Please note that Hume isn’t favoring “emotionalism,” per se, but “emotional intelligence”; he doesn’t favor “blind obedience to tradition,” but a dialectic between tradition and progress that ultimately defers to tradition when there’s “a tie”; he doesn’t favor “irrationalism,” but a rationality that understands it ultimately requires “nonrationality” to be itself. Hume isn’t an enemy of philosophy in general, as he isn’t of religion: Hume is concerned about abstract systems that aren’t ultimately grounded in “common life” and that don’t humble themselves before experience. “Ideas are not experiences,” Hume understood, and ideas should act like it.
But isn’t “dispassionate judgment” good in ethics? Aren’t emotions and subjectivity threats to sound judgment? There’s a reason we compose juries of “disinterested actors,” yes? Indeed, both Hume and Smith understood that the moral life ultimately needed “dispassionate judgment,” but there’s a huge difference between “dispassionate judgment” about lives we are “embedded” in and “dispassionate judgments” about ways of life in which we don’t participate. To put it another way, Hume was concerned about us making “moral judgments” when we had “no skin in the game,” and that’s what he saw happening in the traditional game of “moral philosophy.” By making sentiment and empathy primary to moral life, Hume wants involvement to matter, whereas “involvement” could be seen as a threat to judgment in the eyes of traditional philosophers. (Before moving forward, do note that I think Smith’s treatment of sentiment is better than Hume’s, for Smith makes “sentiment” more like “empathy.” Both thinkers get the job done, I think, but Smith strikes me as a little closer to the truth.)
Traditional philosophers could moralize “un-involvement,” acting as if their judgments were “more reliable” precisely because they had distance from situations. And certainly, distance can sometimes have its value, but using distance to “assume” the soundness of moral judgment and to disqualify outright the moral judgments of the “involved” was, in Hume’s eyes, a recipe for disaster: this line of thought could easily be seized upon by tyrants to control and oppress “common lives.” Hume wanted to stop this, which perhaps motivated his work on the “Is/Ought Problem,” as will soon be discussed. Though we can associate “dispassionate judgment” with a lack of emotional investment, Hume wants us to understand that “dispassionate judgment” isn’t necessarily good if that judgment is wrong, and for Hume the lack of “sentiment” greatly increased the likelihood that “dispassionate judgment” erred. Furthermore, it’s generally impossible for us to judge what we aren’t involved in, for there’s no other way to have access to the necessary information. If we don’t have a meaningful relationship with Sarah down the road, we probably don’t know what she’s going through or what she does — she has likely never opened up to us. Additionally, if Sarah is part of our town, what happens to Sarah can impact us, so Sarah’s fate is one in which we have “skin in the game.” This will give us greater incentive to make sure we judge Sarah correctly, and though this might risk subjectivity and emotions clouding our judgment, it’s better to err on this side than on the side of judging from a place of lacking any connection at all.
For Hume, “dispassionate judgment” is only meaningful where we have to combat “passions,” and we only feel “passions” for those whom we know. It is nothing of note to feel nothing toward people we don’t know; what’s considerable is being “dispassionate” toward what we are invested in. This causes a “dialectical tension” in us (and remember, where dialectics are lacking, there tend to be problems), for the anxiety of knowing the people whom we assess collides with the anxiety of knowing we should not let our personal connections cloud our judgment. We know we need to be fair, which can be hard, but there’s also a higher chance we’ll be fair precisely because, in knowing the person, there’s a higher chance we’ll know all the possibly relevant information and care about it, which increases the likelihood that we will think deeply about all the evidence. In this way, we need dialectical “dispassionate judgment,” which is much tougher but more vital than “uninvolved judgment.”
As discussed in “Probable Cause” by O.G. Rose, there’s a famous passage in Smith that could help illuminate the need for “dialectical dispassionate judgment.” Smith writes:
‘The most frivolous disaster which could befall [a man] would occasion a more real disturbance. If he was to lose his little finger tomorrow, he would not sleep to-night; but, provided he never saw them, he would snore with the most profound security over the ruin of a hundred million of his brethren.’⁶
This is an upsetting passage, and we may desire to deny its claim. But can we? If we’re honest, it’s truer than we’d like it to be, and we must be honest if we are to accept the wisdom of recognizing that “dispassionate judgment” needs to occur from a place of embeddedness. All we must do is ask ourselves: do we want the person more concerned about his finger deciding what should be done about the calamity on the opposite side of the world, or do we want someone who is part of the calamity to be in charge? Can the person with the finger problem be more “objective”? Maybe, but it’s also more likely that he’ll be wrong: he’s uninvolved, detached from relevant information to a profound degree. Are we so bent on “objectivity” that we’re willing to pay the price of ignorance? Why does ignorance for objectivity strike us as a good bargain?
Hume understands that we may not want the judge of a case to be the father of the accused boy, but Hume would warn us that we better make sure the judge is from the same town. There’s a ditch on either side of the road: a judge can be too uninvolved and too involved. So it goes with kings and rulers, and Hume worries about empowering those to run what they don’t live in while telling themselves that their distance and lack of involvement constitute an advantage. In this way, detachment can be moralized, and it is this moralization Hume hopes to stop.
Basically, Hume wants “dispassionate judgment” to be hard, for that will help “bind it” and “rightly order it.” He is worried about someone entering into a “common life,” claiming that because they can look at a situation “from a distance” they are therefore more reliable, authoritative, and objective, and then using that “edge” in their self-interest at the expense of the people. But if the judge has connections with the people, then “sentiment” will make it difficult for the judge to take advantage of them: he’ll have to live with himself knowing what he has done, whereas a stranger will likely feel much less remorse. Often, our movies and stories highlight the importance of an “outside observer” who reviews a situation, and though certainly some “creative thoughts” can arise this way, Hume would be skeptical of putting too much hope and faith in “distance.” For Hume, non-dialectical “dispassionate judgment” is a subtle way for tyranny to enter and reign.
If Hume is correct that “sentiment” is the foundation of ethical life and that the role of philosophy is mostly to “defend” the ethical life established by sentiment (more than “justify” it), then a view of ethics as achieved through “dispassionate judgment” is the exact opposite of how ethics is actually achieved. Arguably, “dispassionate ethics” could be an ironic source of “immorality” and “tyranny,” as Hume seems to have been aware. For this reason, Hume seeks to ground ethics in “such-ness” versus “is-ness,” arguing that “ought” cannot be derived from thought but only perception (to use my language).
(Moving forward, as described in The Philosophy of Glimpses by O.G. Rose, keep in mind that both the perceived/experienced and the thought must ultimately be written about, and that can make it seem like the two are identical (“the medium is the mask”). The fact both the conclusions of perception and thought must be communicated can make it seem like both are “thoughts,” and thus the distinction between “is-ness” and “such-ness” can seem unnecessary and even foolish, but that is a trick of the written word itself. What is that distinction? Let’s wait and see.)
Heidegger uses the term “such-ness” to refer to how being “presents itself”: it is basically a term which refers to “experience itself.” Using that language, we could say that Hume wants to establish the impossibility of determining “oughtness” from “is-ness” in favor of grounding “oughtness” in “such-ness.” “Is-ness” is what we arrive at when we answer the question “What is a cup?” whereas “such-ness” is when we go over, look at, and pick up the cup. “Is-ness” is thought, while “such-ness” is lived. Now, when I think about “what a cup is,” I tend to remember the feelings and uses of a cup, so it seems like “is-ness” equals “such-ness,” but “ideas are not experiences.” Yes, my idea of “is-ness” is likely informed by real experiences of a cup, so there certainly will be similarities between “is-ness” and “such-ness,” but David Hume is skeptical of how well we can actually remember and recall experiences, especially experiences that are not grounded in our everyday lives but instead happenstance events or encounters in the past. Hume wants our ideas (of “is-ness”) to be constantly and regularly (re)informed by “such-ness,” because it is so easy for us to think we remember and know what something is like when really we’ve forgotten and unknowingly begun remembering it wrongly. We naturally become overconfident in how much “is-ness” and “such-ness” are identical, and in that overconfidence, we start making claims about how cups “ought” to be used. This is where the trouble starts.
I cannot live without ideas, but ideas can be problematic. They never equal their subjects (ideas of x are never identical with experiences of x), which means ideas are always “approximate,” but some ideas are closer to their subjects than others. The more regularly we are informed by “such-ness,” the higher the likelihood our ideas are “more like” their subjects than not, and the ideas of ours that are “most informed by such-ness” are those about our “common life.”⁷ So Hume wants us to focus on that: far from antirational, what Hume is suggesting (at least to me) is very reasonable. Basically, regarding subjects relative to which we can’t form habits or sentiments, apply customs, or have direct experience, we “ought” not to moralize about them. And if we haven’t formed habits or sentiments, applied customs, and/or experienced them directly, we haven’t “apprehended them enough yet” to earn any grounds for moralizing about them.⁸ We should know our place.
It is critical to note that “habit,” “custom,” “sentiment,” “tradition” — all of these enterprises which Hume is associated with defending “against” rationality are themselves forms of thought. This is a critical point: Hume is not critiquing thought over there while defending thoughtlessness over here; rather, Hume is critiquing one kind of thought (“autonomous rationality”) in favor of another kind of thought (embodied in and through emotions, habits, etc.). Traditions are formed through long processes of trial and error, of thinking and testing — they don’t just appear out of a vacuum without any ideas to back them up. “Tradition” for Hume is a collection of tested ideas; “sentiments” are ideas about how people should be treated based on how we feel around them; “customs” are ideas that have worked for a people based on their particular circumstances and that there is reason to trust even when it seems like they shouldn’t be trusted; and so on. Hume is not a “skeptic” of rationality in the sense that he wants to “deconstruct rationality” in favor of living thoughtlessly; instead, Hume wants to refine rationality and ground it in something beyond itself (which is “nonrational,” as is ultimately necessary, considering “The True Isn’t the Rational”). Hume wants to deconstruct “autonomous rationality,” yes, but not rationality in general. Perhaps calling Hume a “skeptic” has made it difficult to understand Hume’s project, especially after Derrida, for now we read the word “skeptic” and hear “deconstructionist.” For Hume, skepticism was an act of refinement and a defense of common people; it was not in the business of deconstruction and uprooting.
Hume seeks to disprove that thinking can move from “is-ness” to “ought-ness” not because Hume believes ethics should be cast aside, but because Hume believes no one should determine how people “ought” to live who isn’t involved in and committed to the lives of those people (as part of their such-ness). Hume viewed moral claims made outside the lives those moral claims referred to as dangerous: he sought to divide “thought” and “being” (generally) to stress that only “life” and “being” should be unified. For us to unite “is” and “ought,” we must do so through living: thinking is a cheap shortcut that entails dire consequences and risks empowering totalitarian forces.
To discuss again the work of Sengstock and Vervaeke, the more sentiment dies, the more life cannot be fixed by logic and “autonomous rationality”: rationality might try, but it will find itself unable to reestablish a sense of significance, which means boredom will spread. Where custom is lacking, rationality will be establishing premises “in the dark,” without the testing or trials by which it would have a better sense of what might work. Where there are no traditions, rationality cannot establish an idea of how lives and seasons should be ordered and managed that won’t ultimately feel arbitrary and “top down.” No matter what rationality does, it cannot fill the roles of custom, tradition, sentiment, and the like — there are limits to what it can accomplish — “autonomous rationality” will never be enough. Even if we logically conclude “the best of all possible answers,” without “full body embeddedness,” the answers won’t “feel” significant at all: we could possess what we’ve always wanted and not have the eyes to see it.
To return to the topic of Parmenides and the “unity of thought and being,” Hume is not arguing that “being” and “thinking” can never be “practically unified” — it all depends on the function of the word “being.” By that, do we mean thinking-based “is-ness” or perception-based “such-ness”? For Hume, we must organize ourselves morally according to “such-ness.” Where we establish “ought” based on “is-ness,” the likelihood of (horrific) error is great; where we establish “ought” based on “such-ness,” mistakes are still possible, but much less likely (and probably on a much smaller scale). The following graph might help with the point:
The graph generally represents “system size” or “scale.” In other words, the bottom of the graph represents my home, while the top of the graph represents my planet. The higher up the graph I move, the more “such-ness” becomes “is-ness” and divides from “ought-ness.” The divide is smallest and even “practically closable” in my home, while it is a little wider on the scale of my town, wider on the scale of my local area, wider on the scale of my State, and so on. “Such-ness” becomes more like “is-ness” the higher up we move (it’s a gradient), which means that “ought-ness” becomes more difficult to determine and more likely a force of totalitarian control which David Hume hoped to stop.
The more I move from “such-ness” to “is-ness,” the less involved in my life “perception” can be (for I cannot perceive or experience increasingly larger and more abstract systems as well, meaning it’s more possible to experience Charlottesville than it is to experience the United States). If a key to “A/B Ethics” and realizing my “A/B Ontology” is a dialectic between thinking and perceiving, then this would prove problematic. This is not to say “large systems” never have a role, but even when they have a necessary role, there is a risk of us ending up oriented away from dialectics and “toward” A/A, which means we need to recognize that we “work with fire” when we work with large systems (but also note that fire can cook food and keep us warm, so this isn’t necessarily bad).
The movement of “such-ness” to “is-ness” corresponds with a movement out of a dialectic between thinking and perceiving. And here’s the rub: if Hume is correct that “is-ness” and “ought-ness” must be divided, then it is only in a dialectic between thinking and perceiving (ideas and “such-ness”) that ethical life can be determined. And that means “Dialectical Ethics” are the only ethics — everything else is probably just power in disguise.
Blackness Visible by Charles W. Mills contains a section that I think helps illuminate why “such-ness” plays a critical role in philosophical and ethical formation.
‘If your daily existence is defined by oppression, by forced intercourse with the world, it is not going to occur to you to doubt about your oppressor’s existence in any serious fashion as a pressing philosophical problem; this idea seems frivolous, a perk of social privilege.’⁹
Dr. Mills is making the point that a slave doesn’t question if other people exist, precisely because the existence (and “such-ness”) of a slave is defined by others. Where people ask, “Is anything real?” the people are likely “well off,” even though people likely ask the question out of existential desperation. What Dr. Mills made me realize was that the “such-ness” of slave life entailed philosophical ramifications on questions that plagued philosophers for centuries (do note also that if we’re not sure “other people” exist, that will shape our ethics, for the imperative to treat well people whose reality we question will be weaker). With this in mind, it becomes critical to keep philosophy as “close to such-ness as possible,” for it is in the realm of “such-ness” that “what’s true/best” has a better chance of being encountered.
Hume’s arguments may strike us as Conservative, which would put him in the camp often associated as “out of touch” with Progressive concerns, but Hume’s thinking actually serves minorities well. Living out “Dialectical Ethics,” it’s hard to imagine a bank “redlining” a local neighborhood or a black business sector being destroyed in the name of “communal renewal.” Drunk on ideas of how a city “is” versus perceiving the city’s “such-ness” from the ground level, it becomes easy for bureaucracies and leaders to start thinking what “ought” to be done in a way that hurts those most in need. The oppression and abuse minorities have suffered adds credence to the case for “Dialectical Ethics,” though if that case supported “thoughtlessness” (thus contributing to “the banality of evil”), that credence would be negated. Thus, “Dialectical Ethics” only works if it is a product of the whole “philosophical journey”; otherwise, it will fall short.
As hopefully this paper has already made clear, what horrified Hume was thinking about how people “ought” to live outside of membership in a people’s community. If the moral life was found in community — if deep participation in a “such-ness” was necessary for “ought-ness” — then ideas about the moral life outside of community were necessarily contradictory and dangerous. They lacked the emotional connectedness to “check and balance” those ideas and increase the probability that they were actually good (versus only an “idea of the good” which was ultimately “bad”). Alluding back to “Deconstructing Common Life” by O.G. Rose, the “bad philosopher” was for Hume precisely someone who had “good ideas” for how a people “ought” to live without being part of that people (making the abuse and misapplication of “absolute moral(s) (categories)” highly probable). The “bad philosopher” wanted to make the world a better place without participating in it, which meant the probability was low that the “bad philosopher” was correct about how the world could be improved. But precisely because the “bad philosopher” was indeed a philosopher, he or she likely did in fact have ideas that were internally consistent, reasonable, and even brilliant. Unfortunately, as we learn throughout the work of O.G. Rose, ideas can be internally consistent and brilliant and yet still wrong, ultimately because rationality cannot be its own grounding (meaning that “autonomous rationality” is impossible). Perhaps a set of “brilliant ideas” is in fact right and good, but the likelihood of that being the case without embeddedness in a “common life” is very low. Without emotional connection, community, habit, custom, tradition, and the like to “ground” philosophical ideas, such thinking was likely not only erroneous but “up-ending.”¹⁰
Concerned about the horrors which morals could cause, but also aware that living without morals wasn’t the answer, Hume wanted to argue that we couldn’t move from the idea of what a thing was to what a thing ought to be, but we could move from our experience of a thing to how a thing ought to be treated. Hume was trying to disconnect thinking and being to connect living and being (and thought to being only through living). Hume wanted to “unify living and being” so that we couldn’t be controlled by “smart and thoughtful” rulers who, in thinking that “thought and being were unified,” concluded that they knew how we ought to “be.” But Hume wanted to make it clear that where we weren’t embedded in and familiar with “such-ness,” we had no business thinking we knew what was going on or what “ought” to go on. The division of “is” and “ought” in ideas entailed political ramifications: if “is/ought-ness” had to be established through living, then “outside forces” that could threaten a small community or “common life” with “visions of the good” could never be justified in such actions. Automatically, “outside oughts” were illegitimate, which meant “everyday people” were safe from them. Unfortunately though, Kant may have unintentionally unraveled the protection Hume erected (even if Kant perhaps tried to correct his mistake later, as discussed in “The Two Kants” by O.G. Rose).
Hume wanted to defend and protect people; far from an immoralist, his intentions were themselves moral (which he was wise enough to realize made them dangerous, and thus their need for grounding). As a historian, he had studied extensively how kings, churches, and feudal lords forced certain agendas and visions upon everyday people in the name of some “good end.” Hume hoped to use philosophy to slow down if not end such efforts, and so he himself embodied the “good philosophy” he strongly supported. Far from an immoralist, arguably everything Hume did was for the sake of common virtue.
The dream of Kant was to unite “the rational” and “the ethical,” an effort which defined the majority of moral philosophy up until Hume. Hume realized, though, that when “the rational” and “the ethical” were united, the people who positioned themselves as “the most rational” thereby became “the most ethical” as well, granting themselves both a moral and intellectual legitimacy and authority which could be used over others. And, worse yet, it could be used morally, while simultaneously framing those who disagreed with the “rational and moral authority” as thus being “irrational” and “immoral.”¹¹ If the rational is moral, then “binding” the rational is immoral and irrational, which means that “autonomous rationality” is ethical and “ought” not be stopped. This means “autonomous rationality,” a force of destructive self-consumption, becomes a moral force for good.
Hume recognized that dividing “the rational” and “the ethical” would protect common people, and that it was necessary because ‘the imagination [could always] unite what [was] conceptually incompatible,’ which basically means that rationality can always find a way to justify its involvement and presence in anything, which then means that, regarding every possible subject, situation, etc., someone could position themselves as a “moral and rational authority” — tyranny would always be a risk.¹² Hume realized that the way to divide “the rational” and “the ethical” was by critiquing the ontological background of “is-ness” which made the unity of “rational/ethical” possible, and I think his critique was well delivered. Using my language from “The True Isn’t the Rational,” David Hume, in making the foundation of ethics “common life,” is in essence claiming we cannot derive ethics from “is-ness/rationality,” but must instead derive it from “such-ness/truth.” Ethics cannot be derived from our idea of a town (inspired by actuality) but only from a daily “lived experience” of that town (that is constantly and regularly (re)informed by actuality, not just once around the holidays). And if ethics must come from “such-ness,” then ethics must be “bound” by the very nature of what makes it possible: “such-ness” cannot be general, universal, or “wide-spread.” In my opinion, it was “general ethics” that Hume wanted to render impossible with his famous “Is/Ought Argument.”
David Hume realized it was critical to bind and contain “rationality/ethics,” because ‘ideas have consequences and outrun the management of their authors.’¹³ Plato’s ideas can be hard to recognize in the Christian Neo-Platonists, as the nuance of Derrida is hard to find in the Deconstructionists: philosophies, missions, and ideas that start out one way quickly become something else.¹⁴ And perhaps the original manifestation was wonderful, but Hume perhaps believed the only chance the original thought could stay wonderful was by keeping it “bound.” Perhaps something like the thought of Leopold Kohr applies here: Kohr warned that ‘whenever something is wrong, something is too big’; likewise, perhaps Hume is warning us that “whenever something is wrong, something is based on ‘is-ness” instead of ‘such-ness,’ ” which in essence means “whenever something is wrong, something isn’t bound.”¹⁵
The “such-ness” of Charlottesville, Virginia, is naturally limited by the limits of the town itself, but the “is-ness” of cities from which we determine how cities “ought” to function isn’t (geographically) bound at all. A “city” is a “form” that can be applied anywhere with its corresponding ethical framework, whereas the ethics of Charlottesville can only be applied to Charlottesville. This “binding” that results from “such-ness” cannot be accused of being immoral and irrational as can any “binding” of an ethic based on “is-ness,” because again the “binding of such-ness” comes “from the ground up” — it is a result of the nature of “such-ness” itself — whereas all “binding of is-ness” would have to be “from the top-down” and thus arbitrary and wrong. This suggests another reason why “is-ness” is so dangerous: “rational ethics” are unstoppable (just like philosophy, as discussed earlier in the essay).
There is no philosophical way to “bind” philosophy; there is no ethical way to “bind” ethics; there is no rational way to “bind” rationality; and (to allude to “Homo Egeo” by O.G. Rose) there is no way of the self to “bind” the self. Philosophy, rationality, ethics, the individual — all of these are incapable of keeping themselves from expanding forever. We learn from Alex Ebert that a cell which refuses to die becomes cancerous; likewise, if we are uncontained, we will be a force of destruction. Hume understood that, ‘[u]nder such conditions, the barbarism of refinement set in, and men [lost] contact with the source of moral sentiment’: the effort to combine “the rational” and “the ethical” caused a forsaking of the “such-ness” from which “sentiment” arose, and where there was no “sentiment,” there would be no ethics.¹⁶ Hume saw that ‘[t]he barbarism of refinement over liberty was in danger of destroying the actual practice of liberty’; similarly, the grounding of ethics in “is-ness” over “such-ness” was in danger of destroying the actual practice of ethics.¹⁷ ‘Caught in the grip of corrupt and alienating theories of liberty […] [Hume saw that] the people seemed bent on subverting the liberty they actually enjoyed in the name of liberty’ — so it goes with how people can subvert morality in the name of morality.¹⁸ ¹⁹
Personally, it’s hard for me not to view racism in general as a strong argument in favor of the “good philosophy” based on “such-ness” that Hume proposes, especially considering the work of Dr. Mills. Arguably, “lynch mobs” which terrorized African Americans in the States were “philosophical.” That may not seem so, but to decide “x group of people aren’t human and so can’t be murdered, only killed” is a philosophical move (possible because of the “gap” between “ethical premise” and “ethical application,” as already discussed). ‘All persons are equal, but only white males are persons’ — Dr. Mills frames the premise disturbingly but accurately (noting also how ‘[c]ognitive psychologists have documented the remarkable extent to which our perception of the world is theory-driven rather than data-driven’).²⁰ ²¹ The racist premise requires some education, for I believe the natural experience of a black man will lead us to believe that “he’s a human being with black skin.” That doesn’t mean we will necessarily act kindly toward the individual, but to decide “blacks aren’t human” requires philosophical reasoning. Perhaps I’m wrong about this, but the very possibility suggests the legitimacy of Hume’s admonishment.²²
A great concern of Dr. Mills is establishing the importance and prominence of “the body” in philosophical discourse, and I can’t help but associate “the body” with “such-ness.” ‘The body […] is what incarnates one’s differential positioning in the world,’ ‘the material standpoint of inquiry itself,’ or so Mills tells us, and I think something similar holds true with “such-ness” and that ultimately the logic Mills uses in service of “the body” can be similarly applied.²³ ²⁴ Mills writes:
‘[Traditionally,] the body has not entered philosophical discourse. In modern analytic tradition, it has been at best the ‘general’ body of the mind/body debate, the body as the thinking thing, the scientific body. But since, in our world, it is precisely the body that has been the sign of inclusion within or exclusion from the moral community […] the black body arguably deserves to become a philosophical object.’²⁵
“Deconstructing Common Life” by O.G. Rose outlines the similarities between “Critical Theory” and Hume’s “good philosophy,” and there are certainly similarities, if not also key differences.²⁶ Still, it’s hard not to think Hume would find some of the developments which Mills writes on generally positive, such as an attack by ‘feminist philosophers’ on ‘[t]he presumption that epistemology as it has traditionally been defined as a neutral and universalist theory of cognitive norms and standards.’²⁷ Hume would also approve of deconstructing the idea that considering topics like race, “lived experience,” subjectivity, and “such-ness” in philosophy somehow detracts ‘from the really serious, basic philosophical questions: the existence of the external world and of other minds, the reliability of perception, and the trustworthiness of memory.’²⁸ ²⁹ Mills hopes to ‘bring mainstream philosophy down from its otherworldly empyrean musings’ in order to confront ‘white supremacy as a political system and begin to map its contours’; likewise, Hume hopes to do the same in order to “map the contours” of “bad philosophy.”³⁰ Generally, if we took Hume and Mills seriously and founded ethics on “such-ness” versus “is-ness,” racism would be much weaker.³¹
The focus on “such-ness” for Hume is an effort to “contain” us. Avoiding philosophy entirely is a tempting avenue to reach that goal of “containment” (by making people “thoughtless”), but Hume recognized that this was a “false containment” that would make people vulnerable to manipulation. If we aren’t “contained,” we easily generate “ethics of is-ness” — the “essentialism” that worried Foucault — which easily lead to racism, “lynch mobs,” and oppression. Alright then, but how do we “contain” ourselves? That question is discussed extensively in “Homo Egeo” (and we moderns may not like the answer), but speaking generally, the only way for us to be contained is through a “real choice” and commitment to embed ourselves in a “common life” and corresponding “such-ness.”³² Otherwise, we will live ‘unrestrained by the moral criteria of an established way of life.’³³ We will be free, and that means we will be unstoppable.³⁴
Recalling my earlier technical note that, in earlier works, I used italicized “is-ness” to mean “such-ness” (forgive my inability to always maintain consistent language), the paper “Is-ness/Meaning” by O.G. Rose is relevant here. That paper opens with the following:
What a thing “is” cannot be separated from what a thing “means,” as two sides of a coin are inseparable and yet distinct. A given cup is ultimately a collection of “atomic facts.” Therefore, a cup isn’t a “cup”: what a cup is isn’t what it “is” (to us). To humans, the is-ness of a cup cannot ultimately be understood (only approximated); therefore, when humans speak of is-ness, they speak of what a thing “is” (to them). In other words, what a thing “is” is what a thing “means.”
To use language from “On Thinking and Perceiving” by O.G. Rose, is-ness is perceived while “is-ness”/meaning is thought (generally, for this work, it’s good enough to know that perception is our experience or “taking in” of the world around us, independent of thought). Yes, “is-ness” can perhaps be “like” is-ness (an “ontological gap” is perhaps at least somewhat escapable, as outlined in many of the papers by O.G. Rose), but it would be erroneous to treat the two as necessarily identical. A dialectic is needed, but that has been discussed elsewhere with the topic of “meaningful memories” and “pure experiences.”
If we are discussing what a thing “is,” we are discussing what it “means,” and that means all identification exists in a mental and abstract realm. That doesn’t mean my identification is necessarily wrong or even bad, but it does mean that it is extracted out of the world, the realm where “binding” and “containing” are possible. Where “such-ness” (or is-ness) is honored, no extraction occurs, and, considering this, we could say that Hume wants “unextracted ethics” versus “extracted ethics.”
To help highlight differences between “is-ness” and “such-ness” while sticking to the issue of race, note that the “such-ness” of “black skin” is just that: “black skin.” Nothing more and nothing less can really be said: it’s just skin; I can say, “That person has black skin.” But when I move into “is-ness,” I can suddenly say, “That person is black.” This may not seem like a major difference, and certainly I can use both phrases to mean the same thing, but there is indeed a slightly different orientation to the individual that can entail major consequences. I have now transformed a characteristic (or Aristotelian “accident”) of “black skin” into a character (or “essence”) of “black person,” and once this happens, it’s easy to start deciding what blacks “ought to do.” But what must “black skin” (in “such-ness”) do? Well, protect the organs of the possessor of the skin — not much more can be said. And note that what “black skin ought to do” is very tied to what “it is already doing” — skin is “already” protecting the organs, etc. — I cannot readily change the behavior of the skin or demand something new of it based on a new or recently discovered “ought.” But the moment we say, “You are black,” we can begin speaking about an active person who may or may not already be doing what we think they “ought” to be doing. Things can be changed, and we can change them.
All this suggests a significant advantage of “such-ness” over “is-ness”: it radically limits the amount of ethics which can be created, not only by avoiding the essentializing language of “is-ness,” but also by viewing many things as “already doing” what they “ought” to be doing, which means a “totalitarian” can’t readily sweep in and “change” the behavior of a people based on some new “moral vision.” Yes, we need ethics, but often less is more when it comes to finding the right balance between “freedom” and “direction.”
“Self-Delusion, the Toward-ness of Evidence, and the Paradox of Judgment” by O.G. Rose makes a distinction between “judgment” and “assessment,” warning that it’s alright to assess, “A person lied,” but dangerous to claim, “That person is a liar.” Yes, again, the language could be used interchangeably — mere word choice alone will not make it clear if a person is judging or assessing — but with the distinction in mind, we could associate “is-ness” with “judging” and “such-ness” with “assessing.” A person living by “such-ness” will say, “That person has black skin,” while a person living by “is-ness” will say, “That person is black.” And perhaps nothing bad comes from the second claim, and certainly there’s room to define “blackness” in order to establish a cultural identity and in order to acknowledge a history of oppression. But there is also a risk to saying, “That person is black,” for it opens the door to the possibility of then suggesting “oughts” based on that identification. Not necessarily, but where there is “is-ness,” it’s a small step to move from saying, “That person is black,” to saying, “That person should be a slave.” Hume understood this risk and, to stop it, tried to deconstruct the connection between “is” and “ought” in favor of “such-ness/ought.”
Can’t the same move be made from “That person has black skin” to “That person should be a slave?” Not if the person means the phrase in the realm of “such-ness,” because there is no logic in nature that follows from the biological fact of black skin to the social position of slave. This point suggests the value of Hume critiquing Natural Laws: as we cannot find a “Law of Causality” in nature, for example, so we cannot find a “Law of Black Skin Entails Slavery.” It’s absurd: if we decide black people should be slaves, the idea is ours, a property perhaps of our custom and habit. It is of our own making and doing: we are responsible for it and cannot outsource the responsibility to nature so that we can feel better about ourselves. We are the monsters, not nature.
As this example makes clear, custom and habit can indeed fall into “is-ness” versus “such-ness,” and that’s as problematic as alien forces using “is-ness” against a “common life.” Wherever there is “is-ness,” there is trouble, but just because “common life” can fall into “is-ness” doesn’t mean that therefore outside sources are better authorities (a mistake often made). The answer to our need for ethical foundations is in such-ness, and that can only be found in “common life,” despite the fact that “common life” can be corrupted too. This again brings to focus the critical role of the whole “philosophical journey” — staying in a “common life” is not the answer any more than is staying in a detached “ivory tower” somewhere. If there are no philosophers present in a “common life,” when some vision of “is-ness” begins to creep in and define “black skin color” as designating slavery, then there won’t be people present with the skill to stop the “is-ness” from taking up residence. Once “is-ness” sneaks in, “oughts” according to its own vision and image will follow, and Arendt’s “banality of evil” will easily appear.
“Laws of social status due to skin color” cannot be found in nature, which is to say they cannot be found in “such-ness”; instead, they have to be based on an interpretation of nature or “meaning,” which alludes back to the paper title, “Is-ness/Meaning.” What we find in “such-ness” is skin already doing its function of protecting the organs: skin doesn’t have to be coerced or managed by a mob or bureaucracy into doing “what it ought to do,” for skin in being skin is already doing what it should be doing according to its nature. This point cannot be overemphasized: “such-ness” ought to be left “as such.” Yes, skin can be damaged, and a doctor needed to fix it, but note the “standard” according to which the doctor “fixes” the skin should be “such-ness” versus “is-ness” (there is a radical difference between “fixing skin back” into what it was doing and “fixing skin into” what “we think” skin should be doing). Basically, there’s nothing to be done for a people trying to honor “such-ness” other than to harmonize themselves to it — the people need to change little. If anything, the people may need to change themselves, and a dictator who only has the authority to change himself will hardly be a terror, but that is where “such-ness” leaves the dictator (self-contained).
There is no logic that necessarily follows from saying, “That person is black,” for “blackness” (as distinct from “black skin color”) doesn’t “do anything” on its own. It’s a social category, which means we do something with it of our own choosing (relative to what we think “blackness” “means”). Hume made a similar point when he noted that “The Law of Causality” isn’t why two billiard balls move when they make contact: the reason for the movement is the movement of the billiard balls. This might seem like a strange point, but it’s critical, for it means there are no “laws” working on billiard balls: they just do what they do. Similarly, there are no “laws” working on whites to make them mistreat blacks: if whites claim that there are such “laws,” that whites are just responding to “how it is,” they are wrong. The world is not controlled by causality even if it can be described in terms of causality, as whites are not controlled by their understanding of “how it is” even though their understanding of the world may describe how they act in it. Similarly, if a ruler or tyrant invades a “common life” and claims he is forced to by “the moral law,” the ruler is wrong. “Moral Laws” don’t force rulers to invade towns and make them “do what’s right” (according to the ruler), though rulers have incentive to claim such so that their actions are viewed as justified and “in the name of something higher.” Hume wants to stop this move from being possible and rationalizable.
Though nothing follows from saying, “That person is black,” there very much is a logic that follows from saying, “That is black skin,” for skin does something: in “skin-ness,” per se, there is embedded a function that is already being done. No new function can be prescribed and announced by some “philosopher of skin,” and certainly not a function that can be justified and “grounded” in what the skin is and doing: for new “oughts” to be claimed about “black skin,” black skin must be “extracted” up into a realm of abstraction where it is “unbound” by reality. It is only in a purely abstract realm that I can start creating new logics that “follow” from the presence of black skin, for no such logics will appear in nature or “in” skin itself. For tyranny to take hold, “such-ness” must be projected “upward” into “is-ness,” but if that projection can be stopped, so too can tyranny be thwarted.
Now, it should be clarified that since I can never access “things in themselves” with certainty, it is not the case that “totally unextracted ethics” are possible, because some level of interpretation is inevitable. In other words, I can never encounter “such-ness” across Kant’s noumenon, and that means “Pure Such-ness” is impossible: there will always be some degree of “is-ness” involved. But again, this is where “Dialectical Living” is so crucial (as well as the phenomenology discussed extensively in “The Philosophy of Glimpses”): because though “is-ness” is unavoidable to some degree — I must always be “fallen” and a “sinner,” per se — if I at least know that “is-ness” is problematic (and always work to have my “is-ness” informed, refined, and corrected by (imperfect) “such-ness”), then I can get better with time on a personal level, and on a social and political level I can always know that all “outside” ruling powers which legitimize themselves without a dialectic (in terms of pure “is-ness/rationality”) are illegitimate outright. Additionally, I can know that if I start deciding what a thing “is” (an act of judgment) versus what things do (an act of assessment) that I have stepped more away from “such-ness” toward “is-ness,” which primes me to use “oughts” illegitimately and dangerously.
There is a need for me to constantly keep myself “toward” “such-ness” away from “is-ness,” which I can never do perfectly or “once and for all,” thus the need for daily practice and vigilance (I must always be dialectical or I will die like a shark which stops swimming). This is a tall order, and the difficulty of it suggests again the need for a “real choice” where I choose a way of life that forces me to always be “dialectical,” for otherwise it’s just probable that I’ll eventually forgo the effort. Though it’s elaborated on in Homo Egeo by O.G. Rose, a “real choice” is generally a decision which cannot be taken back (a deep commitment to a common life after Hume’s “philosophical journey”). If we are to bind ourselves and rationality, as so necessary, sacrifice will be required.
Distinct from “such-ness,” is “is-ness” ever good, or should we always and unconditionally avoid it? Well, “is-ness” is unavoidable, for we cannot access “Pure Such-ness” (as Kant teaches); also, “is-ness” makes thinking possible, and we need a dialectic between thinking and perceiving, not just one or the other (“is-ness” is no more inherently bad than “thinking”: it’s just that we seem to naturally favor “thinking” and “is-ness” over “perceiving,” at least in the West). So indeed, “is-ness” plays a necessary role: the mistake is making “is-ness” foundational, a ground for establishing an “ought,” and/or blurring it with “such-ness” as if the two are identical (similar to the mistake of blurring “thinking” and “perceiving”). It is not necessarily a mistake to consider and ponder an “is-ness” (it depends), but it is a problem when “is-ness” is used to establish “oughts.” It is that particular use of “is-ness” that this work has mainly sought to critique and deconstruct.
To use language scattered across the works of O.G. Rose, Hume seems to argue that “truth organizes values” and that truth is determined in the realm of “such-ness,” whereas Kant practically argues that rationality organizes values, and rationality is determined by “is-ness” (which rationality also determines). At least in the Kant of the First Critique, rationality both organizes values and determines the truth relative to which the values are organized — rationality is “autonomous,” the exclusive intellectual act needed for “ethical living” “all the way down.” “Such-ness” and “is-ness” are so incredibly similar that we can sympathize with Kant’s mistake, especially since we can never totally escape “is-ness” into pure “such-ness.” But Kant’s mistake has still proven costly (even if he later tries to correct it, as discussed in “The Two Kants” by O.G. Rose), for ‘it is not philosophical reflection that is the source of civilization [but] civilization that is the source of philosophical reflection’: by mixing this order up, Kant tries to make a building out of a foundation and a foundation out of a building, causing philosophical violence and collapse.³⁵ If we are to avoid Kant’s error, “Dialectical Living” will be required of us, and that requires sacrifice, a “real choice,” and choices are hard for us moderns: we always feel like we are “missing out,” and so miss out on “containment” and reality.³⁶
For Hume, the greatest problem of philosophy was philosophy itself, and if philosophy was “bad philosophy,” the consequences would be dire. What is “bad philosophy?” Again, it’s philosophy that attempts to be its own grounding and rational “all the way down,” when ultimately rationality requires “nonrational premises” to even be itself. If we never have experiences, for example, we can’t know what the world is really like, and so we can’t be rational about the world (only about our “ideas” of the world, to which we’ll likely try to conform the actual world). This argument is expanded on throughout The True Isn’t the Rational by O.G. Rose, but the reason “autonomous rationality” is so dangerous can be traced out with four premises (as also discussed in Belonging Again by O.G. Rose):
1. Philosophy can be about anything.
2. Philosophy never ends.
3. Philosophy can never provide its own grounding.
4. Philosophy that tries to provide its own grounding consumes itself and takes everything it has absorbed into itself down with it.
Taken together, this means that philosophy is unstoppable and all-consuming (in every sense of the word). With Kurt Gödel in mind, philosophy cannot be “complete” in and of itself, but if philosophy’s standard for considering something “justified” is that it is “complete,” then philosophy must deconstruct itself — and that’s exactly what eventually happens, even though philosophy might be blind to the self-consumption it begins to engage in (since it occurs gradually through time, like boiling a frog). Hume understood how philosophy could consume itself and everything with it, and he sought to “bind” philosophy with “common life” so that it would not be unleashed as such. Like philosophy, fire is necessary for survival, but “unleashed” it can burn everything to the ground. It could be said that Hume wanted a “bound philosophy” versus an “unbound philosophy,” understanding that “no philosophy” wasn’t an answer either. Why? Because then Aunt Martha couldn’t be protected.³⁷
The differences between “good philosophy” and “bad philosophy” are expanded further in “Deconstructing Common Life,” but here a review with “Aunt Martha” might prove useful. This was brought up in a discussion with Daniel Zaruba, Thomas O’Halloran, and Seth Horras, and I thought I would include the reflection here to help trace out further differences between “the good philosopher” and “the bad philosopher.”
Imagine that we live with our great Aunt Martha. She doesn’t read much, but she’s a beloved member of the community who, though retired from being a seamstress, will still make wedding dresses for the young girls. She attends church every Sunday, and she cooks our meals. We tell her she doesn’t have to, but she insists. We venture outside and think about how big the world is, but that’s about it. We go back inside.
One day, there is a knock on the door. We open it with Aunt Martha to find a salesperson. He’s very kind and dressed in a nice suit. He offers us some juice. It’s on sale, and that’s all we need to know. We purchase the juice, wave to the salesperson as he leaves, and then sit down to have a drink with Aunt Martha. Our bodies are found a week later. The mice have been nibbling.
This is the risk of an “unphilosophical life.”³⁸
Revisit the scene: we live with Aunt Martha and venture outside where we think how big the world is and how vast the sky. We realize we need to learn about existence. We borrow some books from the library and read about different kinds of people. The world is full of heroes, sages, and villains. We study economics, sociology, and philosophy. And while sitting in the library, we begin to think about how little average people know. We think about all the work we’re doing to understand the world and how little everyone else tries to improve themselves. We think about all the uninformed positions people hold. We think about common beliefs. We think about all the problems caused in the world by thoughtlessness. We see some local farmgirl wander into the library to sell apples to the librarian, and we feel disgust. The locals don’t even try to read Spinoza. They play.
When the salesperson knocks on the door of our home, Aunt Martha answers it, and we are nowhere to be found. Later, we step into the kitchen and find Aunt Martha lying on the floor.
This is the risk of “bad philosophy.”
Revisit the scenes: we live with Aunt Martha and begin wondering about the world. We study. We learn about the kinds of people who live in the world. We learn about heroes, but we also learn about deceivers. We learn how they trick people. We learn how they come off as kind and moral. We learn how to make them leave our homes without resorting to violence. We learn how not to be tricked by their temptations. And we also learn how hard life can be. We learn that we wouldn’t have food to eat if not for “simpleton” farmers. We learn everything that goes into making the supply chain possible. We learn how hard it is to make a tractor work, something the elders in the town can do with ease, even though they’ve never read Plato. We think about all the chores Aunt Martha does so that we have more time to study.
When the salesperson knocks on the door, we answer it. We tell Aunt Martha to keep busying herself with sewing dresses and ask the salesperson how we can help him. He tells us about his juice. We ask him where it came from. He tells us that it came from a nearby town, but we are familiar with the town from our studies and mentally note that no juice is produced from that region. We ask the salesperson if he’s certain, and he claims he picked it up from the factory that morning, which we know cannot be true, because there are no juice factories in that region. We tell the salesperson to have a nice day and close the door. He knocks again, but we kindly tell him that we will call the police if he does not leave. The knocking stops as we sit down to enjoy lunch with Aunt Martha. And we never know it, but we just saved Aunt Martha’s life. And for our entire life, no one in the town ever sees what use comes from reading so many books. They let us know that they think we are wasting our time, but this is a cross we bear, and it is a very heavy cross because it is not visible. But we trust that we are making a difference in the end. We trust that even though there is no end to the reading of books, the reading of books matters.
This is the life of “the good philosopher,” who embarks from home on the “philosophical journey” in order to keep it. “The good philosopher” protects Aunt Martha. We value her. We protect Aunt Martha not only from tyrants but also from boredom, meaninglessness, lostness, and a life without wonder. We help Aunt Martha see wonder in the act of cleaning dishes and tending a garden; we help Aunt Martha live unafraid of death. We help Aunt Martha live, and indeed, an Aunt Martha lives with each of us.
We have read David Hume far too long through Kant and Derrida, and as a result missed thinking that could help us find “a middle road” between “moral absolutism” and “moral relativism.” In Hume is a “Dialectical Ethics,” a system of “Absolute Moral Conditionality,” which is “fitting” for our “A/B Ontology.” And we require some ethical system, for we must live according to some “reason.” We cannot from “is” arrive at an “ought,” and yet we still must take guesses at “what’s good” because we must live. But Hume only seems to leave us in a paradoxical mess: instead, he actually saves us from ourselves. We have throughout history often caused ourselves great trouble and harm because of our ideas of “is/ought-ness,” and in deconstructing these ideas to help us, Hume doesn’t leave us in a void, but rather pulls us out of the void and places us safely in the realm of “such-ness.” Hume does this while simultaneously defending us from “outside forces” that might try to control us through moralization by delegitimizing their efforts outright. Then, in the realm of such-ness, of “common life,” informed by sentiment, habit, and experience, we can live however we “ought” to live, freely.³⁹ Better yet, if ethics is bound to the realm of “such-ness,” it is bound to the realm where Hume believed sentiment and empathy naturally helped correct our worst impulses, greatly increasing the likelihood that ethics would actually be in service of the ethical.
Where there is perception of such-ness, there can be “oughts,” but where there is only thinking of is-ness, “oughts” must be questioned. Perhaps we could say that Hume is a skeptic of “new” and “outside” thinking in favor of “tested” and “internal” thinking, that Hume highly doubts that ideas which haven’t suffered “the crucible of communal skepticism” — as those found in habits, traditions, and customs have — are indeed best. Hume in this way is a traditionalist, though his views on religion (as they are understood) often place him in more progressive camps (which is also strange considering his views on commerce and markets as sources for growing positive “sentiment” while providing important “market tests” — and do note the arguments made here against “is-ness” in favor of “such-ness” indeed entail economic consequences, as Hume and his friend Smith realized).
Does Hume’s thinking leave us in a place where we can’t establish any general human rights or laws? A fair question, but I don’t think so, because we indeed can establish “Absolute Moral Categories”; however, following Hume, we must be dialectical in the application of those categories, which suggests we should only do so from “the ground level” (admittedly, we probably should have fewer of these categories rather than more: the more “space” for “common life,” the better). Hume would note that there is a danger in “human rights” being used by large and global governments to control people “as the rulers see best,” and it’s not hard to think of examples where “humanitarian efforts” have been used to justify invasions and government interventions that ultimately turned problematic. Would it be better if we lived in a world without “global human rights?” Not at all, but we shouldn’t be quick to assume that all efforts in the name of “human rights” ultimately prove best, especially on a large scale where the likelihood of successful application is lower due to all the complexity (a Hayekian point).
One of my favorite quotes is a line from Judge Learned Hand (which I learned thanks to Jonathan Rauch), and if there’s any quote that captures the spirit of Hume’s project, I think it is this: ‘[t]he spirit of liberty is the spirit which is not too sure that it is right.’ It is for the sake of making us less sure of ourselves that Hume is a skeptic, for Hume believes an “uncertain totalitarian force” doesn’t exist. If kings, rulers, lords, governments, and the like took Hume’s arguments seriously, they would find themselves unable to establish the absolutism needed to feel moral in controlling the lives of others. Totalitarianism that was forced to have its legitimacy challenged by “Dialectical Ethics” would struggle to last: the people would be free.
Hume will not allow us to moralize what we don’t live, and totalitarianism, in often being alien, is thereby disqualified. But what does a life of “Dialectical Ethics” mean for us? What must we do to live according to such ethics? Well, as will be discussed in “Homo Egeo” by O.G. Rose, we must do something that is incredibly difficult for moderns like us obsessed with possibilities and “keeping options open” — we must make a “real choice.” We must choose a life and embed ourselves in it. We must lock all the exits. We must commit to Aunt Martha. This will not be easy, and it will require great sacrifice, for to make a “real choice” is to give up all other possibilities. But if “Dialectical Ethics” are indeed the only ethics, there is no other way to live an ethical life, and, by extension, there is no other way to keep reason from devolving into logic. If we want a reason to live, we must choose a life and live it for good.
¹I loved the suggestion in the talk that the loss of “deep listening to music” has corresponded with a rise in “instrumental rationality” — that’s fascinating.
²A reason it’s “unnatural” is that it’s also “unnatural” for us to make “a real choice” (we naturally want to “escape from freedom” when it most counts), but what is meant by this is expanded on in “Homo Egeo” by O.G. Rose. Also, to allude more to that paper, we problematically have sought a “good” (a “without B”) that we’ve ultimately believed could fill our “lack” (we’ve sought an A/A that could make us A/A instead of A/B). No “good” can do that: “goodness” is found in disunity, not unity, for unity is death (as we learn from Dr. Last), because we are not unified. But in response to this realization, we have rarely moved to integrate our “lack” into ourselves (and become Homo Egeo), but instead fallen into homelessness and Homo Nihil. This is when reason is divided from morality and becomes mere logic. After this, the world is consumed by boredom because it is consumed by mere logic.
³Please note that the mind is basically able to rationalize any premise, so if it wants to believe in “autonomous rationality,” for example — if it just doesn’t know it shouldn’t — the mind will be able to convince itself that it can and is succeeding. We learn from Freud that where there is rationality, there is likely just rationalization, that when we think we’re uncovering truth we’re likely just creating confirmation for our opinions in a way that makes it feel like “discovering.” The brain and thinking are incredibly flexible, and ultimately “the map is indestructible” (as discussed in “The True Isn’t the Rational” by O.G. Rose). Ultimately, this means we probably simply have to believe “autonomous rationality” is impossible: if we search within “autonomous rationality” for evidence that it cannot sustain itself, we’ll never find reason to cease believing in rationality exclusively. However, note that this “belief” against “autonomous rationality” isn’t blind or thoughtless, for it is based on experiences of how “autonomous rationality” falls short and contributes to totalitarianism.
⁴Please note that we cannot assume that “how a tree is best treated” is naturally occurring: it could be the case that things in nature, relative to the total sum of all possible ways they could be treated, cannot realize their best states and uses without human beings, because perhaps those “best of all possible approaches” can only be found in the realm of ideas, a realm to which only humans have access (as far as we know). If God Exists, perhaps God made humans A/B precisely so that they could best “tend gardens,” per se: if we were not A/B, it perhaps would not even be possible for us to be ethical. If there were no ideas which could be brought into nature (through A/B), then what happened to trees and the world would just be “whatever happened to happen.” It wouldn’t necessarily be best; in fact, “what’s best” might not even be in the cards of “naturally occurring possibilities.”
Perhaps God, in having to make “the best of all possible worlds,” had to make a world in which ideas could be actualized, because “the best of all possible worlds” wasn’t possible without ideas. To introduce to creation “the best way a tree could be treated,” humans may have had to come into existence; otherwise, the very possibility of “a tree being its best self” might not have even been possible, a logic which could be applied to all of creation as a whole.
“What’s best” might only be creatable, not causable, and perhaps that applies to all of ethics: ethical being is only creatable, never causable, and if there were no “A/B Entities,” there would only be causation, never creation. Yes, this line of thinking risks making humans feel justified in “subduing creation,” but note that if we must be “dialectical” to be ethical, we must always be “thinking” and “perceiving” creation: we must be listening to it just as much as we speak to it. We can’t simply control creation; we must let it teach us how we can harmonize.
⁵Do note, I think Kant may have actually tried to correct some of these areas, even though he’s not always read that way, but that argument must be found in “The Two Kants” by O.G. Rose.
⁶Smith, Adam. The Theory of Moral Sentiments. Indianapolis: Liberty Classics, 1976: 233–234.
⁷Note also that even if our ideas are wrong, if our system is contained to our “common life,” the mistake will likely not rise to large systemic consequences (as mistakes can in a multinational global order, for example, unless perhaps deterrence keeps working, but it strikes me as tragic that the world must be held together by “mutually assured destruction”).
⁸To use language from The Philosophy of Glimpses on the work of Aristotle: Hume wants us to contain our thinking and ethics to realms where we can “apprehend” what’s happening. He wants us to focus on where we can apprehend “That’s a cat” versus ponder “What are cats?” from a place where such “reading” is not possible. To put it another way, if phenomenology isn’t possible, neither should be philosophy or ethics (which entails useful “epistemic humility,” mind you).
⁹Mills, Charles W. Blackness Visible. Cornell University Press, 1998: 8.
¹⁰“Up-ending” is a term that suggests “uprooted” and “pulled up and killed like a flower,” but it also suggests being “ended in the clouds.”
¹¹I think this move is extremely common in modern politics.
¹²Livingston, Donald. Philosophical Melancholy and Delirium. Chicago, IL: The University of Chicago Press, 1998: 370.
¹³Livingston, Donald. Philosophical Melancholy and Delirium. Chicago, IL: The University of Chicago Press, 1998: 368.
¹⁴As my friend Vincent once said, it’s okay to be Aristotle, but it’s not okay to be an Aristotelian; it’s okay to be Heidegger, but it’s not okay to be a Heideggerian; and so on.
¹⁵Kohr, Leopold. The Breakdown of Nations. Green Books Ltd, in association with New European Publications, 2001: 21.
¹⁶Livingston, Donald. Philosophical Melancholy and Delirium. Chicago, IL: The University of Chicago Press, 1998: 368.
¹⁷Livingston, Donald. Philosophical Melancholy and Delirium. Chicago, IL: The University of Chicago Press, 1998: 306.
¹⁸Livingston, Donald. Philosophical Melancholy and Delirium. Chicago, IL: The University of Chicago Press, 1998: 281.
¹⁹Likewise, as discussed in “Homo Egeo” by O.G. Rose, people can subvert selflessness, humility, and “self-forgetfulness” in the name of selflessness, humility, and “self-forgetfulness,” a reality which suggests the ultimate necessity of a “real choice” which forever “contains” us.
²⁰Mills, Charles W. Blackness Visible. Cornell University Press, 1998: 70.
²¹Mills, Charles W. Blackness Visible. Cornell University Press, 1998: 75.
²²On the other hand, perhaps lynch mobs are just expressions of a “banality of evil” (a “thoughtlessness” that kills minorities because neighbors do it)? In this case, wouldn’t the mob be “unphilosophical?” It’s a very good question, and (to offer a case study) I think it brings to the forefront the complex relationship between the “German Nihilism” recognized by Leo Strauss and the work of Hannah Arendt (as discussed here).
What is today a “given” yesterday was easily an idea that was yet to be embedded and enacted on: “givens” don’t exist from the start of history, but are created along the way in such a manner that convinces us they are more so “realized” (and thus always existing, like an uncovered fossil versus a constructed skyscraper, and indeed, perhaps the “given” of Christianity does reflect some Eternal reality, hence making the “given” created and yet still “realized” — and this suggests why establishing religious “givens” might be easier than non-religious “givens”).
The idea that “only whites are human” may seem to have existed since the dawn of man, but I think there is good reason to believe the premise was introduced at a moment in history and then treated like gospel. Hence, the roots of the racist “banality of evil” are philosophical and based on a concept of “is-ness” that itself cannot be located among “such-ness.” Foucault warned about the dangers of “essentialism,” of defining a certain concept of masculinity, normality, etc. as “essential” to what it means to be such, and I think Hume would share Foucault’s concern. We cannot “observe in nature” the “value” that “whites are superior to blacks”: for a culture to orbit around that premise, the premise must have been originated along with a corresponding view of “is-ness” to justify it. After this “philosophical premise” is established, daily life is organized around it, people forget the premise was created, and then it becomes “given.” Once this occurs, “the banality of evil” can set in, but the roots are still philosophical.
²³Mills, Charles W. Blackness Visible. Cornell University Press, 1998: 16.
²⁴Allusion to Lewis Gordon, as found in Blackness Visible by Charles W. Mills. Cornell University Press, 1998: 16.
²⁵Mills, Charles W. Blackness Visible. Cornell University Press, 1998: 16.
²⁶To speak generally, though an improvement, “Critical Theory” might still be too close to “is-ness” versus “such-ness.”
²⁷Mills, Charles W. Blackness Visible. Cornell University Press, 1998: 21.
²⁸Mills, Charles W. Blackness Visible. Cornell University Press, 1998: 22.
²⁹The idea that novelists can’t do philosophy and vice-versa is also part of the problem.
³⁰Mills, Charles W. Blackness Visible. Cornell University Press, 1998: 118.
³¹Additionally, perhaps the existence of racism, and the idea that we don’t avoid “philosophical sins” if we don’t “constrain” ourselves with a “real choice,” suggests legitimacy to the work of René Girard. We naturally seem to want to scapegoat someone, which itself seems like a “philosophical sin,” seeing as the foundations for justifying “scapegoating” can’t be found in “such-ness,” only “is-ness.” If scapegoating is as prevalent as Girard claims, then “philosophical sin” is also prevalent.
³²For those who study literature, these kinds of “ironies” can function as evidence in favor of Hume’s case: literature frankly shows us that there is something true about irony. Where there is irony, there tends to be truth, as where there is a ray of light, there tends to be a sun. Not always (the light could be from a lightbulb), but often, and we can usually tell the difference by examining the light closely (I submit to you that sunlight “looks different” from electric light).
³³Livingston, Donald. Philosophical Melancholy and Delirium. Chicago, IL: The University of Chicago Press, 1998: 253.
³⁴If we must “contain” ourselves to avoid becoming “cancerous,” must the modern state do the same? Livingston thought so: focusing on the French Revolution as a case study, he wrote the revolution ‘introduced the first philosophical nationalism’ and proceeded to argue that modern totalitarianism was practically impossible without “bad philosophy.” (A) Livingston elaborated:
‘The modern state that emerges after the French Revolution is not just any kind of state; it is, among other things, a philosophically self-conscious state intent on legitimating itself in the world through the philosophical act. And whenever the philosophical act appears, a Humean critique is in order.’ (B)
Livingston added that ‘[a] mob that could read was a philosophical mob; something quite new and, if informed by a corrupt philosophical consciousness, something quite dangerous.’ (C) This brings us back to the work of Charles W. Mills, for “lynch mobs” which terrorized African Americans in the States were “philosophical,” as discussed in the paper.
(A) Livingston, Donald. Philosophical Melancholy and Delirium. Chicago, IL: The University of Chicago Press, 1998: 369.
(B) Livingston, Donald. Philosophical Melancholy and Delirium. Chicago, IL: The University of Chicago Press, 1998: 334.
(C) Livingston, Donald. Philosophical Melancholy and Delirium. Chicago, IL: The University of Chicago Press, 1998: 280.
³⁵Livingston, Donald. Philosophical Melancholy and Delirium. Chicago, IL: The University of Chicago Press, 1998: 196.
³⁶Dr. Cadell Last brought to my attention how Žižek in Less Than Nothing introduces a fascinating tool of “retroactive philosophical positing.” Dr. Last writes:
‘[W]e normally think along the lines of a linear teleology of philosophical thinkers: Kant “correcting” Hume; Marx “correcting” Hegel; Aristotle “correcting” Plato; etc. but what this obfuscates is the possibility that Hume would have something interesting to say to Kant; Hegel would have something interesting to say to Marx; Plato would have something interesting to say to Aristotle, and so on. In short: what would Hume say to Kant?’
I found this incredibly provocative, and I cannot help but imagine Hume asking Kant, “Why did you tear down the wall I built to defend and protect everyday people?” In response, perhaps Kant would answer, “To save the religion of everyday people,” to which Hume may say, “I only critiqued philosophical religion.”
³⁷The critical role of sentiment in moral life can perhaps be recognized by the fact that, in many movies and works of entertainment, when a protagonist who wants revenge on the bad guy finally meets him face-to-face, the protagonist suddenly can’t go through with the killing. It would be hard to count the number of times people decide to be merciful when they encounter “the face” of the target (which hints at why Levinas may have shared some of Hume’s views). More commonly, the fact that we all “naturally” change our tone, conversation topics, behavior, and the like when we encounter people also suggests the role of “sentiment.”
³⁸Alternatively, imagine that the salesperson convinces us to join him and help him sell his “juice” to people across the countryside — this is “the banality of evil.”
³⁹Perhaps Hume is better than Kant at getting average people to be ethical, but if people decide they “won’t go outside anymore,” that they won’t encounter “neighbors” to stimulate the feeling of empathy, is a divine Jesus who demands love the only answer?