An Index of Works by O.G. Rose (First to Newest)
Karl Marx successfully identified the bourgeoisie, the proletariat, and “the material dialectic,” but despite his emphasis on creativity, he failed to identify the artifex, meaning “creator class,” which comprises entrepreneurs, inventors, and artists. An artifexian, a term first introduced in this paper, is anyone who creates or recreates a means of production and/or a thing to be produced. Marx, it seems, conflated creators with the general proletariat, and consequently his material dialectic only halfway addresses the nature of socioeconomic change. The full dialectic by which society “marches” through history can be expressed as follows…
To allude to Nolan’s masterpiece, all conversation is ‘inception’. It is because I brought up Inception that you are now thinking about the movie: I planted the idea in your mind. If you claim you are not being incepted, you are only saying that because I claimed that you were: you are making that claim because I put the idea of ‘being incepted’ into your mind. Now that I’ve spoken about inception, there’s ‘no exit’ from it — ‘l’enfer, c’est les bouches’.
You are not free to choose not to be incepted.
Your liberty ends along the borders of my words.
In teaching theory, we risk rendering theory meaningless. Economists, political theorists, literary critics, sociologists — all are in the same boat. Like scientists and psychologists, sociologists must take into account the Hawthorne Effect, which, generally speaking, is a theory concerning how participants alter their activities once they are aware that they are in an experiment.¹ The very presence of countless sociological and educational articles, lectures, and books online — which can spread quickly like memes — may change the very way societies act. This may render what the data claims about societies wrong, making it seem as if the data was always wrong; on the other hand, the data may make societies suddenly act in the way the data suggests, making it seem as if the data was always right…
Descartes famously said ‘I think; therefore, I am’; today, it would be more appropriate to say ‘I think you think I think you think; therefore, I am’. Today, our minds are not simply centers through which we consider ourselves and our world, but rather our minds are places where we also wrestle with what other people are thinking, what other people think we are thinking, and what other people think we are thinking that they are thinking. Humans have always wrestled with these ‘realms’ to some degree, but in our increasingly disembodied, social media age — an age in which we live on screens, perpetually interacting with countless people ‘out of body’ — this way of thinking has greatly intensified. Our ‘self’ is now a network, and to contemplate by ourselves is to contemplate in a group. There is no longer just one voice inside our heads: our heads have become communities.
Worry hides itself. The person who worries doesn’t experience worry as “worry” — the person experiences it as “care,” “concern,” “realistic,” “love” — in a sense, nobody worries. If when we worried we experienced it “as worry,” we’d probably stop, for we’d easily recognize that it was “all in our head” and a matter of ungrounded fear. Rather, when we really worry, we experience it “as real,” not as something that cannot happen, but something that probably will happen. It strikes us as undeniable, as concrete, as that to which we must respond. In fact, what we worry about is precisely that which we think ignoring is foolish: it strikes us as something we should pay attention to, above all else.
If I thought you were crazy, how would you prove your sanity? Would you show me your college degree? Lots of crazy people are rather intelligent. Would you take me to lunch and ask about my family? Clearly you would only be doing that to trick me into thinking you were normal (proving that you’re not only insane, but also deceptive). Would you try to prove me wrong by claiming you were trustworthy? But everything you say is a lie, and since you won’t admit your shortcomings, it’s apparent that you’re also arrogant. How would you prove you weren’t prideful? By working as a janitor for a year? But you’d only be doing that to prove how selfless you were, taking pride in your humility. You’d be faking humility, as does any arrogant crazy person who’s unwilling to admit their insanity…
The internet, coupled with a lack of discernment, character, and craft, has exacerbated our self-imposed dehumanization. People are creative, and people will use the internet to express their creativity. Since the internet isn’t going away, the question is whether humans will use it constructively. It’s easy and funny to use it for deconstruction, so the challenge to incubate self-motivated individuals to use the internet for the development and expression of excellent craft is great. People will generate culture even if they aren’t capable, and if society doesn’t equip its citizens properly, the generated culture will be one that dehumanizes and destroys.
Bernard Hankins is a gifted speaker and teacher, and his TED Talk “Integrating People of Color” announces a vital link between the loss of creativity and the loss of diversity. He argues that the loss of ‘spectrum thinking’ (ST) leads to an increase in ‘binary thinking’ (BT), which contributes to segregation in concordance not only with how people have been taught to think but also with what they have been taught to believe is right. Consequently, education that stresses diversity but doesn’t incubate creativity is like stressing flight without providing wings: it fates students and society to failure and frustration. And when that failure occurs, lacking spectrum thinking, we may very well lack the capacity to recognize what caused the failure — in fact, we may think the problem is that we don’t have enough (binary) education — and so the Greek tragedy will go on, ever-worsening.
It is important to draw a distinction between emotional intelligence (EQ) and emotional judgment (EJ). Emotional intelligence is empathetic: it entails the hard act of thinking one’s self into another person’s shoes. It is an intellectual endeavor for the sake of achieving the proper emotional disposition toward other people and requires deep thinking. Emotional judgment, on the other hand, is the act of gauging the validity of a truth based on one’s emotional reaction to it. The problem with EJ is that it requires one to consistently experience a positive emotion in order to verify, justify, or appreciate experiences, ideas, and so on…
A reason economies exist is to make joy increasingly more accessible to increasingly more people, and a society that doesn’t enable its citizens to feel like life is worth living is a society that arguably fails. The easier the economy makes this realization for its citizens, the better the socioeconomic order as a whole. Though both entail emotional fluctuation (at least according to this work), joy is non-contingent fulfillment, while happiness is contingent upon externalities. Consequently, unlike internal joy, happiness is temporary. Furthermore, though such an estimation cannot be made from happiness, the actual value of an economy can be generally estimated from the joy of its participants. The higher the joy, the less likely there is a bubble in the system.
Escapism is the antithesis of Existentialism. Existentialism, a philosophy that claims existence precedes essence, establishes that we come into existence and then decide the meaning of our existence, rather than the meaning being predetermined. It entails an engagement with the actual world, and an ‘existential crisis’ results when a person finds that actuality doesn’t match with what that person thought was real. Such a crisis causes a shattering of beliefs and preset complexes. It is a painful experience; it causes angst. To avoid this crisis, the modern person can view everything as a potential photograph, tweet, posting, text, etc. In this way, the modern stands as a viewer, ‘outside’ the world, and hence the world is something humans conquer rather than something that conquers humans. Perhaps in Eden humanity could have dominion over the world, but now the world holds dominion. In protest, humanity has digitized itself and the world to reestablish the rules of Eden, but this dominion is a kind of escapism and denial rather than a true engagement. It disembodies us: anytime we escape the world, we escape ourselves.
To be materialistic is to focus on material things, while the optimal way humans relate to things is as if those things are ‘invisible’, per se. According to Heidegger, a doorknob is ‘invisible’ to us until it breaks, for until then we use it to open a door without thinking about it. It’s only when the doorknob doesn’t work that we stop and notice it. Similar should be our engagement with all things in the world: to exist in this way is to avoid materialism…
Ethics is an ‘(im)moral’ study. While reading The Groundwork of the Metaphysics of Morals by Kant, I do not try to save the life of a starving child in Africa: I act immorally. Yet I am reading the text in order to learn how to be moral, and if it contributes to me being moral in the future, then I act morally. I act immorally relative to the child in Africa now, and morally relative to those I will help in the future. In sum, I act (im)morally.
What is rational, responsible, honest, moral, sane, genius, and so on is relative to (what we believe) is true. If I am a bird, then leaping off a cliff isn’t insane but perfectly normal. If I am sick, then it’s responsible for me to see a doctor; if I’m not sick, then it’s perhaps a waste of time. If I have a meeting at one and I arrive on time, I prove myself to be punctual; if I am a gang member and scheduled to execute an innocent person at one and prove myself punctual, I prove myself to be a murderer.
What good is thinking and reasoning if economists, pundits, intellectuals, and the like are so often wrong? Great minds failed to foresee the 2008 Financial Crisis, Brexit, the rise of Russian aggression, the Trump presidency — so what use is thought? Clearly whatever its use, even if that use is necessary and invaluable, the use of thought still seems remarkably limited. Though mastering thinking is necessary (for reasons argued throughout the works of O.G. Rose), thinking alone is inadequate. If we fail to realize how innately incomplete thinking is (even genius thought), we may think that we have the tools necessary for saving the world, but when we go to make a difference, before our very eyes, nothing will change.
In his famous essay “The Ethics of Belief,” W.K. Clifford argued that if a shipowner allowed others to sail on a ship that the owner knew was unseaworthy, even if the passengers arrived at their destination successfully and unharmed, the owner of the vessel would still be guilty of immorality. When we know something is true and disregard it, or when we believe in something without sufficient evidence, according to Clifford, we act immorally.
What do I say when I say “I love you?”
If I mean “you make me happy,” there is no difference between “love” and “happiness.” If when you say “I love you,” you mean “I’m happy around you,” again, the term “love” cannot be distinguished from “happiness.” If “love” is to be used meaningfully, it must signify something else.
Everyone has a self, and hence all acts are self-ish and no act is totally self-less. According to moralists, we aren’t to act selfishly, but it is not truly possible for anyone to act without any consideration of, or connection with, his or her self. It is difficult to imagine even what it means to act without one’s self, for even if one were to abandon his or her self, such an act would be done through the self. Therefore, it is not helpful to talk of a need to avoid ‘selfishness’; rather, as will be argued, it is more valuable to speak of a need to live in a state of ‘awe’ and ‘thanksgiving’. Furthermore, is a ‘hollow man’ who acts selflessly good? The selfless acts of such a person seem destined to be vapid and empty. Ultimately, I believe the focus today on ‘selflessness versus selfishness’ can have its uses, but it’s more often than not a source of confusion.
Why does history repeat? Why does it seem thinkers like Heidegger are at war with language? Why does it seem art has more influence on ideas than ideas on art, and though both have an impact, why does art seem to change the world more so than philosophy (as technology seems to be more consequential than education)? Why do words often fail us? Why does the phrase “I can’t put it into words” resonate? Why does knowing things could be worse not make us happier?
It’s because ideas are not experiences.
What do we say when we say, “I’m certain that x?” As noted in “On Beauty” by O.G. Rose, if Wittgenstein is correct that “the limits of my language mean the limits of my world,” then expressions of my language should be expressions of my world. Thus, if I can isolate a term into distinction, then I should identify a real phenomenon in the world — words that don’t refer to real things are nonsense or blur with other words indefinably.
We understand things in the world through what they are not: we understand what constitutes a cat through the idea of a cat, yet a cat is not its idea. Additionally, our ideas necessarily must be incomplete: when thinking about a given cat, it is impossible for me to think about every single detail that constitutes that particular cat. I consider “a cat” through “cats,” per se, and if I try to particularize it into “Sushi” (my cat), I try to make the entity less abstract through a word the thing is not. And yet without the word, without the abstraction, my understanding of the thing would be even poorer. Unless, that is, my idea is utterly wrong; then, perhaps it would be better if I only silently stared at the creature, perceived it, and nothing more.
Why does one person find the case for x compelling while another finds the case against x convincing? Both believe they are rational and intelligent to assent to the case they believe in, yet both cases cannot be true. One person finds the argument that Israel is justified to use force legitimate while another finds the argument Conservative propaganda; one person finds the case that Robert E. Lee was a “hero” grotesque and absurd, while another person finds the argument nuanced and conceivable; one person finds it believable that Roosevelt knew about Pearl Harbor ahead of time, while another finds the argument a silly conspiracy. Why does one person find x believable but not anti-x (or y)? Clearly it is because one person is convinced by one case and not the other, but the question is why? Why is a person compelled and convinced by x, what occurs in the act of a person being compelled and convinced, and why does what occurs in the act occur at all?
Thinking and perceiving are not the same. If I look at a window and think about my grandmother, I perceive the window, but I do not think about it. However, the moment I stop daydreaming and realize ‘the window is dirty’, I am now both perceiving and thinking about the window. Perceiving is ‘processing through body’, while thinking is ‘processing through mind’. ‘Mind’ and ‘body’ are unified when I both perceive and think about the window, but the ‘mind’ and ‘body’ are separate when I perceive the window and think about my grandmother. When what I am thinking about and what I perceive match, mind and body (or brain) are one, though they are apart otherwise. When I perceive the window and think about the window, I am not ‘dualistic’; when I perceive the window and think about grandmother, ‘dualism’, in a sense, is true. The human shifts in and out of being Cartesian.
For Livingston, Hume was ‘among those rare thinkers for whom philosophy itself [was] the fundamental problem of philosophy.’ This is not to say Hume was against all philosophical reflection — in fact, philosophy has a necessary role — only that to understand Hume, we must realize ‘Hume’s philosophy is […] a critique of philosophy by itself [and] its central feature is the dialectic of true and false philosophy.’ ‘True philosophy ennobles mankind; false philosophy distorts, corrupts, and dehumanizes.’ ‘[Hume] sought only to reform the traditional understanding of philosophical autonomy by recognizing the autonomy of custom, that is, by demonstrating that custom is an original and authoritative constituent of speculative thought.’
“Sensualization,” a term coined in this paper, is the giving of sensual or “sense-able” representation to the metaphysical. If I think “I’m hungry,” to say “I’m hungry” is to carry out sensualization. Likewise, if I feel worried and carry a worried look on my face, I sensualize my fear (via a kind of “dark speech,” as discussed in “On Words and Determinism” by O.G. Rose). Ideas and feelings are metaphysical, and unless I sensualize them, I’m the only one who knows about their existence (within me), while everyone else is left in ignorance. If I have an idea about how to improve a house but don’t tell anyone, I don’t sensualize my idea but rather leave it as an idea. Humans are orientated to sensualize versus keep the metaphysical unsensualized, and when sensualization is a good thing, this pays off, but when it is a bad thing, this orientation (and bias) works against our discernment and development.
1. Words have power.
2. Words orientate, and relative to that orientation, create/realize the world/future (of a speaker).
2.1 A person’s world is what a person experiences. What one says determines what and how one experiences the world. Therefore, words orientate the world.
How do humans experience thinking? Is it willed or does it just appear? This might be a strange question, but addressing it might help us decide the way to incubate and encourage the right kind of thinking as a society. Furthermore, we might learn to identify biases that privilege intentional thinking and “low order complexity,” biases which could hinder creativity and the “high order complexity” which defines necessary and emergent phenomena, without which our society and lives could suffer. Both “low order” and “high order” complexity play key roles in our lives, but our brain seems to be in the business of trying to put all our eggs in the “low order” basket. If we don’t actively combat it, our brain, the great frenemy, will win.
We need to stop believing we can determine what people think and who they are based on how they vote: I believe this assumption is tearing the country apart. Why people vote the way they do is incredibly varied and complex, and if we assume that how people vote is enough by which to know how they think, we’re likely just to use these conclusions to support our confirmation bias, ideology, and the like. Our voting almost always misrepresents us, but if we think it tells us all we need to know, our willingness to listen to one another, heal divides, and become fellow citizens may suffer immeasurably.
If x is true but there is no evidence verifying x, then it is irrational, intellectually irresponsible, and correct to believe x. In this situation, it is correct not to believe what is correct, and the right thing to do is to not believe what’s right to believe…
The cost of college tuition is high because businesses rely almost exclusively on colleges to determine employee qualifications. Today, colleges hold a monopoly on credentials, and where there are monopolies, price controls are lacking. Though some businesses, most notably in Silicon Valley, are moving out of the narrow mindset that someone with a college degree is necessarily more qualified than someone without one, this enlightenment is yet to spread through the whole economy. Businesses need to develop their own, personally crafted methods of testing employees without involvement from colleges, which function today as long and expensive IQ tests in disguise (as Peter Thiel notes). Simultaneously, the social stigma against refraining from attending college needs to be effaced, for that stigma empowers businesses to outsource determining qualifications to colleges.
If I find something beautiful, I treat it with care. If there is a vase in the kitchen that is notably elegant, I make a point not to bump into it, but if there is a vase made of plastic that I bought for cheap, though I won’t intentionally break it, I won’t be nearly as careful, and if I have to make a hard choice between catching the plastic vase from falling off a ledge and catching a glass, I could easily choose the glass. Beauty corresponds with value, and if I find something beautiful, relative to the degree I do, I naturally and willingly take care of it. This isn’t to say that beauty is necessary for me to care, but it is to say that beauty naturally inspires consideration and concern without anyone coming along and threatening to put me in jail if I don’t act better…
Neil Postman wrote numerous books on education, though he is most famous for his classic Amusing Ourselves to Death. His thought was deeply shaped by Marshall McLuhan, the mind behind Understanding Media, but he was no McLuhan-parrot; in my opinion, the student rises above the teacher, even if, without the teacher, the student would have been lost. Postman applies McLuhan’s thinking to education, and consequently generates some of the most innovative and provocative thinking about education I have ever read.
The present redefines the past, and the choices I make presently transform what it was I was choosing when I made previous choices. If I choose to go to college and meet the woman I am going to marry there, college suddenly becomes “the place where I met my wife,” as if it was “always” that place. It’s as if the moment “reached back in time” and redefined everything that was, is, and would be, changing what I chose when I chose to attend college. This is what I call a “flip moment” — a redefining experience that changes “what is” as if “what is” was “always” such…
What do we talk about when we talk about beauty? If Wittgenstein is correct that ‘the limits of my language mean the limits of my world,’ then it would seem to follow that “the expressions of my language are the expressions of my world.” Hence, if we can pin down a clear way a word is used and separate it from how other words are used, we might also be able to isolate a distinct experience and/or “use,” and thus arrive at a distinct meaning. From Wittgenstein, we can then move into phenomenology: the effort to define a word becomes the effort to define an experience, to achieve meaning.
Imagine someone will give you $10 for one hour of work. Now imagine that he will give you two hours of work if you agree to be paid $9 an hour, three hours if you agree to be paid $8 an hour, and so on up to ten hours. At ten hours, you would make the same amount as you would have for one hour of work, and so, at some point, it becomes illogical to trade wages for hours. Consider the following…
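A minimal sketch of the arithmetic, assuming the stated pattern simply continues (one dollar less per hour for each additional hour of work agreed to):

```latex
% Total pay P(h) for h hours at a wage of (11 - h) dollars per hour,
% following the stated pattern ($10 for 1 hour, $9/hour for 2 hours, ...)
P(h) = h \cdot (11 - h), \qquad 1 \le h \le 10
% Worked values: pay rises, peaks, then falls back to the one-hour total
P(1) = 10, \quad P(2) = 18, \quad P(5) = P(6) = 30, \quad P(10) = 10
```

Under that assumption, total pay peaks around five or six hours and falls back to ten dollars at ten hours, which is the point at which trading wages for hours becomes illogical.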
1. A is A.
2. A is A is A is A is A is A…ad infinitum…
3. A is A.
4. A.
5. A, A, A, A…eternal regression…
6. While “A is A” ad infinitum, A eternally regresses.
7. Hence, “A is A” signifies “ ‘eternal regression’ is ‘eternal regression’ ad infinitum.”
In the modern world, as brought out by the debate between Isaiah Berlin and A.J. Ayer, two epistemological errors are common. The first is that when told by the teacher that “it is raining outside,” the students conclude that since they haven’t seen it raining, they have no reason to believe that what the teacher claims is meaningful. The second mistake is that the student who is on the verge of running out to see if it is raining stops himself, because he realizes that he has no reason to believe the teacher’s statement is true, and so has no reason to check the weather. After all, the student has been taught that there is no meaning without verification — empiricism is all the rage.
The “Protestant Work Ethic” that the sociologist Max Weber identified in Protestantism is not the description of an essential dimension of Protestantism, but the description of a symptom of something deeper. That deeper problem in Protestantism is its susceptibility to Heideggerian and Deleuzian “capture” due to the Protestant tendency to reject “sacramental ontology.”
How do we know about God, and how do we live out that knowledge? Reason and revelation are often placed in opposition to one another, but from Austin Farrer we can learn to appreciate how reason makes it possible for us to assent to a “vague God” that can make us “will” to experience “the particular God” of Jesus Christ disclosed in revelation. Without reason, we could never make it to revelation. This epistemology understood, we may also begin to find ways of grounding axiomatic positions in Christian theology, as well as unpacking the meaning of some of Christianity’s key phrases…
No one who lacks critical thinking thinks they lack critical thinking, for it takes critical thinking to realize you lack it. Hence, when it comes to defining critical thinking, we are presented with a paradox. To start, no one reading this paper will think they need to read it, for no one thinks they aren’t familiar with critical thinking, and yet this sense of familiarity is precisely why a person would need to read this paper. There will be readers who can critically think and those who cannot, yet everyone will think they belong in the first camp, for who doesn’t think they exercise critical thinking? And this is precisely why this work is necessary and precisely why it seems unnecessary. Critical thinking is surrounded by irony and paradox.
In line with the thought of Kurt Gödel, if all ideologies ultimately cannot ground themselves axiomatically, meaning that “autonomous rationality” is impossible (a worldview that is rational “all the way down”), how is it we justify our ideology? It is clearly with something “arational” or “(un)rational” (note I didn’t say “irrational”), but is that “something” experiences, emotions, imagination — what? I don’t deny these “(un)rational” means of assent serve a role, but Samuel Barnes, the mind behind Missing Axioms, has brought to my attention the unique role of martyrs, soldiers, and saints. Barnes has taught me that blood is epistemologically significant, and, for that, I am in his debt.
“What is the meaning of life?”
Do you mean the word “life” or the phenomenon of life? If I were to ask you, an English speaker, “What is the meaning of (insert French word for life)?” you would probably answer with a definition, while a French speaker may leap straight into existentialism. Likewise, when we ask, “What is the meaning of life?” we are asking life to give us a definition (though we may not realize it, looking at someone). Unfortunately, life is silent. Therefore, we can only ask other people, who, assuming they share our language, will interpret the question to be directly philosophical, when the question cannot be answered philosophically until after the word “life” is defined (which only “life,” forever silent and inanimate, can completely define)…
Humans seem to have the ability to create universals by speaking, yet there are only particularities in the world…
“The rage of Achilles” is so problematic because Achilles seems to have the ability to destroy fate and tear down the spacetime continuum. What seems to be a linear war is actually an event in which the entire cosmos hangs in the balance if things don’t happen “in the right order.”
The brain and the mind “have nothing in common.” The use of the word “nothing” here is a play on words, for Cadell proceeded to claim that the mind is a “lost cause,” something that is essentially “an absence”…
The title of this paper alludes to The Legitimation Crisis by Jürgen Habermas, a prophetic book nearly forty years ahead of its time. It warned that we were losing confidence in political institutions, rendering those institutions ineffective and profoundly damaging democratic processes. Today, the term “legitimation crisis” is often used in reference to socioeconomic and cultural institutions, bureaucracies, and governmental processes in general, and in this paper, I will suggest the term “legitimation crisis” could be used to refer to nearly everything in modern life…
“Monotheorism” is the belief that there exists a single theory that can explain every given phenomenon and/or given event, and it is human nature to be monotheoristic.
1. Reading is an act of trust.
a. We can’t check all the author’s sources.
b. We can’t check to make sure the author didn’t steal ideas.
c. We can’t ask the author whether a fictional character is being sarcastic or ironic.
1.1 If we pick up a book on Vietnam by a reporter “who was there,” we must trust that the reporter actually tells us about things he or she really saw.
a. The reporter might be lying.
b. The reporter might have a bad memory.
Societies and stories are similar in how they work and fail. Like a story, a society that fails to maintain in its people “a trance of believability,” of legitimacy, is a society in decline.
Where things are “without nothing,” then things are “complete in themselves” (there’s “nothing else to see”), but where things are instead “lacking,” things are incomplete (there’s more to the story).
The free market works well because entities aren’t “too big to fail” and can meaningfully compete, and yet “rationality” and “self-interest” — principles which drive wealth creation — drive entities to try to make themselves TBF, hence driving them to threaten if not ruin the wealth creation which justifies their existence.
There is technically no such thing as “meaningful experiences,” only “meaningful memories (about experiences).” An experience is precisely that in which thought is not involved: it is ultimately a matter of perception, which means it is a matter that doesn’t involve thinking or meaning. There cannot be meaning where there isn’t thought, so “pure experiences” are necessarily meaningless. And yet that meaninglessness can be a source of wonder and beauty.
Are “great thinkers” no longer with us? What happened?
“Pure experience,” for Keiji Nishitani, is basically the experience before Lacan’s “mirror stage” when a child doesn’t recognize his or her self in a mirror; during this time (which we all went through), there is no “hard line” between objects and subjects.
What should we do today when we return to a land that was once ours but that we do not recognize?
What a thing “is” cannot be separated from what a thing “means,” as two sides of a coin are inseparable and yet distinct. A given cup is ultimately a collection of “atomic facts.” Therefore, a cup isn’t a “cup”: what a cup is isn’t what it “is” (to us). To humans, the is-ness of a cup cannot be understood; therefore, when humans speak of is-ness, they speak of what a thing “is” (to them). In other words, what a thing “is” is what a thing “means.”
The rational and logical end where death and apocalypse begin; there, the border of thinking is reached…
Section One of a Philosophy of Glimpses
It’s hard to think of a more loaded word in philosophy than “metaphysics,” and it can mean a hundred different things to a hundred different people. To start, it would be useful to review some possible understandings of the term…
Section Two of a Philosophy of Glimpses
Phenomenology is the study of how things “unfold.” It is the study of what x is “like” primarily, with estimations of what x “is” following only secondarily. Even if Kant is correct and the noumenon proves uncrossable, the fact x “unfolds” like y instead of z will give us reason to think x “is” more like b than c…
Section Three of a Philosophy of Glimpses
Any effort to establish a “New Metaphysics” will have to defend itself against Derrida, who seems to have deconstructed all metaphysics with his masterpiece Of Grammatology. Why I think Derrida failed is elaborated on in “On Typography” and “(Re)construction,” both by O.G. Rose, but here I will present the outline of the case.
1. Derrida deconstructed metaphysical systems which rely on “ontological gaps” but not metaphysics focused on phenomenological experiences of apprehension.
2. Derrida deconstructed metaphysical efforts to say what things are like “in and of themselves” (across the noumenon, per se), but Derrida did not deconstruct metaphysical efforts which focus on what things are “like” in their “unfolding.”
3. Derrida deconstructed “metaphysics of judgment” but not “metaphysics of apprehension,” “metaphysics of gaps” but not “metaphysics of reading.”
4. Derrida deconstructed metaphysics which open “gaps” between surfaces and depths, parts and wholes, etc.
Derrida deconstructed “metaphysics of gaps and judgment” but not “metaphysics of experience and apprehension” (in other papers, I say that Derrida deconstructed “the metaphysics of the book” but not “the metaphysics of reading”). By basing a “New Metaphysics” on phenomenology versus (Platonic) systematizing, we can justify engaging in the practice of metaphysics again…
A response to Alex Ebert’s “A Void Dance” on how denying death is to embrace the death drive, while accepting death denies the death drive…
Section Four of a Philosophy of Glimpses
Does phenomenology really overcome the problem of “presence” that Derrida claims signifiers never can, stuck endlessly deferring? This is the problem Derrida is getting at with his language of différance and “trace” — why does phenomenological experience avoid the problems of language and not fall into its own “ontological gap?” What is experience if not a “presence?” This was a point Lennart Oberlies raised, and I believe it deserves special elaboration.
Section Five of a Philosophy of Glimpses
Thomas Jockin made the point that not all “lacks” are nothing, and that the conflation of these categories has deeply hurt our capacities to reason, especially to reason metaphysically. This inspired a paper called “Lacks Are Not Nothing,” and here I will try to give an account based on that paper to explain the difference between “lacks” and “nothing.”
Section Six of a Philosophy of Glimpses
Phenomenology is an “art-form” of observation and careful distinctions based on our experience. We draw distinctions between “love” and “like” by taking into consideration how one “unfolds” versus the other. Since x “unfolds” y way while b “unfolds” c way, there is “reason to believe” that x and b aren’t identical. Maybe they are somehow, maybe they overlap here and there, etc., but if “love” unfolds y way and something “like love” unfolds z way, then there is reason to think that the thing “like love” must not be identical to “love.” And on these grounds, we now have reason to continue or conclude a new philosophical investigation…
To live is to be conscious; it is to inhabit a mode of being and thinking; it is to hold a set of memories; it is to experience a wide range of emotions; it is to know a wide range of people and things. Everyone who is conscious experiences such things, but only you experience what you experience and how you experience it. This helps constitute your-self, and you can never inhabit the self of another. Hence, there is a gap between you and others, and a sort of “hole” in others that you don’t have in yourself…
Section Seven of a Philosophy of Glimpses
A being with consciousness and will is able to shape its own “formal cause” (to some degree), which means that such a being can also shape its “final cause.” While a cup cannot change its formal and final causes, I can change the formal and final causes of myself.
Section Eight of a Philosophy of Glimpses
If free will exists and humans can be “toward” “lacks,” humans aren’t purely physical but “(meta)physical,” though that doesn’t mean humans are necessarily not an “emergent” product of ultimately physical forces (that would be a line of inquiry that exceeds the scope of this work). Considering this, we are capable of experiencing “(meta)physical” beings, events, etc. in ways that purely physical or purely nonphysical beings could not…
Section Nine of a Philosophy of Glimpses
‘Who has seen the wind?’ — Christina Rossetti starts her poem with this profound question. ‘Neither I nor you,’ but we have caught glimpses of trembling leaves and bowing trees, and now it is up to us to remember what we saw when the wind was ‘passing through.’ What passed over us? To answer, let’s start with what we felt.
On the Tropics of Discourse by Hayden White with Davood Gozli
As we can’t visit Lynchburg and just visit “the bank” (because we also have to “visit” all the surrounding buildings, roads, citizens, etc. on the way), so likewise we can’t just discuss “The American Revolution” without also discussing American agriculture, American assumptions about the virtue of freedom, the English language, American literature, and so on…
The term “dialectic” is used throughout philosophy but not always in the same way. Some philosophers by “dialectic” mean merely a “back and forth,” like a democratic debate. People will talk about the “dialectic” between Liberals and Conservatives, Republicans and Democrats, and so on. In this first sense, a “dialectic” and a “debate” are extremely similar, and the key point is that this kind of “Discussion Dialectic” seeks to end the dialectic. The goal is resolution, for the involved parties to come to an agreement that stabilizes the situation.
But this is not the only kind of dialectic…
We are a problem that can be managed but never solved, and seeing as “ethical situations” involve people, ethics are also subjects which cannot be solved once and for all. If we determine in one situation that “x is wrong,” it won’t necessarily follow that we never have to worry about x again or that x is always wrong. In c situation, x could be wrong, while in f situation it could be good, and yet tomorrow x could be wrong in f situation — it depends. It will not do for us to say, “x is wrong,” and by that mean always and/or unconditionally, for that is too A/A in an A/B world: it is to take an idea (“x is wrong”) and press it down and over the world, flattening the world. Instead, we need to form a dialectic between our ideas and the world, which would be A/B: perhaps “murder is wrong,” but it would not necessarily be the case that every instance of “ending a life” was murder; it could be the case that some instances of “ending a life” were only “killing.”
Thinking “about the world” is arguably thinking that only “responds” to the world: its cause and origin are arguably the world. But thinking which “wasn’t about the world” — that misunderstood it, that was imaginative, that was completely abstract — didn’t strike Hegel as a “response” to the world, but its “own” cause and origin (thus, created and creative). It struck Hegel as erroneous to treat this second kind of thinking as identical to the first or to — worse yet — “bracket it out” as error and irrelevant, which arguably is what most Enlightenment thinkers did, absolving themselves of the responsibility to consider the implications of the second kind of thinking. Hegel, though, wouldn’t grant himself that luxury.
Thanks to technology, everything ‘in this world has become everybody’s issue.’ People we’ve never met ‘are now involved in our lives, as we in theirs, thanks to the electric media.’ What we are orientated “toward” has dramatically changed in our modern age… In the past, it wasn’t possible to be “toward” events much beyond one’s locality, family, community or work; yes, people could read about the war, events in Europe, and so on, but we couldn’t regularly receive “live updates,” hour by hour, about everything that was happening everywhere. In the past, humans weren’t simply more isolated, but also more “truly ignorant” about global events: people not only didn’t know what was happening, they didn’t know they didn’t know. “True ignorance” can cause major problems, but so can bearing knowledge that the knowers don’t know how to bear…
We don’t tend to think of void as what makes being possible, but instead what needs to be “removed” so that being can flourish. Where there is void, being is “sucked in” as if by a black hole; in this way, voids are threats to being, not enablers of it. Worse yet, we seemingly don’t even have a robust category of “not-thing” like “void”: we generally have a dichotomy of “being” and “nothing,” which generally means that we only have a category of “being” because nothing is, well, nothing. “Nothing” for us is a “dismissal category,” a category we use to say, “It isn’t” and “It doesn’t matter.” It’s a “limiting concept,” a “boundary” — we suggest “things can’t be nothing,” which means if we’re talking about nothing, we’re talking about nothing and wasting our time. And so we don’t talk about it, and instead focus on being…
“Lacks” and “holes” are very similar, and both create “ambiguities,” for they exist between “the present” and “the absent” (like Schrödinger’s Cat). Because we are A/B, we must face ambiguities and decide if, in our minds, they are “lacks” or “holes”…
‘Language creates a worldview’ more than a worldview creates language. This isn’t to say worldviews don’t have any effect on language, but that language has an incredibly powerful impact on how we think about and see the world. As highlighted by Neil Postman in his book The End of Education, I. A. Richards would divide his class into three groups and ask each to write about language, but he would also provide each group with an opening sentence: either ‘language is like a tree,’ ‘language is like a river,’ or ‘language is like a building.’ ‘The paragraphs were strikingly different, with one group writing of roots and branches and organic growth; another of tributaries, streams, and even floods, another of foundations, rooms, and sturdy structures.’ As the exercise made clear, metaphor influences what we say, and to some extent, ‘what we say controls what we see.’
We tend to think of informative statements as either truths or lies, but what about something we can’t identify as either a truth or a lie? What about something that makes identifying “what is the case” harder, that blurs the line between “truth” and “falsity” beyond recognition and/or that convinces us that we cannot recognize the difference? That doesn’t seem to be a “falsity,” and yet right now that is the only term we seem to have at our disposal, suggesting the need for something else. In this short paper, a term I would suggest is “blur,” and such a term is especially needed in our Internet Age…
What’s it like not to know what we’re talking about? Unfortunately, it’s often like knowing what we’re talking about. Ignorance feels like knowledge, and we, informed about this but uninformed about that, usually participate in “civil debate” unaware that destroying democracy and contributing to it feel the same. In each of us, ignorance is certain.
As discussed with Samuel Barnes on “Truth Organizes Values,” the dream of the Enlightenment was generally that there was only “one internally consistent system,” meaning that “coherence” of a worldview would necessarily correlate with its “correspondence” to reality. Unfortunately, it turns out that a system can theoretically be utterly “coherent” and non-contradictory, and yet nevertheless not “correspond” with reality at all. This being the case, there is no necessary reason why there cannot be an infinite multiplication of conspiracy theories, nor why “information warfare” cannot convince us of practically anything. If there was only “one internally consistent system which corresponded with reality,” then both conspiratorial thinking and propaganda would be much more “bound” and stoppable by rationality; instead, it turns out that rationality is mainly in the business of “coherence” versus “correspondence,” and thus rationality can make the problem worse…
To remain in the Cave is not to be stupid; in fact, we can stay in the Cave and be brilliant. What keeps one in the Cave is not applying their brilliance to the “right ends,” and/or letting their brilliance be “captured” by a zeitgeist or “contained” within some mental prison. Why is this an important point? Because imagination, which is incubated by art, perception, and experience (as will be expanded on), determines “the bounds” in which rationality can operate. We cannot think about what we cannot imagine, and that means imagination goes first in intellectual development (or experience — something aesthetic leads the way). Rationality is not the source of its expansion, only its coherence. Those who stayed in the Cave did what was “coherent” and “rational”: they became best at the memorization game. Neither able to imagine they could leave nor willing to walk out of the Cave, the prisoners who remained in the Cave did what made the most sense: they trained to win the game of memorization.
We’re explained when we know why we’re here, but we’re not addressed until we know why we’re here. A strange opening sentence, yes, and yet I would wager that you know exactly what it’s trying to articulate. Without a second thought, we know the difference between “here” and “here” — we’ve known it our whole life, every waking moment. We know that we were born, that we stay alive because we eat, that we travel through space because of our legs, and yet none of this feels like it’s addressing us (it feels “beside the point”). It’s explaining our physical composition, our need for energy, our body — but we are nowhere to be found in the explanation. And yet we live it — we’re always living explanations in which we cannot be found…
There is a story of a donkey caught between two equally sized piles of grain. Because the piles of grain are the same size, the donkey cannot decide which to eat from, and so the donkey starves to death. The donkey couldn’t make a rational choice: the donkey was paralyzed. Generally, many Greek heroes find themselves caught in similar situations (and their level of “perfection” doesn’t help), but Biblical heroes are different. Biblical heroes find themselves between what look like two equally sized piles of grain, but then thanks to God, they realize one pile is a little smaller than the other, and so they escape paralysis, make a choice, and eat from the larger pile. Additionally, for the Biblical hero, one of the piles of grain might be poisonous or contain an infectious disease that the Biblical hero might take back to loved ones, so it’s very important the Biblical hero chooses well under God’s guidance (and he or she must choose or starve to death). Unfortunately, the Biblical hero is sinful and has bad hearing…
I checked three translations of The Republic — one by G.M.A. Grube, another by Benjamin Jowett, and lastly Allan Bloom (not that these are necessarily the best translations or something: they were just what I had available) — and the verdict seemed clear: the prisoner did not act alone. Nowhere did the prisoner free himself or choose to ascend out of the Cave without prompting, so why in the world then did I recall the story as a narrative of individual enlightenment and self-ascent? Sure, I knew the Allegory involved education and learning the truth, but the imagery and “movie in my head” of the Allegory was one of chosen and autonomous “ascent” — I didn’t recall the prisoner being “dragged out” as the explanation for how he escaped. I recalled that in the discussion with Jockin as a “possibility,” a hypothetical, not as what happened. What was going on?
When I am hungry, the idea for “food” can enter my head: I can “see” a sandwich, and I can “see” where I need to go in the kitchen to make one. Indeed, the “realm of ideas” seems to be a world that “gives me solutions” to my problems: because I “see” a sandwich, I can “realize” and enter into a world where I have addressed my hunger, which is “a more perfect world” compared to the world I was previously in. If ideas can help me move from a “less perfect state” to a “more perfect state,” is it so crazy to suppose that ideas could keep leading me on until I reach “the most perfect state?”…
Imagine that we wake up on a blank canvas. There is whiteness for as far as we can see. There is no ground, no sky, no furniture, no trees — nothing. We even try to look at our hands, and they are not there. All in all, we are in a void: I said we are on a “blank canvas,” but there’s really not even a “blank canvas.”
So, this in mind, here’s the question: What would be something rational to do right now?
A year ago, December 14th, 2020, In Strange Woods was released, and to this day I think about it often. ISW is a podcast musical from Atypical Artists and creators Jeff Luppino-Esposito, Brett Ryback, and Matt Sav; if you have not listened to ISW yet, you must…
For the last six months of 2021, I had the tremendous honor and joy of participating in a reading group with Davood Gozli and John David on The Tropics of Discourse by Hayden White. This reading group already inspired one paper, “The Novel Historian,” but plenty of material was left unaddressed. In this work, we will try to explore and expound upon the rest of the essay collection, with a focus on the role of interpretation in thinking. Far from deconstructing all knowledge, life is found where we can locate hermeneutics…
Have you noticed that most conversations don’t go well? I’m not talking about “small talk” — I mean conversations, about pressing issues, decisions, and that kind of stuff. Someone disagrees, someone gets angry, someone gets offended, someone accuses everyone else of trying to destroy America — you get the drift. Why does this happen? Why is it so hard to have a good conversation? Well, I think it has a lot to do with the fact that there is an incentive to be the first to establish “dominance” in a conversation, and we can do that by disagreeing, getting angry, and the like. In other words, once I’m upset, I have framed others as needing to make me happy again, which is to say that I grant myself “the high ground,” per se. It stinks being stuck on the “low ground” and having to “play defense,” so after this happens to us a few times in conversation, we can start seizing “the dominant strategy” ourselves. And so, Nash Equilibria begin ruining all our conversations and contributing to the collapse of society, which is to suggest Game Theory gives us a way to understand why democracies are failing. This isn’t to say Game Theory provides the only explanation — we’d have to explore economics, for one, to begin sketching out a full picture — but I think it’s at least a useful start.
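A minimal illustrative sketch (with made-up payoffs, not anything from the original piece): the “escalate first” incentive can be modeled as a 2×2 game in which “escalate” is each party’s best response no matter what the other does, so the only pure-strategy Nash equilibrium is mutual escalation, even though both parties would prefer the outcome where both listen.

```python
# Illustrative sketch only: a hypothetical 2x2 "conversation game" with
# assumed payoffs, used to show how "escalate first" can be a dominant
# strategy whose mutual adoption is the sole pure-strategy Nash equilibrium.
from itertools import product

strategies = ["listen", "escalate"]

# payoffs[(row, col)] = (row player's payoff, column player's payoff)
payoffs = {
    ("listen",   "listen"):   (3, 3),  # a good conversation
    ("listen",   "escalate"): (1, 4),  # the calm party is stuck "playing defense"
    ("escalate", "listen"):   (4, 1),  # the upset party claims "the high ground"
    ("escalate", "escalate"): (2, 2),  # the conversation collapses
}

def is_nash(row, col):
    """True if neither player gains by unilaterally switching strategies."""
    r, c = payoffs[(row, col)]
    no_row_gain = all(payoffs[(alt, col)][0] <= r for alt in strategies)
    no_col_gain = all(payoffs[(row, alt)][1] <= c for alt in strategies)
    return no_row_gain and no_col_gain

equilibria = [pair for pair in product(strategies, strategies) if is_nash(*pair)]
print(equilibria)  # [('escalate', 'escalate')] under these assumed payoffs
```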
An Initiation to Game B premiered at The Stoa on Jan 17, 2022, and since then has inspired widespread discussion and debate. Davood Gozli recorded a great reflection on it, and I’ve enjoyed reviewing the threads which have popped up across the internet…
The Four Arrows
Mimetic Rivalry, “Cancer,” Scale, and Coordination Systems
Dr. Cadell Last in his Substack article mentioned “Game A/B,” which brings to mind what was said in the video on The Stoa about “keeping the good of Game A” but leaving behind the bad. Similarly, we should ask: what is the good of Game B that we can strive to implement? This is an important question, for indeed our current system is struggling. For me, I like to ask, “Is Game B a replacement, reformation, or schism? Likewise, is the Dark Renaissance a replacement, reformation, or schism?” I’ve received different impressions from different thinkers, and I think all three possibilities are worth considering, regarding both Game B and DR. The same can be asked about “Absolute Communities,” as mentioned in the last piece…
In a recent discussion with the magnificent Dr. Cadell Last, it was noted how, in Hegel, an idea isn’t even an idea until it is made concrete. Hegel discusses abstraction, negation, and concretion — the full arc of sublation, as I understand it — which for me highlights how we start off with an idea for how to start a business, then we “negate” the idea phase of our potential entrepreneurship so that it can be “sublated” into reality, and then we start living with and operating “the concrete” and actualized business. Ideas are negated as ideas and thereby actualized into “concretion,” and for Hegel an idea practically isn’t even itself unless it goes through this final phase. Why’s that? Well…
Humanity is faceless. It has no gaze. It exists, sure, but it is not real like our husband, which is to say humanity is not a personalized individual who we could find in our kitchen chewing food loudly and talking about the laundry. Our husband doesn’t realize that we hardly slept last night, that our legs are aching, and that we cleaned up his closet without him even asking — does he realize everything we do to make his life better? We don’t say anything, fighting the thoughts in our heads, which also makes us upset, because our husband doesn’t even realize that we’re having to fight thoughts in our heads because of what he’s saying. It’s all “inside”…
Dr. Cadell Last, Tim Adalin, Alex Ebert, and I concluded our “Philosophy of Lack” series (though fortunately “lacks” can never be completed — that’s partly the point), and in it we discussed Socrates and how he helps us realize that we can go through most of our lives following a system of ideas that we cannot ultimately justify. Ideas guide our lives, and ideas require thought, but most of our thoughts we’ve never even thought about. Critically though, Socrates also puts himself in a place where he shares in our emotional struggle with his project. If Socrates somehow “pulls us down,” he brings us to a place where we all stand together, face-to-face, able to finally meet…
In this paper, I will explore differences between “developing from” (DF) and “developing with” (DW), with a focus also on questions of how the emphasis on DF or DW could alter relative to scale. All the same, an emphasis on “development from” is risky (even if sometimes valid). This will suggest, alluding to “The Socratic Embodiment (Part 1),” that the larger the system, the more “explanation” will be emphasized over “address,” seeing as address is particular and specific. There are advantages to this — explanations are more general and universal — but there are also disadvantages (the loss of “address” has hurt mental health, for example). Regardless, the onus is ultimately on us to go “Inside The Real” of our neighbor’s home and find out what kind of people we can be…
Is family good? Is family bad? Hard to say, but to help frame an exploration on the topic, my mind turns to William Faulkner. The Sound and the Fury is a masterpiece, and here I want to understand it as three reactions, dimensions, and/or stages to the collapse of family, with each Compson brother embodying different responses (though Faulkner was no creator of simple symbols). To point to the conclusion, in the midst of what has been called “The Meaning Crisis” (our experience of ourselves as Hamlet, as discussed in The Breaking of the Day) we must learn to “endure” our present space of “negation” like Dilsey did; otherwise, there will be no hope for sublation, only effacement.
We have mistakenly discussed “The Meaning Crisis” metaphorically as “a dead end,” as if it were something we stumbled into and can’t escape. Under this framing, due to Neoliberalism, we are in a desperate situation that has resulted from our ignorance, carelessness, and foolishness. This only helps Putin rationalize efforts to separate Russia from Neoliberalism, but if we framed “The Meaning Crisis” differently, future invasions and similar reactions to “The Meaning Crisis” might prove more difficult to rationalize. For this reason, there is an urgent imperative to metaphorically think of “The Meaning Crisis” as a situation in which we are like Thomas More. We do know how to solve “The Meaning Crisis,” but we’d rather suffer and perish than fall back on old solutions. In this way, there is a nobility to our existential suffering. “The Meaning Crisis” is a sign of hope.
What is Conditionalism? It is a philosophical approach which stresses that many of the most consequential occurrences in reality, pregnant with ontological, metaphysical, and epistemological significance, are occurrences which only occur under certain conditions for certain periods of time. Philosophy has often stressed universals, objectivities, and “non-contingencies,” and associated “absolute truth” with such. In Conditionalism, which falls between “Absolutism” and “Relativism,” we associate truth, beauty, and goodness with entities like “shadows,” which only appear when light and darkness are together “in the right way” so that they don’t cancel one another out.
Currently, our talk of the Singularity is reminiscent of a Freudian desire to “return to the womb” or “return to Eden,” both of which are basically a “wish-fulfillment” to escape feelings of incompleteness, lack, and frustrated desire. The topic of “lack” is expanded on in “The Philosophy of Lack” series, but here it is enough to know that “lack” is that feeling that “something is missing.” It is a result of the split in the subject between what is wanted and what is, the Imagined and the Real, and we live and manage that “split” through our Symbolic order (as manifest in our daily lives and in our societies). Now, these categories overlap, for we tend to define “The Real” according to what we Imagine, and we like to believe the Symbolic likewise reflects and participates in the Real, all while we try to avoid experiencing the Real in a manner that forces us to accept that the Imagined and Symbolic diverge from the Real (perhaps radically so). We are strange and paradoxical creatures.
The Intellectual Class and position is thus total in itself, which is risky, for it engenders the possibility of totalization and totalitarianism. However, as will be argued, Intellectuals (of some kind) are necessary, so removing this possibility entirely would not prove beneficial to society, just as banning fire to keep people safe from burns would not. The name of the game is to keep Intellectuals “bound,” which Benda argues had been the case for centuries. Intellectuals or “clerks,” as he calls them, were restricted to “the impractical domain,” which is a limited and necessary sphere of activity. Unfortunately, Benda saw Intellectuals expanding into “the practical realm” of politics, which was for the unstoppable Prometheus to be “unbound.”
Would “the best of all possible worlds” Leibniz discusses be “the most rational of all possible worlds?” By this, I mean a world where only rational action occurred (assuming that is theoretically possible, which I see no reason to think it isn’t): a world where everything that happened could be translated into rational terms, rationally justified, and rationally explained. Is that where we would find paradise? Or would we instead, following the thought of Benjamin Fondane, find something closer to an authoritarian regime? After all, if God made the world, and creation indeed entails nonrationality, then there would be reason to think that “nonrationality” is somehow essential to making the world “the best it can be.” If this is the case, there is “nonrationality” in Heaven. Nonrationality is found in God.
The following paper emerged out of a conversation series with Davood Gozli and John David. We read Existential Monday together, and the series was a highlight of my 2022. I highly suggest both of their channels.
Funny enough, letters are undefinable: we cannot define a letter, only recognize a sound. Until letters come together into words, no meaning can be located: the gap between the “signifier” and the “signified” doesn’t appear. This “gap” is what Derrida believed metaphysics constructed its home in, and if the “gap” could be deconstructed, metaphysics in general could be deconstructed too. Derrida successfully argued that signifiers eternally “defer” the signified, that the signified is never arrived at; thus, metaphysics never “captures” its subjects or reaches its goals. Since this was the case, Derrida believed metaphysics was basically meaningless. But I believe Derrida only deconstructed “the metaphysics of the book”: I do not think Derrida deconstructed “the metaphysics of reading”…
“Why Does Anselm’s Ontological Argument Haunt Us?” hoped to establish why Anselm is epistemologically significant, and why his work is evidence of claims made in The True Isn’t the Rational. That paper is presented in (Re)constructing “A Is A” (Part 1), and here, I want to consider Anselm theologically. I think the fullness of Anselm’s project is often missed: professors teach “The Ontological Argument” (a phrase we receive mainly from Kant), found in Chapter 2, and that’s the limit of the discussion, precisely, in my view, when we’ve only arrived at the start of what Anselm wants to accomplish…
What’s it like to speak? What’s the difference between the moment of silence and the moment with words? Sometimes words feel like escape, running from the silence, while other times not speaking feels like running. Conditions change with time, and what we should do and shouldn’t do changes with those conditions. Why do they change? Why is saying, “I love you,” in some situations the best of all possible acts, and in other situations an act of manipulation? What makes words mean what they mean? From where comes their meaning?
There was a plane wreck. We find ourselves stranded on a desert island. We loosen the tie around our neck and thank God for our cellphone. Our sister used to tell us that cellphones weren’t useful, that they caused people to get addicted to social media and just inspired people to waste time, but lost on a desert island, who could possibly question their value? We pull out our phone and dial 911. No signal. The app with our account information is still open though. A million dollars. 100 pounds of gold. 10 bitcoin. There are no stores on the island, and night is an hour off. We stare at all of our money…
The paper “On Thinking and Perceiving” by O.G. Rose drew a distinction between “taking in” the nearby bookcase (“pure perception”) and thinking about the bookcase, which creates the distinction between “my idea of the bookcase” and “the bookcase.” In this work, within the category of “thinking,” I would like to draw a distinction between “thinking here” and “thinking there” (though I cannot promise to maintain this language throughout all my works). This distinction arose in trying to consider the role of memory in thinking, which led me to see acts of imagination, creativity, and memory as examples of “thinking there,” beyond my immediate surroundings and experience, while considering the nearby bookcase is “thinking here.” If I can physically see, smell, hear, etc. it, then it is something I can “think here” about; if I cannot see, smell, hear, etc. it, then it is something I can “think there” about (I use the word “physical” here, because I can arguably “see” and “hear” things in my head). Ultimately, I think the language of “thinking here” and “thinking there” can help us understand how we are a strange and paradoxical point at which immanences (“closed systems”) interrelate to make possible intelligibility, meaning, and efficient action…
Something is missing. Between “being” and “nothing,” we need a third category: “lack.”¹ Many philosophers today discuss “becoming,” a corrective of “being,” and certainly where there is “lack” there can be “becoming” relative to that “lack” (which is to say we can “be toward it”). If we are inspecting the tip of an iceberg and “lacking” what is beneath the water, it is possible for us to then “be-toward” what is beneath the water by putting on Scuba gear and diving in.² Arguably, if we weren’t “toward” what was under the water, trying to glimpse it, it couldn’t be “lacking” to us, for it is “practically” nothing. Yes, the bottom of the iceberg is “technically” there, but not “practically” relative to us. For a “lack” to not be “nothing,” we have to be “toward” it (as described in “Lacks Are Not Nothing” by O.G. Rose), and this means “lacks” and “becoming” are deeply linked. Furthermore (and critically), for “a lack to not be nothing,” a certain condition of “towardness” must be met. Likewise, for “becoming” to not be “being,” it similarly must meet the condition of, and be conditioned by, something that it is “toward” (and hence cannot have or possess, “a lack”). “Lacks” condition becoming, as becoming necessitates the condition of “lack.”
“To be rational” is to follow the logic of given premises to their “coherent” end, but not all premises are true. “To be true” is to believe in true and “corresponding” premises, whether rational or not. It is, of course, best to be rational and true, but just because a person is one doesn’t necessarily mean the person is the other (and vice-versa). Which premises are true cannot always be determined (though note that something can be true even if it cannot be verified), and it is unlikely that people who aren’t rational could meaningfully explain why their premises are true. But truth is not true because it can be explained: it is true because it is true, even if the lack of explanation means the truth is meaningless to us and/or cannot be known as true. All that aside, the main point is that “being true” and “being rational” are not synonyms: when someone is rational about what is true, it seems like “rational” and “true” are synonyms, but this is not the case…
Much of O.G. Rose suggests a “conditional” metaphysics, ontology, and epistemology, all of which can seem strange, mystical, and paradoxical. These are fair points of contention, but I think what is proposed in Rose is a lot more tangible and practicable than it might at first seem: we are basically grounding our “abstract work” in the “concrete” experience of being an artist, creator, designer, or the like. “The creative act” is the experience at the center of our thinking, and by examining it closer, a better hold on the ideas of Rose becomes possible.
I’ve already recorded an analysis on the movie, and I do not want to repeat myself here; however, I did want to draw attention to a point I made more in passing, which is that Drive My Car strikes me as an example of a “Lacanian New Sincerity” (which could perhaps be associated with Metamodernity, but I’m not sure). This no doubt sounds strange, but I hope here to elaborate on what I meant. Ultimately, I will argue that embracing “Lacanian New Sincerity” is necessary for us to become “Absoluter Knowers” and “Deleuzian Individuals,” which is necessary today, given “The Conflict of Society” which is now becoming “visual” and undeniable, causing social dysfunction. What I will describe here is my idea of what we should do given Belonging Again, the problem of “givens,” the tensions of freedom, and the like. Basically, we must exercise “The Scholé Option” and condition ourselves to “Bring Forth The Real/Beautiful” from the place of a “real choice.” “The fate of beauty is the fate of us,” and ultimately “the fate of beauty” will be determined by our success or failure to become “Absoluter Knowers,” “Deleuzian Individuals,” and/or Sublime…
Like “being,” A/A is unavoidable and fine within a “dialectic” of “be-coming,” but unfortunately we are so naturally “toward” “being” that we easily fall into “autonomous being,” which causes pathological effacement. Furthermore, I consider possibilities of “Ultimate Being” (or some “Ultimate A/A”) as topics of what I call “Alterology” (under which falls subjects like Theology, Psychedelics, Speculative Futurism, etc.), which I try to “bracket out” from Philosophy. I also emphasize the problems with A/A, because I believe it is increasingly hard for us to avoid “autonomous A/A,” which is natural, subconscious, and causes trouble…
I can study a car, which seems to tell me in its very facticity that “it is for transportation,” and think to myself, “I could sleep there.” In this way, I could think of a car as both a “thing for transportation” and “a bed,” and yet the possibility of using a car as a bed is not readily “given to me” by the facticity and structure of the vehicle. I can also think, “That car has changed,” when the car I’m driving now isn’t the same as it was five minutes ago (let alone five years ago). Hegel and Dr. McGowan stress the connection between “contradiction” and “becoming,” noting also “paradox,” which is to say that arguably the main way we “think contradiction” is to think of things as “solid” and “what we experience them as,” when things are rather constant “processes” and beyond our experience. Things are not what we take them to be, and yet we are able to “take them” that way all the same.
The Iconoclast by Samuel Barnes is a tremendous text, a fitting follow-up to Missing Axioms, which I similarly found inspiring. Barnes tells us that ‘[p]hilosophy is the question […] the ultimate question […] the question of questions.’ After an introduction like that, it’s easy to think that what will follow is a shameless praise of philosophy as the most important of all enterprises, but what we find instead is a cautionary tale in the heritage of Pyrrho, Sextus, and David Hume. The book also explores the work of Donald Livingston, from whom I myself derive much inspiration, and the text will ultimately suggest that it’s a problem that “philosophy is the ultimate question.” For Barnes, we could say that the ultimate question regarding philosophy is that philosophy forces us to decide what we’re going to do about philosophy (‘the intellectual leviathan’). So, indeed — what will we do?
Money isn’t the only currency: fame, popularity, connections, beliefs, affiliations — humans can use much as a means to what they want. Just as Louis Dumont argued it was human nature to create hierarchies, it also seems to be in our nature to create “currencies” (to turn x into something that can be used in exchange for y). Money is paper on a desert island, inefficient material for fires, but power in America. Though perhaps “value” is more individual, currency is necessarily a “social contract,” for I can only use x in exchange for y if the person with y agrees that x is worth y (which suggests that there is something inherently “explicit” about currency, for only the explicit can be socially acknowledged, which suggests that what is “explicit” is in the condition to possibly be turned into currency). Where there is currency, there is society, which suggests that where there are currencies, there can also be social pressures to create and use currencies, perhaps for noble ends, though perhaps at too high a cost.
If something is “totally new,” it has nothing at all to do with what came before it. Something “new” can share connections with the past, but not something “totally new,” which would suggest that “total newness” might be an impossible category that we never actually encounter. The same could apply to “total similarity,” another term for which is “sameness”: arguably, no two things are ever actually the same, for then the things would not be two but one. In this way, the word “same” is never meaningfully applied; whenever it is used, the term is used to actually mean “similar” (and/or “greatly similar”). If the term “same” is meaningfully used, what is used is “sameness”…
To review, if “similarity/difference” means “process,” we could say processes are contained within each Vector, but process isn’t what defines “the space between” Vectors. By extension, we can say that the process of Chemistry “is totally different” from the process of Biology, the process of Mind, etc. (each Vector contains a unique process, with the description of each requiring its own paper).
Where there is a “=” that doesn’t efface, there is a Vector, for there can be no equivalence within a Vector without causing effacement. And the reason the “=” doesn’t efface is because it dialectically relates to “Pure Difference” when a new Vector emerges. “Pure Difference” is a Vector, while lowercase-“pure difference” is always effaced as “pure difference” (identical logic applies to “Sameness” versus “sameness”). This in mind, we can write…
Since Physics is a foundational Vector, we should expect to be able to find ways in which “higher Vectors” end up “embodying” The Leibniz Oscillation. Leibniz shows how this occurs in Mind, and I think Girard and Darwin can provide ways to understand it in Biology and Culture (with Culture “embodying” Biology, suggesting overlap). At the same time, we should expect uniqueness, which we will explore in the following. This is a very tentative list, and I am not sure at all if the description of the “processes” for each Vector is accurate. However, I hope the logic I explore works just as well, even if in the future “The Vector Tower” needs to be updated and corrected. Furthermore, I don’t know if each Vector entails many processes (interacting), if some entail one that is so predominant that it “might as well” be the only process, or if some Vectors entail one process while others entail many. Again, I have no idea: my point is ultimately to suggest a framework and to eventually touch on “the problem of Vector transitions,” which seems essential (please note I’m still not sure if the LO is a process of Subphysics or Physics, or if Mind and Culture need to be combined, reordered, etc. — all of this can be readjusted in the future as need be). A very tentative version of “The Vector Tower” could be depicted as follows…
We have very good reason to believe Biology “embodies” Physics because zebras cannot escape the laws of gravity, and because the force they produce is relative to their mass and acceleration. We have good reason to think oxygen (Chemistry) can “participate” in Mind, because we can formulate “ideas about oxygen” (and you right now can read about oxygen). The ideas expressed here are based on experience versus theory: we have “observed” Biology following and “embodying” Physics, as we have experienced Physics “participating” in Mind (to us). For this reason, the possibility of “Vector Interaction” is empirical and phenomenological, so we must give an account for it that avoids reductionism (otherwise we’d risk defeating a main purpose of Vector Theory). Successfully or not, this is what we will now attempt.
How and why does desire arise in the first place, and why does it tend to be “mimetic?” Girard in Battling to the End notes the discovery of “mirror neurons,” and indeed those neurons seem to be part of why humans mimic those around them. The larger the sample size, the more it seems to be the case that “mirroring” explains a large part of human want and desire. But why? Is it simply because we have “mirror neurons?” Regardless, we seem primed to fall into “mimetic desire” by the raw fact that ‘imitation is the initial and essential means of learning’ (as Girard put it). Children must model themselves after their parents and environments if they are to have any hope of living in a world that is intelligible to them, but that means we all seem primed from the beginning to end up in “mimetic desire.” If modeling is how we must learn, then we all must start learning by doing something that could easily end up in mimetics…
As argued in O.G. Rose, the nature of human reality is “ ‘A/(A-isn’t-A)’ is ‘A/(A-isn’t-A)’ (without B),” and story is a “proof” for it (via description). This ontological situation arises because of the divide between thought and perception, which creates the divide between ideas and things (an “A is A”-idea that an “A is A”-thing is not but is known through) (it is also the divide between being and Being, but this paper will avoid that terminology). The paper “On ‘A is A’ ” by O.G. Rose tries to argue that reality is indeed more captured by the formula “ ‘A/(A-isn’t-A)’ is ‘A/(A-isn’t-A)’ (without B)” than “A is A” (while a few other papers help lay the groundwork: “Read(er),” “On a Staircase,” “Transposition,” and “On Is-ness/Meaning”). But even if the paper “On ‘A is A’ ” succeeded, the mere internal consistency of a formula alone isn’t compelling evidence to believe that reality itself reflects that formula and lives and moves accordingly: that is left to be shown. Well, fiction shows it, and discussing math might help explain how…
The Science of Logic is not a text I feel mastery in, and I would turn readers to the work of others for a deeper and better reading. Still, I feel comfortable claiming that what I call “The Modern Counter-Enlightenment” aligns with Hegel, and that thinkers like Maurice Blondel, Alfred Korzybski, Benjamin Fondane, Paul Feyerabend, Pavel Florensky, Peter Geach, Alfred Whitehead, Henri Bergson, Michael Polanyi, René Guénon, and the like basically follow Hegel’s critique of Aristotle and “hard objectivity.” Layman Pascal and Alex Ebert are two individuals I would consider as part of “The Modern Counter-Enlightenment,” which I believe is still occurring, for the line of thought has mostly been ignored. I would also associate the movement with “The Kyoto School,” Nietzsche, “The Scottish Enlightenment,” and Phenomenology, as well as some theological projects like that found in Balthasar — but those are claims I would have to defend. As brought to my attention by Dr. Terence Blake, Francois Laruelle also seems critical, whose “non-philosophy” strikes me as very much aligned with my thinking on “nonrationality.” For Laruelle, all philosophy requires a decision and orientation that comes prior to philosophy, which indeed sounds like my ideas on how we must assent to a truth before we organize a corresponding rationality. For me, this is the “pre-move” and/or “dialectical move” arguably at the heart of all A/B-thinking…
Imagine we stepped into an art gallery which was totally generated by AI but didn’t know it. If the AI passed “The Turing Test,” there would easily be nothing which would make it clear that this was AI-generated art. At this point, seeing as the gallery was “missing nothing,” we’d be forced to make a choice, meaning we’d be forced to choose if that “nothing” suggested humans were “insignificant” or “apophatic.” (Please note that “truth organizes rationality,” so whatever interpretation of “nothing” we choose will radically organize our rationality and thinking.) If technological development is unavoidable or unstoppable, perhaps the point of capital-H-History is to arrive at the place where “The Singularity” is created and we are forced to see that it entails “no negativity” (just like Žižek notes), and at that point we’d have to choose/interpret if that negativity is evidence humans are insignificant or evidence that human negativity is “apophatic”…
Plugged in and online, we are part of a collective mind, and when we are disconnected we still think in terms of possible connection (our “towardness” is forever changed, as discussed in “Representing Beauty” by O.G. Rose). In this circumstance, trust, which is already hard to define (let alone maintain), seems jeopardized. Trust is difficult to establish between two individuals, especially after the trust is broken once (let alone multiple times), and it just seems improbable that trust could exist between millions of different people. But if we can’t learn how to maintain trust within a collective consciousness (to some degree), it would seem racial, religious, political, etc. tensions will only worsen. If overcoming this obstacle is impossible, our collective consciousness might rip consciousness apart…
Kids are obsessed with stories and creativity, and they’re also obsessed with asking, “Why?” In children, we see a remarkable blend of art and philosophy, and Nietzsche calls his highest metamorphosis “the child.” Children also seem uniquely “intrinsically motivated,” which might suggest that bringing creativity and philosophy together can help us with problems of boredom, seeing little reason to live, and the overall “Meaning Crisis” (as Dr. Vervaeke calls it). In my view, “the problem of motivation” is severe today, and my experience has led me to believe that bringing philosophy and art together can go a long way to helping with this problem (I also see Nietzsche as a thinker concerned with deconstructing “Bestow Centrism,” which leads us into a life of creativity and thought, for we need nothing outside of ourselves to create or think). Perhaps not, but I think children are a case study suggesting that this angle is worth considering…
I support the Counter-Enlightenment and Scottish Enlightenment, discussing also “The Modern Counter-Enlightenment,” and I have a particular love of David Hume, about whom papers can be found throughout O.G. Rose. My main paper is “Deconstructing Common Life,” where I argue that Hume stresses a need to “return to common life” to keep philosophy from becoming a force of totalitarianism — so why in the world am I spending so much time on Hegel? Isn’t Hegel everything Hume warned against? Indeed, for a good decade, I interpreted Hegel as the logical extension of Kant which led into Marx and “totalizing projects” which contributed to the horrors of the 20th Century. In other words, I wanted nothing to do with Hegel. But David Hume was big on friendship and stressed that we learn a lot from friends, so when I found that Dr. Cadell Last respected Hegel, I thought I might need to revisit him. And indeed, my view entirely changed.
Thinkers like Lacan are famous for explaining why desire ultimately cannot complete itself and ever find “final fulfillment” or “completion,” and I completely agree that desire in this life is never satisfied. Dissatisfaction seems built into desire’s very structure, a point which also brings to mind The Weight of Glory by C.S. Lewis, and on the face of it this realization is horrifying. We can never be satisfied. Doesn’t that mean life is a joke? Maybe, but it also might mean we can always have something to do. Isn’t that great?
The logic of a people’s economy will indirectly come to be a major influence behind how they see the world. If the economy suggests there is a hard difference between “price” and “value,” then this is dualistic, and the majority of people will likely be Dualists. It’s natural: much of a person’s day is spent working and operating through economic exchanges, and so the nature of that system can naturally organize and shape their practices. Within Dualism, both a “hard spiritualism” (Gnosticism) and “hard physicalism” (Materialism) tend to arise (together, in response to one another, etc.): we either claim that all that exists is “the body” and facts, disregarding “spirit” entirely, or that “the body” and money are dirty and corrupting, disregarding “materiality.” In this way, Gnosticism and Materialism can arise together in a dichotomy, and I would argue that Modern Economics has mostly operated within that very dichotomy. And that has worked well enough for most of history, but today we find ourselves having to effectively price and organize “horizons” (as I call them), and for “horizons” what worked for pricing “goods” and “services” no longer proves sufficient…
Ebert argues that we can find in Hegel ‘modern interpretations of equivalency,’ not equality, and “equivalency” is another term that basically means “approximal,” which is the language of Layman. In Hegel, there is a stress on “I as other” (A/B, “being-other”), but we know “I = other” (100%) is impossible; however, it is not impossible for “I (to be) like other” (99%). If 100%-ness was possible, there’d be no fundamental “gap/open-ness” in a self to be possibly and essentially incorporated with “otherness,” and so the very impossibility of 100%-ness is why “I as other” is possible. Love itself almost seems fundamental to being in reality proving to be a great 99% (or at least “love” seems to be an appropriate term by which we might describe “I as other”). Under this schema, it is not the case that “I” and “other” are ontological alternatives into which things might be “thrown” (Heidegger); rather, they are ultimately indivisible in their very indeterminacy…
Walter Kaufmann tells us in his Prologue to I and Thou by Martin Buber that Buber taught Kaufmann ‘how to read.’ This is because Buber taught Kaufmann to treat writers like a You and not an It, which is to make space for the voice and presence of the other (as “sacred” even). Buber will be addressed in The Absolute Choice as a way to consider the movement from Self-Consciousness (I-I, I-It, A/A) to Reason (I-You, A/B), but here I want to focus on Kaufmann’s point that Buber is hermeneutical…
In my work, I often claim that “the true isn’t the rational,” which is to say that they are distinct categories, and the main point is that rationality is always relative to what we believe is true, which raises the question of how we might determine what we think is true if rationality comes afterwards. Rationality operates assuming axioms (it must), and so to try to think “nonrationally about truth” is to try to find ways of knowing that are free of axioms (or “between them”). But how is that possible? Well, that’s arguably Hegel’s whole question with Science of Logic, and we all obviously do indeed do it because we use rationality to function. We likely just “absorb a truth” somehow through experience, emotions, faith, etc., and that’s why we are the way we are now. Alright, fine, but how might we nonrationally choose a truth now, after having already absorbed one? Having operated according to presuppositions and axioms for so many years, how might we employ a new logic? And how might we do this and not feel like we are being irrational versus nonrational?
Worry creates its own evidence, against which freedom struggles to rationally compete. By virtue of how worry pulls phenomena “toward” itself as evidence for its justification, freedom likely cannot survive safety concerns. If you and I are sitting in a kitchen and there is a door nearby, and I ask you not to open it because there might be a murderer on the other side, the only sure way to prove that my concern is irrational is by opening the door and risking death. If you do open it and find no one there, I could be emotionally hurt because you didn’t heed my instruction, making you feel like you made the wrong choice for putting your life at risk and for so carelessly disregarding my concerns for your wellbeing. What would you have lost by not opening the door? You wouldn’t have needlessly put your life in jeopardy; furthermore, you wouldn’t have hurt my feelings and disrespected my wishes. By taking a risk, you gained nothing and lost my confidence. You kept your freedom but hurt me and risked your own life. All you accomplished was prove that there was nothing to worry about.
Think about a cat. Do you see it in your head? Very nice — now, here’s the question: What ought you do with that cat? Pet it? Feed it? Good, good, but how do you know that this cat in your head needs food or needs to be patted on the head? Because you know cats need food and love? That’s fine, but how do you know that this cat in your head right now needs these things and in the way you think it needs them? Yes, you know the cat generally needs food and love, but how do you know when and how the cat should be given food and pet? Because you imagine the cat needs these things right now? Well, that works, but the real world isn’t so easy. To determine “the details” of when we “ought” to do this or that for the cat, we would need to understand and encounter the cat in its particularity and “such-ness.” Otherwise, we will be acting off generalities, and, critically, misapplied goods can become evil. If I feed a cat when it doesn’t want to eat, I can cause it stomach pain; if I pet a cat to the point that it has a rash on its back, the cat will suffer. But how can I tell when a cat shouldn’t eat more or when I should stop petting it? Well, by paying attention to the particular cat we’re feeding and petting: the “idea of a cat” cannot provide this necessary information of seeing, practical timing, and application. Yes, from the general idea of a cat or “is-ness” we can determine what the cat generally needs, but the generality isn’t enough. It can cause trouble…
“Autonomous rationality” creates a world in which everything that cannot be justified in terms of rationality is deconstructed and destroyed, but in “autonomous nonrationality,” say where intuition is king, everything which questions intuition is automatically wrong, and anyone who seeks to “conceptually mediate” intuition or have it justify itself can be framed as “below” the special knowledge found in intuition. If we don’t “just know,” we’re not in “the in-crowd,” and thus can be treated as an outsider. Those with the power of intuition (or some secret connection with Divinity) are not obligated to explain themselves, and they cannot be argued out of their position. After all, argument is “conceptual mediation.”
Rationality itself consists of essential limitations, and until we recognize them, we cannot act according to those limits and still consider ourselves epistemically moral and rational (for “good reason”), which might help us maintain existential stability. In other words, if we don’t accept the ways thinking is useless, paradoxical, and/or ironic, we cannot reinvent what constitutes “being rational,” for we fail to change our organizing truth. What a people believe is “true” is what that people will emergently and organically organize themselves “toward,” and if that truth and/or truths are poor ones, the people will reflect that impoverishment. Similarly, to allude to The Absolute Choice on Hegel, if we understand that the ontological and metaphysical foundation of reality is A/B versus A/A, then it becomes “epistemically responsible” to engage in A/B, for A/A is unveiled to be essentially incomplete and ultimately self-effacing.
It was wonderful to speak with the Other Life community this week on Aristotle’s Nicomachean Ethics, hosted by Justin Murphy. Everyone involved was impressive and made the text come alive, which I think is important today, for I believe the stakes are very real when it comes to determining according to which framework we should situate the question, “How should we live?” It is important to answer this question, but if we first don’t ask if our worldview is more Utilitarian or “Virtue Ethic”-based, we may fail to realize how this important question could be halfway answered before we even begin considering it. The framework we bring to a question is as important as the question itself…
Philosophy is extremely interested in the possibility of “presuppositionless philosophy,” which is to say a philosophy that assumes absolutely nothing. Our presuppositions are our assumptions and starting axioms, and philosophy has a long history of believing that it had located a thought that didn’t require any assumptions, only to later realize there was an assumption secretly hidden in the premise(s). “I think, therefore I am” seems straightforward and free of assumption, and yet why am I so sure that “I think” versus “Thinking is occurring?” Am I justified to assume the existence of the “I,” or can I only say an act of thinking is occurring, and thus “thinking is?” But am I so sure thinking is occurring, or is it rather code and programming running through my brain from a great simulator? And so on — the point is that it’s remarkably difficult to avoid assumption…
Values naturally compel us to associate “supporting process” and “skeptical analysis” with disbelief and opposition, for values present themselves as having a right to just “be” — we don’t experience them as needing to be “processed” or examined. In fact, we experience process and examination as delaying the enactment and realization of those values, and that is to contribute to immorality, injustice, restricting freedom, and so on. For us to support process and skepticism requires us to think through the phenomenological experience of our values, and that will naturally feel wrong and immoral. We simply have to know we require process and skepticism, but we probably won’t feel right about it. Values make us feel like we must choose between “the death of values” and “the death of process,” and what good are processes without values? It seems like an easy choice…
The internet currently faces a significant problem: the quality of the vast majority of its information is difficult to determine. This, in turn, makes it very difficult to use the internet to establish a new community and/or “commons.” As will be explored, the technology of NFTs could help with this problem, which raises the question: “How do we establish a Global Commons?”
Hegel tells us in Elements of the Philosophy of Right that he has ‘fully developed the nature of speculative knowledge in [his] Science of Logic,’ which is to say he has different plans for this book. He tells us ‘[this work] is based on the logical spirit. It is also chiefly from the point of view that I would wish this treatise to be understood and judged. For what it deals with is science, and in science, the content is essentially inseparable from the form.’ What does Hegel mean? Admittedly, I find this assertion mystifying, but I take Hegel here not to be discussing literary genre, but instead to be alluding mainly to two points…
Rationality cannot be its own grounding, and so we must trust in something beyond our own rationality. For Hegel, the State provides grounding for Rationality and Philosophy: without trusting something (“nonrational”) like the State, Philosophical progress would prove impossible. The individual subject is confronted with an ‘infinite variety of opinions,’ and it is not immediately clear which of those “opinions” is right or best.⁸ If the individual could have reason to trust the “Now” though, the individual could compare a given opinion with “the State of things,” and that would at least begin a process of sorting out what has a chance of being true from what has little chance at all. No, not with certainty, but Hegel wants us to see that there is “reason to believe” that if the world has mostly become a place where alchemy is dismissed and the Greek gods denied, then we should start from a place where we assume alchemy is fake and the Greek gods nonexistent (that rationality has developed well). Now, perhaps alchemy isn’t false and perhaps there is truth to Greek mythology, but perhaps everything we’ve ever learned is false — how can thought ever begin, engaging in such radical doubt? Hegel understands the problems explored in “Ludwig” and “The Authority Circle,” both by O.G. Rose — we simply cannot investigate everything and we must accept authority. Fortunately, Hegel provides reason in Elements of the Philosophy of Right to assume that the “collective intelligence” of the State is likely more right than wrong.
Dr. James Conant begins his essay with a magnificent example of a layered cake, and notes how most people think of “rationality” as something just “added atop” our more animal nature, with each layer of the cake not impacting the others. There is an assumption ‘that the internal character of the manifold constituting the bottom layer remains unaffected by the introduction of the upper layer.’ Conant elaborates…
What is “right” in abstraction, “The Abstract Right,” is a state of pure, limitless, and total freedom. ‘[P]ossibility is being,’ Hegel tells us, ‘which also has the significance of not being,’ which is to say that it is only in being that we have the possibility of actualizing “Abstract Right,” but by definition being, defined by “determinations,” is where “Abstract Right” is impossible and cannot “be”; hence, being is a paradoxical mixture of “being” and “not being” — it is precisely where “limitless freedom” can be actualized that we find it unable to be actualized. Our lives reflect how we come to terms with this tragedy: ‘since personality within itself is infinite and universal, the limitation of being merely subjective is in contradiction with it and is null and void.’ We are “void” because we feel something, “total freedom” (“abstract freedom”), which is not possible, and thus we feel something which cannot be real. And this easily makes us feel void as well, unless, that is, we come to integrate ourselves with this “lack” and accept its centrality (like a circle that “totally relates,” and yet in accomplishing this relation leaves its middle open). This means there is a “void” which is part of ourselves that could always cause us effacement, especially if we fail to “want” Determination, such as the very Determination that we are a subjectivity which experiences itself as infinite within limits. But if we accept our “determinations,” they can come to be “necessities” for our definition (as the closed-ness of a circle is “necessary” for a circle to be a circle).
A major role of the State (and “present moment”) is to help us face Determination and come to see it as a Necessity for ourselves and for freedom. Given that we are all born with an “abstract freedom” which makes us feel like we ought to be “totally free,” it is not natural for us to see determinates as necessities, which is why we need trust in the State to help us move beyond what comes naturally. By “State,” again, I don’t mean just the government (though that is part of it), but the whole gamut of practices, fields, groups, interactions, and the like which we find ourselves “thrown” into by being “thrown” into “now.” What exists “now” Hegel gives us reason to think exists for a reason, and if it exists “now” there is reason to think we need it to come to the place where we experience “determinations” as “necessities,” and thus experience determinations as things which enable us to be free and fuller subjects. Without those determinations, we’d be worse off.
By “the science of man,” Hume does not mean biology but “common life,” which is much more phenomenological. This seems like ‘the commonsense school of Thomas Reid,’ but critically that is not what Hume means, for ‘[p]articipation in custom does not provide one with an epistemolog[ically] privileged access to truth; nor can it serve as a foundation for knowledge.’ This would privilege the “non-philosophical” in ways that Hume wants to avoid, for ultimately the “dialectical incompleteness of philosophy” is a philosophical thought we must reach through philosophy (privileging “commonsense” would suggest this journey is not needed) (and please note here we can see something similar to Hegel’s “Phenomenological Journey” and its negation/sublation, as Cadell Last discusses). And what exactly is the character of “the philosophical thought” which constitutes a negation/sublation of philosophy (into what Hume calls ‘true philosoph[y]’)?
What is the difference between wine and “standing reserve” in Heidegger? It is strange, but Heidegger speaks of wine as if it were the full realization of grapes, which is to say that without wine grapes would never “bring forth their being” in all fullness. And yet Heidegger has also lamented how the Rhine River no longer inspires us to poetry, because now the river is a potential source for electricity for some end we do not know about ahead of time, as trees are just a source for lumber. The question is this: “What is the difference between wine and lumber?” (between “wine” and “Rhine”). Why does wine somehow “bring forth being” while lumber “reduces being?” The difference seems subtle, almost nonexistent, and is perhaps equivalent to the difference between “tool” and “technology” in Heidegger, as noted by Joel Carini, which has much to do with “care.” Indeed, it seems we must care to choose the difference.
In Hegel, it is suggested that Notion and Nature (as I like to put it) are two irreducible sides of the same coin, a “Monistic Dialectic,” which is beyond strange and yet perhaps less strange than a belief in a multiverse. Hard to say, but the point is that Hegel would have us believe that we participate in our knowledge as our knowledge changes our participation, and so on. How though do we judge whether we are collectively participating in our Notion well and/or if Nature is influencing Notion optimally? If they inform one another, doesn’t that mean we are “sealed inside a bubble” that we cannot test because we are inside of it? The great Michael Polanyi of the Modern Counter-Enlightenment tells us that ‘[we] cannot use []our spectacles to scrutinize []our spectacles,’ which is to say that we cannot measure what we are contained in by what we are contained in, and if the universe is ontoepistemological as Nature/Notion, then how do we inspect or assess its development and our role in it?¹ Perhaps we can’t and should just accept that impossibility; after all, Hegel suggests that the act of measurement changes what we measure, so perhaps to ask the question we are asking is to risk changing the universe (though perhaps for good).
Does that overlap sound absurd? Understandably, and the first step we must make to bring Hume and Hegel together is to clarify that Hume is no simple empiricist, assuming by that we mean someone who follows ‘the doctrine that all knowledge originates from experience and that nothing is in the intellect that was not first in sense.’ But even this might apply to Hume (as it does to Aristotle and Aquinas) if we do not by this assume ‘that necessary propositions are analytic’ — unfortunately, that is almost always what is meant today when people speak of “empiricism.” ‘No philosopher has suffered more from the narrowing of vision that comes from the modern habit of epistemological classification than Hume. He is commonly identified as an empiricist and indeed as an especially clear case of what radical empiricism is.’ In truth, though clarifications must be made, Hume is far more like a phenomenologist (even a “Phenomenological Pragmatist,” as I like to discuss), and with that move we can start to move Hume and Hegel closer together…
Our age might be remembered as the age which chose determinism. Even as the idea faces objections in science where quantum notions like “Heisenberg’s Uncertainty” are considered, determinism seems to rise in culture (yet when determinism rose in science thanks to Newton, it was dead in culture). To obliquely reference Belonging Again, perhaps we are “choosing determinism” because “givens” have faded and we want to regain a “thoughtless” source of direction so that we don’t mentally suffer? Hard to say, but perhaps a reason why determinism is appealing is because it seems to guide our moral lives in light of the loss of “givens” and “values.” Where religion is in decline and values debated, it’s hard to know what constitutes “right action” and what we ought to do (a problem discussed in “The Value Isn’t the Utility” by O.G. Rose), but if we can claim “x situation arose due to forces outside a person’s control,” then it feels like we ought to address the situation. Ethics feels like it can be guided by determinism, which means we can regain a degree of direction, which we are profoundly hungry for in this age when direction seems impossible (due to what some have called “The Meaning Crisis”).
Desire’s Masterpiece
Considering Conspiracy & The Subject by Hunter Coates
As considering “madness” a state of irrationality keeps us from having to face the more disturbing reality that madness is a product of rationality (as described in Belonging Again), so considering conspiracies as a product of “losing touch with reality” saves us from facing the truth that our natural experience of (A/B) reality makes us vulnerable to conspiracies. Rationality cannot objectively “ground” itself and must settle at best with an “Absolute Knowing” of its own limitations (as we learn in Hegel), and that means it is also possible for rationality to find itself seeing connections between a hand sign, a government official, the Vatican — and thus a conspiracy is born. In other words, rationality has no limit that keeps it from seeing connections anywhere and in everything (after all, there is no “absolute grounding” to stop this “unbound connecting”), and so once Pandora’s Box is opened (as we spoke about with Lorenzo, Ep #36), rationality never sees reason to close it. In fact, just the opposite: once the box is opened, connections start to appear everywhere…
Leibniz frames his analysis in terms of a comparison between ‘magnitude and situation,’ suggesting that ‘as magnitude is to arithmetic, so situation is to geometry,’ and furthermore claims that if we make “situation” and/or geometry ‘the primary element, many things easily become clear, [things which] are more difficult to show through the algebraic calculus.’ Leibniz is critical of Cartesian approaches in Geometry and claims that a problem at the foundation of mathematics ‘is a problem of reduction to algebra and induction back to geometry (or situation).’ Critically, we should note that ‘[s]ituation is particular, and therefore contextual, and novel. Unforeseen postulates and assumptions are situational.’ In other words, situation entails an inherent “one-of-one”-ness we must keep in mind, a reality Leibniz doesn’t think ‘Cartesian approach[es] to mathematics’ can account for.
It’s true that we might need to shut down AI, but I also see reason to believe that AI might bring about transformations which help humans be more human. AI could bring about numerous outcomes that are completely unpredictable and ironic: for example, what if AI renders a large percentage of the internet unusable? If everyone knew all videos could be Deep Fakes, that uploading an MP3 means people could copy our voice and use the sample for an AI which could scam our parents, that posting a photograph means we could be used in adult videos — perhaps people will stop or radically reduce using the net? Might the net become too dangerous? In this scenario, the very thing which was thought to make the internet ubiquitous could be what deconstructs it or changes it radically. People might still use the internet for Zoom calls and emails, but content, videos, information, news, pictures, etc. might completely lose value. Perhaps there will be a surge in the popularity of art galleries, precisely because people will grow tired of seeing images that they don’t know are human (which is to say they become tired of feeling anxiety whenever they study art). Live music, conversation groups, drum circles, learning an instrument to play live jam sessions (not covers), poetry readings of poetry written there, in the group — all sorts of new “(a)live” art experiences could arise.
Logic for Hegel works through logic as traditionally understood, which can be thought of as a series of exchanges (a point “High Root” made in “The Net (49)”): A leads to B, which leads to C, which leads to D, and so on. Javier made the point that “the logic of exchange” can be problematic when applied to relationships, everyday life, and the like, because quickly we can start feeling like “because I did x, they should do y,” or “x means y should happen.” Exchange can become a problematic notion, ripe to cause conflict. For Javier, this means we need to “make an exchange in terms of changing the terms of exchange,” which is to say, “if you love me, I will not think that if I do x, you owe me y” — love is hence an opportunity for exchange to be negated/sublated as exchange (some wordplays also came up in the conversation, thanks to Michelle, on how “exchange” is “ex-change,” meaning we “don’t have to change” because we “externalize change” onto others and suggest they are morally obligated to change; change becomes our “ex,” per se).
Religion historically speaking has been central to social formation, and so to investigate religion will I think help us understand what society today needs to look like if a “Community of Absolute Knowers” and Children might be possible. How should we think about religion, which in turn will help us think about the 21st century, after the collapse of “givens” and the impossibility of returning to them? What is society after we must find “belonging without belonging” and “givens without givens?” Well, answering this will be aided by better understanding what role religion might play, a question I was honored to be invited by Dimitri of Actual Spirit to consider with Layman Pascal and Cadell Last.
Looking up, there is a sense in which we trust the sky is overhead, but this trust seems different from when our brother says, “I’ll see you tomorrow at two, downtown.” We indeed must trust we will see him, for it isn’t yet a fact that he will be there at that time. If we do not trust our brother to be true to his word until it becomes certain that he is true to it, we do not really trust him; rather we set ourselves up to acknowledge facts (“he is here at two” or “he is not here”). Considering this, trust isn’t so much earned as it is assessed, and furthermore it generally can only be assessed accurately in circumstances of clear, verbal exchange. Trust can be confirmed and given but not really earned, for relationships themselves can only be confirmed and given. At the same time, there is a kind of “trust as living” which seems relevant, as well as “trust as extended/given,” so how might these two trusts interact and interrelate? A good question, one that will lead into critical sociological considerations…
The answers seem to have something to do with Children, identifying with lack/otherness, and the corresponding institutions needed to incubate these “ways of being,” which will come with various skills, practices, and abilities. For me, to be very vague and without elaboration, a list of possible “skills” and “practices” (think football and Plato’s gymnasium) which will be needed for “social agents” as Children to be possible…
Sounding like Hegel, Polanyi tells us that ‘[t]he possibility of error is a necessary element of any belief bearing on reality, and to withhold belief on grounds of such a hazard is to break off all contact with reality.’ An “Absolute Community” organizes itself around this instead of “dogmas,” per se: “Personal Knowledge” and “Proper Confidence” describe the epistemology of its “first principles,” a relation that I think is captured beautifully where Newbigin writes: ‘Only statements that can be doubted make contact with reality.’
Skepticism has been conflated with disbelief and died as a result. The skeptic is not someone who dismisses everything he or she is told (outright), for the true skeptic is skeptical of even his or her capacity to dismiss. And the skeptic is not the cynic, who has stopped questioning and taken on assumptions like “there is no truth” or “everyone is lying.” Rather, the skeptic is the individual who asks questions for the sake of the truth, not just for the sake of deconstructing it (though determining the truth entails deconstructing false truths). Furthermore, the skeptic questions his or her own standard by which truth is determined, not because there is no right standard, but because standards require perpetual refining. “The mode” through which the skeptic asks questions is different from the cynic (though do note that the individual who shifts between modes and fails to understand skepticism properly is someone who will struggle to convince others when he or she is genuine). Lastly, to be skeptical is not the same as being untrusting; to be skeptical is to be aware that the truth is hard to fight for and know.
It is difficult to think systems, a radical challenge to consider subjects, and Cadell Last has decided to think both. Systems shape subjects though, as subjects change systems, and that means the topic is active and changing. To make matters more complex, Cadell Last considers the possibility that the very act of considering “systems and subjects” actively participates in the formation of both, and so thinking about that meta-thought shapes the meta-thought, which shapes the meta-thought — like a fractal open to infinite recursion. How do we even begin thinking something so alive, dynamic, and fractal? This is the challenge Cadell Last has put himself up to facing in Systems & Subjects, where he attempts to elaborate on why subjective experience is so problematic, why we must think systems and subjects together versus apart, and even goes so far as asking us to imagine ‘the self-certain cogito [realizing] itself to be nothing but the pure thought of the unknown where there is only pure possibility.’ We cannot be so sure that ‘Absolute space and time [won’t] reveal itself to be nothing but the Absolute concept,’ capable of creating radically different constraints than what we are habituated to, which would mean that the Absolute concept could also beget a reality in which subjectivity is effaced — as seems to be what we as Moderns are bent on accomplishing.
This paper will also draw a distinction between Universal Basic Income and Basic Income. UBI is a system in which everyone (poor, rich, whoever) receives a certain amount of money every year, while BI is a system in which only people who make less than a certain amount of money receive money from the government. The first isn’t means tested, while the second is means tested. Currently, I think most people who hear about UBI conflate it with BI and assume UBI is means tested, because by “universal” people seem to mean one of two different things.
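To make the distinction concrete, here is a minimal sketch of the two schemes as just described, where the payment amounts and the means-test threshold are purely hypothetical numbers chosen for illustration (they are not figures proposed in the paper):

```python
# A minimal sketch of the UBI/BI distinction described above.
# The amounts and the threshold are hypothetical, chosen only for illustration.

UBI_AMOUNT = 12_000            # hypothetical annual payment, everyone receives it
BI_AMOUNT = 12_000             # hypothetical annual payment, only some receive it
MEANS_TEST_THRESHOLD = 30_000  # hypothetical income cutoff for the means test

def ubi_payment(annual_income: float) -> float:
    """Universal Basic Income: not means tested, so income is ignored."""
    return UBI_AMOUNT

def bi_payment(annual_income: float) -> float:
    """Basic Income: means tested, so only incomes below the threshold qualify."""
    return BI_AMOUNT if annual_income < MEANS_TEST_THRESHOLD else 0.0

# A billionaire and an unemployed person both receive UBI,
# but only the unemployed person receives the means-tested BI.
print(ubi_payment(1_000_000), bi_payment(1_000_000))  # 12000 0.0
print(ubi_payment(0), bi_payment(0))                  # 12000 12000.0
```

The only structural difference between the two functions is the conditional check against a threshold, which is exactly where the conflation described above tends to occur.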
If we watch a television program about a person who visits Uganda to start an orphanage, when we encounter someone in real life doing the same, we might wonder if the person is going because the person believes it’s the right thing to do, or if the person is going because he or she saw a television program about other people taking the trip. Furthermore, it’s clear by the television program that the person who went to Uganda was honored and respected for “being a good person,” and in order to “be a good person” ourselves, we too may visit Uganda. The television program has “given us a script” which we can follow to “be a good person,” and not only does this help us deal with the existential anxiety of wondering if we are in fact a good person, it also gives us a sense of what we can do and accomplish with our life (that will be respected, admired, help make the world a better place, etc.).
There is no evidence without definition, and to create definition is to create the possibility of evidence, but that very definition might render the evidence meaningless. Furthermore, a theory can be defined out of vulnerability. Communism and Capitalism, as theories, can never be “seen” empirically or directly, if you will (both are frameworks and ideas). Yes, they are ideas given definition by writers like Adam Smith and Karl Marx, but who is to say my reading of Smith is the right reading of Smith? Who is to say my reading of Marx is the right reading? There is always room for doubt, and so always room for questioning if an implemented “version” of Capitalism is in fact the “real” Capitalism. Furthermore, even if I do have the right “reading” and/or understanding of Capitalism, who’s to say my implementation and realization of it is a valid realization or “fitting”? Perhaps the idea is lost or deformed in the process? Perhaps I implemented the theory correctly in some ways and not others? Perhaps not everyone did what they were supposed to do according to the theory? There is always room for doubt, and so always room to claim that “Capitalism wasn’t tried” or “that isn’t Capitalism.” In this case, any evidence of failure that arises will not be evidence against Capitalism (as I define it), but against “something else” (whatever that might be).
Oppenheimer is a record about Robert Oppenheimer, which means the concerns of the movie about Oppenheimer’s record are a meta-concern about the movie itself. What legacy does the movie leave? What does the movie suggest that Dr. Oppenheimer left us? To have children is to start a “chain reaction” of events that spread into the future; likewise, the invention of the atomic bomb has forever changed the world. Is a life well lived only if it can control what it births? Must we be willing to claim “the chain reaction” which our lives set in motion? But how can we know what will happen next?
Quoting Aristotle, Josef Pieper writes in Leisure, the Basis of Culture: ‘This is the main question, with what activity one’s leisure is filled.’ What we do in our “free time” is who we are, not what we do to make money (though we often say, “We’re an engineer,” “We’re a doctor,” etc.), and if we have nothing to do in our freedom, we too are nothing. “Leisure,” as Josef Pieper means it, is not a state absent of work though, which is what we might think given how the term “leisure” is often used to refer to “free time” after work (a notion that makes it nearly impossible to understand what Aristotle is getting at). ‘Idleness in the old sense […] is really ‘lack of leisure.’ There can only be leisure, when man is at one with himself, when he is in accord with his own being.’ “The Garden of Eden” was a place of leisure, for Adam could “walk with God” and thus was at home with Adam’s self (and please note “the man” and “the woman” were not divided into “Adam and Eve” until after “The Fall,” suggesting even further “fitted-ness”). And yet there was work in Eden (just not “toil”), for Adam was responsible for the animals and the garden. Eden suggests that leisure is a state of our most human work, and yet that work is profoundly “useless” (a point which brings to mind Julien Benda and his Treason of the Intellectuals). Isn’t that horrible? Only perhaps if humans were never “an end in themselves” (which itself would be a horrifying state).
Negative Theology teaches us that it is better to say, “God doesn’t exist,” versus claim “God exists,” because what we mean by God is so far off from God’s actual being that speaking of God’s nonexistence is closer to the truth. In Blood Meridian, this logic applies to evil: evil is so much worse than we realize that it’s better to say “evil doesn’t exist” versus say it does, because what we mean by evil is but a shadow and falsity. Judge Holden is clearly a Satan figure, but it is also more accurate to say Holden isn’t Satan, for whatever we think constitutes Satan (perhaps pale, towering, hairless…), Satan isn’t just that. Evil is unknowable, as the Judge sets out to make the universe and everything living accept.
…As we learn from Louis Dumont, there is reason to think that people naturally think in hierarchies, and since they cannot easily rank what cannot easily be understood, it is natural, especially in regard to what is considered a practical matter, that people rank and value jobs they understand over jobs they don’t understand, even though those incomprehensible entrepreneurial and creative endeavors create the wealth that creates jobs, rather than simply distribute wealth and employment. Wait, who was Louis Dumont exactly? A fascinating thinker and author of Homo Hierarchicus who might provide resources for us to see why humans are naturally “low order,” which means we are naturally inclined toward Discourse which favors simple causality versus “high order” Rhetoric which favors dynamic creativity (as we arguably require today). If we are naturally hierarchical, then we naturally favor the creation of systems in which those hierarchies can be created, known, and/or even rewarded, which means we might need a State that can do this for us. But if the State can do this, even if we create and win the hierarchies, the State has power over the systems which make these hierarchies possible and/or real, which means the State (government plus corporations) really has the power. This suggests why a human nature which is hierarchical could favor problematic Discourse, and I do think Dumont makes this case powerfully, suggesting further reason why we must be intentional about Rhetoric and “being high order” (which is unnatural).
If I say, “God cannot be known,” I have made a claim about God that I can only make if I know God; if I say, “God cannot be talked about,” I have talked about God; if I say, “The finite has no connection with the Infinite,” then I have made a claim about the Infinite that would require me in my finitude to have a connection with the Infinite in order to say meaningfully. If I say, “We can only say what God is not,” then I have said something about “Who God Is” (in trying to avoid speaking and thinking about God wrongly, I perhaps speak and think about God wrongly)…
Still, why should we think that Rhetoric could be so powerful? Sure, perhaps McCloskey has ruled out everything else but Rhetoric, but are there positive reasons to think of Rhetoric as so culturally and intellectually significant? McCloskey does provide these arguments, but I would like to focus on a favorite book of mine, Kindly Inquisitors by Jonathan Rauch. This book defends free speech, but it does so not merely on grounds of “human rights” but also by helping us see how free speech functions to help gather and test knowledge. In this way, Rauch’s argument is incredible.
Let us pretend though that I successfully describe all of a cup in its materiality: have I succeeded in describing it, or must I also include my mental experiences of it? Since cups are always thought about and/or described by some mind, there is no such thing as an experience of a cup that isn’t “colored” by mental activities; hence, shouldn’t description include phenomenology, psychology, etc.? It would seem that we can describe a thing: in thought, in perception, and in both. Some aim to describe things in perception alone; others, in thought alone; others, in both. And these three schools of description might think the other schools get it all wrong (it’s only natural).
To make an overarching point against all “self-correcting market equilibria,” Keynes focused on employment and more particularly the possibility of “involuntary unemployment,” which is what we will focus on here. ‘The traditional theory maintain[ed], in short, that the wage bargains between the entrepreneurs and the workers determine[d] the real wage,’ Keynes wrote, which meant that wages resulted from negotiations, and if wages were low, it basically must have been the case that negotiations collapsed for some reason.²⁹⁸ ²⁹⁹ ‘If this [was] not true,’ Keynes added, ‘then there [was] no longer any reason to expect a tendency toward equality between the real wage and the marginal disutility of labour.’³⁰⁰ To speak very generally, following the “Classical Model,” “involuntary unemployment” was an impossibility, for if someone wanted a job, it had to be available somewhere. But witnessing the Great Depression between 1929 and 1939, it seemed ridiculous to Keynes that this could be explained simply by claiming that people wanted to be unemployed and/or that people were unwilling to work for wages being offered to them. Something deeper and more systemic had to be at work, and the target of Lord Keynes was a theory of Capitalism that suggested it necessarily returned to a state of equilibrium. Keynes arguably never proved Capitalism wouldn’t ultimately self-correct, and he himself acknowledged that, but he did prove that the downturns could be far longer than any of us imagined. And in the long run — to allude to a line from Keynes that even children know — we’re all dead.
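To make the target concrete, here is one standard textbook rendering of the classical postulates Keynes was attacking (the notation and simplification are mine, not the essay’s or The General Theory’s):

\[
\frac{W}{P} = MPL_N \qquad \text{(the real wage equals the marginal product of labour)}
\]
\[
\frac{W}{P} = MDL_N \qquad \text{(the real wage equals the marginal disutility of labour)}
\]

If both conditions hold at the bargained wage, then anyone willing to work at that wage is already employed, and “involuntary unemployment” is ruled out by construction, which is exactly the conclusion the Great Depression made untenable for Keynes.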
It is better to know history than not know it, but while those who feel history also know it, those who know it don’t necessarily feel it. But how do we come to “feel history”? Shelby Foote believed that history was best learned through narrative, because a good story was one in which we forgot we were reading a story and suddenly and mysteriously were “in it.” Stephen Cushman in his book Bloody Promenade discussed Civil War reenactors and a strange moment when, during a performance, the reenactors forgot they were not actually in the Civil War, and I believe a similar phenomenon is what great writers accomplish with their readers. Cushman wrote that reenactors ‘reenact[ed] in order to lose track of time, to fool themselves, to experience a mystical moment when the seemingly impermeable boundary between the present and the past suddenly dissolve[d].’ According to Cushman, there was a moment when ‘[s]uddenly the reenactor […] [was] but a participant in an event unfolding in the present’; similarly, I think great history books are ones in which readers are suddenly ‘a participant.’ For Foote, history books only accomplished this by being narratives, because only stories possessed the power to make people forget they were only reading: when reading a list of facts, there could be no collapse of ‘the conditions that determine[d] what [was] or [wasn’t] truly authentic.’
If we save today and contribute to the collapse of demand, we will not necessarily be able to use our savings to consume tomorrow. If we assume, as CE does, that what we save today can always be consumed tomorrow, then basically as long as we save today, we can do no wrong: things will work out in the end. If we save, we are safe. It is this idea that Keynes opposes, for it suggests that “preparing for the future” necessarily “makes the most of today,” whereas Keynes argues that the act of preparing for the future could be precisely why the future never comes. We cannot be like religious believers who ignore or rationalize what occurs presently because, ultimately, “we’ll all be saved”: Keynes will not allow that kind of escapism. The picture is much more complex, as I would argue is the case in Christianity, where it seems like the goal is to “get to Heaven,” but really the goal is more a reflection of how “New Jerusalem is on earth.” What we do today matters, but then the obvious question arises: What should we do? For Keynes, the answer was basically that we must be “investors,” “savers/suppliers” (seeing as consumption helps supply) per se, not just one or the other (likewise, Christians should be “earthly/heavenly”-minded, just not worldly-minded). We must be dialectical, but I will abstain from discussing Hegel today.
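To see the worry in miniature, here is a toy Keynesian-cross sketch (my construction for illustration; the numbers and the model are not the essay’s or Keynes’s own): households consume a fraction c of income, investment is fixed, and income settles where spending equals output.

```python
# Toy Keynesian cross: households consume a fraction c of income,
# investment I is fixed, and income settles where Y = c*Y + I.

def equilibrium_income(c: float, investment: float) -> float:
    """Solve Y = c*Y + I, i.e. Y = I / (1 - c)."""
    return investment / (1 - c)

for c in (0.9, 0.8, 0.7):               # households try to save more as c falls
    Y = equilibrium_income(c, investment=100)
    S = (1 - c) * Y                      # aggregate saving
    print(f"consume {c:.1f} of income -> income {Y:.0f}, saving {S:.0f}")

# As households try to save more, equilibrium income falls from 1000 to 500
# to 333, while aggregate saving stays pinned at 100 (equal to investment):
# "preparing for the future" by saving more only shrinks today's income.
```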
Keynes convincingly makes the case concerning ‘the social dangers of the hoarding of money’ and the need for us to orbit a new ethic around “investment” (which entails “saving money,” while “saving money” doesn’t necessarily entail “investing”). As a result, I do indeed think Keynes establishes that ‘[t]he absurd, though almost universal, idea that an act of individual saving is just as good for effective demand as an act of individual consumption, has been fostered by the fallacy.’ When we talk about “savers,” it’s easy to fall into an uncritical acceptance of “the morality of saving money,” but when we talk about “hoarders,” our thinking can dramatically shift, suggesting that we subconsciously understand that savings must ultimately transition into investing. If Keynes is able to convince us of at least this, he has made the case he seems most concerned to make.
Having discussed the conflation of “the money multiplier” with “the investment multiplier,” let us now address another common accusation, that Keynes was a Socialist and/or a Marxist, which will return us to “the multiplier question” (the views of Keynes on that subject are, I think, critical to grasp if we are to understand why he is ultimately a Capitalist). To this point, at the end of Chapter 12, Keynes says something of interest that can help illuminate why we shouldn’t use the term “Keynesianism” as a blanket term for all government spending (though I myself am guilty of making this mistake). Keynes writes…
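As a side note on the conflation just named, the two multipliers can be kept apart with standard textbook formulas (the notation is mine, not the essay’s):

\[
m_{\text{money}} \approx \frac{1}{r}, \qquad m_{\text{investment}} = \frac{\Delta Y}{\Delta I} = \frac{1}{1 - \text{MPC}},
\]

where r is the reserve ratio banks hold against deposits and MPC is the marginal propensity to consume. The first describes how base money expands into bank deposits through repeated lending; the second describes how an initial round of investment spending raises income by a multiple as it is re-spent. Conflating the two makes it easy to mistake monetary plumbing for the demand-side argument Keynes is actually making.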
It was assumed by many economists like Keynes that as automation and technology spread, we would have to work less, but McKerracher notes that this has not been the case. Far from a ’15 hour work week[] […] we find ourselves working more than ever.’ Why? How did this happen? First, Keynes basically thought that the benefits of technology would spread out to everyone (a perhaps fair thought before Artificial Intelligence), but it is clear that the benefits of technology seem to be increasingly garnered by the owners of capital. This doesn’t mean we don’t benefit from AI at all, but it does suggest that there is a risk of the working class being cut out of labor entirely without access to capital, forcing them to be stuck in a poverty they couldn’t easily escape. This is one problem we face with automation, but another problem is that we are in danger if we assume that “more free time” thanks to automation will necessarily translate into “more leisure.” This is a problem which Timenergy can help us understand and articulate: we can have all the “free time” in the world, but if we have no timenergy, it will not matter. Gaining back timenergy is not just about reducing work hours (though that is part of it), but regaining control over our entire “existential horizon” by which we approach and engage in the world. Socioeconomic transformation must start with existential transformation. Timenergy is not just about how we live but about life itself…
Let us now explore the possibility that Keynes is right about DEH but overly optimistic about the State’s ability to create demand (which would mean WWII helped us out of the Great Depression more so than FDR). This could be the worst of all possible worlds, but I fear it’s a possibility we must take seriously. It could be the case that Conservative economists are correct about the inefficiency of the State, while more Liberal economists are correct that the market isn’t always self-correcting. To use Hayek as a placeholder for Conservative thought, it could be that Hayek was right about the State but let’s say wrong about “market self-correction,” while Keynes was wrong about the State but right about DEH…
Where there are no limits, totalization becomes possible, and where totalization is possible, so also might there be totalitarianism. David Hume realized this as a problem with philosophy: since we can philosophize about anything, there are no limits to philosophy, and that means governments, rulers, etc., could use philosophy to create for themselves a noncontingent and unlimited basis for their power. When government bases its legitimacy on the food it provides for a neighborhood, government is limited in its power by the degree it can provide that food, and furthermore its power doesn’t extend beyond those for whom food is provided. But where government’s legitimacy is based on justice, freedom, or a philosophical value (possible because of the spread of “philosophical consciousness,” as Dr. Livingston describes Hume), then there is no necessary limit on power. Yes, we might freely impose upon ourselves a limit (say if we understand the role of Determinations that we freely choose in making possible Necessities, as discussed with Hegel), but this act will easily seem strange and “nonrational,” making it hard for most people to do. As a result, where philosophical value becomes primary, a people are likely to find themselves unable to impose limitations on the power which is legitimized by that power. “Unlimited power” then becomes possible.
Illich discussed ‘modern institutions in terms of rituals,’ noting how ‘schooling [was not seen as] a technique whose effectiveness ought to be assessed,’ which meant for Illich it should be ‘analyzed as a ritual because only then did it become evident that the major effect of these institutions was to make people believe in the necessity and goodness of what they were supposed to achieve.’ Institutions made themselves into rituals which could not be critiqued, only faithfully practiced, a transformation which perhaps became far easier when people could communicate and lay out orders without face-to-face interaction (for face-to-face, it would be harder to judge the workings of institutions according to only numbers and abstract metrics, for we would experience and face the suffering they caused when dysfunctional, a point which suggests Medical Nemesis). Furthermore, with the removal of “the face” (a point which suggests Levinas), we train ourselves and our thinking out of the habit of thinking subjectively, personally, and contingently, which means we train ourselves out of the Unplanned into the Preplanned. Illich would have us use “tools” in a way that doesn’t remove “the face” (perhaps we could discuss “Facing Tools”), for the loss of “the face” is the loss of the human, which makes us susceptible to treat institutions like “rituals” we believe in.
Along with critiquing “thoughtless uses of technology,” throughout all his works, Illich can be seen as critiquing and opposing the ‘fetishism of rules and norms’ in favor of a “Good Samaritan model,” which isn’t merely ‘a stage on the road to a universal morality of rules.’ Illich says that he has ‘chosen […] to write as a historian curious about the undeniable historical consequences of Christian belief,’ and Illich believes that beliefs (say “absorbed” from writing and “ritualistic institutions”) make history. ‘Belief refers to what exceeds history,’ Illich says, ‘but it also enters history and changes it forever.’ Belief is not merely a thing we choose but functions as the horizon on which things are defined as themselves, and yet rarely do we think about our beliefs. Illich highlights how this impacts our understanding of the neighbor (and thus the orientation of our moral compass), for today we believe that ‘[m]y neighbor is who I choose, not who I have to choose.’ This changes everything and arguably puts ethics in service of my tribe: ethics nearly becomes anti-ethics. Furthermore, if the neighbor is someone I choose, the neighbor is someone I can plan for, and thus Preplanning defines my relationship to my neighbor, which then makes it possible for institutions and systems to care for my neighbor, which of course seems rational and “effective” to do…
On October 9th, we discussed Plato’s Republic, and Mr. Murphy drew attention to the fascinating start of the book where Socrates and Glaucon are ‘[going] down to the Piraeus […] with Glaucon […] to pray to the goddess.’ Suddenly the slave boy of Polemarchus grabs the cloak of Socrates and orders him to wait for Polemarchus; they do and Polemarchus acknowledges that Socrates and Glaucon are in a hurry ‘to get away to town,’ but then basically says they have no choice but to stay because Polemarchus and his men are stronger than them (‘either prove stronger than these men or stay here’). It’s a strange way to start the dialogue, and I think Mr. Murphy is right to draw attention to it, for it might be a way to frame everything that follows. Socrates replies to Polemarchus that there might be ‘one other possibility […] our persuading you that you must let us go.’ And Polemarchus replies with an answer that might help us understand the whole point of Plato’s Republic…
In November 2023, Michelle and I had the pleasure of teaching a course at Parallax titled “Look at the Birds in the Air,” which orbits around a distinction between “planned” and “prepared.” The class claims we need to “unplan our lives,” which sounds strange, admittedly, but the notion is that we today are very dependent on plans and planning, and though that emphasis was perhaps acceptable for most of human history until now (2023), given the rise of Artificial Intelligence, the collapse of “givens,” the growth of complexity, Pluralism, and the like, we must begin to shift our focus to being “prepared for the unpredictable.” Again, following Hegel, this doesn’t mean we were wrong to focus on planning like we did, but that we are now reaching an end to that “historic episode” and need to “negate/sublate” it into a period where “being prepared” is more primary. We will of course still make plans and require them, but these plans will emerge out of a state of preparation versus living as if “planning” and “preparing” are similes (as we’ve been able to do for most of history without great trouble). But things are shifting, and that shift would have us become more like what Ivan Illich thought we needed to become…
Is the world getting better or not? This question seems very straightforward, and yet every day I find it more mystifying. We attempted in “Hegel’s Justification of Hegel” to explain why for Hegel the “Now” is justified as a basis for thought and philosophy, that we shouldn’t question if we should “assume” today in our thinking, for otherwise thought can never get started. More can be said on the case, but ultimately what this leaves us with in Hegel is a notion that “today is better than the past somehow,” and yet the past was also necessary for us to be who we are today. Hegel wants us to focus on “The Now,” and yet doesn’t that somehow suggest today is better than the past? Doesn’t that make Hegel a Progressive? In one way, yes, and yet Hegel will also not let us “think the future,” only be “radically open” to it. But if the future is better than the past (which seems to follow), shouldn’t we think the future? It seems that way, but it’s also not so simple.
Why is this ethos so problematic? It disables people and makes them dependent, and even if I could call a mechanic to fix my car, by having the ability to fix it, I am more able to deal with the situation of a broken car. There is much talk today on the importance of decentralization in currency, but we see here a possible “decentralization of ability” that Illich realizes is missing in favor of a “centralization of ability” in experts and technology. Since this has occurred, people are then dependent on those systems and more likely to go through life afraid precisely because they are disabled, and then the only way to address those fears is to turn to the system to help. And so the system causes us to live afraid which we then must turn to the system to overcome (a dynamic discussed in “Concerning Epistemology” by O.G. Rose), which is to say the “centralization of abilities” changes our entire experience of the world to favor the system and Discourse. This brings to mind what Ivan Illich discusses involving water, which has become ‘an observed fluid that has lost the ability to mirror the water of dreams.’ ‘Water can no more be observed,’ he adds, ‘it can only be imagined, by reflecting on an occasional drop or a humble puddle’ — but unfortunately we may have also lost imagination (considering “The Great Stagnation”), meaning we are trapped in a world of H2O (without really being “trapped,” because we cannot imagine anything different (thankfully?)). Illich here is not saying that water isn’t H2O, but that we’ve lost the ability to see things as “two things at once” (a capacity to perceive paradox that seems required for creativity and honoring “lack”); at the same time, he writes on fire that ‘the fact that we cannot separate our experience of passion from the element of fire and cannot imagine fire without passion in no way implies that the two are at all times perceived as versions of the same principle,’ which means we should not reduce both of these experiences to say “causality”; rather, we should be able to see in a flame both the principles of “causality” and “creation,” science and poetry, etc. at once…
Belonging Again (Part I) discussed the topic of Absolute Knowers, Deleuzian Individuals, and Nietzschean Children, all phrases and terms meant to refer to the same kind of subject who can live according to values which are not socially supported, lack “givens,” and yet still have authority over the subject. They are created by the subject, and thus run the risk of being arbitrary, and yet manage to prove empowering. The question of Belonging Again was if “created givens” could ground and orient the average person, and on this dilemma the question of “character” was explored. Here, we will focus on the life and thinking of Simone Weil, who I believe provides an important piece of the puzzle, though this will also lead into considering Nietzsche. We will consider Weil’s life in the context of “The Meta-Crisis,” and suggest that we should focus on neurodivergence to address it…
“Complete economics” is a fiction lurking behind both Republican and Liberal debate, for both are suggesting that if only we follow their ideas, the economy would work. But ideas and systematic adjustments might only lead to wealth-creation and economic strength to the degree “stimulating demand” can work — but I believe there is a limit to this, as evidenced by “The Great Stagnation.” But so long as “complete economics as fiction” lasts, politicians can derive authority and legitimacy from that fiction (perhaps like religion), and in this way perhaps the State doesn’t want an “adequate theory of value” which unveils “economic incompleteness,” for that would threaten its legitimacy. Furthermore, the average citizen might not be ready to face the abyssal reality that we have a lot less of an idea of what we should do so that society can thrive than it seems…
…As we’ve been circling, in my view, one of the most important questions in economics is if freedom causes prosperity or if prosperity causes freedom. Yes, it’s clear in free markets that the answer is “both,” for once the “cycle” and “market” get going, more freedom causes more invention and creativity which causes more prosperity, but what gets the “market” going in the first place? This question is reminiscent of our inquiry, “How does anyone leave Plato’s cave on their own?”; here, regarding growth, we are asking whether the first step is freedom or prosperity. This is critical, for we are basically asking “What causes demand?” If demand, as Keynes has argued, is “the necessary precondition” for free markets to be possible, then free markets cannot operate unless “demand” is first present and maintained. So what creates and maintains demand? Freedom or prosperity? Which comes first?
Worldviews orientate emotions as emotions orientate worldviews. There is a dialectic between how we feel about the world and how we see it: what we feel about x impacts how we think about x, and vice-versa. Our heart and mind aren’t separated (though culture often depicts them dualistically); rather, they are constantly engaged in a conversation, shaping and forming each other. We aren’t creatures with a “heart and mind” so much as we have a “heart/mind,” per se (as we are more so creatures of “feelings/beliefs” than “feelings and beliefs”). Yes, the two are distinct, but they are also so constantly engaging with one another that it’s difficult to divide them (such as when “thought” and “perception” combine over a given entity like two rivers running together, as discussed in “On Thinking and Perceiving” by O.G. Rose). While a person does a math problem, it is easy to identify the mind as distinct from the heart, for the mind takes a primary role, as it is easy to identify the heart as distinct during romance. But even in these situations, the heart and mind are conversing (perhaps more so in some situations than in others), for I might do math because I love it, as I might date a certain person because I think she is special. Still, though there are certain situations where either the heart or mind is “in the driver’s seat,” that doesn’t mean the other isn’t present or informing the situation at all…
The first set of essays in The Map Is Indestructible is meant to highlight ways in which humans naturally “map-make,” though I don’t mean to suggest that there are not other means (such as those described by Žižek) — my hope is only to highlight a few case studies to suggest a larger point and habit (as will be the case with all sections of this book). Also, it should be noted that if we try to rebel against “theory” and “script” by not creating either (for example), then we can find ourselves existing in a world we do not understand, for we require a “map” to make sense of the world. It is thus not an option to be an “atheorist” or “ascripturist,” per se (though please note our “map” will try to convince us we are indeed in a way “a-map” to conceal itself), and yet we might be tempted to try this position by avoiding the act of “putting forth” a new theory or “script” and instead critiquing the old theories and “scripts” as being oppressive, manipulative, inauthentic, or the like (as we will always have justified and “rational” reason to do, for indeed these critiques can be accurate). But this is the mistake of the rebel who is not ready to rule, who deconstructs what exists without an idea of what he or she will replace it with, which might prove like the French Revolution, leading to a Reign of Terror. Furthermore, if this book is correct that “mapmaking is human nature,” then to deconstruct various “maps” (perhaps in the name of liberation from them) will only prove to make space for a new “map” which could similarly be rebelled against — on and on. “Escaping maps” is not to address them, which is to say our challenge is far harder (we must play with fire).
A prime way humans preserve ideology is with “certainty,” and yet we learn in “On Certainty” by O.G. Rose that certainty is mostly impossible, so what’s going on? Well, it might be that “certainty” isn’t so much a term that describes the likelihood that an idea is right, but instead the term might have more to do with how much we need an idea to be right to hold our world together: what we are certain in might have less to do with “what we are confident in” than with “that which is too central to be false.” Certainty might reflect an emotional nervousness more than it reflects a product of well-earned investigation, and so central is the idea that we are certain about that we might want the idea “to vanish,” per se, which is to say we want to remove it from focus, for what we focus on is that which we might think about, while what we rarely focus on can avoid being considered (“practically invisible”)…
What is intention as opposed to motivation? When I intend to visit my friend, it could indeed be the case that I am also motivated to see my friend, but if all “intention” means is “motivation,” there doesn’t seem to be a need for both terms. Perhaps when people talk about “being intentional,” they mean something like “though you are motivated to do x, y, and z, you should be more motivated to do x.” The word “intention” could suggest a “focused motivation” (“motivation but with emphasis”), though I don’t deny that “intention” is often used as a simile for “motivation.”
How does philosophy begin? For Socrates, it began in wonder, and though Hume would agree that “true philosophy” can be cultivated and developed in such, Hume disagrees that “true philosophy” is the only philosophy. More often than not, Hume believes that philosophy starts due to “a break” between what we thought and what we experience, a “break” between what we are familiar with and what we encounter. Dr. Livingston describes this event well…
What have we concluded in our exploration of Keynes and his General Theory? We have concluded that an economy without demand is in trouble, and we have concluded that the likelihood of a State stimulating demand decreases as the economy becomes wealthier, more technological, and more complex. State efforts to create demand might then have unintended consequences, say increasing wealth inequality, shrinking the middle class, and the like, a claim often made by Web 3.0 users who support cryptocurrencies — many can be found arguing the Federal Reserve has wrecked the lives of average Americans in its efforts to create stimulus — but I will let those others argue that case and let interested readers research the topic for themselves.
Much of creativity emerges through cross-pollination across disciplines, encountering the unpredictable, being outside of one’s comfort zone, etc. Without disorder, there is little incubator for creativity; consequently, there is no Artifex or “creator class,” and the economy self-destructs. Humans and societies seemingly need volatile environments; otherwise, humans weaken. If humans spend all their time at room temperature, other temperatures more easily harm them, as a child that is never exposed to bacteria is more vulnerable to sickness. Disorder strengthens, as order benefits from trial and error. Unfortunately, the benefits of disorder are indivisible from its downfalls, as trial and error, by definition, requires error. And if the world today is too fragile (perhaps like Jim Rutt discusses) to handle this disorder, we might have a problem.
All systems must ultimately prove incomplete, which basically means that “no map can be its territory,” and it is precisely at the point where “the map” is unveiled not to be “the territory” that we find the human standing with all his/her pathos, logos, and mythos — naked. The human subject is why all systems are ultimately incomplete, which means what happens to the system is ultimately up to the human (“a flip moment”). So, if the human chooses to seek A/B and “Absolute Knowing,” the “essentially incomplete system” can practically (“as long as the human so practices it”) be “(in)complete,” which means the system or community “finds completeness in incompleteness,” a Hegelian state (of “belonging in not belonging”). “The Gödel Point” is where Affliction is unveiled as open and Affliction is most enraged and strikes hardest, but if we survive through negation/sublation — an opening…
An ultimate example of Illich’s “disablement” is boredom, and perhaps there is no greater expression of “The Meaning Crisis” and/or “Skill Crisis.” According to Dr. Patricia Meyer Spacks in her book, Boredom, ‘in the eighteenth century almost no one spoke directly of boredom.’ Spacks works mainly in literature, but ‘[a] literary history of boredom necessarily involves cultural history as well,’ and so if boredom is not a concept we find readily in art before the 18th Century, there would be reason to believe culture was also different. Spacks argues that ‘the spread of boredom has coincided with and reflected an increasing stress on subjectivity and individualism,’ and given “the collapse of givens” described in Belonging Again (Part I), boredom seems to be a response that arises with that sociological shift (perhaps because numbness is a way to cope with great existential anxiety, because when we are overwhelmed by “release” and possibility, all choice becomes arbitrary as we lack a sense of significance in how we could live — hard to say). However, a reason we might be bored is also that we lack skills and ability to do anything (“The Skill Crisis”), and so all we can do is consume (whether products, relationships, experiences, etc., everything can be treated like a consumer good, even the holy), and consumption can entertain us for only so long.
So, what should Church be? Much, but I think here about how the disciples ran to the tomb when they were told it was empty. They ran; they couldn’t help themselves. They flew out into the world to see. A Pilgrimage is an effort to go see, while a journey is an effort to go to, and the news that the tomb was empty began a Pilgrimage that the disciples raced out on — they couldn’t help themselves. In tears. In awe. In hope.
Illich ‘based himself from the outset on the assumption that development was a doomed enterprise, jobs for all a destructive utopia, and the monopoly of commodities over satisfaction a prescription for envy and frustration.’ Illich saw no need for the State to employ Orwellian fear or Huxleyan pleasure: if the system grew large and complex, it would prove disabling of the people, and then the people would organize themselves like Josef K into “capture” and control. But perhaps it is necessary to weaken and disable us after “the death of the scapegoat,” following the theories of René Girard?
Ivan Illich was a great man, but he is relatively unknown and certainly not well-studied. I have referred to Nietzsche as a “Core Thinker of Critiquing Bestow Centrism,” and I think we might say that Illich was also a “Core Thinker of an Apocalypse of Nonrationality for the Art of Living.” I unfortunately think there might be an academic bias against “Core Thinkers” in favor of those who are perceived to be “Systematic Thinkers” (which I think has contributed to us missing many of the Counter-Enlightenment and Modern Counter-Enlightenment thinkers), which is to say we favor philosophers who we perceive as building systems upon large ideas or axioms like “Being” or “Absurdism,” perhaps because they are easier to paraphrase and teach to larger groups of people (which is considered a standard of success by many systems), and if we do teach Core Thinkers, we have likely found a way to think them through a category, concept, or summary (arrived at “top-down” versus “bottom-up,” omitting too much). A “core” of Nietzsche and Illich though is never explicitly stated (perhaps more so intuited), perhaps because Core Thinkers may hesitate to defend an overarching theme in fear that it might bias their observations. They also aren’t so much looking to “make a point,” but to instead describe what they see (like Martin Buber depicting himself as someone who stood at a window and pointed), but for me the very fact they do this and a pattern emerges is all the more reason to believe this pattern is substantive versus self-imposed. Although “the overarching theme” of Core Thinkers can be harder to grasp and is usually only implicit, the likelihood of its validity I think is higher, precisely because it has been observed so widely and through so much variance.
To use an image from Butler’s incredible From Where You Dream, I foolishly try to write fiction and philosophy, an attempt both fields often see as equivalent to shoving a shotgun in our mouth while we polish the trigger. I naturally think in words and logical sequence, not images, so when I write fiction, my natural concern is if the logic of the story adds up to an emotional payoff. I cringe when I hear “the journey is more important than the end” — it feels like a copout for the writer who fails to think a story through. I also dislike the common emphasis on description and emotional connection, not because these things don’t matter, but because these concerns often dominate in literary fiction over tragedy, psychology, stories, and ideas. But then I read the thankfully ruthless Robert Olen Butler and realized my long-standing mistakes. Though still error-prone, at least now I feel like my errors are visible.
As Cadell has taught on during “The Month of Libido” (January 2024) at Philosophy Portal, we learn from Freud and Lacan that the brain naturally works according to “the pleasure principle,” which is to say that we naturally make choices relative to what will increase pleasure (“the pleasure principle” and what Augustine means when he says “bad motivations are impossible” seem related). From eating food to uncrossing my legs to avoiding anxiety — all of these can be seen as operations of “the pleasure principle,” which society must “check and balance” with “the reality principle” (restrictions, “givens,” etc.) so that our desire for pleasure doesn’t lead to profound problems (say everyone has sex freely and there are countless orphans, everyone consumes and never works so the society falls apart, etc.). Critically, we find pleasure in the act of understanding itself — it is more pleasurable to understand than not — which is a reason why we avoid “lack” and anxiety (cutting ourselves off from Childhood), and also it means we enjoy the “act of compression” the brain carries out so that we can understand the world around us. When I look at a bookcase, I don’t experience its fullness, only a “compressed image” of it, as necessary so that I can understand it. Otherwise, I might lose my mind and find myself unable to function.
Dr. Corey Anton in a conversation with Guy Sengstock (“What was dialogue & relationship before literacy?”) noted how literacy has helped humans control their emotions, quoting Marshall McLuhan in claiming that surgery was not possible until literacy. Reading trains us to use the brain in a way that can “just observe,” which by extension means we are trained to “look upon our emotions” without necessarily being swept up in them. To put this another way, literacy makes it possible for us to be angry without showing anger, while before literacy to be angry was to show anger (a divide between “being” and “doing” is hard to imagine before literacy). Funny enough, much of Hegel and Liminal Web philosophy today is about closing the divide between “being” and “doing,” which can be seen as “a return to (be)coming” and an emphasis on time over space — is this a return to “pre-literacy humanity?” In a way, yes, but we learn in Hegel and Hume that every “return” is actually a “(re)turn” — there is always a difference we’ve gained along the way.
To further allude to Andrew’s and Alex’s work (which I will discuss through the structure of a “Comedy” versus a “Tragedy” for our purposes here, please note, which is an important distinction from their work), the Theme (a key term) of “The Absolute” has always been present since the start of history, but it has been implicit “under” the Truth. Now, with the Causer, the Theme of “The Absolute” is in view and we can choose to align with it. There never seems to have been a Causer like AI (not even before the decline of religion), for the Causer must be “disembodied intelligence,” for that is the exhaustion of “The Truth” (which internally forces a consideration of the Absolute).
Forever indebted to Anthony Morley, who taught me everything I know on Leibniz in his incredible Geometry & the Interior, O.G. Rose will often consider Leibniz and his work on geometrical “situation” versus algebraic “points” (say in The Absolute Choice and (Re)constructing “A Is A”), which claims reality is more composed of “situations” than “things.” There is nowhere we can go in the universe where we don’t find “relations,” and though geometry is composed of algebra, it is not the case that geometry is “built up from algebra” (which would suggest “things” are more fundamental). Rather, “things” are necessarily part of “situations,” which means we can only ever consider things “as if” they could be understood independent of “situation.” But this is an act of imagination (or “understanding” versus “reason,” to allude to Hegel), which we experience as an act of rationality and “getting to the things themselves.” But “getting to things themselves,” as if they can exist independently, is a mistake that can lead to problems.
According to Rauch, “free speech” is necessary for knowledge acquisition in a society, which is to say that giving it up will cause social regression. All of this fits into the context of Belonging Again exploring why it is so easy to oppose “social processes” that the society requires to function. Indeed, all values feel like they should just “be,” and debate is a “process” which values will naturally oppose in their very being. And yet without “processing,” we cannot know which values we should exercise and which are actually just “good intentions” which will cause a vast and dangerous “fall.” In the name of values, in opposing expression, we can destroy the possibility of living them in an order that doesn’t disorder the good…
The Situation of Capital (Part 2)
Considering Labor and Value as Situation
“Use-value,” “exchange-value,” and “value” are not all the same in Marx (not that I myself manage to always maintain the language), and Marx seems to suggest that ultimately all “value” is real-ized through “exchange” in Capital (‘exchange value is the only form in which the value of commodities can manifest itself or be expressed’ (emphasis added)), but then on the other hand he speaks as if “value is labor” — what’s going on?¹¹ Is there a latent value always present that is brought out in the act of exchange thanks to labor, meaning “labor is more fundamental than exchange,” or is labor only value-creating to the degree it ends up in some “means of exchange?” “Value” cannot readily be found in nature and so must be “put in nature” through some human activity, but what is that activity? Is it one activity or a network of activities? A host of questions arise.
A reason why Hume is so passionate about maintaining a “dialectic” is because “autonomous reason” creates a feeling that we should “autonomously rule.” If we must always be dialectical with “common life” though, we can only ever be a “steward,” never a ruler who disconnects from our people without falling into “false philosophy,” a force of destruction. Everyone naturally seeks to “be right” (it is impossible to knowingly and consciously seek to be wrong), and so philosophy seeks to “be right,” which means that philosophy is primed to naturally engage in “autonomous rationality,” which is to say “nondialectical thinking.”
If the words “forgive” and “forget” are similes, we should cease using the word “forgive,” for the term “forgive” suggests that the act is “something more” than just “forgetting that something happened to us”; if “forgive” is just a simile, this is not the case, and the term “forgive” should be deconstructed so that we might avoid confusion. When asked directly, sure, many people might say “forgiving isn’t forgetting,” but practically I do think this is how we often act, and furthermore if we’re not clear to ourselves on the difference, we can end up acting like they are equivalent even if we don’t mean to do so. Indeed, alternatively, many people seem to think that a person who remembers a hurt is not forgiving, but do we actually have control over what we remember and what we forget? I don’t know if I’ve ever personally consciously in my life chosen to remember something. I mostly just find myself remembering or find myself not (and forgetting that I forget). If “forgiveness” is a matter of “forgetting,” then forgiveness might be something we have little control over, and yet the term “forgive” seems to suggest agency. For this reason and given the confusion the term can cause, we should probably cease using it — unless, that is, “forgive” means something else…
Labor as “value-quantifier” (as I’ll discuss in contrast with “value-form”) becomes a way to tell us “right and wrong” that is like law without being law (such is the quality of “givens”). It helps me decide what I should value that others will also find valuable, and in that way I can both have the comfort that I won’t readily be negatively judged for my values (as defined by Capital, for they are socially validated, supported, and facilitated), and I can have comfort in knowing that I might possibly engage in “exchange” and relation with others on terms we both mutually share…
As Agnes Callard argues (as brought to my attention by Joel Carini), ‘[i]f you have a reason to be angry with me, you will have a reason to be angry with me forever.’⁷¹² Yes, if I could make you realize that your reason for being angry was false or based on a misunderstanding, then your anger might go away, but I would accomplish this ‘by showing you that you never had any reason to be angry with me […] If you did have a reason, you’d have it forever.’ Hence, if we have reason to efface Hegel’s “Absolute Spirit,” we’ll always have a reason to avoid Religion (as we’ll discuss). A single act of pain is a timeless reason to prevent Absolute History. The odds favor Land.
‘The commodity linen manifests its quality of having a value by the fact that the coat, without having assumed a value form different from its bodily form, is equated to the linen.’ Value-form can manifest concretely or abstractly, based on the physical use-value of a coat or based on “an abstract notion of labor” — it doesn’t matter, for the presence of a “value-form,” however it might arise, is all that is needed so that ‘a commodity is in the equivalent form,’ and thus ‘directly exchangeable with other commodities.’ ‘Since no commodity can stand in the relation of equivalent to itself […] every commodity is compelled to choose some other commodity for its equivalent, and to accept the use-value, that is to say, the bodily shape of that other commodity, as the form of its own value.’ Nothing can exchange with itself or anything identical with it, and so the value of things can only rise “in comparison” to other things (hence, “situated”)…
There are inadequacies and failures in seemingly all terms and language, but considering Ionia for a moment here could be of value, around which Karatani orbits his book Isonomia and the Origins of Philosophy. Ionia consisted of Greek colonies in what we now call Turkey, and Ionia lacked direct rulership and social divisions, and established equality more through mobility and immigration than, say, law. At the same time, ‘[c]ommerce and industry were highly developed in [the cities of Ionia], and they were centers of overseas trade’ (suggesting hope that avoiding a Meaning Crisis need not require us to give up prosperity). To quote Karatani…
In this way, we might say Capital is more “a matter of seeing” than it is “a matter of becoming,” for a cup is first valuable as a commodity due to interpretation before I might begin asking about its value, which I likely try to quantify according to labor. Labor-quantification is primarily retrospective, as “rationality” comes after “nonrational truth.” Yes, if labor doesn’t exist to provide labor-quantification, then when I engage in the “retrospective act,” I will fail to justify my valuation and so it will crumble, but the fact that labor-quantification comes later, not initially, suggests that “the abstract idea of labor” is more primary than actual labor itself. I consider a thing under Capital relative to its exchangeability, value, and price, and then might seek to justify that e-valuation with reference to labor-time. If no system of exchange existed, this second step would likely never occur (except if I was perhaps a private collector or something — a hypothetical which isn’t enough to sustain a social order)…
‘Money devalues what it cannot measure,’ Illich tells us, and that means money suggests what it cannot buy is that which doesn’t really matter (perhaps contributing to the loss of “social capital” and social gatherings), and yet “the unpriced” might be what matters most of all. This is a similar dynamic that arises between “wage labor” and “shadow work,” and if it is the case that money requires what it cannot buy to function (as it could be the case that society requires what cannot be monetized), then we will find further reason to think that economics is “fundamentally incomplete”…(“Tools of Conviviality and Useful Unemployment”)…
Every “limitation” is also a “necessity” (every “limit” and every “necessity” is actually a “limit/necessity”), for “limits are necessary,” per se. And thus to encounter a limit is to encounter “the necessity of responding and/or incorporating that limit.” But if we incorporate that limit, we change, and thus we are not limited in the same way we were before we encountered and incorporated that limit. And so our limits change (Consciousness negates/sublates into Self-Consciousness), meaning we have different limits to encounter (Self-Consciousness will now unfold as itself into Reason), which encountering will necessarily generate different limits — on and on. Every limit necessitates new limits, and so limits are unlimited in that they can always more so emerge. Limits aren’t limited, and thus we need limits to be unlimited.
Inspired by the likes of Bruce Alderman and Cleo Kearns, an Elder for me is someone who dwells in a different movement and sense of time, who is able to see patterns that to younger people are “new,” while also able to discern how much of “The Real” (Lacan) someone can handle while motivating them to experience that Otherness (so that they keep developing), but not so much of the Otherness that they are overwhelmed and “reduced to ash” like would have been Dante before Beatrice if “she smiled too quickly”…
The word “brainwashing” has an unclear meaning (distinct from “learning” and/or “changing view”), and it tends to be a subjective debasing of someone who begins to change his or her views into something the person using the word doesn’t like. Furthermore, we could really only call something “brainwashing” if we knew “the truth” which the person was being “brainwashed” relative to and away from. But even then, calling someone “brainwashed” isn’t nearly as productive as proving to the person the “truth” (which we need to know to use the word “brainwashed” legitimately), which could motivate the person to choose to step out of “the loop” in which they are (in our eyes) “being brainwashed” — what value, then, does the description and judgment add?
Data doesn’t seem like it can be truly preventative, only reactionary or preparatory, for we cannot collect data about what hasn’t happened in order to stop the very thing the data would be about and quantify from ever happening. Furthermore, preventive measures are at an existential and empirical disadvantage to reactionary measures, for not only do preventive measures fail to create evidence that “they worked” as do reactionary measures, but preventative measures also fail to create existential certainty that “there is/was no problem.” If I keep something bad from ever happening (and hence from ever being “something bad”), I don’t ever experience that “something bad” ceasing to be (“something bad”), and so I never experience certainty that the “something bad” even existed and/or ceased existing. When there is prevention, there can be more existential uncertainty compared to when problems aren’t kept from coming into existence, only solved after they manifest…
Thomas Jockin is a good friend, and it is an honor and delight to speak with him. In March 2024, we discussed the nature and quality of gratitude, and how there seemed to be a difference between being grateful for something very obvious and apparent (like receiving $100), compared to being grateful for suddenly being able to appreciate and understand Faulkner. Both entail a gratitude, yes, but they also don’t seem equivalent. To be grateful for the $100 seems “fitting” and very linear, while I personally read Faulkner when I was younger, hated it, and then five years later I found myself liking it and seeing what Faulkner was doing. And suddenly and all at once I was grateful for the work William Wilson did to help me understand the text, a way of reading I then carried with me to other books and other experiences of art. My horizon of possibility was changed, but I wasn’t grateful for “the seeds being planted” at the time — that took years. And then I felt gratitude: I understood the patience Wilson had to have with me, as I also understood the patience “the spirit of Faulkner” had to have with me (we could say). And I found myself grateful for forces and processes I didn’t realize transpired. Is this second form of gratitude the same as the first or different? Jockin and I discussed…
Once a “map” emerges, a logic naturally arises that makes it moral to stay in and inconceivable to venture out — a process of insulation and “map-sealing.” Then, we employ “dominant strategies” in life to keep back anyone who would challenge our “map,” an act which also makes it seem right for us to stay in our “map.” After all, no one is able to pierce it; it’s safe. It should be our home (blood-bound).
If I want to become President of the United States, should I become President of the United States? No absolute or universal judgment is possible, but there’s an argument to be made that a person who wants power, leadership, influence, etc. is exactly the person we don’t want to lead us, precisely because there is something potentially problematic about ambition. Many of the great leaders and heroes of history are thought of as “reluctant leaders,” compelled by duty or some Higher Power (suggesting another Game Theory Problem that religion might have helped address, as discussed in Thoughts, perhaps alongside “The Population Problem”). People who want power are often depicted as immoral and “in it for the wrong reasons,” just as it often seems the best novels and ideas are generated not by people who want to be writers, philosophers, etc., but by those who felt compelled by something “beyond them,” whether a Vision, God, or Something More. In this way, we might say that ambition generally correlates with vice, while attraction correlates with virtue (though people can use words in different ways), but this could mean we have a problem: “If the virtuous don’t seek power, how can those in power be virtuous?”
The World Wars allowed governments to do what otherwise would have been impossible, as can pandemics, revolutions, and social collapses. Existential events allow what otherwise would be unthinkable, because when the unthinkable occurs, the unthinkable can happen. But otherwise? Little — that’s Studebaker’s point. If this is so, do we really want “the way” to open? Might it be “a fanged noumenon”?
What do we talk about when we talk about beauty? If Wittgenstein is correct that ‘the limits of my language mean the limits of my world,’ then it would seem to follow that “the expressions of my language are the expressions of my world.” Hence, if we can pin down a clear way a word is used and separate it from how other words are used, we might also be able to isolate a distinct experience and/or “use,” and thus arrive at a distinct meaning. From Wittgenstein, we can then move into phenomenology, which is to say the effort to define a word becomes the effort to define an experience.
Does everything in the world “call” like Ulysses? Perhaps: I don’t believe it’s possible for anyone to say for sure that any given thing couldn’t “call” to at least one person in the world, if not all of us (with most perhaps being deaf to the summons). What about a dead rat? This brings us to the question of whether it is possible for something to be so ugly, offensive, rude, etc. that it cannot be beautiful to at least one person on the planet (which is to say no “strike of beauty” would prove possible). If such a thing exists, it would be inaccurate to say that “everything is potentially beautiful.”
Perhaps it is this “bowing” to and accepting of the inevitability of entropy that has partly led us to forgo the question of Creativity in favor of Supply and Demand? If there is no way to avoid energy degrading into “useless energy,” then it seems irrational to devote infrastructure to seemingly denying the inevitable. Whatever Creativity emerges is luck and a “bug in the system,” we might say, which means it happens but it cannot be planned or cultivated. Economics and society must accept the inability to plan and cultivate these things; what is practical and reasonable is focusing on a management of the descent. Creativity cannot radically change anything. We are at “the end of history”…
Facts are “facts” within a network of assumptions about what constitutes the truth. However, they can be appealed to in order to legitimize this network and/or framework, as if transcendent and “above” the framework within which they are situated and “factualized,” per se. Ironically though, without the framework, the facts wouldn’t even be facts: to transcend the framework would be to transcend the factual. In this way, to deal with facts is to deal with facts/worldviews, ergo what we might call “factviews.”
As Kurt Gödel found that mathematics, and seemingly any sufficiently rich self-referential system, cannot establish its own completeness and consistency from within itself, so the same goes for all of thought. With Gödel, we can consider Alfred Korzybski, the brilliant challenger of Aristotelian thinking, whom we might also associate with the Hegel of the Science of Logic. Korzybski’s Science and Sanity attempts to help us recognize ‘mathematics as a language similar in structure to the world in which we live.’¹ Perhaps Korzybski succeeds in this, but if so, that would help the case for seeing Gödel’s work as part of the world itself.² ³ ⁴ And what would this mean? That we are always dealing with maps that are indestructible precisely because finding a point of incompleteness will not necessarily mean they are wrong: though the realization brings anxiety, “incompleteness” can benefit maps. If incompleteness is essentially part of every map, then finding maps incomplete will not necessarily overturn them. Far from necessarily relativizing them out of existence or into nihilism (though that can happen for some), the vulnerability can make the maps more invincible.
‘The proof of arithmetic’s consistency could not be relative [but] had to be an ‘absolute proof,’ ’ but after Gödel such a proof was impossible.⁶⁴ He succeeded, oddly, at ‘proving that there are true arithmetical propositions that are not provable,’ meaning “proof” and “true” are not synonyms, meaning “consistency” and “correspondence” aren’t either, and that obliterates any dream of a complete formal system.⁶⁵ ‘For a system to be consistent means that it does not yield any logical contradictions,’ and yet if reality is contradictory (A/B), as Hegel teaches, no “consistent system” can actually be about reality — except perhaps by removing humans from it (without humans perhaps realizing it) (as discussed in II.1; Land waits).⁶⁶ This is what Korzybski understood, and this is why he might welcome Gödel’s work as helping us move toward “a science of sanity” — though oddly through a theorem that seems to suggest madness is our destiny (irony can gift).
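For readers who want the formal claims being gestured at here, a minimal gloss of Gödel’s two incompleteness theorems might read as follows (these are the standard textbook statements, not notation drawn from the essay itself, and F is assumed for illustration to be any consistent, effectively axiomatized system strong enough to express arithmetic):

```latex
% Sketch of Gödel's incompleteness theorems, assuming F is a consistent,
% effectively axiomatized formal system strong enough to express arithmetic.

% First theorem: there is a sentence G_F that F can neither prove nor refute,
% so "provable in F" and "true of arithmetic" come apart.
\exists\, G_F \;\big(\, F \nvdash G_F \;\land\; F \nvdash \lnot G_F \,\big)

% Second theorem: F cannot prove its own consistency, so any "absolute proof"
% of consistency must come from outside the system (and is itself another map).
F \nvdash \mathrm{Con}(F)
```

The point for the essay is only the gap these lines mark: a system can be consistent while truths outrun its proofs, which is why finding a map “incomplete” does not, by itself, refute the map.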
If reality is relational, we cannot escape maps, but the same also holds if reality is ultimately abyssal and/or “lacking,” which leads us to the work of Žižek. Now, a “relational reality” and an “essentially lacking reality” are not necessarily exclusive positions, and it’s possible that we experience an “essential lack” precisely because we must always lack “the other” to which we relate: in other words, perhaps we are always “passing over” into an otherness (A/B) that we cannot access. In fact, if “things don’t exist” and yet we must experience reality “as if” it consists of things (Understanding, A/A), that alone suggests an “essential lack,” for we lack the capacities to experience reality as it actually “is” (perhaps “Fundamentally Nonlocal,” as Kaufman discusses, or “Dialectically Material,” as Žižek considers). Ultimately though, if models of Relational Metaphysics are wrong, and hence it doesn’t follow that “maps are indestructible” for that reason, we might still be able to establish another ontological reason for “indestructible maps” if Žižek is right and humans can’t handle a Real without maps. If that were so, reality would consist of “maps we cannot destroy without a psychotic break,” which would mean “maps are (practically) indestructible,” returning us to our dilemma.
“The Map Is Indestructible” by O.G. Rose argued that an ideology is entirely internally consistent, to the point where (critically) even its incompleteness proves justified within the system (to incorporate Gödel). In other words, in an ideology, no essential contradictions are found, only “apparent” ones, and considering this, ideologies and/or “maps” are indestructible: all criticisms can be countered, avoided, and/or overturned. Thus, assuming Christianity, Islam, and Atheism are all indeed true ideologies (which I think we have reason to believe, considering how old they are and how much “historic testing” they have persevered through), relative to their assumptions and axioms there is equally rational reason to be a Christian as there is to be a Muslim or an Atheist, so why be one versus the other, or why stay in one as opposed to leaving it? Will time and history help us deal with this problem? That’s a question which deserves attention, but considering what Nassim Taleb tells us in Antifragile (that the older a book is, the greater the likelihood it will still be around in the future), the older an ideology is, the greater the chance it is an “indestructible map.”
It’s hard to beat the scholarship and insight found at “Owen in the Agon” (previously “Raymond K. Hessel”), a truth to which Owen’s latest book is a testament and confirmation. He starts off telling us that ‘Plato no longer grounds our philosophical thinking — he haunts it.’¹ Whitehead famously declared Western Philosophy a series of footnotes to Plato, but Owen makes the case that we’re not so fortunate: we’ve forgotten Plato beyond a vague impression, and it’s more like we’re either reinventing the wheel or inventing square ones. Writing “footnotes” would require us to be aware of the source, when we’re too smart for that…
For more, please visit O.G. Rose.com. Also, please subscribe to our YouTube channel and follow us on Instagram and Facebook.