A Complete List of Essays

An Index of Works by O.G. Rose (First to Newest)

Frozen Glory Photography

Karl Marx successfully identified the bourgeoisie, the proletariat, and “the material dialectic,” but despite his emphasis on creativity, he failed to identify the artifex, meaning “creator class,” which is composed of entrepreneurs, inventors, and artists. An artifexian, a term first introduced in this paper, is anyone who creates or recreates a means of production and/or a thing to be produced. Marx, it seems, conflated creators with the general proletariat, and consequently his material dialectic only halfway addresses the nature of socioeconomic change. The full dialectic by which society “marches” through history can be expressed as follows…

To allude to Nolan’s masterpiece, all conversation is ‘inception’. It is because I brought up Inception that you are now thinking about the movie: I planted the idea in your mind. If you claim you are not being incepted, you are only saying that because I claimed that you were: you are making that claim because I put the idea of ‘being incepted’ into your mind. Now that I’ve spoken about inception, there’s ‘no exit’ from it — ‘l’enfer, c’est les bouches’ (‘hell is mouths’).

You are not free to choose not to be incepted.

Your liberty ends along the borders of my words.

In teaching theory, we risk rendering theory meaningless. Economists, political theorists, literary critics, sociologists — all are in the same boat. Like scientists and psychologists, sociologists must take into account the Hawthorne Effect, which, generally speaking, is a theory concerning how participants alter their activities once they are aware that they are in an experiment.¹ The very presence of countless sociological and educational articles, lectures, and books online — which can spread quickly like memes — may change the very way societies act. This may render what the data claims about societies wrong, making it seem as if the data was always wrong; on the other hand, the data may make societies suddenly act in the way the data suggests, making it seem as if the data was always right…

Descartes famously said ‘I think; therefore, I am’; today, it would be more appropriate to say ‘I think you think I think you think; therefore, I am’. Today, our minds are not simply centers through which we consider ourselves and our world, but rather our minds are places where we also wrestle with what other people are thinking, what other people think we are thinking, and what other people think we are thinking that they are thinking. Humans have always wrestled with these ‘realms’ to some degree, but in our increasingly disembodied, social media age — an age in which we live on screens, perpetually interacting with countless people ‘out of body’ — this way of thinking has greatly intensified. Our ‘self’ is now a network, and to contemplate by ourselves is to contemplate in a group. There is no longer just one voice inside our heads: our heads have become communities.

Worry hides itself. The person who worries doesn’t experience worry as ‘worry’ — the person experiences it as ‘care’, ‘concern’, ‘realistic’, or even ‘love’ — in a sense, no one worries. If when you worried you experienced it ‘as worry’, you would probably stop, for you would recognize that it was all in your head and a matter of ungrounded fear. Rather, when you worry, you experience it ‘as real’, not as something that cannot happen, but something that probably will happen. It strikes you as undeniable, as concrete, as that to which you must respond. In fact, what you worry about is precisely that which you think ignoring is foolish: it strikes you as something you should pay attention to above all else. Additionally, what you worry about is that which you experience as rational to think about, for it is rational to try and avoid undesirable events, and those events must seem dire. For if what was worried about wasn’t dire, you wouldn’t worry about it, and additionally, people seemingly only worry about those they have a relationship with; hence, when you worry, by definition, you think something dire will happen to someone you care about. And surely if you loved that person, you would try to save them from that fate, wouldn’t you? To worry is to face and even create a problem.

If I thought you were crazy, how would you prove your sanity? Would you show me your college degree? Lots of crazy people are rather intelligent. Would you take me to lunch and ask about my family? Clearly you would only be doing that to trick me into thinking you were normal (proving that you’re not only insane, but also deceptive). Would you try to prove me wrong by claiming you were trustworthy? But everything you say is a lie, and since you won’t admit your shortcomings, it’s apparent that you’re also arrogant. How would you prove you weren’t prideful? By working as a janitor for a year? But you’d only be doing that to prove how selfless you were, taking pride in your humility. You’d be faking humility, as does any arrogant crazy person who’s unwilling to admit their insanity…

The internet, coupled with a lack of discernment, character, and craft has exacerbated our self-imposed dehumanization. People are creative, and people will use the internet to express their creativity. Since the internet isn’t going away, the question is whether humans will use it constructively. It’s easy and funny to use it for deconstruction, so the challenge to incubate self-motivated individuals to use the internet for the development and expression of excellent craft is great. People will generate culture even if they aren’t capable, and if society doesn’t equip its citizens properly, the generated culture will be one that dehumanizes and destroys.

Bernard Hankins is a gifted speaker and teacher, and his TED Talk “Integrating People of Color” announces a vital link between the loss of creativity and the loss of diversity. He argues that the loss of ‘spectrum thinking’ (ST) leads to an increase in ‘binary thinking’ (BT), which contributes to segregation in concordance not only with how people have been taught to think but also with what they have been taught to believe is right. Consequently, education that stresses diversity but doesn’t incubate creativity is like stressing flight without providing wings: it fates students and society for failure and frustration. And when that failure occurs, lacking spectrum thinking, we may very well lack the capacity to recognize what caused the failure — in fact, we may think the problem is that we don’t have enough (binary) education — and so the Greek tragedy will go on, ever-worsening.

It is important to draw a distinction between emotional intelligence (EQ) and emotional judgment (EJ). Emotional intelligence is empathetic: it entails the hard act of thinking one’s self into another person’s shoes. It is an intellectual endeavor for the sake of achieving the proper emotional disposition toward other people and requires deep thinking. Emotional judgment, on the other hand, is the act of gauging the validity of a truth based on one’s emotional reaction to it. The problem with EJ is that it requires one to consistently experience a positive emotion in order to verify, justify, or appreciate experiences, ideas, and so on…

A reason economies exist is to make joy increasingly more accessible to increasingly more people, and a society that doesn’t enable its citizens to feel like life is worth living is a society that arguably fails. The easier the economy makes this realization for its citizens, the better the socioeconomic order as a whole. Though both entail emotional fluctuation (at least according to this work), joy is non-contingent fulfillment, while happiness is contingent upon externalities. Consequently, unlike internal joy, happiness is temporary. Furthermore, though such an estimation cannot be made from happiness, the actual value of an economy can be generally estimated from the joy of its participants. The higher the joy, the less likely there is a bubble in the system.

Escapism is the antithesis of Existentialism. Existentialism, a philosophy that claims existence precedes essence, establishes that we come into existence and then decide the meaning of our existence, rather than the meaning being predetermined. It entails an engagement with the actual world, and an ‘existential crisis’ results when a person finds that actuality doesn’t match with what that person thought was real. Such a crisis causes a shattering of beliefs and preset complexes. It is a painful experience; it causes angst. To avoid this crisis, the modern person can view everything as a potential photograph, tweet, posting, text, etc. In this way, the modern stands as a viewer, ‘outside’ the world, and hence the world is something humans conquer rather than something that conquers humans. Perhaps in Eden humanity could have dominion over the world, but now the world holds dominion. In protest, humanity has digitized itself and the world to reestablish the rules of Eden, but this dominion is a kind of escapism and denial rather than a true engagement. It disembodies us: anytime we escape the world, we escape ourselves.

To be materialistic is to focus on material things, while the optimal way humans relate to things is as if those things are ‘invisible’, per se. According to Heidegger, a doorknob is ‘invisible’ to us until it breaks, for until then we use it to open a door without thinking about it. It’s only when the doorknob doesn’t work that we stop and notice it. Similar should be our engagement with all things in the world: to exist in this way is to avoid materialism…

Ethics is an ‘(im)moral’ study. While reading The Groundwork of the Metaphysics of Morals by Kant, I do not try to save the life of a starving child in Africa: I act immorally. Yet I am reading the text in order to learn how to be moral, and if it contributes to me being moral in the future, then I act morally. I act immorally relative to the child in Africa now, and morally to whom I will help in the future. In sum, I act (im)morally.

What is rational, responsible, honest, moral, sane, genius, and so on is relative to (what we believe) is true. If I am a bird, then leaping off a cliff isn’t insane but perfectly normal. If I am sick, then it’s responsible for me to see a doctor; if I’m not sick, then it’s perhaps a waste of time. If I have a meeting at one and I arrive on time, I prove myself to be punctual; if I am a gang member and scheduled to execute an innocent person at one and prove myself punctual, I prove myself to be a murderer.

What good is thinking and reasoning if economists, pundits, intellectuals, and the like are so often wrong? Great minds failed to foresee the 2008 Financial Crisis, Brexit, the rise of Russian aggression, the Trump presidency — so what use is thought? Clearly whatever its use, even if that use is necessary and invaluable, the use of thought still seems remarkably limited. Though mastering thinking is necessary (for reasons argued throughout the works of O.G. Rose), thinking alone is inadequate. If we fail to realize how innately incomplete thinking is (even genius thought), we may think that we have the tools necessary for saving the world, but when we go to make a difference, before our very eyes, nothing will change.

In his famous essay “The Ethics of Belief,” W.K. Clifford argued that if a person allowed others to sail on a ship that the owner knew was unsafe, even if the passengers arrived at their destination successfully and unharmed, the owner of the vessel would still be guilty of immorality. When we know something is true and disregard it, or when we believe in something without sufficient evidence, according to Clifford, we act immorally.

What do I say when I say “I love you”?

If I mean “you make me happy,” there is no difference between “love” and “happiness.” If when you say “I love you,” you mean “I’m happy around you,” again, the term “love” cannot be distinguished from “happiness.” If “love” is to be used meaningfully, it must signify something else.

Everyone has a self, and hence all acts are self-ish and no act is totally self-less. According to moralists, we aren’t to act selfishly, but it is not truly possible for anyone to act without any consideration of, or connection with, his or her self. It is difficult to imagine even what it means to act without one’s self, for even if one were to abandon his or her self, such an act would be done through the self. Therefore, it is not helpful to talk of a need to avoid ‘selfishness’; rather, as will be argued, it is more valuable to speak of a need to live in a state of ‘awe’ and ‘thanksgiving’. Furthermore, is a ‘hollow man’ who acts selflessly good? The selfless acts of such a person seem destined to be vapid and empty. Ultimately, I believe the focus today on ‘selflessness versus selfishness’ can have its uses, but it’s more often than not a source of confusion.

Why does history repeat? Why does it seem thinkers like Heidegger are at war with language? Why does it seem art has more influence on ideas than ideas on art, and though both have an impact, why does art seem to change the world more so than philosophy (as technology seems to be more consequential than education)? Why do words often fail us? Why does the phrase “I can’t put it into words” resonate? Why does knowing things could be worse not make us happier?

It’s because ideas are not experiences.

What do we say when we say, “I’m certain that x”? As noted in “On Beauty” by O.G. Rose, if Wittgenstein is correct that “the limits of my language mean the limits of my world,” then expressions of my language should be expressions of my world. Thus, if I can isolate a term into distinction, then I should identify a real phenomenon in the world — words that don’t refer to real things are nonsense or blur with other words indefinably.

We understand things in the world through what they are not: we understand what constitutes a cat through the idea of a cat, yet a cat is not its idea. Additionally, our ideas necessarily must be incomplete: when thinking about a given cat, it is impossible for me to think about every single detail that constitutes that particular cat. I consider “a cat” through “cats,” per se, and if I try to particularize it into “Sushi” (my cat), I try to make the entity less abstract through a word the thing is not. And yet without the word, without the abstraction, my understanding of the thing would be even poorer. Unless, that is, my idea is utterly wrong; then perhaps it would be better if I only silently stared at the creature, perceived it, and nothing more.

Why does one person find the case for x compelling while another finds the case against x convincing? Both believe they are rational and intelligent to assent to the case they believe in, yet both cases cannot be true. One person finds the argument that Israel is justified to use force legitimate while another finds the argument Conservative propaganda; one person finds the case that Robert E. Lee was a “hero” grotesque and absurd, while another person finds the argument nuanced and conceivable; one person finds it believable that Roosevelt knew about Pearl Harbor ahead of time, while another finds the argument a silly conspiracy. Why does one person find x believable but not anti-x (or y)? Clearly it is because one person is convinced by one case and not the other, but the question is why? Why is a person compelled and convinced by x, what occurs in the act of a person being compelled and convinced, and why does what occurs in the act occur at all?

Thinking and perceiving are not the same. If I look at a window and think about my grandmother, I perceive the window, but I do not think about it. However, the moment I stop daydreaming and realize ‘the window is dirty’, I am now both perceiving and thinking about the window. Perceiving is ‘processing through body’, while thinking is ‘processing through mind’. ‘Mind’ and ‘body’ are unified when I both perceive and think about the window, but the ‘mind’ and ‘body’ are separate when I perceive the window and think about my grandmother. When what I am thinking about and what I perceive match, mind and body (or brain) are one, though they are apart otherwise. When I perceive the window and think about the window, I am not ‘dualistic’; when I perceive the window and think about grandmother, ‘dualism’, in a sense, is true. The human shifts in and out of being Cartesian.

For Livingston, Hume was ‘among those rare thinkers for whom philosophy itself [was] the fundamental problem of philosophy.’ This is not to say Hume was against all philosophical reflection — in fact, philosophy has a necessary role — only that to understand Hume, we must realize ‘Hume’s philosophy is […] a critique of philosophy by itself [and] its central feature is the dialectic of true and false philosophy.’ ‘True philosophy ennobles mankind; false philosophy distorts, corrupts, and dehumanizes.’ ‘[Hume] sought only to reform the traditional understanding of philosophical autonomy by recognizing the autonomy of custom, that is, by demonstrating that custom is an original and authoritative constituent of speculative thought.’

“Sensualization,” a term coined in this paper, is the giving of sensual or “sense-able” representation to the metaphysical. If I think “I’m hungry,” to say “I’m hungry” is to carry out sensualization. Likewise, if I feel worried and carry a worried look on my face, I sensualize my fear (via a kind of “dark speech,” as discussed in “On Words and Determinism” by O.G. Rose). Ideas and feelings are metaphysical, and unless I sensualize them, I’m the only one who knows about their existence (within me), while everyone else is left in ignorance. If I have an idea about how to improve a house but don’t tell anyone, I don’t sensualize my idea but rather leave it as an idea. Humans are orientated to sensualize versus keep the metaphysical unsensualized, and when sensualization is a good thing, this pays off, but when it is a bad thing, this orientation (and bias) works against our discernment and development.

1. Words have power.

2. Words orientate, and relative to that orientation, create/realize the world/future (of a speaker).

2.1 A person’s world is what a person experiences. What one says determines what and how one experiences the world. Therefore, words orientate the world.

How do humans experience thinking? Is it willed or does it just appear? This might be a strange question, but addressing it might help us decide the way to incubate and encourage the right kind of thinking as a society. Furthermore, we might learn to identify biases that privilege intentional thinking and “low order complexity,” a bias which could hinder creativity and the “high order complexity” which defines necessary and emergent phenomena, without which our society and lives could suffer. Both “low order” and “high order” complexity play key roles in our lives, but our brain seems to be in the business of trying to put all our eggs in the “low order” basket. If we don’t actively combat it, our brain, the great frenemy, will win.

We need to stop believing we can determine what people think and who they are based on how they vote: I believe this assumption is tearing the country apart. Why people vote the way they do is incredibly varied and complex, and if we assume that how people vote is enough by which to know how they think, we’re likely just to use these conclusions to support our confirmation bias, ideology, and the like. Our voting almost always misrepresents us, but if we think it tells us all we need to know, our willingness to listen to one another, heal divides, and become fellow citizens may suffer immeasurably.

If x is true but there is no evidence verifying x, then it is irrational and intellectually irresponsible, yet correct, to believe x. In this situation, it is correct not to believe what is correct, and the right thing to do is to not believe what’s right to believe…

The cost of college tuition is high because businesses rely almost exclusively on colleges to determine employee qualifications. Today, colleges hold a monopoly on credentials, and where there are monopolies, price controls are lacking. Though some businesses, most notably in Silicon Valley, are moving out of the narrow mindset that someone with a college degree is necessarily more qualified than someone without one, this enlightenment has yet to spread through the whole economy. Businesses need to develop their own, personally crafted methods of testing employees without involvement from colleges, which function today as long and expensive IQ tests in disguise (as Peter Thiel notes). Simultaneously, the social stigma against refraining from attending college needs to be effaced, for that stigma empowers businesses to outsource the determination of qualifications to colleges.

If I find something beautiful, I treat it with care. If there is a vase in the kitchen that is notably elegant, I make a point not to bump into it, but if there is a vase made of plastic that I bought for cheap, though I won’t intentionally break it, I won’t be nearly as careful, and if I have to make a hard choice between catching the plastic vase from falling off a ledge and catching a glass one, I could easily choose the glass. Beauty corresponds with value, and if I find something beautiful, relative to the degree I do, I naturally and willingly take care of it. This isn’t to say that beauty is necessary for me to care, but it is to say that beauty naturally inspires consideration and concern without anyone coming along and threatening to put me in jail if I don’t act better…

Neil Postman wrote numerous books on education, though he is most famous for his classic Amusing Ourselves to Death. His thought was deeply shaped by Marshall McLuhan, the mind behind Understanding Media, but he was no McLuhan-parrot; in my opinion, the student rises above the teacher, even though, without the teacher, the student would have been lost. Postman applies McLuhan’s thinking to education, and consequently generates some of the most innovative and provocative thinking about education I have ever read.

The present redefines the past, and the choices I make presently transform what it was I was choosing when I made previous choices. If I choose to go to college and meet the woman I am going to marry there, college suddenly becomes “the place where I met my wife,” as if it was “always” that place. It’s as if the moment “reached back in time” and redefined everything that was, is, and would be, changing what I chose when I chose to attend college. This is what I call a “flip moment” — a redefining experience that changes “what is” as if “what is” was “always” such…

What do we talk about when we talk about beauty? If Wittgenstein is correct that ‘the limits of my language mean the limits of my world,’ then it would seem to follow that “the expressions of my language are the expressions of my world.” Hence, if we can pin down a clear way a word is used and separate it from how other words are used, we might also be able to isolate a distinct experience and/or “use,” and thus arrive at a distinct meaning. From Wittgenstein, we can then move into phenomenology: the effort to define a word becomes the effort to define an experience, to achieve meaning.

Imagine someone will give you $10 for one hour of work. Now imagine that he will give you two hours of work if you agree to be paid $9 an hour, three hours if you agree to be paid $8 an hour, and so on up to ten hours. At ten hours, you would make the same amount as you would have for one hour of work, and so, at some point, it becomes illogical to trade wages for hours. Consider the following…
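The wage arithmetic above can be sketched quickly. A minimal illustration (the dollar figures follow the text’s hypothetical offer; the function name is my own):

```python
# Total pay when each additional hour of work costs $1 off the hourly wage:
# $10/hr for 1 hour, $9/hr for 2 hours, ..., $1/hr for 10 hours.
def total_pay(hours: int, base_wage: int = 10) -> int:
    wage = base_wage - (hours - 1)  # wage falls by $1 per extra hour agreed to
    return hours * wage

for h in range(1, 11):
    print(f"{h} hour(s) at ${11 - h}/hr -> ${total_pay(h)} total")
```

Running this shows the total rising to $30 around five or six hours and then collapsing back to $10 at ten hours, the same as for a single hour of work, which is the point at which trading wages for hours becomes illogical.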

1. A is A.

2. A is A is A is A is A is A…ad infinitum

3. A is A.

4. A.

5. A, A, A, A…eternal regression

6. While “A is A” ad infinitum, A eternally regresses.

7. Hence, “A is A” signifies “ ‘eternal regression’ is ‘eternal regression’ ” ad infinitum.

In the modern world, as brought out by the debate between Isaiah Berlin and A.J. Ayer, two epistemological errors are common. The first is that when told by the teacher that “it is raining outside,” the students conclude that since they haven’t seen it raining, they have no reason to believe that what the teacher claims is meaningful. The second mistake is that the student who is on the verge of running out to see if it is raining stops himself, because he realizes that he has no reason to believe the teacher’s statement is true, and so has no reason to check the weather. After all, the student has been taught that there is no meaning without verification — empiricism is all the rage.

The “Protestant Work Ethic” that the sociologist Max Weber identified in Protestantism is not the description of an essential dimension of Protestantism, but the description of a symptom of something deeper. That deeper problem in Protestantism is its susceptibility to Heideggerian and Deleuzian “capture” due to the Protestant tendency to reject “sacramental ontology.”

How do we know about God, and how do we live out that knowledge? Reason and revelation are often placed in opposition to one another, but from Austin Farrer we can learn to appreciate how reason makes it possible for us to ascent to a “vague God” that can make us “will” to experience “the particular God” of Jesus Christ disclosed in revelation. Without reason, we could never make it to revelation. This epistemology understood, we may also begin to find ways of grounding axiomatic positions in Christian theology, as well as unpacking the meaning of some of Christianity’s key phrases…

No one who lacks critical thinking thinks they lack critical thinking, for it takes critical thinking to realize you lack it. Hence, when it comes to defining critical thinking, we are presented with a paradox. To start, no one reading this paper will think they need to read it, for no one thinks they aren’t familiar with critical thinking, and yet this sense of familiarity is precisely why a person would need to read this paper. There will be readers who can critically think and those who cannot, yet everyone will think they belong in the first camp, for who doesn’t think they exercise critical thinking? And this is precisely why this work is necessary and precisely why it seems unnecessary. Critical thinking is surrounded by irony and paradox.

In line with the thought of Kurt Gödel, if all ideologies ultimately cannot ground themselves axiomatically, meaning that “autonomous rationality” is impossible (a worldview that is rational “all the way down”), how is it we justify our ideology? It is clearly with something “arational” or “(un)rational” (note I didn’t say “irrational”), but is that “something” experiences, emotions, imagination — what? I don’t deny these “(un)rational” means of ascent serve a role, but Samuel Barnes, the mind behind Missing Axioms, has brought to my attention the unique role of martyrs, soldiers, and saints. Barnes has taught me that blood is epistemologically significant, and, for that, I am in his debt.

“What is the meaning of life?”

Do you mean the word “life” or the phenomenon of life? If I were to ask you, an English speaker, “What is the meaning of (insert French word for life)?” you would probably answer with a definition, while a French speaker may leap straight into existentialism. Likewise, when we ask, “What is the meaning of life?” we are asking life to give us a definition (though we may not realize it). Unfortunately, life is silent. Therefore, we can only ask other people, who, assuming they share our language, will interpret the question to be directly a philosophical one, when a philosophical understanding of the question cannot be answered until after the word “life” is defined (which only “life,” forever silent and inanimate, can completely define)…

Humans seem to have the ability to create universals by speaking, yet there are only particularities in the world…

“The rage of Achilles” is so problematic because Achilles seems to have the ability to destroy fate and tear down the spacetime continuum. What seems to be a linear war is actually an event in which the entire cosmos hangs in the balance if things don’t happen “in the right order.”

The brain and the mind “have nothing in common.” The use of the word “nothing” here is a play on words, for Cadell proceeded to claim that the mind is a “lost cause,” something that is essentially “an absence”…

The title of this paper alludes to Legitimation Crisis by Jürgen Habermas, a prophetic book nearly forty years ahead of its time. It warned that we were losing confidence in political institutions, rendering those institutions ineffective and profoundly damaging democratic processes. Today, the term “legitimation crisis” is often used in reference to socioeconomic and cultural institutions, bureaucracies, and governmental processes in general, and in this paper, I will suggest the term “legitimation crisis” could be used to refer to nearly everything in modern life…

“Monotheorism” is the belief that there exists a single theory that can explain every given phenomenon and/or given event, and it is human nature to be monotheoristic.

1. Reading is an act of trust.

a. We can’t check all the author’s sources.

b. We can’t check to make sure the author didn’t steal ideas.

c. We can’t ask the author whether a fictional character is being sarcastic or ironic.

1.1 If we pick up a book on Vietnam by a reporter “who was there,” we must trust that the reporter actually tells us about things he or she really saw.

a. The reporter might be lying.

b. The reporter might have a bad memory.

Societies and stories are similar in how they work and fail. Like a story, a society that fails to maintain in its people “a trance of believability,” of legitimacy, is a society in decline.

Where things are “without nothing,” then things are “complete in themselves” (there’s “nothing else to see”), but where things are instead “lacking,” things are incomplete (there’s more to the story).

The free market works well because entities aren’t “too big to fail” and can meaningfully compete, and yet “rationality” and “self-interest” — principles which drive wealth creation — drive entities to try to make themselves TBF, hence driving them to threaten if not ruin the wealth creation which justifies their existence.

There is technically no such thing as “meaningful experiences,” only “meaningful memories (about experiences).” An experience is precisely that in which thought is not involved: it is ultimately a matter of perception, which means it is a matter that doesn’t involve thinking or meaning. There cannot be meaning where there isn’t thought, so “pure experiences” are necessarily meaningless. And yet that meaninglessness can be a source of wonder and beauty.

“Pure experience,” for Keiji Nishitani, is basically the experience before Lacan’s “mirror stage” when a child doesn’t recognize his or her self in a mirror; during this time (which we all went through), there is no “hard line” between objects and subjects.

What should we do today when we return to a land that was once ours but that we do not recognize?

What a thing “is” cannot be separated from what a thing “means,” as two sides of a coin are inseparable and yet distinct. A given cup is ultimately a collection of “atomic facts.” Therefore, a cup isn’t a “cup”: what a cup is isn’t what it “is” (to us). To humans, the is-ness of a cup cannot be understood; therefore, when humans speak of is-ness, they speak of what a thing “is” (to them). In other words, what a thing “is” is what a thing “means.”

The rational and logical end where death and apocalypse begin; there, the border of thinking is reached…

Section One of a Philosophy of Glimpses

It’s hard to think of a more loaded word in philosophy than “metaphysics,” and it can mean a hundred different things to a hundred different people. To start, it would be useful to review some possible understandings of the term…

Section Two of a Philosophy of Glimpses

Phenomenology is the study of how things “unfold.” It is the study of what x is “like” primarily, with estimations of what x “is” following only secondarily. Even if Kant is correct and the noumenon proves uncrossable, the fact x “unfolds” like y instead of z will give us reason to think x “is” more like b than c…

Section Three of a Philosophy of Glimpses

Any effort to establish a “New Metaphysics” will have to defend itself against Derrida, who seems to have deconstructed all metaphysics with his masterpiece Of Grammatology. Why I think Derrida failed is elaborated on in “On Typography” and “(Re)construction,” both by O.G. Rose, but here I will present the outline of the case.

1. Derrida deconstructed metaphysical systems which rely on “ontological gaps” but not metaphysics focused on phenomenological experiences of apprehension.

2. Derrida deconstructed metaphysical efforts to say what things are like “in and of themselves” (across the noumenon, per se), but Derrida did not deconstruct metaphysical efforts which focus on what things are “like” in their “unfolding.”

3. Derrida deconstructed “metaphysics of judgment” but not “metaphysics of apprehension,” “metaphysics of gaps” but not “metaphysics of reading.”

4. Derrida deconstructed metaphysics which open “gaps” between surfaces and depths, parts and wholes, etc.

Derrida deconstructed “metaphysics of gaps and judgment” but not “metaphysics of experience and apprehension” (in other papers, I say that Derrida deconstructed “the metaphysics of the book” but not “the metaphysics of reading”). By basing a “New Metaphysics” on phenomenology versus (Platonic) systematizing, we can justify engaging in the practice of metaphysics again…

Section Four of a Philosophy of Glimpses

Does phenomenology really overcome the problem of “presence” — the problem Derrida claims signifiers, stuck endlessly deferring, can never overcome? This is what Derrida is getting at with his language of différance and “trace” — why does phenomenological experience avoid the problems of language and not fall into its own “ontological gap”? What is experience if not a “presence”? This was a point Lennart Oberlies raised, and I believe it deserves special elaboration.

Section Five of a Philosophy of Glimpses

Thomas Jockin made the point that not all “lacks” are nothing, and that the conflation of these categories has deeply hurt our capacities to reason, especially to reason metaphysically. This inspired a paper called “Lacks Are Not Nothing,” and here I will try to give an account based on that paper to explain the difference between “lacks” and “nothing.”

Section Six of a Philosophy of Glimpses

Phenomenology is an “art-form” of observation and careful distinctions based on our experience. We draw distinctions between “love” and “like” by taking into consideration how one “unfolds” versus the other. Since x “unfolds” y way while b “unfolds” c way, there is “reason to believe” that x and b aren’t identical. Maybe they are somehow, maybe they overlap here and there, etc., but if “love” unfolds y way and something “like love” unfolds z way, then there is reason to think that the thing “like love” must not be identical to “love.” And on these grounds, we now have reason to continue or conclude a new philosophical investigation…

To live is to be conscious; it is to inhabit a mode of being and thinking; it is to hold a set of memories; it is to experience a wide range of emotions; it is to know a wide range of people and things. Everyone who is conscious experiences such things, but only you experience what you experience and how you experience it. This helps constitute your-self, and you can never inhabit the self of another. Hence, there is a gap between you and others, and a sort of “hole” in others that you don’t have in yourself…

Section Seven of a Philosophy of Glimpses

A being with consciousness and will is able to shape its own “formal cause” (to some degree), which means that such a being can also shape its “final cause.” While a cup cannot change its formal and final causes, I can change the formal and final causes of myself.

Section Eight of a Philosophy of Glimpses

If free will exists and humans can be “toward” “lacks,” humans aren’t purely physical but “(meta)physical,” though that doesn’t mean humans are necessarily not an “emergent” product of ultimately physical forces (that would be a line of inquiry that exceeds the scope of this work). Considering this, we are capable of experiencing “(meta)physical” beings, events, etc. in ways that purely physical or purely nonphysical beings could not…

Section Nine of a Philosophy of Glimpses

‘Who has seen the wind?’ — Christina Rossetti starts her poem with this profound question. ‘Neither I nor you,’ but we have caught glances of trembling leaves and bowing trees, and now it is up to us to remember what we saw when the wind was ‘passing through.’ What passed over us? To answer, let’s start with what we felt.

On Tropics of Discourse by Hayden White with Davood Gozli

As we can’t visit Lynchburg and just visit “the bank” (because we also have to “visit” all the surrounding buildings, roads, citizens, etc. on the way), so likewise we can’t just discuss “The American Revolution” without also discussing American agriculture, American assumptions about the virtue of freedom, the English language, American literature, and so on…

The term “dialectic” is used throughout philosophy but not always in the same way. Some philosophers by “dialectic” mean merely a “back and forth,” like a democratic debate. People will talk about the “dialectic” between Liberals and Conservatives, Republicans and Democrats, and so on. In this first sense, a “dialectic” and a “debate” are extremely similar, and the key point is that this kind of “Discussion Dialectic” seeks to end the dialectic. The goal is resolution, for the involved parties to come to an agreement that stabilizes the situation.

But this is not the only kind of dialectic…

We are a problem that can be managed but never solved, and seeing as “ethical situations” involve people, ethics is also a subject which cannot be solved once and for all. If we determine in one situation that “x is wrong,” it won’t necessarily follow that we never have to worry about x again or that x is always wrong. In c situation, x could be wrong, while in f situation it could be good, and yet tomorrow x could be wrong in f situation — it depends. It will not do for us to say, “x is wrong,” and by that mean always and/or unconditionally, for that is too A/A in an A/B world: it is to take an idea (“x is wrong”) and press it down and over the world, flattening the world. Instead, we need to form a dialectic between our ideas and the world, which would be A/B: perhaps “murder is wrong,” but it would not necessarily be the case that every instance of “ending a life” was murder; it could be the case that some instances of “ending a life” were only “killing.”

Thinking “about the world” is arguably thinking that only “responds” to the world: its cause and origin is arguably the world. But thinking which “wasn’t about the world” — that misunderstood it, that was imaginative, that was completely abstract — didn’t strike Hegel as a “response” to the world, but as its “own” cause and origin (thus, created and creative). It struck Hegel as erroneous to treat this second kind of thinking as identical to the first or to — worse yet — “bracket it out” as error and irrelevant, which arguably is what most Enlightenment thinkers did, absolving themselves of the responsibility to consider the implications of the second kind of thinking. Hegel, though, wouldn’t grant himself that luxury.

Thanks to technology, everything ‘in this world has become everybody’s issue.’ People we’ve never met ‘are now involved in our lives, as we in theirs, thanks to the electric media.’ What we are oriented “toward” has dramatically changed in our modern age… In the past, it wasn’t possible to be “toward” events much beyond one’s locality, family, community, or work; yes, people could read about the war, events in Europe, and so on, but we couldn’t regularly receive “live updates,” hour by hour, about everything that was happening everywhere. In the past, humans weren’t simply more isolated, but also more “truly ignorant” about global events: people not only didn’t know what was happening, they didn’t know they didn’t know. “True ignorance” can cause major problems, but so can bearing knowledge that the knowers don’t know how to bear…

We don’t tend to think of void as what makes being possible, but instead what needs to be “removed” so that being can flourish. Where there is void, being is “sucked in” as if by a black hole; in this way, voids are threats to being, not enablers of it. Worse yet, we seemingly don’t even have a robust category of “not-thing” like “void”: we generally have a dichotomy of “being” and “nothing,” which generally means that we only have a category of “being” because nothing is, well, nothing. “Nothing” for us is a “dismissal category,” a category we use to say, “It isn’t” and “It doesn’t matter.” It’s a “limiting concept,” a “boundary” — we suggest “things can’t be nothing,” which means if we’re talking about nothing, we’re talking about nothing and wasting our time. And so we don’t talk about it, and instead focus on being…

“Lacks” and “holes” are very similar, and both create “ambiguities,” for they exist between “the present” and “the absent” (like Schrödinger’s Cat). Because we are A/B, we must face ambiguities and decide if, in our minds, they are “lacks” or “holes”…

‘Language creates a worldview’ more than a worldview creates language. This isn’t to say worldviews don’t have any effect on language, but that language has an incredibly powerful impact on how we think about and see the world. As highlighted by Neil Postman in his book The End of Education, I. A. Richards would divide his class into three groups and ask each to write about language, but he would also provide each group with an opening sentence: either ‘language is like a tree,’ ‘language is like a river,’ or ‘language is like a building.’ ‘The paragraphs were strikingly different, with one group writing of roots and branches and organic growth; another of tributaries, streams, and even floods; another of foundations, rooms, and sturdy structures.’ As the exercise made clear, metaphor influences what we say, and to some extent, ‘what we say controls what we see.’

We tend to think of informative statements as either truths or lies, but what about something we can’t identify as either a truth or a lie? What about something that makes identifying “what is the case” harder, that blurs the line between “truth” and “falsity” beyond recognition and/or that convinces us that we cannot recognize the difference? That doesn’t seem to be a “falsity,” and yet right now that is the only term we seem to have at our disposal, suggesting the need for something else. In this short paper, a term I would suggest is “blur,” and such a term is especially needed in our Internet Age…

What’s it like not to know what we’re talking about? Unfortunately, it’s often like knowing what we’re talking about. Ignorance feels like knowledge, and we, informed about this but uninformed about that, usually participate in “civil debate” unaware that destroying democracy and contributing to it feel the same. In each of us, ignorance is certain.

As discussed with Samuel Barnes on “Truth Organizes Values,” the dream of the Enlightenment was generally that there was only “one internally consistent system,” meaning that “coherence” of a worldview would necessarily correlate with its “correspondence” to reality. Unfortunately, it turns out that a system can theoretically be utterly “coherent” and non-contradictory, and yet nevertheless not “correspond” with reality at all. This being the case, there is no necessary reason why there cannot be an infinite multiplication of conspiracy theories, nor why “information warfare” cannot convince us of practically anything. If there were only “one internally consistent system which corresponded with reality,” then both conspiratorial thinking and propaganda would be much more “bound” and stoppable by rationality; instead, it turns out that rationality is mainly in the business of “coherence” versus “correspondence,” and thus rationality can make the problem worse…

To remain in the Cave is not to be stupid; in fact, we can stay in the Cave and be brilliant. What keeps one in the Cave is failing to apply one’s brilliance to the “right ends,” and/or letting that brilliance be “captured” by a zeitgeist or “contained” within some mental prison. Why is this an important point? Because imagination, which is incubated by art, perception, and experience (as will be expanded on), determines “the bounds” in which rationality can operate. We cannot think about what we cannot imagine, and that means imagination goes first in intellectual development (or experience — something aesthetic leads the way). Rationality is not the source of its expansion, only its coherence. Those who stayed in the Cave did what was “coherent” and “rational”: they became best at the memorization game. Neither able to imagine they could leave nor willing to walk out of the Cave, the prisoners who remained did what made the most sense: they trained to win the game of memorization.

We’re explained when we know why we’re here, but we’re not addressed until we know why we’re here. A strange opening sentence, yes, and yet I would wager that you know exactly what it’s trying to articulate. Without a second thought, we know the difference between “here” and “here” — we’ve known it our whole life, every waking moment. We know that we were born, that we stay alive because we eat, that we travel through space because of our legs, and yet none of this feels like it’s addressing us (it feels “beside the point”). It’s explaining our physical composition, our need for energy, our body — but we are nowhere to be found in the explanation. And yet we live it — we’re always living explanations in which we cannot be found…

There is a story of a donkey caught between two equally sized piles of grain. Because the piles of grain are the same size, the donkey cannot decide which to eat from, and so the donkey starves to death. The donkey couldn’t make a rational choice: the donkey was paralyzed. Generally, many Greek heroes find themselves caught in similar situations (and their level of “perfection” doesn’t help), but Biblical heroes are different. Biblical heroes find themselves between what look like two equally sized piles of grain, but then thanks to God, they realize one pile is a little smaller than the other, and so they escape paralysis, make a choice, and eat from the larger pile. Additionally, for the Biblical hero, one of the piles of grain might be poisonous or contain an infectious disease that the Biblical hero might take back to loved ones, so it’s very important the Biblical hero chooses well under God’s guidance (and he or she must choose or starve to death). Unfortunately, the Biblical hero is sinful and has bad hearing…

I checked three translations of The Republic — one by G.M.A. Grube, another by Benjamin Jowett, and lastly Allan Bloom (not that these are necessarily the best translations or something: they were just what I had available) — and the verdict seemed clear: the prisoner did not act alone. Nowhere did the prisoner free himself or choose to ascend out of the Cave without prompting, so why in the world then did I recall the story as a narrative of individual enlightenment and self-ascent? Sure, I knew the Allegory involved education and learning the truth, but the imagery and “movie in my head” of the Allegory was one of chosen and autonomous “ascent” — I didn’t recall the prisoner being “dragged out” as the explanation for how he escaped. I recalled that in the discussion with Jockin as a “possibility,” a hypothetical, not as what happened. What was going on?

. . .

For more, please visit O.G. Rose.com. Also, please subscribe to our YouTube channel and follow us on Instagram and Facebook.

Iowa. Broken Pencil. Allegory. Write Launch. Ponder. Pidgeonholes. W&M. Poydras. Toho. ellipsis. O:JA&L. West Trade. UNO. Pushcart Nominee. linktr.ee/ogros