A Complete List of Short Pieces

An Index of Works by O.G. Rose (First to Newest)

How do I know when I’m being anxious versus when I’m being wise? They feel so similar. I don’t want to take costly and unnecessary precautions, but I also don’t want to run out of toilet paper. Let’s discuss.

Marx emphasized creativity but failed to identify what I call the artifex or “creator class,” which is made up of entrepreneurs, inventors, and artists. An artifexian is anyone who creates or recreates a means of production and/or a thing to be produced. I think Marx conflated creators with the general proletariat, and I think that’s why his material dialectic is incomplete.

After a few hours, the adults finish their intricate sand-city. There are buildings with windows, streets, taxis, and even a small Statue of Liberty. The parents stand up, brush off their knees, rest their hands on their hips, and smile at one another.

We are what we love, to allude to James K.A. Smith’s invaluable phrase, for what we love is what forms our habits, and our habits form our character. Importantly, Scruton argued that we find beautiful what we love and love what we find beautiful, and if Smith is correct, that means what we find beautiful shapes our habits and character.

The word “midwife” sounds archaic, like something out of the Middle Ages. “Doctor” sounds professional and safe: when people hear that you are using a doctor, they are familiar with the term, and it strikes them as modern and up-to-date. But “midwife” is a word that’s hard to place. It sounds old. It doesn’t sound safe. And why is “wife” in there? Midwives are halfway married? For more on the origin of the name click here.

Fire is dangerous but keeps us warm. The key is that we shouldn’t play with fire. Likewise, we shouldn’t play with hope, for when we’re careless, we start calling “hope” what is actually an “expectation,” something that can incubate an entitlement spirit and leave us vulnerable to boredom and disappointment.

Government and democracy also need to maintain legitimacy, as Jürgen Habermas discusses in his important Legitimation Crisis. If people don’t believe in “the system,” they will likely feel existentially anxious and oppose it. If people believe the process by which elected officials take office is corrupt or that the government doesn’t reflect the will of the people, the people will not believe it is right to follow the government. In fact, in their minds, a conflict between righteousness and the State can arise, moralizing civil disobedience.

Let’s say you need glasses. Is it practical for you to wear them? Absolutely: glasses pretty much make all practice possible; it would be hard to walk to the fridge without them. Glasses won’t help you lift a box onto a shelf directly, but indirectly, they are absolutely essential…

But there’s a problem: it’s rational for businesses to make themselves invincible, and if it’s possible, that’s what the smart businesses will do. And to make a long story short, America made it possible by mixing the markets and government. How? Read “No Exit” — here you’ll just have to take my word for it.

The point is that knowing what to believe about anything is super hard, but it’s especially hard to know what to think about something that’s still in the process of happening now. The newer the story, the more intense the problem of certainty (the more Ludwig laughs)…

…I think we need to take a moment to stress that someone can listen to you and still not think like you. The assumption seems to be going around that if someone actually listened to me, they’d change their views and think like me. The same mistake happens with empathy: if someone were actually empathetic, the disagreement would vanish (because they’d think like me). Agreement seems to be the litmus test for determining if someone is listening or empathetic, because how else could we tell? (Other than, say, trust and “assuming the best” of others, which would leave us vulnerable to manipulation and worse.) But if that’s the case, then listening and empathy become practically indistinguishable from indoctrination.

Figuring out what and how to think is harder than most schools suggest. Mastering reading and memorization, to start, are not enough.

The following is a list of questions, laid out in a suggested order, that people can follow to determine if they should believe something. Without a systematic guideline for thinking, insanity isn’t out of the question.

Please don’t assume this outline is perfect, and perhaps the order needs adjustment, but at the very least, I hope it helps…

It is reasonable to believe a thing doesn’t exist if you can’t see it and it “is” the kind of thing that, if it existed, you would be able to see. On the other hand, it is unreasonable to believe a thing doesn’t exist just because you can’t see it if it “is” the kind of thing that can’t be seen. Perhaps it’s microscopic, a virtue like justice, or an alternative dimension of space and time? Hard to say…

You are born in a theater and have never been outside. The building is condemned, but not to you, because as far as you’re concerned, home is how all buildings are supposed to be: the walls are cracked; the roof leaks; mold expands in the corners. You have never seen the signs outside the entrance that warn about danger, and even if you had, the ink wore off years ago…

1. Verification is where we try to find reasons to believe what we believe, while falsification is where we try to find reasons not to believe what we believe. If we try to falsify x and can’t, then we have all the more reason to believe it; if we try to falsify x and succeed, then we shouldn’t believe x anyway. Either way, we’re better off.

2. It could be said that falsification sort of entails verification, but verification doesn’t necessarily entail falsification.

3. Confirmation bias is always what other people have…

David Hume made an extremely valuable distinction between “good philosophy” and “bad philosophy.” Hume understood that philosophy itself could be a problem, and that if reasoning did not ultimately defer to a “common life,” it would become a force of destruction…

Given infinite information over infinite time, any networks of ideas that don’t internally contradict will be discovered, and precisely because they maintain internal consistency, the networks will be plausible…

1. To talk about consciousness is to take your life into your own hands.

2. Consciousness is where thinking and perception mix like milk and dye. (Consciousness and sub-consciousness also mix inseparably.)

In line with “On Thinking and Perceiving,” what I think about is “conscious” while what I perceive is “sub-conscious” (or “below consciousness”). When I think about a chair, I am conscious of it, but the chair beneath my thoughts that I perceive is sub-conscious. When I think about a chair in a room I perceive, the room is sub-conscious, while the chair is both conscious and sub-conscious. In this sense, my sub-conscious is the context of my conscious mind, as perception is the context of thought.

3. Consciousness is paradoxical and/or ironic.

Where there is freedom, there will be limits, so the existence of limits does not necessarily prove the nonexistence of freedom. In fact, limits are what make freedom possible and could be evidence of its presence. Thus, if determinism is to disprove free will, it must prove not so much limits, but external influences on a will that keep it from being free. Keep in mind that a will that influences itself is a free will.

If candidate w were Pro-Choice but supported every other issue you supported, while candidate z were Pro-Life but opposed every other issue you supported, would you vote for w instead of z?

If candidate x were a racist but supported every other issue you supported, while candidate y weren’t a racist but opposed every other issue you supported, would you vote for x or y?

Is there an “ultimatum issue” in your worldview?

Jobs and money are created, so it does not necessarily follow that someone takes a job or paycheck from someone else in working and gaining a raise. The rich are not necessarily rich at the expense of the poor, as the employed are not necessarily employed at the expense of the unemployed. But what if growth stagnates? What if wealth ceases to be created?

There is a lot of talk today about finding meaning, and I won’t argue with any of it. If you haven’t read Viktor Frankl or Daniel Pink, you should: a life with all the riches in the world but without meaning is a life suffered. However, I think there’s a problem: the advice we’re given is to do whatever it is we are intrinsically motivated to do, and though that’s all the advice a lot of people need, there are lots of people for whom this isn’t enough guidance at all. They don’t know what they want. They don’t know what they are intrinsically motivated to do. And so learning about the importance of meaning can almost make their suffering worse. If they didn’t know they needed a meaningful life and didn’t do something meaningful, that would be bad, but now they know they should live a meaningful life and aren’t, and that’s worse.

Justin Murphy and Johannes Niederhauser are starting a class on Heidegger and Deleuze, and I really enjoyed their conversation. I think today Heidegger would be especially horrified by how we can’t take a walk in the woods anymore without thinking about potential tweets or posts we could make about our walk. Our “towardness” to the world has changed: everything is a potential commodity for our online lives. This by extension controls our horizons and ways of life, in ways that even capture and “fence in” our imaginations: we live in societies of control in many ways.

Socrates once said that “the unexamined life isn’t worth living,” but I agree with Merold Westphal (who’s a genius, by the way) that Socrates is simply wrong. There are plenty of people who have never read Nietzsche or Plato who go on to live deep and fulfilling lives.

Still, I don’t think Socrates was totally off the mark (I’m biased and like philosophy, after all). Personally, I think it’s better to say, “the unexamined life is riskier to live.”

Reality is more like a story than a collection of facts, and yet when someone claims something is like a story, we tend to associate it with being fictitious. Paradoxically, we associate “raw facts” with depicting reality accurately, when none of us live in a world of “just facts.” Subjectivity is very real in our experience, so unless I’m going to live in a world without the very subjectivity that makes my awareness of facts possible, then subjectivity must be included in my depiction of reality if that depiction is to be accurate. And yet the moment I do so, I can be accused of making my depiction inaccurate, and indeed, maybe I am: since subjectivity isn’t as “solid” as facts, it can be much harder to know if I’m giving subjectivity the right treatment and incorporating it properly. This can increase anxiety, which can increase the temptation to escape that anxiety by removing subjectivity again (as I will likely be encouraged to do).

If the intellectual goal of our lives is certainty (and worse yet, if certainty is moralized), then with a single doubt, we lose the goal. However, if the goal is confidence, we can have doubts and even many doubts, and not lose what we’re after. Additionally, if the goal is certainty, diversity of opinion, people, etc. are all threats, because difference creates reason to doubt, and if we must have certainty, we cannot have even a single doubt. But if the goal is confidence, the encounters with difference are not threats; in fact, they can help us expand our views and test our confidence, perhaps strengthening our confidence in ways it should be strengthened and weakening it in ways it should be weakened.

It is impossible to escape having a worldview or philosophy: the battle is keeping it from becoming an ideology that “does our thinking for us” and/or “makes the world a worse place.” Worldviews are structured like stories, but problematically, so are conspiracies, philosophies, ideologies, and the like. We cannot defend our minds from falsities by identifying structures alone, which means we have to do a lot of investigation that cannot promise us any fruitful results.

Memory is so critical to thinking that it is often ignored. Similarly, oxygen is so important to biological survival that it is taken for granted. It is possible for there to be memory without thinking, but not thinking without memory. This is because with memory, I can still mentally experience images and thoughts, even if I cannot connect them with logic into thinking. Without memory, even if I have a self, it will be impossible for me to meaningfully discuss that self, for I lack the mental material by which to define and explain what that self has gone through, experienced, and how that self has been understood by others.

Formalism is the act of creating structures in which entities like “beauty,” “goodness,” and “truth” can be defined and judged. It’s a kind of philosophical recipe where we say that if we have a little x, a spoonful of y, and a pinch of z, we’ll have ourselves a beautiful painting. Formalism is extremely tempting because it creates a clear standard by which to judge things, to create things, to strive for things to become like, and so on. Without formalism, we can feel like we’re lost in a sea of chaos, but the cost of not feeling lost is restriction.

Imagine a person wore an earring on their right ear and looked in a mirror; the earring would look to be on their left. Similarly, when it comes to their arguments, Liberals and Conservatives often use the same forms with different accidents: their arguments possess identical structures, though the details of their arguments vary. I believe failure to understand this “sharing of argumentative forms” leaves us defenseless against ways our minds seek to trick us yet again.

We use the term “subjective” to refer to a person’s personal take on this or that. We have subjective opinions, subjective views of the world — pretty much everything humans do can be called “subjective.” Tastes, sights, likes — all of it. But the word “subjective” is problematic, for though we tend to know what it means when asked directly, it bears some problematic connotations.

This is a preview list of short pieces I wrote focused on “thinking about thinking,” mental models, epistemology, and the like…

You’ve probably heard the rumors by now that the protestors yesterday were actually members of Antifa pretending to be Trump supporters in order to stage an invasion of the Capitol that would destroy Congressional support for investigating claims of election fraud.

First, I want to note how quickly this narrative emerged. It didn’t take but an hour for the idea to spread across the internet like wildfire. Some people came up with it, and instantly the idea dawned upon millions…

We all know that we can’t shout “fire!” in a crowded movie theater, that our freedom of speech can’t put other people in danger. We also can’t harass, make death threats, and so on — none of that is controversial. Yes, there’s a hard-to-define “gray zone” when trying to decide what constitutes unallowable “hate speech,” and we all know about that debate (which I discuss in “The Spectre of McCarthyism”) — and though it’s a critical discussion, that’s not what I’m interested in today.

Instead, I want to focus on an idea Mike M. brought to my attention: Is social media inherently “a crowded movie theater” (especially if we have a large following like the President)? Additionally, if we are the President, are we considered the equivalent of a fire marshal, and so for us to shout “fire” is especially consequential?

Is there ever real progress in philosophy? What about literature, sociology, economics — don’t all the “soft sciences” have the same problem? I think a lot of it hinges on the question of whether progress is possible without certainty. Personally, I mostly think certainty is impossible, but we can still garner confidence, and not all confidence is equal (some is better than others).

We already talked about the possibility of progress in philosophy, but a few more things can be said. Is it true that there are “no answers” in philosophy, only questions? Again, if our standard is certainty, that might follow, but even if “absolute answers” are impossible, it doesn’t follow that “answers in general” or “better answers” cannot be obtained. This might sound problematic, but it’s not that different from most questions we live with just fine. If I’m asked, “How was your day?” I can only answer about this day: it is not actually possible for me to discover a general answer to this question that I could apply to every day of my life (though, that’s not to say we don’t try with answers like “fine”).

The Making of a YouTube Radical was put out by The New York Times in 2019, and it has sparked a vigorous debate ever since, a debate that has come back into prominence with the recent invasion of the Capitol. The piece basically argues that YouTube contributes to indoctrinating young men, in particular, into right-wing radicalism. Mark Ledwich recently debated the premise on this tremendous podcast.

Nobody does anything they think is irrational. If they touch fire, which is arguably stupid, they must be doing it because they want to impress someone, feel pain, or see what fire feels like. In light of this desire and want, touching the fire becomes rational to them, even if it’s not actually rational. But unfortunately, only God can ultimately know what is actually rational, and none of us are God. Maybe touching fire gets someone a promotion to being chief of a village somewhere? Can we really say that it’s never rational to touch fire? Seems extremely situation-dependent…

Who doesn’t want to be unified? Anyone out there like division? Not many? Then why does the country seem so divided? Why do so many people feel like “calls for unity” are just propaganda?

Imagine that Darth Vader said to the Rebels, “It’s time for unity” — do you think “unity” would be taken as anything other than “join us or else”? It would also entail a moral threat, for failing to unify with the Empire would contribute to division. And people don’t tend to respond well to moral threats…

What is a “metatalk?”

A metatalk is when we talk about the mechanisms of talking, thinking, relationships, and the like. It’s not just any talking, but a particular kind of talking in which we try to figure out how and why all parties interpret things the way they do, why they feel a certain way, and what they think we’re saying when we say this or that (countless more examples could be made).

“Talking” is about dinner, what we did today, how we’re feeling, etc.

“Metatalking” is about why we thought it was good to do what we did today, why we felt x way when y happened, etc.

Is it good to want people to miss us when we’re gone? Or is that selfish? In one way, it means we want to live a life that matters to people, but in another, it means we want people to suffer. What’s right?

If nobody cares when we die, this might suggest we didn’t live a good life. Worse yet, if people are secretly happy that we’re dead, we probably blew it. So, in the sense that we don’t want people to be apathetic or happy over our death, the phrase “I want to be missed” seems positive.

But, at the same time, that leaves people to be sad over our death, and sadness hurts. Therefore, if we want people to miss us when we’re gone, doesn’t that mean we want people to suffer? And isn’t it wrong to want people to suffer?

Beauty might help us find the balance.

As brought up in “The Conflict of Mind” by O.G. Rose, the amount of justification an argument needs to be accepted should be considered relative to the degree to which the consequences of the argument are contained and individuated versus uncontained and nonindividuated. There are “nonindividuated consequences” — consequences that I suffer because of the choices of others — and “individuated consequences” — consequences that I suffer because of my own choices (we could also say “contained consequences” versus “uncontained consequences”).

If A is B, and B is C, is C equal to A? Yes, that would be a rational conclusion. Now try this one: if A is B 20% of the time, and B is C 15% of the time, what percentage of the time is C equal to A? That’s a lot trickier, isn’t it? (5% sounds right, no?) Well, too bad: we live mostly in a world of probabilities, though the way rationality and logic are often discussed suggests we live in a world composed mostly of basic syllogisms.
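
As a small illustration of why the second question is so slippery, here is a minimal sketch in Python under my own reading of the setup (treating the percentages as conditional probabilities, which the original leaves unspecified): even the “obvious” chained answer only falls out under extra assumptions.

```python
# A minimal sketch, assuming "A is B 20% of the time" means P(B|A) = 0.20
# and "B is C 15% of the time" means P(C|B) = 0.15 (one reading among several).
p_b_given_a = 0.20
p_c_given_b = 0.15

# Only under further assumptions (e.g., that an A can be a C only by way of
# being a B) does a single number fall out; otherwise the question is
# underdetermined, which is the point: probabilities aren't basic syllogisms.
p_c_given_a = p_b_given_a * p_c_given_b
print(f"P(C|A) = {p_c_given_a:.0%}")  # 3% under these assumptions, not an obvious 5%
```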

Descartes does not prove we exist, only that we are a closed system that must assume our existence in order to proceed. Descartes only suggests we cannot not exist, for to think we don’t exist, something must exist to think we’re not around.

Essence is what makes a thing that particular thing. In other words, essence is what makes “that chair.”

Substance is what makes a thing a general thing. In other words, substance is what makes “a chair.”

Form is what makes the idea of a thing, without which the thing would not be intelligible. In other words, form is what makes “that idea of a/that chair.”

“Mental models” are tools through which we can understand the world. Reminiscent of Cardinal Newman’s point that words do not wear their meaning (which means interpretation is unavoidable), data does not “wear on its face” the right way to interpret it, nor does data automatically tell us the right conclusions we should draw. We have to do that work ourselves, but if we use the wrong model or lens through which to understand data, the data won’t stop us from making that mistake. It will remain silent, and, right or wrong, let us do what we want with it.

The way we think about something can be just as important as how hard we think about it. If I try to hammer in a nail with a wrench, it might work, but it might also mess up the job. Nails need hammers, and there are jobs that if I try to use a hammer when a screwdriver is needed, I might break whatever it is I’m working on.

Kennan Grant proposed the following consideration:

If sufficient economic hardship inevitably produces a minority of violent, extremist political powers — be they fascist or communist or what have you — and if that minority is all it takes to intimidate the majority into compliance because the majority is, at their best, protecting their dependents…

Then aren’t you left with only two solutions?

Solution 1: The society never falls into economic ruin.

Solution 2: Families decide, as entire families, to be courageous and defiant. No family member will comply with an extremist movement out of fear for their dependents.

And since solution 1 is (probably) impossible in the long run, that leaves solution 2.

What am I missing?

In my mind, Grant has laid out a useful framework for considering ideological differences (neither he nor I would claim it is a hard “natural law” of political science, but it’s still helpful). It reminds me of James Madison in the Federalist Papers, considering ways to avoid both “majority mob rule” and “tyranny by a minority” (views on which shape views on State size). The framework might help us understand differences between Conservatives and Liberals, Capitalists and Socialists, and though this short work will not endeavor to prove “who’s right,” it might still prove helpful for providing bearings on political discourse today. Also, we might discover some ironies and paradoxes, which is my favorite pastime.

We don’t fully know a language until we don’t have to translate it. As a native English speaker, I don’t have to “translate” English when I hear it: I just “know” what it means. Perhaps in a sense I am translating the words into concepts, but I’m certainly not translating English into Latin and then into concepts. Considering this, I think it’s fair to say that languages we really know are ones we don’t translate: if some kind of translation occurs, it’s so quick and automatic that it’s practically not translation at all.

It’s a cliché now, associating genius and madness: the market is saturated with movies and shows about it. The Queen’s Gambit, PI, Whiplash — I could go on. Why does this stereotype resonate? Well, because Nikola Tesla seems to have loved a pigeon and John Nash developed schizophrenia — the stereotype is backed by evidence. But isn’t that strange? If genius is the ability to reason, and madness the inability to reason, shouldn’t they share an inverse relationship versus correlate?

What does it mean to call someone “smart” if at best all we ever know is maybe 1% of all there is to know? Okay, let’s be generous: let’s say we can know 10%. What was failing in high school? 69%? Yeah, I don’t think any of us are very smart.

Thinking there are “smart people” out there, we come to overestimate how much people know. We need to get it deep in our bones: we don’t really know anything. We know a sliver of a sliver of a sliver of a sliver…of what this universe holds. Our smartness is maybe the size of a gnat, so what does it mean to say someone is “smart?” Not much.

Have you ever met someone who thinks they aren’t creative? A lot of people, right? Very few people are willing to say “I’m creative,” and the people who are creative just seem lucky. And indeed, there probably is luck involved, but what if part of the problem is that we need to stop “trying to be creative” and instead “try to experience beauty?” What if, like meaning, creativity is something we find indirectly more so than directly? What if it’s by directly seeking beauty and art that we can indirectly cultivate our creativity (and sense of meaning)?

An explanation is not evidence. If there is a cup on a table, I could probably come up with a thousand (possible) explanations for how it got there (maybe more). At the end of the day though, only one explanation would be true. If I convinced you that you were obligated to investigate every plausible explanation, then in the name of truth, I would have convinced you to waste a lot of time.

Lorenzo Barberis Canonico recently gave a presentation in which he argued that rational individuals in the Prisoner’s Dilemma will produce an irrational outcome, that the only way to break through this “trap of game theory” is for someone to act “non-rationally.” Lorenzo makes a point not to say “irrationally,” for if the final outcome of a “non-rational” act is “the best outcome” for everyone involved (such as the case in the Prisoner’s Dilemma), it wouldn’t make sense to call it “irrational.” And yet it doesn’t fit to say “rational” either, for those involved had to act against their (apparent) self-interest in order to achieve “the best outcome.”
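
To make the game-theoretic structure concrete, here is a minimal sketch in Python of a standard Prisoner’s Dilemma payoff table (the specific numbers are my illustrative assumptions, not from Lorenzo’s presentation): each player’s dominant reply is to defect, yet mutual defection leaves both worse off than mutual cooperation.

```python
# Standard Prisoner's Dilemma payoffs as years in prison (lower is better).
# The specific numbers are illustrative assumptions, not from the presentation.
PAYOFFS = {  # (my_choice, opponent_choice): (my_years, opponent_years)
    ("cooperate", "cooperate"): (1, 1),
    ("cooperate", "defect"):    (3, 0),
    ("defect",    "cooperate"): (0, 3),
    ("defect",    "defect"):    (2, 2),
}

def best_reply(opponent_choice: str) -> str:
    """Return the choice that minimizes my sentence, given the opponent's choice."""
    return min(("cooperate", "defect"),
               key=lambda mine: PAYOFFS[(mine, opponent_choice)][0])

# Defecting is the individually "rational" (dominant) reply either way...
assert best_reply("cooperate") == "defect"
assert best_reply("defect") == "defect"

# ...yet mutual defection (2, 2) is worse for both than mutual cooperation (1, 1):
# rational individuals produce an outcome neither would choose together.
print(PAYOFFS[("defect", "defect")], "vs", PAYOFFS[("cooperate", "cooperate")])
```

Breaking out of the trap requires at least one player to act against this dominant-strategy logic, which is what Lorenzo calls acting “non-rationally.”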

If I start talking about McDonald’s, you will probably have no idea where I’m talking about: McDonald’s is everywhere. But if I mention Café Du Monde, you’ll probably know I’m talking about New Orleans. Particularity entails situatedness, especially where there isn’t duplication. When talking about the Mona Lisa, we know we are talking about the Louvre — or maybe not. The original, yes, we associate with that famous museum, but now that the world is filled with copies and prints of the painting, perhaps I could be talking about “seeing the Mona Lisa” in my friend’s house. Due to duplication, it’s not so easy to know where we’re situated when talking about the famous painting.

The questions we ask say a lot about who we are — questions suggest identity. If I were a bug, I wouldn’t ask the same questions I do as a person. I might wonder, “Why is grass so tall?” “Do bugs have souls?” “Why do humans squash us?” but probably not much about the Green Bay Packers or Nolan’s most recent masterpiece. If I were a star, I might wonder why I didn’t have arms for hugs; if I were a bird, I might wish I could cook. Even when I genuinely want to know, I cannot help but want to know in a way that is suitable for me.

Generally, there are people who lean more on the side of “wanting” and others who lean more on the side of “willing” (though of course everyone is a mixture). Perhaps we could say that A-personality types are more “wanting people” while B-personality types are more “willing” (though I generally dislike these categorizations). Please note that neither is necessarily better than the other and that we are all a mixture of both: the point this short work will stress is how both personality types can be misunderstood and hurt as a result.

Friedrich Hayek argued that when it came to large central planners, most people assumed ‘that the rise of [dictators wasn’t] the necessary consequence of a totalitarian system,’ that benevolent dictators were possible and that just because large central planners in the past devolved into Nazism or Maoism, it wasn’t the case that they had to end up this way.¹ Is this true? If so, the problem isn’t so much about Right vs Left, but Up vs Down (as Kohr warned).

Does criticism “construct” creators? Are creators and artists balls of unformed clay that, without critical direction, spend all their days as lumps of nothing? That might be what critics like to think, for that makes them extremely important, and furthermore the metaphor makes creators out to be children lost in the dark, stumbling around, trying to figure out what to do. The children are forever lost until someone comes along with “a lamp of criticism” to help the creators find their way, and forever after, the creators are in the debt of “the lamp bringer.”

There are situations that, once we’re in, a tragic trade-off is inevitable. It’s best to avoid these situations in the first place, but until we’re in them, we only have the idea of how difficult the situations will be, not the experiences. “Ideas are not experiences” — as the paper by that name argues — and ideas are much weaker at compelling human action than experiences. Considering this, it’s improbable humans will take preventative measures, especially if those measures are costly and similar historic events (which could provide reference points) distant.

I doubt anyone wakes up one day and decides they want to use complicated language. Sure, we can accuse academics of wanting to show off, and I’m sure sometimes they do, but that’s the simple answer — what’s the real reason for jargon? Well, when the topic we’re discussing is so complex and difficult, we end up using phrases like “ontological negativity,” “substitutionary atonement,” “anarcho-primitivism” — I could go on — just to save time. Explaining every term and justifying the concept every time would mean when we sat down to write a note, we’d end up with a book.

If we live in a post-truth world, that can’t be true, but what is possible is that everyone who disagrees with us lives in a post-truth world. Our world, though, must be the world, because otherwise we couldn’t judge “truth” from “post-truth.” See why the term “post-truth” is problematic? If we tell people they’re “post-truth,” they’ll likely hear “You better start thinking like me.”

How many arguments force us to change our views? In other words, how many arguments are out there that aren’t merely “persuasive” but “undeniable?” Spoiler alert: a lot less than we think.

We tend to experience arguments that favor our ideology and things we agree with as “conclusive arguments,” but they’re probably just “persuasive arguments.” However, since we’ve been persuaded by them, we tend to experience them as “conclusive” — experience plays a trick on us.

“Maps aren’t territories,” so no book can be perfect, but currently we tend to think of “popular books” as containing the main ideas and “nonpopular books” as containing (unnecessary) technicalities (note also that the term “nonpopular” implies “bad,” perhaps contributing to subconscious bias). This is a mistake: there are “Level 1 maps,” “Level 2 maps,” etc., each of which adds valuable direction.

The wonderful Apollos Dionysios the Areopagite posted the following (be sure to visit the website):

The polymath Gottfried Leibniz made a cosmological argument for God’s existence, which is an extension of St. Thomas Aquinas’s cosmological (or contingency) argument. There is another way of stating this same argument, as a situational-cosmological argument:

Axiom: We have limited time.

1.) Therefore, in each moment we have at least two mutually exclusive options.

2.) Therefore, in each moment we prioritize one option over the other/s.

3.) Therefore, in each moment prioritization itself is inevitably one of the options; we can either prioritize and use our limited time well, or we can not prioritize and waste our limited time.

4.) However, we are unable to prioritize prioritization by our own personal will power, because that would require having all the reasons for our own priorities within ourselves. It is self-evident that we do not have all the reasons for our own priorities within ourselves, otherwise we would have omniscience.

5.) Conclusion: Since we do not have all the reasons for our own priorities within ourselves, we necessarily derive the reasons for our own priorities from a force greater than ourselves, in order to use our time well. That force which is greater than ourselves, which has all the reasons for our own priorities, all people call God.

If you deny (5.), then you must either deny

(4.) in which case you have omniscience, or you must deny

(3.) in which case you do not use time well, or you must deny

(2.) in which case you admit that something else prioritizes for you, or you must deny

(1.) in which case you can prioritize more than one option at once, in which case you have omnipresence, or you must deny

the Axiom, in which case you admit eternal life.

Information does not tell us what it means. Words do not give us their definitions. Facts do not force us to view them as evidence for a certain case. We decide the meaning of information, words, and facts, and yet information seems as if its meaning were self-evident, as if anyone would draw the same conclusions as us if they were really trying to think. If they draw different conclusions, it must mean they’re not thinking, that they’re ideologically driven, or worse, that they’re intentionally misunderstanding the facts.

If we know x is good, this knowledge will only be useful if we are able to accurately discern when something is x. If we are incapable of making this judgment, then knowing “x is good” will not be useful, and in fact could be harmful if we wrongly categorize something bad as x and then use that bad thing anyway because we believe it is good. If we cannot categorize well, knowledge often proves useless.

There is technically no such thing as “meaningful experiences,” only “meaningful memories (about experiences).” An experience is precisely relative to the degree that thought is not involved: it is ultimately a matter of perception, which means it is a matter that doesn’t involve thinking or meaning. There cannot be meaning where there isn’t thought, so “pure experiences” are necessarily meaningless. And yet that meaninglessness can be a source of wonder and beauty.

If we have all the information in the world, it will be useless to us if we do not have the ability to evaluate it. This is becoming undeniable with the internet: it’s an amazing research tool, but if we don’t come to the internet with some level of “prior knowledge,” as David Rieff pointed out, or if we don’t gain from the internet a framework through which to understand the internet, the information it presents us with will prove difficult to organize, overwhelming, and probably useless. We won’t have the ability to interpret it, to determine the true from the false, the probable from the improbable, and/or the conspiratorial from the real. We’ll feel like Dante in a dark wood but without Virgil.

So why are we so sure the world out there is real or that it won’t change on us without warning? Well, I think it’s because from “lived experience,” we subconsciously and/or consciously erect our sense of solidness not upon “thought” but upon “perception.” And the problem with perception isn’t so much “subjectivity” as it is “limitedness.”

Do moral absolutes exist?

Well, even if “morally absolute acts (in-of-themselves)” don’t exist, “morally absolute categories” still could.

Murder is always wrong, but admittedly, it is not always clear what is murder versus killing. Killing seems like it is not always wrong (say in self-defense, in stopping a rabid animal from attacking a child, etc.), so if x is “ending a life,” the question is if x always falls under the category of murder (y) or if it sometimes falls under the category of killing (z).

Letters don’t have meaning, and yet words are made of letters. Letters are sounds, and sounds are more “concrete” than words, and yet letters don’t mean anything. Letters seem to be both “concrete” and “abstract,” and yet we tend to think of these as opposites, that where there’s concreteness, there won’t be abstraction. What’s going on?

Living offline and online…

Owning two houses is great, right? You have more equity, more space — lots of advantages! Imagine the two houses are built right next to one another and that both of them are two stories high. Great! But wait, who’s going to clean them?

A society that is bored is a society that will struggle to think well. Boredom is not so much a state of having nothing to do — a person who lives in New York, for example, which is full of activities, can easily be bored — but rather boredom is a state where an individual doesn’t see significance in what he or she could do (it is a state in which a person “doesn’t see any point” in doing one thing versus another)…

We learn from Samuel Barnes, the mind behind Missing Axioms, that it is impossible for us not to possess and exhibit values. As he puts it:

‘The human truth is that you have values, values which emanate from you explicitly and implicitly. Human being can never be contentless. […] Values spew from us in every stride or stumble.’

Considering these eloquent and profound sentences, when we claim nihilism — that “nothing matters” — we claim something that cannot be lived…

If we ever want to destroy a relationship, the following formula is a great guide:

If you cared about x, you would have done y.

Assuming intention, action, values, cares, and the like from facial expressions, choices, actions, body language, and so on — no need to look any further! It’s a great way to make life miserable (and seems so justified too)…

…we can start to see how Strauss and Arendt can come together, for while “German Nihilism” can be an extreme desire to regain values, heroism, ethics, and other “givens,” “the banality of evil” is what can emergently set in within those “givens” (once they are (re)established).

“Thoughtlessness” is not a synonym for “stupid,” as we learn from Hannah Arendt: to be “thoughtless” about x is to “not think about it,” to instead assume it, christen it an axiom, and the like. On the other hand, to be “foolish” about x is to get x wrong, to be illogical about x, and so on. Society doesn’t honor foolishness, but “thoughtless people” can be called “people of principles,” “people of convictions,” and so on. In this way, honor and social capital can be found…

If for one person on the planet a “lack” is objectively real, while for everyone else the “lack” is only subjective, is it the case that the “lack” is objectively real?

2 + 2 and simplistic points on determining truth in our bias/funding/partisan/etc.-obsessed world.

Cadell, Tim, Alex, and I recently started a conversation series on the role of “lack” in our lives. Cadell opened the conversation beautifully by suggesting that, after Parmenides, Western thought has been almost exclusively focused on “being,” which has left us ill-prepared to address the role of “lack” in our lives…

The account of a philosophical journey on how practical questions can help us solve abstract inquiries: it is not an “either/or” decision.

What if there are ideas we must (re)learn every generation, ideas we nevertheless naturally experience as “already learned”?

Can we really call something a “philosophical system” if every part isn’t dependent on every other part?

Guy Sengstock recently shared a beautiful elaboration on the wonder of teaching — that magic of “getting it” — and explored the meaning and nature of that experience. He mentioned “the special learning that reconstitutes the world” and how “the world is co-constituted by us” — the video is worth every minute. Particularly, I wanted to focus on his discussion about the FedEx Arrow…

Ideas cannot be about themselves. Try to think of something that has nothing to do with something you’ve experienced. A unicorn? That’s a combination of a horse with a horn, both of which you’ve (probably) experienced. A time traveling space station? That consists of shapes and colors and likely resembles a machine you’ve seen. Also, you’re familiar with time…

Another term for “Stock Market” is “Capital Markets,” which should remind us that a point of Wall Street is the allocation and reallocation of resources and capital for the real economy. If Wall Street loses its connection with Main Street — if stocks basically have nothing to do with the real economy — there will be costly inefficiencies.

We are careful with words because we don’t want to hurt people, but what about being careful so that we don’t fail to make the most of our lives? The first extremely important concern is the focus of the counselor, but the second, which is equally important, is the concern of the philosopher.

Since it is not possible for us to choose or desire anything “entirely on our own” (meaning “autonomously” and without any reference to “external sources”), then we must look “beyond” thinking to decide “what we should do” […] And what do we see in our immediate experience? Other people living other lives […] [I]f we see in our experience Sam doing x, then Sam provides “reason to think” x is worth doing…

Imagine you were forced to look at something you couldn’t do anything about. Torture, right? What if you were forced to look at a problem you couldn’t solve — wouldn’t that eat at you? Well, paradoxically, that’s exactly what we can do to ourselves when we focus on something. Why? Because the wrong kind of focus can turn off our creative brains, making us less dynamic in our thinking and more linear, which makes us more unable to discover solutions.

It can be rational to distrust the institutions, experts, and authorities we require to be rational, but it is rarely clear when we should distrust them (and which), seeing as we probably need the institutions, experts, and authorities to help us figure this out — which puts us in a vicious circular problem…

To speak generally, to financially survive, Millennials today mostly find themselves stuck with “Continual Work,” while Generation X had a lot more “Completable Work,” and this contributes to the cultures talking past one another constantly. Most vividly, “working hard” is a value that has been complexified, for whereas Millennials must decide when to “pause” Continual Work, previous generations just had to “finish” their Completable Work. Completable Work “decides for us” when we should stop working, whereas Continual Work forces us to decide when we will “pause” (for the sake of a “work/life balance,” perhaps). But if we choose to “pause” working, we can be accused of and feel like “we’re not working as hard as we could.” After all, we didn’t have to “pause”…

David Hume believed that philosophy’s greatest problem was philosophy itself, for philosophy could unleash incredible violence upon the world. At the same time, Hume understood the answer wasn’t to avoid philosophy entirely, for “critical reasoning” was necessary for a people to defend themselves from tyrants, “bad philosophy,” and the like…

Heidegger didn’t like Sartre: the father of Being and Time basically saw Being and Nothingness as trash. When I first learned this, I was surprised: I thought Sartre sounded similar to Heidegger (on first glance). But then it became clear that Heidegger wanted to remove “the subject” from the focus of our consideration regarding “the question of being,” and here Sartre came along and put “the subject” right back into the middle of the conversation. That upset Heidegger, but why? With all the talk on authenticity and existential concerns found in Heidegger, why was this such a big deal?

Imagine a single person playing violin in a room by himself. Two blocks down the road, there is a woman playing violin alone, and three blocks down from her, a different woman is playing a flute. This continues for hundreds of miles with hundreds of musicians. None of the musicians can hear one another; none of the musicians wonder about themselves in the presence of one another. Musicians may feel loneliness, but there is little existential anxiety.

This is soloing. This is isolationism. This seems to align more with human nature…

If we never think philosophically, our positions on these questions will likely be ones we “absorb” from our surroundings versus ones we pick for ourselves. That doesn’t necessarily mean we will be wrong, but it does mean we could end up like cattle stuck between fences. Sure, we have a field we can roam around in, but we’re ultimately not free. At best, we only have free range.

Continental Philosophy is mostly in the business of “knowing by absence and tracing,” whereas Analytical Philosophy is mostly in the business of “knowing by presence and directly.”

“Paradox” and “contradiction” are often used as synonyms, but paradoxes are different. Contradictions are combinations of inconsistencies that negate, which means they can only exist in thought and cannot be experienced.¹ A paradox, however, is a combination of inconsistencies that don’t negate, and this is because though paradoxes may logically negate, they don’t experientially negate. Where there isn’t a strong distinction between “thinking” and “perceiving” or “ideas” and “experiences,” it is only natural for the terms “paradox” and “contradiction” to practically become synonyms, which I think is what has generally happened in the West. This has cost us the category of “paradox,” and where we lose a category of language, we also lose a category of experience (our world shrinks)…

Human motivations are complex. Why people work the jobs they do can be a mixture of reasons like “I don’t mind it,” “It provides for the family,” “I learn some skills,” and so on. Naturally though, we tend to assume linear and simplistic explanations (or at least reflect such in speech), and basically claim that if a person is working x job, he or she “must like it.” And perhaps there is truth to this, but the problematic step is acting like this explanation “explains the whole of it” — a dangerous and natural step…

Philosophers for centuries have struggled with the relationship between freedom and knowledge. If I know there is a million-dollar check in the mailbox, am I really free not to walk up to the mailbox and check? It would seem I am free to deny the option, but am I really?

Since we are in time and can’t discuss everything at once, we must discuss things within sequence, and thus we can never avoid creating the impression that we think x is better than y. For, again, we must discuss things one at a time, and we can always ask “Why is the person discussing one thing and not the other? The very act/choice of doing so suggests the person must think x is more important than y, for otherwise the person would be discussing y.” And so on.

It’s unfortunate we decided to run with the phrase “critical thinking”: we would have saved ourselves a lot of trouble and confusion had we stuck with “deep thinking” or “dynamic thinking.” Instead, we strapped ourselves to a language that suggests we’re profound and insightful if we’re insulting; as a result, someone who criticizes something seems to be someone “who knows what they’re talking about.” This puts a lot of social capital in the hands of people who are hard to please, and I think Charlie Munger is right: “Show me the incentives and I will show you the outcome.” In a world where “critical thinking” is associated with “criticizing,” it’s smart to be difficult.

…Instead, what I want to focus on is the idea that if moral living keeps our “souls unified,” and if it is the case that with a “unified soul” we are more able to discover truth and even beauty, then morality entails epistemological consequences. The more upright a person we are, the more equipped we will be to discover truth. If this is true, then living ethically and knowing truth correlate.

Recently, Eric Jobe initiated a discussion on “Freud’s Group Psychology,” which I enjoyed immensely. It was full of great points and contributions from all the participants, and you can find the full talk here. Particularly, the idea that “group psychology” and “individual psychology” are not “different in kind,” only “different in degree,” struck me as critical. Often, we think that “groups” think and act differently than individuals, but really “the psychology” of groups is simply a manifestation of what can be found within each of us. But most of the time that psychology is repressed and hidden: the group just gives us permission to “let it out.”

‘We shape our [metaphors], and then our [metaphors] shape us.’ Considering this, the metaphors we choose to understand life through will directly impact our mental health…

Hume decoupled “is-ness” from “ought-ness” in order to connect “such-ness” with “ought-ness” and stop tyranny, and I wonder if Heidegger did something similar…

Imagine that I’m standing on the floor of an apartment that is held up by the floor below it. Unfortunately, I’ve never “zoomed out” to learn that the floor I’m on is supported by something beneath it. And I live my whole life like this — unaware — as does the other person living on the same floor as me. We get along well enough, despite our differences…

Davood Gozli recently recorded a great review on Squid Game, which I couldn’t watch until I saw the series, so guess what happened? Yup, sleep-deprivation — Squid Game was brilliant up to the very end. Unfortunately, by that, I don’t mean “until the final credits” — I mean until the very end. And then the show stabbed itself in the throat like Cho. Talk about a tragedy…

My student told me that she regretted the language of “Forbidden Fruit,” for that suggested that “The Tree of Knowledge of Good and Evil” was itself forbidden and evil, when really it was biting the fruit which was the problem. Everything God created was good, so even The Tree of Knowledge had to be good and somehow added to the harmony of Eden — nothing existed that was ontologically evil: evil was a result of “towardness” (she hinted at 1 Timothy 4:4–5). Critically, it also wasn’t the fruit Adam wanted so much as it was to “be like God,” as the serpent tempted — the fruit itself was not what Adam desired, but instead Adam desired to compete with God, to “relate” to God in a certain and different way. My student emphasized that our focus should be on our “relations to things” to determine good and evil, not so much on things themselves…

Keynes was very concerned about how our ideas about the future impact what we do in the present, thus his profound concern about “the signals” which interest rates sent to the market. But I fear discussions “about future technologies” can have a similar impact: when all anyone talks about is automation and the inevitability of self-driving cars, why would anyone become a truck driver? You’d have no future — an idea that makes it thus.

Praise for Cadell Last’s Presentation: “Jordan Peterson Situates Love as a Key to Truth and Meaningful Revelation”

“What does Peterson say about our particular historic moment?” is a framing Dr. Last opens the presentation with, which made me think about a Hegelian framing from Absolute Knowing: “How is Peterson ‘for’ consciousness gaining higher self-consciousness?” “What is consciousness learning ‘about itself’ thanks to Peterson?” Instead of deciding right out the gate if we “like” Peterson or not, asking this Hegelian question presents Peterson as an opportunity for us to reflect on ourselves. Doctors will ask patients to “listen to their bodies” and ask, “What is your body telling you?” Similarly, focused on “mental health,” per se, we can ask, “What is our collective consciousness trying to tell us in focusing on Peterson?” (Please note that we are all part of “the collective consciousness,” so to ask about it is to also ask, “What is our consciousness trying to tell us?”) Possibly, our consciousness is warning us that we are out of balance, that we seek The Truth but not The Absolute (as will be explained), and that this has made us a problem.

Johannes A. Niederhauser and James Poulos recently discussed the difficulty of staying human. I loved Poulos’ point that Aristotle’s “formal cause” is best understood as something like an “environmental cause” — it is about how “our environment shapes us,” per se — which makes very clear that, right now, “the digital” is the main “formal cause” of the world.

Does our brain work like a computer? That language is often used, but computers don’t exist in nature, so how could a human mind work like one? It doesn’t seem like it could, and yet it has become natural for us to think of our minds like computers, and frankly to refer to all “methods of thinking” as “computation.” Why is this a big deal? Well, because metaphors shape our thinking in profound ways (as explored in “Meaningful and Metaphoric Tendencies” by O.G. Rose), and also we might be “closing ourselves off” from identifying key differences and nuances between different ways of thinking and understanding the world. If there are big differences between say what John Vervaeke calls “Dialogos” and “computation,” then believing “it’s all computation,” we won’t think we’re missing out on anything by skipping Dialogos, when really we’ll be skipping out on a lot (worse yet, we won’t even know we’re missing anything, “truly ignorant”)…

Where persuasion is lacking, there will likely be a feeling of oppression, even if that feeling isn’t warranted. It is not enough to be correct and not be oppressive: we also have to feel to others like we aren’t oppressive, and that requires the art of persuasion. But what if persuasion is ultimately impossible (or at least impossible when it really counts) because of “the problem of internally consistent systems”? In other words, what if ideas, ideologies, worldviews, and the like are so structured that nothing can necessarily disprove them?

Iowa. Broken Pencil. Allegory. Write Launch. Ponder. Pidgeonholes. W&M. Poydras. Toho. ellipsis. O:JA&L. West Trade. UNO. Pushcart Nominee. linktr.ee/ogros
