A Short Piece Inspired by Apollos Dionysios the Areopagite, Lorenzo Barberis Canonico, and Gottfried Leibniz
What do we do if it’s impossible for us to know for sure that we’re using our time well?
The polymath Gottfried Leibniz made a cosmological argument for God’s existence, which is an extension of St. Thomas Aquinas’s cosmological (or contingency) argument. There is another way of stating this same argument, as a situational-cosmological argument:
Axiom: We have limited time.
1.) Therefore, in each moment we have at least two mutually exclusive options.
2.) Therefore, in each moment we prioritize one option over the other/s.
3.) Therefore, in each moment prioritization itself is inevitably one of the options; we can either prioritize and use our limited time well, or we can not prioritize and waste our limited time.
4.) However, we are unable to prioritize prioritization through our own personal willpower alone, because that would require having all the reasons for our own priorities within ourselves. It is self-evident that we do not have all the reasons for our own priorities within ourselves; otherwise, we would be omniscient.
5.) Conclusion: Since we do not have all the reasons for our own priorities within ourselves, we necessarily derive the reasons for our own priorities from a force greater than ourselves, in order to use our time well. That force which is greater than ourselves, which has all the reasons for our own priorities, all people call God.
If you deny (5.), then you must deny
(4.) in which case you have omniscience, or you must deny
(3.) in which case you do not use time well, or you must deny
(2.) in which case you admit that something else prioritizes for you, or you must deny
(1.) in which case you can prioritize more than one option at once, in which case you have omnipresence, or you must deny
the Axiom, in which case you admit eternal life.
Not a theological argument we hear very often, is it? I find it fascinating, and the argument reminds me of “The Argument from Desire” by C.S. Lewis, which, though it doesn’t prove God Exists, still suggests there is “reason to think” God might be out there, or at least that humans are “toward” a God even if God Doesn’t Exist — a Kafkaesque view. And ultimately, how many arguments exist that are actually convincing, anyway? If certainty is mostly impossible, “compelling arguments” are probably the best we can do the vast majority of the time (a topic taken up in “Conclusive Arguments Are Rare” by O.G. Rose).¹
I agree with the implications of Leibniz’s argument that we do indeed require a “truth” and/or “standard” to organize our values, choices, day, etc., and yet we seem incapable of picking such a truth or standard. And yet we get by — how? What Leibniz has pointed out is discussed in the paper “The Conflict of Mind” by O.G. Rose (particularly section five), and Leibniz suggests that we really are deluded if we think that we make choices rationally “all the way down.” Sure, some choices are better than others, and I’m not saying rationality has nothing to do with decision-making, but I think Leibniz makes it clear that we don’t have access to a standard that is “autonomously rational” (to allude to David Hume).
Leibniz suggests we make decisions not just rationally but “(un)rationally” too. Perhaps that “(un)rationality” is indeed expressions of Divinity, and Leibniz does indeed provide “reason to think” God might Exist (even if that cannot be stated conclusively). That said, I do think there might be some incompletion (not error) in the idea that we must “not use our time well” if God Doesn’t Exist. This is because if the standard to which Leibniz refers isn’t present, it means we can’t say we use our time well or that we use our time poorly. Without Leibniz’s standard, we cannot say anything conclusively about how we use our time: we basically must be agnostic about time management.
Wait, surely that can’t be right: I often know when I use my time poorly versus when I don’t.
Oh? Perhaps, but what if watching Netflix for five hours results in you seeing something that inspires an idea that results in you writing the next great novel? Then, what seemed to be a waste of time suddenly turns out to be an incredibly productive use of time (due to what I call a “flip moment”).²
Okay, sure, that can happen, but that’s rare: I’m usually pretty good at guessing when I’m using my time well.
Did you use the word “guessing,” just now? If you guess x and x turns out to be true, are you right? I don’t think so, because you didn’t hold an argument claiming “x is true” relative to which you could be right or wrong. You can only be right or wrong about things you think about, which takes a lot more than throwing a guess out there.
There are arguments behind guesses!
Sometimes, but it’s important to note that if you “thoughtlessly” pick x instead of y, and x turns out to be “right,” I don’t think it’s good to use the language of thinking “you were right.” You made “the right choice,” but that doesn’t make “you right” — and don’t forget that, unless you’re God, you really don’t know “for sure” that you made the best of all possible choices. Leibniz has a point here.
Well, maybe then that’s all we do: make guesses.
To the degree we are operating relative to arguments, information, evidence, etc. is to the degree that “we can be right/wrong,” and to the degree we are operating off of incomplete data is to the degree we can only “make right/wrong choices.” However, we can never really know for sure to what percent we make a choice relative to “all possible information,” and so we probably can’t break down to what degree “we” can be right or wrong about something.
Considering this, one way we address Leibniz’s claim is by saying that it’s true “we” can’t be right or wrong, but it’s still possible for us to make right and wrong “choices” (and/or to make choices that aren’t as good as others). Also, the degree to which “we can be right or wrong” versus “we can make right or wrong choices” shifts relative to the complexity of the question. If I’m choosing between using salt or not for my spaghetti, it’s much more the case that “I” can be right or wrong than when I’m choosing between being a writer or a coder. Ultimately, regarding choices, it’s likely never a “100% versus 0%” breakdown, per se: some choices are perhaps 90% me and 10% a guess, while another choice could be 20% me and 80% a guess, etc.
“Rationality Is Mostly About Making Good Bets” by O.G. Rose argues just what the title suggests: “being rational” isn’t so much about “right or wrong” versus a matter of probabilities and “investments.” From that short work:
‘What it means to be rational is talked about like it’s simple and clear-cut, but I think that’s caused a lot of trouble. If I think it’s going to rain today and I bring an umbrella, but then it doesn’t rain, did I act irrational? No, but I was wrong: turns out it’s possible to be both rational and wrong, even though the terms “rational” and “wrong” are often conflated. Similarly, if there is a 95% chance of event x happening and I bet against x and end up right, did I act rationally? Well, generally, if something is only 49% likely to happen, let alone 5%, I’m irrational to bet in favor of it, but that doesn’t mean the bet won’t end up in my favor.’
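The rain/bet example in the passage above can be sketched as a toy expected-value calculation. This is only an illustration of the distinction being drawn (the probabilities and even-money payoffs are invented for the example, not taken from the text): a bet can have positive expected value and still lose, or negative expected value and still win.

```python
def expected_value(p_win: float, payout: float, stake: float) -> float:
    """Expected profit of a bet: win probability times the payout,
    minus the stake lost the rest of the time."""
    return p_win * payout - (1 - p_win) * stake

# Betting WITH a 95%-likely event at even stakes is "rational"
# in the expected-value sense:
ev_with = expected_value(0.95, 1.0, 1.0)     # 0.95 - 0.05 = 0.90

# Betting AGAINST it (a 5% chance of winning) is not:
ev_against = expected_value(0.05, 1.0, 1.0)  # 0.05 - 0.95 = -0.90

print(ev_with, ev_against)
```

On any single trial, the against-the-odds bet can still win, which is exactly the point: being rational (taking the positive-expected-value side) and being right (what actually happens) come apart.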
The point here is that it’s much more useful to think about “making choices” like putting together a stock portfolio. This way of thinking could be called “financial epistemology,” and Lorenzo Barberis Canonico came up with the idea. In his essay “Is Investing the Best Model to Deal with Uncertainty?” Lorenzo writes:
‘I [Lorenzo] consider myself a fallibilist, because I believe truth exists even if we may never be certain of it. Specifically, I think most if not all of what I know will eventually be disproven or proven inadequate to explain major phenomena […] on the observation of 2 trends: a) history is filled with paradigm shifts and b) our complex world is full of black swans.
‘The most difficult kind of decision-making is decision-making under uncertainty: life, just like poker and investing, is a game of imperfect information where great moves can lead to fortunes and bad moves to disaster […]
‘Under “financial epistemology,” our worldview becomes a portfolio: a collection of bets on what will be disproven and when in order to achieve some material advantage. Even though we know that our investments can and eventually will fail, the key is to make our “trades” at just the right time (which is impossible) to maximize our goals.’
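One way to picture Lorenzo’s “worldview as portfolio” is a toy sketch in which each belief is a position sized by confidence, and new evidence rebalances the whole portfolio rather than settling anything for certain. The belief names, weights, and evidence factor below are all invented for illustration; this is a sketch of the metaphor, not a claim about Lorenzo’s actual method.

```python
# Toy sketch of "financial epistemology": a worldview as a portfolio
# of weighted bets (beliefs), rebalanced as evidence arrives.
# All names and numbers here are hypothetical.

portfolio = {
    "model_A": 0.6,  # confidence acts like a position size
    "model_B": 0.3,
    "model_C": 0.1,
}

def rebalance(portfolio, belief, evidence_factor):
    """Scale one position by how strongly new evidence favors it
    (factor > 1) or counts against it (factor < 1), then renormalize
    so total confidence still sums to 1, like portfolio weights."""
    updated = dict(portfolio)
    updated[belief] *= evidence_factor
    total = sum(updated.values())
    return {k: v / total for k, v in updated.items()}

# A "black swan" that strongly undercuts model_A shifts weight toward
# the alternatives without zeroing any position out entirely.
after = rebalance(portfolio, "model_A", 0.2)
print(after)
```

The design choice worth noticing: no bet ever reaches confidence 1 or 0, which is the fallibilist posture — every position in the portfolio remains revisable.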
I personally really like Lorenzo’s metaphoric structure (and also believe everyone should learn more about fallibilism). What Lorenzo offers may be the best answer we can come up with to address Leibniz’s point, though I stress that what Leibniz argues does strongly suggest the absurdity of “autonomous rationality,” of “certainty-based rationality” versus “probability-based rationality.” For that, we are in Leibniz’s debt.³ ⁴
¹Perhaps we just tend to hold theology up to “a higher standard” because we hold biases against the metaphysics prescribed by theologies, demanding that theological arguments be “undeniable,” when not even the arguments that have convinced us of Marxism, Capitalism, Pacifism, Just War, etc. meet that standard.
²Judging “good uses of time” seems to be a matter of “high order complexity,” as discussed in “Experiencing Thinking” by O.G. Rose.
³There are threads that still need to be tied, but I will wait to complete that thinking until “Conclusive Arguments Are Rare” and “Rationally Constructed Arguments Versus Truth Suggesting Arguments.”
⁴Following Lorenzo’s thinking, when we make life choices, we can think of ourselves as making guesses about what we believe will yield returns on our investment. We never know for sure that doing x versus y will lead to better results, but we also never know for sure that if we invest in this stock versus another, we’ll make a good return. The hope of every choice is making a good “return on investment” (or ROI), so when it comes to questions of ideology, perhaps we should ask: am I going to make a higher ROI becoming Conservative, Liberal, Libertarian, or something else? Please don’t mistake me as arguing that worldviews are no more important than money or practical considerations; rather, it’s simply that the “financial metaphor” is helpful for living with fallibilism (which isn’t a synonym for “utilitarian” — a common mistake).