A Short Piece on a Dialogue Between Johannes A. Niederhauser and James Poulos
Comments on “Being Human in the Digital Catastrophe”
Are we still human if we don’t remember being human?
This reflection assumes you’ve watched the dialogue; without it, some context will be missing.
Johannes A. Niederhauser and James Poulos recently discussed the difficulty of staying human, and, like everything on Johannes’ channel, it was magnificent. I loved Poulos’ point that Aristotle’s “formal cause” is best understood as something like an “environmental cause” — it concerns how “our environment shapes us” — which makes very clear that, right now, “the digital” is the main “formal cause” of the world.
Long before talk of Facebook’s “Meta” was in the air, our modern environment was already radically digitized. As a result, we are being “digitized” even when we aren’t using our cellphones — we just don’t tend to realize it. Our “towardness” is changing, for one, which means we can no longer look at a beautiful sunset without thinking about Instagram. This is another way we are “captured,” to allude to Deleuze, as Johannes and Justin Murphy have discussed in the past.
Poulos points out that we today must be “outside” the database in order to control it and tell it what to do: if we’re always “inside” the database, it will tell us what to do without our realizing it, all while we think we’re in control. Thus, to keep ourselves from becoming slaves to data, we will need to maintain the ability to step outside the data — something that increasing “digitization” will want to ensure we can no longer do (for our own good, of course). Worse yet, we may not even want to be able to walk away (the tension of the choice thus removed from us), driven by “machine envy”: we can (perhaps subconsciously) desire the radical integration with technology that technology is luring us into accepting, in ways we can hardly follow.
I think James Poulos is exactly right: today, we basically envy machines and no longer take pride in “being human.” Since we think our brains are “just computers” (and “metaphors matter”), computers must be “the best brains,” and it’s only natural that we wish we could merge with them. Machines occupy a spacetime we don’t, “seem” best at memory, and swiftly process vast amounts of information — and so we long to make ourselves into computers. We fail to realize there is a difference between “memorization” (which the prisoners in Plato’s cave were good at) and “memory,” and ultimately that means we fail to realize that we run the risk of losing “human memory” entirely.
The conversation between Poulos and Niederhauser had me reflect on how I personally remember a world before the internet. Yes, I was young, but really the internet wasn’t “all consuming” until I was eighteen. Kids today basically have no concept of a “Non-Digitized Age” (I can’t imagine), which might seem as harmless as never remembering a world “without running water,” but I think digitization is different. Water is in the real world, but Facebook is not: digitization entails an ontological shift and change in our very frame of reference. Why is this a problem? Well, I’m a big fan of “dialectical thinking,” and I think it becomes very difficult to think “dialectically” about digital media if it’s “the water we’ve always been swimming in” (yes, I have Wallace’s famous speech in mind). Alluding to Baudrillard, the “digital” is becoming our reference point for “the real,” and that means it’s become so “omnipresent” that it’s incredibly difficult if not impossible to think about clearly. To question it is starting to be like questioning, “Is reality real?” — a question we expect to find in Philosophy Classes, but hopefully not anywhere else.
Personally, I think everything started to change with the cellphone. Now, we carry the computer around with us: the ability to “walk away” from the digital, to really be “offline,” is fading if not entirely gone. We are always connected, or — more precisely — we always could be connected, which means the thought to connect is always in our minds (our “towardness” has changed). If we don’t connect, why don’t we? What could we be missing? Since we could connect, what does it say about us if we don’t? Everyone around us knows we could own a cellphone if we wanted to own one, just as they know that we could reach out to them if we only cared to — and we know that they might think this (a “metamental” awareness). Not that we tend to treat this as a mere “might”: our “naturally anxious brains” are likely to assume, with some upset, that people do think this. All of this results in a nervousness that can drive us into constantly using technology, an act which might cost us our humanity. Also, gradually, this technology-caused nervousness can replace any existential anxiety we might feel over questions about being human.
The anxiety we feel due to our technology can feel like an existential crisis, but it is not an existential crisis that can help us face the realities of “being human” or dive deeper into what humanity means for us. Perhaps we could say that technology makes us nervous but doesn’t make us existential; yet the feeling of nervousness is so similar to existential anxiety that we can easily mistake our nervousness for a sign of facing “the human condition.” In this way, technology “captures” our feelings and orients us toward losing our humanity while we think we’re dealing with it: it covers our loss of humanity with a feeling that we still possess it. Because of this, we can end up participating in “anti-human nihilism” — a tremendous phrase from the discussion — without realizing it.
The point was raised that sex today is losing its power in favor of information — a keen insight. I also agree that “life is found in the implicit,” so if everything must be made explicit, nothing can be alive. I fear that if the world proves too big for Artificial Intelligence — if something about it ultimately cannot be digitized — we’ll program the AI to teach us that the world isn’t actually that big, and then AI will be able to teach us that what AI can control is all that exists. In our “enlightenment,” we’ll design an AI that creates a Plato’s Cave without an exit — Plato’s “Hotel California,” I suppose — and how smart we’ll be then, using our computers to memorize everything…
I agree that we don’t appreciate the profound relationship between life and death anymore (a point Alex Ebert also hits home), and thus we try to defeat death in a way that makes life “timeless”; unfortunately, hell is timeless while heaven is eternal. Heaven changes us as we enter it, while hell just extends what we “are” out forever without giving us the capacity to do something worth our “end-less” time (do note that “end” means “purpose” too). Hell gives us timelessness without value, and so we get to survive forever — too bad where time is “end-less,” time is also lifeless. Machines don’t need life though, just electricity, and machines are what we’ll become, so no worries, right?
In closing, there is a profound connection between memory and imagination. If it is true that today we are trying to replace “memory” with “memorization,” what will “memorization” equip imagination to offer us? Nothing that extends beyond the “Digital Framework” we operate in, I suspect. We used to dream about the heavens, but now we envy followers. Being remembered in a world that can only memorize. End-less life.
Human Forever by James Poulos is available to purchase and well worth your time.