A Thursday Discussion Series

Considering “The Net (1–13)”

O.G. Rose
28 min read · Dec 17, 2022

An Overview

Imagine we stepped into an art gallery which was totally generated by AI but didn’t know it. If the AI passed “The Turing Test,” there would be nothing to make it clear that this was AI-generated art. At this point, seeing as the gallery was “missing nothing,” we’d be forced to make a choice, meaning we’d be forced to choose if that “nothing” suggested humans were “insignificant” or “apophatic.” (Please note that “truth organizes rationality,” so whatever interpretation of “nothing” we choose will radically organize our rationality and thinking.) If technological development is unavoidable or unstoppable, perhaps the point of capital-H-History is to arrive at the place where “The Singularity” is created and we are forced to see that it entails “no negativity” (just as Žižek notes), and at that point we’d have to choose/interpret if our negativity is evidence humans are insignificant or evidence that human negativity is “apophatic.”

We have discussed throughout “The Net” the problem of “capture” and avoiding an episode of Black Mirror or Kafka, and avoiding this “capture” seems to have a lot to do with developing the ability to think in terms of “Absolute Knowing” (A/B versus A/A). Unfortunately, this entails the risk of “falling into the Meta-Question” (as Samuel Barnes discussed), and it’s also possible that only a small group of people can be “Absolute Knowers.” If we cannot “spread” the conditions which generate “Absolute Knowing” (according to “Cone Dynamics”), then I believe only a minority of people will be able to interpret the “absence” of an essential human element (as unveiled by technology) as “apophatic” versus “evidence of insignificance.” This is a significant problem, giving us all the more reason to try to “spread” (versus “scale”) the conditions which make “Absolute Knowing” possible. If this cannot be accomplished, which for me is the same question as whether it’s possible for us to “spread” the conditions which incubate Thymos in the majority (and a notion of a “Triadic Soul,” as I’ve discussed with Raymond K. Hessel), then we might be in trouble.

I

The Net Playlist

It is tempting to think we should ban Dall-E and AI technology, but this would be to deny Hegel’s warning that “the owl of Minerva” has taken flight: our challenge is to choose to interpret our situation in the best of all possible ways, though this act requires us to have the character and ability to “bear and handle” that interpretation. Every technology brings with it new challenges, whether birth control or the internet, and in those new challenges we are given resources to ask new questions about human beings. If we can control our reproduction, when should we reproduce? If AI can generate a gallery that seems identical to a gallery made by humans, what is essentially human? Where there are questions, we can be liberated, for the very asking of a question means we are not “thoughtlessly” participating in our immediacy, but questions also bring about anxiety. Technology seems almost unstoppable in its development, and it almost seems to necessitate the arising of new questions, and I’m of the opinion that handling questions without being overtaken by them requires philosophy. Philosophy causes anxiety, yes, but it is also what we need in order not to be overwhelmed, just as working out causes our muscles and bodies to hurt, and yet we need that hurt to keep good health. Pain can be evidence of a good decision.

As Raymond notes, Homer describes “fate” as a key consideration of The Iliad and The Odyssey, and today the “Game Theory”-dynamics of technology seem to be a new form of “fate.” If we don’t develop biotech, China will; if we don’t build up our nuclear arsenal, other countries will; and so on. We feel pulled along by something we cannot stop, and yet this “fate” is not technically determined — and yet it practically is, suggesting we are trapped and stuck in a situation like Achilles’. Yes, Achilles could use his rage to break that “fate,” but I’m of the opinion that this would have destroyed the universe (as discussed in The Breaking of the Day); similarly, we can destroy technology and rid ourselves of it, but it’s hard for me not to see this radically lowering our quality of life. To “break fate” cannot happen without great consequence.

Homer stresses a need for us to “rightly order ourselves with fate,” and even gods like Zeus are concerned with “defying fate.” In my opinion, for us today to “handle fate,” which is to say the “Game Theory”-dynamics so described, we require philosophy and philosophical training, which is to say that the conditions which incubate “Absolute Knowing” need to be spread around the world. In my talk with Raymond, this seems to come down to a question of whether we can experience a widespread Seraphim (“Beauty”) or Monster (“Devastation”), but it might be the case that the only Monster we can experience anymore is an “Ancient One” from Lovecraft, the very sight of which will reduce us to ash. This means our only option is an experience of Wonder, but do people still have the “aesthetic capacity” to experience Wonder? If not, then what creates those capacities must be spread — and so we are stuck with the same problem.

In my opinion, philosophy helps us handle movement via History and Technology toward a world where we see that there is nothing “essentially human” in the universe, at which point we must then make “The Final Absolute Choice”: Is that nothingness evidence of insignificance or evidence of something apophatic? If we are apophatic, then there is always something about us which is “free” and unable to be “captured.” It’s nothing, after all, and no algorithm can “capture” nothing. And so there is always a way for us to escape Black Mirror and Kafka, and always a possibility for us to live according to our own Theme and be our own Story-Teller (to use the language of Andrew Luber). If what is essentially human is ultimately “an apophatic nothingness,” then it is not possible for something to occur in the universe which would mean we are “captured” without hope. We can always free ourselves, but only if we are able to choose to interpret “nothingness” as evidence of something apophatic versus insignificant. And it is this “Final Absolute Choice” which seems to be what technology is accelerating us toward — can we engineer/spread the conditions required for the average person to develop the “Absolute Knowing” needed to handle this state? Hard to say.

The spread of philosophy and the spread of technology seem connected, and perhaps philosophy partially exists to help us handle living with technology? I don’t know, but if so, it would make sense that our ultimate “choice” regarding technology (particularly technology which becomes sentient, AI) is a philosophical one. And perhaps we cannot remove technology because we need technology to “scapegoat” so that we avoid “Girard’s Apocalypse”? If we remove technology, we might avoid “The Final Absolute Choice,” but then we will leave unaddressed the concerns of Girard, meaning we are effaced. And so, our only option is to become existentially able to handle “The Final Absolute Choice,” which raises the question of whether handling such a choice is something the majority can be trained to do. Hard to say.

II

In “The Net” conversations, another way we asked the question, inspired by Andrew Luber, was whether we can spread the conditions needed for the average person to hear “The Call of Being” (of which, in the past, God was generally the source). This would be for people to develop capacities to think in terms of “/” versus “and,” “thought/material,” “practice/theory,” A/B — all concerns and topics which arose during “The Net.” If not, then it seems very difficult for us to escape the “Game Theory”-dynamics of “The Meta-Crisis” (as discussed by Daniel Schmachtenberger), and I personally see a major component of “spreading” the very “material conditions” needed for more “Absolute Knowing” on average (according to Cone Dynamics) as having a lot to do with spreading the internet and then changing government policies so institutions “get out of the way” of emergent development (through the breakdown of “the college monopoly on credentials,” for example). Even the ability to watch videos and listen to lectures on “double speed” seems significant, though this comes with the risk of not comprehending what we hear (consumption without creation). More could be said, but the point is that addressing “The Meta-Crisis” has a lot to do (for me) with changing our “material condition” (Cone Dynamics) versus, say, evolving consciousness internally (which I associate with “Spiral Dynamics,” rightly or wrongly). This distinction between “Cone Dynamics” and “Spiral Dynamics” is not one I would claim with hard authority, for it is certainly possible that there are takes on “Spiral Dynamics” and “Integral Theory” which are practically identical to what I am calling “Cone Dynamics,” so please do not take my distinction here with hard and unchanging seriousness. Mainly, I’m simply looking for language that helps put the emphasis of “subject-ive development” on our “material condition,” which is Marxist, versus “an inner personal enlightenment.” I don’t deny the possibility of the latter, but I believe it often has far more to do with our material condition than we realize (say, access to high-speed internet, an infrastructure that I consider paramount for spreading the conditions which incubate “Absolute Knowing”).

As described in “The Net (9),” we might be facing “A Meta-Crisis Paradox,” which is to say that the only way to escape the “Game Theory”-dynamics of “The Meta-Crisis” is by changing the material conditions which cannot be changed precisely because of those “Game Theory”-dynamics. If that is the case, we have a very severe problem, but one that might not be unsolvable, because there is almost certainly one radical change in our “material condition” that is coming, and that is the spread and rise of Artificial Intelligence and robotics. This change in our environment will force a certain question upon the average person, which for me is the following: Is our “human essence” apophatic or insignificant? Are humans ultimately “lacking” or “nothing?” Now, we still have the question of whether the majority will be able to handle the existentialism of this question and choose to interpret humans as “apophatic” versus “insignificant,” but it’s also possible that the emergence of AI and robotics will, at the same time, help humans prepare and develop in this way. Perhaps not, which means the fate of “The Meta-Crisis” might rest on our capacity to “spread” conditions in which the average person interprets their “negativity” as apophatic. What material condition would that require us to spread? Well, I’m not sure: it could be online material like “The Net” that offers people that philosophical construct, or it could be an economic disaster that forces people to ask big questions about themselves and reassess their lives. As I discussed with Raymond, in Christian theology, it is often “wonder” or “devastation” that “awakens” people to what really counts, but what if we can no longer encounter devastation without that devastation killing us before we can change (an encounter with an “Ancient One” from Lovecraft)? Well, then we might need experiences of Wonder, but how might we encounter Beauty? A good question, one explored in The Fate of Beauty by O.G. Rose.

Before moving on, it should be noted that “The Net” (episodes 11 and 12 notably) has also explored whether philosophy as a whole makes our lives better or worse. We discussed The Iconoclast by Samuel Barnes, and it’s clear that we shouldn’t assume that any and all forms of philosophy are good for us. In the tradition of Hume, Mr. Barnes makes it clear that philosophy which tries to answer “The Meta-Question” versus manage it ends up deconstructed and pathological. As a mirror cannot reflect on another mirror and not eternally regress, so philosophy cannot ask, “What is philosophy?” and not undergo a similar fate. If we know this is what occurs, we can ponder the very occurrence of this eternal regression and learn from it the real nature of philosophy (which for me leads into Hume), but if we don’t know about this paradox, then we might “participate in it” unknowingly and thus undergo terrible deconstruction. Hegel warned about “bad infinities,” and we can say that philosophy which tries to answer “The Meta-Question” versus live with it ends up in “the bad infinity” of an eternal regression. Strangely enough though, (abstractly) knowing this occurs is something we can absorb into our truth and organize our lives according to (as we should if what Hume and Barnes describe is indeed “the case”), but this will only happen if we “know” about “The Meta-Question” without falling into it (like C.S. Lewis looking at a ray of light in his toolshed versus up through it). Similarly, as we will discuss, we will need to “live with” our negativity (as an apophatic “lack”) versus try to “solve it” (as “a nothing we need to fill”) and/or give up (and take it as “a nothing that doesn’t matter”). We must choose to accept “lack” versus try to “fill nothing” or “give up before nothingness,” but this will not prove easy.

Assuming, though, that we engage in the “embodied” and “concrete” philosophy which “lives with the Meta-Question” versus solves it, I think philosophy as such is very good and also necessary, for philosophy is ultimately a practical and hermeneutical cultivation, which is to say we interpret the world relative to and according to our philosophy. If we don’t take the time to cultivate our philosophy, we do not take the time to choose “the glasses” through which we see life, putting us at great risk. Now, if we practice philosophy, we are at risk of trying to solve “The Meta-Question” and consequently suffering effacement, but we are also at risk if we ignore philosophy (of misinterpreting life, of being controlled by power, of “the banality of evil,” etc.). Risk is unavoidable, and interpreting life as not entailing risk is a risk in itself. And if we believe that, is there a moral imperative to try to spread “the embodied philosophy” which can help people rightly interpret the world and their lives? If we conclude, for example, that average people would be more likely to avoid Deleuzian or Kafkaesque “capture” if they read Ontological Design by Daniel Fraga, does that mean we are morally obligated to try to “spread” the teachings of Mr. Fraga? Or is this very effort a “Pynchon Risk,” as described in The Conflict of Mind by O.G. Rose? An example of epistemic responsibility tempting us into effacement? Is this an example of a fruit we should not try to bite? Indeed, there is real risk here, and yet this effort might help us address the Thymos we so desperately need — a topic explored by the great Raymond.

III

As argued by thinkers like David Chalmers, we undergo “qualia” and subjective experience because of our very negativity and “lack,” and yet if consciousness searches to observe consciousness, it will observe “nothing/lack,” just as we observe the same if we look for “a relationship” in and of itself. “Relationships” are not things in nature, only things expressed in nature, and so if we look for “relationships as things” we will find nothing. So it goes with human consciousness itself: to search for it is for observation to seek to observe observation. This is an impossibility, and yet if we think it is possible and try, then we will by definition not believe that “human essence” is a “lack,” which is to say we must think it is a “something.” And in this mode, when we don’t find “something,” we will naturally conclude this means “the human essence” is nothing versus “lacking,” and at this point it will be easy for us to conclude that humans are thus “insignificant” — a mistake from which it will be hard for us to recover.

To elaborate, what is suggested by Žižek, say in Hegel in a Wired Brain, is that our very ability to subjectively experience the world is thanks to and based on a “negativity,” a thing which cannot be located in materiality, that nevertheless makes humans “human.” A video camera “gazes” on the world and records it, and I could open a video camera and show you the mechanisms that make “the gaze” possible, and yet I could not reduce “the gaze itself” of the camera to these parts. If a camera captured and recorded a sunset, I could not pull out the bolts and parts of the camera and find the sunset. Similarly, I could never find “in” the camera the movies and scenes it captured, which in a way contributed to its identity; materialistically, this would suggest those scenes and movies don’t exist. “The gaze” of the camera, which makes it uniquely itself and not, say, a blender or some other appliance (which might consist of similar parts to the camera), cannot be found in those parts (“the gaze” seems more emergent), and yet that “gaze” would be paramount for meaningfully identifying the camera as uniquely itself. In this way, if we carried with us the heuristic that “what cannot be materially observed doesn’t exist,” then we would have to conclude that what makes the camera “meaningfully itself” didn’t exist, and thus the camera would practically be indistinguishable from a blender. This is obviously a mistake, and yet it seems to practically be the conclusion we must reach if we follow a materialistic heuristic.

Critically, “the gaze” of the camera is fundamentally a matter of relation, for when the camera captures a sunset, it is “toward” the sunset and relating to it. Phenomenologists like Husserl noted that consciousness is always “intentional” and “of” something, and that same logic applies to a camera. When consciousness tries to be “of” itself, though, this is like a camera trying to record “the gaze” of the camera: this is impossible, and yet it would be a mistake to say “the gaze” is nothing. Rather, “the gaze” must lack from “the gaze,” for it is the very horizon that makes “gazing of” possible. For “the gaze” to gaze on “the gaze,” we end up with a mirror reflecting in a mirror: an eternal regression. And yet the very occurrence of that eternal regression means there must be an entity “there” in which such an eternal regression can occur, suggesting that humans are “there” even when they search for their “unique human essence” and find an eternally regressing negativity. For a mirror to create the effect of an infinite hallway of mirrors when reflecting another mirror, there must be a mirror “there,” which means the eternal regression says something about what the mirror “is.” So it goes with humans: when we cannot find our essence while looking for it, we as humans are still “there” creating this “self-referential effect/negativity.” And thus we are still “there” having to make a choice about what this “self-referential effect/negativity” means — which points to “The Final Absolute Choice.”

As AI spreads and unveils that it can do whatever humans can do (and better), we will be compelled to “look inward” to find and see what exactly makes us as human beings different and unique. And at this point we will be like a camera trying to record its own gaze or a mirror reflecting in another mirror: we will find negativity. And when we experience that negativity, we will then have to choose to interpret this negativity as “nothing” (“insignificant”) and/or “lacking” (“apophatic”) — and what will we choose? Well, we might choose neither, existentially overwhelmed by the choice, and instead we might explode against the AI and try to destroy it (perhaps like Achilles defying fate in Homer). This will be our temptation, and if we cannot “spread” the conditions needed for the majority to existentially handle finding out that “the essential human element” is a negativity, then this reaction is likely to be what the majority participates in. I do not believe this would solve “The Meta-Crisis”; in fact, it might be what defines the final act.

It should be noted that, in Christianity, the very death of God is evidence that God loves us and values what God has Created, which by extension suggests that God loves Himself in the Creative Act. God does not reject His Essence which Creates the humanity that ends up creating on its own, and likewise we, in not hating AI, will have a chance to love and honor “the essential human essence of lack,” which is to say we will be given a chance to love and cherish our “apophatic nature,” which will only be knowable in its fullness, emotionally and in experience, through an encounter with “others” — community. “Communal Ontology” is the state in which humans are fully human, and the opportunity to live this reality will possibly come through and thanks to AI — if we so choose accordingly with “The Final Absolute Choice.”

IV

Does God sit around sad and upset that the universe is able to create itself, or does God think how amazing it is that He was able to make something which can create itself? With AI, we will face a similar question: Are we depressed that we created something that can operate without us, or is the very fact that we were able to create such a thing a testament to our greatness? If we can remember the process and context in which the AI operates, a process that was human generated and created, then in the AI we can see “the human element,” but as a negativity, a thing not there and not “in” the substance of the AI we experience. We must remember what generated the AI, which suggests the critical role of memory in a world with AI. Also, as noted with Chetan in “The Net (12),” things are themselves in light of their processes, which is to say a painting made by humans is somehow different from a painting made by AI, because of the “process” and/or “context” which can be brought to the painting (“story”). This suggests that things are changed by “something not there” (“a lack,” to use language from (Re)constructing “A Is A”), but since “lacks” are not phenomenologically obvious or “given,” we have to choose to acknowledge their existence and live according to them. In this way, if we enter into a world of advanced AI that makes it seem like “there is no human essence,” we will have to choose both to remember that the AI only exists because of us and that “the human essence” isn’t “nothing” but “lacking,” which is to say humans are not insignificant but apophatic. We “point” beyond ourselves, always “toward” something. We are driven, “intrinsically motivated” (and here we can see why I consider our “ontoepistemology” as motivationally relevant).

“Relations,” as High Root points out, are not substances “in” materiality, which is to say they are “lacking,” and if indeed what is “essentially human” is a “lack,” then this would give us reason to think that “essential humanness is relational.” Perhaps not, but such is suggested here, and the fact that humans develop so poorly outside of relationships, when they are denied touch, when they are overly individualist — all of this provides reason to think that “the lack of humanness” itself is relational. Is not a “lack” always relational? If it wasn’t, it would be nothingness. Where there is a “lack,” there is a being in relation to which the “lack” is defined as such, and so if “human essence” is a “lack,” then that very reality speaks of a relational element which it would be “fitting” to associate with community and “relating with others.” Furthermore, if we consider the ontoepistemology of Hegel, then we see reason here, too, to think of “the essential human lack” as something “other” and/or “being-with-other.” In other words, if the subject is essentially a “contradiction,” as Dr. McGowan discusses, then that requires “relation” and identification with that “relation,” and it is with other people that we really feel “otherness” (and it costs us something). We believe what we feel and act according to (not what we think), and this alone suggests the necessity of “relating with other people” to really get that “we are lacking.” It is with others that we feel and experience “our lack,” and thus others are necessary for us to really believe that “essential humanity is lacking” (as AI technology might soon make us face, with any “plausible deniability” of the contrary then gone).

If “essential humanness” is something “lacking” (which means “we are apophatic”), that means it is always something other, and the only possible way to experience and emotionally feel true and deep “otherness” is in community and relation with other people. We can’t get it from objects or animals, not that these don’t matter, but it is only with other humans that our “relation” to “otherness” isn’t an abstraction. And thus for us to really get “lack” and “our apophatic nature,” we require others, which is to say that the only human ontology is “communal ontology.” A real and felt and experienced human ontology must be communal, or otherwise it will be abstraction. It will not change us, for “ideas are not experiences.”

In theology, the very “apophatic”-ness of God is an opening that makes it possible for God to interact and relate to man. If God wasn’t apophatic, we’d experience God directly and be reduced to ash (we need God to be “hidden” so that we survive). Thus, part of God “lacks” to us so that we can relate to God, and in a similar way our “essential nature” seems to be “lacking” precisely so that we find ourselves only able to experience and feel it through a relation with other subjects. The “lack” of our nature makes us need to enter into community. We are fated, almost, but who can say this “fate” isn’t for the best? If it wasn’t the case, perhaps we’d all be isolated and alone? This would be easier, surely, but what is easier is rarely best.

We fear AI, perhaps like Jesus in the Garden of Gethsemane asking his Father to remove “this cup.” Jesus doubted but ultimately faced Crucifixion, as we must ultimately face AI. AI is our Calvary. This might sound ridiculous, but Calvary was an ultimate result of God’s Creation, for it was God’s Creation which led to God’s Crucifixion. Similarly, we could say our very creation is leading to a world of “Humanity’s Crucifixion on AI.” AI might be our chance to collectively be like Jesus.

Funny enough, there is something about AI that is completely Mind and completely Tech, as Jesus was completely man and completely God (a hypostatic union). But as man, divine and human, killed God, so what we made can kill us precisely because we love it, if we so choose. We could be a God who destroys everything with a flood, but perhaps we are called instead to suffer what we have created. And by choosing the cross and not taking it as reason to abandon mankind, destroy it, or surrender into a paradise of pleasure that demands nothing of us, we could instead choose a “cross” so we might be resurrected in our self-conception as a being constituted by “lack” versus “being.” The shift from being-focused to “lack”-focused indeed requires a kind of death, which is to say it requires a resurrection. But resurrection is found through a cross, and AI is our cross. Can we bear it? Can we let what we create kill us? For their sake, can we let our children kill us? Can we bear that call? If only a minority of people can (“Absolute Knowers”), are we worse off?

AI will no doubt be used by corporations to control us, and we must resist that, but our very “care” and “concern” about such manipulation might be tied to whether we believe there is something meaningful regarding humans that is worth defending. So if we don’t think of ourselves as apophatic, we might lack the motivation and “care” to resist the “ontological design” about which Daniel Fraga warns. And so for me the choice of AI being our cross and the resistance of Ontological Design are strongly connected and tied together. Andrew Luber made the point that AI cannot replace humanity because it is not “thrown” like we are, which means AI must operate according to a different story/theme, and I agree, but even if this difference is always present, we might simply not care. It’s one thing to convince us that we have an apophatic ontology, and it’s another thing to make us care that we do, and we have argued that this “apophatic ontology” will only come out and prove meaningful to us if we engage in community. But community is hard and people difficult, and the Ontologically Designed alternative (a “dopamine temple,” as Javier described it) will be much nicer — how will we have the inner and even spiritual resources needed to resist that “dopamine temple?” With the right ideas? By remembering that we live according to a different story/theme than AI? By recalling that “qualia” is special? Indeed, that seems needed, but community will still cost us, so how do we develop the resources to deal with that difficulty? Can the majority develop those resources? If not, what does that mean for us?

Another reason “communal ontology” seems critical is because “The Trinity” delights in itself — God finds Joy in God (and also finds “intrinsic motivation” in this, do note). If AI provides us a world where there is no “human essence” present in being, then we as humans will need to find joy for ourselves in ourselves, which means we will require community with others. And this will not be easy, for we will need to find joy in suffering (as God seems to have done in Christ), for it is hard to be with others. But God created us to kill Him so that in that death God might show us new life through new identification in “relation with God,” and likewise AI might prove to be a cross thanks to which we see our need for “communal ontology.” As Christ and God saved us, so then perhaps we should defend AI so that it can unveil to us our “essential negativity” that will then drive us into “communal ontology.” And this should be our “religio,” our binding. We must bind ourselves to others. Our communal ontology. We must see ourselves as many people of one essence. Trinitarian. We are helpless to be ourselves without one another, as each person of the Trinity seems helpless without the other. One person of the Trinity cannot be God without the others. Like us, they are helpless alone.

If AI will make it clear that there are no “things” in the world which “are” our “essential human nature,” then we must turn to ourselves to find that essence, but if we do it alone, we will not “feel it” like we will in community. To turn and reflect on the self is purely an abstraction, while facing “the other” is concrete and demands something of us. It is only before “the other,” then, that abstractions like “human essence” or “relations” can feel and manifest as a “lack” to us: when we are alone, we can “know” there are other people out there, but this will be abstract. It is in the encounter that “lacks” and “metaphysical entities” are not “mere abstractions” at all — they are the most present of things. And this act will hurt and prove hard, suggesting that “the lack of human essence” is a wound opened so that others might enter and be part of our lives. Perhaps God is a Trinity precisely because God is scarred?

V

When we are forced to acknowledge that “the human essence” is a “lack,” we will experience a state of “groundless ground,” but this is where we must choose whether it is because “there is no ground” or because “the ground is lacking” — the choice will be ours (Alterological). This is arguably the choice for AI to be an icon or an idol (a topic of great passion between Catholic and Orthodox Christians), where an “icon” points beyond itself while an “idol” points to itself. In the case of AI, for AI to “point to” itself would be for it to suggest that humans are unnecessary; after all, look what AI can do. Anything humans can do, it can do better. And so, if we look at AI and see only AI, we will treat it as an “idol” and see nothing of significance in what humans can be, but if instead we see AI as “pointing to” an “essential human element” which is “lacking,” then we could treat AI more like an “icon.” To allude to a point from Chetan during “The Net (12),” an icon is that on which drive and desire can be meaningfully divided, while an idol blurs them like milk and dye.

Javier stressed in our discussion the need for a “response” to “the other” (inspired by Martin Buber), which suggests our need to “respond” to AI. For me, “response,” “relation,” and “interpretation” are all deeply linked, for how we interpret a thing and situation heavily influences and shapes how we relate to it. All interaction is hermeneutical, and so it is very important that we think about and consider how we will interpret our encounter with AI ahead of time, before we encounter it; otherwise, it is more likely we’ll consider it an “idol” versus an “icon.” Javier also noted the importance of “failing” for human beings, but Andrew Luber noted at the same time that humans seek not to fail — a strange tension. Notably, it seems that there are two types of failure: a failure which emerges during an effort (which requires the effort to come first), and a failure that results from us not doing something at all (which makes the failure “invisible,” funny enough). The first failure cannot be experienced unless we try something, and it is paradoxically the only kind of failure we can directly experience, while the second failure can entail just as dire consequences, but we don’t experience it. If we wrestle and lose a match, we experience that loss, but if we never wrestle and fail to rise to the occasion of the challenge, we never experience that failure (directly, at least), and so it can be easy to believe we didn’t fail at all. And this is precisely the terror of this mistake: the loss itself is “lost.”

For me, if we treat AI as an “icon,” we will experience a kind of failure in realizing that “the human essence” is “lacking,” a “lack” we cannot directly experience. We “fail” to bring it into reality and being, and precisely to avoid the pain and difficulty of that failure, we might be tempted to fail the other way, where we treat AI as an “idol” and thus conclude there is no “lacking human element” which we could try to live according to and by. This way, we also avoid any potential failures we might encounter in relationships, as could arise in a communal ontology, and thus we “don’t fail” — but of course arguably the failure is all the worse. When failure cannot concentrate and manifest in “a particular point,” per se, it spreads everywhere. When AI is an “idol,” then humans are not “lacking” but “insignificant,” and so precisely because we avoid failure we end up losing significance. We do not hurt, but that is because there is nothing there to hurt.

To need others suggests a failure, but this failure is sublated into our communal ontology. If we are indeed beings who “are” in our very essence “other,” then to seek community is a failure according to, perhaps, a doctrine of “rugged individualism,” but if that ontological schema is wrong, then what is a failure relative to a myth could be a success relative to the truth. A car is not “weak” for needing tires, and it does not “fail” in being unable to function without an engineer; instead, a car just “is” an entity that needs tires and an engineer (and must be assessed with those parts, not without them). So it goes with humans: if we are indeed expressions of a communal ontology, then we are to be assessed in community, not outside of it. And, in my opinion, AI is going to force the question of whether we think of ourselves as communal or insignificant. The clock ticks.

AI could be “the cross” on which the Enlightenment Subject can die and be resurrected into communal ontology. However, a question stands: can the majority be brought to a place where they have the “inner resources” to choose communal ontology versus the “dopamine temples” which AI will no doubt erect? Why should they? Is not dopamine enjoyable? Well, this is why establishing “the truth” of our ontology is paramount, for if we are “communal ontologies,” then this is not an arbitrary choice or a choice that falls in the realm of opinion: we either choose to acknowledge reality or we do not. In this way, though it can seem like philosophy is impractical and a waste of time, if it is only through philosophy that we can grasp “essential human nature” as “lacking,” then it is likely only philosophy which can provide us with a strong enough ground to resist AI temptations and mechanisms of controlling us with dopamine. And even that might not be strong enough if we get to the place where truth itself doesn’t matter to us (a stage of pure creation “all the way down” — an existential hell, I fear, for pure choice is an “unfiltered encounter with Lacan’s ‘The Real’ ”). Considering all this, though the emergence and growth of AI might make it seem that philosophy is less important, it’s arguably becoming more important than ever before (Dr. Last discusses “the revenge of philosophy” which is likely to occur).

In theology, it is because God is “apophatic” that “kenosis” is possible, which is “the emptying out” of God for humanity. God is not apophatic to Himself, but to us, which suggests why “the apophatic” and “kenosis” are two sides of the same coin. “The apophatic” is for us, not for God: the emptying is a gift, and it is critical for us to likewise choose to interpret our “essential lack” as a gift. But who can it be a gift to? Well, who else? “Others” — those with whom we can identify and thus engage in communal ontology.

VI

Kenosis is an act of humility, but if God were entirely experienced, then God couldn’t “empty himself out,” for he would be entirely “here”: there must be something missing to us (not in God) so that we might experience God. The “emptying of God” is precisely in the place where we cannot experience God because God is so much greater than what we can experience, which is to say that if God weren’t “apophatic,” then God could only undergo “kenosis” by missing something in himself: it would not be possible for God to undergo “kenosis” without it somehow being a subtraction from God’s self. In honor of Dante, imagine the sun in a clear sky: we could not look at it, precisely because it would be entirely revealed. The sun would not “hold itself back” at all, and thus the lack of the “apophatic” would leave no room for “kenosis,” per se. But imagine there was a cloud covering the sky (which in our case would be finitude, our brains): this would make it possible for us to look at the sun without the sun dimming in itself, which is to say it is the very apophatic-condition of the sun which would make possible a kenosis-act relative to us. Similarly, it is our very seeing of our deepest ontology as “lacking” (in our individuality) that is an “opening-act” to the other: because we see ourselves as “lacking,” in that interpretation of ourselves, we open ourselves up. Kenosis and “the apophatic” are thus one.

On this point, we can see how our interpreting of ourselves as “apophatic” would also be an act of “emptying ourselves out” for “others,” which is to say we make space in ourselves to live and experience communal ontology. AI will force this choice upon us, I think, this “Final Absolute Choice,” and if we choose well, the object of AI can forever be transformed like Christ transformed the cross. Then, the object will always point to a Thou, as the cross always points to Christ: our actions toward the object can forever change it. And so if we choose for AI to be a Crucifixion of the isolated individual, AI can forever be a symbol “pointing at” (an icon) our “essential lack,” which is precisely the necessary pre-condition for there to be a communal ontology. If we were not “lacking,” we could not be communal — does that not make “lack” a good thing? Well, yes, but that assumes we choose to interpret it as “a lack” and not “nothing,” as “apophatic” versus “insignificant.” Will we have “the inner resources” to make this choice? If not, in a world where AI is widespread and omnipresent, we might never “belong again.”

To close, Andrew Luber said that “AI would be nothing without us,” and this was a particularly useful phrase for approaching and considering our situation and “The Final Absolute Choice.” Indeed, if we as humans didn’t exist, AI would not exist, so if a day comes in which AI makes it seem as if humans are not needed, will we be able to remember this truth? And from this memory, will we be able to choose to interpret the presence of AI as evidence of our greatness, precisely in suggesting our “essential lack” and “apophatic nature?” Or will we instead see AI as evidence that we are nothing/insignificant? The choice between nothing/apophatic (“lack”) and nothing/insignificant — this is “The Final Absolute Choice.” For us “beings,” there is arguably nothing more “other” than “nothing/lack,” and so being able to identify with that seems to be the final frontier of “being-other,” suggesting why Hegel’s “Absolute Knowing” has so much to do with the topic of “lack” and a “limit” which seems to blur with “limitlessness.” What can hurt us if we are “apophatic?” Nothing in the universe, it turns out, for what we essentially are is what is “lacking” from the universe. And, knowing this, why not look upon AI and be amazed that we made something that can work without us?

.

.

.

For more, please visit O.G. Rose.com. Also, please subscribe to our YouTube channel and follow us on Instagram, Anchor, Facebook, and Twitter.

