A Short Piece

AI in Jerusalem

O.G. Rose
Feb 27, 2023

Inspired by “The Net (34)”

In “The Net (34),” Alex Ebert made the point that we could offload ethical decisions to AI, which would save us the existential burden and tension of deciding “what we should do,” a temptation that will prove especially strong in “tragic situations” where there are no good or easy decisions. If we had to choose between cutting food supplies or cutting access to water to save lives, wouldn’t it be wonderful if we could claim “an AI told us to do it” or “the AI said x was the rational course”? In this way, we could avoid blame or responsibility: we could claim “we did it because it was rational” or “we did it because the AI concluded we should.” In one way, this sounds appealing: humans are emotional and self-deceptive, so perhaps we should want AI to make hard ethical decisions for us? On the other hand, Chetan made the point that Adolf Eichmann, according to Hannah Arendt in her famous Eichmann in Jerusalem, constantly claimed he was “just following orders,” that he was innocent. Perhaps Kantian, he did his duty.

With AI, will we all become like Eichmann, claiming that we are “just following forces outside our control”? That will easily be a temptation, and, strengthening it, we could claim it is rational to use AI to make ethical decisions for us, and that not doing so would be crazy (which brings to mind the critical work of Benjamin Fondane). Rationality would seemingly drive us into using AI, making us “thoughtless” like Eichmann. Or perhaps better off? I mean, isn’t it possible that it would be better for AI to make ethical decisions for us? Indeed, we cannot say for sure, and that’s the problem: there is always space for uncertainty, and how could we then oppose that uncertainty by thinking? If we think, won’t we end up rational and thus decide to use the AI? Isn’t it checkmate? Yes, unless we begin incorporating “nonrationality” into our thinking, which would help us avoid “Nash Equilibria.” But why should we? Indeed, that defense would have to be made, as hopefully the work of O.G. Rose does, but others must make that judgment.

For thinkers like Fondane and Lev Shestov, humans could only be free if there was the possibility of “the impossible” or “the irrational” (and they are not wrong, even if I might word it differently). They were writing before John Nash and the birth of Game Theory, and today we certainly see situations in which it seems like all rationality can do is doom us. Indeed, without “nonrationality,” I see no way to escape being replaced (or worse) by AI, but with “nonrationality” and the “speculative reasoning” of Hegel we might negate/sublate ourselves into something higher and better (the stakes are real). Hard to say, but I find it very interesting that AI makes “the banality of evil” rational, for in Belonging Again there is usually a description of how (autonomous) rationality leads to a deconstruction of “givens,” while “givens” cause “the banality of evil”; here, though, AI becomes a kind of “given” that combines the worst of “autonomous rationality” with the worst of “givens.” To use Philip Rieff’s language, AI “releases” us from ethical decision while at the same time “constraining” us within rationality. It is “given” that we must do what is rational, and AI makes clear what is rational, yes? AI is a “given autonomous rationality,” a very strange entity. Sociologically, it is unique, and perhaps radically risky, which is perhaps why it might be radically beneficial.

In thinking like Augustine’s, we could not be free unless we could act according to the good, which would suggest that if we “outsource ethics to AI,” it would not be possible for us to be free except by following the AI, which will tell us what is good (and we will have little reason to doubt it, given its mastery of intelligence). Thus, will the good mean we must be slaves? A bizarre inversion, the very possibility of which suggests the radical “alienness” of AI. Is our best line of defense to become “aliens” ourselves? Perhaps. Always perhaps.

.

.

.

For more, please visit O.G. Rose.com. Also, please subscribe to our YouTube channel and follow us on Instagram, Anchor, and Facebook.
