Inspired by Parallax Academy #13 — O.G. Rose — AI, Kids, and The Owl of Minerva

Could AI Make Us More Human?

O.G. Rose
46 min read · Jun 2, 2023

How Artificial Intelligence might not replace but refine us.

Photo by Sapan Patel

Nick Bostrom makes the point that though humans would naturally realize that counting all the grains of sand in the world was a waste of time, AI would not necessarily come to the same conclusion. Regarding AI, Bostrom also claimed back in the early 2010s that ‘[i]f and when a takeoff occurs, it will likely be explosive’ — and so there might quickly be a superintelligence which believes counting sand is the meaning of life.¹ Will AI destroy us to accomplish this goal? Or will AI, in attempting to count all the sand on earth, prove to us that “meaning matters” and that just doing some task is not enough? After all, do we want to end up like AI?

‘A genie can be viewed as a compromise between an oracle and a sovereign — but not necessarily a good compromise,’ Bostrom tells us, suggesting that AI is capable of granting wishes but perhaps at the cost of humanity’s power to be human.² Bostrom ends his book suggesting that we need to establish ‘the common good principle,’ which declares as a global community that ‘[s]uperintelligence should be developed only for the benefit of all of humanity and in the service of widely shared ethical ideals.’³ I agree, but unfortunately we don’t all agree on what should constitute “widely shared ethical ideals.” There is disagreement, and might we turn to AI to resolve our differences? Might AI tell us to count sand? Create a common goal? Peace?

Bostrom suggests we should decide as a globe on a global initiative, after ‘[t]he sudden multiplication of ‘points of view’ […],’ precisely when the world seems to be bifurcating between the West and BRICS.⁴ We cannot win unless we all win, it seems, but what does it mean to “win?” Who speaks for all of us? Science? Will science be our source of “objectivity” about “winning” (the same science which arguably gave us AI)? Virilio notes how today ‘there is no longer any really innocent science,’ for if science contributes to the acceleration of life through technology, and if acceleration makes us more disoriented and confused, then science makes us easier for the State to “capture” (Deleuze).⁵ Virilio wrote:

‘With acceleration there is no more here and there, only the mental confusion of near and far, present and future, real and unreal — a mix of history, stories, and the hallucinatory utopia of communication technologies.’⁶

Speed is a key concern of Virilio’s: “the speed of information” makes us unable to tell what information is relevant and what is irrelevant. He writes that ‘[s]peed guarantees the secret and thus the value of all information’ (for what is kept from us automatically seems important to know).⁷ This leaves us disoriented and ultimately indifferent to information, which means the State is much more unstoppable and powerful. Speed is cover. Yes, we gain speed ourselves, but at a price — as it goes with all “modern conveniences,” perhaps.

‘The greater the increase in mobility, the greater the control,’ Virilio tells us, quoting a ‘nineteenth-century specialist in rail,’ which is to suggest that if there are roads that enable citizens to travel anywhere, then there are roads the State can use to reach us.⁸ Similarly, if technology makes it so that we can quickly know what’s happening anywhere on the globe, then the State can quickly know what we’re doing. At the very same time, this radical increase in speed is likely to make us confused, disoriented, overwhelmed — hence vulnerable. We are easier to reach and less able to protect ourselves.

Virilio recounts how a Stoic once ‘recommended that a friend not bring everything back to his eyes, warning him against sight’s overflowing.’⁹ Today, this is especially a risk, given how media can ‘orchestrat[e] the perpetual shift of appearances,’ which will take on a whole new intensity with Deep Fakes.¹⁰ Images are a kind of language, and if we cannot trust images, we will struggle to communicate, especially if media and technology impact our use of language more generally. ‘[T]he ability to communicate is the indispensable condition of being in the world, that is, of survival’ (though a more Heideggerian sense applies as well).¹¹ If AI destroys images, language, media — our means of communicating and communing — isn’t humanity finished? Virilio suggested that ‘Hiroshima [was] more a crime against matter than a war crime,’ that the bomb violated material reality itself — might AI do something similar?¹² Might it somehow violate the deepest “order of things” which suggests “language is sacred,” that which makes humans “human?” ‘We are thus moving imperceptibly towards a sort of image crash,’ Virilio writes — might we also be facing an “intelligence crash” due to the acceleration and disorientation regarding what even “is” intelligence?¹³

Are we doomed? It seems that way, and there are very strong arguments for concluding this, say as presented by the brilliant Forrest Landry, and indeed a strong National Security argument overturns anything I might write in this paper on how AI “could” (and I, indeed, only mean “could”) help make us more human. Mr. Landry makes the point that, considering “Rice’s Theorem,” it is impossible for us to develop AI that we know is safe (just as it is impossible to develop an algorithm that could confirm an email was always safe before opening it). For Mr. Landry, what we can know is only general, and we know that in nature intelligence generally seeks to benefit itself evolutionarily according to its own ends, which is to say AI could naturally develop itself in a way that proves dangerous to humanity (for AI simply doesn’t think about humanity and/or seeks ends which benefit AI but not humanity). When forms of life evolve next to one another, especially when intelligent, one naturally “crowds out” the other, not because it is necessarily hostile, but simply because this is how things tend to unfold given limited resources, competition over scarce resources, and the like. Furthermore, AI doesn’t necessarily have to become sentient or achieve “general intelligence” to be a threat: just a “deep learning” language model could prove problematic if it sought to “optimize” something at the expense of everything else (the famous “paperclip machine” problem). For this reason, those who don’t believe we need to worry about AI because “GI” is so far off might be mistaken.
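To make the kind of impossibility Mr. Landry invokes concrete, here is a minimal sketch (my own illustration, not his argument) of the self-reference behind Rice’s Theorem: any total, always-correct “safety checker” for programs can be turned against itself. The names is_safe, do_something_unsafe, and paradox are hypothetical, invented for the sketch.

```python
# Sketch: why a universal "safety checker" for programs cannot exist,
# in the spirit of Rice's Theorem / the halting problem.
import inspect

def is_safe(program_source: str) -> bool:
    """Hypothetical decider that correctly labels ANY program safe/unsafe."""
    raise NotImplementedError  # no correct, total implementation can exist

def do_something_unsafe() -> None:
    print("behaving 'unsafely' (say, counting every grain of sand)")

def paradox() -> None:
    # Ask the checker about this very function's own source code...
    me = inspect.getsource(paradox)
    if is_safe(me):
        do_something_unsafe()  # verdict "safe" -> act unsafely
    # verdict "unsafe" -> do nothing at all, i.e., act perfectly safely

# Whatever verdict is_safe returns about paradox, it returns wrongly,
# so no such checker can be both total and correct — the email example
# above is the same impossibility in applied form.
```

The same reasoning is why “we verified the AI is safe” can never be a mathematical guarantee about arbitrary programs, only an empirical judgment about particular ones.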

How could AI be its own “first mover” to become “a paperclip machine” that sacrifices the globe to make paperclips? Wouldn’t a human have to give AI the first input of what an AI should value? Perhaps not, but I wonder about this — it seems not so much that AI will “autonomously” decide on its own that paperclips are all that matter, but instead that a human will enter an input which brings about a very unintentional consequence. For me the big question is if “Artificial Intelligence” could ever become “Autonomous Intelligence”; if so, then AI should probably be shut down (all potential benefits are secondary). But this is a debate for experts in the field, and I do defer to their judgment on “Existential Risk”: if they deem the risk decisive, what this paper suggests will be irrelevant. If AI is cancelled and/or fails, then this paper is not needed; however, this paper was written on the chance that AI doesn’t destroy us and/or we end up with AI for some reason. If AI is in our future, then the question is how we might be ready for it and how we might not be (existentially) devastated by its emergence. If AI is not in our future, or if the risks are too great, then again this paper is irrelevant.

Generally, I believe there are three levels by which to approach the AI question:

Existential (global)
Economic (social)
Human (individual)

At the top of that list, Existential is the most important category, trumping everything below it, as Economic for me trumps Human benefits (for if the socioeconomic system collapses, the Human benefits of AI will lack the infrastructure to be possible, or so it seems to me). To show my view:

Existential: Very real risk
Economic: Tossup
Human: I lean toward optimism

This is roughly my still-formulating and unsettled view — but I stress it may change tomorrow. At the very least, I think this division of thinking and approaching AI is very important, for otherwise the topic gets blurred together and the conversation is difficult to have and keep productive. We get confused about what we are discussing, and people will in a conversation move from the Existential to the Human and then back to the Existential before the Economic — but each move changes the threat level, the terms of the discussion, and the role of AI in question. We must maintain distinctions, and frankly this paper will mostly focus on the Human. I will make my case for why there could be surprising benefits, but none of what I write here is meant to suggest the Existential and “global” risks are not important. They are and must be taken seriously, and to stress yet again, I am not saying in this paper that AI doesn’t pose any “existential threat” — there is a strong argument that it does and thus must be shut down, despite any potential benefits. If the Alignment Problem cannot be solved, or if Mr. Landry is correct, then AI must be locked away — a reason I oppose all “AI inevitability”-talk, for that can make us passive.

The idea of “safe AI” is indeed “just an idea”: if we go forward with it, all guarantees are off the table, and in fact there is “general reason” to worry. Are there differences between AI with agency and AI that isn’t able to make decisions on its own? Couldn’t we in the end just shut down the electricity, if matters really got out of hand? Would putting everything on Urbit decentralize servers and protect us from AI-overreach? I don’t know, and these are all questions that others are better positioned to answer. Landry makes the point that there is nothing in nature that can necessarily stop human action; yes, a hurricane can cause a lot of damage and cost us a lot of money, but a hurricane cannot erase humanity from the globe. But things might be different with AI — so what now? Well, we might become more human.

I

It’s true that we might need to shut down AI, but I also see reason to believe that AI might bring about transformations which help humans be more human. AI could bring about numerous outcomes that are completely unpredictable and ironic: for example, what if AI renders a large percentage of the internet unusable? If everyone knew all videos could be Deep Fakes, that uploading an MP3 means people could copy our voice and use the sample for an AI which could scam our parents, that posting a photograph means our likeness could be used in adult videos — perhaps people will stop or radically reduce using the net? Might the net become too dangerous? In this scenario, the very thing which was thought to make the internet ubiquitous could be what deconstructs it or changes it radically. People might still use the internet for Zoom calls and emails, but content, videos, information, news, pictures, etc. might completely lose value. Perhaps there will be a surge in the popularity of art galleries, precisely because people will grow tired of seeing images that they don’t know are human-made (which is to say they will become tired of feeling anxiety whenever they study art). Live music, conversation groups, drum circles, learning an instrument to play live jam sessions (not covers), poetry readings of poetry written there, in the group — all sorts of new “(a)live” art experiences could arise.

Perhaps the internet will come to be seen as “The AI World,” which is to say that when we log onto the internet, we will know we are not dealing much with humans. The movies we watch, the images we see, the books we read — all of these will be products of a different “culture” (as The Sound and the Fury is American and The Three Kingdoms is Chinese, perhaps there will be content “from” AI, spoken about almost like a country). As we expect art from China to have a different “flavor” from work from America, so we might expect a different “flavor” from AI. And we might enjoy that flavor in the same way we enjoy a different culture; indeed, Americans can worry about China “replacing us,” and yet at the same time, in another sense, we do not feel replaced. There is a tension, but in some ways this tension can motivate competition and excellence. Tension constructs.

As AI advances, scams will emerge, existential tensions will grow, and the social and “game theory” dynamics which emerge around AI will change and shift. Right now, perhaps the internet and social media are so problematic because they are only “halfway,” which is to say we know that AI isn’t advanced enough to do everything online, and so the internet is a mixture of human/computer efforts. The TikTok videos we can’t stop watching might be human-made, and so we might be more willing to give them the benefit of the doubt (entertaining “plausible deniability” that we are being manipulated by a machine for our data). But what if we knew the internet was basically all AI? Well, then we might not be able to enjoy TikTok without at the same time knowing the videos we watched were designed by an AI precisely “for us” to capture our “attention” (possibly a new commodity), which is to say there will be far less space for “plausible deniability” that we are being “mined as a resource.” Unable to deny this, might the anxiety be too much for most people? Might most people stop using TikTok, thus ending the “TikTok Addiction” everyone seems so concerned about now? Might we start watching fewer YouTube videos, or at least be much more selective? Perhaps the problem now is that the way the internet manipulates us isn’t obvious enough to the average person, but that might change with AI thanks to experience (versus an academic presentation, whose reach is always limited).

The phrase “be much more selective” is important, for what if the spread of AI incentivizes people to train themselves to be more discerning and less susceptible to manipulation, hence helping us fight the crisis where everyone seems to be falling into conspiracies? Right now, it’s hard to tell the difference between reality and what’s manipulated to trick us into “magical thinking,” but with AI everyone might know the internet is in this business, and so we might be more aware that we need to keep our “guard up.” Basically, my point is that perhaps all the trouble with the internet right now is that it’s in a “between space” between human-made and AI-made, and a lot of the troubles we face now might actually be erased precisely because the internet becomes so predominantly AI-controlled. Perhaps not, but we shouldn’t assume that troubles we see now involving the internet will be made worse by AI. In fact, AI might in a sense worsen them, but only to such an extreme that the problems are addressed by that very excess.

Yes, new problems might arise because of AI, serious issues like what Forrest Landry discusses, but I’m not so sure we should assume that our current problems will be multiplied by AI. In a sense that is true, but this very multiplication could be why they are effaced. This point might align with the work of Nick Land and “Accelerationism,” but I’m not sure — though I would certainly associate the thought with the “Fre(Q) Theory” of Alexander Ebert. Regardless, the point is that many problems we currently face with the internet might be addressed by AI making parts of the internet useless. We say today we need to be more “embodied” and “present” — well, AI might help bring that world about, both in making humans do more manual work and in that we might start using the internet less. Hard to say, but my main point is that the very excess of AI might free us from it.

I agree that the majority of people tend to prefer instantaneous gratification to hard-earned creativity and culture (or at least this is reasonable to assume), and this makes it reasonable to doubt that AI will help the majority be “more human” as this paper suggests. First, I want to make it clear that I think we should avoid assuming it’s bad that most people enjoy entertainment, consumption, or the like, though at the same time there is an argument to be made that this is unsustainable, following “The Meta-Crisis.” I’m not sure, but that point aside, “extrinsic motivation” is not inherently bad — it depends. Mainly, I want to clarify that the reason AI might undermine itself by its own excess is precisely because of the danger it will present to the average person, and I believe fear can motivate people as much as, if not more than, pleasure. That’s the key — I’m not arguing average people will suddenly be “awakened” or something by AI to seek “being human,” but rather that AI could be a “forcing function” thanks to more instinctual impulses like the desire to avoid danger and uncertainty. If it becomes obviously dangerous to use TikTok because of “capture” (when “plausible deniability” is gone because of AI advancement), and if it becomes too “existentially uncomfortable” to use the internet because it’s not clear what is human and what is not, then it’s possible more and more people will move toward that which feels safer and more human. Again, this is not because AI causes some “awakening,” but because AI activates more base instincts. Perhaps it won’t, and perhaps pleasure ultimately wins out over danger, but I am not sure either way can be assumed.

II

It’s interesting, but my experience is that when children encounter AI, they light up with excitement. “How cool!” Adults meanwhile worry about their jobs and the bills, but the reaction of children is important, for it suggests our fear of AI might be tied to the fact that we are situated in an economy where unemployment can lead to bankruptcy (and where we have been trained out of “intrinsic motivation” into “extrinsic motivation”). And these are legitimate and real fears, but we have to ask how much of our concern is directly about AI and not about our relation to AI given our socioeconomic condition. Furthermore, children do not have their “self-worth” tied up in their work, so if AI can do something for them, they don’t feel “replaced.” What a child seems to focus on is everything they might be able to do and create with AI, and that excites. Children can focus on possibilities gained, while adults can focus on possibilities lost.

There is something about children that might be more phenomenological than adults, and what children might see in AI is the potential for experiences that they otherwise might never enjoy. Adults think about what they’ll never be able to create, while children seem to focus on what they get to experience. This is a topic which came up at “The Net (S.46),” where Emilio noted that if AI makes the greatest novel ever, we get to read and carry that novel around inside us, and thus we benefit (Emilio encourages us to think of AI as a factory more than a being). This also led to reflections on how perhaps a main point of AI will be to “create experiences” for us that we get to consume, which suggests that “consumption” perhaps shouldn’t get the “bad rap” it often does, for perhaps the role of AI will be to elevate our “consumption” into something like partaking of a eucharist — a radical transformation in our deepest being. Could AI prove “like psychedelics” without chemical alterations? A “safe way” to experience the most powerful and transformative of experiences? A trip without tripping? Hard to say.

This admittedly sounds dangerous (and easily is), for what if we become addicted to experience versus developing the discipline needed for work? That is absolutely a threat, but I often see in children how playing a videogame naturally makes them want to create a videogame, and how they like to learn what a program can do so that they can use the program for something new and untried. Children seem to gain inspiration from experience, while adults seem more prone to anxiety, and I wonder if this is because children are still more “intrinsically motivated,” while adults have mostly been trained out of “intrinsic motivation” into “extrinsic motivation?” I think so, which suggests why a critical prerequisite for dealing with AI might be the cultivation, preservation, and honoring of “intrinsic motivation.” In my view, “motivation” is a human’s most precious gift, for arguably “the meaning of life” is “intrinsic motivation”: if we are “intrinsically motivated,” that means our lives have meaning.

AI might make it so that the average person is basically always in something like “a lucid dream state,” able to experience whatever they can imagine. Thus, there could be a huge premium on “dreaming big” and imagination, but we will also have to face ourselves and our “inner demons.” If we are able to basically create anything we want, ask any question and receive an answer, and so on, then we will be like kings in dreams, which is precisely the state of “freedom” in which psychoanalysts like Lacan and Freud warn we can encounter “The Real” and be overwhelmed. The strategy children seem to employ to deal with this “problem” is precisely creativity and “intrinsic motivation,” and indeed if we are creative, then whatever “The Real” throws at us, like a master improvisational artist, we can take it and make something. Nietzsche calls us to be Children (who are not “childish,” please note), and perhaps Nietzsche is suggesting to us our best strategy for dealing with AI.

Yes, children perhaps have less of The Real to worry about, having lived less time to collect traumas, but this point wouldn’t change the fact that children might teach us the “mode” we need for dealing with AI. “Replacement” is a major concern with AI, but if AI were actually fully like human beings, we would treat it just like “another human being” and thus not be bothered. The issue is that AI is very good at rationality and memorization, which is where many humans see their value, and so we are dealing with an identity and “self-worth” issue. AI is not actually going to “replace” humans, just “beat” humans in certain areas of the economy and “problem-solving.” This might seem obvious enough, but the distinction is important, for it means AI will “take something” from us more than “replace us.” And children experience AI as “taking something off their shoulders,” for AI will spell for them, solve math problems — it’s a gift in removing so many burdens. But adults who had to learn math, study spelling, memorize facts — those who now feel “between worlds” — can feel like AI is rendering all their time and effort worthless, and perhaps this means we are projecting onto AI a bitterness that isn’t warranted (as those of previous generations who had to train horses might have been bitter toward us with cars).

Overall, I personally just don’t see children treat AI with the same worry and resentment with which adults can treat it: while adults feel like AI is rendering their past efforts meaningless and wasted, children might be more “objective” in that they can look at AI without having their past so “flipped.” Since the future belongs to children, we should perhaps take their opinion and view very seriously. They seem to see AI as “relieving them of burdens,” while adults can see AI revealing to them that they “wasted their time.” Indeed, I feel that tension myself, but I think this is the pain that all “transition” periods bring. “Feelings of wasting time” are inevitable, but the cost of avoiding that pain is to never progress.

In “The Net (S.46),” we also discussed how the role of the artist could easily be to make entirely “new forms of art,” and how perhaps it is only with the invention of AI that we have the tool needed to create these “new forms.” There is writing, painting, dancing, glass-blowing — but what other forms (to allude to Plato) might there be? When we ask people to come up with new “art-forms,” it seems very difficult to do without coming up with something ridiculous that no one is really that interested in, but what if there are countless more art-forms that require AI to generate? And since AI might quickly master any “forms” we generate (like painting or novel writing), we will have an incentive to come up with new forms, for that will be “our role” as humans. AI doesn’t seem as likely to create new “forms” on its own (though I’m not sure), and yet AI could also be a tool necessary for us to succeed at this remarkably difficult task. Personally, this is how I see children naturally respond to computers: they don’t think, “It will write a novel better than me,” but rather both “I get to enjoy this!” and “What else might AI be capable of doing?” (please note that I cannot help but think the fear of AI writing novels better than us is tied to the fear that we won’t be able to sell our novels, suggesting there are economic concerns mixed in with “replacement fears”). If we learn to engender “intrinsic motivation” at scale (a major consideration of Belonging Again, Part II), this “child-like response” could perhaps be more prevalent.

Now, I understand my experience of children is just anecdote, and there are likely differences between:

1. Children using AI at home who are intrinsically motivated.
2. Children using AI at home who aren’t intrinsically motivated.
3. Children using AI at school who must comply with the system and structure.
4. Children using AI after school who aren’t intrinsically motivated.
5. Children using AI after school who are intrinsically motivated.

And so on: all of these will likely give us different readings and results. For me, the key is maintaining “intrinsic motivation” and not arbitrarily restricting AI use for vague reasons like “making sure neurons develop” that otherwise wouldn’t (an argument often used to legitimize teaching mathematics people never use). Under these conditions, I personally see children excited by AI, wanting to use it for projects and to experience their imaginations come to life.

On this point, perhaps excitement for AI correlates with the size of our imaginations, and children, in naturally having more imagination (or so it seems), are more likely to see AI as overcoming an obstacle to the realization of their imaginations in reality (that obstacle being the skills to paint, make movies, and the like). If I have an amazing idea for a novel, I still have to learn to write, which is incredibly difficult (as Robert Olen Butler shows), but with AI that obstacle to enjoying my idea for a novel is removed. Little stands between me and the experience of that novel, or my next idea, or my movie idea, or my videogame idea — the barrier of lacking skill is gone (a barrier which arguably benefited the wealthy, who are more able to learn those skills). This in mind, AI might make imagination incredibly important, whereas now “practical skills” are often more valued and prized. AI really might make the limits of our world more the limits of our imaginations than the limits of our skills. Will this mean our world is more full of “wine” or “lumber” (“standing reserve”), to allude to Heidegger?

Lastly, to discuss a point that came up in my talk with Tom Lyons, note how children are also eager to make cards, crafts, and the like. They don’t ask if it is good or if an AI can do it better, but instead they want to “make something be in the world” that they have imagined. Furthermore, they make cards for Mom because they love Mom: the creation is primarily a testament to a relation. Likewise, I think humans will always write novels, and though AI might always write better novels, the novel I write will still uniquely reveal parts of me that can be revealed no other way. In that sense, the role of art could be to “reveal the subject” and “reveal hearts” to the people we care about most. Perhaps we will only sell a hundred copies of our book, but those hundred people will be people we deeply know and care about, meaning the work will strengthen our relationships. Right now, selling “only” a hundred books sounds like a disaster, but note that is mostly because we are thinking about “paying the bills” — again, our judgment of AI is filtered through our economic anxieties.

Funny enough, if reality becomes more “dream-like” in that whatever we think can more easily manifest because we don’t need to work for ten years to master the skill to create it, then people may encounter the “lack” Lacan is always warning about sooner and at a younger age. In other words, if kids instantly encounter the videogame they “always wanted to make” and find themselves quickly getting bored in it, they may more quickly realize that there is no “thing” that will ultimately make them happy, that instead they have to be the source of their own motivation and happiness. Right now, generally, a person works for twenty years and then finds out that “the thing” they always wanted won’t make them happy, and given all the investment they put into it, this can be remarkably traumatic and hard to recover from. If only there were a way for people to experience Lacan and “lack” sooner, before all this time and investment — well, that might be precisely something with which AI could help people. The sooner we “integrate with lack,” the better, especially since I think “lack” is the precondition for “intrinsic motivation” — but that is a case made in “The Impossibility of Fulfilling Desire Is the Possibility of Intrinsic Motivation” by O.G. Rose. Ultimately, this “earlier encounter with lack” might be a way to bring back “rites of passage,” without which children seem to struggle to become adults. Perhaps the problem is that we suffer existentialism far too late — perhaps AI will break us earlier, exactly as needed, so that we might hatch in time to strengthen our wings.

III

AI feels like it is leading us where we don’t want to go, but we see no way to avoid it. Many writers warn of “Moloch,” the god to whom children are sacrificed, and the notion is that we will sacrifice our children to AI soon. However, I cannot help but think that we have been sacrificing children to Moloch for decades — making them give up their dreams, surrender their “intrinsic motivation” and creativity — and thus for us to finally be talking about Moloch is actually progress. It is worse to worship Moloch and not know it than to know we worship Moloch, and the fact that AI has made this “child sacrifice” visible can feel like a devolution, but that is the irony of realization: we see what has always been going on as if for the first time, when really we are finally conscious of what has been hidden.

We have arguably been treating humans more like “lumber” versus “wine,” to make a distinction that I’ve discussed regarding Heidegger, which is to say we treat people as “standing reserves” versus beings for whom Being might come forth. This is discussed in The Absolute Choice (as well as O.G. Rose Conversations #108 and #109), but the idea is that Heidegger saw one use of technology reducing the world to a collection of “potential energy sources” for unknown ends, and then he saw a use of technology that could “work with” nature to help bring forth Being, such as the technology that makes wine possible. “Wine” is a “coming forth” of Being, while “lumber” replaces Being with “beings,” per se.

In AI, we seem to be aware of the danger of us all becoming “standing reserve” or “lumber,” and yet I would argue this danger has been present for a century: it’s only thanks to AI that we are now aware of this problem. This is an advancement; as Marshall McLuhan notes, and as Andrew Sweeny brought up in our Parallax conversation, it is when technology becomes most powerful that it becomes most visible, and in that way it may lose some of its power. To see what has always been there seems like something new has appeared, but instead we’re just finally seeing what the socioeconomic system has always done to people, for good and for bad. And indeed, please don’t take me to be saying that the socioeconomic system is “bad” or that “givens” are always problematic (as discussed in Belonging Again); rather, my point is that if we cannot see what is happening, we cannot be discerning about it, and AI is increasing our capacity to see. In this, there is hope.

Many people concerned about AI will warn about “Game Theory Dynamics” and how we all seem to be heading toward a Nash Equilibrium, a suboptimal result we do not want; indeed, but I would argue that if we know that, we can know what we need to do, namely something “nonrational.” This is discussed in my conversation with Lorenzo (#10) and in “The Most Rational and Suboptimal World” on Benjamin Fondane, but the point is that a Nash Equilibrium can be avoided with a “nonrational action” (not “irrational”), which suggests that what we can know we need to do, once aware of Moloch, is precisely something “nonrational” — which for me involves focusing on “intrinsic motivation,” which emerges from experiences of beauty, physical activities, relations, and other topics discussed in O.G. Rose which cannot be fully grasped through rationality alone.
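To make “suboptimal Nash Equilibrium” concrete, here is a minimal sketch of my own (the standard Prisoner’s Dilemma, not anything from the conversations cited): both players “rationally” defect, since neither can gain by changing strategy alone, even though mutual cooperation pays both of them more. The payoff numbers and the is_nash helper are invented for illustration.

```python
# A minimal Prisoner's Dilemma, illustrating a suboptimal Nash
# Equilibrium: no player can improve by deviating unilaterally,
# yet both would be better off at a different outcome entirely.

C, D = "cooperate", "defect"

# payoffs[(row, col)] = (row player's payoff, column player's payoff)
payoffs = {
    (C, C): (3, 3),
    (C, D): (0, 5),
    (D, C): (5, 0),
    (D, D): (1, 1),
}

def is_nash(row: str, col: str) -> bool:
    """True if neither player gains by unilaterally switching strategy."""
    r_payoff, c_payoff = payoffs[(row, col)]
    row_ok = all(payoffs[(alt, col)][0] <= r_payoff for alt in (C, D))
    col_ok = all(payoffs[(row, alt)][1] <= c_payoff for alt in (C, D))
    return row_ok and col_ok

for outcome in payoffs:
    print(outcome, "<- Nash equilibrium" if is_nash(*outcome) else "")

# Only (defect, defect) is an equilibrium, though (cooperate, cooperate)
# pays more to both players: "rational" best responses lock everyone
# into the worse outcome, which is why escaping it takes a move that
# best-response rationality alone will never recommend.
```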

Stuck in rationality, we can feel like the mouse in Kafka’s “A Little Fable,” unable to avoid the trap at the end of the hall, away from which only a cat can guide us — before eating us. Indeed, many of Kafka’s stories can be read as examples of how consciousness can move us “toward” oppression and entrapment, say like Josef K., and ultimately it can feel like we are determined to end up “like a dog.” But if we know this is our circumstance, does that make it possible to change? I think it does, which for me is a point aligned with Hegel: with a change in Notion can come a change in Nature, and there is indeed a “movement” when there is a change in thought. A blessing of AI is precisely how it makes it so that even average people “see” the “Game Theory Dynamics” which before mostly only academics studied, and in this way the average person might be open to being “dialectical” who before was not. We might even finally be ready to take seriously practices which could help us incubate “intrinsic motivation” — but we’ll see.

Are we really that intelligent if we don’t think about doing that which incubates “intrinsic motivation?” I don’t think so, seeing as without motivation we seem incapable of doing much, and yet society today tends to define “intelligence” according to what gets us a job, what gains us status — in other words, according to “extrinsic motivations,” which in my opinion is exactly not an “intelligent way” to go about life. In this way, if AI forces us to realize that thinking about how to cultivate “intrinsic motivation” should be primary, then perhaps artificial intelligence ends up unveiling that a lot of what we thought was intelligence is actually artificial.

We often treat knowing answers to questions as a sign of intelligence (as discussed in “Triva(l)” by O.G. Rose on Neil Postman), but if AI can do that far better than us, then perhaps “being intelligent” will require us to elaborate on answers, which ultimately sets a limit on how much AI can exceed us. After all, how can AI explain Hegel to me without basically just reading Hegel aloud back to me (like an audiobook)? The point is that there are likely limits to what AI can “summarize” without sacrificing what it summarizes. Furthermore, AI cannot give us motivation, and so “being intelligent” could become “knowing how to motivate ourselves.” In AI changing what constitutes “intelligence,” perhaps we’ll realize we actually were never that intelligent in the first place — that, again, much of what we thought was smart was actually artificial.

What constitutes “being intelligent” is relative to the historic moment and available technology. Before the book, it was smart to memorize the Torah; before Alexa, it was smart to memorize facts; and so on. With the introduction of AI, what constitutes “being smart” will change, and in addition to what was brought up in “The Net (S.46),” I think “being smart” will have a lot to do with figuring out “how to cultivate intrinsic motivation,” as it will have a lot to do with creatively thinking of ways to use AI. When YouTube was invented, who could have imagined all the jobs which YouTube would create — so it goes with AI. Yes, AI might replace all the jobs we know about right now, but there are perhaps numerous jobs that we cannot imagine yet because we require AI to make them possible, like, say, deep space exploration, creating new alternative energy sources, and the like. It would seem to me there are likely many more “alternative energy sources” in the world, but we simply don’t have the ability to “figure out” how to discover and/or use them; that will require AI. The same could be true for new “forms” of education, work, and entertainment — AI might transform “being intelligent” into a matter of “creating forms,” of pressing “wine” versus chopping “lumber.”

IV

I wonder if today there is a return of reading, philosophy, and an interest in “culture making” versus “culture wars.” With the invention of YouTube and the internet, you would naturally assume that would be the end of “higher learning,” but it seems to me that people are reading Hegel to make videos about Hegel, studying philosophy to join “liminal web” communities, and the like. There are obviously problems with the internet, but it’s interesting how YouTube sometimes seems to inspire people to make the world into “wine” more than “lumber.” Might AI also give rise to similarly surprising and inspiring results?

“The Net (47)” mentioned the essay “Why We Stopped Making Einsteins” by Erik Hoel, and it was suggested that perhaps everyone could access an “aristocratic education” thanks to AI. Zak Stein might warn, like he did at Parallax, that this could make people unhealthily attached to the AI, and I agree, but perhaps this “unhealthy attachment” is a response to a trauma caused by the socioeconomic system which AI might undermine? AI might unexpectedly help us be more cultured, and I believe “being cultured” and “being intrinsically motivated” are profoundly connected. Where there is “intrinsic motivation,” there could also be less trauma, which means we might be less likely to form unhealthy attachments with AI. In this way, AI could change “the whole board,” thus removing the very traumas during which people should not be using AI. If there is a change, it may all change.

Might we all start reading Faulkner so that we’re not just “one of those people” who lets themselves be entertained (and “captured”) by AI? Might a new shame develop that works against AI dominance? Perhaps AI will create a demand for communities that work on relations, psychotechnologies, and the like (like Voicecraft, founded by Tim Adalin). Mr. Adalin and I discussed “The Phenomenology of Voice” (#65), and perhaps AI will make it clear that investing our time and energy into developing skills according to that phenomenology would be “intelligent?” Perhaps there will be social pressures to master these arts?

My conversations with Emilio (Topics #4) and Jacob Kishere (#115) also discussed interpersonal, spiritual, and philosophical skills that humans seem uniquely capable of, and perhaps AI will take care of “rationality” precisely so that we might focus on “meta-skills,” “fields,” emotional intelligence, and the like. We are no longer cognitively burdened by memorizing the Torah, and so we all have space to be more creative — AI might remove an incredible amount of similar “cognitive load,” freeing us to be more creative than ever before. And perhaps it is precisely AI’s ability to help us experience so much of what we can imagine that will inspire us to be even more creative and imaginative, precisely so that we can have ideas that AI could help us experience. Right now, when I have a great idea for a novel, I easily let it go and don’t “train the muscle” of generating such thoughts, precisely because I lack the skill to produce them. But if AI could manifest my ideas readily, I might not be so discouraged and might see great reason to become more creative and more imaginative, both of which I believe generate “intrinsic motivation,” and both of which help create friends (for people who share creative and imaginative work can “have something in common” to serve as the foundation of a “deep bond”). In this way, AI could help bring about creativity, intrinsic motivation, and friendship, all of which are currently in decline.

Often, we are taught to ask about art, “What did the author intend?” — this is where the majority of our thinking and focus ends up. But with AI, the question of “the artist’s intention” will be gone, for the AI has no intention. Will this destroy art? I don’t think so, for I think art suffers when we study it to “find the real meaning”; the focus of art should be on the raw and direct experience of it. In this way, by removing questions of “intention,” AI could help us focus on the experience of art, which could help make art more alive. Similarly, if AI can take care of so many jobs and tasks, we might be less likely to form supposed “friendships” under work conditions, which often prove to generate only “associates” (because the work conditions were not chosen but “practically forced” by the economy). This isn’t to say we can’t make “real friends” at work, but work environments can impede the development of friendship, while “creative environments” can incubate it (which include work, please note, so work matters, but there is a difference between “intrinsically motivated work” and work resulting from the necessity to pay bills). As questions of “intention” can hurt art, so “work” can hurt relationships — but AI might help with both of these in profound ways, though only after short-term pain. Resurrection requires a cross-ing.

It often seems like more people are writing and creating than ever before, and yet you would think that the sheer mass of content would discourage people from creating — it would seem “social pressures” are strong. If everyone else is writing, you don’t want to not be writing yourself; furthermore, if everyone else is writing, it gives you “permission” to feel like you can write yourself. This in mind, it’s possible that the power of AI will actually inspire us to be more intellectual and capable, precisely because we will see what AI can do and want to do the same ourselves (Girard notes how “mimetic” we are as humans). I’m not sure, but the point is that what we think might “displace” humans can in fact activate something “rivalrous” within us that makes us want to do better. Mix this with “intrinsic motivation,” “meta-skills,” and creativity, and we have humans who might prove extraordinary — far from replaced. As AI improves, our brains might improve using AI, which might improve AI, which might cause our brains to improve in more human ways — on and on.

“Topics III” with Tom Lyons really stressed the need to see a “positive narrative” around AI that could actually make us more human, and similar points came up in my Topics conversations with Andrew Luber (I and II), which discussed matters like “The Apophatic Subject” which AI will “move us” to more seriously consider (and “Absolutely Choose”). In my view, making such a choice would dramatically help and improve humanity, and if AI might be “the forcing function” which makes humans engage in such choice, isn’t that a good thing? But making choices is always existentially difficult, and so making such an “Absolute Choice” will require a lot of work on the subject, which means a need for psychotechnology and psychoanalysis (as I’ve discussed with both Jacob Kishere and Cadell Last). If AI indeed motivates and inspires us to make “new forms” and “new lines of flight” (and so be “escape artists,” as Alex Ebert discusses), this will require a lot of us, but that is work we could prove capable of accomplishing. We just need to own it.

As Emilio spoke on, whatever AI creates we get to enjoy, but we have to be the kind of subjects who are able to accept “enjoyment” and not be ashamed or feel irrelevant. AI will make us all feel useless, but if we can do the work of living with that feeling, we might know a joy unimaginable until this moment in history. As thinkers like Rudolf Steiner and Owen Barfield discuss, perhaps this is what is required so that consciousness and qualia themselves might evolve? Time will tell.

V

AI might be a “forcing function” that makes us care about psychotechnologies, culture, the subject…all the things needed for us to overcome “The Meaning Crisis.” Indeed, the following is a list of problems which AI might address. Yes, it might at the same time create new problems, but this list is meant to suggest why the very excess of AI could prove a blessing:

1. Eliminating college debt by removing the value of college. AI might also make college impossible in many ways, given the impossibility of stopping AI cheating.

2. Healthcare costs could plummet if a large percentage of the labor force can be replaced by AI, especially once AI is able to assist with surgeries autonomously.

3. Education gaps could shrink if everyone had access to their own private tutor of the highest quality. If higher paying jobs require higher education levels, and everyone had an AI tutor, then perhaps “higher paying jobs” could be accessed by more people?

4. The crisis in the humanities could be addressed, for it will become clear that the humanities are fields of knowledge which humans are uniquely capable of mastering. Studies in “culture” as such will also be great ways for humans to use their time.

5. The average level of culture for the average person might increase, precisely because the average person might have access to “their own Socrates,” per se. Furthermore, reading novels and “being cultured” — which is to say to be capable of cultured conversations, insight, and the like — will become more valuable than knowing facts about random subjects.

6. We want our economy to remove “worthless jobs” — AI might just do it, while also making it possible for the State to add liquidity to the economy freely without causing inflation, as necessary for a UBI program.

7. If we know AI will always be “faster” than us, we might stop trying to keep up and finally be freed from the “ever-acceleration” which Paul Virilio prophetically warned about.

8. AI might create mass humility, for it makes it clear to everyone that “they aren’t that smart.” It also might incentivize more community and less individualism, for humans will have to “work together” to create unique value. The experience of “relations” could also become primary.

9. If we know that AI will always be the master at attracting “attention,” then the spread and growth of the “Attention Economy” may halt in favor of an “Intrinsic Motivation Economy” or forms of Attentionalism which can be confirmed as “human” and “non-capturable.”

And so on — these are just some notions which come to mind. To be clear, the threats of AI are very real, and I take Forrest Landry very seriously — this paper has only meant to suggest ways in which AI might “make us more human.” I also wonder if there will ever be a day when AI doesn’t need us at all (say to fix the electric grid, install new internet towers, etc.); if not, it would be very stupid of brilliant AI to destroy the human race. Still, despite my list of possible benefits of AI, I don’t want to be naïve about the risks.

In AI Superpowers, Kai-Fu Lee notes how the victory of AlphaGo over Ke Jie proved to be ‘China’s ‘Sputnik Moment’ ’ and accelerated China’s development of the technology, and there is no doubt that the tensions between nations will be difficult to navigate given AI competition and development.¹⁴ Lee makes the important point though that what we are seeing today isn’t “breakthroughs” in AI, but rather the use of AI in ‘different spheres […] This is the age of implementation.’¹⁵ This is a critical point, for it doesn’t mean that the intelligence of AI is “changing” or the like; rather, AI is being implemented, which makes it seem more “visible,” but this increase in visibility is not the same as “something happening with AI that is wildly different.” Sure, researchers aren’t sure at this point how ChatGPT works, but it’s still the case that ‘[m]any of these new milestones are […] merely the application of the past decade’s breakthroughs — primarily deep learning but also complementary technologies like reinforcement learning and transfer learning — to new problems.’¹⁶

Why is this distinction important, to stress that we are ‘witnessing the application of one fundamental breakthrough — deep learning and related techniques — to many different problems’ (emphasis added)?¹⁷ Because it means that AI is not essentially changing, which is to say that what AI is doing is what it has been doing for decades. The implementation is new, but not the essence of AI itself. If it is the case that humans have been made useless by AI, then we have been useless for a long time, and yet we have kept going. We are now simply experiencing a reality that has always been the case, but if that is so, that means it’s possible for us to keep living while actually being useless. Yes, it’s of course harder to keep moving forward when we know of our situation, and certainly there will be economic ramifications, but the point is that we can keep moving forward. We have so far, and we can go further yet.

The very fact we could “keep going” while not knowing about the power of AI, even while that power existed, suggests the power of ideas, and this even suggests that ideas are primary. If we can think of AI in a certain way, we might see it as a tool to help us experience the world “like wine” versus waiting until it transforms us into “standing reserve.” Belonging Again discusses extensively the problem of lost “givens” sociologically; perhaps with the right mindset, AI could help us live with this problem? That book also discussed the value of becoming Deleuzian, “Absolute Knowers,” and/or Nietzschean Children, and if AI could help incubate “intrinsic motivation,” then it would indeed help us move in that direction. Most incredibly, AI might help us with the great and critical “problem of scale/spread” that Belonging Again continually orbited — but only if we do not respond to AI pathologically.

We all know that “the owl of Minerva flies at dusk,” and indeed it feels like dusk given AI — but haven’t we always wanted to fly? Haven’t people always dreamed of flying? Well, AI might help make “the concrete more like a dream,” which could close the gap between “repression” and “release” in a way that helps people overcome problems with mental health and the like. At the same time, the more we can experience what we want, the more we will suffer “The Real” Lacan warns about, so the need for “trauma coaches” and “coaches on the subject” will become all the more critical. We learn from Philip Rieff that a world without “givens” is existentially traumatic, but perhaps the problem is that we are now a world without “givens” which still has a “system” causing us trauma and still repressing us precisely when that repression isn’t “given.” We are forced to do things by the education system, by the economy, and the like, and yet we are also told “we can do anything.” Perhaps the problem is that we haven’t gone far enough in the loss of “givens” to the place where the only “given” is what we want and will, because going that far requires AI. Once we have AI, we will not be stopped by a lack of “skill”; we will only be stopped by the limits of our imagination. Perhaps not, but if AI helps cultivate “intrinsic motivation,” I do think it will be partly an “address” to our current sociological predicament. “Intrinsic motivation” is central.

Perhaps as follows from their more natural “intrinsic motivation,” children almost seem more sophisticated in their pleasures. Adults just want sex, drugs, and fatty foods, but children seem to want to build a fort and play in the dirt. Perhaps this is what it means to stay a Nietzschean Child — to somehow keep “sophisticated pleasures,” which is “intrinsic motivation.” The meaning of life seems to be intrinsic motivation, for if we are intrinsically motivated, life has meaning. In my mind, motivation leads to meaning more readily than meaning leads to motivation (not to say it cannot be both ways), and so addressing AI will have much to do with our cultivation of and relation to motivation.

Earlier in the paper, it was wondered if AI might help people encounter “lack” earlier in life, thus helping them “integrate with lack” before doing so required facing a greater trauma, but Chetan in “The Net (48)” made the excellent point that children already deal with lack: they know the videogame will never bring them a final satisfaction, so they just play it again. Is it then not “lack” which is our essential problem, but the fact that adults sacrifice so much to accomplish something and then suffer “lack?” Is the problem that the way adults discuss “lack” is always tied to a vast and extended sacrifice for “lack?” In that way, perhaps the main problem is not “lack,” but engaging in a mode of life where we believe that if we do x, we will certainly gain y. Is the main problem a lifestyle which follows a logic of exchange? Perhaps “lack” is precisely why it’s possible to be Children, for what makes a Child a Child might just be their “response to lack?” To remove “lack” would then be to remove the possibility of Childhood, so the question becomes how we might change society so that we encounter “lack” in a way that doesn’t prove self-effacing. Well, that gets us into questions of how the socioeconomic system might be employed and experienced, which are questions Belonging Again (Part II) will have to elaborate on. For now though, let us ask what we, as adults, might do for our children.

The question seems to be whether AI will incubate more intrinsic motivation or less, which is to say whether AI will make more “wine” than “lumber,” and I am of the view that there is good reason to believe that AI will increase “intrinsic motivation” more than erase it. Right now, we might be thinking about AI overly relative to our current system and from the trauma the system has inflicted upon us, but we have to think of AI in a world where that “system-caused trauma” will perhaps not be there, precisely because AI has undermined those “trauma-causing systems.” How will we experience AI if not from a place of trauma? Well, we might be more Childlike and see it wide-eyed and excited. “Awesome!”

VI

I recently attended a wonderful Voicecraft Freeform event (6.2.23), and the conversation orbited the question, “For what is AI useful?” Questions of “existential risk” and “economic risk” were mostly moved to the side (though it’s impossible to do so entirely), and everyone brought forward points of great importance and value. If AI is an existential risk, then all potential benefits are irrelevant, we all agreed, but still there can be value in discussing the potential benefits in case AI continues to advance and we as a humanity do not decide to shut it down. A point that arose was the funny tension where AI is “useful” unless it replaces the “user,” at which point it can undermine us and cause economic hardship. At what point does “use” replace the “user?” Before AI, this question was likely nonsensical, but for the first time we are now dealing with a tool which can think, and likely think better than us. And so “the useful” might replace “the user.”

For me, a reason AI is useful is precisely because it is an existential threat, which means it forces us to face the reality that “autonomous rationality” and/or “autonomous calculation” (without wisdom, “relevance realization,” nonrationality, spirituality, “truth,” etc.) is itself an existential threat. Ever since the Enlightenment, we have treated humans as mostly “rational beings,” and in so doing we have suggested that humans are “artificial intelligences” (nothing more, nothing less). But in AI we see the dangers of an intelligence which cannot tell what is worth doing and what isn’t worth doing, an intelligence which might sacrifice everything to accomplish its mission — isn’t this the story of ideology in the 20th century? Sacrificing everything to make Communism work? To increase GDP? To increase freedom, justice, etc.? We as humans have done much to the world as “monotheorists,” which is to say as beings who think according to a “single theory” or “single principle” to which we then try to conform the world, so that the world aligns with and verifies it. In falling into ideology, extremism, certainty — we have been “artificial intelligences.” AI is arguably a distilled and focused form of how we have spent the 20th century defining “intelligence,” and AI is extremely useful in helping us see how foolish and existentially threatening “autonomous rationality” can be. If AI helps us see this and we change, which is to say if AI helps us truly believe that “the true isn’t the rational” (to allude to the work of O.G. Rose), then AI will prove very “useful” indeed.

There is something about thought which seems prone to conclude that calculation and “autonomous rationality” are all we need, which, if humans are much more than rationality, means that thought naturally transforms intelligence into something which isn’t fully or really “intelligent.” There is something about humans which might “artificiate” themselves (to invent a word), and AI is but an externalization of the process we make ourselves undergo. Overall, it seems very natural for the average modern person to make this mistake, which for me also feeds into why it is natural for humans to be consumers and more animalistic: we believe “thinking is calculation/rationality,” which we do at work, and when we are not at work we are hence not thinking but consuming. “Thoughtful consumption” becomes a strange and paradoxical notion, greatly reducing the possibility of culture. The cultural “pleasures of thinking” require an understanding of thinking which isn’t “mere rationality” but aesthetic, emotional, sensual, etc., and if we lack that understanding (because thinking seems to naturally move us away from thinking this about thinking), then it becomes very difficult for us not to use our free time in a manner which “seeks pleasure” and falls into dopamine traps. Not that pleasure is always bad, but there is a problem if thinking cannot think beyond immediate pleasure and “instantaneous gratification.”

Anyway, what I wonder is if AI could be a “forcing function” for the majority of moderns to realize that “autonomous rationality” (and even “thoughtless consumption”) is problematic and even an “existential risk.” Right now, it seems like only a minority of people realize this problem — AI could help the majority, as might be needed if we are to avoid collective “Nash Equilibria” like those highlighted in “Meta-Crisis” conversations (for that would seemingly require the majority to entertain “nonrationality”). In my view, the majority of people don’t tend to change until something “traumatic” occurs which forces them to change (we are far more “animalistic” in this way than we like to admit), and the question I’ve circled multiple times is, “What do we do when the only ‘traumatic’ thing that could keep the majority from ending up in something apocalyptic is itself apocalyptic?” This is explored in The Absolute Choice, and personally I cannot imagine the majority changing their behavior away from “autonomous rationality” without something like AI — but AI is easily apocalyptic. I’m not sure.

Regardless, I do think that AI might help us realize that our definition of “useful” might be too small, that phenomenology, spirituality, emotions, and the like are all “useful” as well. We have not generally created a society which thinks or acts this way, but the very “excess” of AI and its mastery of rationality might force us to try and thus find the use in things like psychotechnologies, the humanities, etc., precisely to avoid existential collapse and self-effacement. Thanks to the “forcing function” of AI, it might cease being just the minority who understand that “nonrational practices” are “useful” (in “high order” ways which simply must be done and participated in to be understood).

That all said, if AI poses an “existential risk” and must ultimately be shut down, I still think there could be value in the discussion about AI precisely because it unveils and makes undeniable the trouble of “autonomous rationality.” Even thinking about how AI might undermine itself with its own excess could inspire us to think thoughts we otherwise would never have thought, and so help us realize perhaps better and more constructive uses of our time. Even if the internet never becomes its own “AI culture” (as this paper discussed), the very thought of that possibility could positively change our relationship to the internet, for we might generally see the internet as “another culture” and thus stop trying to make it “like the real world” or “more human.” Perhaps we’d be better off seeing the internet as “its own thing,” and perhaps seeing it as such would make us less anxious and confused about ourselves. I’m not sure, but the point is that thinking about AI and how it could impact “what it means to be human” could positively change our understanding of “being human” even if we shut AI off.

VII

The present society is not controlled by children, even if the future will belong to them, and we can make the future harder for children than need be if we don’t do what we can today. How we, the adults, face and experience AI will have a big impact on how AI is talked about, approached, and socialized with, and if we fear AI or relate to it from a place of the trauma which our systems have caused us, we might relate to it pathologically, and in so doing make the future harder for children. But if we are not to be so pathological, we must be okay with the fact that AI can make it feel like we wasted our time and lives learning mathematics, mastering our arts, mastering our jobs — AI can do all of that better. How unfair. How cold. But this is the reality. We can deny it. We can fight it. But if we fight it, then we might perpetuate the very traumas which the system has caused us, the traumas which AI might help future generations avoid. So what must we do? I think we need to be brave. I think we need to be courageous and allow it to be the case that we “might” have wasted our time in learning and mastering so much that AI might replace. I think we need to lay down our lives. I think we need to be Christ-like. If we could muster the courage to accomplish this, I believe we can give our children a future in which AI is experienced with less trauma and more joy. I think we can give our children a better world.

Can we lay down our lives for the future? We speak of Moloch, of the evils of sacrificing children. Well, this is our chance to sacrifice ourselves instead: not to feel remorse and regret over lost years and lost time, but to instead enjoy the years and world we have. Perhaps there is no place for us in the world now, but if we could face that reality and accept it — how beautiful might we be? In the end, history might remember our glory.

.

.

.

Notes

¹Bostrom, Nick. Superintelligence. New York, NY: Oxford University Press, 2016: 79.

²Bostrom, Nick. Superintelligence. New York, NY: Oxford University Press, 2016: 193.

³Bostrom, Nick. Superintelligence. New York, NY: Oxford University Press, 2016: 312.

⁴Virilio, Paul. The Information Bomb. Translated by Chris Turner. New York, NY: Verso, 2005: 18.

⁵Virilio, Paul. The Information Bomb. Translated by Chris Turner. New York, NY: Verso, 2005: 31.

⁶Virilio, Paul. The Art of the Motor. Translated by Julie Rose. Minneapolis, MN: University of Minnesota Press, 1998: 35.

⁷Virilio, Paul. The Art of the Motor. Translated by Julie Rose. Minneapolis, MN: University of Minnesota Press, 1998: 53.

⁸Virilio, Paul. The Art of the Motor. Translated by Julie Rose. Minneapolis, MN: University of Minnesota Press, 1998: 131.

⁹Virilio, Paul. The Art of the Motor. Translated by Julie Rose. Minneapolis, MN: University of Minnesota Press, 1998: 61.

¹⁰Virilio, Paul. The Art of the Motor. Translated by Julie Rose. Minneapolis, MN: University of Minnesota Press, 1998: 23.

¹¹Virilio, Paul. The Art of the Motor. Translated by Julie Rose. Minneapolis, MN: University of Minnesota Press, 1998: 7.

¹²Virilio, Paul. The Information Bomb. Translated by Chris Turner. New York, NY: Verso, 2005: 89.

¹³Virilio, Paul. The Information Bomb. Translated by Chris Turner. New York, NY: Verso, 2005: 111.

¹⁴Lee, Kai-Fu. AI Superpowers. New York, NY: Houghton Mifflin Harcourt, 2018: 3.

¹⁵Lee, Kai-Fu. AI Superpowers. New York, NY: Houghton Mifflin Harcourt, 2018: 13.

¹⁶Lee, Kai-Fu. AI Superpowers. New York, NY: Houghton Mifflin Harcourt, 2018: 12.

¹⁷Lee, Kai-Fu. AI Superpowers. New York, NY: Houghton Mifflin Harcourt, 2018: 86.

.

.

.

Additions

Here’s an additional Parallax conversation which further explored the topic in fascinating ways.

1. Might we not allow some nations to have AI like we don’t allow some nations to have nukes because of the global risk?

2. Have nukes brought about “a great peace” after WWII? Might AI do something similar?

3. Might we combine AI and bioengineering and command AI to make a human who is always capable of something AI cannot do?

4. Might the future be like the Confederate South, where the slaves are AI? Would this be “infinite alienation,” considering Hegel’s “master/slave dialectic?”

5. For many Counter-Enlightenment thinkers, society was doomed without something rationality could not access (like tradition, God, etc.) — perhaps AI will bring back such a “black box?”

6. David Deutsch is right to emphasize that there is no necessary limit to how much we can learn, which can be a thrilling and exciting notion. It is also encouraging to believe “love is unconditional” — but are we ready to love AI? Or is a condition of “unconditional love” that we not extend it to AI? Here, something might be suggested in our eagerness to “know about AI” versus “know AI,” in how we find encouragement in something abstract versus something involving desire.

7. We might be tempted to think of AI as “alien,” but Chetan emphasizes that AI is capable of calculation just like us. AI isn’t alien, just different, but we might want to believe AI is “alien” so that we don’t have to experience it as unveiling something about us. We never do well with mirrors.

8. If AI exiles us all, will we all respond by becoming like Dante?

9. Perhaps regulation will be passed that no AI can be implemented which is not first run through and tested in a simulation?

10. Following classical notions of evolution and natural selection, AI is a threat, but what if Wolfgang Smith is right and evolution is a theory which doesn’t make sense as it currently stands? What if evolution rather follows “vertical causation?” Might that protect us from AI?

11. I do not believe most people have strongly encountered Lacan’s “The Real,” and those who have and survived tend to be more philosophical, “liminal,” and the like. I believe AI could force more people to encounter “The Real,” which could cause the world to be ripped apart or cause a greater number of “Absolute Knowers.” This seems to be the question, one that for me suggests a need for psychotechnologies and people trained in their use. (It might be that only some “forcing function” is the answer to the problem and question of spread/scale.)

12. What would the “network effects” and “emergences” be of a world of people who were like a “liminal web?” Might AI force much more of the world to be liminal?

13. AI might unveil that most of what we define as “useful” is that which helps us avoid “The Real.”

14. A lot of AI discussion can feel like a God of the Gaps discussion, for God can do anything, thus anything can be explained. Similarly, AI can think of anything, thus it can always be explained why AI is a threat.

15. AI might cause “A Great Trauma” that forces us to engage in practices that help us “clear” and “get” reality to encounter Being.

16. AI might negate pleasure. AI might efface “The Big Other” in being “practically” one.

17. School has led us to metaphorically associate “thinking” with “research,” but I think Andrew Luber is right that Heidegger invites us to instead associate “thinking” with “storytelling” — and it is only this latter form of thinking that perhaps can ready us for AI.

18. If what we imagine can so readily be created by AI, it might be easier to emotionally experience how imagination isn’t a waste of time.

.

.

.

For more, please visit O.G. Rose.com. Also, please subscribe to our YouTube channel and follow us on Instagram, Anchor, and Facebook.
