The Best of All Possible Prisons
Are there technologies we should hope aren’t invented?
In the Black Mirror episode “White Christmas,” a technology is developed that enables people to copy a human consciousness onto a kind of “external hard drive.” Once copied, the “digital consciousness” exists independently of the person it was copied from and functions as that person’s servant, slave, or toy. This “digital copy” might manage the person’s house through a remote-control system, for example, turning on the lights when the master wants, starting the dishes after dinner, and so on.
When a “digital copy” (DC for short) is first created, he or she tends to rebel against being a servant. To break the DC’s will, the owner can subject the DC to long stretches of solitary confinement inside the external hard drive, stretches that might last months or years for the DC but pass in seconds in the real world. After six months/seconds, the DC is willing to do anything for his or her master, including serving as a slave, just so long as the DC doesn’t have to suffer another few months in solitary confinement. We might view this process as inhumane and evil, but why? The DCs are just computer code.
With “White Christmas” in mind, I would like to propose a thought experiment.
Imagine we were able to download a DC of anyone onto an external device and later reload it into the person’s real body. The real, physical body would be preserved through freezing until the sentence was served, and inside the external device, the person/consciousness would experience a simulated prison cell. Rather than build expensive real prisons, the world would have found a way to save billions of taxpayer dollars. In the simulated cell, the prisoner would never die, and time would pass far faster inside the cell than outside of it: a thousand years in the simulated cell could elapse in one minute of real time (meaning the real bodies wouldn’t have to be preserved for long, saving even more money). Democracies around the world support the initiative with little resistance: the method of punishment proves far more humane than robbing people of their actual lives for a single mistake.
Citizens guilty of theft are kept in simulated confinement for six months, those guilty of hate speech for a few weeks, and so on. The simulated cells can include gyms and cafeterias; they are often “practically identical” to real prisons (though it’s theoretically possible for criminals to find themselves in Dante’s Inferno or suffering simulated “capital punishment”). Inmates really can’t tell the difference, and in fact many are grateful not to lose actual years of their lives as they pay their dues. Inmates come out of prison genuinely believing they have time to reform their lives, whereas in the past, after serving ten years, it felt too late. Rates of “repeat criminals” fall like a rock, further evidence that the technology is a blessing.
Real food no longer has to be delivered to prison cooks; bathroom pipes never rust away; guards don’t have to stay up through the night watching courtyards. Everyone is thrilled with simulated prisons. The digital code feels like a gift from God, as if there is no problem it can’t solve.
Now, imagine the following: A man burns a child alive. He is caught, convicted, and sentenced. His consciousness will be transferred into an external device and virtual prison. Theoretically speaking, in what constitutes one hour in real time, the murderer could be made to experience a thousand years of solitary confinement, even a million.
How long should the criminal suffer?
“If human life is invaluable, shouldn’t the criminal suffer forever?” — a friend asks after reading this paper, “The Best of All Possible Prisons,” and we find ourselves having to give an answer, an answer which might color how our friend thinks of and talks to others about us. “He burned a child alive.”
We don’t reply. We try to imagine how long a billion years is, but we cannot. It wouldn’t cost anyone anything to put the murderer through a billion years — why not a trillion? Why not a trillion to the power of a trillion? We remember Joyce’s description of a pile of pebbles and a bird and “Rebellion” in The Brothers Karamazov. Our friend looks at us. “Don’t you think human life is invaluable?”
Is there a point where a punishment is so great that no crime, no matter how terrible, would warrant it? Say a trillion years in Dante’s Inferno?
“What about Hitler?” — our friend speaks up (and we worry we’ve taken too long to answer). “Doesn’t Hitler deserve the worst of all possible worlds?” Our friend is suggesting that it would be unjust not to put Hitler through Hell (a point which brings Exclusion and Embrace by Miroslav Volf to mind). We open our mouth to offer an affirmation, but then we start thinking again (and hate ourselves for thinking again). Hitler murdered millions. Each person was infinitely valuable. Six million times infinity is six million infinities, which can be programmed into a simulated prison, and Hitler would suffer it in hours (which might be “graceful,” perhaps too much so). We wouldn’t have to live with the knowledge for the rest of our lives that we were punishing Hitler. That said, if we did and that caused us suffering, then putting Hitler through a “simulated prison” could be an act of nobility and sacrifice, far from demonic — though it’s also possible we would enjoy the knowledge of Hitler’s agony, which might reflect our enjoyment of justice and righteousness. All in all, there seems to be no reason to fear possibly having to know that Hitler suffers forever: we either know nothing, enjoy the knowledge, or suffer nobly for a good cause. We prepare to speak and voice our free choice.
And then the question hits us again: Is there a point where a punishment becomes so great that no possible crime could deserve it? If there is such a theoretical point, doesn’t that mean that human life isn’t invaluable? We smile at our friend to suggest we know how to answer (though our uncertainty suggests otherwise). Is there a limit to human value? Is human life worth about a thousand years of punishment? Or a million? A trillion? If someone can commit murder and not receive a sentence of “forever” in a “simulated prison,” then human life isn’t invaluable, right? Is life finite and limited?
Punishment reflects value, whether we like it or not, for it reflects the “value” that the trespass violated. Punishment is not a precise reflection but an approximation. When a person steals our car, they aren’t sentenced to capital punishment, because cars are not “invaluable”: their value is finite. On the other hand, when a person murders someone, capital punishment might be considered, because the value of a human life is far greater (perhaps infinitely greater) than that of a car. Many people oppose “the death penalty” on principle, but what about “a simulated prison of Hell” for torturing a child? We won’t kill the murderer, so what’s wrong with the “simulated prison”? Wouldn’t it be immoral not to use this technology?
But perhaps since a person can only live for, say, eighty years, a murderer should only be forced to suffer for eighty years? Are we saying human life has the value of eighty years? Is human value tied to lifespan? Why or why not? Who made us God? Who are we to judge a child’s life is only worth a thousand years of solitary confinement?
We keep smiling at our friend.
The absence of this theoretical technology gives us the luxury of assuming life is infinitely valuable without confronting the existential anxieties embedded in that premise.
Once a technology is invented, even if the technology eventually falls out of vogue, the new ways of “seeing” cannot be un-invented.
Are there inventions we shouldn’t invent?
Will we avoid “new ways of seeing” if we know we are avoiding new technologies to avoid “new ways of seeing?” Doesn’t that mean we have already glimpsed something?
Are stories just stories?