Child pornography—today usually called “child sexual abuse material,” or CSAM, in recognition of its uniquely violent character—is one of the few things today still subject to near-universal revulsion and taboo. But AI-generated CSAM poses a novel, and important, challenge to our law and culture. As a result of a pivotal Supreme Court decision two decades ago, it is possible, and indeed likely, that the government cannot prohibit the possession or distribution of at least some of it—because, according to the court, it is First Amendment-protected speech.
[...]
Some provocateurs and libertarians, of course, ask: Why is this an issue? If no children are actually harmed in the making of purely synthetic child pornography, then what is wrong with its production or consumption? We may find it unseemly, but is that a valid basis for regulation? Some take this view even further, arguing that synthetic CSAM would act as a substitute for the real stuff—making it almost morally mandatory to make it widely available.
There are, of course, harm-based rebuttals to this view. Virtual and real CSAM can act as complements as well as substitutes, in much the same way that other addictive products do: people escalate from oxycodone to fentanyl, for example. Similarly, consumers of CSAM often escalate to actual abuse. Moreover, some societies have treated pederastic conduct as acceptable, and it is primarily through a dense web of legal and social proscriptions that we stigmatize it. Widespread availability of virtual CSAM would almost certainly undermine that stigma.
Such arguments may be true, but they are also inadequate. The two views at play are essentially consequentialist, and turn therefore on which consequences would occur in the event that virtual CSAM were widely available. These arguments are indeterminate, at least unless and until we actually see the flood of virtual CSAM—at which point it will be too late.
Yet the idea of widely available AI-generated child pornography is horrifying not primarily for its consequences, but because the production, consumption, and mere existence of such content is wrong in and of itself. Our revulsion speaks to this intrinsic wrongness, which goes beyond the wrong of harm.
[...]
Like cannibalism, or incest, or zoophilia, or “just” raping and murdering, the production of CSAM is wrong not merely because it causes harm, but because that harm proceeds from the intrinsic wrongness of the act. CSAM makes a mockery of a fundamental reality: that children are not adults, and that that categorical distinction is produced in large part by the fact that children are not physically or psychologically capable of engaging in sex (and procreation, sex’s natural consequence). “Having sex with a child” is a contradiction in terms, in much the same way that treating a person as food, or treating a family member as a lover, or treating an animal as a sex object, is a contradiction in terms. It is from this contradiction that the violence of the act springs, and it is this contradiction that provokes revulsion even when those acts are presented as somehow less than monstrous.
[...]
But what the rise of virtual CSAM demonstrates is that refusing to make a judgment is itself also a judgment. And refusing to name actual evil is a poor strategy for protecting our society from that evil.
