Recently, a group of patent attorneys—along with the self-proclaimed “patent holder for all neural systems that contemplate, invent, and discover via such confabulations”—has filed a set of patent applications at the U.S. Patent and Trademark Office, the European Patent Office, and the UK Intellectual Property Office. But—reminiscent of the infamous “monkey selfie” copyright case, and unlike any other patent application filed with those offices—these applications list an artificial intelligence (AI) as the inventor while claiming that the AI’s owner should receive the patent.
Setting aside any arguments about whether the law currently recognizes an AI as an inventor—though at least in the U.S., the requirement that the inventor be human seems clear based on the monkey selfie decision, the patent statute’s use of the term “person,” and the requirement that an inventor sign an oath or declaration—the real question is whether permitting AI owners to own the output of the machine would be a wise policy choice. Would permitting AI output to be patented by the AI’s owner promote progress?
First and foremost, patent law requires that any invention differ from the prior art, both in the sense that the prior art does not contain an identical invention and in the sense that the invention would not have been obvious to a person of ordinary skill in the art. If an AI—which can be mass-produced, distributed, and operated by anyone—can take a set of information as input and create something from it, is that new item anything more than what would have been obvious to a person of ordinary skill in the art?
If anyone can use this AI, how is operating it and producing output anything more than ordinary skill in the art? By making invention a completely mechanical process, the level of ordinary skill in the art effectively rises to whatever can be mechanically achieved by the owner of an AI idea generator. There is an inherent contradiction in arguing that the output of a mechanistic tool is inventive in the sense required to obtain a patent.
But even if we assume that the developed idea is non-obvious, it’s equally non-obvious that assigning the invention to the AI’s owner would meaningfully promote innovation. The owner hasn’t performed any intellectual creative act and would simply receive a windfall for the output of something they bought. And the AI itself doesn’t need any incentive to create the new idea—it would create the idea whether or not a patent was available—so there’s no reason to think that providing a patent on the machine’s output would promote progress. At most, such a patent might incentivize people to design better idea-creating AIs—but an improved AI would already be patentable in its own right and doesn’t require additional incentive.
At the end of the day, the question really is “why give anyone exclusive rights to work generated by non-human entities?” That’s an extension of a question Prof. Pamela Samuelson asked—and answered—over 30 years ago, concluding that perhaps no one should own the output of a computer. There’s no reason to think that answer has changed—especially since, in the past 30 years, we’ve seen an explosion in AI without any need to assign ownership of its output to the operators of the AI.
- The argument that individuals should own the inventive output of their AIs also bears an uncomfortable resemblance to pre-Abolition arguments made by slaveowners, who argued that they should be able to obtain patents on inventions created by the people they had enslaved, as described in Brian Frye’s article “Invention of a Slave.”