It’s rare for legal reports, government consultations, and anime-styled selfies to feel like part of the same story – but over the last few days, they have.
On Tuesday, the U.S. Copyright Office released Part Two of its long-awaited report on the copyrightability of AI-generated works.
Its core message? Human creativity remains the foundation of U.S. copyright law – and AI-generated material, on its own, doesn’t qualify.
The Office was unambiguous. Prompts alone, no matter how detailed or imaginative, are not enough. What matters is authorship, and authorship must involve human originality.
If a person curates, edits, or meaningfully transforms an AI output, that contribution may be protected. But the machine’s output itself? No.
In practice, this means that someone who generates an image using a text prompt likely doesn’t own it in the traditional sense.
The report outlines three narrow scenarios in which copyright might apply: when AI is used assistively, when original human work is perceptibly incorporated, or when a human selects and arranges AI-generated elements in a creative way.
Sounds generous in some ways, but the fact remains that courts have consistently rejected copyright claims over purely machine-made works, and this report affirms that position.
The Copyright Office likens prompts to giving instructions to a photographer: they might influence the result, but they don’t rise to the level of authorship.
But just as that line was being redrawn in Washington, OpenAI was urging lawmakers in the UK to take a different path.
On Wednesday, the company submitted its formal response to the UK government’s AI and copyright consultation.
OpenAI argues for a “broad text and data mining exception” – a legal framework that would allow AI developers to train on publicly available data without first seeking permission from rights holders.
The idea is to create a pro-innovation environment that would attract AI investment and development. In effect: let the machines read everything, unless someone explicitly opts out.

It’s a stance that puts OpenAI firmly at odds with many in the creative sector, where alarm bells have been ringing for months.
Artists, authors, and publishers see the proposed exception as a backdoor license to scrape the web, turning years of human work into fuel for algorithmic engines.
Critics argue that even an opt-out model places the burden on creators, not companies, and risks eroding the already fragile economics of professional content.
Chucked into this copyright melting pot was the release of a new study this week from the AI Disclosures Project, which claims that OpenAI’s newest model, GPT-4o, shows a suspiciously high recognition of paywalled content.
And all of this came on the heels of a much more public – and wildly popular – example of AI’s blurred boundaries: the Studio Ghibli trend.
Over the weekend, OpenAI’s image generator, newly improved in ChatGPT, went viral for its ability to transform selfies into Ghibli scenes – despite the studio’s co-founder publicly stating he hated AI back in 2016.
A career distilled into a prompt. Or is AI creativity truly blooming in the public consciousness?
None of this is happening in isolation. Copyright law, historically slow-moving and text-bound, is being forced to adapt at the speed of the technology it now has to govern.
Governments, regulators, tech companies, and creators are all scrambling to define the rules – or bend them – in their own favour.