We’re at a point where AI is writing blog posts, designing logos, making music, and yeah… even generating entire marketing campaigns. Cool? Very. Confusing legally? Also yes.
If you’re using AI tools in your workflow (or building them), you’ve probably wondered—is this even legal? Can you actually own something created by AI? What if the AI copied someone else’s work during training? What if someone uses your stuff to train their model?
All fair questions. And the answers? Well, they’re slowly being shaped by courts, copyright offices, and lawsuits popping up left and right. Let’s talk about some copyright rulings on AI.
Let’s get this one out of the way—it’s been making headlines since 2023. The U.S. Copyright Office made it clear that anything created solely by AI, without a real human’s creative input, can’t be copyrighted. Period.
One case that really brought this into the spotlight? A comic book called Zarya of the Dawn. The images were made using Midjourney (an AI image generator), while the human author wrote the text and arranged the images into a story. The Copyright Office granted protection for the text and that arrangement, but not for the images themselves. Because a machine made those.
This ruling is a big deal for creators and companies using AI. If you feed prompts into a model and it spits out an image or a poem or some code… you don’t automatically get copyright protection. It doesn’t matter if you typed the prompts yourself. Unless there’s clear, original human creativity involved, the result might live in a kind of copyright no man’s land.
Bottom line? AI can be a tool, but it can’t legally be the author. That means companies may need to rethink how they use AI in their workflows—and how they claim ownership.
This one’s still playing out in both the U.S. and the UK, but it’s already shaking things up.
Getty Images—yes, that Getty—filed lawsuits against Stability AI, the folks behind Stable Diffusion. They claim the company scraped millions of their copyrighted photos to train their model… without permission, without payment, and without proper licensing.
Getty says the AI doesn’t just learn from the images—it uses them in a way that creates “derivative works.” As in, work that’s based on theirs but changed. Stability AI, on the other hand, is trying to argue that the training process is fair use, not copying.
Why does this matter? Because it’s directly tied to how AI models are trained. If a court decides that training AI on copyrighted images or content is not fair use… that opens the door to all kinds of licensing demands. Think major companies having to pay big bucks just to use datasets, or being blocked from using certain content altogether.
The New York Times sued OpenAI and Microsoft (aka the people behind ChatGPT and Bing Chat) in December 2023. Their main complaint? That OpenAI trained its models on Times content, and that ChatGPT could reproduce some of it almost word for word.
This is one of the first big lawsuits from a major news outlet. And it could become a template for other media companies looking to protect their work. The Times is basically saying: “You used our copyrighted stuff to build your tool, and now that tool is competing with us.”
The case is still unfolding, but the outcome could affect the whole LLM (large language model) space. If courts side with the Times, it could mean stricter rules for AI training… more deals between media outlets and tech companies… and maybe even new AI copyright laws down the road.
(And yes—some other publishers are already hinting they might sue next.)
Okay, hear us out. This one doesn’t involve AI, but it still matters.
Back in 2011, a monkey took a selfie with a photographer’s camera. The photo went viral. Then came the debate: Who owns the copyright? The photographer? The monkey?
The courts decided: animals can’t hold copyrights. Only humans can.
You might be thinking, “Cool story, but what’s this got to do with AI?” Well… the same logic applies. AI isn’t a person. It can’t be the author. It can’t own anything it creates. That means anything generated solely by AI doesn’t automatically get copyright protection—unless a human adds substantial input.
It’s a weird, slightly hilarious case that’s now being referenced in modern AI debates. Who knew a monkey selfie would help shape AI law?
This one’s a classic, but it still comes up in AI copyright discussions.
Google scanned and digitized millions of books for its Google Books project. Authors sued, arguing it was copyright infringement. Google argued the project was "transformative": only short snippets were shown, not full books, and the whole thing helped users search content.
The court agreed with Google and ruled the project fair use.
Why’s this relevant now? Because AI companies often make the same argument when it comes to training data. They say their models aren’t copying—they’re transforming. They’re creating something new from the input.
But here’s the thing… fair use is complicated. What worked for Google Books may not work for every AI model, especially if the outputs are too close to the original. So while this ruling supports the “transformative use” idea, it’s not a blanket pass.
It’s a “maybe” not a “definitely.”
This one comes from a bunch of artists who are… not happy with AI image generators. They sued Midjourney, Stability AI, and DeviantArt, claiming the tools trained on their copyrighted works and could now copy their art styles.
Their concern isn’t just about exact copies. It’s about style theft. As in, someone could type in “draw this like Sarah Andersen” (one of the artists in the case) and the AI might produce something close enough to her look.
If the court agrees that this kind of mimicry crosses a legal line, it could impact how AI art tools function. Companies might have to get permission from artists (or avoid using their styles altogether), which would definitely change the landscape.
So yeah… this case is kind of a big deal for creative professionals and for the companies making AI art tools.
Let’s zoom out of the U.S. for a sec.
The EU’s AI Act (along with the earlier Copyright Directive) is leading the way in creating guardrails for how AI should be developed and used. One thing it pushes hard? Transparency. As in, companies must disclose when content is AI-generated, and provide info on the data used to train models.
This matters for copyright because it adds a layer of accountability. If someone says, “Hey, your AI just used my song in its training set,” a company can’t just shrug anymore.
The EU Copyright Directive also puts more pressure on platforms to compensate creators, especially when their work is used for things like training, remixing, or reposting.
Even if you’re not in Europe, these rules could have global effects. Big tech players don’t want to create different models for different regions. So if they follow EU rules? That standard might just become… the standard.
AI is amazing. It’s powerful, useful, and straight-up exciting to work with. But legally? We’re still figuring things out. And these rulings (and pending lawsuits) are shaping the future, especially for folks building or using AI tools in any serious way.
Here’s the thing: copyright law wasn’t designed with AI in mind. That’s why we’re seeing this legal scramble now. Courts are adapting. Laws are changing. And companies? They’ll need to keep up—or risk getting hit with lawsuits, penalties, or worse… being blocked from using the very data that made their tools smart in the first place.
So if you’re in the AI space (or even just testing the waters), keep an eye on these rulings. They’re not just legal footnotes—they’re signposts pointing toward what’s coming next.