The "Slop" Trap: Why Transparency is the Only Way Forward
I was having a conversation with some good friends last night about the rising heat on AI in the creative industries.
Expedition 33 was my game of 2025.
The backlash is real. Bandcamp has announced a ban on AI-generated music. Meanwhile, Clair Obscur: Expedition 33, which had already won Game of the Year awards, was disqualified from the Indie Game Awards because its developers used AI to generate some placeholder assets during development.
Then you have the platform wars: Steam requires developers to disclose AI usage, while Epic’s Tim Sweeney argues that labelling games "Made with AI" is as pointless as labelling them "Made with Photoshop," because soon, it will be in everything.
The Case for Honest Labelling
My view? I agree with Tim Sweeney that AI will eventually be everywhere. It is a tool, and it is becoming ubiquitous.
However, I side with Steam (Praise Gaben!) on the policy.
I am a huge fan of what this tech can do (I’m using it to help edit this post). To me, using AI to speed up a process or create asset variations is no different than using Adobe Creative Suite instead of pen and paper, or VSCode instead of Notepad.
For me, quality is the destination. I’m agnostic about the journey to get there.
If a developer uses AI to generate a placeholder texture so they can focus on gameplay mechanics, that's efficiency, not cheating.
Punishing creators for using the most efficient tools available feels like a massive backward step. But should they be transparent about how and when they are using it? Absolutely.
Transparency isn't about shaming the tech; it’s about respecting the customer. There is a growing group of "human-only purists" who want to consume art and media that is 100% human-made. They should have the right to make that choice. By forcing publishers to be open, we allow the market to decide.
If your output is high quality, the "AI" label shouldn't scare you. If you are using AI to make something better or faster - like using a spellchecker or a debugger - honesty is the best policy.
The Real Danger: The Skills Gap
While the labelling debate is important, the bigger danger is how we use these tools internally.
AI creates "slop" (quantity over quality) when there is no skilled human in the loop. The person behind the wheel needs to be expert enough to judge whether the output is good, or skilled enough to refine it until it is ready for delivery.
This leads to a massive risk: The Skills Gap.
If we let AI do all the heavy lifting for junior and trainee roles, we are removing the training ground where skills are forged. If juniors never learn the fundamentals because ChatGPT does the work for them, who will have the expertise to critique and refine the AI's output in five years?
We should be transparent about using AI and allow people to choose how they engage with AI-generated media, but we must be careful not to let it replace the expertise required to use it well.