Why so much ‘placeholder’ AI art keeps sneaking into games
Recent disclosures from studios including Owlcat and Pearl Abyss have reignited the debate over generative AI and the role of placeholder assets in development workflows.

When a developer says an AI asset shipped by accident, players react fast and loud. In the last few months that script repeated itself, as Owlcat acknowledged using generative models during development of The Expanse: Osiris Reborn, and Pearl Abyss apologized after AI-created imagery appeared in Crimson Desert’s release. How does something that’s supposedly temporary make it into a finished game?
Part of the answer lives in ordinary production practices. Placeholder assets exist to be obvious. They are crude, silly, or flagged with giant text so they stand out during testing and reviews. Historically that meant stick figures, lorem ipsum, or intentionally mismatched images that force a hand-drawn replacement before final art goes in.
That safety net breaks when teams use generative tools for concept or temporary content and treat the results as “good enough” for later builds. If an AI-generated image blends with a project’s visual direction and review processes miss it, the asset can slip past sign-off and land in a public build. The fallout is immediate because AI output attracts attention and anger in ways a silly stick man never did.
Developers have different reasons for turning to generative tools early on. Speed, cheap iteration, or the desire to explore visual options quickly are common. But concept work is also where artists discover ideas by doing the difficult part: executing and refining a design. Relying on a model trained on existing art can shortcut that creative process and narrow outcomes instead of expanding them.
There are also external costs that extend beyond aesthetics. Generative models exact a real environmental toll. Research from MIT laid out energy and emissions concerns tied to training and running these systems, and other reporting has flagged health and community impacts near data-center construction and operation. Those consequences feed the broader worry about normalizing heavy AI use across creative industries.
In some instances, studios have tried to be transparent. Owlcat’s disclosure for The Expanse came with an admission of generative AI use during development. That candor contrasts with cases where companies only disclosed AI involvement after players or analysts noticed anomalies. Mixed AI-and-handmade production pipelines have come up before, as previously mentioned in coverage of Arc Raiders’ AI disclosure.
When a visible placeholder ships, a stick-figure enemy becomes a meme and a shared joke between players and developers. But when the placeholder reads like off-the-shelf AI artwork, responses land much harsher. Developers then must weigh whether the savings in iteration time are worth the damage to trust and whether internal review systems are robust enough to catch these items before release.
Fixes are straightforward in principle. Make placeholders unmistakable. Tighten review and build checks. Treat AI-generated drafts like any other external input: label them, isolate them, and require explicit sign-off before anything reaches a public build. Those steps reduce accidental shipping without banning tools outright.
There’s no universal policy yet across studios. Some publishers have adopted strict no-AI rules for art, while others allow machine-assisted workflows under controls. The debate will continue as the technology and its uses evolve, but the simplest lesson from recent incidents is old-fashioned: if you want something not to ship, make sure it looks wrong on purpose.