Roger Proeis / Founder
16 December 2025
The Death of the Moodboard: How Generative AI Changed the Way We Think
For twenty years, the branding process has followed the same uncomfortable ritual.
The agency writes a strategy deck. The strategy deck leads to a moodboard — a carefully curated collection of images scraped from Pinterest, most of them belonging to other brands, arranged to hint at what your brand might eventually look like. The client is asked to project from these references into a finished identity they cannot yet see. The agency asks them to trust the gap.
This gap is where most brand projects fail. Not because the strategy was wrong, or the creative team was weak, but because the distance between thinking about something and seeing it is too large to bridge with words and borrowed images.
At XUMU Labs, we have closed that gap. And the tool that closed it is generative AI.
High-Fidelity Thinking
The most common misunderstanding about generative AI in creative work is that it is a tool for making things cheaper. It is not. Used well, it is a tool for thinking more clearly.
Before we design a single identity element or write a line of positioning copy, we use generative workflows to make the brand world visible. Not as a moodboard — as a prototype.
We generate hyper-realistic portraits of the specific customer your brand is being built for, using your product, in the world your brand is trying to create. We generate cinematic stills from a brand film that doesn’t exist yet — not to show you what the film will look like, but to align the entire team on lighting, tone, and emotional register before a single dollar is committed to production.
The effect is immediate. Arguments about abstract words like “bold” or “premium” or “considered” disappear when you can show exactly what those words look like on day one. The client stops imagining. They start reacting. And reactions are far more useful than projections.
Stress-Testing Before Building
A brand system needs to work everywhere. On a mobile screen and on a billboard. In a retail environment and in a digital product interface. In English and in Japanese.
Traditionally, testing this required weeks of mockup work — specialised production time spent building hypothetical applications of an identity that might change entirely before anyone signs off on it.
We now stress-test the identity before we build it.
Early design concepts go through image-to-image workflows that show how the brand behaves across hundreds of contexts in hours rather than weeks. We extend the visual language into scenarios the brand hasn’t entered yet. If this brand were a hotel, what does the lobby look like? If it were a physical product, what is the material and texture? If it showed up in Tokyo, what does it feel like next to everything else on that street?
This is not speculation. It is rigorous testing. By the time we hand over a finished brand system, it isn’t a theory about how the identity might behave — it is a system we have already broken and rebuilt.
The Infinite Asset Problem
Once a brand is defined, the bottleneck shifts. Most brands starve not from lack of strategy but from lack of content. They cannot afford to shoot new photography every quarter. They license generic stock that looks like every other company in their category. The brand system exists, but there is nothing to fill it with.
We solve this by building generative pipelines trained on each brand’s specific visual identity. Custom models that understand your exact colour relationships, your lighting aesthetic, your spatial composition principles — and can produce new assets that follow those rules without requiring a shoot.
The result is owned IP, not licensed stock. Imagery that belongs to your brand and looks like it. Sonic identity that is composed for you, not rented from a library. An asset base that grows with the brand rather than capping out at whatever the production budget allowed.
The Craft Principle
There is a version of this work that produces generic sludge. Anyone who has spent time with these tools has seen it — the default outputs, the telltale textures, the compositions that feel generated rather than considered. AI in the hands of someone without taste produces content that looks like AI.
Our position on this is simple. The machine handles the execution. The intention is always human.
AI gets us to eighty percent. Fast, high-volume, explorable eighty percent — the kind that used to take two days and now takes twenty minutes. But the final twenty — the colour grading, the typographic decisions, the retouching, the choice of which output to keep and which to discard — is craft. It requires a human eye, a point of view, and the taste that comes from knowing what you are trying to say.
We audit outputs for hallucinations and unintended bias. We are transparent with clients about where synthetic media is used. We never use AI to deceive — we use it to build worlds that couldn’t exist otherwise, and we say so.
The Actual Shift
The moodboard asked clients to trust the agency across a gap they couldn’t see into. Generative AI closes that gap. It makes strategy visible before it becomes expensive. It lets us explore the edges of an idea — the safe version, the bold version, the version that surprises everyone in the room — in the same conversation, without burning the budget.
We are not using these tools to work less carefully. We are using them to think more clearly. The resolution of the strategy goes up. The quality of the decisions improves. And the final system — the one that gets built and deployed and has to work in the real world — is better for it.