The design trends forged by AI

We’re at the beginning of a new era of content creation thanks to AI. With every generation, new technology gives birth to new styles and visuals that become synonymous with that time.

We’re exploring trends that are emerging from the next-gen AI content creators and how they’ll represent a moment in time for the history books.

As AI technology is ever-evolving, so are the visuals. We’re not just talking about the fidelity of the imagery (we’re currently on Midjourney 5.2, and the system still struggles with hands, with digits appearing and disappearing to create trippy snapshots…)

But we’re also talking about the way the image is made – the processing. It’s this stable-diffusion mist and morphing that we think will become symbolic of this era, in the same way that VHS tape noise and glitches capture the sense of the ’80s and ’90s, or Super 8 (the 8mm film camcorder) evokes the ’60s and ’70s.

The way technology processes imagery is often the lens through which we revisit memories – sun-faded disposable photos may be how we recall a childhood.

So as new technologies allow us to experience, create and remember new moments, we want to look at the aesthetics that will forever be a calling card to 2023 in our minds. 

Midjourney 5.2 – Image Generation / Processing

The new design trends being forged today may look innovative, but within 5-10 years they’ll feel like a nostalgic look back to a time of innocence and naivety around AI.


01. Generative Latency

In the above set of images, you can see the process Midjourney uses when generating an image. It appears almost painterly, as if the artist is adding more and more detail to the frame – shapes begin to form, then morph and shift until they become fixed in position. That morphing – or latency, to describe the time before the image is fully realised – is a trend we see appearing throughout entertainment and media.
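Midjourney hasn’t published its internals, but the mist-then-morph look comes from diffusion-style generation, which starts from pure noise and refines it step by step – every intermediate step is a plausible-but-unfinished frame. A toy sketch of that idea (the “denoiser” here just nudges pixels toward a hidden target image; a real model uses a trained network):

```python
import numpy as np

# Toy sketch of why diffusion previews look like an image emerging from mist.
# A real diffusion model predicts noise with a trained network; here the
# "denoiser" simply nudges each pixel toward a hidden target image, so every
# intermediate step is a half-formed frame, like the previews shown mid-render.

rng = np.random.default_rng(0)
target = rng.random((8, 8))          # stand-in for the finished image
image = rng.standard_normal((8, 8))  # step 0: pure noise

previews = []
for step in range(50):
    image = image + 0.1 * (target - image)  # fake "denoising" update
    previews.append(image.copy())

# Error shrinks every step: early previews are mist, late ones are the image.
errors = [np.abs(p - target).mean() for p in previews]
print(errors[0], errors[-1])
```

Each preview in the sequence is a valid picture in its own right – which is exactly why directors can linger on the in-between frames as an aesthetic.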

Sometimes it’s a natural side-effect of using generative imagery; most of the time it’s an effect that directors, designers and animators are striving to recreate.

You can see it in full force in the latest MCU entry on Disney+, Secret Invasion, as Marvel Studios proudly announced they used AI to help create the title sequence. (Though this work definitely shows how we’re rushing into using or emulating AI – I’d wager that, with the current writers’ and actors’ strikes in the US fighting for better pay and against AI removing key artistic roles, Marvel won’t be shouting about its use of AI any time soon.)

The style reminds us of early hand-drawn animation or stop motion, where the frame rate isn’t high enough and your eye can see the jumps between frames. It captures a moment in time when generative imagery isn’t perfect yet, but our eye doesn’t mind that imperfection because it feels like progress. These two examples, both by Builders Club studio, use the technique to great effect.

02. Datamosh walked before NeRF could run

Datamoshing emerged in the 2000s, drawing inspiration from glitches observed in early digital video codecs like DivX. Initially, artists experimented with introducing deliberate imperfections into JPEG files, which eventually led to techniques for manipulating glitches in digital video. Pioneering digital artists then skillfully exploited compression artefacts, intentionally implanting flaws in video files to produce captivating, expressive swirls of combined visuals.
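The core trick is that compressed video stores a keyframe (I-frame) plus difference frames (P-frames); moshing deletes or swaps the keyframe so the differences get applied to the wrong picture and two shots smear together. Real moshing edits codec bitstreams with tools like avidemux or ffmpeg; this toy numpy version just mimics the idea at the pixel level, with random arrays standing in for frames:

```python
import numpy as np

# Toy datamosh: decode one clip's difference frames on top of another
# clip's final frame, instead of the keyframe they were encoded against.

rng = np.random.default_rng(1)
clip_a = [rng.random((4, 4)) for _ in range(5)]  # stand-in frames, clip A
clip_b = [rng.random((4, 4)) for _ in range(5)]  # stand-in frames, clip B

# Encode clip B as a keyframe plus per-frame deltas (what P-frames carry).
deltas_b = [clip_b[i] - clip_b[i - 1] for i in range(1, len(clip_b))]

# "Mosh": apply clip B's deltas to clip A's last frame. Each decoded frame
# inherits clip A's content, smeared by clip B's motion.
frame = clip_a[-1]
moshed = []
for d in deltas_b:
    frame = frame + d
    moshed.append(frame)

# Decoding from the correct keyframe would recover clip B exactly; the
# moshed frames instead differ from clip B by clip A's leftover content.
print(np.allclose(moshed[-1] - clip_b[-1], clip_a[-1] - clip_b[0]))  # True
```

In a real codec the deltas also carry motion vectors, which is why moshed footage appears to drag one image around with the movement of another rather than simply cross-fading.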

And now, almost two decades later, we’re seeing the same ownership of ‘digital imperfections’ in the morphing, ‘latent’ effect that appears when NeRF is used in video content. So it’s fair to say the idea of essentially ‘showing your working’ with AI in the visual space is a trend that will continue to grow.

The TLDR on NeRF (Neural Radiance Field) – it’s a machine-learning technique that trains a neural network on ordinary 2D photos or video frames of a scene, then renders that scene from entirely new viewpoints, creating a very decent-looking depiction of your scene in 3D. (It’s often confused with the LIDAR sensor in some smartphones, which works like radar but detects objects with light rather than radio waves – NeRF needs no special hardware.) It still has imperfections, and these are being used as visual language, as you can see in ZAYN’s MV.
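Under the hood, a NeRF is a learned function mapping a 3D point to a colour and a density, and images are made by volume-rendering that function along camera rays. A minimal sketch of the rendering step, with a hand-written stand-in (a fuzzy red sphere) replacing the learned network:

```python
import numpy as np

def radiance_field(points):
    """Toy stand-in for NeRF's learned network: map 3D points to
    (RGB colour, density). Density is high inside a sphere of radius 1."""
    dist = np.linalg.norm(points, axis=-1)
    density = np.where(dist < 1.0, 5.0, 0.0)             # opaque-ish sphere
    colour = np.tile([0.8, 0.2, 0.2], (len(points), 1))  # constant red
    return colour, density

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=64):
    """Volume-rendering quadrature used by NeRF-style methods:
    pixel = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i."""
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction
    colour, density = radiance_field(points)
    delta = np.diff(t, append=t[-1] + (t[1] - t[0]))  # sample spacing
    alpha = 1.0 - np.exp(-density * delta)            # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha                           # how much each sample shows
    return (weights[:, None] * colour).sum(axis=0)    # final pixel colour

# One camera ray pointing straight at the sphere.
pixel = render_ray(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
print(pixel)  # a mostly-red pixel, since the ray passes through the sphere
```

The characteristic NeRF ‘mist’ comes from exactly this soft accumulation: where the learned density is uncertain, the weights spread across many samples and the scene renders as a translucent smear rather than a hard surface.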

03. Infinite Visuals

CapCut, TikTok’s sibling app under parent company ByteDance, and the following flood of visual uniqueness hitting the platform through its templates, showcase the creator community’s desire for endless creative options.

Whether it’s a rapidly changing collaged effect set to the sounds of a Radiohead x Kendrick Lamar mashup, or videos with hundreds of frames replaced by generative visuals – this trend of ‘Infinite Visuals’ is being grasped firmly by content creators and everyday social media users alike.

Generative AI no longer requires content creators to have a camera, or even to be anywhere but their bedroom. The latest trends we’re seeing require just some pre-existing content, and then a blank canvas can open up the entire world.

While the visual language of these endless options is exciting and hugely stimulating in video content, could it (like any trend) come full circle and land us in a space of ‘anti-generative’ trends?


Anti-trends – it’s only a matter of time

The infinitely clean and stylised nature of a lot of the content being produced could also drive people towards an ‘anti-generative’ look. Now what is anti-generative? Well, it’s hard to tell, considering we’re moving towards a world where AI can generate almost any style. But the closest example to be seen in the content space is the “corecore” TikTok visual aesthetic circa 2022.

Corecore is essentially an anti-trend that can be loosely defined as a collage of similar and disparate visual and audio clips meant to evoke some form of emotion. It’s full of intentionally jarring juxtapositions, deliberately nonsensical, and a representation of the technological disarray that Gen Z and Gen Alpha find relatable in this day and age.

As featured in TIME’s piece on the trend, a popular example of corecore is a TikTok video from @masonoelle. They explain it as “a man-on-the-street style interview of a kid saying he wants to be a doctor. When asked how much he wants to make, he responds, ‘I’m gonna make people feel OK.’ The video quickly cuts to sped-up videos of people walking in a city, a clip of Ryan Gosling in Blade Runner 2049 screaming. Then it cuts to a row of people in a casino playing slot machines and a man talking about chicken ‘living in the metaverse.’”

While the content itself is intended purely to evoke emotion without making sense, it acts as a sort of ‘anti-content’ trend that rejects what algorithms and audiences are being told is ‘best practice’ by leading creators and platforms.

And while generative AI offers this ‘endless choice’ visual world - could we reach a point of choice paralysis?

A summer of innocence 

In short, these trends we see emerging will become symbolic of 2023 (and MAYBE 2024), not a decade like previous technologies – because AI is moving and upgrading SO quickly. 

They’ll become a callback to a time when we were playing with and discovering new AI toys. While big tech companies and scientists fight over the speed and consequences of AI’s advancement, designers, artists and content creators can blissfully explore these new possibilities.