Is it time for the creator economy to smash that like button and subscribe to AI?

The second coming of content: who will be the Casey Neistat of AI?

“Whhhhhhhhatsup guys and welcome back to my channel, where today - I’m going to transform myself into a claymation salamander and go scuba diving in the piping hot lava of an active Indonesian volcanoooooo”

No, this isn’t The Sidemen’s latest challenge video. But with the recent developments in the world of AI video creation - it could be.

The last 6 months have given us parabolic adoption and adaptation of various AI programs from the likes of OpenAI (ChatGPT, DALL-E 2) and Midjourney - both of which offer great creative tools for text- and image-based creation (shameless plug: you can read all about it in my previous article). But coming in hot, like the aforementioned lava, is something that has the potential to completely change the game for YouTube creators and beyond.

And that is Gen-1 by Runway AI. They’re on a mission to “design models which provide users abstractions to create and allow people to tell stories in new ways”. In other words, video creation is about to lose almost all of its pre-existing barriers to entry.

Much like a prompt in Midjourney or DALL-E, Gen-1 allows users to use words and images to generate new videos out of existing ones. In their own words, it’s “No lights. No cameras. All action.”

And with seemingly every kid that’s ever laid eyes on a screen dreaming of becoming a world-famous YouTuber these days - the combination of that ambition and these new tools feels like something that’s going to cause an eruption of creativity (that was the last volcano reference).

So what does it REALLY do?

Although it’s still in a testing phase, Gen-1 promises to allow users to “realistically and consistently synthesise new videos by applying the composition and style of an image or text prompt to the structure of your source video.” Put simply, it promises to allow users to generate video content with a few words typed into a box, based on existing footage.

Its tools range from ‘Stylisation’ - which allows users to apply a specific visual style to every single frame of a video (with either prompts or image references) - to ‘Storyboard’ - which lets users film any object (like a row of books, for example) and get AI to turn it into something completely different (like skyscrapers) while maintaining its form.

And while early testing shows promise, the software isn’t going to revolutionise video in the next few months. What it does showcase, however, is how storytelling and world-building are about to change forever.

A whole new world(s)

Will Storr tells us “The world we experience as ‘out there’ is actually a reconstruction of reality that is built inside your own head. It’s an act of world-building by the storytelling brain.”

This is how it works - you walk into a room, your brain predicts what the scene should look and sound and feel like, then it generates a hallucination based on these predictions. And it’s this hallucination that you experience as the world around you. Congrats - you just built a world.

And while the brain + a camera is a powerful storyteller, the brain + an AI “brain” will build us even more interesting worlds. There will always be a place for putting people in front of the camera and telling a simple story with moving images - we’ve seen the likes of Casey Neistat inspire an entire generation of people to carry a camera and document their days in engaging, and often cinematic, ways. But now that the visual craft of filmmaking can be combined with any visual style the mind can imagine, the places people can document themselves ‘visiting’ or the environments they can ‘live in’ are somewhat endless.

It’s those who start to think about how to ‘hallucinate’ in new ways within their content who will be best placed to become the AI storytelling innovators. Travel vlogs are no longer limited to this planet, or even this universe. Challenge videos can be as extreme as your brain can think up.

Will realness continue to win?

While the success of top creators is often built on the human-to-human connection in their content, the AI movement raises the question of whether this will still be an important part of creator storytelling in the coming years.

We get joy out of seeing Mr Beast rebuild Willy Wonka’s chocolate factory or the set of Squid Game, because that’s a big, magical, and physical thing to pull off. But will people still want to seek out this ‘realness’ in the content they consume if we’re no longer limited to reality? Will people be looking for those that are trying to screw reality up and throw it into a black hole?

And who are the people who will begin to utilise these tools in the best ways? Existing creators? Or an infant who doesn’t yet have the brain development to mutter the word ‘YouTube’?

We’re early. But AI is moving fast.

The best stories begin with a moment of unexpected change, as change is endlessly fascinating to the human brain. And Gen-1 by Runway AI is the beginning of a monumental change in content creation. New styles, new ideas, new faces, and characters are now seconds away from being created and turned into overnight internet sensations.

Creators can start thinking about building worlds that don’t exist, while stopping short of telling viewers everything about them, as we all have a natural inclination to fill in the gaps. And it’s this big “WHAT THE HELL IS GONNA HAPPEN” gap that has me keeping one eye on where the creator economy is headed.