I've been waiting for Midjourney's video feature for more than a year now... and I'm still waiting. Perhaps it will arrive this year; perhaps not. Who knows.
I wanted to create an AI video from the images I made in Midjourney, so I tried several image-to-video models, including Runway, Pika Labs, and Luma AI, but I wasn't satisfied with the results.
Back then, AI video generation technology was in its infancy, and the videos generated mostly looked terrible; there were exceptions, of course, but in general, they were bad.
Then Runway released its Gen-3 Alpha Turbo model, which is fast and has stunning lifelike quality.
See it for yourself (please turn on the sound).
This perfectly sums up my reaction.
It feels as if my character has come to life. I've given her a 10-second life to express herself.
I look at the images I've previously created. There are lots of them. Can I also "give life" to them? Perhaps they will act in ways beyond my imagination. How exciting.
I'd like to share a few important takeaways from my experience using Midjourney images to create videos in Runway. Perhaps they'll make you consider trying to create some videos with Runway yourself.
(1) The video creation process isn't as complicated as you might think.
Unlike text-to-video, which requires a lengthy description of what is happening in the video, if you use Midjourney images as the input (hence, image-to-video), you do not need to write complicated prompts to move the subject.
Some of the videos I successfully created used only a few words in the prompt, and the results were fantastic.
(2) The failure rate of getting what you want in a video is higher than that of creating an image.
I had to try several prompt iterations to get the subject to do roughly what I wanted. In some cases, I was fortunate enough to obtain the desired video clip on the first try.
Still, the success rate is much lower than with image creation in Midjourney. Maybe that's partly because I haven't created enough videos in Runway yet to know all the tricks.
(3) There is no cheaper Relax mode in Runway.
Access to unlimited generation (similar to Relax mode in Midjourney) costs $76 on Runway's Unlimited plan, compared with $30 for Midjourney's Standard plan.
(4) Gen-3 Alpha Turbo has a maximum queue of four jobs in its relax-style mode, unlike Midjourney's queue of ten.
Videos also take longer to generate than images.
(5) The Runway Discord community is smaller and less active than Midjourney's.
(6) You can generate a 5- or 10-second video clip each time.
If the generated clip looks great, you can extend it with several rounds of generation.
It's currently not possible to generate an entire two-hour video in one go using AI. Perhaps that will be possible in the near future?
(7) Aside from creating videos, Runway enables you to lip-sync the video with a sound clip, trim the clip to remove problematic sections, reverse the video so it plays backward, generate speech, and more!
(8) The learning curve is steep if you want to edit the video, add special transition effects, add a sound clip, do a voice-over, and add sound effects.
There are so many things to learn. That also means it is expensive because it requires subscriptions to multiple systems/models.
But it's learnable and not too complicated; just take one step at a time and you'll be fine.
See here for a list of the tools/AI models I used, and try them out because the majority of them offer free trials or even free credits.
(9) Upscaling AI-generated video to 4K at high frame rates takes time. It can take hours, even if the generated clip is only 1-2 minutes long.
Fortunately, if the video is only for entertainment purposes and to be shared on social media with your friends, there is no need to upscale it.
The video clips in this article were not upscaled.
(10) If you know how to use Midjourney, you're already halfway (mid-journey) to generating high-quality AI video because you already have great images to begin with. Good input produces good output.
(11) Using Midjourney images to create AI videos led me to think about ways to optimize the images before feeding them into the video-generation machine.
That involves tweaking the Midjourney prompt to generate images that complement one another and make the video storytelling more engaging and cohesive.
(12) It's satisfying and fun.
You can create videos from any image, not just those made with Midjourney. Include photos of your family, friends, and colleagues, as well as photos of the painting you completed last summer.
Of course, moderation is in place to prevent users from abusing it to create porn.
If video creation is not currently in your plans, don't worry.
Take your time, see what others are doing, and learn a few things from them.
Understanding the process of creating an AI video will help you shorten your learning curve when Midjourney finally launches its video feature.
I hope you liked this article!
Please subscribe, like, share, and comment so that more people can discover the Geeky Curiosity newsletter.
Geeky Curiosity is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.