Introduction
In this article, I’ll explore the process of converting images into videos using Gen-3 Alpha, a well-known video generation AI similar to Dream Machine.
Runway
Runway is a platform from the AI company of the same name, offering a variety of generative AI tools, one of which is Gen-3 Alpha.
It is also available as an iOS app, but for this article, I’ll be using the browser version.
Account Creation
You can create an account via the Sign up page.
You can register using an email address, Google, or Apple account. The Enterprise plan also supports SSO.
Pricing
Runway offers a free plan, but Gen-3 Alpha is currently only accessible with paid plans. For this test, I subscribed to the Standard plan at $15/month, which provides 625 credits per month (Gen-3 Alpha consumes 10 credits per second of video). The free plan gives access to Gen-3 Alpha Turbo, a lower-cost version.
For more details about the different plans, check out the pricing page. Choosing annual billing gives you a 20% discount.
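To make the credit math above concrete, here is a quick sketch based only on the numbers already mentioned: 625 credits per month on the Standard plan, and 10 credits per second of Gen-3 Alpha video.

```python
# Rough cost math for the Standard plan, using the figures from the article:
# 625 credits per month, Gen-3 Alpha at 10 credits per second of video.
MONTHLY_CREDITS = 625
CREDITS_PER_SECOND = 10

def seconds_of_video(credits: int) -> float:
    """Total seconds of Gen-3 Alpha video the given credits buy."""
    return credits / CREDITS_PER_SECOND

def clips_per_month(clip_seconds: int) -> int:
    """How many full clips of a given length fit in one month's credits."""
    return MONTHLY_CREDITS // (clip_seconds * CREDITS_PER_SECOND)

print(seconds_of_video(MONTHLY_CREDITS))  # 62.5 seconds of video per month
print(clips_per_month(10))                # 6 ten-second clips
print(clips_per_month(5))                 # 12 five-second clips
```

In other words, the Standard plan covers roughly six of the default 10-second generations per month.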
Video Generation 1
I selected Gen-3 Alpha and began generating a video.
Since the required input size is 1280x768, I used the following image, which fits those dimensions.
If your image is a different size, you can crop it directly in the browser.
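If you'd rather prepare the image locally before uploading, a center-crop to 1280x768 is easy to script. This is just one possible approach (a minimal sketch using the Pillow library, which is not mentioned in the article); the built-in browser crop works fine too.

```python
from PIL import Image

# The input size Gen-3 Alpha expects, per the article.
TARGET_W, TARGET_H = 1280, 768

def center_crop_resize(img: Image.Image) -> Image.Image:
    """Scale the image just enough to cover 1280x768, then center-crop."""
    scale = max(TARGET_W / img.width, TARGET_H / img.height)
    resized = img.resize((round(img.width * scale), round(img.height * scale)))
    left = (resized.width - TARGET_W) // 2
    top = (resized.height - TARGET_H) // 2
    return resized.crop((left, top, left + TARGET_W, top + TARGET_H))

# Demo with a synthetic image; replace with Image.open("your_photo.jpg").
demo = Image.new("RGB", (1920, 1080), "gray")
print(center_crop_resize(demo).size)  # (1280, 768)
```

Scaling before cropping keeps as much of the original composition as possible instead of cutting a 1280x768 window out of the full-resolution image.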
I used the following prompt:
A Japanese woman is smiling happily
You can also generate videos without specifying a prompt. For more details on how to use prompts, refer to the Gen-3 Alpha Prompting Guide.
It’s possible to use different images for the first and last frames, but I used the same image for both in this case.
Here’s the generated video, which is 10 seconds by default.
The result looks quite natural. The woman from the original image appears to be smiling genuinely. If someone had shown me this video without telling me it was AI-generated, I probably wouldn't have noticed.
For comparison, I generated a similar video using Dream Machine with the same image and prompt.
Here’s the 5-second video generated by Dream Machine.
Although there’s significant movement, there is noticeable distortion, especially around the face, creating a sense of unease. This wasn’t as evident in the videos I generated in my previous article, so I thought it was worth mentioning as a reference point.
Video Generation 2
For further experimentation, I generated another video using a completely different image.
I used the following prompt for this image:
Japanese man dancing
Here’s the generated video.
This one also turned out very well.
There’s a slight awkwardness in certain areas like the hands, but the video maintains consistency over its 10-second duration.
For comparison, I also generated a video from the same image using Dream Machine. Here’s the result:
I’m not sure if this counts as dancing, but there’s definitely movement, which is a nice touch.
Conclusion
Although my testing was limited and the prompts were simple, I noticed distinct characteristics in the videos generated by both Gen-3 Alpha and Dream Machine.
The field of video generation has made incredible advancements, and I’m excited to see where it goes next.
There have also been some interesting recent developments in the video generation space.
AI-generated videos aren't just the future: They're here, and they're scary. AI companies are rolling out tech that can produce realistic videos from simple text prompts. Adobe is just the latest, and their AI-generated videos are impressive—even if the demos are brief.
Reference: Adobe's AI Video Generator Might Be as Good as OpenAI's | Lifehacker
I’m looking forward to trying it out myself.
Japanese Version of the Article
I tried converting an image into a video with the video generation AI Gen-3 Alpha's Image to Video, and it was so natural it actually scared me a little