ASML Ad: Midjourney+Stable Diffusion+Runway

ASML used Generative AI (GenAI) tools to create a promotional video.

How This Video Was Created:

This video is a representative example of the AI video production process as of 2023. According to details on the official website, the workflow began in Midjourney, where 1,963 natural-language prompts produced 7,852 images. These images were then edited and rendered by a farm of 900 computers, likely with the aid of Stable Diffusion. Finally, Runway handled the video editing and compilation, yielding a finished video of 25,957 frames, with each frame reportedly requiring up to 1,000 MB of storage.
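The workflow above is essentially a three-stage pipeline: prompt-to-image generation, image refinement on a render farm, and frame-to-video compilation. The sketch below illustrates that data flow only; the function names and per-prompt image count are hypothetical placeholders, not the actual APIs of Midjourney, Stable Diffusion, or Runway.

```python
# Hypothetical sketch of the 2023-era three-stage pipeline.
# All names and numbers here are illustrative, not real tool APIs.

def generate_images(prompts, images_per_prompt=4):
    # Stage 1 (Midjourney-style): each text prompt yields candidate images.
    return [f"img_{i}_{j}" for i, _ in enumerate(prompts)
            for j in range(images_per_prompt)]

def refine_images(images):
    # Stage 2 (Stable Diffusion-style, on a render farm): edit/upscale each image.
    return [img + "_refined" for img in images]

def compile_video(frames, fps=24):
    # Stage 3 (Runway-style): assemble refined frames into a video timeline.
    return {"frame_count": len(frames), "duration_s": len(frames) / fps}

prompts = ["a lithography machine etching a wafer",
           "light focused through a lens array"]
video = compile_video(refine_images(generate_images(prompts)))
print(video)  # {'frame_count': 8, 'duration_s': 0.3333333333333333}
```

Each stage only consumes the previous stage's output, which is why the real production could parallelize the middle stage across 900 machines.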

Daniel's comment: This was a way to make creative videos in the early stages of Generative AI development. Given the poor performance of video generation applications up to 2023 and the inconsistency of image generation services, it was the best choice available at the time. Today, direct video generation services such as Sora, Luma AI Dream Machine, and Runway Gen-3 have advanced significantly, and Midjourney now supports consistent image generation. Whether generating video directly from text, or generating images first and then animating them, both approaches now offer better results with simpler steps, so such a complex pipeline is no longer the best option.
