
Midjourney Video V1 Just Dropped: Turn Any Photo into a 21-Second AI Movie—See the Jaw-Dropping Demos and Secret Settings You Need to Try Today!

Midjourney has finally crossed the still-image frontier. Its Video V1 model, unveiled for all subscribers on 18–19 June 2025, turns any picture, uploaded or AI-generated, into four five-second clips that can be extended to a maximum of 21 seconds, at roughly eight image-credits per job (techcrunch.com, theverge.com). Two motion presets (Low and High) and an optional “manual” text prompt give users control over camera flow and subject animation (updates.midjourney.com). Early reviewers say the results “surpass expectations” in coherence, yet the launch is tempered by hard caps (480p–1080p, 24–30 fps, no audio) and a looming Disney/Universal copyright suit (testingcatalog.com, techeblog.com, omni.se). Below is a complete rundown of everything that surfaced in the last 48 hours.


1. What exactly is Video V1?

Midjourney’s first video model sits on top of its V7 image pipeline: generate or upload a still, hit Animate, and the backend diffusion engine interpolates motion across 120–630 frames depending on clip length (siliconangle.com). The tool lives on the web (Discord remains image-only for now) and costs about as much as a single image render per second of video, roughly one image-credit per second (updates.midjourney.com, venturebeat.com).
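
Those frame counts line up with the reported playback specs. As a rough, illustrative sanity check (the 24 fps and 30 fps figures come from the hands-on reports cited in section 2, not from official documentation):

```python
# Illustrative check of the reported 120-630 frame range.
# Assumes 24 fps for a single 5-second clip and 30 fps for a fully
# extended 21-second clip, per the hands-on reports cited below.
def frame_count(seconds: float, fps: int) -> int:
    return round(seconds * fps)

print(frame_count(5, 24))   # 120 frames: one 5-second clip
print(frame_count(21, 30))  # 630 frames: a fully extended 21-second clip
```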

Key workflow options

| Feature | Details | Sources |
| --- | --- | --- |
| Automatic | Midjourney invents a “motion prompt” on its own. | updates.midjourney.com |
| Manual | Users write a motion prompt such as “slow pan across neon alley.” | updates.midjourney.com |
| Low vs High Motion | Controls camera + subject energy; raising it increases the risk of artifacts. | siliconangle.com, tomsguide.com |
| Extend | Adds ~4 s blocks up to 21 s total. | siliconangle.com, testingcatalog.com |

2. Technical specs & current limits

  • Resolution / frame rate – Midjourney’s docs omit a hard number, but hands-on tests report 480p @ 24 fps (TechEBlog), while others see 1080p caps (TestingCatalog) and 30 fps embeds (VentureBeat). Expect rapid tweaks this month. (techeblog.com, testingcatalog.com, venturebeat.com)
  • No audio track – clips are silent by design; add sound in post. (testingcatalog.com)
  • Image-to-Video only – direct text-to-video is not supported yet, a gap versus Runway Gen-4, Google Veo 3 and OpenAI Sora. (tomsguide.com, techeblog.com)
  • Pricing tiers – Basic is $10/mo (limited GPU minutes); Pro and Mega gain a forthcoming “Video Relax” queue for lower-priority renders. (techcrunch.com, tomsguide.com)

3. Early community reaction

  • Creator buzz – X designer @apostraphi called V1 “surpassing all my expectations,” highlighting film-like consistency (venturebeat.com).
  • Hands-on demos – YouTube compilations show 2D animation shining, while complex 3-D camera swings sometimes “flicker” (youtube.com, reddit.com).
  • Reddit feedback – r/Midjourney threads praise low-motion ambience but critique human biomechanics and texture warping on high-motion clips (reddit.com).

4. Legal cloud overhead

Disney and Universal filed suit last week alleging Midjourney’s training data contained protected film frames; the V1 launch therefore lands under immediate IP scrutiny (omni.se, theverge.com). Holz’s blog post urges “responsible use,” hinting that uploads of copyrighted stills could add risk for creators (updates.midjourney.com).


5. Competitive landscape

| Model | Max length | Resolution | Workflow | Starting price |
| --- | --- | --- | --- | --- |
| Midjourney V1 | 21 s | 480p–1080p | Image→Video | $10/mo (techcrunch.com, testingcatalog.com) |
| Google Veo 3 | 20 s | 4K | Text→Video | $249/mo (techeblog.com) |
| OpenAI Sora | 20 s | 1080p | Text→Video | $20/mo+ (venturebeat.com) |
| Runway Gen-4 | 16 s | 1080p | Text + Image | $12/mo (tomsguide.com) |
| Luma Dream Machine | 10 s | 720p | Text→Video | $9.99/mo (venturebeat.com) |

6. Roadmap: beyond V1

Holz frames V1 as a “stepping stone” toward unified 3-D, physics-aware, real-time simulation models that users can “walk through,” slated for staged releases over the next 12 months (updates.midjourney.com). Expect:

  1. Upscaling & super-resolution for current videos.
  2. Long-form coherence—mid-term goal of minute-long sequences.
  3. 3-D scene builders, feeding eventual VR/AR pipelines.

7. Getting started

  1. Log in on the web version of Midjourney.
  2. Generate or upload a still, then press Animate.
  3. Choose Automatic or craft a Manual motion prompt, plus Low/High motion.
  4. Iterate with the Extend option until you reach the desired length (up to 21 s).
  5. Export the MP4 and add audio in post (see the ffmpeg sketch below) or in your NLE of choice.
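
Because V1 clips ship silent, a lightweight alternative to a full NLE is muxing a soundtrack with ffmpeg. A minimal sketch, assuming ffmpeg is installed and using placeholder file names:

```python
# Mux a soundtrack onto a silent Midjourney clip (placeholder file names).
# Requires ffmpeg on the system PATH; -shortest trims the output to the
# shorter of the two inputs so the audio doesn't outrun the video.
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "clip.mp4",          # silent video exported from Midjourney
    "-i", "soundtrack.mp3",    # any audio track
    "-c:v", "copy",            # keep the video stream untouched
    "-c:a", "aac",             # re-encode audio to AAC for MP4 compatibility
    "-shortest",
    "clip_with_audio.mp4",
], check=True)
```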

The Basic $10 subscription supplies enough credits for roughly a dozen 10-second videos per month; power users should budget for higher tiers or Relax mode (techcrunch.com).
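
For rough planning, the approximations quoted in this article can be turned into a back-of-the-envelope budget (these are estimates, not official Midjourney pricing):

```python
# Back-of-the-envelope credit budgeting using this article's approximations
# (about one image-credit per second of video, a dozen 10-second clips/month
# on Basic). Illustrative only; not official Midjourney pricing.
CREDITS_PER_SECOND = 1

videos_per_month = 12      # "roughly a dozen" videos on the Basic plan
seconds_per_video = 10     # 10-second clips

monthly_credits = CREDITS_PER_SECOND * videos_per_month * seconds_per_video
print(f"~{monthly_credits} image-credits' worth of GPU time per month")
```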


Bottom line

Video V1 is not yet a full-fledged filmmaker’s studio, but it delivers Midjourney’s trademark artistry at a hobbyist-friendly price, kick-starting a new phase in consumer AI video. Watch for weekly tweaks—resolution bumps, promptable scenes, and perhaps a text-to-video beta—while keeping an eye on the courtroom, where the tool’s data diet faces its toughest test.
