Midjourney Video V1 Just Dropped: Turn Any Photo into a 21-Second AI Movie—See the Jaw-Dropping Demos and Secret Settings You Need to Try Today
20 June 2025
3 mins read


  • Midjourney released Video V1 to all subscribers on June 18–19, 2025.
  • Video V1 turns any image into four five-second clips, extendable to 21 seconds, at roughly eight image-credits per job.
  • It offers two motion presets, Low and High, plus an optional manual prompt to control camera flow and subject animation.
  • Video V1 enforces hard caps of 480–1080p resolution, 24–30 fps, with no audio.
  • The model sits on Midjourney’s V7 image pipeline; you upload or generate a still, press Animate, and motion is interpolated across 120–630 frames depending on clip length.
  • The tool runs on the web (Discord remains image-only for now) and costs roughly eight image-credits per job, about one image-credit per second of video.
  • Midjourney introduces a “motion prompt” concept, with automatic prompts and a Manual option like “slow pan across neon alley.”
  • Early reactions praise coherence and film-like output, with demos showing strong 2D animation but occasional flicker on complex 3-D camera movements.
  • Disney and Universal filed suit last week over training data containing protected film frames, placing Video V1 under IP scrutiny.
  • The roadmap describes Video V1 as a stepping stone to unified 3-D, physics-aware, real-time simulations with staged releases over 12 months, including upscaling, longer sequences, 3-D scene builders, and VR/AR pipelines.

Midjourney has finally crossed the still-image frontier. Its Video V1 model, unveiled for all subscribers on 18–19 June 2025, turns any picture, uploaded or AI-generated, into four five-second clips that can be extended to a maximum of 21 seconds, at roughly eight image-credits per job (TechCrunch, The Verge). Two motion presets (Low and High) and an optional Manual text prompt give users control over camera flow and subject animation (Midjourney). Early reviewers say the results “surpass expectations” in coherence, yet the launch is tempered by hard caps (480–1080p, 24–30 fps, no audio) and a looming Disney/Universal copyright suit (TestingCatalog, TechEBlog, Omni). Below is a complete rundown of everything that surfaced in the last 48 hours.


1. What exactly is Video V1?

Midjourney’s first video model sits on top of its V7 image pipeline: generate or upload a still, hit Animate, and the backend diffusion engine interpolates motion across 120–630 frames depending on clip length (SiliconANGLE). The tool lives on the web (Discord remains image-only for now) and costs roughly eight image-credits per job, about one image-credit per second of video (Midjourney, VentureBeat).

Key workflow options

| Feature | Details | Sources |
| --- | --- | --- |
| Automatic | Midjourney invents a “motion prompt” on its own. | Midjourney |
| Manual | Users write a motion prompt such as “slow pan across neon alley.” | Midjourney |
| Low vs High Motion | Controls camera + subject energy; raising it increases the risk of artifacts. | SiliconANGLE, Tom’s Guide |
| Extend | Adds ~4 s blocks, up to 21 s total. | SiliconANGLE, TestingCatalog |
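
The clip-length and frame-count figures quoted in this article are mutually consistent, and can be sanity-checked in a few lines. This is only a back-of-the-envelope sketch; the 5 s base clip, ~4 s extension block, and 24–30 fps range are the reported figures, not official constants:

```python
BASE_CLIP_S = 5       # initial clip length reported for Video V1
EXTEND_BLOCK_S = 4    # each Extend adds ~4 s (reported)
MAX_EXTENSIONS = 4    # number of extensions needed to hit the cap

# 5 s + 4 extensions of ~4 s each = the 21 s maximum the article cites
max_length_s = BASE_CLIP_S + MAX_EXTENSIONS * EXTEND_BLOCK_S
print(max_length_s)          # 21

# The 120-630 frame range follows from length x frame rate:
print(BASE_CLIP_S * 24)      # 120 frames: shortest clip at 24 fps
print(max_length_s * 30)     # 630 frames: longest clip at 30 fps
```

So the "120–630 frames" figure simply spans the shortest clip at the lowest reported frame rate to the longest clip at the highest.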

2. Technical specs & current limits

  • Resolution / Frame rate – Midjourney’s docs omit a hard number, but hands-on tests report 480p @ 24 fps (TechEBlog), while others see 1080p caps (TestingCatalog) and 30 fps embeds (VentureBeat). Expect rapid tweaks this month.
  • No audio track – clips are silent by design; add sound in post (TestingCatalog).
  • Image-to-Video only – direct text-to-video is not supported yet, a gap versus Runway Gen-4, Google Veo 3, and OpenAI Sora (Tom’s Guide, TechEBlog).
  • Pricing tiers – Basic $10/mo (limited GPU minutes); Pro and Mega gain a forthcoming “Video Relax” queue for lower-priority renders (TechCrunch, Tom’s Guide).

3. Early community reaction

  • Creator buzz – X designer @apostraphi called V1 “surpassing all my expectations,” highlighting film-like consistency (VentureBeat).
  • Hands-on demos – YouTube compilations show 2D animation shining, while complex 3-D camera swings sometimes “flicker” (YouTube, Reddit).
  • Reddit feedback – r/Midjourney threads praise low-motion ambience but critique human biomechanics and texture warping on high-motion clips (Reddit).

4. Legal cloud overhead

Disney and Universal filed suit last week alleging Midjourney’s training data contained protected film frames; the V1 launch therefore lands under immediate IP scrutiny (Omni, The Verge). Holz’s blog post urges “responsible use,” hinting that uploads of copyrighted stills could add risk for creators (Midjourney).


5. Competitive landscape

| Model | Max length | Resolution | Workflow | Starting price | Sources |
| --- | --- | --- | --- | --- | --- |
| Midjourney V1 | 21 s | 480–1080p | Image→Video | $10/mo | TechCrunch, TestingCatalog |
| Google Veo 3 | 20 s | 4K | Text→Video | $249/mo | TechEBlog |
| OpenAI Sora | 20 s | 1080p | Text→Video | $20/mo+ | VentureBeat |
| Runway Gen-4 | 16 s | 1080p | Text + Image | $12/mo | Tom’s Guide |
| Luma Dream Machine | 10 s | 720p | Text→Video | $9.99/mo | VentureBeat |

6. Roadmap: beyond V1

Holz frames V1 as a “stepping stone” toward unified 3-D, physics-aware, real-time simulation models that users can “walk through,” slated for staged releases over the next 12 months (Midjourney). Expect:

  1. Upscaling & super-resolution for current videos.
  2. Long-form coherence—mid-term goal of minute-long sequences.
  3. 3-D scene builders, feeding eventual VR/AR pipelines.

7. Getting started

  1. Log in on the web version of Midjourney.
  2. Generate or upload a still, then press Animate.
  3. Choose Automatic or craft a Manual motion prompt, plus Low/High motion.
  4. Apply Extend in ~4-second blocks until reaching the desired length (21 s max).
  5. Export MP4 and add audio in your NLE of choice.

The basic $10 subscription supplies enough credits for roughly a dozen 10-second videos per month; power users should budget for higher tiers or Relax mode (TechCrunch).


Bottom line

Video V1 is not yet a full-fledged filmmaker’s studio, but it delivers Midjourney’s trademark artistry at a hobbyist-friendly price, kick-starting a new phase in consumer AI video. Watch for weekly tweaks—resolution bumps, promptable scenes, and perhaps a text-to-video beta—while keeping an eye on the courtroom, where the tool’s data diet faces its toughest test.
