Putting a video on a website sounds simple, but there are real decisions behind compression, loading policy, scroll behaviour, and bandwidth management. This post walks through those fundamentals first, because they matter no matter what tooling you use. It also explains where the complexity starts to pile up on real pages, and why we built ViewportVideo to handle that part without forcing you into a heavyweight player setup.
1. Compression (file size): turn a heavy source into something the web can deliver efficiently.
2. Faststart (first-frame speed): move the MP4 metadata to the front so the browser can start sooner.
3. Loading (loading policy): choose when the browser should fetch, preload, or wait.
4. Viewport Rules (playback rules): decide what should happen when the video enters or leaves view.
That broad mix of concerns gets much easier to follow once you separate it into a few practical questions: how small the file should be, how quickly the first frame should appear, when the browser should fetch the video, and what rules should control playback on the page. The technical commands are here when you want them, but the main ideas should still be easy to scan.
We have spent enough time thinking about this topic on our own website that it felt worth building a tool to help with it, and this post is also where we introduce that direction. At the same time, this is not just a pitch for a library. We are also sharing the exact compression commands we use, plus details from our own website video pipeline, so the post stays useful even if you only take the low-level workflow and apply it yourself.
Compression: File Size
Turn a heavy source into something the web can deliver efficiently.
Compress the video aggressively enough that it is web-sized before you worry about playback behavior. A one-off desktop tool like HandBrake is fine if you are experimenting, but for repeatable website work a small CLI workflow is better. It is easier to version, easier to share with a team, and easier to run the same way every time. A 50 MB recording often becomes a 1-5 MB file that looks identical in a browser at typical viewing sizes.
This step matters a lot, but it is not the only thing that matters. A well-compressed file is still a bad web video if it is missing
faststart, cached poorly, or loaded too aggressively. Compression is one of the big wins, not the only win.
Choose your starting point: do a full compression pass with faststart baked in, or set up a reusable workflow with a shell script and optional Git alias.
Option 1: one-off manual compression
Choose this if you want a direct FFmpeg command with compression and faststart in one pass.
Use this when you are starting from a source recording and want a direct, manual FFmpeg
command instead of a wrapper script. Compression and faststart happen in the
same command, so there is no extra repackaging step later.
Swap in your own source and output filenames; faststart is already included:
ffmpeg -y -i input.mov -codec:v libx264 -crf 23 -preset medium -codec:a aac -b:a 128k -movflags +faststart output.faststart.mp4
This is the shortest path when you only need one finished output. If you want multiple sizes or a standard naming scheme, that is where a script starts paying for itself.
Option 2: git alias plus automated workflow
Choose this if video prep happens often enough that it should feel built into your repo workflow.
This is our own repeatable workflow. The git alias is optional convenience, and the shell script is where the real logic lives.
From inside a repo we run:
git cmov path/to/video.mov
That alias resolves to:
Git Alias
~/.gitconfig
Keep the entrypoint short in the repo and delegate the actual work to a script you can revise over time.
[alias]
cmov = "!f(){ input=\"$1\"; if [ -z \"$input\" ]; then echo \"Usage: git cmov path/to/video.mov\" >&2; exit 1; fi; case \"$input\" in /*) ;; *) input=\"${GIT_PREFIX:-}$input\" ;; esac; ~/bin/compress-mov.sh \"$input\"; }; f"
If you do not spend much time inside git aliases, that line probably looks more hostile than it really is. It is doing four small things:
- !f(){ ... }; f tells git to run a shell function instead of a normal git subcommand
- input="$1" grabs the path you passed to git cmov
- The usage check exits early with a helpful message if you forgot to pass a file
- The case block prepends ${GIT_PREFIX:-} for relative paths, so running the alias from a subdirectory still resolves the file correctly
The git alias itself is not required. You could run ~/bin/compress-mov.sh path/to/video.mov
directly and get the same result. The alias just makes that script feel like part of
your normal repo workflow, so compression is available anywhere you are already working
without needing to remember or type the full script path every time.
The script it calls looks like this:
Compression Script
~/bin/compress-mov.sh
This is the actual workflow: encode one full-size faststart MP4 and one half-size faststart MP4 in one pass.
#!/usr/bin/env bash
set -euo pipefail
if ! command -v ffmpeg >/dev/null 2>&1; then
printf "ffmpeg is not installed or not in PATH\n" >&2
exit 1
fi
INPUT="${1:-}"
if [[ -z "$INPUT" ]]; then
printf "Usage: compress-mov.sh path/to/video.mov\n" >&2
exit 1
fi
if [[ ! -f "$INPUT" ]]; then
printf "Missing input file: %s\n" "$INPUT" >&2
exit 1
fi
FILENAME="$(basename "$INPUT")"
DIRNAME="$(cd "$(dirname "$INPUT")" && pwd)"
BASENAME="${FILENAME%.*}"
OUTPUT_FULL="$DIRNAME/$BASENAME.faststart.mp4"
OUTPUT_HALF="$DIRNAME/$BASENAME.half.faststart.mp4"
if [[ ! -f "$OUTPUT_FULL" ]]; then
ffmpeg -y \
-i "$INPUT" \
-codec:v libx264 \
-crf 23 \
-preset medium \
-codec:a aac \
-b:a 128k \
-movflags +faststart \
"$OUTPUT_FULL"
fi
if [[ ! -f "$OUTPUT_HALF" ]]; then
ffmpeg -y \
-i "$INPUT" \
-vf "scale=trunc(iw*0.5/2)*2:trunc(ih*0.5/2)*2" \
-codec:v libx264 \
-crf 23 \
-preset medium \
-codec:a aac \
-b:a 128k \
-movflags +faststart \
"$OUTPUT_HALF"
fi
This script is easier to understand if you read it as three separate phases:
- Fail early. set -euo pipefail makes the script stop on missing variables, failing commands, and broken pipes instead of quietly stumbling forward.
- Normalize the paths. It extracts the file name, parent directory, and base name once so output names stay predictable and land beside the source file.
- Encode two delivery variants. The first FFmpeg command writes a full-size H.264 AAC MP4 with faststart. The second writes a half-scale version with the same codec settings and naming convention.
A few details are worth calling out explicitly:
- -codec:v libx264 picks H.264, which is still the most compatible default for website video
- -crf 23 is the main quality-size dial. Lower numbers increase quality and file size; higher numbers shrink harder
- -preset medium controls encoder speed versus compression efficiency, not output quality by itself
- -codec:a aac -b:a 128k gives you a broadly compatible audio track without wasting much space
- -movflags +faststart moves MP4 metadata to the front so playback can begin immediately over the network
- scale=trunc(iw*0.5/2)*2:trunc(ih*0.5/2)*2 cuts the dimensions in half while forcing even pixel counts, which H.264 expects
The two if [[ ! -f ... ]] checks also matter. They make the script safe to rerun without overwriting outputs you already generated, which is helpful when you are iterating on a page and only want to compress a source clip once.
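The even-rounding in the scale filter above is easy to sanity-check outside FFmpeg. This small helper (an illustration, not part of the script) mirrors trunc(i*0.5/2)*2:

```javascript
// Mirror of FFmpeg's trunc(iw*0.5/2)*2: halve a dimension, then
// force the result down to the nearest even number, because H.264
// requires even width and height.
function halfEven(dimension) {
  return Math.trunc((dimension * 0.5) / 2) * 2;
}

console.log(halfEven(1920)); // 960
console.log(halfEven(1080)); // 540
console.log(halfEven(1013)); // 506 — odd inputs still land on an even size
```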
This is a good pattern because it gives you a predictable full-size MP4 and a half-size
variant in one command, both already prepared with faststart. You can copy
this exactly, or treat it as a starting point and tune the CRF, preset, scaling, or
output naming for your own pipeline.
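If you would rather drive FFmpeg from Node than bash, the same invocation can be expressed as an argument list. This is a sketch: compressionArgs is a hypothetical helper name, the flag values match the script above, and ffmpeg itself is assumed to be installed and on PATH.

```javascript
// Build the ffmpeg argument list used by the script above.
// halfSize toggles the even-dimension downscale filter.
function compressionArgs(input, output, { halfSize = false } = {}) {
  const args = ['-y', '-i', input];
  if (halfSize) {
    args.push('-vf', 'scale=trunc(iw*0.5/2)*2:trunc(ih*0.5/2)*2');
  }
  args.push(
    '-codec:v', 'libx264',
    '-crf', '23',
    '-preset', 'medium',
    '-codec:a', 'aac',
    '-b:a', '128k',
    '-movflags', '+faststart',
    output
  );
  return args;
}

// Usage (assumes ffmpeg is installed):
// const { spawnSync } = require('node:child_process');
// spawnSync('ffmpeg', compressionArgs('input.mov', 'output.faststart.mp4'));
```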
Faststart: First-Frame Speed
Move the MP4 metadata to the front so the browser can start sooner.
An MP4 file stores a table of contents that the browser needs before it can play anything.
By default, that table of contents is written at the end of the file. The
faststart flag moves it to the beginning so the browser can start playback
as soon as the first bytes arrive. Without it, the browser has to make additional requests
to the end of the file to fetch that table of contents first, adding round-trip latency
for every video on the page. If you used the compression commands from the previous
section, faststart is already included in those.
faststart is not a codec or quality setting. It does not make the file smaller and it does not change the pixels. It changes where the MP4 metadata lives so the browser can begin playback sooner.
If you compressed your video using the commands in the previous section,
faststart is already enabled and you can skip this step. This command is for
cases where you have an MP4 that was compressed elsewhere and needs faststart
added after the fact. It rewrites the container layout only, so it is fast and lossless.
Add faststart to an existing MP4
Use this when you already have a compressed file and just need to move the metadata to the front.
Swap in your own source and output filenames to repackage the container:
ffmpeg -i compressed.mp4 -codec:v copy -codec:a copy -movflags +faststart output.mp4
The -codec:v copy and -codec:a copy flags tell FFmpeg to remux the file, meaning it rewrites
the container structure without re-encoding the video or audio inside it. No visual
quality is lost and the operation is usually very fast.
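You can verify the result without re-opening the file in a player. MP4s are a sequence of "boxes", each starting with a 4-byte big-endian size and a 4-byte type; faststart simply means the moov box sits before mdat. A small Node sketch (illustrative only: it walks top-level boxes and ignores the rare 64-bit size form):

```javascript
// Check whether an MP4 buffer has its moov box (the "table of
// contents") before the mdat box, i.e. whether faststart applied.
// Returns true, false, or null if either box is missing.
function isFaststart(buf) {
  let offset = 0;
  let moovAt = -1;
  let mdatAt = -1;
  while (offset + 8 <= buf.length) {
    const size = buf.readUInt32BE(offset);           // box size, big-endian
    const type = buf.toString('ascii', offset + 4, offset + 8); // box type
    if (type === 'moov' && moovAt < 0) moovAt = offset;
    if (type === 'mdat' && mdatAt < 0) mdatAt = offset;
    if (size < 8) break; // malformed or 64-bit size; stop scanning
    offset += size;
  }
  if (moovAt < 0 || mdatAt < 0) return null;
  return moovAt < mdatAt;
}

// Usage: isFaststart(require('node:fs').readFileSync('output.mp4'))
```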
Why Not HLS For This?
HLS is the right choice for live streams, adaptive bitrate, and DRM. For pre-compressed short videos served from a CDN, it usually adds complexity without benefit.
Most desktop browsers do not support HLS natively (Safari is the exception), so you typically also need to ship a JavaScript library like hls.js (around 60 KB gzipped) just to play the video. Before a
single frame plays, the browser has to download and run that library, fetch and parse a
.m3u8 manifest, then fetch the first segment. A faststart MP4 skips all of
that: the browser reads the metadata from the front of the file and starts decoding
immediately.
Loading: Loading Policy
Choose when the browser should fetch, preload, or wait.
Serve the video with explicit dimensions and conservative loading defaults. Two attributes do most of the work natively, without any JavaScript:
HTML Baseline
index.html
This is the minimal shape worth starting from before you add any page-level playback behavior.
<video
src="output.mp4"
preload="none"
muted
playsinline
loop
width="640"
height="360"
></video>
preload="none" tells the browser not to buffer anything until playback is
requested, so videos below the fold use zero bandwidth until the user scrolls toward
them. This is a good default whether you stay fully vanilla or later add a library on top.
Width and height are required so the browser can reserve the correct space in the layout before the video loads, preventing content shifts.
Why preload="none" Is Not Enough On Its Own
Setting preload="none" solves the initial state. No video loads until
something triggers it. But the moment you call play() on a video, the
browser starts downloading it, and once a download starts, there is no native way
to pause the network request separately from the playback. Calling pause()
stops the picture but the browser keeps fetching data in the background, typically
buffering several seconds ahead before it slows down.
On a page with several videos this becomes a real problem. You scroll past a demo, it starts playing, you scroll past it and it pauses, but the download keeps going. Do that three or four times and you have multiple videos all competing for bandwidth at the same time, none of them the one the user is actually watching. On a desktop connection this is wasteful. On mobile it can visibly delay the video the user cares about.
Controlling What Loads and When
A proper loading policy needs to answer three questions that the browser does not answer for you:
- Which video is allowed to load right now? Ideally only the one that is playing, or the one that is about to play. Everything else should be idle, not quietly buffering in the background.
- Can the next video warm up before it scrolls into view? If the user is scrolling down and a video is approaching the viewport, decoding its first frame in advance means it can start playing instantly instead of showing a blank rectangle. But warming should only happen when the active video has already finished loading, so it never competes for bandwidth.
- What happens to a video that was playing but is now off-screen? Pausing playback is the minimum, but the download can still continue. The only way to truly stop a download on a native video element is to remove the src attribute and call load(), which aborts the network request. This is aggressive, but on pages with many videos it frees bandwidth and memory for the video that actually matters. When the user scrolls back, the source is reattached and playback resumes from where it left off.
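Reduced to its core, that loading policy is a small decision function. A sketch with illustrative names, kept free of DOM calls so the rule itself is easy to test:

```javascript
// Decide the loading state for each video given page-level priority:
// - "load": only the active video downloads
// - "warm": the next candidate, but only once the active one is buffered
// - "idle": everything else stays quiet
function loadingPlan({ activeId, nextId, activeFullyBuffered }) {
  return (videoId) => {
    if (videoId === activeId) return 'load';
    if (videoId === nextId && activeFullyBuffered) return 'warm';
    return 'idle';
  };
}

const plan = loadingPlan({ activeId: 'hero', nextId: 'demo2', activeFullyBuffered: false });
plan('hero');  // 'load'
plan('demo2'); // 'idle' — warming waits until the active video finishes
```

On a real page you would map "load" and "warm" onto attaching sources and preloading, and "idle" onto the src-removal abort described above.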
None of this is built into the <video> element. The browser gives
you preload="none" as a starting state and play() /
pause() as controls, but it has no concept of page-level priority. It
does not know that one video matters more than another, or that a paused video should
stop fetching data. That gap is exactly where a loading policy layer sits, and it is
one of the main things ViewportVideo
manages: enforcing that only the active video loads, warming the next one when
bandwidth is clear, and optionally aborting downloads on videos that leave the
viewport so they stop competing entirely.
Viewport Rules: Playback Rules
Decide what should happen when the video enters or leaves view.
Add the smallest playback rule that solves the page you actually have. Lazy loading handles
the initial deferral. For autoplay and pause on scroll, a small IntersectionObserver
baseline is often enough:
Vanilla Scroll Logic
app.js
This works well as a baseline, and it also makes it easier to see where the complexity begins once a page has several videos.
const observer = new IntersectionObserver((entries) => {
entries.forEach(entry => {
if (entry.isIntersecting) {
entry.target.play().catch(() => {});
} else {
entry.target.pause();
}
});
}, { threshold: 0.5 });
document.querySelectorAll('video[preload="none"]').forEach(v => observer.observe(v));
When a video enters the viewport it plays. When it leaves, it pauses. The download
continues in the background, and when the user scrolls back, the video resumes from
where it stopped without re-downloading anything. Keep in mind that with
preload="none", the browser has fetched nothing until play()
is called, so there will be a visible delay before the first frame appears. On pages
with several videos, paused videos also continue competing for bandwidth as they
finish buffering their current chunks, which can slow down the video the user is
actually watching.
Think of this snippet as high-level pseudocode for the idea. For a single showcase video or a very small page it may be enough, but for anything more polished the library section below goes into much more detail on solving first-frame readiness, bandwidth competition, and smooth transitions.
Once the file itself is under control, page-level coordination is usually the next place things start to go wrong.
Polished Tends to Mean Complex
The trouble starts when you have several autoplaying demos on the same page and want them to feel polished instead of merely functional. The snippet above covers enter and leave, but a real page also needs to deal with the following.
1. Which video should actually play? When two videos are both partially visible, the simple observer fires both. Both call play(), both start downloading, and neither one gets full bandwidth. A better rule is to pick a single winner based on which video is closest to the center of the viewport, because that is the one the user is most likely focused on. That decision needs to run on every scroll frame so the winner changes smoothly as the user scrolls, and the previous winner needs to pause immediately so there is never a moment where two videos are playing at once.
2. What happens when autoplay is blocked? Browsers can silently block video.play() if the page does not meet their autoplay policy. When that happens, the play promise rejects and the video sits as a blank rectangle. Without handling this, the user sees nothing and has no way to start playback. A proper recovery shows a visible play button overlay so the user can tap to start the video manually. Once they do, the browser allows playback because it counts as a user-initiated gesture.
3. What about users who prefer reduced motion? If the user has enabled prefers-reduced-motion in their OS settings, autoplaying video can feel intrusive. A respectful default is to pause the video and show a static frame with a play button, the same kind of overlay used for blocked autoplay. The video stays visible but does not move until the user explicitly chooses to watch it. If the user later disables the preference, the page should not suddenly start playing all videos. The static state stays until the user interacts.
4. What happens when the user switches tabs? A video playing in a background tab is wasting bandwidth and battery for nobody. The sensible default is to pause when the tab loses focus and resume when the user comes back. That sounds simple, but it needs state tracking. You need to remember whether the video was playing before the blur so you can decide whether to resume on focus. A video the user had already paused manually should not start playing again just because they switched tabs and came back.
5. Should a video play forever? A looping demo left unattended can play for hours. Two controls help here. An idle timeout pauses the video after a period of no user activity (mouse movement, keyboard, touch) and shows a play button to restart. A loop count limit stops the video after a set number of loops. Both prevent the page from burning CPU and battery on a video nobody is watching, while keeping it easy to resume with a single click.
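The winner-selection rule is the easiest of these to pin down concretely. A sketch of the arbitration as a pure helper (illustrative names; on a real page you would feed it getBoundingClientRect() values on each scroll frame):

```javascript
// Pick the single video allowed to play: the visible one whose
// center is closest to the viewport center. Each entry needs top
// and bottom in viewport coordinates.
function pickWinner(videos, viewportHeight) {
  const mid = viewportHeight / 2;
  let winner = null;
  let best = Infinity;
  for (const v of videos) {
    const visible = v.top < viewportHeight && v.bottom > 0;
    if (!visible) continue; // fully off-screen videos never win
    const distance = Math.abs((v.top + v.bottom) / 2 - mid);
    if (distance < best) {
      best = distance;
      winner = v;
    }
  }
  return winner; // null when nothing is visible
}
```

Running this every scroll frame and pausing the previous winner whenever the result changes gives you the "only one plays" behavior without any observer threshold tuning.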
None of these problems are impossible to solve yourself, but together they push the
implementation well past a small IntersectionObserver snippet and into
page-wide playback policy. That is usually the point where website video starts to
feel more complicated than it should.
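As one example of the state tracking involved, the tab-switch rule can be sketched as a small policy object. The names are illustrative and it is deliberately decoupled from the DOM: on a real page you would call onBlur/onFocus from a visibilitychange listener and map the returned action onto pause() and play().

```javascript
// Resume-on-focus should only apply to videos that were playing
// when the tab blurred — not ones the user paused manually.
function createFocusPolicy() {
  let wasPlayingBeforeBlur = false;
  return {
    onBlur(isPlaying) {
      wasPlayingBeforeBlur = isPlaying; // remember state at blur time
      return 'pause';
    },
    onFocus() {
      return wasPlayingBeforeBlur ? 'resume' : 'stay-paused';
    },
  };
}

const policy = createFocusPolicy();
policy.onBlur(true);  // 'pause'
policy.onFocus();     // 'resume' — it was playing before the blur
policy.onBlur(false); // user had already paused it manually
policy.onFocus();     // 'stay-paused'
```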
Where ViewportVideo Helps
ViewportVideo
is the library we built to handle exactly that middle layer. FFmpeg and your compression
workflow prepare the file. The browser handles loading, caching, decoding, and the actual
<video> element. ViewportVideo sits above that and handles
page-level playback coordination: which video should play, which one can warm up next, and
how paused or blocked states should behave.
In practice, that means you can keep the useful fundamentals in this post, especially
compression, faststart, and caching, while letting the library take over
the repetitive coordination work:
- Only one managed video plays at a time across the page
- The video nearest the viewport center wins when multiple videos are eligible
- One upcoming video can warm up so the next transition feels faster
- preload="none" stays the normal resting state instead of gradually turning into eager loading
- Blocked autoplay and reduced motion fall back to a clear manual-start UI
- Tab blur, idle timeout, and optional paused-download abort behaviour are handled consistently
This is the point where the library becomes useful. If the page has one video and a
very simple autoplay rule, you may not need a library at all. But even a single video
can benefit from ViewportVideo if you want any of the following without
rebuilding them by hand:
- First-frame readiness before the video scrolls into view
- Viewport-driven play and pause rules
- Consistent window blur and focus handling
- Reduced motion behaviour
- Idle timeouts and loop count limits
- Optional paused-download abort to save bandwidth and memory on mobile
Once the page has multiple videos, those concerns multiply. Viewport arbitration,
autoplay recovery, warmup logic, and pause rules all need to agree with each other.
ViewportVideo is the layer meant to own that work.
ViewportVideo Shape
app.js
Not a full setup guide, just the core idea: keep normal video elements and bind page-level playback policy onto them.
const videos = document.querySelectorAll('[data-demo-video]');
videos.forEach((video) => {
bindViewportVideo(video, {
playbackMode: 'viewport',
visibilityThreshold: 0.5
});
});
That example is intentionally simple. This is not an installation guide. The important part is the shape of the abstraction: keep your own video elements, keep the normal browser video fundamentals, and move the repetitive page-level playback rules into a focused library when the page grows past a single simple demo.
Final Considerations
What About Stopping the Download Entirely?
You might wonder whether pausing the download as well as the playback would save
bandwidth. Sometimes it does, which is why ViewportVideo exposes that behavior
as an option. The real benefit is not just lower transfer. It is being able to shift
bandwidth and memory pressure toward the video that is actually playing or the next one
you want to warm, which can matter on mobile or on pages with several demos.
The only way to stop a download mid-flight on a native video element is to clear the source entirely:
video.src = '';
video.load();
This aborts the request, confirmed by a net::ERR_ABORTED in Chrome's
network tab. However, it also destroys the element's state. Whether the
partially-downloaded bytes are preserved depends on two things: correct
Cache-Control headers so the browser keeps the partial response, and
server support for Range requests so it can resume from the right byte
offset instead of starting over. Most modern static hosts (Cloudflare Pages, Firebase
Hosting, Netlify, S3, and similar CDNs) support range requests out of the box, so
in practice the main thing to get right is your cache headers. Without them, the
browser re-downloads from byte zero every time. That means you are potentially
throwing away already-fetched chunks in exchange for better prioritization.
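Wrapped into helpers, the abort-and-resume dance looks like this. A sketch: the function names are illustrative, and resuming from the saved position still depends on the cache headers and Range support discussed above. It works on anything with the native video element's src/currentTime/load()/play() shape.

```javascript
// Abort an in-flight download by detaching the source, remembering
// enough state to reattach later.
function stopDownload(video) {
  const saved = { src: video.src, time: video.currentTime };
  video.src = ''; // clearing src…
  video.load();   // …then calling load() aborts the network request
  return saved;
}

function resumeDownload(video, saved) {
  video.src = saved.src;
  video.load();
  video.currentTime = saved.time;      // seek back to where the user left off
  return video.play().catch(() => {}); // autoplay may be blocked again
}
```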
For most use cases, pause() on scroll-out is sufficient. The browser
throttles background downloads naturally once it has buffered a few seconds ahead, so
the bandwidth impact of an off-screen paused video is minimal in practice. The abort
path is useful when you care more about aggressively prioritizing the currently-playing
video than about preserving every partial download, but it is not something every page
needs.
Cache Headers Matter
One important deployment detail: make sure your server or CDN sends correct cache headers for MP4 files. Without them, every page load re-downloads every video from scratch, even if the user has visited before.
On Cloudflare Pages, static assets are cached automatically. On other platforms, set at minimum:
Cache-Control: public, max-age=31536000, immutable
If your video URLs include a hash or version parameter (e.g. output.abc123.mp4),
immutable tells the browser never to revalidate: the content will never
change at that URL. If not, use no-cache instead, which allows caching
but requires revalidation on each request.
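If your videos are served through your own middleware rather than a static host, that rule is one small function. A sketch: the hash pattern here (e.g. output.abc123.mp4) is an assumption, so match the regex to your own build's naming scheme.

```javascript
// Pick a Cache-Control value for a video filename, following the
// rule above: hashed/versioned names never change, so they can be
// immutable; unversioned names must revalidate on each request.
function cacheControlFor(filename) {
  const hashed = /\.[0-9a-f]{6,}\.mp4$/i.test(filename); // assumed naming scheme
  return hashed
    ? 'public, max-age=31536000, immutable'
    : 'no-cache';
}

cacheControlFor('output.abc123.mp4'); // 'public, max-age=31536000, immutable'
cacheControlFor('output.mp4');        // 'no-cache'
```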
One more note: during local development, avoid combining a no-store or
no-cache server with DevTools network throttling. You will see artificially
slow video loads because the browser cannot cache anything and must re-download the
full file on every request. This does not reflect production behaviour.
The Complete Picture
A video that is ready for the web:
- Encoded to the smallest acceptable file size
- Finished with faststart present so playback can begin immediately
- Served with preload="none" so off-screen videos use no bandwidth
- Given clear viewport play and pause rules, either through a small custom script or a library like ViewportVideo
- Cached by the CDN so repeat visits are instant
That is still the whole stack. FFmpeg handles file preparation. The browser handles file
loading, decoding, caching, and playback. ViewportVideo fits on top only when
the page needs coordinated playback rules across multiple videos. That is the boundary.
Install This As A Skill
If you want this exact workflow available inside your coding agent, install the companion skill bundle for Claude Code or Codex.
Claude Code
- Download CLAUDE.md and setup-video-streaming.md.
- Copy CLAUDE.md into your project root, or append it to your existing CLAUDE.md.
- Place setup-video-streaming.md in .claude/commands/.
- Invoke it with /setup-video-streaming.
your-project/
├── CLAUDE.md
└── .claude/
└── commands/
└── setup-video-streaming.md
Codex
- Download SKILL.md and openai.yaml.
- Create ~/.codex/skills/setup-video-streaming/.
- Place SKILL.md in that folder and openai.yaml in agents/.
- Invoke it with $setup-video-streaming.
~/.codex/skills/setup-video-streaming/
├── SKILL.md
└── agents/
└── openai.yaml