How to use Blob URL, MediaSource or other methods to play concatenated Blobs of media fragments?

Posted: 2017-12-26 08:13:34

【Question】
For lack of a different description, I am attempting to implement an offline media context.
The concept is to create 1-second Blobs of recorded media, with the ability to

- play the 1-second Blobs independently at an HTMLMediaElement
- play the full media resource from the concatenated Blobs

The problem is that once the Blobs are concatenated, the media resource does not play at an HTMLMediaElement using either a Blob URL or MediaSource.

The created Blob URL only plays 1 second of the concatenated Blobs. MediaSource throws two exceptions:

    DOMException: Failed to execute 'addSourceBuffer' on 'MediaSource': The MediaSource's readyState is not 'open'

and

    DOMException: Failed to execute 'appendBuffer' on 'SourceBuffer': This SourceBuffer has been removed from the parent media source.

How can the concatenated Blobs be correctly encoded, or a workaround applied otherwise, to play the media fragments back as a single re-constituted media resource?
```html
<!DOCTYPE html>
<html>
<head>
</head>
<body>
  <script>
    const src = "https://nickdesaulniers.github.io/netfix/demo/frag_bunny.mp4";
    fetch(src)
      .then(response => response.blob())
      .then(blob => {
        const blobURL = URL.createObjectURL(blob);
        const chunks = [];
        const mimeCodec = "video/webm; codecs=opus";
        let duration;
        let media = document.createElement("video");
        media.onloadedmetadata = () => {
          media.onloadedmetadata = null;
          duration = Math.ceil(media.duration);
          let arr = Array.from({
            length: duration
          }, (_, index) => index);
          // record each second of media
          arr.reduce((p, index) =>
              p.then(() =>
                new Promise(resolve => {
                  let recorder;
                  let video = document.createElement("video");
                  video.onpause = e => {
                    video.onpause = null;
                    console.log(e);
                    recorder.stop();
                  };
                  video.oncanplay = () => {
                    video.oncanplay = null;
                    video.play();
                    let stream = video.captureStream();
                    recorder = new MediaRecorder(stream);
                    recorder.start();
                    recorder.ondataavailable = e => {
                      console.log("data event", recorder.state, e.data);
                      chunks.push(e.data);
                    };
                    recorder.onstop = e => {
                      resolve();
                    };
                  };
                  video.src = `${blobURL}#t=${index},${index + 1}`;
                })
              ), Promise.resolve())
            .then(() => {
              console.log(chunks);
              let video = document.createElement("video");
              video.controls = true;
              document.body.appendChild(video);
              let select = document.createElement("select");
              document.body.appendChild(select);
              let option = new Option("select a segment");
              select.appendChild(option);
              for (let chunk of chunks) {
                let index = chunks.indexOf(chunk);
                let option = new Option(`Play ${index}-${index + 1} seconds of media`, index);
                select.appendChild(option);
              }
              let fullMedia = new Blob(chunks, {
                type: mimeCodec
              });
              let opt = new Option("Play full media", "Play full media");
              select.appendChild(opt);
              select.onchange = () => {
                if (select.value !== "Play full media") {
                  video.src = URL.createObjectURL(chunks[select.value]);
                } else {
                  const mediaSource = new MediaSource();
                  video.src = URL.createObjectURL(mediaSource);
                  mediaSource.addEventListener("sourceopen", sourceOpen);

                  function sourceOpen(event) {
                    // if the media type is supported by `mediaSource`
                    // fetch resource, begin stream read,
                    // append stream to `sourceBuffer`
                    if (MediaSource.isTypeSupported(mimeCodec)) {
                      var sourceBuffer = mediaSource.addSourceBuffer(mimeCodec);
                      // set `sourceBuffer` `.mode` to `"segments"`
                      sourceBuffer.mode = "segments";
                      fetch(URL.createObjectURL(fullMedia))
                        // return `ReadableStream` of `response`
                        .then(response => response.body.getReader())
                        .then(reader => {
                          const processStream = (data) => {
                            if (data.done) {
                              return;
                            }
                            // append chunk of stream to `sourceBuffer`
                            sourceBuffer.appendBuffer(data.value);
                          };
                          // at `sourceBuffer` `updateend` call `reader.read()`,
                          // to read next chunk of stream, append chunk to
                          // `sourceBuffer`
                          sourceBuffer.addEventListener("updateend", function() {
                            reader.read().then(processStream);
                          });
                          // start processing stream
                          reader.read().then(processStream);
                          // do stuff when `reader` is closed,
                          // read of stream is complete
                          return reader.closed.then(() => {
                            // signal end of stream to `mediaSource`
                            mediaSource.endOfStream();
                            return mediaSource.readyState;
                          });
                        })
                        // do stuff when `reader.closed`, `mediaSource` stream ended
                        .then(msg => console.log(msg))
                        .catch(err => console.log(err));
                    }
                    // if `mimeCodec` is not supported by `MediaSource`
                    else {
                      alert(mimeCodec + " not supported");
                    }
                  }
                }
              };
            });
        };
        media.src = blobURL;
      });
  </script>
</body>
</html>
```
Using a Blob URL at the `else` branch of the `select` element's `change` event plays only the first second of the media resource:

    video.src = URL.createObjectURL(fullMedia);

plnkr: http://plnkr.co/edit/dNznvxe504JX7RWY658T?p=preview (version 1 uses a Blob URL, version 2 uses MediaSource)
【Comments】
Could you accept a solution like the one described here? Even if it may be the reverse of what you are trying to achieve, the same logic could apply (use one recorder for the full version, and others for the slices). If you have multiple video sources, use a single canvas stream for the full version.

@Kaiido The gist of the question is similar, though we don't need to use `navigator.mediaDevices.getUserMedia()`. Was it you who linked to bugs.chromium.org/p/chromium/issues/detail?id=642012, from which github.com/w3c/mediacapture-record/issues/119 was found? We don't want to use `MediaRecorder` again if it is not necessary. We want to find a way to correctly encode the concatenated media fragments into a single file.

gUM is not required; the same applies to any kind of MediaStream. If you want to record multiple files from the same source, then you need multiple recorders. Remember the metadata we already discussed? The full recording and each slice need different metadata.

@Kaiido We need to find a way to write the metadata to the file.

No, that is not what you need. If I have time today or tomorrow I will write an answer, but you don't seem to understand it correctly. The metadata issue is the reason why it doesn't and can't work. Even if you wrote a js webm VP8 metadata library yourself to fix Chrome's files, it would only work with Chrome's current webm encoding implementation. FF and Chrome already differ in their implementations today. I'll let you guess what this becomes in 5 years, when others join the party and these two start supporting more muxers and file formats. Your library would be deprecated right away.
【Answer 1】
There is currently no Web API targeted at video editing. The MediaStream and MediaRecorder APIs are meant to deal with live sources.

Because of the structure of video files, you can't just slice a part out of one to make a new video, nor can you concatenate small video files to make one longer one. In both cases, you need to rebuild the file's metadata in order to make a new video file.

The only current API able to produce media files is MediaRecorder.

There are currently only two implementers of the MediaRecorder API, but they support about 3 different codecs in two different containers, which means you would need to build at least 5 metadata parsers yourself just to support the current implementations (a number that will keep growing, and that may need updating as the implementations themselves are updated). Sounds like a tough job.

Maybe the incoming WebAssembly API will allow us to port ffmpeg to the browser, which would make it a lot simpler, but I have to admit I don't know WA at all, so I'm not even sure it is really doable.
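For what it's worth, projects like ffmpeg.wasm have since taken exactly this route. A rough sketch of what concatenation could look like with it, assuming the `@ffmpeg/ffmpeg` 0.x API (`createFFmpeg`, `fetchFile`, and the in-memory `FS`), all of which postdate this answer; note that `-c copy` only works when every part shares the same codec parameters, otherwise a re-encode is required:

```js
import { createFFmpeg, fetchFile } from '@ffmpeg/ffmpeg';

const ffmpeg = createFFmpeg({ log: true });

// Concatenate recorded webm Blobs by re-muxing them with ffmpeg's concat
// demuxer, which rebuilds the container metadata that naive Blob
// concatenation lacks.
async function concatBlobs(blobs) {
  if (!ffmpeg.isLoaded()) await ffmpeg.load();
  const listLines = [];
  for (let i = 0; i < blobs.length; i++) {
    const name = `part${i}.webm`;
    // write each Blob into ffmpeg's virtual filesystem
    ffmpeg.FS('writeFile', name, await fetchFile(blobs[i]));
    listLines.push(`file '${name}'`);
  }
  ffmpeg.FS('writeFile', 'list.txt', listLines.join('\n'));
  // copy the encoded streams into a single, properly muxed file
  await ffmpeg.run('-f', 'concat', '-safe', '0', '-i', 'list.txt', '-c', 'copy', 'out.webm');
  const data = ffmpeg.FS('readFile', 'out.webm');
  return new Blob([data.buffer], { type: 'video/webm' });
}
```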
I hear you saying "OK, there is no tool made especially for that, but we are hackers, and we've got other tools, powerful ones." Well, yes. If we're really willing to do it, we can hack something together...

As said before, MediaStream and MediaRecorder are meant for live video. We can however convert static video files to live streams with the `[HTMLVideoElement | HTMLCanvasElement].captureStream()` methods.
We can also record those live streams to a static file thanks to the MediaRecorder API.
What we cannot do, however, is change the current stream source a MediaRecorder is being fed with.

So in order to merge small video files into one longer one, we need to

- load these videos into `<video>` elements
- draw these `<video>` elements onto a `<canvas>` element in the wanted order
- feed an AudioContext's stream source with the `<video>` elements
- merge the canvas.captureStream and the AudioStreamSource's streams into a single MediaStream
- record this MediaStream

But this means the merging is actually a re-recording of all the videos, and it can only be done in real time (speed = x1). A minimal sketch of this pipeline follows; the full proof of concept comes after it.
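This is a sketch only, assuming preloaded, same-sized video elements and Chrome's behavior (`mergeParts` and everything inside it are illustrative names, not an established API); it does no seeking, no gap compensation, and ignores the Firefox stream-merging bug that the full code below works around:

```js
// Minimal merge pipeline: paint each <video> in turn onto a canvas,
// mix their audio through one AudioContext, record the merged stream.
function mergeParts(parts) { // parts: Array of preloaded HTMLVideoElements
  return new Promise(resolve => {
    const canvas = document.createElement('canvas');
    canvas.width = parts[0].videoWidth;
    canvas.height = parts[0].videoHeight;
    const ctx = canvas.getContext('2d');

    // route every part's audio into a recordable MediaStream
    const actx = new AudioContext();
    const dest = actx.createMediaStreamDestination();
    parts.forEach(v => actx.createMediaElementSource(v).connect(dest));

    // one MediaStream holding the canvas video track + the mixed audio track
    const stream = canvas.captureStream(30);
    stream.addTrack(dest.stream.getAudioTracks()[0]);

    const chunks = [];
    const rec = new MediaRecorder(stream);
    rec.ondataavailable = e => chunks.push(e.data);
    rec.onstop = () => resolve(new Blob(chunks, { type: 'video/webm' }));
    rec.start();

    // play the parts back-to-back; stop recording after the last one
    let i = -1;
    const playNext = () => {
      if (++i >= parts.length) { rec.stop(); return; }
      parts[i].onended = playNext;
      parts[i].play();
    };
    // paint the currently playing part on every animation frame
    const draw = () => {
      if (i >= 0 && i < parts.length) ctx.drawImage(parts[i], 0, 0);
      if (rec.state !== 'inactive') requestAnimationFrame(draw);
    };
    playNext();
    draw();
  });
}
```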
Here is a live proof of concept where we first slice the original video file into multiple smaller parts, shuffle these parts to mimic some montage, then create a canvas-based player that is also able to record this montage and export it.

NotaBene: this is a first version, and I still have a lot of bugs (notably in Firefox; it should work almost fine in Chrome).
```js
(() => {
  if (!('MediaRecorder' in window)) {
    throw new Error('unsupported browser');
  }
  // some global params
  const CHUNK_DURATION = 1000;
  const MAX_SLICES = 15; // get only 15 slices
  const FPS = 30;

  async function init() {
    const url = 'https://nickdesaulniers.github.io/netfix/demo/frag_bunny.mp4';
    const slices = await getSlices(url); // slice the original media in smaller chunks
    mess_up_array(slices); // Let's shuffle these slices,
    // otherwise there is no point merging it in a new file
    generateSelect(slices); // displays each chunk independently
    window.player = new SlicePlayer(slices); // init our player
  }

  const SlicePlayer = class {
    /*
      @args: Array of populated HTMLVideoElements
    */
    constructor(parts) {
      this.parts = parts;
      this.initVideoContext();
      this.initAudioContext();
      this.currentIndex = 0; // to know which video we'll play
      this.currentTime = 0;
      this.duration = parts.reduce((a, b) => b._duration + a, 0); // the sum of all parts' durations
      // (see below why "_")
      this.initDOM();
      // attach our onended callback only on the last vid
      this.parts[this.parts.length - 1].onended = e => this.onended();
      this.resetAll(); // set all videos' currentTime to 0 + draw first frame
    }
    initVideoContext() {
      const c = this.canvas = document.createElement('canvas');
      c.width = this.parts[0].videoWidth;
      c.height = this.parts[0].videoHeight;
      this.v_ctx = c.getContext('2d');
    }
    initAudioContext() {
      const a = this.a_ctx = new AudioContext();
      const gain = this.volume_node = a.createGain();
      gain.connect(a.destination);
      // extract the audio from our video elements so that we can record it
      this.audioSources = this.parts.map(v => a.createMediaElementSource(v));
      this.audioSources.forEach(s => s.connect(gain));
    }
    initDOM() {
      // all DOM things...
      canvas_player_timeline.max = this.duration;
      canvas_player_cont.appendChild(this.canvas);
      canvas_player_play_btn.onclick = e => this.startVid(this.currentIndex);
      canvas_player_cont.style.display = 'inline-block';
      canvas_player_timeline.oninput = e => {
        if (!this.recording) {
          this.onseeking(e);
        }
      };
      canvas_player_record_btn.onclick = e => this.record();
    }
    resetAll() {
      this.currentTime = canvas_player_timeline.value = 0;
      // when the first part has actually been reset to start
      this.parts[0].onseeked = e => {
        this.parts[0].onseeked = null;
        this.draw(0); // draw it
      };
      this.parts.forEach(v => v.currentTime = 0);
      if (this.playing && this.stopLoop) {
        this.playing = false;
        this.stopLoop();
      }
    }
    startVid(index) { // starts playing the video at given index
      if (index > this.parts.length - 1) { // that was the last one
        this.onended();
        return;
      }
      this.playing = true;
      this.currentIndex = index; // update our currentIndex
      this.parts[index].play().then(() => {
        // try to avoid at maximum the gaps between different parts
        if (this.recording && this.recorder.state === 'paused') {
          this.recorder.resume();
        }
      });
      this.startLoop();
    }
    startNext() { // starts the next part before the current one actually ended
      const nextPart = this.parts[this.currentIndex + 1];
      if (!nextPart) { // current === last
        return;
      }
      this.playing = true;
      if (!nextPart.paused) { // already playing?
        return;
      }
      // try to avoid at maximum the gaps between different parts
      if (this.recording && this.recorder && this.recorder.state === 'recording') {
        this.recorder.pause();
      }
      nextPart.play()
        .then(() => {
          ++this.currentIndex; // this is now the current video
          if (!this.playing) { // somehow got stopped in between?
            this.playing = true;
            this.startLoop(); // start again
          }
          // try to avoid at maximum the gaps between different parts
          if (this.recording && this.recorder.state === 'paused') {
            this.recorder.resume();
          }
        });
    }
    startLoop() { // starts our update loop
      // see https://stackoverflow.com/questions/40687010/
      this.stopLoop = audioTimerLoop(e => this.update(), 1000 / FPS);
    }
    update(t) { // at every tick
      const currentPart = this.parts[this.currentIndex];
      this.updateTimeLine(); // update the timeline
      if (!this.playing || currentPart.paused) { // somehow got stopped
        this.playing = false;
        if (this.stopLoop) {
          this.stopLoop(); // stop the loop
        }
      }
      this.draw(this.currentIndex); // draw the current video on the canvas
      // calculate how long we've got until the end of this part
      const remainingTime = currentPart._duration - currentPart.currentTime;
      if (remainingTime < (2 / FPS)) { // less than 2 frames?
        setTimeout(e => this.startNext(), remainingTime / 2); // start the next part
      }
    }
    draw(index) { // draw the video[index] on the canvas
      this.v_ctx.drawImage(this.parts[index], 0, 0);
    }
    updateTimeLine() {
      // get the sum of all parts' currentTime
      this.currentTime = this.parts.reduce((a, b) =>
        (isFinite(b.currentTime) ? b.currentTime : b._duration) + a, 0);
      canvas_player_timeline.value = this.currentTime;
    }
    onended() { // triggered when the last part ends
      // if we are recording, stop the recorder
      if (this.recording && this.recorder.state !== 'inactive') {
        this.recorder.stop();
      }
      // go back to first frame
      this.resetAll();
      this.currentIndex = 0;
      this.playing = false;
    }
    onseeking(evt) { // when we click the timeline
      // first reset all videos' currentTime to 0
      this.parts.forEach(v => v.currentTime = 0);
      this.currentTime = +evt.target.value;
      let index = 0;
      let sum = 0;
      // find which part should be played at this time
      for (index; index < this.parts.length; index++) {
        let p = this.parts[index];
        if (sum + p._duration > this.currentTime) {
          break;
        }
        sum += p._duration;
        p.currentTime = p._duration;
      }
      this.currentIndex = index;
      // set the currentTime of this part
      this.parts[index].currentTime = this.currentTime - sum;
      if (this.playing) { // if we were playing
        this.startVid(index); // set this part as the current one
      } else {
        this.parts[index].onseeked = e => { // wait until we actually seeked to the correct position
          this.parts[index].onseeked = null;
          this.draw(index); // and draw a single frame
        };
      }
    }
    record() { // inits the recording
      this.recording = true; // let the app know we're recording
      this.resetAll(); // go back to first frame
      canvas_controls.classList.add('disabled'); // disable controls
      const v_stream = this.canvas.captureStream(FPS); // make a stream of our canvas
      const dest = this.a_ctx.createMediaStreamDestination(); // make a stream of our AudioContext
      this.volume_node.connect(dest);
      // FF bug... see https://bugzilla.mozilla.org/show_bug.cgi?id=1296531
      let merged_stream = null;
      if (!('mozCaptureStream' in HTMLVideoElement.prototype)) {
        v_stream.addTrack(dest.stream.getAudioTracks()[0]);
        merged_stream = v_stream;
      } else {
        merged_stream = new MediaStream(
          v_stream.getVideoTracks().concat(dest.stream.getAudioTracks())
        );
      }
      const chunks = [];
      const rec = this.recorder = new MediaRecorder(merged_stream, {
        mimeType: MediaRecorder._preferred_type
      });
      rec.ondataavailable = e => chunks.push(e.data);
      rec.onstop = e => {
        merged_stream.getTracks().forEach(track => track.stop());
        this.export(new Blob(chunks));
      };
      rec.start();
      this.startVid(0); // start playing
    }
    export(blob) { // once the recording is over
      const a = document.createElement('a');
      a.download = a.innerHTML = 'merged.webm';
      a.href = URL.createObjectURL(blob, {
        type: MediaRecorder._preferred_type
      });
      exports_cont.appendChild(a);
      canvas_controls.classList.remove('disabled');
      this.recording = false;
      this.resetAll();
    }
  }; // END Player

  function generateSelect(slices) { // generates a select to show each slice independently
    const select = document.createElement('select');
    select.appendChild(new Option('none', -1));
    slices.forEach((v, i) => select.appendChild(new Option(`slice ${i}`, i)));
    document.body.insertBefore(select, slice_player_cont);
    select.onchange = e => {
      slice_player_cont.firstElementChild && slice_player_cont.firstElementChild.remove();
      if (+select.value === -1) return; // 'none'
      slice_player_cont.appendChild(slices[+select.value]);
    };
  }

  async function getSlices(url) { // loads the main video, and records some slices from it
    const mainVid = await loadVid(url);
    // try to make the slicing silent... That's not easy.
    let a = null;
    if (mainVid.mozCaptureStream) { // target FF
      a = new AudioContext();
      // this causes a Range error in chrome
      // a.createMediaElementSource(mainVid);
    } else { // chrome
      // this causes the stream to be muted too in FF
      mainVid.muted = true;
      // mainVid.volume = 0; // same
    }
    mainVid.play();
    const mainStream = mainVid.captureStream ? mainVid.captureStream() : mainVid.mozCaptureStream();
    console.log('mainVid loaded');
    const slices = await getSlicesInLoop(mainStream, mainVid);
    console.log('all slices loaded');
    setTimeout(() => console.clear(), 1000);
    if (a && a.close) { // kill the silence audio context (FF)
      a.close();
    }
    mainVid.pause();
    URL.revokeObjectURL(mainVid.src);
    return Promise.resolve(slices);
  }

  async function getSlicesInLoop(stream, mainVid) { // far from being precise
    // to do it well, we would need to get the keyframes info, but it's out of scope for this answer
    let slices = [];
    const loop = async function(i) {
      const slice = await mainVid.play().then(() => getNewSlice(stream, mainVid));
      console.log(`${i + 1} slice(s) loaded`);
      slices.push(slice);
      if ((mainVid.currentTime < mainVid._duration) && (i + 1 < MAX_SLICES)) {
        loop(++i);
      } else done(slices);
    };
    loop(0);
    let done;
    return new Promise((res, rej) => {
      done = arr => res(arr);
    });
  }

  function getNewSlice(stream, vid) { // one recorder per slice
    return new Promise((res, rej) => {
      const rec = new MediaRecorder(stream, {
        mimeType: MediaRecorder._preferred_type
      });
      const chunks = [];
      rec.ondataavailable = e => chunks.push(e.data);
      rec.onstop = e => {
        const blob = new Blob(chunks);
        res(loadVid(URL.createObjectURL(blob)));
      };
      rec.start();
      setTimeout(() => {
        const p = vid.pause();
        if (p && p.then) {
          p.then(() => rec.stop());
        } else {
          rec.stop();
        }
      }, CHUNK_DURATION);
    });
  }

  function loadVid(url) { // helper returning a video, preloaded
    return fetch(url)
      .then(r => r.blob())
      .then(b => makeVid(URL.createObjectURL(b)));
  }

  function makeVid(url) { // helper to create a video element
    const v = document.createElement('video');
    v.controls = true;
    v.preload = 'metadata';
    return new Promise((res, rej) => {
      v.onloadedmetadata = e => {
        // chrome duration bug...
        // see https://bugs.chromium.org/p/chromium/issues/detail?id=642012
        // will also occur in next FF versions, in worse...
        if (v.duration === Infinity) {
          v.onseeked = e => {
            v._duration = v.currentTime; // FF new bug never updates duration to the correct value
            v.onseeked = null;
            v.currentTime = 0;
            res(v);
          };
          v.currentTime = 1e5; // big but not too big either
        } else {
          v._duration = v.duration;
          res(v);
        }
      };
      v.onerror = rej;
      v.src = url;
    });
  }

  function mess_up_array(arr) { // shuffles an array
    const _sort = () => {
      let r = Math.random() - .5;
      return r < -0.1 ? -1 : r > 0.1 ? 1 : 0;
    };
    arr.sort(_sort);
    arr.sort(_sort);
    arr.sort(_sort);
  }

  /*
    An alternative timing loop, based on AudioContext's clock
    @arg callback : a callback function
      with the audioContext's currentTime passed as unique argument
    @arg frequency : float in ms;
    @returns : a stop function
  */
  function audioTimerLoop(callback, frequency) {
    const freq = frequency / 1000; // AudioContext time parameters are in seconds
    const aCtx = new AudioContext();
    // Chrome needs our oscillator node to be attached to the destination
    // So we create a silent Gain Node
    const silence = aCtx.createGain();
    silence.gain.value = 0;
    silence.connect(aCtx.destination);

    onOSCend();

    var stopped = false; // A flag to know when we'll stop the loop
    function onOSCend() {
      const osc = aCtx.createOscillator();
      osc.onended = onOSCend; // so we can loop
      osc.connect(silence);
      osc.start(0); // start it now
      osc.stop(aCtx.currentTime + freq); // stop it next frame
      callback(aCtx.currentTime); // one frame is done
      if (stopped) { // user broke the loop
        osc.onended = function() {
          aCtx.close(); // clear the audioContext
        };
        return;
      }
    }
    // return a function to stop our loop
    return () => stopped = true;
  }

  // get the preferred codec available (vp8 is my personal choice, more readers support it)
  MediaRecorder._preferred_type = [
      "video/webm;codecs=vp8",
      "video/webm;codecs=vp9",
      "video/webm;codecs=h264",
      "video/webm"
    ]
    .filter(t => MediaRecorder.isTypeSupported(t))[0];

  init();
})();
```
```css
#canvas_player_cont {
  display: none;
  position: relative;
}

#canvas_player_cont.disabled {
  opacity: .7;
  pointer-events: none;
}

#canvas_controls {
  position: absolute;
  bottom: 4px;
  left: 0px;
  width: calc(100% - 8px);
  display: flex;
  background: rgba(0, 0, 0, .7);
  padding: 4px;
}

#canvas_player_play_btn {
  flex-grow: 0;
}

#canvas_player_timeline {
  flex-grow: 1;
}
```
<div id="slice_player_cont">
</div>
<div id="canvas_player_cont">
<div id="canvas_controls">
<button id="canvas_player_play_btn">play</button>
<input type="range" min="0" max="10" step="0.01" id="canvas_player_timeline">
<button id="canvas_player_record_btn">save</button>
</div>
</div>
<div id="exports_cont"></div>
【Comments】
Nice job. The exported file stops video playback between 12 and 34 seconds at Firefox 54 and Chromium 59.

@guest271314 I'm not sure I can reproduce this. I fixed some bugs + improved the transition between each slice + added comments.

That was with MAX_SLICES set to 60. The requirement does not include perfection. We can use your model as a basis. Will have to re-read it and try at least a few more times.
fwiw, after 29 attempts, have been able to meet the requirement using ts-ebml and MediaSource at both Firefox and Chromium: github.com/guest271314/recordMediaFragments