A Brief Analysis of the Video Capture and Sending Flow in WebRtc

Posted by BennuCTech



Preface

This article is based on the open-source project PineAppRtc: https://github.com/thfhongfeng/PineAppRtc

A project requirement had us sending a video stream out through WebRtc, so I dug into how WebRtc captures video data, processes it, and sends it, and this article is the result.

Capture and Sending

When using webrtc for a real-time call, once the two sides are connected, a PeerConnection object is created from the negotiated parameters. The concrete code lives in the PeerConnectionClient class, which you implement yourself. This connection is what pushes and pulls the media streams.

Then a MediaStream object is created and added to the PeerConnection:

mPeerConnection.addStream(mMediaStream);

The MediaStream is what carries the media; you can add multiple tracks to a MediaStream object, for example an audio track and a video track:

mMediaStream.addTrack(createVideoTrack(mVideoCapturer));

Here mVideoCapturer is a VideoCapturer object that handles video capture; in essence it is a wrapper around the camera.
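Putting these pieces together, the setup sequence looks roughly like this (a minimal sketch; mFactory is the PeerConnectionFactory held by PeerConnectionClient, and the "ARDAMS" stream label is an assumption borrowed from the AppRTC demo, not taken from this project):

mMediaStream = mFactory.createLocalMediaStream("ARDAMS"); // label is an assumption
mMediaStream.addTrack(createVideoTrack(mVideoCapturer));  // video track, see below
mPeerConnection.addStream(mMediaStream);                  // hand the stream to the connection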

VideoCapturer is an interface with many implementations. Here we take CameraCapturer and its subclass Camera1Capturer as the example.
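For reference, a concrete capturer is typically obtained through the matching camera enumerator. A minimal sketch, assuming the standard Camera1Enumerator API (this snippet is not from the project):

// Pick the front camera; captureToTexture = false so frames arrive as byte[] buffers.
Camera1Enumerator enumerator = new Camera1Enumerator(false);
for (String deviceName : enumerator.getDeviceNames()) {
    if (enumerator.isFrontFacing(deviceName)) {
        mVideoCapturer = enumerator.createCapturer(deviceName, null);
        break;
    }
}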

Let's continue with the createVideoTrack function:

private VideoTrack createVideoTrack(VideoCapturer capturer) {
    mVideoSource = mFactory.createVideoSource(capturer);
    capturer.startCapture(mVideoWidth, mVideoHeight, mVideoFps);

    mLocalVideoTrack = mFactory.createVideoTrack(VIDEO_TRACK_ID, mVideoSource);
    mLocalVideoTrack.setEnabled(mRenderVideo);
    mLocalVideoTrack.addRenderer(new VideoRenderer(mLocalRender));
    return mLocalVideoTrack;
}
As you can see, the createVideoSource function wraps the VideoCapturer into a VideoSource object, and that VideoSource is then used to create the VideoTrack for the track.

Let's look at the createVideoSource function:

public VideoSource createVideoSource(VideoCapturer capturer) {
    org.webrtc.EglBase.Context eglContext = this.localEglbase == null ? null : this.localEglbase.getEglBaseContext();
    SurfaceTextureHelper surfaceTextureHelper = SurfaceTextureHelper.create("VideoCapturerThread", eglContext);
    long nativeAndroidVideoTrackSource = nativeCreateVideoSource(this.nativeFactory, surfaceTextureHelper, capturer.isScreencast());
    CapturerObserver capturerObserver = new AndroidVideoTrackSourceObserver(nativeAndroidVideoTrackSource);
    capturer.initialize(surfaceTextureHelper, ContextUtils.getApplicationContext(), capturerObserver);
    return new VideoSource(nativeAndroidVideoTrackSource);
}

Here a new AndroidVideoTrackSourceObserver object is created; it implements the CapturerObserver interface. Then VideoCapturer's initialize function is called. In CameraCapturer's implementation of initialize, the AndroidVideoTrackSourceObserver object is assigned to the VideoCapturer's capturerObserver field.
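In outline, that initialize implementation simply stores what it is given so the capture session can reach it later (abridged, reconstructed from the description above rather than copied from the source):

public void initialize(SurfaceTextureHelper surfaceTextureHelper, Context applicationContext,
        CapturerObserver capturerObserver) {
    this.surfaceHelper = surfaceTextureHelper;     // used when creating the camera session
    this.applicationContext = applicationContext;  // checked in startCapture
    this.capturerObserver = capturerObserver;      // captured frames are forwarded to this observer
}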

Going back to the PeerConnectionClient class, it also calls VideoCapturer's startCapture function. Let's look at its implementation in CameraCapturer:

public void startCapture(int width, int height, int framerate) {
    Logging.d("CameraCapturer", "startCapture: " + width + "x" + height + "@" + framerate);
    if (this.applicationContext == null) {
        throw new RuntimeException("CameraCapturer must be initialized before calling startCapture.");
    } else {
        Object var4 = this.stateLock;
        synchronized(this.stateLock) {
            if (!this.sessionOpening && this.currentSession == null) {
                this.width = width;
                this.height = height;
                this.framerate = framerate;
                this.sessionOpening = true;
                this.openAttemptsRemaining = 3;
                this.createSessionInternal(0, (MediaRecorder)null);
            } else {
                Logging.w("CameraCapturer", "Session already open");
            }
        }
    }
}

Finally it executes createSessionInternal:

private void createSessionInternal(int delayMs, final MediaRecorder mediaRecorder) {
    this.uiThreadHandler.postDelayed(this.openCameraTimeoutRunnable, (long)(delayMs + 10000));
    this.cameraThreadHandler.postDelayed(new Runnable() {
        public void run() {
            CameraCapturer.this.createCameraSession(CameraCapturer.this.createSessionCallback, CameraCapturer.this.cameraSessionEventsHandler, CameraCapturer.this.applicationContext, CameraCapturer.this.surfaceHelper, mediaRecorder, CameraCapturer.this.cameraName, CameraCapturer.this.width, CameraCapturer.this.height, CameraCapturer.this.framerate);
        }
    }, (long)delayMs);
}
This in turn executes createCameraSession; in Camera1Capturer that function looks like this:

protected void createCameraSession(CreateSessionCallback createSessionCallback, Events events, Context applicationContext, SurfaceTextureHelper surfaceTextureHelper, MediaRecorder mediaRecorder, String cameraName, int width, int height, int framerate) {
    Camera1Session.create(createSessionCallback, events, this.captureToTexture || mediaRecorder != null, applicationContext, surfaceTextureHelper, mediaRecorder, Camera1Enumerator.getCameraIndex(cameraName), width, height, framerate);
}

As you can see, it creates a Camera1Session. This class is what actually operates the camera; here we finally meet the familiar Camera, in the listenForBytebufferFrames function:

private void listenForBytebufferFrames() {
    this.camera.setPreviewCallbackWithBuffer(new PreviewCallback() {
        public void onPreviewFrame(byte[] data, Camera callbackCamera) {
            Camera1Session.this.checkIsOnCameraThread();
            if (callbackCamera != Camera1Session.this.camera) {
                Logging.e("Camera1Session", "Callback from a different camera. This should never happen.");
            } else if (Camera1Session.this.state != Camera1Session.SessionState.RUNNING) {
                Logging.d("Camera1Session", "Bytebuffer frame captured but camera is no longer running.");
            } else {
                long captureTimeNs = TimeUnit.MILLISECONDS.toNanos(SystemClock.elapsedRealtime());
                if (!Camera1Session.this.firstFrameReported) {
                    int startTimeMs = (int)TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - Camera1Session.this.constructionTimeNs);
                    Camera1Session.camera1StartTimeMsHistogram.addSample(startTimeMs);
                    Camera1Session.this.firstFrameReported = true;
                }

                Camera1Session.this.events.onByteBufferFrameCaptured(Camera1Session.this, data, Camera1Session.this.captureFormat.width, Camera1Session.this.captureFormat.height, Camera1Session.this.getFrameOrientation(), captureTimeNs);
                Camera1Session.this.camera.addCallbackBuffer(data);
            }
        }
    });
}
As you can see, once the preview callback onPreviewFrame delivers the video data, events.onByteBufferFrameCaptured is called. This events object was passed in at create time; tracing back through the flow above, it is the cameraSessionEventsHandler in CameraCapturer, whose onByteBufferFrameCaptured function looks like this:

public void onByteBufferFrameCaptured(CameraSession session, byte[] data, int width, int height, int rotation, long timestamp) {
    CameraCapturer.this.checkIsOnCameraThread();
    synchronized(CameraCapturer.this.stateLock) {
        if (session != CameraCapturer.this.currentSession) {
            Logging.w("CameraCapturer", "onByteBufferFrameCaptured from another session.");
        } else {
            if (!CameraCapturer.this.firstFrameObserved) {
                CameraCapturer.this.eventsHandler.onFirstFrameAvailable();
                CameraCapturer.this.firstFrameObserved = true;
            }

            CameraCapturer.this.cameraStatistics.addFrame();
            CameraCapturer.this.capturerObserver.onByteBufferFrameCaptured(data, width, height, rotation, timestamp);
        }
    }
}

This calls capturerObserver.onByteBufferFrameCaptured. That capturerObserver is the AndroidVideoTrackSourceObserver object passed in during the earlier initialize call; its onByteBufferFrameCaptured function is:

public void onByteBufferFrameCaptured(byte[] data, int width, int height, int rotation, long timeStamp) {
    this.nativeOnByteBufferFrameCaptured(this.nativeSource, data, data.length, width, height, rotation, timeStamp);
}
It invokes a native function, and with that the Java-side flow is complete; the data is presumably processed and sent inside the native layer.
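To recap, the complete call chain on the Java side is:

Camera.PreviewCallback.onPreviewFrame                 (Camera1Session)
  -> CameraSession.Events.onByteBufferFrameCaptured   (CameraCapturer)
  -> CapturerObserver.onByteBufferFrameCaptured       (AndroidVideoTrackSourceObserver)
  -> nativeOnByteBufferFrameCaptured                  (native: encode and send)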

The real key here is the VideoCapturer. Besides CameraCapturer and its subclasses, there are other implementations such as FileVideoCapturer.
If we need to send raw byte[] data directly, we can implement a custom VideoCapturer, take hold of its capturerObserver, and call that observer's onByteBufferFrameCaptured function ourselves, as sketched below.
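A minimal sketch of such a capturer follows. It assumes the same VideoCapturer / CapturerObserver interfaces used throughout this article (in this WebRTC version CapturerObserver is the nested VideoCapturer.CapturerObserver; newer releases changed this API). The class name ByteArrayCapturer and the pushFrame method are invented here for illustration:

import android.content.Context;
import android.os.SystemClock;
import java.util.concurrent.TimeUnit;
import org.webrtc.*;

// A VideoCapturer that opens no camera; external code feeds it raw NV21
// byte[] frames through pushFrame().
public class ByteArrayCapturer implements VideoCapturer {
    private CapturerObserver capturerObserver;

    @Override
    public void initialize(SurfaceTextureHelper surfaceTextureHelper,
            Context applicationContext, CapturerObserver capturerObserver) {
        // WebRTC hands us the AndroidVideoTrackSourceObserver here; keep it.
        this.capturerObserver = capturerObserver;
    }

    @Override public void startCapture(int width, int height, int framerate) {}
    @Override public void stopCapture() {}
    @Override public void changeCaptureFormat(int width, int height, int framerate) {}
    @Override public void dispose() {}
    @Override public boolean isScreencast() { return false; }

    // Call from your own pipeline to push one frame into the WebRTC source.
    public void pushFrame(byte[] nv21Data, int width, int height, int rotation) {
        if (capturerObserver == null) {
            return;
        }
        long captureTimeNs = TimeUnit.MILLISECONDS.toNanos(SystemClock.elapsedRealtime());
        capturerObserver.onByteBufferFrameCaptured(nv21Data, width, height, rotation, captureTimeNs);
    }
}

Pass an instance of this class to createVideoSource / createVideoTrack exactly as with the camera capturer, then call pushFrame whenever your own source produces a frame.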
