WebRTC paced sender
Posted by shichaog
The paced sender, usually referred to simply as the pacer, is part of the WebRTC RTP stack and is used to smooth the flow of packets sent onto the network. Consider a 60 fps video stream at 10 Mbps: in the ideal case each frame is about 21 kB and is packetized into roughly 36 RTP packets. In practice, however, video consists of I, P and B frames: I-frames compress worst but can be decoded on their own, P-frames use information from preceding frames and therefore compress better than I-frames, and B-frames use information from both preceding and following frames and compress best. Frame sizes therefore vary considerably, so the amount of data that has to go out over a short interval can be very large or zero. On top of that, it is quite common for a video encoder to overshoot the target frame size during sudden motion, and with screen sharing in particular, frames 10 or even 100 times larger than the ideal size are a very common occurrence. Sending such bursts of packets directly leads to problems such as network congestion, buffer overflows and packet loss.
The pacer addresses this with a buffer in which media packets (audio, video, and so on) are queued and then released onto the network at a controlled rate using a leaky bucket. The buffer holds a separate FIFO per media stream, so audio can be prioritized over video, and streams with the same priority are sent in a round-robin fashion so that no single stream blocks the others. Since the pacer controls the bitrate put onto the network, it is also responsible for generating padding whenever a minimum send rate is required.
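To make the leaky-bucket idea concrete, the following is a minimal sketch of a pacing budget: each processing tick adds bytes according to the configured rate (capped so an idle period cannot produce a huge burst), each sent packet subtracts its size, and sending stops while the budget is exhausted. This is an illustrative toy with hypothetical names, not WebRTC's actual webrtc::IntervalBudget or webrtc::PacingController code.

#include <algorithm>
#include <cstdint>

// Toy leaky-bucket pacing budget (illustrative only).
class PacingBudget {
 public:
  explicit PacingBudget(int64_t pacing_rate_bps)
      : pacing_rate_bps_(pacing_rate_bps) {}

  // On every processing tick: grow the budget by the bytes the configured
  // rate allows for the elapsed time, capped at roughly 40 ms worth of data
  // so a long idle period cannot turn into an unbounded burst.
  void OnTimePassed(int64_t elapsed_ms) {
    int64_t new_bytes = pacing_rate_bps_ * elapsed_ms / 8000;
    budget_bytes_ = std::min(budget_bytes_ + new_bytes, MaxBurstBytes());
  }

  // Every packet sent consumes budget; the budget may go negative, which
  // simply pushes the next send further into the future.
  void OnPacketSent(int64_t packet_bytes) { budget_bytes_ -= packet_bytes; }

  bool CanSend() const { return budget_bytes_ > 0; }

 private:
  int64_t MaxBurstBytes() const { return pacing_rate_bps_ * 40 / 8000; }

  const int64_t pacing_rate_bps_;
  int64_t budget_bytes_ = 0;
};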
- Life of a media packet in WebRTC
1. RTPSenderVideo or RTPSenderAudio packetizes the media into RTP packets;
2. The RTP packets are handed to the RTPSender object responsible for transmission;
3. Through the RtpPacketSender interface, the RTP packets are enqueued in batches into the pacer;
4. The RTP packets sit in the pacer's queue until it is time to send them;
5. When the send time is reached, the pacer calls the PacingController::PacketSender() callback, which is normally implemented by the PacketRouter;
6. Based on the packet's SSRC, the router from step 5 forwards the packet to the corresponding RTP module, where the RTPSenderEgress object sets the final timestamp;
7. The packet is handed to the lower-level Transport interface, at which point it leaves the pacer's control.
Running asynchronously to all of this is the send-side bandwidth estimation; the target send bitrate is pushed into the RtpPacketPacer object via void SetPacingRates(DataRate pacing_rate, DataRate padding_rate) (a usage sketch follows below).
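As a rough usage sketch of that last step, the congestion-control side could push a new target rate into the pacer as shown below. This is a hedged example: the 2.5x pacing factor and the fixed padding floor are illustrative assumptions, not WebRTC's actual policy, and the include paths assume a recent WebRTC checkout.

#include "api/units/data_rate.h"
#include "modules/pacing/rtp_packet_pacer.h"

// Sketch: pushing a new target rate into the pacer (values are illustrative).
void OnTargetRateChanged(webrtc::RtpPacketPacer* pacer,
                         webrtc::DataRate target_rate) {
  // Pace somewhat above the target so a filled queue drains instead of growing.
  webrtc::DataRate pacing_rate = target_rate * 2.5;
  // Keep a small padding floor so the bandwidth estimate does not decay
  // while media is silent.
  webrtc::DataRate padding_rate = webrtc::DataRate::KilobitsPerSec(50);
  pacer->SetPacingRates(pacing_rate, padding_rate);
}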
- Packet prioritization
The pacer determines the send priority from the RTP packet type and the enqueue order, very much like the scheduling policies of a real-time operating system.
Priority decreases in the order audio, retransmissions, video and FEC, then padding. Only packets with the same SSRC are ordered by enqueue time; given equal priority, the PrioritizedPacketQueue alternates between media streams to make sure no stream needlessly blocks the others.
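The scheduling idea can be pictured with a small toy queue: pick the lowest occupied priority level first, and within that level rotate over the per-SSRC FIFOs. The sketch below uses hypothetical types and is purely conceptual; the real webrtc::PrioritizedPacketQueue additionally tracks queue time, byte counts and per-stream state.

#include <cstdint>
#include <deque>
#include <map>
#include <optional>
#include <utility>

// Conceptual sketch only: priority levels containing per-stream FIFOs that
// are served round-robin within a level.
struct ToyPacket {
  uint32_t ssrc = 0;
  int priority = 0;  // 0 = audio (highest) ... 3 = padding (lowest).
};

class ToyPacedQueue {
 public:
  void Push(ToyPacket packet) {
    Level& level = levels_[packet.priority];
    std::deque<ToyPacket>& fifo = level.per_stream[packet.ssrc];
    if (fifo.empty()) {
      // The stream becomes active at this level: join the service rotation.
      level.rotation.push_back(packet.ssrc);
    }
    fifo.push_back(std::move(packet));
  }

  // Lower priority value wins; within one level the active SSRCs are served
  // round-robin so no single stream blocks the others.
  std::optional<ToyPacket> Pop() {
    for (auto& [priority, level] : levels_) {
      if (level.rotation.empty()) continue;
      uint32_t ssrc = level.rotation.front();
      level.rotation.pop_front();
      std::deque<ToyPacket>& fifo = level.per_stream[ssrc];
      ToyPacket packet = std::move(fifo.front());
      fifo.pop_front();
      if (fifo.empty()) {
        level.per_stream.erase(ssrc);
      } else {
        level.rotation.push_back(ssrc);  // Rotate to the back of the line.
      }
      return packet;
    }
    return std::nullopt;
  }

 private:
  struct Level {
    std::map<uint32_t, std::deque<ToyPacket>> per_stream;
    std::deque<uint32_t> rotation;
  };
  std::map<int, Level> levels_;  // Ordered by priority value, lowest first.
};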
- Implementation
The main class of the pacer is TaskQueuePacedSender, which uses a task queue to manage thread safety and to schedule delayed tasks, but delegates most of the work to the PacingController class. This split makes it possible to plug in pacers with different scheduling strategies while keeping the pacing logic unchanged.
- RTP packet routing
The PacketRouter routes packets coming out of the pacer to the corresponding RTP module (audio or video). Its responsibilities are:
- The SendPacket method looks up the RTP module matching the packet's SSRC so the packet can be routed on to the network layer (as sketched below);
- If send-side bandwidth estimation is used, it populates the transport-wide extension sequence number;
- Generating padding; modules that support payload-based padding are preferred, and the module that last sent media is always the first choice;
- Returning any FEC generated after media has been sent;
- Forwarding REMB and/or transport feedback messages to the appropriate RTP module.
FEC is currently generated independently per SSRC, so it is always returned from the RTP module after media has been sent.
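The routing in the first bullet is essentially a lookup keyed by SSRC. The following is a hedged sketch of that idea using hypothetical toy types; it is not the real webrtc::PacketRouter (whose declaration appears in section 4.3), which also deals with RTX SSRCs, FEC, padding generation and locking.

#include <cstdint>
#include <memory>
#include <unordered_map>
#include <utility>

// Minimal hypothetical interfaces for the sketch.
class ToyRtpPacket {
 public:
  explicit ToyRtpPacket(uint32_t ssrc) : ssrc_(ssrc) {}
  uint32_t ssrc() const { return ssrc_; }

 private:
  uint32_t ssrc_;
};

class ToyRtpModule {
 public:
  virtual ~ToyRtpModule() = default;
  virtual void TrySendPacket(std::unique_ptr<ToyRtpPacket> packet) = 0;
};

// Routes packets leaving the pacer to the RTP module owning their SSRC.
class ToyPacketRouter {
 public:
  void AddSendModule(uint32_t ssrc, ToyRtpModule* module) {
    send_modules_[ssrc] = module;
  }

  void SendPacket(std::unique_ptr<ToyRtpPacket> packet) {
    auto it = send_modules_.find(packet->ssrc());
    if (it == send_modules_.end()) {
      // No module registered for this SSRC: drop it (real code logs a warning).
      return;
    }
    it->second->TrySendPacket(std::move(packet));
  }

 private:
  std::unordered_map<uint32_t, ToyRtpModule*> send_modules_;
};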
- Relevant APIs
RTP packets are sent to the pacer with RtpPacketSender::EnqueuePackets(std::vector<std::unique_ptr<RtpPacketToSend>> packets). The pacer takes a constructor argument of type PacingController::PacketSender, which is invoked as a callback when RTP packets are actually sent. void SetPacingRates(DataRate pacing_rate, DataRate padding_rate) controls the send bitrate; if the send queue holds no RTP packets waiting to be sent and the send bitrate is below padding_rate, the pacer asks the PacketRouter for padding packets. To allow full control over pausing and resuming sending (for example when the network is unavailable), Pause() and Resume() are provided.
If the bandwidth estimator supports bandwidth probing, it may request a cluster of packets to be sent at a specified bitrate in order to probe whether this causes extra delay or loss in the network. For packets sent via void CreateProbeCluster(...), the PacketRouter marks the corresponding cluster_id based on their PacedPacketInfo. If a congestion window is used, the congestion window state is updated with SetCongestionWindow() and UpdateOutstandingData(). In addition, SetAccountForAudioPackets() sets whether audio packets count towards the bandwidth budget, SetIncludeOverhead() selects whether the whole RTP packet or only its payload is counted, and SetTransportOverhead() sets the extra per-packet overhead, e.g. UDP/IP headers.
There are also a few APIs for pacer statistics: OldestPacketWaitTime() returns the time since the oldest packet currently in the queue was added, QueueSizeData() is the total number of bytes of all packets in the pacer queue, FirstSentPacketTime() is the absolute time at which the first packet was sent, and ExpectedQueueTime() is the total queue size in bytes divided by the send bitrate.
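Putting those calls together, a send path driving the pacer might look roughly like the sketch below. It is a hedged usage example: the concrete rates, the 28-byte UDP/IPv4 overhead and the include paths are assumptions for illustration, and the pacer is assumed to be a webrtc::TaskQueuePacedSender, which implements both of the interfaces quoted later in this chapter.

#include <memory>
#include <utility>
#include <vector>

#include "api/units/data_rate.h"
#include "api/units/data_size.h"
#include "api/units/time_delta.h"
#include "modules/pacing/task_queue_paced_sender.h"
#include "modules/rtp_rtcp/source/rtp_packet_to_send.h"

// Hedged sketch of the pacer control surface described above.
void ConfigureAndFeedPacer(
    webrtc::TaskQueuePacedSender& pacer,
    std::vector<std::unique_ptr<webrtc::RtpPacketToSend>> packets) {
  // Control path: rates must be set before anything can be sent.
  pacer.SetPacingRates(webrtc::DataRate::KilobitsPerSec(2500),
                       webrtc::DataRate::KilobitsPerSec(100));
  // Count the full RTP packet plus a per-packet transport overhead against
  // the budget rather than only the payload (28 bytes ~ IPv4 + UDP headers).
  pacer.SetIncludeOverhead();
  pacer.SetTransportOverhead(webrtc::DataSize::Bytes(28));
  // Data path: enqueue media; the pacer decides when each packet goes out.
  pacer.EnqueuePackets(std::move(packets));
  // Observability: roughly how long the current queue will take to drain.
  webrtc::TimeDelta expected_drain_time = pacer.ExpectedQueueTime();
  (void)expected_drain_time;
}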
4.1 Creating the pacer
As described in section 2.2 of chapter 2, when webrtc::PeerConnectionFactory creates a PeerConnection object it creates a webrtc::Call object for it, and when the webrtc::Call object is created it in turn creates the webrtc::RtpTransportControllerSend object that manages RTP transport. This RTP transport controller mainly consists of the PacedSender, PacketRouter and congestion control (CongestionControl) modules. PacedSender is the pacer module's inward-facing interface, receiving audio/video RTP packets and the pacer's send-control parameters; PacketRouter is the pacer module's outward-facing interface, routing audio/video RTP packets to the corresponding RTP module. WebRTC has two PacedSender implementations, webrtc::PacedSender and webrtc::TaskQueuePacedSender, selectable through configuration options; the default is webrtc::TaskQueuePacedSender, and the RTP packet sending described in section 3.7.3 is based on the webrtc::TaskQueuePacedSender implementation.
The creation of the webrtc::TaskQueuePacedSender object (webrtc/modules/pacing/task_queue_paced_sender.cc) proceeds as follows:
Figure 4-1: pacer creation process
A Call object is a bidirectional connection with zero or more inputs and outputs that use RTP transport objects. One Call can contain multiple send and receive media streams (audio streams/video streams); these streams terminate at the same remote endpoint and share bitrate estimation. When the PeerConnection API is used, PeerConnection and Call objects correspond one to one. The Call object is defined in webrtc/call/call.h as follows:
class Call {
public:
using Config = CallConfig;
struct Stats {
std::string ToString(int64_t time_ms) const;
int send_bandwidth_bps = 0; // Estimated available send bandwidth.
int max_padding_bitrate_bps = 0; // Cumulative configured max padding.
int recv_bandwidth_bps = 0; // Estimated available receive bandwidth.
int64_t pacer_delay_ms = 0;
int64_t rtt_ms = -1;
};
static Call* Create(const Call::Config& config);
static Call* Create(const Call::Config& config,
Clock* clock,
std::unique_ptr<RtpTransportControllerSendInterface>
transportControllerSend);
virtual AudioSendStream* CreateAudioSendStream(
const AudioSendStream::Config& config) = 0;
virtual void DestroyAudioSendStream(AudioSendStream* send_stream) = 0;
virtual AudioReceiveStreamInterface* CreateAudioReceiveStream(
const AudioReceiveStreamInterface::Config& config) = 0;
virtual void DestroyAudioReceiveStream(
AudioReceiveStreamInterface* receive_stream) = 0;
virtual VideoSendStream* CreateVideoSendStream(
VideoSendStream::Config config,
VideoEncoderConfig encoder_config) = 0;
virtual VideoSendStream* CreateVideoSendStream(
VideoSendStream::Config config,
VideoEncoderConfig encoder_config,
std::unique_ptr<FecController> fec_controller);
virtual void DestroyVideoSendStream(VideoSendStream* send_stream) = 0;
virtual VideoReceiveStreamInterface* CreateVideoReceiveStream(
VideoReceiveStreamInterface::Config configuration) = 0;
virtual void DestroyVideoReceiveStream(
VideoReceiveStreamInterface* receive_stream) = 0;
// In order for a created VideoReceiveStreamInterface to be aware that it is
// protected by a FlexfecReceiveStream, the latter should be created before
// the former.
virtual FlexfecReceiveStream* CreateFlexfecReceiveStream(
const FlexfecReceiveStream::Config config) = 0;
virtual void DestroyFlexfecReceiveStream(
FlexfecReceiveStream* receive_stream) = 0;
// When a resource is overused, the Call will try to reduce the load on the
// system, for example by reducing the resolution or frame rate of encoded
// streams.
virtual void AddAdaptationResource(rtc::scoped_refptr<Resource> resource) = 0;
// All received RTP and RTCP packets for the call should be inserted to this
// PacketReceiver. The PacketReceiver pointer is valid as long as the
// Call instance exists.
virtual PacketReceiver* Receiver() = 0;
// This is used to access the transport controller send instance owned by
// Call. The send transport controller is currently owned by Call for legacy
// reasons. (for instance variants of call tests are built on this assumption)
// TODO(srte): Move ownership of transport controller send out of Call and
// remove this method interface.
virtual RtpTransportControllerSendInterface* GetTransportControllerSend() = 0;
// Returns the call statistics, such as estimated send and receive bandwidth,
// pacing delay, etc.
virtual Stats GetStats() const = 0;
// TODO(skvlad): When the unbundled case with multiple streams for the same
// media type going over different networks is supported, track the state
// for each stream separately. Right now it's global per media type.
virtual void SignalChannelNetworkState(MediaType media,
NetworkState state) = 0;
virtual void OnAudioTransportOverheadChanged(
int transport_overhead_per_packet) = 0;
// Called when a receive stream's local ssrc has changed and association with
// send streams needs to be updated.
virtual void OnLocalSsrcUpdated(AudioReceiveStreamInterface& stream,
uint32_t local_ssrc) = 0;
virtual void OnLocalSsrcUpdated(VideoReceiveStreamInterface& stream,
uint32_t local_ssrc) = 0;
virtual void OnLocalSsrcUpdated(FlexfecReceiveStream& stream,
uint32_t local_ssrc) = 0;
virtual void OnUpdateSyncGroup(AudioReceiveStreamInterface& stream,
absl::string_view sync_group) = 0;
virtual void OnSentPacket(const rtc::SentPacket& sent_packet) = 0;
virtual void SetClientBitratePreferences(
const BitrateSettings& preferences) = 0;
virtual const FieldTrialsView& trials() const = 0;
virtual TaskQueueBase* network_thread() const = 0;
virtual TaskQueueBase* worker_thread() const = 0;
virtual ~Call() {}
};
The names of the interface methods indicate what each method does. These methods are overridden in the webrtc::internal::Call object, whose member variables include the following:
std::map<uint32_t, AudioSendStream*> audio_send_ssrcs_
RTC_GUARDED_BY(worker_thread_);
std::map<uint32_t, VideoSendStream*> video_send_ssrcs_
RTC_GUARDED_BY(worker_thread_);
std::set<VideoSendStream*> video_send_streams_
RTC_GUARDED_BY(worker_thread_);
std::set<AudioReceiveStreamImpl*> audio_receive_streams_
RTC_GUARDED_BY(worker_thread_);
std::set<VideoReceiveStream2*> video_receive_streams_
RTC_GUARDED_BY(worker_thread_);
audio_send_ssrcs_ and video_send_ssrcs_ are both map containers that associate an SSRC (uint32_t) with a webrtc::internal::AudioSendStream (or VideoSendStream) object. One SSRC corresponds to one media stream: for example, microphone capture and shared computer audio can be two audio streams with different SSRCs, which are mixed at the receiver before playback; video works in the same way.
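Looking up the stream for a given SSRC is then a plain map lookup; conceptually (hypothetical helper, not code from the internal Call implementation):

#include <cstdint>
#include <map>

class AudioSendStream;  // Stands in for webrtc::internal::AudioSendStream.

AudioSendStream* FindAudioSendStream(
    const std::map<uint32_t, AudioSendStream*>& audio_send_ssrcs,
    uint32_t ssrc) {
  auto it = audio_send_ssrcs.find(ssrc);
  return it != audio_send_ssrcs.end() ? it->second : nullptr;
}

The webrtc::internal::AudioSendStream referenced by these maps holds, among other members, the links to the transport controller and the RTP/RTCP module: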
namespace internal {
class AudioState;
class AudioSendStream final : public webrtc::AudioSendStream,
public webrtc::BitrateAllocatorObserver {
public:
const std::unique_ptr<voe::ChannelSendInterface> channel_send_;
RtpTransportControllerSendInterface* const rtp_transport_;
RtpRtcpInterface* const rtp_rtcp_module_;
When an AudioSendStream is constructed, or when AudioSendStream::Reconfigure() is called explicitly, AudioSendStream::ConfigureStream() is triggered. This method calls ChannelSend::RegisterSenderCongestionControlObjects(), which wires the pacer to the channel object; its implementation is as follows:
void ChannelSend::RegisterSenderCongestionControlObjects(
RtpTransportControllerSendInterface* transport,
RtcpBandwidthObserver* bandwidth_observer) {
RTC_DCHECK_RUN_ON(&worker_thread_checker_);
// The pacer (packet sender) object.
RtpPacketSender* rtp_packet_pacer = transport->packet_sender();
// The packet router object.
PacketRouter* packet_router = transport->packet_router();
RTC_DCHECK(rtp_packet_pacer);
RTC_DCHECK(packet_router);
RTC_DCHECK(!packet_router_);
rtcp_observer_->SetBandwidthObserver(bandwidth_observer);
rtp_packet_pacer_proxy_->SetPacketPacer(rtp_packet_pacer);
rtp_rtcp_->SetStorePacketsStatus(true, 600);
packet_router_ = packet_router;
}
4.2 Sending audio and video packets
Audio packet sending is covered in sections 3.7.2 and 3.7.3; the video packet send path is as follows:
Figure 4-2: function call sequence for sending video packets
Audio packets and video packets of the same PeerConnection go through the same pacer.
From the creation of the PacedSender object and the audio/video packet send paths, the pacer-related class structure looks roughly like the figure below:
Figure 4-3: UML diagram of the pacer classes
The heart of the pacer module is webrtc::PacingController; the module's entry and exit points are webrtc::TaskQueuePacedSender and webrtc::PacketRouter respectively. webrtc::TaskQueuePacedSender implements the webrtc::RtpPacketPacer interface and the webrtc::RtpPacketSender interface, which define the control interface and the data interface respectively. The constructor of webrtc::TaskQueuePacedSender creates a task queue; when one of its configuration methods is called, or when RTP packets arrive, webrtc::TaskQueuePacedSender posts an asynchronous task onto that task queue, and in that task it performs the corresponding operation through webrtc::PacingController and processes queued packets.
Because webrtc::TaskQueuePacedSender uses an asynchronous task queue rather than a dedicated thread to process packets, it has to determine, when none of the transport-control configuration methods above is being called and no packets are arriving, when the next packet-processing pass should run, and schedule that run. This logic lives in webrtc::TaskQueuePacedSender::MaybeProcessPackets, sketched below.
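The core of that scheduling can be summarized as: process now, ask the controller when it next wants to run, and post one delayed task for that time. Below is a simplified sketch of the idea; it assumes PacingController::ProcessPackets()/NextSendTime() and the TaskQueueBase::PostDelayedTask(absl::AnyInvocable, TimeDelta) overload of recent WebRTC versions, while the real MaybeProcessPackets additionally coalesces wakeups within a small holdback window, handles probing and avoids piling up duplicate tasks.

#include <algorithm>

#include "api/task_queue/task_queue_base.h"
#include "api/units/time_delta.h"
#include "api/units/timestamp.h"
#include "modules/pacing/pacing_controller.h"
#include "system_wrappers/include/clock.h"

// Simplified scheduling sketch; not the actual TaskQueuePacedSender code.
class ToyTaskQueuePacer {
 public:
  ToyTaskQueuePacer(webrtc::Clock* clock,
                    webrtc::TaskQueueBase* task_queue,
                    webrtc::PacingController* controller)
      : clock_(clock), task_queue_(task_queue), controller_(controller) {}

  // Run on the task queue whenever a config changes, a packet is enqueued,
  // or a previously scheduled wakeup fires.
  void MaybeProcessPackets() {
    // Send whatever the current pacing budget allows right now.
    controller_->ProcessPackets();
    // Ask when the controller next needs to run (budget refill, probe,
    // padding) and schedule a delayed task for that point in time.
    webrtc::Timestamp next_send = controller_->NextSendTime();
    webrtc::TimeDelta wait =
        std::max(next_send - clock_->CurrentTime(), webrtc::TimeDelta::Zero());
    task_queue_->PostDelayedTask([this] { MaybeProcessPackets(); }, wait);
  }

 private:
  webrtc::Clock* const clock_;
  webrtc::TaskQueueBase* const task_queue_;
  webrtc::PacingController* const controller_;
};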
The webrtc::RtpPacketPacer interface is defined (in webrtc/modules/pacing/rtp_packet_pacer.h) as follows:
namespace webrtc {
class RtpPacketPacer {
public:
virtual ~RtpPacketPacer() = default;
virtual void CreateProbeClusters(
std::vector<ProbeClusterConfig> probe_cluster_configs) = 0;
// Temporarily pause all sending.
virtual void Pause() = 0;
// Resume sending packets.
virtual void Resume() = 0;
virtual void SetCongested(bool congested) = 0;
// Sets the pacing rates. Must be called once before packets can be sent.
virtual void SetPacingRates(DataRate pacing_rate, DataRate padding_rate) = 0;
// Time since the oldest packet currently in the queue was added.
virtual TimeDelta OldestPacketWaitTime() const = 0;
// Sum of payload + padding bytes of all packets currently in the pacer queue.
virtual DataSize QueueSizeData() const = 0;
// Returns the time when the first packet was sent.
virtual absl::optional<Timestamp> FirstSentPacketTime() const = 0;
// Returns the expected number of milliseconds it will take to send the
// current packets in the queue, given the current size and bitrate, ignoring
// priority.
virtual TimeDelta ExpectedQueueTime() const = 0;
// Set the average upper bound on pacer queuing delay. The pacer may send at
// a higher rate than what was configured via SetPacingRates() in order to
// keep ExpectedQueueTimeMs() below `limit_ms` on average.
virtual void SetQueueTimeLimit(TimeDelta limit) = 0;
// Currently audio traffic is not accounted by pacer and passed through.
// With the introduction of audio BWE audio traffic will be accounted for
// the pacer budget calculation. The audio traffic still will be injected
// at high priority.
virtual void SetAccountForAudioPackets(bool account_for_audio) = 0;
virtual void SetIncludeOverhead() = 0;
virtual void SetTransportOverhead(DataSize overhead_per_packet) = 0;
};
}  // namespace webrtc
RtpPacketPacer contains a few timing statistics as well as control configuration such as the send rate and the congestion state.
The webrtc::RtpPacketSender interface is defined in webrtc/modules/rtp_rtcp/include/rtp_packet_sender.h. Because the webrtc::RtpPacketSender interface lives in the rtp_rtcp module, the pacer module can be plugged into the rtp_rtcp module simply by implementing this interface. The class is defined as follows:
namespace webrtc {
class RtpPacketSender {
public:
virtual ~RtpPacketSender() = default;
// Insert a set of packets into queue, for eventual transmission. Based on the
// type of packets, they will be prioritized and scheduled relative to other
// packets and the current target send rate.
virtual void EnqueuePackets(
std::vector<std::unique_ptr<RtpPacketToSend>> packets) = 0;
// Clear any pending packets with the given SSRC from the queue.
// TODO(crbug.com/1395081): Make pure virtual when downstream code has been
// updated.
virtual void RemovePacketsForSsrc(uint32_t ssrc) {}
};
}  // namespace webrtc
As the UML diagram in Figure 4-3 shows, the users of the pacer module, webrtc::AudioSendStream and webrtc::VideoSendStream, go through the webrtc::RTPSenderAudio and webrtc::RtpVideoSender classes respectively, which push RTP packets into the pacer module via the EnqueuePackets interface; the pacer module then sends the RTP packets out smoothly. The congestion window, send bitrate and other control parameters needed for paced sending are configured through the webrtc::RtpPacketPacer interface.
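From the sender's side, handing freshly packetized RTP packets to the pacer is therefore just a call to EnqueuePackets, roughly as sketched below. The include paths and the hard-coded packet type are illustrative; in WebRTC the packet type is set by RTPSenderVideo/RTPSenderAudio before enqueueing.

#include <memory>
#include <utility>
#include <vector>

#include "modules/rtp_rtcp/include/rtp_packet_sender.h"
#include "modules/rtp_rtcp/include/rtp_rtcp_defines.h"
#include "modules/rtp_rtcp/source/rtp_packet_to_send.h"

// Hedged sketch: feeding packets into the pacer through RtpPacketSender.
void HandOffToPacer(
    webrtc::RtpPacketSender& pacer,
    std::vector<std::unique_ptr<webrtc::RtpPacketToSend>> packets) {
  for (auto& packet : packets) {
    // The pacer prioritizes by packet type (audio, retransmission, video, ...).
    packet->set_packet_type(webrtc::RtpPacketMediaType::kVideo);
  }
  pacer.EnqueuePackets(std::move(packets));
}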
4.3 webrtc::PacketRouter
The webrtc::PacketRouter interface falls into three parts: control methods for adding and removing RTP modules, the media data interface, and methods for sending transport-control data. The class is defined as follows:
namespace webrtc {
class RtpRtcpInterface;
// PacketRouter keeps track of rtp send modules to support the pacer.
// In addition, it handles feedback messages, which are sent on a send
// module if possible (sender report), otherwise on receive module
// (receiver report). For the latter case, we also keep track of the
// receive modules.
class PacketRouter : public PacingController::PacketSender {
public:
PacketRouter();
explicit PacketRouter(uint16_t start_transport_seq);
~PacketRouter() override;
PacketRouter(const PacketRouter&) = delete;
PacketRouter& operator=(const PacketRouter&) = delete;
// Control interface for adding and removing RTP modules.
void AddSendRtpModule(RtpRtcpInterface* rtp_module, bool remb_candidate);
void RemoveSendRtpModule(RtpRtcpInterface* rtp_module);
void AddReceiveRtpModule(RtcpFeedbackSenderInterface* rtcp_sender,
bool remb_candidate);
void RemoveReceiveRtpModule(RtcpFeedbackSenderInterface* rtcp_sender);
// Media data interface inherited from webrtc::PacingController::PacketSender:
// send media packets (including FEC and padding), fetch FEC packets, generate padding packets, and send RTCP and REMB packets.
void SendPacket(std::unique_ptr<RtpPacketToSend> packet,
const PacedPacketInfo& cluster_info) override;
std::vector<std::unique_ptr<RtpPacketToSend>> FetchFec() override;
std::vector<std::unique_ptr<RtpPacketToSend>> GeneratePadding(
DataSize size) override;
void OnAbortedRetransmissions(
uint32_t ssrc,
rtc::ArrayView<const uint16_t> sequence_numbers) override;
absl::optional<uint32_t> GetRtxSsrcForMedia(uint32_t ssrc) const override;
uint16_t CurrentTransportSequenceNumber() const;
// Interface for sending transport-control data.
// Send REMB feedback.
void SendRemb(int64_t bitrate_bps, std::vector<uint32_t> ssrcs);
// Sends `packets` in one or more IP packets.
void SendCombinedRtcpPacket(
std::vector<std::unique_ptr<rtcp::RtcpPacket>> packets);
private:
void AddRembModuleCandidate(RtcpFeedbackSenderInterface* candidate_module,
bool media_sender)
RTC_EXCLUSIVE_LOCKS_REQUIRED(modules_mutex_);
void MaybeRemoveRembModuleCandidate(
RtcpFeedbackSenderInterface* candidate_module,
bool media_sender) RTC_EXCLUSIVE_LOCKS_REQUIRED(modules_mutex_);
void UnsetActiveRembModule() RTC_EXCLUSIVE_LOCKS_REQUIRED(modules_mutex_);
void DetermineActiveRembModule() RTC_EXCLUSIVE_LOCKS_REQUIRED(modules_mutex_);
void AddSendRtpModuleToMap(RtpRtcpInterface* rtp_module, uint32_t ssrc)
RTC_EXCLUSIVE_LOCKS_REQUIRED(modules_mutex_);
void RemoveSendRtpModuleFromMap(uint32_t ssrc)
RTC_EXCLUSIVE_LOCKS_REQUIRED(modules_mutex_);
mutable Mutex modules_mutex_;
// Ssrc to RtpRtcpInterface module;
std::unordered_map<uint32_t, RtpRtcpInterface*> send_modules_map_
RTC_GUARDED_BY(modules_mutex_);
std::list<RtpRtcpInterface*> send_modules_list_
RTC_GUARDED_BY(modules_mutex_);
// The last module used to send media.
RtpRtcpInterface* last_send_module_ RTC_GUARDED_BY(modules_mutex_);
// Rtcp modules of the rtp receivers.
std::vector<RtcpFeedbackSenderInterface*> rtcp_feedback_senders_
RTC_GUARDED_BY(modules_mutex_);
// Candidates for the REMB module can be RTP sender/receiver modules, with
// the sender modules taking precedence.
std::vector<RtcpFeedbackSenderInterface*> sender_remb_candidates_
RTC_GUARDED_BY(modules_mutex_);
std::vector<RtcpFeedbackSenderInterface*> receiver_remb_candidates_
RTC_GUARDED_BY(modules_mutex_);
RtcpFeedbackSenderInterface* active_remb_module_
RTC_GUARDED_BY(modules_mutex_);
uint64_t transport_seq_ RTC_GUARDED_BY(modules_mutex_);
// TODO(bugs.webrtc.org/10809): Replace lock with a sequence checker once the
// process thread is gone.
std::vector<std::unique_ptr<RtpPacketToSend>> pending_fec_packets_
RTC_GUARDED_BY(modules_mutex_);
};
}  // namespace webrtc
4.4 Pacer media send control
Pacer media send control is implemented mainly by webrtc::PacingController together with its helper components webrtc::PrioritizedPacketQueue, webrtc::IntervalBudget and webrtc::BitrateProber. The pacer assigns each packet a priority based on its type (audio, video, FEC or padding); the priority assignment rule is as follows:
//webrtc/modules/pacing/prioritized_packet_queue.cc
int GetPriorityForType(RtpPacketMediaType type) {
// Lower number takes priority over higher.
switch (type) {
case RtpPacketMediaType::kAudio:
// Audio is always prioritized over other packet types.
return kAudioPrioLevel;
case RtpPacketMediaType::kRetransmission:
// Send retransmissions before new media.
return kAudioPrioLevel + 1;
case RtpPacketMediaType::kVideo:
case RtpPacketMediaType::kForwardErrorCorrection:
// Video has "normal" priority, in the old speak.
// Send redundancy concurrently to video. If it is delayed it might have a
// lower chance of being useful.
return kAudioPrioLevel + 2;
case RtpPacketMediaType::kPadding:
// Packets that are in themselves likely useless, only sent to keep the
// BWE high.
return kAudioPrioLevel + 3;
}
RTC_CHECK_NOTREACHED();
}
Audio packets get the highest priority (the value zero, the smallest). webrtc::PrioritizedPacketQueue keeps a FIFO queue per packet type (audio, retransmission, video/FEC and padding), and queues with the same priority are served round-robin. The function that adds a packet to a queue is implemented as follows:
//webrtc/modules/pacing/prioritized_packet_queue.cc
// priority_level is 0 for audio, 1 for retransmissions, and so on up to 3 for padding.
bool PrioritizedPacketQueue::StreamQueue::EnqueuePacket(QueuedPacket packet,
int priority_level) {
bool first_packet_at_level = packets_[priority_level].empty();
// Appended at the tail of the per-level deque: FIFO order.
packets_[priority_level].push_back(std::move(packet));
return first_packet_at_level;
}
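The boolean return value presumably lets the enclosing PrioritizedPacketQueue know that this stream just became non-empty at the given priority level, so the stream can be added to that level's round-robin rotation.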