Audio/Video Series 8: GStreamer Basics

Posted by IE06


1. Installation

GStreamer can be installed from the system package manager on both macOS and Linux. Besides the core gstreamer1.0 package, you also need to install a number of plugins.
You can install them with the commands below (apt for Debian/Ubuntu, yum for CentOS):

sudo apt-get install \
    libssl1.0.0 \
    libgstreamer1.0-0 \
    gstreamer1.0-tools \
    gstreamer1.0-plugins-good \
    gstreamer1.0-plugins-bad \
    gstreamer1.0-plugins-ugly \
    gstreamer1.0-libav \
    libgstrtspserver-1.0-0 \
    libjansson4=2.11-1
yum install gstreamer*
yum install libgstreamer*

On the Jetson platform, you can follow the official installation instructions:

sudo apt-get install gstreamer1.0-tools gstreamer1.0-alsa \
  gstreamer1.0-plugins-base gstreamer1.0-plugins-good \
  gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly \
  gstreamer1.0-libav
sudo apt-get install libgstreamer1.0-dev \
  libgstreamer-plugins-base1.0-dev \
  libgstreamer-plugins-good1.0-dev \
  libgstreamer-plugins-bad1.0-dev

Run gst-launch-1.0 -v videotestsrc ! cacasink to test the installation.
You can also test with the following:

gst-launch-1.0 videotestsrc ! ximagesink
gst-launch-1.0 videotestsrc pattern=11 ! ximagesink
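If you plan to drive pipelines from OpenCV later on (as in the examples below), it is worth checking that your cv2 build was compiled with GStreamer support. A minimal sketch that simply searches OpenCV's build information for the GStreamer entry:

import cv2

# Print the GStreamer-related line(s) from OpenCV's build information;
# it should report "GStreamer: YES" for cv2.CAP_GSTREAMER pipelines to work.
for line in cv2.getBuildInformation().splitlines():
    if "GStreamer" in line:
        print(line.strip())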

2. Playing Video

Use gst-inspect-1.0 to look at an element's pad templates, and gst-launch-1.0 to run a pipeline.
First, two simple ways to run a pipeline (software decoding):

gst-launch-1.0 playbin uri=file://$PWD/1.mp4
gst-launch-1.0 filesrc location=1.mp4 ! decodebin ! glimagesink
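The same software-decoding approach can also be driven from OpenCV. A minimal sketch (the BGR caps before the appsink are an assumption, so that cv2 receives plain BGR frames):

import cv2

# Software decoding: decodebin picks a suitable decoder, videoconvert produces BGR frames
pipeline = 'filesrc location=1.mp4 ! decodebin ! videoconvert ! video/x-raw,format=BGR ! appsink'
capture = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)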

Hardware-decoded reading of a local video file, using Python + cv2:

import cv2

# Hardware decoding on Jetson: qtdemux and h264parse feed the OMX H.264 decoder,
# nvvidconv converts to BGRx on the GPU, videoconvert then produces BGR for appsink
pipeline = 'filesrc location=1.mp4 ! qtdemux ! h264parse ! omxh264dec ! nvvidconv ! video/x-raw,format=(string)BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink'
capture = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)

Hardware-decoding pipeline for reading an RTSP stream:

import cv2

# Hardware decoding of an RTSP stream: rtspsrc and rtph264depay replace filesrc and qtdemux
pipeline = 'rtspsrc location="rtsp://106.14.142.15:8554/mystream" ! rtph264depay ! h264parse ! omxh264dec ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! appsink'
capture = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)

Comparing the two pipelines, the main changes are the source (filesrc vs. rtspsrc) and the demux/depay stage (qtdemux vs. rtph264depay).
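Once a capture is open, frames are read exactly like with any other cv2.VideoCapture. A minimal reading loop (a sketch, reusing the local-file pipeline from above):

import cv2

pipeline = 'filesrc location=1.mp4 ! qtdemux ! h264parse ! omxh264dec ! nvvidconv ! video/x-raw,format=(string)BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink'
capture = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
if not capture.isOpened():
    raise RuntimeError("Failed to open the GStreamer pipeline")

while True:
    ret, frame = capture.read()  # frame is a BGR numpy array
    if not ret:                  # end of stream or read error
        break
    cv2.imshow("frame", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

capture.release()
cv2.destroyAllWindows()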

3. GStreamer Basics

3.1 src

From a Raspberry Pi camera module (CSI camera, e.g. on a Jetson): nvarguscamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM)'
From a USB camera: v4l2src device=/dev/video1 (see the sketch below)
audiotestsrc: audio test source
videotestsrc: video test source
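As an illustration of wiring one of these sources into OpenCV, here is a sketch for the USB-camera case (the device path follows the line above, and it assumes the camera delivers a raw format that videoconvert can handle):

import cv2

# USB camera via v4l2src, converted to BGR for OpenCV
pipeline = 'v4l2src device=/dev/video1 ! videoconvert ! video/x-raw,format=BGR ! appsink'
capture = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
ret, frame = capture.read()
print(ret, frame.shape if ret else None)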

3.2 sink

alsasink: audio playback through ALSA
appsink: delivers buffers to the application itself (this is how OpenCV receives frames in the examples above)
ximagesink: video playback using plain XWindow output, supported almost everywhere
glimagesink: video playback rendered with OpenGL
v4lsink, v4l2sink: output through Video4Linux
xvimagesink: output through the XVideo extension; on Ubuntu a number of extra libraries must be installed before it works
sdlvideosink: output through the SDL library (SDL must be installed)
dfbvideosink: output through the DirectFB library (install it first, e.g. apt-get install directfb*)
cacasink: draws the image in the terminal using colored characters via libcaca (similar in effect to the Win32 SetConsoleTextAttribute function); it supports X11, S-Lang, ncurses and raw output. To use the ncurses driver: CACA_DRIVER=ncurses gst-launch-1.0 filesrc location=test.avi ! decodebin ! videoconvert ! cacasink
fpsdisplaysink: prints the current and average framerate to the console
aasink: renders the image in the terminal using ASCII characters, similar to cacasink but without color

3.3 convert

autovideoconvert: automatically selects a suitable video converter; the pipelines above use videoconvert (CPU) and nvvidconv (Jetson GPU) explicitly

3.4 capability

(videosrc) ! video/x-raw,format=BGR
(videosrc) ! nvvidconv flip-method=2
videoconvert ! video/x-raw,width=1280,height=960

3.5 properties

wbmode: white balance mode (a property of nvarguscamerasrc)
Use drop=True on the appsink so that stale frames are dropped instead of queued (both appear in the sketch below)
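Putting the capability filters from 3.4 and the properties above together, a complete CSI-camera capture pipeline for OpenCV might look like the following sketch (resolution, framerate, the wbmode value and flip-method are assumptions):

import cv2

# nvarguscamerasrc with a white-balance mode, NVMM caps, GPU conversion via nvvidconv,
# and an appsink with drop=True so old frames are discarded rather than queued
pipeline = (
    'nvarguscamerasrc sensor-id=0 wbmode=1 ! '
    'video/x-raw(memory:NVMM),width=1280,height=720,framerate=30/1 ! '
    'nvvidconv flip-method=2 ! '
    'video/x-raw,format=BGRx ! '
    'videoconvert ! video/x-raw,format=BGR ! '
    'appsink drop=True'
)
capture = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)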

4. gstd

4.1 Overview

There is very little material about GstD in Chinese, so you have to work through the English documentation yourself. In short, it is a framework that controls audio/video pipelines through an intermediate protocol, so that other programs can drive them through its interfaces. It follows an MVC design:
GstD provides the model and controller parts; the user implements the view.
Installation:

sudo apt-get install \
automake \
libtool \
pkg-config \
libgstreamer1.0-dev \
libgstreamer-plugins-base1.0-dev \
libglib2.0-dev \
libjson-glib-dev \
gtk-doc-tools \
libreadline-dev \
libncursesw5-dev \
libdaemon-dev \
libjansson-dev \
libsoup2.4-dev \
python3-pip

git clone https://github.com/RidgeRun/gstd-1.x.git
cd gstd-1.x
./autogen.sh
./configure
make

On macOS, the packages to install are:

brew install jansson libsoup autoconf automake gtk-doc json-glib gstreamer gst-plugins-base gst-plugins-good

On the server side, run gstd to start the daemon; on the client side, run gstd-client to enter the interactive console.

4.2 A Complete Example

The GitHub repository is https://github.com/RidgeRun/gtc-2020-demo, with the following file structure:

gtc-2020-demo/
├── deepstream-models
│   ├── config_infer_primary_1_cameras.txt
│   ├── config_infer_primary_4_cameras.txt
│   ├── config_infer_primary.txt
│   ├── libnvds_mot_klt.so
│   └── Primary_Detector
│       ├── cal_trt.bin
│       ├── labels.txt
│       ├── resnet10.caffemodel
│       ├── resnet10.caffemodel_b1_fp16.engine
│       ├── resnet10.caffemodel_b30_fp16.engine
│       ├── resnet10.caffemodel_b4_fp16.engine
│       └── resnet10.prototxt
├── python-example
│   └── media-server.py
└── README.md

deepstream-models contains the configuration files and model files
python-example contains the main program, media-server.py

First, create the helper code:

import time
from pygstc.gstc import *

# Create PipelineEntity object to manage each pipeline
class PipelineEntity(object):
    def __init__(self, client, name, description):
        self._name = name
        self._description = description
        self._client = client
        print("Creating pipeline: " + self._name)
        self._client.pipeline_create(self._name, self._description)
    def play(self):
        print("Playing pipeline: " + self._name)
        self._client.pipeline_play(self._name)
    def stop(self):
        print("Stopping pipeline: " + self._name)
        self._client.pipeline_stop(self._name)
    def delete(self):
        print("Deleting pipeline: " + self._name)
        self._client.pipeline_delete(self._name)
    def eos(self):
        print("Sending EOS to pipeline: " + self._name)
        self._client.event_eos(self._name)
    def set_file_location(self, location):
        print("Setting " + self._name + " pipeline recording/snapshot location to " + location)
        filesink_name = "filesink_" + self._name
        self._client.element_set(self._name, filesink_name, 'location', location)
    def listen_to(self, sink):
        print(self._name + " pipeline listening to " + sink)
        self._client.element_set(self._name, self._name + '_src', 'listen-to', sink)

pipelines_base = []
pipelines_video_rec = []
pipelines_video_enc = []
pipelines_snap = []

# Create GstD Python client
client = GstdClient()
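The original media-server.py continues beyond this point by creating its pipelines and appending them to the lists above. As a minimal, hypothetical usage sketch of the PipelineEntity helper (the videotestsrc pipeline description here is an assumption, not part of the demo):

# Hypothetical usage: create a test pipeline, play it briefly, then tear it down
test_pipeline = PipelineEntity(client, 'test', 'videotestsrc ! autovideosink')
pipelines_base.append(test_pipeline)

test_pipeline.play()
time.sleep(5)
test_pipeline.eos()
test_pipeline.stop()
test_pipeline.delete()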
