WebRTC java server trouble
Posted: 2013-12-12 08:02:43

I think I am very close to getting my Java server application to talk to a browser page via WebRTC, but I cannot get it to work. I feel like I am missing something small, so I am hoping someone here can offer a suggestion.
I have studied the WebRTC examples closely: the Java unit tests (org.webrtc.PeerConnectionTest) and the example Android application (trunk/talk/examples/android). Based on what I learned there, I put together a Java application that uses WebSockets for signalling and tries to stream video to Chrome.
The problem is that no video shows up in the browser, even though all of my code (JavaScript and Java) executes in the order I expect and hits all the logging statements it should. There is some suspicious output in the console log from the native libjingle code, but I don't know what to make of it. I have highlighted the suspicious lines in the log below with '>>'. For example, it looks like the video port allocator gets destroyed shortly after it is created, so something is clearly wrong. Also, 'Changing video state, recv=1 send=0' does not look right, since the Java side is supposed to send video, not receive it... maybe I am misusing the OfferToReceiveVideo option?
If you look at the logs below, you will see that the WebSocket communication with the browser works fine, and that I successfully send the SDP Offer to the browser and receive its SDP Answer back. Setting the local and remote descriptions on the PeerConnections also appears to work. The HTML5 video element gets its source set to a BLOB URL, as it should. So what am I missing? Do I need to do anything about ICE candidates, even though my client and server are on the same machine for now?
Any suggestions would be greatly appreciated!
SDP messages (from Chrome's JavaScript console)
1.134: Java Offer:
v=0
o=- 5893945934600346864 2 IN IP4 127.0.0.1
s=-
t=0 0
a=group:BUNDLE video
a=msid-semantic: WMS JavaMediaStream
m=video 1 RTP/SAVPF 100 116 117
c=IN IP4 0.0.0.0
a=rtcp:1 IN IP4 0.0.0.0
a=ice-ufrag:dJxTlMlXy7uASrDU
a=ice-pwd:r8BRkXVnc4dqCABUDhuRjpp7
a=ice-options:google-ice
a=mid:video
a=extmap:2 urn:ietf:params:rtp-hdrext:toffset
a=extmap:3 http://www.webrtc.org/experiments/rtp-hdrext/abs-send-time
a=sendrecv
a=rtcp-mux
a=crypto:0 AES_CM_128_HMAC_SHA1_80 inline:yq6wOHhk/QfsWuh+1oOEqfB4GjKZzz8XfQnGCDP3
a=rtpmap:100 VP8/90000
a=rtcp-fb:100 ccm fir
a=rtcp-fb:100 nack
a=rtcp-fb:100 goog-remb
a=rtpmap:116 red/90000
a=rtpmap:117 ulpfec/90000
a=ssrc:3720473526 cname:nul6R21KmwAms3Ge
a=ssrc:3720473526 msid:JavaMediaStream JavaMediaStream_v0
a=ssrc:3720473526 mslabel:JavaMediaStream
a=ssrc:3720473526 label:JavaMediaStream_v0
1.149: Received remote stream
1.150: Browser's Answer:
v=0
o=- 4261396844048664099 2 IN IP4 127.0.0.1
s=-
t=0 0
a=group:BUNDLE video
a=msid-semantic: WMS
m=video 1 RTP/SAVPF 100 116 117
c=IN IP4 0.0.0.0
a=rtcp:1 IN IP4 0.0.0.0
a=ice-ufrag:quzQNsX+ZlUWUQqV
a=ice-pwd:y5A0+7sM8P88AatBLd1fdd5G
a=mid:video
a=extmap:2 urn:ietf:params:rtp-hdrext:toffset
a=extmap:3 http://www.webrtc.org/experiments/rtp-hdrext/abs-send-time
a=recvonly
a=rtcp-mux
a=crypto:0 AES_CM_128_HMAC_SHA1_80 inline:WClNA69OfpjdJy3Bv4ujejk/IYnn4DW8kjrB18xP
a=rtpmap:100 VP8/90000
a=rtcp-fb:100 ccm fir
a=rtcp-fb:100 nack
a=rtcp-fb:100 goog-remb
a=rtpmap:116 red/90000
a=rtpmap:117 ulpfec/90000
This looks fine to me. The Java offer includes my video stream.
Native code logging (libjingle)
(suspicious lines marked with '>>')
Camera '/dev/video0' started with format YUY2 640x480x30, elapsed time 59 ms
Ignored line: c=IN IP4 0.0.0.0
NACK enabled for channel 0
NACK enabled for channel 0
Created channel for video
Jingle:Channel[video|1|__]: NULL DTLS identity supplied. Not doing DTLS
Jingle:Channel[video|2|__]: NULL DTLS identity supplied. Not doing DTLS
Session:5893945934600346864 Old state:STATE_INIT New state:STATE_SENTINITIATE Type:urn:xmpp:jingle:apps:rtp:1 Transport:http://www.google.com/transport/p2p
Setting local video description
AddSendStream id:JavaMediaStream_v0;ssrcs:[3720473526];ssrc_groups:;cname:nul6R21KmwAms3Ge;sync_label:JavaMediaStream
Add send ssrc: 3720473526
>> Warning(webrtcvideoengine.cc:2704): SetReceiverBufferingMode(0, 0) failed, err=12606
Changing video state, recv=0 send=0
Transport: video, allocating candidates
Transport: video, allocating candidates
Jingle:Net[eth0:192.168.0.0/24]: Allocation Phase=Udp
Jingle:Port[:1:0::Net[eth0:192.168.0.0/24]]: Port created
Adding allocated port for video
Jingle:Port[video:1:0::Net[eth0:192.168.0.0/24]]: Added port to allocator
Jingle:Net[tun0:192.168.128.6/32]: Allocation Phase=Udp
Jingle:Port[:1:0::Net[tun0:192.168.128.6/32]]: Port created
Adding allocated port for video
Jingle:Port[video:1:0::Net[tun0:192.168.128.6/32]]: Added port to allocator
Ignored line: c=IN IP4 0.0.0.0
Warning(webrtcvideoengine.cc:2309): GetStats: sender information not ready.
Jingle:Channel[video|1|__]: Other side didn't support DTLS.
Jingle:Channel[video|2|__]: Other side didn't support DTLS.
Enabling BUNDLE, bundling onto transport: video
Channel enabled
>> Changing video state, recv=1 send=0
Session:5893945934600346864 Old state:STATE_SENTINITIATE New state:STATE_RECEIVEDACCEPT Type:urn:xmpp:jingle:apps:rtp:1 Transport:http://www.google.com/transport/p2p
Setting remote video description
Hybrid NACK/FEC enabled for channel 0
Hybrid NACK/FEC enabled for channel 0
SetSendCodecs() : selected video codec VP8/1280x720x30fps@2000kbps (min=50kbps, start=300kbps)
Video max quantization: 56
VP8 number of temporal layers: 1
VP8 options : picture loss indication = 0, feedback mode = 0, complexity = normal, resilience = off, denoising = 0, error concealment = 0, automatic resize = 0, frame dropping = 1, key frame interval = 3000
WARNING: no real random source present!
SRTP activated with negotiated parameters: send cipher_suite AES_CM_128_HMAC_SHA1_80 recv cipher_suite AES_CM_128_HMAC_SHA1_80
Changing video state, recv=1 send=0
Session:5893945934600346864 Old state:STATE_RECEIVEDACCEPT New state:STATE_INPROGRESS Type:urn:xmpp:jingle:apps:rtp:1 Transport:http://www.google.com/transport/p2p
Jingle:Net[eth0:192.168.0.0/24]: Allocation Phase=Relay
Jingle:Net[tun0:192.168.128.6/32]: Allocation Phase=Relay
Jingle:Net[eth0:192.168.0.0/24]: Allocation Phase=Tcp
Jingle:Port[:1:0:local:Net[eth0:192.168.0.0/24]]: Port created
Adding allocated port for video
Jingle:Port[video:1:0:local:Net[eth0:192.168.0.0/24]]: Added port to allocator
Jingle:Net[tun0:192.168.128.6/32]: Allocation Phase=Tcp
Jingle:Port[:1:0:local:Net[tun0:192.168.128.6/32]]: Port created
Adding allocated port for video
Jingle:Port[video:1:0:local:Net[tun0:192.168.128.6/32]]: Added port to allocator
Jingle:Net[eth0:192.168.0.0/24]: Allocation Phase=SslTcp
Jingle:Net[tun0:192.168.128.6/32]: Allocation Phase=SslTcp
All candidates gathered for video:1:0
Transport: video, component 1 allocation complete
Transport: video allocation complete
Candidate gathering is complete.
Capture delay changed to 120 ms
Captured frame size 640x480. Expected format YUY2 640x480x30
Capture size changed : selected video codec VP8/640x480x30fps@2000kbps (min=50kbps, start=300kbps)
Video max quantization: 56
VP8 number of temporal layers: 1
VP8 options : picture loss indication = 0, feedback mode = 0, complexity = normal, resilience = off, denoising = 0, error concealment = 0, automatic resize = 1, frame dropping = 1, key frame interval = 3000
VAdapt Frame: 0 / 300 Changes: 0 Input: 640x480 Scale: 1 Output: 640x480 Changed: false
>> Jingle:Port[video:1:0::Net[eth0:192.168.0.0/24]]: Port deleted
>> Jingle:Port[video:1:0::Net[eth0:192.168.0.0/24]]: Removed port from allocator (3 remaining)
Removed port from p2p socket: 3 remaining
Jingle:Port[video:1:0::Net[tun0:192.168.128.6/32]]: Port deleted
Jingle:Port[video:1:0::Net[tun0:192.168.128.6/32]]: Removed port from allocator (2 remaining)
Removed port from p2p socket: 2 remaining
>> Jingle:Port[video:1:0:local:Net[eth0:192.168.0.0/24]]: Port deleted
>> Jingle:Port[video:1:0:local:Net[eth0:192.168.0.0/24]]: Removed port from allocator (1 remaining)
Removed port from p2p socket: 1 remaining
Jingle:Port[video:1:0:local:Net[tun0:192.168.128.6/32]]: Port deleted
Jingle:Port[video:1:0:local:Net[tun0:192.168.128.6/32]]: Removed port from allocator (0 remaining)
Removed port from p2p socket: 0 remaining
HTML
<html lang="en">
<head>
<title>Web Socket Signalling</title>
<link rel="stylesheet" href="css/socket.css">
<script src="js/socket.js"></script>
</head>
<body>
<h2>Response from Server</h2>
<textarea id="responseText"></textarea>
<h2>Video</h2>
<video id="remoteVideo" autoplay></video>
</body>
</html>
Javascript
(function() {
    var remotePeerConnection;

    // We only want to receive video from the Java side
    var sdpConstraints = {
        'mandatory' : {
            'OfferToReceiveAudio' : false,
            'OfferToReceiveVideo' : true
        }
    };

    var Sock = function() {
        var socket;

        if (!window.WebSocket) {
            window.WebSocket = window.MozWebSocket;
        }

        if (window.WebSocket) {
            socket = new WebSocket("ws://localhost:8080/websocket");
            socket.onopen = onopen;
            socket.onmessage = onmessage;
            socket.onclose = onclose;
        } else {
            alert("Your browser does not support Web Socket.");
        }

        function onopen(event) {
            getTextAreaElement().value = "Web Socket opened!";
        }

        // The only message expected from the Java server is the SDP Offer
        function onmessage(event) {
            appendTextArea(event.data);
            var sdpOffer = new RTCSessionDescription(JSON.parse(event.data));

            remotePeerConnection = new webkitRTCPeerConnection(null);
            remotePeerConnection.onaddstream = gotRemoteStream;

            trace("Java Offer: \n" + sdpOffer.sdp);
            remotePeerConnection.setRemoteDescription(sdpOffer);
            remotePeerConnection.createAnswer(gotRemoteDescription,
                    onCreateSessionDescriptionError, sdpConstraints);
        }

        function onCreateSessionDescriptionError(error) {
            console.log('Failed to create session description: ' + error.toString());
        }

        // Send our Answer back to the Java server over the WebSocket
        function gotRemoteDescription(answer) {
            remotePeerConnection.setLocalDescription(answer);
            trace("Browser's Answer: \n" + answer.sdp);
            socket.send(JSON.stringify(answer));
        }

        // Attach the incoming stream to the <video> element
        function gotRemoteStream(event) {
            var remoteVideo = document.getElementById("remoteVideo");
            remoteVideo.src = URL.createObjectURL(event.stream);
            trace("Received remote stream");
        }

        function onclose(event) {
            appendTextArea("Web Socket closed");
        }

        function appendTextArea(newData) {
            var el = getTextAreaElement();
            el.value = el.value + '\n' + newData;
        }

        function getTextAreaElement() {
            return document.getElementById('responseText');
        }

        function trace(text) {
            console.log((performance.now() / 1000).toFixed(3) + ": " + text);
        }
    };

    window.addEventListener('load', function() {
        new Sock();
    }, false);
})();
Java server
public class PeerConnectionManager {

    /**
     * Called when the WebSocket handshake is completed
     */
    public void createOffer() {
        peerConnection = factory.createPeerConnection(
                new ArrayList<PeerConnection.IceServer>(),
                new MediaConstraints(),
                new PeerConnectionObserverImpl());

        // Get the video source
        videoSource = factory.createVideoSource(VideoCapturer.create(""), new MediaConstraints());

        // Create a MediaStream with one video track
        MediaStream lMS = factory.createLocalMediaStream("JavaMediaStream");
        VideoTrack videoTrack = factory.createVideoTrack("JavaMediaStream_v0", videoSource);
        videoTrack.addRenderer(new VideoRenderer(new VideoRendererObserverImpl()));
        lMS.addTrack(videoTrack);
        peerConnection.addStream(lMS, new MediaConstraints());

        // We don't want to receive anything
        MediaConstraints sdpConstraints = new MediaConstraints();
        sdpConstraints.mandatory.add(new MediaConstraints.KeyValuePair(
                "OfferToReceiveAudio", "false"));
        sdpConstraints.mandatory.add(new MediaConstraints.KeyValuePair(
                "OfferToReceiveVideo", "false"));

        // Get the Offer SDP
        SdpObserverImpl sdpOfferObserver = new SdpObserverImpl();
        peerConnection.createOffer(sdpOfferObserver, sdpConstraints);
        SessionDescription offerSdp = sdpOfferObserver.getSdp();

        // Set local SDP, don't care for any callbacks
        peerConnection.setLocalDescription(new SdpObserverImpl(), offerSdp);

        // Serialize Offer and send to the Browser via a WebSocket
        JSONObject offerSdpJson = new JSONObject();
        offerSdpJson.put("sdp", offerSdp.description);
        offerSdpJson.put("type", offerSdp.type.canonicalForm());
        webSocketContext.channel().writeAndFlush(
                new TextWebSocketFrame(offerSdpJson.toString()));
    }

    /**
     * Called when an SDP Answer arrives via the WebSocket
     */
    public void setRemoteDescription(SessionDescription answer) {
        peerConnection.setRemoteDescription(new SdpObserverImpl(), answer);
    }
}
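Not shown above is the glue that turns the browser's JSON answer back into a SessionDescription before setRemoteDescription() is called. A minimal sketch of that step, assuming the same org.json classes used in createOffer() and a hypothetical handleAnswer() entry point invoked by whatever reads the WebSocket, might look like this:

// Hypothetical WebSocket callback: a text frame carrying the browser's SDP Answer as JSON
public void handleAnswer(String answerJson) {
    JSONObject json = new JSONObject(answerJson);
    SessionDescription answer = new SessionDescription(
            SessionDescription.Type.fromCanonicalForm(json.getString("type")),
            json.getString("sdp"));
    setRemoteDescription(answer);
}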
Comments:

Hello. I am trying to do the same thing, but still no luck. Could you please upload your code? Thanks.
Which library or WebRTC implementation are you using on the server side?
Hi @Val Blant, I am working on the same WebRTC-related task, just trying to understand the overall WebRTC flow (for incoming and outgoing calls). Could you upload some code as an example?

Answer 1:

Ugh. Never mind. Sorry for the silly question.
The missing piece was the exchange of ICE candidates between the browser and the Java server. Once I added code to do the ICE negotiation over the WebSocket, everything worked!
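For anyone who lands here with the same problem, this is roughly what the missing Java-side code looks like. It is only a sketch under a few assumptions: the PeerConnectionObserverImpl passed to createPeerConnection() implements PeerConnection.Observer and has access to the same webSocketContext used in createOffer(), locally gathered candidates are serialized into a hypothetical "candidate" JSON message on the WebSocket already used for the offer/answer, and a hypothetical handleRemoteCandidate() method is called for each candidate message coming back from the browser. The browser side needs the mirror image: an onicecandidate handler that sends its candidates to the server, and a call to addIceCandidate() for every candidate it receives from Java.

// Outgoing candidates: libjingle invokes this as the Java side gathers local candidates.
// (Fragment of the PeerConnection.Observer implementation; the other callbacks are omitted.)
public class PeerConnectionObserverImpl implements PeerConnection.Observer {
    @Override
    public void onIceCandidate(IceCandidate candidate) {
        JSONObject json = new JSONObject();
        json.put("type", "candidate");                 // hypothetical message type
        json.put("sdpMid", candidate.sdpMid);
        json.put("sdpMLineIndex", candidate.sdpMLineIndex);
        json.put("candidate", candidate.sdp);
        // assumes access to the same webSocketContext used in createOffer()
        webSocketContext.channel().writeAndFlush(
                new TextWebSocketFrame(json.toString()));
    }
    // ... remaining Observer callbacks elided ...
}

// Incoming candidates: hypothetical handler for "candidate" messages from the browser.
public void handleRemoteCandidate(JSONObject json) {
    IceCandidate candidate = new IceCandidate(
            json.getString("sdpMid"),
            json.getInt("sdpMLineIndex"),
            json.getString("candidate"));
    peerConnection.addIceCandidate(candidate);
}

Once candidates flow in both directions the two peers can actually pair up and connect, which would also explain why the allocated ports in the log above were being deleted: without any remote candidates there was nothing to pair them with.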
Comments on the answer:

Could you please post the fixed code or provide a link?
@KishorPrakash: Sure, here is the working code: github.com/vace117/CreeperAndroid