I am trying to use WebRTC to build a web application that needs to pause/resume the video/audio stream when certain events fire. I have tried getTracks()[0].stop(), but then I could not resume the stream.
You should try renegotiation. I believe there is still a difference in how it is done between Chrome and Firefox:
In Chrome, you call addStream or removeStream on the RTCPeerConnection object to add/remove the stream, then create and exchange new SDP.
In Firefox, there is no removeStream; instead you work with RTCRtpSender objects via the addTrack and removeTrack methods. You can take a look at this question.
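A rough sketch of the track-based version, assuming pc is an existing RTCPeerConnection, stream is the MediaStream from getUserMedia, and sendOfferToRemotePeer is a hypothetical stand-in for your signaling code:

// Pause sending by removing every sender, which triggers renegotiation.
var pauseSending = pc => pc.getSenders().forEach(sender => pc.removeTrack(sender));

// Resume by adding the tracks back, which triggers renegotiation again.
var resumeSending = (pc, stream) =>
  stream.getTracks().forEach(track => pc.addTrack(track, stream));

// Exchange fresh SDP whenever either of the calls above fires the event.
pc.onnegotiationneeded = () =>
  pc.createOffer()
    .then(offer => pc.setLocalDescription(offer))
    .then(() => sendOfferToRemotePeer(pc.localDescription)) // hypothetical signaling helper
    .catch(console.error);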
getTracks()[0].stop() is permanent. Use getTracks()[0].enabled = false instead. To unpause, set getTracks()[0].enabled = true.
This will replace your video with black, and your audio with silence.
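For example, a minimal sketch that pauses and resumes only one kind of track, assuming stream is the MediaStream from getUserMedia (setTracksEnabled is just an illustrative helper name):

var setTracksEnabled = (stream, kind, enabled) =>
  stream.getTracks().filter(t => t.kind === kind).forEach(t => t.enabled = enabled);

// Pause the video but keep the audio flowing:
setTracksEnabled(stream, "video", false);
// Resume it later:
setTracksEnabled(stream, "video", true);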
Try it (use https fiddle for Chrome):
var log = msg => div.innerHTML += "<br>" + msg;

var pc1 = new RTCPeerConnection(), pc2 = new RTCPeerConnection();

// Grab camera + mic, show the stream locally, and feed it into pc1.
navigator.mediaDevices.getUserMedia({ video: true, audio: true })
  .then(stream => pc1.addStream(video1.srcObject = stream))
  .catch(log);

// Toggle enabled on every track: disabled video shows black, disabled audio is silent.
var mute = () => video1.srcObject.getTracks().forEach(t => t.enabled = !t.enabled);

// Trickle ICE candidates between the two in-page peer connections.
var add = (pc, can) => can && pc.addIceCandidate(can).catch(log);
pc1.onicecandidate = e => add(pc2, e.candidate);
pc2.onicecandidate = e => add(pc1, e.candidate);

// Show whatever pc2 receives in the second video element.
pc2.onaddstream = e => video2.srcObject = e.stream;

// Complete the offer/answer exchange whenever (re)negotiation is needed.
pc1.onnegotiationneeded = e =>
  pc1.createOffer().then(d => pc1.setLocalDescription(d))
  .then(() => pc2.setRemoteDescription(pc1.localDescription))
  .then(() => pc2.createAnswer()).then(d => pc2.setLocalDescription(d))
  .then(() => pc1.setRemoteDescription(pc2.localDescription))
  .catch(log);
<video id="video1" height="120" width="160" autoplay muted></video>
<video id="video2" height="120" width="160" autoplay></video><br>
<label><input type="checkbox" onclick="mute()">mute</label><div id="div"></div>
<script src="https://webrtc.github.io/adapter/adapter-latest.js"></script>
PeerConnections basically stop sending packets in this muted state, so it is highly efficient.
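If you want to verify the saving yourself, here is a rough sketch that samples the bytesSent counter from the standard getStats() API one second apart, reusing the pc1 and log variables from the snippet above (bytesSent and measure are just illustrative names):

var bytesSent = pc =>
  pc.getStats().then(stats => {
    var total = 0;
    stats.forEach(s => { if (s.type === "outbound-rtp") total += s.bytesSent; });
    return total;
  });

// Run this before and after ticking the mute checkbox to compare send rates.
var measure = () =>
  bytesSent(pc1).then(before =>
    new Promise(resolve => setTimeout(resolve, 1000))
      .then(() => bytesSent(pc1))
      .then(after => log("~" + (after - before) + " bytes/s sent")));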