webrtc

How to get frame data in AppRTC iOS app for video modifications?

Submitted by 眉间皱痕 on 2021-02-07 20:31:52
Question: I am currently trying to make some modifications to the incoming WebRTC video stream in the AppRTC app for iOS in Swift (which in turn is based on this Objective-C version). To do so, I need access to the data stored in the frame objects of class RTCI420Frame (a basic class in the Objective-C implementation of libWebRTC). In particular, I need an array of bytes ([UInt8]) and the size of the frames. This data is to be used for further processing and the addition of some filters. The

webrtc-client: fixing a nlohmann::detail::type_error crash in the C++ streaming program when a browser pulls the stream

Submitted by 风格不统一 on 2021-02-07 15:38:22
We have written a lot about the TSINGSEE Qingxi Video team's development of WebRTC, for which we use C++. C++ not only offers efficient execution on the machine, it also aims to improve the quality of large-scale programs and the language's ability to describe problems. While developing the WebRTC streaming media server, we found that after a browser had been pulling the WebRTC stream for a while, the C++ program would crash and video playback on the pulling side would stop. In VS2017, the C++ push-stream side hits an error breakpoint with the following message: "Unhandled exception at 0xxxxxxx (in xxxxx.exe): nlohmann::detail::type_error at memory location 0xxxxxxxxx". Judging from the error message, the error occurs inside the JSON library used by our WebRTC code, so we can be fairly sure that some part of the code is calling the JSON library with data whose shape does not match what the code expects. We located the WebRTCClient.cpp file in the code and printed the data the server sends back. The error response from the server has the following format: {"data":{"errCode":-1,"errMsg":"Router not found","id":9,"method":"createWebRtcTransport"}}. Once the problem is known, the fix becomes simple: we only need to add a small check that a field exists before reading it. Source: oschina Link: https://my.oschina.net/u
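The article's actual fix is a field-existence check in C++ with nlohmann::json (e.g. via json::contains), which the excerpt does not show. As an illustration of the same defensive pattern, here is a minimal sketch in TypeScript rather than C++; the field names come from the error response above, and the handler name is invented:

```typescript
// Sketch of the guard described above, transliterated to TypeScript (the
// article's real fix lives in WebRTCClient.cpp with nlohmann::json).
interface ServerMessage {
  data?: {
    id?: number;
    method?: string;
    errCode?: number;          // present only on error responses
    errMsg?: string;
    [key: string]: unknown;    // success responses carry transport params
  };
}

function handleServerMessage(raw: string): void {
  const msg = JSON.parse(raw) as ServerMessage;
  const data = msg.data;
  if (!data) {
    return; // message without a data field: nothing to do
  }

  // The crash came from reading fields that only exist on one kind of
  // response. Check for the error fields before assuming the message shape.
  if (typeof data.errCode === "number" && data.errCode !== 0) {
    console.error(`${data.method} failed: ${data.errMsg} (code ${data.errCode})`);
    return; // do not touch success-only fields that are not there
  }

  // ...safe to read the success-path fields (transport parameters etc.)...
}
```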

Webrtc stream local video file

Submitted by こ雲淡風輕ζ on 2021-02-07 10:24:01
Question: How would one stream a local media file (a video file) to peers? (I am using janus-gateway's videoroom plugin for this.) For audio there is WebAudio, but what about the video? Thanks! Update: Maybe someone has an example? Or a small code snippet? Maybe a link to some lib? Answer 1: Render the local video onto a canvas and create a stream object from the canvas element. You can then add the stream to the PeerConnection, and the stream will be sent to the remote peer (Janus, a browser, or any server). Demo: https://webrtc
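A minimal sketch of the canvas approach the answer describes, assuming a <video> element that is already loaded with the local file and a browser that supports canvas.captureStream; the element IDs and the 30 fps rate are illustrative:

```typescript
// Play the local file in a <video>, mirror its frames onto a canvas, and
// send the canvas stream over the peer connection.
const video = document.getElementById("localFile") as HTMLVideoElement;
const canvas = document.getElementById("mirror") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;

function drawLoop(): void {
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  if (!video.paused && !video.ended) {
    requestAnimationFrame(drawLoop); // keep copying frames while playing
  }
}
video.addEventListener("play", () => requestAnimationFrame(drawLoop));

const stream: MediaStream = canvas.captureStream(30); // 30 fps
const pc = new RTCPeerConnection();
stream.getTracks().forEach((track) => pc.addTrack(track, stream));
// ...then proceed with the usual offer/answer exchange (via Janus here)...
```

In browsers that support it, calling captureStream() on the video element itself (mozCaptureStream() in Firefox) can skip the canvas step entirely.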

How to record video with webcam in Safari on iOS and macOS?

Submitted by 青春壹個敷衍的年華 on 2021-02-07 08:00:32
Question: I've explored several paths: 1) Recording video with https://caniuse.com/#feat=html-media-capture. But it works only on iOS and cannot be customized; I need to render a red frame over the video preview layer and limit video length to 30 seconds. 2) Recording with a WebRTC client placed on the server, but I can't find any software to do that. I've found Kurento media server, but its client JS utils library does not support Safari 11. 3) Recording with the Flash plugin, but it is not supported on
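For context: Safari later gained MediaRecorder support (from roughly Safari 14.1), a path that did not exist in the Safari 11 the question targets. Below is a sketch under that assumption; it composites the camera feed with a red frame on a canvas and enforces the 30-second limit, with all names illustrative:

```typescript
// Record the camera with a red frame overlay, stopping after 30 seconds.
// Requires MediaRecorder (Safari 14.1+, and other modern browsers).
async function recordWithRedFrame(): Promise<Blob> {
  const camera = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });

  const video = document.createElement("video");
  video.srcObject = camera;
  video.muted = true;
  await video.play();

  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  const ctx = canvas.getContext("2d")!;

  function draw(): void {
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    ctx.strokeStyle = "red";   // the red frame overlay
    ctx.lineWidth = 8;
    ctx.strokeRect(0, 0, canvas.width, canvas.height);
    requestAnimationFrame(draw);
  }
  draw();

  // Record the composited canvas plus the original audio track.
  const out = canvas.captureStream(30);
  camera.getAudioTracks().forEach((t) => out.addTrack(t));

  const recorder = new MediaRecorder(out);
  const chunks: Blob[] = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);

  return new Promise((resolve) => {
    recorder.onstop = () => resolve(new Blob(chunks, { type: recorder.mimeType }));
    recorder.start();
    setTimeout(() => recorder.stop(), 30_000); // 30-second limit
  });
}
```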

Comparing Media Source Extensions (MSE) with WebRTC

Submitted by 南楼画角 on 2021-02-07 07:15:29
Question: What are the fundamental differences between Media Source Extensions and WebRTC? If I may project my own understanding for a moment: WebRTC includes RTCPeerConnection, which handles getting streams from MediaStream sources and passing them into a protocol for streaming to connected peers of the application. It seems that under the hood WebRTC abstracts away a lot of the bigger issues, like codecs and transcoding. Would this be a correct assessment? Where do Media Source Extensions fit into things? I
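To make the contrast concrete, here is a minimal MSE sketch. With MSE the application itself fetches media segments (typically over HTTP) and appends them to a SourceBuffer, whereas WebRTC hides transport and codec negotiation behind RTCPeerConnection. The URL and codec string below are placeholders:

```typescript
// Minimal MSE sketch for contrast with WebRTC: the page, not the browser,
// drives media delivery by appending segments to a SourceBuffer.
const video = document.querySelector("video")!;
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener("sourceopen", async () => {
  const sb = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E"');
  const segment = await fetch("/segments/init.mp4").then((r) => r.arrayBuffer());
  sb.appendBuffer(segment); // the app decides what to feed the element
  // ...append further segments as they arrive, then mediaSource.endOfStream()...
});
```

This split is why MSE underpins browser-side players for HTTP formats such as DASH and HLS, while WebRTC targets low-latency peer-to-peer media.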

In-Depth Analysis of Reflection Attacks Based on the DTLS Protocol

Submitted by 和自甴很熟 on 2021-02-06 10:25:20
Author: Baidu Security Lab. Original link: https://mp.weixin.qq.com/s/Ye_AuMDLQotv3M5rv9OmOA 0x00 Overview: Marco Hofmann of the German software company Anaxco GmbH was the first to discover attackers using the DTLS service (UDP:443) on Citrix ADC gateways as a reflection source for DDoS amplification attacks; several Chinese security teams have since published analyses of this type of attack. Baidu's 智云盾 (Cloud Shield) team also captured such attacks in January 2021 and, after in-depth analysis, confirmed that they were reflection amplification attacks exploiting the fact that the DTLS service had not enabled the HelloVerifyRequest authentication mechanism. 0x01 Incident review: On December 19, 2020, Marco Hofmann noticed abnormal-traffic alerts in his company's monitoring system. Capturing packets at the company firewall, he found a large number of requests from a single client IP together with massive responses, and traced the traffic to the DTLS service (UDP 443) on the company's Citrix ADC devices. Since the abnormal traffic was already affecting the company network, he decided to block requests to UDP port 443, which mitigated the attack's impact on the network. He then published the attack details on social media, and before long other security researchers posted in the EUC community and on social sites that they had suffered the same kind of attack. Citrix ADC is the collective name for a series of networking products from Citrix; ADC stands for Application Delivery Controller, and the product line is also known as NetScaler
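An illustrative sketch (not a real DTLS stack) of why the missing HelloVerifyRequest step matters: with the cookie exchange enabled, the server answers the first ClientHello with a tiny stateless cookie instead of a large ServerHello/Certificate flight, so a spoofed source address never receives amplified traffic. All names and byte counts below are invented for illustration:

```typescript
type Packet = { sourceAddr: string; cookie?: string };

function makeCookie(addr: string, secret: string): string {
  // Real servers derive the cookie from an HMAC over the client address;
  // a placeholder stands in for that here.
  return `hmac(${secret},${addr})`;
}

function onClientHello(pkt: Packet, secret: string): { to: string; bytes: number } {
  const expected = makeCookie(pkt.sourceAddr, secret);
  if (pkt.cookie !== expected) {
    // Tiny HelloVerifyRequest reply: amplification factor near 1,
    // so reflecting it at a spoofed victim gains the attacker nothing.
    return { to: pkt.sourceAddr, bytes: 28 /* illustrative */ };
  }
  // The cookie round-trip proved the source address is real; only now
  // does the server send the large handshake flight.
  return { to: pkt.sourceAddr, bytes: 3000 /* illustrative */ };
}
```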

Where to store WebRTC streams when building React app with redux

Submitted by 拈花ヽ惹草 on 2021-02-06 02:14:55
Question: I'm building a React.js application that interacts with the WebRTC APIs to do audio/video calling. When a call is successfully established, an 'onaddstream' event is fired on the RTCPeerConnection instance; it contains the stream that I, as the developer, am supposed to connect to a video element to display the remote video to the user. The problem I'm having is understanding the best way to get the stream from the event to the React component for rendering. I have it successfully working by just
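The excerpt cuts off before any resolution, but a common pattern is to keep the MediaStream out of the redux store, since it is a live, non-serializable object, and attach it to the video element through a ref, letting redux track only serializable call state. A sketch with invented component and prop names:

```tsx
import React, { useEffect, useRef } from "react";

// Invented component: receives the MediaStream via a plain prop (or React
// context, or a module-level map keyed by call ID) instead of the redux
// store, because a MediaStream cannot be meaningfully serialized.
function RemoteVideo({ stream }: { stream: MediaStream | null }) {
  const videoRef = useRef<HTMLVideoElement>(null);

  useEffect(() => {
    if (videoRef.current && stream) {
      videoRef.current.srcObject = stream; // attach outside of redux
    }
  }, [stream]);

  return <video ref={videoRef} autoPlay playsInline />;
}

export default RemoteVideo;
```

Redux can still hold the serializable call state (ringing, connected, peer IDs); only the stream object stays outside the store. Newer code would also receive the stream from the ontrack event rather than the deprecated onaddstream, but the storage question is the same.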
