opus

Behind Tencent Meeting's breakout: how does end-to-end real-time voice technology keep conversations flowing?

Submitted by 为君一笑 on 2020-03-25 11:37:58
Tencent Meeting launched last year, scaled up at breakneck speed over two months during the epidemic, and now exceeds 10 million daily active accounts, making it the most widely used video conferencing application in China today. Behind this breakout, how does end-to-end real-time voice technology keep conversations flowing? This article is an edited transcript of a talk by 商世东, senior director of the Audio Technology Center at the Tencent Multimedia Lab, given in the 「云加社区沙龙online」 series, walking from the history of real-time voice communication to the future of the voice experience under 5G.

I. The evolution of communication systems

1. From analog telephones to digital telephones

When it comes to the end-to-end real-time voice solution behind Tencent Meeting, the first thing that may come to mind is the PSTN telephone. Since Bell Labs created the analog telephone, over more than a century of development the voice communication and telephone system has changed dramatically, especially in the last thirty years: voice calls moved from analog signals to digital signals, from fixed-line phones to mobile phones, and from circuit switching to today's packet switching.

The old PSTN telephone system used legacy analog handsets. The advantages of digital over analog telephony are obvious, above all in call quality: resistance to interference and to signal attenuation over long distances is clearly better than in analog phones and systems. So the first step in the evolution of the telephone system was upgrading terminals from analog to digital phones, with the network upgraded to ISDN (Integrated Services Digital Network), capable of carrying both digital voice and data services.

ISDN's most important feature is its support for end-to-end digital connections and its integration of voice and data services, letting both travel over the same network. Fundamentally, however, ISDN is still a circuit-switched network.

Capture technology for mobile real-time audio/video live streaming, explained

Submitted by 随声附和 on 2020-03-19 12:04:06
Capture is the first link in the whole video streaming pipeline: it obtains raw video data from the system's capture devices and hands it on to the next stage. Video capture involves collecting two kinds of data, audio and images, which correspond to two completely different input sources and data formats.

What gets captured

1. Audio capture

Audio data can either be combined with images to form video data or be captured and played back as pure audio; the latter plays an important role in many mature scenarios such as online radio and voice stations. Audio capture mainly means converting analog signals from the environment into raw PCM-encoded data, which is then compressed into formats such as MP3 for distribution. Common compressed audio formats include MP3, AAC, OGG, WMA, Opus, FLAC, APE, M4A, and AMR.

The main challenges in audio capture and encoding are: sensitivity to latency; sensitivity to stutter; noise suppression (denoise); acoustic echo cancellation (AEC); voice activity detection (VAD); and various mixing algorithms.

The main technical parameters to consider at the capture stage are (a capture sketch follows this excerpt):

Sample rate (samplerate): sampling is the process of digitizing an analog signal; the higher the sample rate, the more data it takes to record a given stretch of audio, and the higher the audio quality.

Bit depth: each sample point is represented by a numeric value whose type can be 4-bit, 8-bit, 16-bit, 32-bit and so on; more bits give a finer representation and naturally better sound quality, while the data volume grows proportionally.

Channels (channels):
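The excerpt above breaks off mid-list, but the parameters it names (sample rate, bit depth, channels) map directly onto capture APIs. Below is a minimal, hedged sketch using AVFoundation's AVAudioEngine tap on iOS/macOS; this is one common way to pull raw PCM buffers, not necessarily the approach the article's author had in mind, and it assumes microphone permission has already been granted.

```swift
import AVFoundation

// Tap raw PCM buffers from the input device. In a real pipeline these
// buffers would be handed to an encoder (Opus, AAC, ...).
let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.outputFormat(forBus: 0) // hardware sample rate / channel count

input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
    // buffer.floatChannelData holds the samples; raw data rate is
    // sample rate x bit depth x channel count, as the article describes.
    print("captured \(buffer.frameLength) frames @ \(format.sampleRate) Hz, \(format.channelCount) ch")
}

engine.prepare()
do {
    try engine.start()
} catch {
    print("audio engine failed to start: \(error)")
}
```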

How to encode self-delimited opus in iOS

Submitted by Deadly on 2020-01-25 07:20:07
Question: I can record Opus using AVAudioRecorder as follows:

let opusRecordingSettings = [AVFormatIDKey: kAudioFormatOpus,
                             AVSampleRateKey: 16000.0,
                             AVNumberOfChannelsKey: 1] as [String: Any]
do {
    try audioRecordingSession.setCategory(.playAndRecord, mode: .default)
    try audioRecordingSession.setActive(true)
    audioRecorder = try AVAudioRecorder(url: fileUrl(), settings: opusRecordingSettings)
    audioRecorder.delegate = self
    audioRecorder.prepareToRecord()
    audioRecorder.record()
} catch _ { }
// ... ...
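The excerpt cuts off before any answer. For context: a standard Opus packet does not encode its own length, so some framing is required whenever packets are concatenated into a stream. The sketch below shows a simple length-prefix framing, the same layout as the custom container in the "Playing custom .opus audio file in iOS" question further down; note it is not the RFC 6716 Appendix B self-delimiting format, and frame(packets:) is a hypothetical helper, not an AVFoundation API.

```swift
import Foundation

// Hypothetical length-prefix framing: each encoded Opus packet (at most
// 255 bytes here) is preceded by a single size byte. A DIY container,
// NOT the RFC 6716 Appendix B self-delimiting framing.
func frame(packets: [Data]) -> Data {
    var out = Data()
    for packet in packets {
        precondition(packet.count <= 255, "one size byte limits packets to 255 bytes")
        out.append(UInt8(packet.count))
        out.append(packet)
    }
    return out
}
```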

How to convert .opus file to .mp3/.m4a/.aac in iOS Swift?

Submitted by 风格不统一 on 2020-01-23 15:51:06
Question: I want to play a .opus file using AVAudioPlayer. Since AVAudioPlayer doesn't support .opus files, I am trying to find a way to convert .opus to some other audio format so that I can play it with AVAudioPlayer. Could anyone help me with this? Thank you.

Answer 1: You could use the libopus or libopusfile C libraries to dynamically decode Opus files to raw PCM audio and feed that to a memory buffer that AVAudioPlayer could decode. You would most likely need to prepend the memory buffer with an AIFF/WAV/RIFF header
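Picking up where the answer breaks off: the PCM buffer needs a 44-byte RIFF/WAV header before AVAudioPlayer will recognize it. Below is a minimal sketch of building such a header, assuming 16-bit little-endian PCM as produced by a typical libopusfile decode; wavHeader(pcmByteCount:sampleRate:channels:) is an illustrative helper, not a library function.

```swift
import Foundation

// Build a canonical 44-byte WAV header for 16-bit little-endian PCM.
func wavHeader(pcmByteCount: Int, sampleRate: Int, channels: Int) -> Data {
    let bitsPerSample = 16
    let byteRate = sampleRate * channels * bitsPerSample / 8
    let blockAlign = channels * bitsPerSample / 8
    var header = Data()
    func append32(_ v: Int) { withUnsafeBytes(of: UInt32(v).littleEndian) { header.append(contentsOf: $0) } }
    func append16(_ v: Int) { withUnsafeBytes(of: UInt16(v).littleEndian) { header.append(contentsOf: $0) } }
    header.append(contentsOf: Array("RIFF".utf8))
    append32(36 + pcmByteCount)        // RIFF chunk size: total file size minus 8
    header.append(contentsOf: Array("WAVEfmt ".utf8))
    append32(16)                       // fmt sub-chunk size for plain PCM
    append16(1)                        // audio format 1 = uncompressed PCM
    append16(channels)
    append32(sampleRate)
    append32(byteRate)
    append16(blockAlign)
    append16(bitsPerSample)
    header.append(contentsOf: Array("data".utf8))
    append32(pcmByteCount)             // size of the PCM payload that follows
    return header
}

// Usage: prepend the header, then hand the whole buffer to AVAudioPlayer.
// let wav = wavHeader(pcmByteCount: pcm.count, sampleRate: 48000, channels: 1) + pcm
```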

Clipping sound with Opus on Android, sent from iOS

Submitted by 泪湿孤枕 on 2020-01-23 12:09:43
Question: I am recording audio on iOS from an AudioUnit, encoding the bytes with Opus, and sending them via UDP to the Android side. The problem is that the sound plays back a bit clipped. I have also tested by sending the raw data from iOS to Android, and it plays perfectly. My AudioSession code is:

try audioSession.setCategory(.playAndRecord, mode: .voiceChat, options: [.defaultToSpeaker])
try audioSession.setPreferredIOBufferDuration(0.02)
try audioSession.setActive(true)

My recording callback code is:
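The excerpt ends before the callback code, so the root cause isn't visible here. One plausible culprit in setups like this (an assumption, not a confirmed diagnosis of this asker's bug) is that the AudioUnit callback rarely delivers exactly one Opus frame's worth of samples: setPreferredIOBufferDuration(0.02) is only a hint, while Opus encodes fixed frame sizes (2.5/5/10/20/40/60 ms). A common remedy is to buffer captured samples and only encode complete frames; the sketch below uses hypothetical names.

```swift
import Foundation

// Accumulate captured samples and emit exact Opus-sized frames.
// 320 samples = 20 ms of mono audio at 16 kHz (adjust to your format).
var pending = [Int16]()
let samplesPerFrame = 320

func onCapturedSamples(_ samples: [Int16], encode: ([Int16]) -> Void) {
    pending.append(contentsOf: samples)
    while pending.count >= samplesPerFrame {
        // Hand exactly one frame to the Opus encoder, keep the remainder.
        encode(Array(pending.prefix(samplesPerFrame)))
        pending.removeFirst(samplesPerFrame)
    }
}
```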

Playing custom .opus audio file in iOS

Submitted by 被刻印的时光 ゝ on 2020-01-06 05:36:08
Question: I was able to record and play Opus using AVFoundation. The problem is that I got a custom Opus audio file (coming from the server, also processed on the server) laid out as follows:

| header 1 (1 byte) | opus data 1 (1~255 bytes) | header 2 (1 byte) | opus data 2 (1~255 bytes) | ... | ... |

Each header indicates the size of the Opus data that follows, i.e. if header 1 is 200 (Int) then opus data 1 is 200 bytes. So I am extracting the Opus data and appending it to a Data buffer as follows:

guard let url = Bundle.main.url(forResource: "test
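The extraction code is cut off above, but the container format is simple enough to parse directly. A minimal sketch (splitPackets is a hypothetical helper): read one size byte, slice out that many payload bytes, and repeat. Each returned element is a single Opus packet, to be fed to the decoder one packet at a time rather than as one concatenated blob.

```swift
import Foundation

// Parse the custom "1-byte length, then payload" container described above.
func splitPackets(_ data: Data) -> [Data] {
    var packets = [Data]()
    var index = data.startIndex
    while index < data.endIndex {
        let length = Int(data[index])    // 1-byte header = payload size
        index = data.index(after: index)
        guard let end = data.index(index, offsetBy: length, limitedBy: data.endIndex) else {
            break                        // stop on a truncated final packet
        }
        packets.append(data[index..<end]) // one self-contained Opus packet
        index = end
    }
    return packets
}
```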

Webm (VP8 / Opus) file read and write back

Submitted by 十年热恋 on 2019-12-25 16:54:37
Question: I am trying to develop a WebRTC simulator in C/C++. For media handling, I plan to use libav. I have the following steps in mind to realize media exchange between two WebRTC simulators. Say I have two simulators, A and B:

1. Read media at A from an input WebM file using the av_read_frame API. I assume I will get the encoded media (audio/video) data; am I correct here?
2. Send the encoded media data to simulator B over a UDP socket.
3. Simulator B receives the media data on the UDP socket as RTP packets.

Angular ogv.js audio player controls

Submitted by 耗尽温柔 on 2019-12-24 07:28:38
Question: I'm using ogv.js in Angular 8. I want to play Ogg audio in my browser (Safari), but in the browser only the audio plays, without any controls (play/pause etc.). My component's ts file has:

const ogv = require('ogv');

@ViewChild('ogvContainer', { static: true }) ogvContainer: ElementRef;

ogv.OGVLoader.base = '../../assets/js/ogv';
let player = new ogv.OGVPlayer();
this.ogvContainer.nativeElement.appendChild(player);
let blob = new Blob([new Uint8Array(decodedData)]);
player.src = URL

Decoding opus using libavcodec from FFmpeg

Submitted by 删除回忆录丶 on 2019-12-23 19:23:12
Question: I am trying to decode Opus using libavcodec. I am able to do it using the libopus library alone, but I am trying to achieve the same thing with libavcodec, and I'm trying to figure out why it's not working in my case. I have an RTP stream and am trying to decode it. The result in the decoded packet is the same as the input: a decoded frame should normally contain PCM values, but instead of that I'm receiving the very Opus frame that I sent. Please help me.

av_register_all();
avcodec_register_all();

AVCodec *codec;
AVCodecContext *c =

Use ExternalProject_Add to include Opus in Android

Submitted by 怎甘沉沦 on 2019-12-22 18:17:32
Question: This is probably quite simple. I have an Android project that uses the NDK, and I would like to include the Opus source code in the native code. I have tried using CMake's ExternalProject_Add, but my native code still cannot import headers from the Opus library and fails to build. Below is my ExternalProject_Add definition:

ExternalProject_Add(project_opus
    URL https://archive.mozilla.org/pub/opus/opus-1.2.1.tar.gz
    CONFIGURE_COMMAND <SOURCE_DIR>/configure --prefix=<INSTALL_DIR>
    BUILD