onaudioprocess not called on iOS 11

Submitted by 旧街凉风 on 2019-11-27 00:56:18

Question


I am trying to get audio capture from the microphone working in Safari on iOS 11, now that support has recently been added.

However, the onaudioprocess callback is never called. Here's an example page:

<html>
    <body>
        <button onclick="doIt()">DoIt</button>
        <ul id="logMessages">
        </ul>
        <script>
            function debug(msg) {
                if (typeof msg !== 'undefined') {
                    var logList = document.getElementById('logMessages');
                    var newLogItem = document.createElement('li');
                    if (typeof msg === 'function') {
                        msg = msg.toString();
                    } else if (typeof msg !== 'string') {
                        msg = JSON.stringify(msg);
                    }
                    var newLogText = document.createTextNode(msg);
                    newLogItem.appendChild(newLogText);
                    logList.appendChild(newLogItem);
                }
            }
            function doIt() {
                var handleSuccess = function (stream) {
                    var context = new AudioContext();
                    var input = context.createMediaStreamSource(stream);
                    var processor = context.createScriptProcessor(1024, 1, 1);

                    input.connect(processor);
                    processor.connect(context.destination);

                    processor.onaudioprocess = function (e) {
                        // Do something with the data, i.e Convert this to WAV
                        debug(e.inputBuffer);
                    };
                };

                navigator.mediaDevices.getUserMedia({audio: true, video: false})
                        .then(handleSuccess);
            }
        </script>
    </body>
</html>

On most platforms, you will see items being added to the message list each time the onaudioprocess callback fires. On iOS, however, this callback is never called.

Is there something else that I should do to try and get it called on iOS 11 with Safari?


Answer 1:


There are two problems. The main one is that Safari on iOS 11 seems to automatically suspend new AudioContexts that aren't created in response to a tap. You can resume() them, but only in response to a tap.

(Update: Chrome mobile also does this, and Chrome desktop will have the same limitation starting in version 70 / December 2018.)

So, you have to either create it before you get the MediaStream, or else get the user to tap again later.
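If you go the second route, a minimal sketch (not from the original answer) is to resume an already-created context from inside a tap handler; the button id here is just for illustration:

<button id="enableAudio">Enable audio</button>
<script>
  var AudioContext = window.AudioContext || window.webkitAudioContext;
  var context = new AudioContext(); // starts out suspended on iOS Safari

  document.getElementById('enableAudio').addEventListener('click', function () {
    // resume() only takes effect when called from a user-gesture handler
    context.resume();
  });
</script>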

The other issue with your code is that AudioContext is prefixed as webkitAudioContext in Safari.

Here's a working version:

<html>
<body>
<button onclick="beginAudioCapture()">Begin Audio Capture</button>
<script>
  function beginAudioCapture() {

    var AudioContext = window.AudioContext || window.webkitAudioContext;    
    var context = new AudioContext();
    var processor = context.createScriptProcessor(1024, 1, 1);
    processor.connect(context.destination);

    var handleSuccess = function (stream) {
      var input = context.createMediaStreamSource(stream);
      input.connect(processor);

      var receivedAudio = false;
      processor.onaudioprocess = function (e) {
        // This will be called multiple times per second.
        // The audio data will be in e.inputBuffer
        if (!receivedAudio) {
          receivedAudio = true;
          console.log('got audio', e);
        }
      };
    };

    navigator.mediaDevices.getUserMedia({audio: true, video: false})
      .then(handleSuccess);
  }
</script>
</body>
</html>

(You can set the onaudioprocess callback sooner, but then you get empty buffers until the user approves of microphone access.)
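If you do attach the callback early, one sketch (not part of the original answer) of how to skip those empty buffers is to ignore frames whose samples are all zero:

processor.onaudioprocess = function (e) {
  var samples = e.inputBuffer.getChannelData(0);
  // Before the user approves microphone access, every sample is 0
  var hasSignal = false;
  for (var i = 0; i < samples.length; i++) {
    if (samples[i] !== 0) {
      hasSignal = true;
      break;
    }
  }
  if (hasSignal) {
    console.log('got real audio', e);
  }
};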

Oh, and one other iOS bug to watch out for: Safari on the iPod touch (as of iOS 12.1.1) reports that it does not have a microphone (it does). So, getUserMedia will reject with an "Invalid constraint" error if you ask for audio there.
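Whatever the cause of a failure, it is worth handling the rejection explicitly; a minimal sketch:

navigator.mediaDevices.getUserMedia({audio: true, video: false})
  .then(handleSuccess)
  .catch(function (err) {
    // e.g. "Invalid constraint" on the iPod touch, or a permission denial
    console.error('getUserMedia failed:', err.name, err.message);
  });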

FYI: I maintain the microphone-stream package on npm that does this for you and provides the audio in a Node.js-style ReadableStream. I just updated it with this fix, if you or anyone else would prefer to use that over the raw code.




Answer 2:


Tried it on iOS 11.0.1, and unfortunately this problem still isn't fixed.

As a workaround, I wonder if it makes sense to replace the ScriptProcessor with a function that takes the stream data from a buffer and then processes it every x milliseconds. But that's a big change to the functionality.
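One way to sketch that idea (my own illustration, not the answerer's code; it assumes AnalyserNode and getByteTimeDomainData are available, and the context still has to be created or resumed in a tap handler as described in Answer 1):

var AudioContext = window.AudioContext || window.webkitAudioContext;
var context = new AudioContext();
var analyser = context.createAnalyser();
analyser.fftSize = 2048;
var samples = new Uint8Array(analyser.fftSize);

navigator.mediaDevices.getUserMedia({audio: true, video: false})
  .then(function (stream) {
    context.createMediaStreamSource(stream).connect(analyser);
    setInterval(function () {
      // Copy the most recent time-domain samples out of the analyser
      analyser.getByteTimeDomainData(samples);
      console.log('first sample:', samples[0]); // 128 means silence
    }, 100); // the 100 ms interval is arbitrary
  });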




Answer 3:


Just wondering... do you have the camera and microphone access setting enabled in Safari's settings? It is enabled by default in iOS 11, but maybe you disabled it without noticing.



Source: https://stackoverflow.com/questions/46363048/onaudioprocess-not-called-on-ios11
