Here is a sample of the relevant code I'm working on:
AudioRecord recorder = setupAudio();
recorder.startRecording();
The setupAudio() method:
public AudioRecord setupAudio() {
    AudioRecord recorder;
    // minBufferSizeInBytes and RECORDER_SAMPLERATE are fields of the enclosing class
    minBufferSizeInBytes = AudioRecord.getMinBufferSize(
            RECORDER_SAMPLERATE, AudioFormat.CHANNEL_IN_MONO,
            AudioFormat.ENCODING_PCM_16BIT);
    Log.e("MoverAudio", "BufferSize: " + minBufferSizeInBytes);
    recorder = new AudioRecord(MediaRecorder.AudioSource.CAMCORDER,
            RECORDER_SAMPLERATE, AudioFormat.CHANNEL_IN_MONO,
            AudioFormat.ENCODING_PCM_16BIT, minBufferSizeInBytes);
    return recorder;
}
with RECORDER_SAMPLERATE = 8000;
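For reference, a minimal sketch of how the two phases can be timed (using SystemClock.elapsedRealtime(); this is not the exact measurement code used for the numbers below):

// Hypothetical timing sketch: measures setupAudio() and startRecording() separately.
long t0 = SystemClock.elapsedRealtime();
AudioRecord recorder = setupAudio();
long t1 = SystemClock.elapsedRealtime();
recorder.startRecording();
long t2 = SystemClock.elapsedRealtime();
Log.d("MoverAudio", "setupAudio: " + (t1 - t0) + " ms, startRecording: " + (t2 - t1) + " ms");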
I'm trying to find out if there is any way to reduce the initialization time.
I'm currently testing on three devices, with the following results:
Galaxy S3
- setupAudio : ~200ms
- startRecording() : ~280ms
Galaxy S3 mini
- setupAudio : ~10ms
- startRecording() : ~290ms
Galaxy Nexus
- setupAudio : ~10ms
- startRecording() : ~235ms
Buffer sizes (from getMinBufferSize, in bytes):
- Galaxy Nexus: 704
- Galaxy S3: 1024
- Galaxy S3 mini: 640
However, only the data from the Galaxy Nexus is usable. For my application I need to get the audio data as soon as possible, and with the current values only the Nexus is within an acceptable time.
The S3 mini may look fast, since it takes only slightly longer than the Nexus, but the first ~200 ms of samples are all zeros, so it's not usable.
From analyzing the gathered data, the audio on the S3 and S3 mini appears to be filtered somewhere: the resulting FFT is much cleaner and the low-frequency content is consistently much weaker. Here is an example of audio recorded on both the S3 mini and the Galaxy Nexus:
http://img41.imageshack.us/img41/4177/ox7h.png S3 Mini
http://img690.imageshack.us/img690/8717/iya6.png Galaxy Nexus
If you request a long buffer, then you have to wait for the OS to fill it at the current sample rate. If you request a sample rate other than the one the hardware ADC is running at, you additionally have to wait for the resampler's filter delay. Different Android devices and OS versions may support different minimum buffer sizes and native hardware sample rates.
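To put a rough number on the buffer-fill part alone (my arithmetic, not from the question, and it ignores that the minimum buffer size itself changes with the requested rate):

// Rough fill-time estimate for a 16-bit mono buffer.
int bufferSizeInBytes = 1024;   // e.g. the S3's minimum buffer from above
int sampleRate = 8000;          // RECORDER_SAMPLERATE in the question
double fillTimeMs = 1000.0 * (bufferSizeInBytes / 2.0) / sampleRate;  // 2 bytes per sample
// ~64 ms for 1024 bytes at 8 kHz; the same buffer at 44.1 kHz would fill in ~11.6 ms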
One technique to hide the latency is to start recording earlier in the app's life cycle, and just keep throwing away audio samples until the app needs them. Then there is no startup overhead.
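A minimal sketch of that pre-warm idea (the audioNeeded flag and the drain thread are illustrative, not from the answer):

// Start recording early (e.g. in onCreate/onResume) and discard samples
// on a background thread until the app actually wants them.
final AudioRecord recorder = setupAudio();
recorder.startRecording();
final short[] scratch = new short[minBufferSizeInBytes / 2];
new Thread(new Runnable() {
    @Override
    public void run() {
        // audioNeeded is assumed to be a volatile boolean field,
        // set to true once the app wants to keep the data.
        while (!audioNeeded) {
            recorder.read(scratch, 0, scratch.length);  // throw these samples away
        }
        // from here on, pass the results of read() to the rest of the app
    }
}).start();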
Added: On some devices/OS versions, the data may really be captured into longer OS driver buffers at some hardware sample rate (for instance 4096 samples at 44.1 kHz or 48 kHz), and only after a couple of those buffers have been filled, converted to another sample rate, and then chopped into the shorter requested buffer lengths does the audio stack start sending data to the app. To bypass this, if it is even possible, you might need to mod the OS and write your own ADC driver. But first try using a higher sample rate (44.1 kHz or 48 kHz) and requesting shorter buffers.
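A sketch of that last suggestion (same CAMCORDER source as the question; whether 44.1 kHz is actually the native rate is device-dependent):

// Request the likely-native rate with the smallest buffer the device accepts.
int sampleRate = 44100;
int minSize = AudioRecord.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
if (minSize > 0) {
    AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.CAMCORDER,
            sampleRate, AudioFormat.CHANNEL_IN_MONO,
            AudioFormat.ENCODING_PCM_16BIT, minSize);
    if (recorder.getState() == AudioRecord.STATE_INITIALIZED) {
        recorder.startRecording();
    }
}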
Source: https://stackoverflow.com/questions/18590225/android-audiorecord-initialization-delay