Question
I'm a chemistry student trying to use NAudio in C# to gather data from my computer's microphone (planning on switching to an audio port later, in case that's pertinent to how this gets answered). I understand what source streams are, and how NAudio uses an event handler to know when to start reading information from said stream, but I get stumped when it comes to working with the data that has been read from the stream. As I understand it, a buffer array is populated from the source stream (via the AddSamples method) in either byte or WAV format. For now, all I'm trying to do is populate the buffer and write its contents to the console, or make a simple visualization. I can't seem to get my values out of the buffer, and I've tried treating it as both a WAV and a byte array. Can someone give me a hand in understanding how NAudio works from the ground up, and how to extract the data from the buffer and store it in a more useful format (e.g., doubles)? Here's the code I have so far for handling NAudio and all that comes with it:
public NAudio.Wave.BufferedWaveProvider waveBuffer = null; // buffer that holds the incoming audio
NAudio.Wave.WaveIn sourceStream = null;                    // incoming audio stream

public void startRecording(int samplingFrequency, int deviceNumber, string fileName)
{
    sourceStream = new NAudio.Wave.WaveIn();  // initializes incoming audio stream
    sourceStream.DeviceNumber = deviceNumber; // specifies microphone device number
    sourceStream.WaveFormat = new NAudio.Wave.WaveFormat(samplingFrequency, NAudio.Wave.WaveIn.GetCapabilities(deviceNumber).Channels); // sampling frequency and channel count
    waveBuffer = new NAudio.Wave.BufferedWaveProvider(sourceStream.WaveFormat); // initializes buffer

    sourceStream.DataAvailable += new EventHandler<NAudio.Wave.WaveInEventArgs>(sourceStream_DataAvailable); // raised whenever incoming audio is available

    sourceStream.StartRecording();
    PauseForMilliSeconds(500);    // delay before recording is stopped
    sourceStream.StopRecording(); // terminates recording
    sourceStream.Dispose();
    sourceStream = null;
}

void sourceStream_DataAvailable(object sender, NAudio.Wave.WaveInEventArgs e)
{
    waveBuffer.AddSamples(e.Buffer, 0, e.BytesRecorded); // populate buffer with audio stream
    waveBuffer.DiscardOnBufferOverflow = true;
}
Answer 1:
Disclaimer: I don't have that much experience with NAudio.
It kind of depends on what you want to do with the audio data.
If you simply want to store or dump the data (be it to a file or just the console) then you don't need a BufferedWaveProvider. Just do whatever you want to do directly in the event handler sourceStream_DataAvailable(). But keep in mind that you receive the data as raw bytes, i.e. how many bytes actually constitute a single frame (a.k.a. sample) of the recorded audio depends on the wave format:
var bytesPerFrame = sourceStream.WaveFormat.BitsPerSample / 8
                    * sourceStream.WaveFormat.Channels;
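For example, with 16-bit PCM (the usual WaveIn capture format) each sample is a little-endian short. Here is a minimal sketch, not from the original answer, of dumping the raw bytes as normalized doubles straight from the event handler; it assumes 16-bit PCM and would need adjusting for other bit depths:

// Sketch, assuming 16-bit PCM. With multiple channels the samples are interleaved (L, R, L, R, ...).
void sourceStream_DataAvailable(object sender, NAudio.Wave.WaveInEventArgs e)
{
    int bytesPerSample = sourceStream.WaveFormat.BitsPerSample / 8; // 2 for 16-bit PCM
    for (int i = 0; i + bytesPerSample <= e.BytesRecorded; i += bytesPerSample)
    {
        short sample = BitConverter.ToInt16(e.Buffer, i); // decode one raw 16-bit sample
        double normalized = sample / 32768.0;             // scale to roughly [-1.0, 1.0]
        Console.WriteLine(normalized);
    }
}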
If you want to analyze the data (Fourier analysis with an FFT, for instance) then I suggest using NAudio's ISampleProvider. This interface hides all the raw-byte and bit-depth details and lets you access the data frame by frame in an easy manner.
First create an ISampleProvider from your BufferedWaveProvider like so:

var samples = waveBuffer.ToSampleProvider();
You can then access sample frames with the Read() method. Make sure to check whether data is actually available with the BufferedBytes property on your BufferedWaveProvider:
while (true)
{
    var bufferedFrames = waveBuffer.BufferedBytes / bytesPerFrame;
    if (bufferedFrames < 1)
        continue;
    var frames = new float[bufferedFrames];
    samples.Read(frames, 0, bufferedFrames);
    DoSomethingWith(frames);
}
Because you want to do two things at once -- recording and analyzing audio data concurrently -- you should use two separate threads for this.
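To make that concrete, here is a minimal sketch, not from the original answer, of the two-thread split: the DataAvailable callback keeps filling waveBuffer on WaveIn's own thread, while a background task drains it. The names waveBuffer, samples, and bytesPerFrame refer to the snippets above, and the short Thread.Sleep is my addition to avoid the busy-wait in the loop shown earlier:

var cts = new System.Threading.CancellationTokenSource();
System.Threading.Tasks.Task.Run(() =>
{
    while (!cts.Token.IsCancellationRequested)
    {
        var bufferedFrames = waveBuffer.BufferedBytes / bytesPerFrame;
        if (bufferedFrames < 1)
        {
            System.Threading.Thread.Sleep(10); // wait for more audio instead of spinning
            continue;
        }
        var frames = new float[bufferedFrames];
        samples.Read(frames, 0, bufferedFrames);
        DoSomethingWith(frames); // e.g. FFT, console output, ...
    }
});
// ... later, when recording stops:
cts.Cancel();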
There is a small GitHub project that uses NAudio for DTMF analysis of recorded audio data. You might wanna have a look to get some ideas how to bring it all together. The file DtmfDetector\Program.cs there is a good starting point.
For a quick start that should give you "more coherent" output, try the following:
Add this field to your class:
ISampleProvider samples;
Add this line to your startRecording() method:

samples = waveBuffer.ToSampleProvider();
Extend sourceStream_DataAvailable() like so:
void sourceStream_DataAvailable(object sender, NAudio.Wave.WaveInEventArgs e)
{
    waveBuffer.AddSamples(e.Buffer, 0, e.BytesRecorded);
    waveBuffer.DiscardOnBufferOverflow = true;

    var bytesPerFrame = sourceStream.WaveFormat.BitsPerSample / 8
                        * sourceStream.WaveFormat.Channels;
    var bufferedFrames = waveBuffer.BufferedBytes / bytesPerFrame;
    var frames = new float[bufferedFrames];
    samples.Read(frames, 0, bufferedFrames);
    foreach (var frame in frames)
        Debug.WriteLine(frame); // requires using System.Diagnostics;
}
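And since the question specifically asked for doubles: the float frames an ISampleProvider returns are already normalized to roughly [-1.0, 1.0], so converting them (my addition, not part of the original answer) is a one-liner:

var doubles = Array.ConvertAll(frames, f => (double)f);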
Source: https://stackoverflow.com/questions/37148997/trying-to-understand-buffers-with-regard-to-naudio-in-c-sharp