Question
I am currently working on a senior design project: a Windows Forms app that lets users plug in their guitar, apply distortion effects in real time while hearing the input played back as output, and auto-tab the result into ASCII tabs. Right now I am trying to get the real-time listening portion working. The recording and the distortion effects themselves work fine; I am just having some issues using ASIO. I've looked at the post How to record and playback with NAudio using AsioOut, but it was not much help with my issue. Here is my code:
private BufferedWaveProvider buffer;
private AsioOut input;
private AsioOut output;

private void listenBtn_Click(object sender, EventArgs e)
{
    input = new AsioOut(RecordInCbox.SelectedIndex);
    WaveFormat format = new WaveFormat();
    buffer = new BufferedWaveProvider(format);
    buffer.DiscardOnBufferOverflow = true;
    input.InitRecordAndPlayback(buffer, 1, 44100);
    input.AudioAvailable += new EventHandler<AsioAudioAvailableEventArgs>(AudioAvailable);
    //output = new AsioOut(RecordInCbox.SelectedIndex);
    //output.Init(buffer);
    input.Play();
    //output.Play();
}

public void AudioAvailable(object sender, AsioAudioAvailableEventArgs e)
{
    byte[] buf = new byte[e.SamplesPerBuffer];
    e.WrittenToOutputBuffers = true;
    for (int i = 0; i < e.InputBuffers.Length; i++)
    {
        Array.Copy(e.InputBuffers, e.OutputBuffers, 1);
        Marshal.Copy(e.InputBuffers[i], buf, 0, e.SamplesPerBuffer);
        buffer.AddSamples(buf, 0, buf.Length);
    }
}
Currently it is getting the audio and pushing it into the buffer, but the output is not working. I can get the guitar to play if I enable "Listen to this device" in the Windows recording settings, but that defeats the purpose, since I want to apply my distortion and hear that as the output. Thanks!
Answer 1:
You don't need to add samples to the buffer. The buffer only serves to determine the number of output channels you want. I did it this way:
[DllImport("Kernel32.dll", EntryPoint = "RtlMoveMemory", SetLastError = false)]
private static extern void MoveMemory(IntPtr dest, IntPtr src, int size);

private void OnAudioAvailable(object sender, AsioAudioAvailableEventArgs e)
{
    for (int i = 0; i < e.InputBuffers.Length; i++)
    {
        // Copy each input buffer straight to the matching output buffer.
        // The size argument to RtlMoveMemory is in bytes; here it is
        // samples-per-buffer times the number of input channels.
        MoveMemory(e.OutputBuffers[i], e.InputBuffers[i], e.SamplesPerBuffer * e.InputBuffers.Length);
    }
    e.WrittenToOutputBuffers = true;
}
But done this way there is a bit of latency and a bit of echo, and I don't know how to solve them. So if you have any ideas, I'm here to listen.
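For context, a minimal sketch of how this handler might be wired up, following the question's setup; the driver index, the stereo format, and the variable names are assumptions. As this answer notes, the provider passed to InitRecordAndPlayback is never read here, it only determines how many output channels to open:

// Minimal wiring sketch (assumptions: driver index 0, stereo output format).
var format = new WaveFormat(44100, 2);
var channelSelector = new BufferedWaveProvider(format); // only its channel count matters here

var asio = new AsioOut(0);                              // first installed ASIO driver
asio.InitRecordAndPlayback(channelSelector, 1, 44100);  // 1 record channel at 44.1 kHz
asio.AudioAvailable += OnAudioAvailable;                // the handler shown above
asio.Play();                                            // starts the ASIO callback loop

With WrittenToOutputBuffers set to true, AsioOut plays what the handler copied into the output buffers rather than pulling from the provider.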
Answer 2:
Unfortunately I could not find a way to get ASIO to work, but I came up with an alternative method that works just as well. As for latency, I got it down to 50 ms and have been looking into the NAudio source to see if there might be a way to get it below that (roughly 20-30 ms) for better real-time play.
private BufferedWaveProvider buffer;
private WaveOut waveOut;
private WaveIn sourceStream = null;
private bool listen = false;

private void listenBtn_Click(object sender, EventArgs e)
{
    listen = !listen;
    if (listen)
        listenBtn.Text = "Stop listening";
    else
    {
        listenBtn.Text = "Listen";
        sourceStream.StopRecording();
        return;
    }

    sourceStream = new WaveIn();
    sourceStream.WaveFormat = new WaveFormat(44100, 1);
    waveOut = new WaveOut(WaveCallbackInfo.FunctionCallback());
    sourceStream.DataAvailable += new EventHandler<WaveInEventArgs>(sourceStream_DataAvailable);
    sourceStream.RecordingStopped += new EventHandler<StoppedEventArgs>(sourceStream_RecordingStopped);
    buffer = new BufferedWaveProvider(sourceStream.WaveFormat);
    buffer.DiscardOnBufferOverflow = true;
    waveOut.DesiredLatency = 51;
    waveOut.Volume = 1f;
    waveOut.Init(buffer);
    sourceStream.StartRecording();
}

private void sourceStream_DataAvailable(object sender, WaveInEventArgs e)
{
    buffer.AddSamples(e.Buffer, 0, e.BytesRecorded);
    waveOut.Play();
}

private void sourceStream_RecordingStopped(object sender, StoppedEventArgs e)
{
    sourceStream.Dispose();
    waveOut.Dispose();
}
Again, I understand this is not using ASIO, but it was a better alternative given the resources and documentation I had available. Instead of using ASIO, I create a WaveIn and mock a "recording", but instead of writing the stream to a file I push it into a WaveOut buffer, which lets it play after I do some sound manipulation.
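As a sketch of where that sound manipulation could slot in, here is a hypothetical hard-clipping distortion applied to each captured block. It assumes the 16-bit mono WaveFormat(44100, 1) set above; the method name, gain, and threshold are made up for illustration:

// Hypothetical distortion pass over 16-bit little-endian mono PCM.
private void ApplyDistortion(byte[] pcm, int bytesRecorded)
{
    const float gain = 4.0f;        // made-up drive amount
    const short threshold = 20000;  // made-up clipping level

    for (int i = 0; i + 1 < bytesRecorded; i += 2)
    {
        short sample = BitConverter.ToInt16(pcm, i);
        int driven = (int)(sample * gain);
        if (driven > threshold) driven = threshold;         // clip positive peaks
        else if (driven < -threshold) driven = -threshold;  // clip negative peaks
        byte[] clipped = BitConverter.GetBytes((short)driven);
        pcm[i] = clipped[0];
        pcm[i + 1] = clipped[1];
    }
}

Calling ApplyDistortion(e.Buffer, e.BytesRecorded) at the top of sourceStream_DataAvailable, before buffer.AddSamples, would apply the effect in place to each block before it reaches the output.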
Answer 3:
Maybe I am wrong, but I have successfully managed simultaneous ASIO record and playback using NAudio with very low latencies (on very cheap USB audio hardware ;).
Instead of the event handler method used in your first example, you may try this:
private float[] recordingBuffer = null;
private byte[] recordingByteBuffer = null;
private BufferedWaveProvider bufferedWaveProvider;
private BufferedSampleProvider bsp;
private SampleToWaveProvider swp;

// somewhere in e.g. the constructor:
// set up our signal chain
bufferedWaveProvider = new BufferedWaveProvider(waveFormat);
//bufferedWaveProvider.DiscardOnBufferOverflow = true;
bsp = new BufferedSampleProvider(waveFormat);
swp = new SampleToWaveProvider(bsp);
// ...

private void OnAudioAvailable(object sender, AsioAudioAvailableEventArgs e)
{
    this.recordingBuffer = BufferHelpers.Ensure(this.recordingBuffer, e.SamplesPerBuffer * e.InputBuffers.Length);
    this.recordingByteBuffer = BufferHelpers.Ensure(this.recordingByteBuffer, e.SamplesPerBuffer * 4 * e.InputBuffers.Length);
    int count = e.GetAsInterleavedSamples(this.recordingBuffer);
    this.bsp.CurrentBuffer = this.recordingBuffer;
    int count2 = this.swp.Read(this.recordingByteBuffer, 0, count * 4);
    bufferedWaveProvider.AddSamples(this.recordingByteBuffer, 0, this.recordingByteBuffer.Length);
}
with class BufferedSampleProvider.cs:
public class BufferedSampleProvider : ISampleProvider
{
    private WaveFormat waveFormat;
    private float[] currentBuffer;

    public BufferedSampleProvider(WaveFormat waveFormat)
    {
        this.waveFormat = waveFormat;
        this.currentBuffer = null;
    }

    public float[] CurrentBuffer
    {
        get { return this.currentBuffer; }
        set { this.currentBuffer = value; }
    }

    public int Read(float[] buffer, int offset, int count)
    {
        // Note: offset is ignored; samples are always copied from the
        // start of the current buffer into the start of the destination.
        if (this.currentBuffer != null)
        {
            if (count <= currentBuffer.Length)
            {
                for (int i = 0; i < count; i++)
                {
                    buffer[i] = this.currentBuffer[i];
                }
                return count;
            }
        }
        return 0;
    }

    public WaveFormat WaveFormat
    {
        get { return this.waveFormat; }
    }
}
I did it this (messy) way because otherwise I would have to copy the bytes out of the ASIO buffers depending on the sample byte count and so on (look at the source code of the GetAsInterleavedSamples(...) method). To keep it simple for myself, I used a BufferedWaveProvider to be really sure there are enough (filled) buffers on the output side of my signal chain, even though I don't strictly need it; it's just safe. After several processing blocks following this provider, the chain ends up in the last provider, "output". That last provider was passed into
asioOut.InitRecordAndPlayback(output, this.InputChannels, this.SampleRate);
when initializing the objects. Even with many processing blocks in my chain, I get no audible drops or buzzing with an ASIO buffer size of 512 samples, but I think this really depends on the ASIO hardware used. The most important thing for me was to be sure that input and output stay in sync.
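Putting the pieces together, a rough sketch of that initialization could look like this, with no extra processing blocks, so the BufferedWaveProvider itself acts as the final "output" provider. The IEEE-float format, channel counts, and driver index are assumptions:

// Rough assembly sketch (assumptions: IEEE-float stereo format, driver index 0).
var waveFormat = WaveFormat.CreateIeeeFloatWaveFormat(44100, 2);
bufferedWaveProvider = new BufferedWaveProvider(waveFormat);
bsp = new BufferedSampleProvider(waveFormat);
swp = new SampleToWaveProvider(bsp);

var asioOut = new AsioOut(0);
asioOut.AudioAvailable += OnAudioAvailable;
// With no further processing blocks, the BufferedWaveProvider is the last
// provider in the chain and plays the role of "output" here.
asioOut.InitRecordAndPlayback(bufferedWaveProvider, 2, 44100);
asioOut.Play();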
For comparison: if I use WaveIn/WaveOutEvent in the same way, I can reach nearly the same latency (on the same cheap hardware), but since my tests also ran across two separate sound devices, the input buffer duration increases over time due to drops or non-synchronous audio clocks ;) To reach very low latency even in a WPF application, I had to patch the WaveOutEvent class to raise the priority of the playback thread to the highest possible, which helps against most of the possible GC interruptions.
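The patch itself is not shown in this answer; as a sketch of the idea only, assuming a local copy of WaveOutEvent whose playback loop runs in a method like the hypothetical PlaybackThread below, the change amounts to one line:

// Sketch only: inside a local copy of NAudio's WaveOutEvent, raise the
// priority of the thread that drives the playback loop. The method name
// PlaybackThread is an assumption about the class's internal structure.
private void PlaybackThread()
{
    Thread.CurrentThread.Priority = ThreadPriority.Highest; // the added line
    // ... original buffer-filling loop continues unchanged ...
}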
Currently, it seems that by using the ASIO interface I have avoided this GC problem altogether.
Hope this helps.
Source: https://stackoverflow.com/questions/22543513/naudio-using-asio-to-record-audio-and-output-with-a-guitar