I'm trying to encode an ID3D11Texture2D to MP4 using Media Foundation. Below is my current code.
Initializing Sink Writer
private int InitializeSinkWriter(String outputFile, int videoWidth, int videoHeight)
{
IMFMediaType mediaTypeIn = null;
IMFMediaType mediaTypeOut = null;
IMFAttributes attributes = null;
int hr = 0;
if (Succeeded(hr)) hr = (int)MFExtern.MFCreateAttributes(out attributes, 1);
if (Succeeded(hr)) hr = (int)attributes.SetUINT32(MFAttributesClsid.MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS, 1);
if (Succeeded(hr)) hr = (int)attributes.SetUINT32(MFAttributesClsid.MF_LOW_LATENCY, 1);
// Create the sink writer
if (Succeeded(hr)) hr = (int)MFExtern.MFCreateSinkWriterFromURL(outputFile, null, attributes, out sinkWriter);
// Create the output type
if (Succeeded(hr)) hr = (int)MFExtern.MFCreateMediaType(out mediaTypeOut);
if (Succeeded(hr)) hr = (int)mediaTypeOut.SetGUID(MFAttributesClsid.MF_MT_MAJOR_TYPE, MFMediaType.Video);
if (Succeeded(hr)) hr = (int)mediaTypeOut.SetGUID(MFAttributesClsid.MF_TRANSCODE_CONTAINERTYPE, MFTranscodeContainerType.MPEG4);
if (Succeeded(hr)) hr = (int)mediaTypeOut.SetGUID(MFAttributesClsid.MF_MT_SUBTYPE, MFMediaType.H264);
if (Succeeded(hr)) hr = (int)mediaTypeOut.SetUINT32(MFAttributesClsid.MF_MT_AVG_BITRATE, videoBitRate);
if (Succeeded(hr)) hr = (int)mediaTypeOut.SetUINT32(MFAttributesClsid.MF_MT_INTERLACE_MODE, (int)MFVideoInterlaceMode.Progressive);
if (Succeeded(hr)) hr = (int)MFExtern.MFSetAttributeSize(mediaTypeOut, MFAttributesClsid.MF_MT_FRAME_SIZE, videoWidth, videoHeight);
if (Succeeded(hr)) hr = (int)MFExtern.MFSetAttributeRatio(mediaTypeOut, MFAttributesClsid.MF_MT_FRAME_RATE, VIDEO_FPS, 1);
if (Succeeded(hr)) hr = (int)MFExtern.MFSetAttributeRatio(mediaTypeOut, MFAttributesClsid.MF_MT_PIXEL_ASPECT_RATIO, 1, 1);
if (Succeeded(hr)) hr = (int)sinkWriter.AddStream(mediaTypeOut, out streamIndex);
// Create the input type
if (Succeeded(hr)) hr = (int)MFExtern.MFCreateMediaType(out mediaTypeIn);
if (Succeeded(hr)) hr = (int)mediaTypeIn.SetGUID(MFAttributesClsid.MF_MT_MAJOR_TYPE, MFMediaType.Video);
if (Succeeded(hr)) hr = (int)mediaTypeIn.SetGUID(MFAttributesClsid.MF_MT_SUBTYPE, MFMediaType.ARGB32);
if (Succeeded(hr)) hr = (int)mediaTypeIn.SetUINT32(MFAttributesClsid.MF_SA_D3D11_AWARE, 1);
if (Succeeded(hr)) hr = (int)mediaTypeIn.SetUINT32(MFAttributesClsid.MF_MT_INTERLACE_MODE, (int)MFVideoInterlaceMode.Progressive);
if (Succeeded(hr)) hr = (int)MFExtern.MFSetAttributeSize(mediaTypeIn, MFAttributesClsid.MF_MT_FRAME_SIZE, videoWidth, videoHeight);
if (Succeeded(hr)) hr = (int)MFExtern.MFSetAttributeRatio(mediaTypeIn, MFAttributesClsid.MF_MT_FRAME_RATE, VIDEO_FPS, 1);
if (Succeeded(hr)) hr = (int)MFExtern.MFSetAttributeRatio(mediaTypeIn, MFAttributesClsid.MF_MT_PIXEL_ASPECT_RATIO, 1, 1);
if (Succeeded(hr)) hr = (int)sinkWriter.SetInputMediaType(streamIndex, mediaTypeIn, null);
// Start accepting data
if (Succeeded(hr)) hr = (int)sinkWriter.BeginWriting();
COMBase.SafeRelease(mediaTypeOut);
COMBase.SafeRelease(mediaTypeIn);
return hr;
}
Writing frame
int hr = 0;
IMFSample sample = null;
IMFMediaBuffer buffer = null;
IMF2DBuffer p2Dbuffer = null;
object texNativeObject = Marshal.GetObjectForIUnknown(surface.NativePointer);
if (Succeeded(hr)) hr = (int)MFExtern.MFCreateDXGISurfaceBuffer(new Guid("6f15aaf2-d208-4e89-9ab4-489535d34f9c"), texNativeObject, 0, false, out p2Dbuffer);
buffer = MFVideoEncoderST.ReinterpretCast<IMF2DBuffer,IMFMediaBuffer>(p2Dbuffer);
int length=0;
if (Succeeded(hr)) hr = (int)p2Dbuffer.GetContiguousLength(out length);
if (Succeeded(hr)) hr = (int)buffer.SetCurrentLength(length);
if (Succeeded(hr)) hr = (int)MFExtern.MFCreateVideoSampleFromSurface(null, out sample);
if (Succeeded(hr)) hr = (int)sample.AddBuffer(buffer);
if (Succeeded(hr)) hr = (int)sample.SetSampleTime(prevRecordingDuration);
if (Succeeded(hr)) hr = (int)sample.SetSampleDuration((recordDuration - prevRecordingDuration));
if (Succeeded(hr)) hr = (int)sinkWriter.WriteSample(streamIndex, sample);
COMBase.SafeRelease(sample);
COMBase.SafeRelease(buffer);
Using MFTRACE, I'm getting the error below.
02:48:04.99463 CMFSinkWriterDetours::WriteSample @024BEA18 Stream Index 0x0, Sample @17CEACE0, Time 571ms, Duration 16ms, Buffers 1, Size 4196352B,2088,2008 02:48:04.99465 CMFSinkWriterDetours::WriteSample @024BEA18 failed hr=0x887A0005 (null)2088,2008
02:48:05.01090 CMFSinkWriterDetours::WriteSample @024BEA18 Stream Index 0x0, Sample @17CE9FC0, Time 587ms, Duration 17ms, Buffers 1, Size 4196352B,2088,2008 02:48:05.01091 CMFSinkWriterDetours::WriteSample @024BEA18 failed hr=0x887A0005 (null)2088,2008
02:48:05.02712 CMFSinkWriterDetours::WriteSample @024BEA18 Stream Index 0x0, Sample @17CEACE0, Time 604ms, Duration 16ms, Buffers 1, Size 4196352B,2088,2008 02:48:05.02713 CMFSinkWriterDetours::WriteSample @024BEA18 failed hr=0x887A0005 (null)
Can anyone tell me what's wrong with my code? It only produces a 0-byte MP4 file.
There are a few potential problems that occur to me here. Roman mentioned the two big ones so I'll elaborate on those. I have a couple other critiques / suggestions for you as well.
Not using IMFDXGIDeviceManager
In order to use hardware acceleration within Media Foundation, you need to create a DirectX device manager object: either an IDirect3DDeviceManager9 for DX9 or, in your case, an IMFDXGIDeviceManager for DXGI. I strongly suggest reading all the MSDN documentation for that interface. The reason this is necessary is that the same DX device must be shared across all the cooperating hardware MF transforms being used, since they all need access to the shared GPU memory the device controls, and each one needs exclusive control of the device while it's working, so a locking system is needed. The device manager object provides that locking system, and it is also the standard way of providing a DX device to one or more transforms. For DXGI, you create it using MFCreateDXGIDeviceManager.
From there, you need to create your DX11 device and call IMFDXGIDeviceManager::ResetDevice with your DX11 device. You then need to set the device manager on the Sink Writer itself, which is not done in the code you provided above. That is accomplished like this:
// ... inside your InitializeSinkWriter function that you listed above
// I'm assuming you've already created and set up the DXGI device manager elsewhere
IMFDXGIDeviceManager pDeviceManager;
// Passing 3 as the argument because we're adding 3 attributes immediately, saves re-allocations
if (Succeeded(hr)) hr = (int)MFExtern.MFCreateAttributes(out attributes, 3);
if (Succeeded(hr)) hr = (int)attributes.SetUINT32(MFAttributesClsid.MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS, 1);
if (Succeeded(hr)) hr = (int)attributes.SetUINT32(MFAttributesClsid.MF_LOW_LATENCY, 1);
// Here's the key piece!
if (Succeeded(hr)) hr = (int)attributes.SetUnknown(MFAttributesClsid.MF_SINK_WRITER_D3D_MANAGER, pDeviceManager);
// Create the sink writer
if (Succeeded(hr)) hr = (int)MFExtern.MFCreateSinkWriterFromURL(outputFile, null, attributes, out sinkWriter);
This will actually enable D3D11 support for the hardware encoder and allow it to read the Texture2D you're passing in. It's worth noting that MF_SINK_WRITER_D3D_MANAGER works for both DX9 and DXGI device managers.
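For completeness, here is a minimal sketch of the device-manager setup itself. The names d3dDevice, resetToken and pDeviceManager are placeholders, and d3dDevice is assumed to be the same SharpDX.Direct3D11.Device that creates the textures you capture:
// Minimal sketch - the device manager must be reset with the SAME D3D11
// device that owns the textures you later wrap in samples.
int resetToken;
IMFDXGIDeviceManager pDeviceManager;
int hr = (int)MFExtern.MFCreateDXGIDeviceManager(out resetToken, out pDeviceManager);
if (Succeeded(hr))
{
    // ResetDevice expects the device as a .NET RCW (object), not a raw IntPtr
    object deviceAsObject = Marshal.GetObjectForIUnknown(d3dDevice.NativePointer);
    hr = (int)pDeviceManager.ResetDevice(deviceAsObject, resetToken);
    COMBase.SafeRelease(deviceAsObject);
}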
Encoder buffering multiple IMFSample instances of the same texture
This is a potential cause of your problem as well - at the very least it will lead to a lot of unintended behavior even if it isn't the cause of the obvious problem. Building off Roman's comment, many encoders will buffer multiple frames as part of their encoding process. You don't see that behavior when using the Sink Writer because it handles all the detail work for you. However, what you are trying to accomplish (i.e. sending D3D11 textures as input frames) is sufficiently low level that you start having to worry about the internal details of the encoder MFT being used by the Sink Writer.
Most video encoder MFTs will use an internal buffer of some size to store the last N samples provided via IMFTransform::ProcessInput. This has the side effect that multiple samples must be provided as inputs before any output will be generated. Video encoders need access to multiple samples because they use the subsequent frames to determine how to encode the current frame. In other words, if the encoder is working on frame 0, it might need to see frames 1, 2, and 3 as well. From a technical standpoint, this is because of things like inter-frame prediction and motion estimation. Once the encoder is finished processing the oldest sample, it generates an output buffer (another IMFSample object, but this time on the output side via IMFTransform::ProcessOutput), then discards the input sample it was working on (by calling IUnknown::Release), then requests more input, and eventually moves on to the next frame. You can read more about this process in the MSDN article Processing Data in the Encoder.
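To make the consequence of that buffering concrete, here is a purely conceptual sketch (it is not the real IMFTransform implementation, just an illustration of a lookahead window): a sample you feed in is held, and still referenced, until several newer frames have arrived.
using System.Collections.Generic;

// Conceptual illustration only - shows why the sample you hand to the encoder
// is kept alive (and still read from) for several frames before being released.
class ConceptualEncoder
{
    private readonly Queue<IMFSample> m_pending = new Queue<IMFSample>();
    private const int Lookahead = 4; // hypothetical lookahead depth

    public void FeedFrame(IMFSample input)
    {
        m_pending.Enqueue(input);        // the encoder keeps a reference to YOUR sample
        if (m_pending.Count > Lookahead)
        {
            IMFSample oldest = m_pending.Dequeue();
            // ... encode 'oldest', consulting the newer frames for prediction ...
            COMBase.SafeRelease(oldest); // only now is your sample released
        }
    }
}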
What this means, as Roman alluded to, is that you are encapsulating an ID3D11Texture2D inside an IMFMediaBuffer inside an IMFSample and then passing that to the Sink Writer. That sample is likely being buffered by the encoder as part of the encoding process. As the encoder is working, the contents of that Texture2D are probably changing, which can cause a variety of problems. Even if that didn't cause program errors, it would certainly lead to very strange encoded video output. Imagine if the encoder is trying to predict how the visual content of one frame changes in the next frame, and then the actual visual content of both frames is updated out from under the encoder!
This specific problem is happening because the encoder only has a pointer reference to your IMFSample instance, which is ultimately just a pointer itself to your ID3D11Texture2D object, and that object is a kind of pointer reference to mutable graphics memory. Ultimately, the contents of that graphics memory are changing due to some other part of your program, but because it's always the same GPU texture being updated, every single sample you send the encoder points to the same single texture. That means whenever you update the texture by changing GPU memory, all the active IMFSample objects will reflect those changes, since they are all effectively pointing to the same GPU texture.
To fix this, you'll need to allocate multiple ID3D11Texture2D objects, so that you can pair up one texture with one IMFSample when providing it to the Sink Writer. This fixes the problem of all the samples pointing to the same single GPU texture by making each sample point to a unique texture. You won't necessarily know how many textures you need to create, though, so the safest way to handle this is to write your own texture allocator. This can still be done within C#, for what it's worth; MediaFoundation.NET has the interfaces defined that you'll need to use.
The allocator should maintain a list of "free" SharpDX.Texture2D objects - those that are not currently in use by the Sink Writer / encoder. Your program should be able to request new texture objects from the allocator, in which case it will either return an object from the free list or create a new texture to accommodate the request.
The next problem is knowing when the IMFSample object has been discarded by the encoder, so that you can add the attached texture back to the free list. As it happens, the MFCreateVideoSampleFromSurface function you're currently using allocates samples that implement the IMFTrackedSample interface. You'll need that interface so that you can be notified when the sample is freed and can reclaim the Texture2D objects.
The trick is that you have to tell the sample that you are the allocator. First, your allocator class needs to implement IMFAsyncCallback. If you set your allocator class on the sample via IMFTrackedSample::SetAllocator, your allocator's IMFAsyncCallback::Invoke method will be called, with an IMFAsyncResult passed as an argument, whenever the encoder releases the sample. Here's a general example of what that allocator class could look like.
sealed class TextureAllocator : IMFAsyncCallback, IDisposable
{
private ConcurrentStack<SharpDX.Direct3D11.Texture2D> m_freeStack;
private static readonly Guid s_IID_ID3D11Texture2D = new Guid("6f15aaf2-d208-4e89-9ab4-489535d34f9c");
// If all textures are the exact same size and color format,
// consider making those parameters private class members and
// requiring they be specified as arguments to the constructor.
public TextureAllocator()
{
m_freeStack = new ConcurrentStack<SharpDX.Direct3D11.Texture2D>();
}
private bool disposedValue = false;
private void Dispose(bool disposing)
{
if(!disposedValue)
{
if(disposing)
{
// Dispose managed resources here
}
if(m_freeStack != null)
{
SharpDX.Direct3D11.Texture2D texture;
while(m_freeStack.TryPop(out texture))
{
texture.Dispose();
}
m_freeStack = null;
}
disposedValue = true;
}
}
public void Dispose()
{
Dispose(true);
GC.SuppressFinalize(this);
}
~TextureAllocator()
{
Dispose(false);
}
private SharpDX.Direct3D11.Texture2D InternalAllocateNewTexture()
{
// Allocate and return a new texture with your format, size, etc. here,
// e.g. new SharpDX.Direct3D11.Texture2D(device, textureDescription).
// Throwing keeps this placeholder compilable until you fill it in.
throw new NotImplementedException();
}
public SharpDX.Direct3D11.Texture2D AllocateTexture()
{
SharpDX.Direct3D11.Texture2D existingTexture;
if(m_freeStack.TryPop(out existingTexture))
{
return existingTexture;
}
else
{
return InternalAllocateNewTexture();
}
}
public IMFSample CreateSampleAndAllocateTexture()
{
IMFSample pSample;
IMFTrackedSample pTrackedSample;
HResult hr;
// Create the video sample. This function returns an IMFTrackedSample per MSDN
hr = MFExtern.MFCreateVideoSampleFromSurface(null, out pSample);
MFError.ThrowExceptionForHR(hr);
// Query the IMFSample to see if it implements IMFTrackedSample
pTrackedSample = pSample as IMFTrackedSample;
if(pTrackedSample == null)
{
// Throw an exception if we didn't get an IMFTrackedSample
// but this shouldn't happen in practice.
throw new InvalidCastException("MFCreateVideoSampleFromSurface returned a sample that did not implement IMFTrackedSample");
}
// Use our own class to allocate a texture
SharpDX.Direct3D11.Texture2D availableTexture = AllocateTexture();
// Convert the texture's native ID3D11Texture2D pointer into
// an IUnknown (represented as a System.Object)
object texNativeObject = Marshal.GetObjectForIUnknown(availableTexture.NativePointer);
// Create the media buffer from the texture
IMFMediaBuffer p2DBuffer;
hr = MFExtern.MFCreateDXGISurfaceBuffer(s_IID_ID3D11Texture2D, texNativeObject, 0, false, out p2DBuffer);
// Release the object-as-IUnknown we created above
COMBase.SafeRelease(texNativeObject);
// If media buffer creation failed, throw an exception
MFError.ThrowExceptionForHR(hr);
// Set the owning instance of this class as the allocator
// for IMFTrackedSample to notify when the sample is released
pTrackedSample.SetAllocator(this, null);
// Attach the created buffer to the sample
pSample.AddBuffer(p2DBuffer);
return pSample;
}
// This is public so any textures you allocate but don't make IMFSamples
// out of can be returned to the allocator manually.
public void ReturnFreeTexture(SharpDX.Direct3D11.Texture2D freeTexture)
{
m_freeStack.Push(freeTexture);
}
// IMFAsyncCallback.GetParameters
// This is allowed to return E_NOTIMPL as a way of specifying
// there are no special parameters.
public HResult GetParameters(out MFAsync pdwFlags, out MFAsyncCallbackQueue pdwQueue)
{
pdwFlags = MFAsync.None;
pdwQueue = MFAsyncCallbackQueue.Standard;
return HResult.E_NOTIMPL;
}
public HResult Invoke(IMFAsyncResult pResult)
{
object pUnkObject;
IMFSample pSample = null;
IMFMediaBuffer pBuffer = null;
IMFDXGIBuffer pDXGIBuffer = null;
// Get the IUnknown out of the IMFAsyncResult if there is one
HResult hr = pResult.GetObject(out pUnkObject);
if(Succeeded(hr))
{
pSample = pUnkObject as IMFSample;
}
if(pSample != null)
{
// Based on your implementation, there should only be one
// buffer attached to one sample, so we can always grab the
// first buffer. You could add some error checking here to make
// sure the sample has a buffer count that is 1.
hr = pSample.GetBufferByIndex(0, out pBuffer);
}
if(Succeeded(hr))
{
// Query the IMFMediaBuffer to see if it implements IMFDXGIBuffer
pDXGIBuffer = pBuffer as IMFDXGIBuffer;
}
if(pDXGIBuffer != null)
{
// Got an IMFDXGIBuffer, so we can extract the internal
// ID3D11Texture2D and make a new SharpDX.Texture2D wrapper.
hr = pDXGIBuffer.GetResource(s_IID_ID3D11Texture2D, out pUnkObject);
}
if(Succeeded(hr))
{
// If we got here, pUnkObject is the native D3D11 Texture2D as
// a System.Object, but it's unlikely you have an interface
// definition for ID3D11Texture2D handy, so we can't just cast
// the object to the proper interface.
// Happily, SharpDX supports wrapping System.Object within
// SharpDX.ComObject which makes things pretty easy.
SharpDX.ComObject comWrapper = new SharpDX.ComObject(pUnkObject);
// If this doesn't work, or you're using something like SlimDX
// which doesn't support object wrapping the same way, the below
// code is an alternative way.
/*
IntPtr pD3DTexture2D = Marshal.GetIUnknownForObject(pUnkObject);
// Create your wrapper object here, like this for SharpDX
SharpDX.ComObject comWrapper = new SharpDX.ComObject(pD3DTexture2D);
// or like this for SlimDX
SlimDX.Direct3D11.Texture2D.FromPointer(pD3DTexture2D);
Marshal.Release(pD3DTexture2D);
*/
// You might need to query comWrapper for a SharpDX.DXGI.Resource
// first, then query that for the SharpDX.Direct3D11.Texture2D.
SharpDX.Direct3D11.Texture2D texture = comWrapper.QueryInterface<SharpDX.Direct3D11.Texture2D>();
if(texture != null)
{
// Now you can add "texture" back to the allocator's free list
ReturnFreeTexture(texture);
}
}
return hr;
}
}
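To tie the allocator into your frame-writing code, a hypothetical usage sketch follows. The names m_allocator, d3dDevice and desktopTexture are placeholders and not part of your original code; the rest mirrors the variables from your question.
// Hypothetical usage sketch - every frame gets its own texture, so the encoder
// can safely buffer the sample while you keep capturing new frames.
IMFSample sample = m_allocator.CreateSampleAndAllocateTexture();

// Pull the sample's own texture back out of its IMFDXGIBuffer...
IMFMediaBuffer buffer;
MFError.ThrowExceptionForHR(sample.GetBufferByIndex(0, out buffer));
IMFDXGIBuffer dxgiBuffer = (IMFDXGIBuffer)buffer;
object texObject;
MFError.ThrowExceptionForHR(dxgiBuffer.GetResource(new Guid("6f15aaf2-d208-4e89-9ab4-489535d34f9c"), out texObject));

// ...wrap it with SharpDX and copy the freshly captured desktop frame into it.
using (var comWrapper = new SharpDX.ComObject(Marshal.GetIUnknownForObject(texObject)))
using (var sampleTexture = comWrapper.QueryInterface<SharpDX.Direct3D11.Texture2D>())
{
    d3dDevice.ImmediateContext.CopyResource(desktopTexture, sampleTexture);
}
COMBase.SafeRelease(texObject);

// Timestamp the sample and hand it to the Sink Writer as before.
sample.SetSampleTime(prevRecordingDuration);
sample.SetSampleDuration(recordDuration - prevRecordingDuration);
sinkWriter.WriteSample(streamIndex, sample);
COMBase.SafeRelease(buffer);
COMBase.SafeRelease(sample);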
Setting MF_SA_D3D_AWARE on the Sink Writer input media type
I don't think this is causing the bad HRESULT you're getting, but it isn't the right thing to do regardless. MF_SA_D3D_AWARE (and its DX11 counterpart, MF_SA_D3D11_AWARE) are attributes set by an IMFTransform object to inform you that the transform supports graphics acceleration via DX9 or DX11, respectively. There is no need to set this on the Sink Writer's input media type.
No SafeRelease on texNativeObject
I'd recommend calling COMBase.SafeRelease() on texNativeObject, or you may leak memory; at the very least, you'll prolong the lifetime of that COM object unnecessarily until the GC cleans up the reference count for you.
Unnecessary casting
This is part of your code from above:
buffer = MFVideoEncoderST.ReinterpretCast<IMF2DBuffer,IMFMediaBuffer>(p2Dbuffer);
int length=0;
if (Succeeded(hr)) hr = (int)p2Dbuffer.GetContiguousLength(out length);
if (Succeeded(hr)) hr = (int)buffer.SetCurrentLength(length);
I'm not sure what your ReinterpretCast function is doing, but if you do need to perform a QueryInterface-style cast in C#, you can just use the as operator or a regular cast.
// pMediaBuffer is of type IMFMediaBuffer and has been created elsewhere
IMF2DBuffer p2DBuffer = pMediaBuffer as IMF2DBuffer;
if(p2DBuffer != null)
{
// pMediaBuffer is an IMFMediaBuffer that also implements IMF2DBuffer
}
else
{
// pMediaBuffer does not implement IMF2DBuffer
}
First issue: IMFDXGIDeviceManager::ResetDevice always fails.
After working with @kripto in the comments of my earlier answer, we diagnosed a lot of other issues. The biggest issue of all was setting up the IMFDXGIDeviceManager in order to enable a hardware H.264 encoder MFT to accept Direct3D11 Texture2D samples, contained inside an IMFDXGIBuffer. There was a very hard-to-notice mistake in the code:
// pDevice is a SharpDX.Direct3D11.Device instance
// pDevice.NativePointer is an IntPtr that refers to the native ID3D11Device
// being wrapped by SharpDX.
IMFDXGIDeviceManager pDeviceManager;
object d3dDevice = Marshal.GetIUnknownForObject(pDevice.NativePointer);
HResult hr = MFExtern.MFCreateDXGIDeviceManager(out resetToken, out pDeviceManager);
if(Succeeded(hr))
{
// The signature of this is:
// HResult ResetDevice(object d3d11device, int resetToken);
hr = pDeviceManager.ResetDevice(d3dDevice, resetToken);
}
Here's what is happening in the above code. The device manager is created, but in order for the encoder MFT to access Texture2D samples, it needs a copy of the same Direct3D device that created the textures. Therefore, you have to call IMFDXGIDeviceManager::ResetDevice on the device manager in order to provide it with the Direct3D device. See [1] for some important footnotes on ResetDevice. SharpDX only provides access to the IntPtr that points to the native ID3D11Device, but the MediaFoundation.NET interface requires an object to be passed in instead.
See the error yet? The above code type-checks and compiles just fine, but contains a critical error. The mistake was using Marshal.GetIUnknownForObject instead of Marshal.GetObjectForIUnknown. Amusingly, because object can box an IntPtr, you can use the exact opposite marshalling function and it still compiles. The issue is that we need to convert an IntPtr into a .NET RCW inside an object, which is what ResetDevice from MediaFoundation.NET is expecting. This error caused ResetDevice to return E_INVALIDARG instead of working correctly.
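For reference, the corrected call looks like this (same variables as the snippet above):
// GetObjectForIUnknown turns the native ID3D11Device pointer into an RCW
// (a .NET object), which is what MediaFoundation.NET's ResetDevice expects.
object d3dDevice = Marshal.GetObjectForIUnknown(pDevice.NativePointer);
HResult hr = MFExtern.MFCreateDXGIDeviceManager(out resetToken, out pDeviceManager);
if(Succeeded(hr))
{
    hr = pDeviceManager.ResetDevice(d3dDevice, resetToken);
}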
Second issue: strange encoder output
A second problem was that the Intel Quick Sync Video H.264 Encoder MFT was not particularly happy, and although it was being created correctly, there was a second or two of black output at the start of the resulting file, as well as blocking and motion errors for the first few seconds, sometimes with half of the video being gray and not showing the actual duplicated desktop image.
I wanted to make sure the actual Texture2D objects were being sent to the encoder correctly, so I wrote a simple class to dump a Direct3D 11 Texture2D to a .png file. I've included it here for anyone else who needs it - it requires SharpDX and MediaFoundation.NET to work, although you could remove the MF dependency by using CopyMemory in a loop to account for the different strides. Note that this is only set up to work with textures in DXGI.Format.B8G8R8A8_UNorm format. It will probably work with textures in other formats, but the output will look very odd.
using System;
using System.Drawing;
namespace ScreenCapture
{
class Texture2DDownload : IDisposable
{
private SharpDX.Direct3D11.Device m_pDevice;
private SharpDX.Direct3D11.Texture2D m_pDebugTexture;
public Texture2DDownload(SharpDX.Direct3D11.Device pDevice)
{
m_pDevice = pDevice;
}
/// <summary>
/// Compare all the relevant properties of the texture descriptions for both input textures.
/// </summary>
/// <param name="texSource">The source texture</param>
/// <param name="texDest">The destination texture that will have the source data copied into it</param>
/// <returns>true if the source texture can be copied to the destination, false if their descriptions are incompatible</returns>
public static bool TextureCanBeCopied(SharpDX.Direct3D11.Texture2D texSource, SharpDX.Direct3D11.Texture2D texDest)
{
if(texSource.Description.ArraySize != texDest.Description.ArraySize)
return false;
if(texSource.Description.Format != texDest.Description.Format)
return false;
if(texSource.Description.Height != texDest.Description.Height)
return false;
if(texSource.Description.MipLevels != texDest.Description.MipLevels)
return false;
if(texSource.Description.SampleDescription.Count != texDest.Description.SampleDescription.Count)
return false;
if(texSource.Description.SampleDescription.Quality != texDest.Description.SampleDescription.Quality)
return false;
if(texSource.Description.Width != texDest.Description.Width)
return false;
return true;
}
/// <summary>
/// Saves the contents of a <see cref="SharpDX.Direct3D11.Texture2D"/> to a file with name contained in <paramref name="filename"/> using the specified <see cref="System.Drawing.Imaging.ImageFormat"/>.
/// </summary>
/// <param name="texture">The <see cref="SharpDX.Direct3D11.Texture2D"/> containing the data to save.</param>
/// <param name="filename">The filename on disk where the output image should be saved.</param>
/// <param name="imageFormat">The format to use when saving the output file.</param>
public void SaveTextureToFile(SharpDX.Direct3D11.Texture2D texture, string filename, System.Drawing.Imaging.ImageFormat imageFormat)
{
// If the existing debug texture doesn't exist, or the incoming texture is different than the existing debug texture...
if(m_pDebugTexture == null || !TextureCanBeCopied(m_pDebugTexture, texture))
{
// Dispose of any existing texture
if(m_pDebugTexture != null)
{
m_pDebugTexture.Dispose();
}
// Copy the original texture's description...
SharpDX.Direct3D11.Texture2DDescription newDescription = texture.Description;
// Then modify the parameters to create a CPU-readable staging texture
newDescription.BindFlags = SharpDX.Direct3D11.BindFlags.None;
newDescription.CpuAccessFlags = SharpDX.Direct3D11.CpuAccessFlags.Read;
newDescription.OptionFlags = SharpDX.Direct3D11.ResourceOptionFlags.None;
newDescription.Usage = SharpDX.Direct3D11.ResourceUsage.Staging;
// Re-generate the debug texture by copying the new texture's description
m_pDebugTexture = new SharpDX.Direct3D11.Texture2D(m_pDevice, newDescription);
}
// Copy the texture to our debug texture
m_pDevice.ImmediateContext.CopyResource(texture, m_pDebugTexture);
// Map the debug texture's resource 0 for read mode
SharpDX.DataStream data;
SharpDX.DataBox dbox = m_pDevice.ImmediateContext.MapSubresource(m_pDebugTexture, 0, 0, SharpDX.Direct3D11.MapMode.Read, SharpDX.Direct3D11.MapFlags.None, out data);
// Create a bitmap that's the same size as the debug texture
Bitmap b = new Bitmap(m_pDebugTexture.Description.Width, m_pDebugTexture.Description.Height, System.Drawing.Imaging.PixelFormat.Format32bppRgb);
// Lock the bitmap data to get access to the native bitmap pointer
System.Drawing.Imaging.BitmapData bd = b.LockBits(new Rectangle(0, 0, b.Width, b.Height), System.Drawing.Imaging.ImageLockMode.WriteOnly, System.Drawing.Imaging.PixelFormat.Format32bppRgb);
// Use the native pointers to do a native-to-native memory copy from the mapped subresource to the bitmap data
// WARNING: This might totally blow up if you're using a different color format than B8G8R8A8_UNorm, I don't know how planar formats are structured as D3D textures!
//
// You can use Win32 CopyMemory to do the below copy if need be, but you have to do it in a loop to respect the Stride and RowPitch parameters in case the texture width
// isn't on an aligned byte boundary.
MediaFoundation.MFExtern.MFCopyImage(bd.Scan0, bd.Stride, dbox.DataPointer, dbox.RowPitch, bd.Width * 4, bd.Height);
// Unlock the bitmap
b.UnlockBits(bd);
// Unmap the subresource mapping, ignore the SharpDX.DataStream because we don't need it.
m_pDevice.ImmediateContext.UnmapSubresource(m_pDebugTexture, 0);
data = null;
// Save the bitmap to the desired filename
b.Save(filename, imageFormat);
b.Dispose();
b = null;
}
#region IDisposable Support
private bool disposedValue = false; // To detect redundant calls
protected virtual void Dispose(bool disposing)
{
if(!disposedValue)
{
if(disposing)
{
}
if(m_pDebugTexture != null)
{
m_pDebugTexture.Dispose();
}
disposedValue = true;
}
}
// TODO: override a finalizer only if Dispose(bool disposing) above has code to free unmanaged resources.
~Texture2DDownload() {
// Do not change this code. Put cleanup code in Dispose(bool disposing) above.
Dispose(false);
}
// This code added to correctly implement the disposable pattern.
public void Dispose()
{
// Do not change this code. Put cleanup code in Dispose(bool disposing) above.
Dispose(true);
GC.SuppressFinalize(this);
}
#endregion
}
}
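A quick usage sketch; d3dDevice and capturedTexture are placeholder names for your own device and the frame you want to inspect:
// Dump one captured frame to disk to verify what the encoder will actually see.
using (var download = new Texture2DDownload(d3dDevice))
{
    download.SaveTextureToFile(capturedTexture, "frame_debug.png", System.Drawing.Imaging.ImageFormat.Png);
}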
Once I verified I had good images on their way into the encoder, I found that the code was not calling IMFSinkWriter::SendStreamTick after calling IMFSinkWriter::BeginWriting but before sending the first IMFSample. The initial sample also had a non-zero time delta, which was causing the initial black output. To fix this, I added the following code:
// Existing code to set the sample time and duration
// recordDuration is the current frame time in 100-nanosecond units
// prevRecordingDuration is the frame time of the last frame in
// 100-nanosecond units
sample.SetSampleTime(recordDuration);
sample.SetSampleDuration(recordDuration - prevRecordingDuration);
// The fix is here:
if(frames == 0)
{
sinkWriter.SendStreamTick(streamIndex, recordDuration);
sample.SetUINT32(MFAttributesClsid.MFSampleExtension_Discontinuity, 1);
}
sinkWriter.WriteSample(streamIndex, sample);
frames++;
Sending a stream tick to the sink writer establishes that whatever value is in recordDuration is now considered the time = 0 point for the output video stream. In other words, once you call SendStreamTick and pass in a frame timestamp, all subsequent timestamps have that initial timestamp subtracted from them. This is how you make the first sample frame show up immediately in the output file.
In addition, whenever SendStreamTick is called, the sample given to the sink writer directly afterward must have MFSampleExtension_Discontinuity set to 1 on its attributes. This means that there has been a gap in the samples being sent, and the frame being handed to the encoder is the first frame after that gap. This more or less tells the encoder to make a keyframe out of the sample, which prevents the motion and blocking effects that I was seeing before.
Results
Once these fixes were implemented, I tested the app and achieved full-screen capture at 1920x1080 resolution and 60 frames per second. The bit rate was set to 4096 kbit/s. CPU usage on an Intel i7-4510U laptop CPU was between 2.5% and 7% for most workloads; extreme amounts of motion would push it up to about 10%. GPU utilization, measured via SysInternals' Process Explorer, was between 1% and 2%.
[1] I believe some of this is a relic from Direct3D 9, when multithreading support was not well built into the DirectX API and the device had to be exclusively locked whenever any component was using it (i.e. decoder, renderer, encoder). Using D3D 9, you call ResetDevice but then can never use your own pointer to the device again. Instead, you have to call LockDevice and UnlockDevice even in your own code to get the device pointer, because an MFT could be using the device at that same moment. In Direct3D 11, there seems to be no issue using the same device simultaneously in the MFT and in the controlling application, although if any random crashes happen I'd advise reading a great deal about how IMFDXGIDeviceManager::LockDevice and UnlockDevice work and implementing those to make sure the device is always exclusively controlled.

Source: https://stackoverflow.com/questions/44402898/mf-sinkwriter-write-sample-failed