We have a D2007 application whose memory footprint grows steadily when running on Windows Server 2008 (x64, SP1).
It behaves normally on Windows Server 2003 (x86 or x64), XP, etc., where memory usage goes up and down as expected.
We have tried both the included Memory Manager and the latest FastMM4 (4.92), with the same results.
Has anyone monitored the memory usage of a Delphi app on Win2008 and can confirm this behavior?
Or does anyone have a clue what causes it?
Clarifications:
- no memory leaks in the common sense (and yes, I'm quite familiar with FastMM et al.)
- memory usage was monitored with Process Explorer; both Virtual Memory (Private Bytes) and Physical Memory (WorkingSet Private) keep growing on Win2008
- memory consumption kept growing even under memory pressure (that's how we came to investigate: it caused a failure, but only on Win2008 boxes)
Update: the //** replaced **// code is much simpler than our app but shows the same behavior.
Creating a list of 10,000,000 objects and then 10,000,000 interfaces, executed twice, grows the used memory by ~60 MB, and by roughly 300 MB more after 100 further executions on Windows Server 2008; on XP, memory simply returns to where it was.
If you launch multiple instances, the memory is not released to allow the other instances to run. Instead, the page file grows and the server crawls...
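The original repro code was stripped from the post (the //** replaced **// marker above), so the following is only a hypothetical reconstruction of the kind of test described, using the standard Contnrs.TObjectList and Classes.TInterfaceList:

uses
  Classes, Contnrs;

procedure StressObjectsAndInterfaces; // hypothetical reconstruction, not the original code
const
  ITEM_COUNT = 10000000; // 10,000,000 items, as described above
var
  objList: TObjectList;
  intfList: TInterfaceList;
  idx: Integer;
begin
  // 10,000,000 plain objects, owned and destroyed by the list
  objList := TObjectList.Create(True);
  try
    for idx := 1 to ITEM_COUNT do
      objList.Add(TObject.Create);
  finally
    objList.Free;
  end;

  // 10,000,000 reference-counted interfaces, released along with the list
  intfList := TInterfaceList.Create;
  try
    for idx := 1 to ITEM_COUNT do
      intfList.Add(TInterfacedObject.Create);
  finally
    intfList.Free;
  end;
end;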
Update 2: see QC report 73347
After further investigation, we have tracked it down to Critical Sections as shown in the code below.
Put that code into a simple VCL application with a button, and monitor it with Process Explorer:
it starts at ~2.6 MB, and after 5 runs (clicks on the button) it sits at ~118.6 MB.
That's 116 MB lost in 5 executions.
//***********************
const
  CS_NUMBER = 10000000;

type
  TCSArray = array[1..CS_NUMBER] of TRTLCriticalSection;
  PCSArray = ^TCSArray;

procedure TestStatic;
var
  csArray: PCSArray;
  idx: Integer;
begin
  // heap-allocate the static array (far too big for the stack)
  New(csArray);
  for idx := Low(csArray^) to High(csArray^) do
    InitializeCriticalSection(csArray^[idx]);
  for idx := Low(csArray^) to High(csArray^) do
    DeleteCriticalSection(csArray^[idx]);
  Dispose(csArray);
end;

procedure TestDynamic(const Number: Integer);
var
  csArray: array of TRTLCriticalSection;
  idx: Integer;
begin
  SetLength(csArray, Number);
  for idx := Low(csArray) to High(csArray) do
    InitializeCriticalSection(csArray[idx]);
  for idx := Low(csArray) to High(csArray) do
    DeleteCriticalSection(csArray[idx]);
end;

procedure TForm4.Button1Click(Sender: TObject);
begin
  ReportMemoryLeaksOnShutdown := True;
  TestStatic;
  TestDynamic(CS_NUMBER);
end;
There is a new Sysinternals tool called VMMap which visualizes allocated memory. Maybe it can show you what the big memory blocks are.
Actually, Microsoft made a change to critical sections to add some debug information. This debug memory is not released until the end of the application, but it is somehow cached and reused, which is why the usage can plateau after a while.
The solution, if you want to create a lot of critical sections without paying this memory penalty, is to patch the VCL code to replace calls to InitializeCriticalSection with calls to InitializeCriticalSectionEx, passing it the flag CRITICAL_SECTION_NO_DEBUG_INFO to avoid the creation of the debug structure.
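As a minimal sketch of that approach (my code, not from the original post; it assumes Delphi 2007, whose Windows unit does not declare InitializeCriticalSectionEx, so the function is loaded dynamically):

uses
  Windows;

const
  CRITICAL_SECTION_NO_DEBUG_INFO = $01000000;

type
  TInitCSExProc = function(var lpCriticalSection: TRTLCriticalSection;
    dwSpinCount, Flags: DWORD): BOOL; stdcall;

procedure InitCSWithoutDebugInfo(var CS: TRTLCriticalSection);
var
  InitCSEx: TInitCSExProc;
begin
  // InitializeCriticalSectionEx only exists on Vista / Server 2008 and later
  @InitCSEx := GetProcAddress(GetModuleHandle('kernel32.dll'),
    'InitializeCriticalSectionEx');
  if Assigned(InitCSEx) then
    InitCSEx(CS, 0, CRITICAL_SECTION_NO_DEBUG_INFO) // skip the debug structure
  else
    InitializeCriticalSection(CS); // pre-Vista fallback
end;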
Did you include FastMM with full debug mode? Just include the FastMM4 unit directly in your project (see the sketch below) and set
ReportMemoryLeaksOnShutdown := True
If nothing is reported, maybe everything is freed normally on program exit (perhaps because of reference counting). You could use AQTime to monitor memory in real time: it shows the byte counts for each class name as well as for the rest of the used memory, so you may be able to see who is using the memory. The time-limited demo version is enough for this job.
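For reference, a typical .dpr wiring for that (project and unit names here are hypothetical; FastMM4 must be the very first unit so it installs itself before any allocation, and full debug mode is switched on via the FullDebugMode define in FastMM4Options.inc):

program MyApp; // hypothetical project name

uses
  FastMM4, // must come first so FastMM replaces the memory manager early
  Forms,
  MainUnit in 'MainUnit.pas' {Form1};

{$R *.res}

begin
  ReportMemoryLeaksOnShutdown := True;
  Application.Initialize;
  Application.CreateForm(TForm1, Form1);
  Application.Run;
end.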
Are you referring to Private Bytes, Virtual Size, or the Working Set? Run Process Explorer from SysInternals to monitor the memory for a better idea of what is going on.
I don't have any specific experience with this (although I am running 2008 x64 SP1, so I could test it), but I suggest you create a test application that allocates a bunch of memory and then frees it. Run Process Explorer from SysInternals to monitor the memory.
If your test application reproduces the same behavior, try creating some memory pressure by allocating memory in another process: so much that the allocation will fail unless the previously freed memory in the first process is reclaimed (a sketch of such a pressure tool follows below).
If that still fails, try a different memory manager. Maybe it is FastMM that is doing it.
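A throwaway pressure tool along those lines might look like the following sketch (my own illustration, assuming a 32-bit console app; it commits 100 MB blocks until VirtualAlloc fails, then holds them until you press Enter):

program MemPressure; // hypothetical helper, not from the original post

{$APPTYPE CONSOLE}

uses
  Windows, SysUtils, Classes;

const
  BLOCK_SIZE = 100 * 1024 * 1024; // 100 MB per block

var
  Blocks: TList;
  P: Pointer;
  i: Integer;

begin
  Blocks := TList.Create;
  try
    // Commit blocks until the OS refuses; this forces it to reclaim
    // freed-but-cached memory from other processes first.
    repeat
      P := VirtualAlloc(nil, BLOCK_SIZE, MEM_COMMIT, PAGE_READWRITE);
      if P <> nil then
      begin
        FillChar(P^, BLOCK_SIZE, $AA); // touch every page so it is really backed
        Blocks.Add(P);
      end;
    until P = nil;
    Writeln(Format('Committed ~%d MB before failing; press Enter to release',
      [Blocks.Count * 100]));
    Readln;
  finally
    for i := 0 to Blocks.Count - 1 do
      VirtualFree(Blocks[i], 0, MEM_RELEASE);
    Blocks.Free;
  end;
end.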
Check whether you have this issue (a different issue, unrelated to this one, which I mentioned in the comments to your question).
I wrote this code to correct the problem in my applications. As with FastCode, for the fix to work you must put the unit as the first unit of your project, like uRedirecionamentos in this case:
unit uCriticalSectionFix;

// By Rodrigo F. Rezino - rodrigofrezino@gmail.com

interface

uses
  Windows;

implementation

uses
  SyncObjs, SysUtils;

type
  InitializeCriticalSectionExProc = function(var lpCriticalSection: TRTLCriticalSection;
    dwSpinCount: DWORD; Flags: DWORD): BOOL; stdcall;

var
  IsNewerThenXP: Boolean;
  InitializeCriticalSectionEx: InitializeCriticalSectionExProc;

type
  PJump = ^TJump;
  TJump = packed record
    OpCode: Byte;      // $E9 = near JMP
    Distance: Pointer; // 32-bit relative offset to the target
  end;

  // Same field layout as SyncObjs.TCriticalSection, so the patched
  // constructor initializes FSection exactly where the original expects it.
  TCriticalSectionHack = class(TSynchroObject)
  protected
    FSection: TRTLCriticalSection;
  public
    constructor Create;
  end;

// Extracts the target address of the CALL instruction inside the stub below.
function GetMethodAddress(AStub: Pointer): Pointer;
const
  CALL_OPCODE = $E8;
begin
  if PByte(AStub)^ = CALL_OPCODE then
  begin
    Inc(Integer(AStub));
    Result := Pointer(Integer(AStub) + SizeOf(Pointer) + PInteger(AStub)^);
  end
  else
    Result := nil;
end;

// Overwrites the first 5 bytes of ASource with a JMP to ADestination.
procedure AddressPatch(const ASource, ADestination: Pointer);
const
  JMP_OPCODE = $E9;
  SIZE = SizeOf(TJump);
var
  NewJump: PJump;
  OldProtect: Cardinal;
begin
  if VirtualProtect(ASource, SIZE, PAGE_EXECUTE_READWRITE, OldProtect) then
  begin
    NewJump := PJump(ASource);
    NewJump.OpCode := JMP_OPCODE;
    NewJump.Distance := Pointer(Integer(ADestination) - Integer(ASource) - 5);
    FlushInstructionCache(GetCurrentProcess, ASource, SizeOf(TJump));
    VirtualProtect(ASource, SIZE, OldProtect, @OldProtect);
  end;
end;

// Stub whose CALL instruction points at the constructor we want to patch.
procedure OldCriticalSectionMethod;
asm
  call TCriticalSection.Create;
end;

{ TCriticalSectionHack }

const
  CRITICAL_SECTION_NO_DEBUG_INFO = $01000000;
  NEW_THEN_XP = 6; // major version of Windows Vista / Server 2008

constructor TCriticalSectionHack.Create;
begin
  inherited Create;
  if IsNewerThenXP then
    InitializeCriticalSectionEx(FSection, 0, CRITICAL_SECTION_NO_DEBUG_INFO)
  else
    InitializeCriticalSection(FSection);
end;

procedure AdjustMethod;
var
  LKernel32: HMODULE;
begin
  if IsNewerThenXP then
  begin
    LKernel32 := LoadLibrary('kernel32.dll');
    @InitializeCriticalSectionEx := GetProcAddress(LKernel32, 'InitializeCriticalSectionEx');
  end;
end;

initialization
  // Redirect TCriticalSection.Create to the hacked constructor above.
  AddressPatch(GetMethodAddress(@OldCriticalSectionMethod), @TCriticalSectionHack.Create);
  IsNewerThenXP := CheckWin32Version(NEW_THEN_XP, 0);
  AdjustMethod;

end.
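Usage is then just a matter of unit ordering in the .dpr (a hypothetical project below; only the position of uCriticalSectionFix matters):

program MyPatchedApp; // hypothetical project name

uses
  uCriticalSectionFix, // must be first, so the patch is in place before any TCriticalSection is created
  Forms,
  MainUnit in 'MainUnit.pas' {Form1};

{$R *.res}

begin
  Application.Initialize;
  Application.CreateForm(TForm1, Form1);
  Application.Run;
end.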
In addition to Alexander's answer: usually this is called "heap fragmentation".
Note that FastMM is supposed to be more resilient and faster overall, but if the original app was tuned for the D7 memory manager, FastMM might actually perform worse.
Well, memory usage can increase even if there is no memory leak in your application. In such cases there is a possibility that you are leaking another kind of resource. For example, if your code allocates, say, a bitmap, and although it releases all objects it forgets to finalize some HBITMAP.
FastMM will tell you that you have no memory leak in your application, since you've freed all of your objects and data. But you may still be leaking other types of resources (in my example, GDI objects), and leaking those can affect your memory too.
I suggest you try another tool, one that checks not only for memory leaks but for other types of leaks as well. I think AQTime is capable of that, but I'm not sure.
Another possible reason for this behaviour is memory fragmentation. Suppose you have allocated 2000 objects of 1 MB in size (let's forget for a minute about MM overhead and the presence of other objects in user space). Now you have a full 2 GB of busy memory. Next, suppose you free every second object, leaving a "striped" memory space where 1 MB busy and free blocks alternate. Although you now have 1 GB of free memory, you cannot allocate memory for any 2 MB object, since the largest free block is only 1 MB (but you do have 1000 such blocks ;) ). If the memory manager used blocks larger than 1 MB for your objects, it cannot release those blocks back to the OS when you free the even-numbered objects:
[ [busy] [free] [busy] [free] [busy] [free] ]
[ [busy] [free] [busy] [free] [busy] [free] ]...
Those large [...] blocks are half-busy, so the MM cannot give them back to the OS. If you then ask for another block that is > 1 MB, the MM will need to allocate yet another block from the OS:
[ [busy] [free] [busy] [free] [busy] [free] ]
[ [busy] [free] [busy] [free] [busy] [free] ]...
[ [your-new-object] [free.................] ]
Note that these are just examples of how memory usage can increase even though you have no memory leak. I'm not saying you have this EXACT situation. :D
Source: https://stackoverflow.com/questions/780073/is-the-memory-not-reclaimed-for-delphi-apps-running-on-windows-server-2008-sp1