Earlier this week a long-time customer asked me why FastMM allocates large memory blocks from the top of the memory address space and whether that option could safely be turned off. Their reasoning was that such allocations are much slower than normal ones. The question surprised me, as I didn't know of any such difference in speed, so I did what I usually do: I wrote a benchmark application and measured it.
TL;DR Yes, allocating from the top is slower. No, the difference is not big and in most cases you’ll not even notice it.
There were, however, other interesting results that my simple benchmark pointed out. More on that in a moment, but first…
Allocating from bottom and top
In Windows, code can ask for a memory block by calling VirtualAlloc with the flag MEM_COMMIT, and Windows will give it a suitable chunk of memory. This memory will usually be allocated from the start of the virtual memory visible to the program.
The code can, however, call VirtualAlloc with the flags MEM_COMMIT OR MEM_TOP_DOWN, and Windows will return a block from the end of the virtual memory available to the process. In a typical 32-bit Delphi program, the first such memory block will have an address close to $7FF00000 (but a bit lower).
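A minimal sketch of both kinds of allocation (just an illustration, not the benchmark used later in this article):

program AllocTopDownDemo;
// Minimal sketch: allocate one block "from the bottom" and one "from the top"
// and print the addresses that Windows returns.

{$APPTYPE CONSOLE}

uses
  Winapi.Windows, System.SysUtils;

var
  bottomBlock, topBlock: Pointer;

begin
  // Regular allocation - the block lands near the start of the process'
  // virtual address space.
  bottomBlock := VirtualAlloc(nil, 1024 * 1024, MEM_COMMIT, PAGE_READWRITE);

  // Top-down allocation - the block lands near the end of the process'
  // virtual address space.
  topBlock := VirtualAlloc(nil, 1024 * 1024, MEM_COMMIT or MEM_TOP_DOWN, PAGE_READWRITE);

  Writeln(Format('Bottom: %p', [bottomBlock]));
  Writeln(Format('Top:    %p', [topBlock]));

  VirtualFree(topBlock, 0, MEM_RELEASE);
  VirtualFree(bottomBlock, 0, MEM_RELEASE);
end.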
You may want to allocate memory “from the top” if your program has two very distinct modes of allocating memory and you don’t want to mix them. For example, frequently reallocated memory could come “from the bottom” and large blocks that are used for long periods of time “from the top”. This can reduce memory fragmentation, but the potential advantages are specific to each program.
In other words: maybe it will help, maybe it will hurt.
Another good scenario for MEM_TOP_DOWN is testing 64-bit code ported from 32 bits. For example, a typical “from the top” allocated block in a 64-bit program will have an address like $7FF4FDE30000. If your code at some point stores pointers into 4-byte integers, part of the address will be lost, and as soon as that integer is converted back into a pointer and the code accesses that pointer, you’ll quite probably get an access violation. If memory comes “from the bottom”, such problems would not be detected so easily.
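Here is a small illustration of that bug class (a hypothetical 64-bit console program, not code from any real project; legacy code usually does the truncating cast directly, while the sketch below goes through NativeInt only to keep the casts explicit):

program PointerTruncationDemo;
// Hypothetical 64-bit console program: a pointer is squeezed into a 4-byte
// Integer, so the upper 32 bits of a "from the top" address (something like
// $7FF4FDE30000) are silently lost.

{$APPTYPE CONSOLE}

uses
  Winapi.Windows, System.SysUtils;

var
  p, restored: Pointer;
  truncated: Integer; // 4 bytes - too small for a 64-bit pointer
begin
  p := VirtualAlloc(nil, 4096, MEM_COMMIT or MEM_TOP_DOWN, PAGE_READWRITE);
  Writeln(Format('Allocated at: %p', [p]));

  truncated := Integer(NativeInt(p));        // the upper 32 bits are dropped here
  restored := Pointer(NativeInt(truncated)); // no longer points at the block

  try
    PByte(restored)^ := 42;                  // quite probably an access violation
  except
    on E: Exception do
      Writeln(E.ClassName, ': ', E.Message);
  end;

  VirtualFree(p, 0, MEM_RELEASE);
end.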
FastMM4 allocates large blocks (with sizes greater than or equal to 258.5 KB) “from the top”. If I recall correctly, this was done to prevent memory fragmentation.
Additionally, it can allocate all other blocks “from the top” if you define the conditional symbol AlwaysAllocateTopDown and rebuild. (You have to use FastMM4 from GitHub instead of the built-in Delphi version to use this functionality.) You can use this mode to test 32-bit programs ported to 64-bit code.
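A hedged sketch of how that symbol might be enabled, assuming the GitHub version of FastMM4 with its FastMM4Options.inc include file (adding the symbol to the project's conditional defines works as well):

// In FastMM4Options.inc (or in Project > Options > Conditional defines):
// with this symbol defined and the project rebuilt, FastMM4 allocates all
// blocks top-down, which is handy for 64-bit porting tests.
{$define AlwaysAllocateTopDown}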
MEM_TOP_DOWN is slower?
The article my customer pointed to claimed that allocating from the top works much slower than allocating from the bottom. Even worse, the allocation algorithm was supposed to work in O(n^2) time, so each additional allocation needs more time to execute. To top that off, the official documentation for the MEM_TOP_DOWN flag mentions:
This can be slower than regular allocations, especially when there are many allocations.
To verify that claim, I wrote a trivial benchmarking app (
download it here). It allocates from 1 to 6000 blocks of size 264752 and measures the time needed.
Block size 264752 was picked because at that size FastMM4
starts allocating memory “from the top”. 6000 blocks can safely
be allocated in a 32-bit application (6000 * 264752 = 1.5 GB). In my tests I could allocate 6105 such blocks before I ran “out of memory” but just to be on the safe side I reduced the number in the released application.
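The downloadable program is the authoritative version; the sketch below only shows the general shape of such a measurement, using plain VirtualAlloc calls and a fixed block count instead of the 1-to-6000 sweep:

program VirtualAllocBenchSketch;
// Simplified sketch of such a measurement (not the downloadable benchmark):
// time how long it takes to allocate a number of 264752-byte blocks, once
// "from the bottom" and once "from the top".

{$APPTYPE CONSOLE}

uses
  Winapi.Windows, System.SysUtils, System.Diagnostics;

const
  BlockSize  = 264752; // size at which FastMM4 switches to large blocks
  BlockCount = 4000;

function TimeAllocations(extraFlags: DWORD): Int64;
var
  blocks: array of Pointer;
  i: Integer;
  sw: TStopwatch;
begin
  SetLength(blocks, BlockCount);
  sw := TStopwatch.StartNew;
  for i := 0 to BlockCount - 1 do
    blocks[i] := VirtualAlloc(nil, BlockSize, MEM_COMMIT or extraFlags, PAGE_READWRITE);
  Result := sw.ElapsedMilliseconds;
  for i := 0 to BlockCount - 1 do
    if blocks[i] <> nil then
      VirtualFree(blocks[i], 0, MEM_RELEASE);
end;

begin
  Writeln(Format('Bottom-up: %d ms', [TimeAllocations(0)]));
  Writeln(Format('Top-down:  %d ms', [TimeAllocations(MEM_TOP_DOWN)]));
end.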
Results, measured on my fresh new notebook with an i7-8750 processor, were much closer to my expectations than to some O(n^2) algorithm. The “Top” algorithm is slightly slower (needs more time to execute), but the difference is not drastically large.
What’s going on then? Is MEM_TOP_DOWN slow or not?
As it turned out, the article I was referring to was written in 2011 and Windows has improved a lot since then. I don’t know which Windows version fixed the “top allocation” problem, but it definitely doesn’t appear in Windows 7 or 10.
Another interesting result is that the first 200 MB (approximately) are almost “free”. Somewhere around that number, the execution time jumps from around 3 ms to 50 ms and then continues to grow in a more-or-less linear fashion. The benchmarking program measures each test only once and is therefore very susceptible to measurement errors, but the result clearly shows an O(n) algorithm.
Why are allocations smaller than 200 MB so fast? I’m guessing that Windows maps that much physical memory into the process’ virtual space when the process is started. When you exceed that limit, the allocator needs more time to allocate physical memory and map it into the process’ virtual space. That is, however, just a guess. If you know better, please let me know in the comments.
How fast are YOUR allocations?
Just for the sake of completeness I reran the tests on my main PC (HP z820 with two E5 Xeons) and the results completely surprised me.
The shape of the curve is almost the same, but notice the difference in speed! On the laptop, 4000 allocations execute in 250 ms. On the Xeon machine, over 1000 ms is needed for the same job.
This machine is quite old (around 4 years, IIRC) and it obviously contains much slower memory. I know that computers can have faster or slower memory chips, but I never expected to see such a big difference in VirtualAlloc speed. (And yes, both machines are running the latest Windows 10.)
Now the whole shebang started to interest me even more, and I did some further tests on a few PCs used by fellow programmers. All of them were running Windows 10. As you can see below, there is some difference between them, but none is as slow as my main computer :( Maybe the time has come to upgrade…
If you want to download raw data and compare it to your own results, you can
access it here.
MEM_TOP_DOWN or not?
The difference in speed is not that big, and most programs will not notice it, but I have to agree with the customer. The time has come to remove the hard-coded MEM_TOP_DOWN from FastMM4 and replace it with a conditional {$IFDEF AllocateLargeBlocksTopDown}MEM_TOP_DOWN{$ENDIF}. I have already created a pull request for that change.
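The idea behind the change looks roughly like this (a simplified sketch with invented local names, not the actual FastMM4 source):

// Sketch only - variable names are made up and the real FastMM4 code differs:
LargeBlockPointer := VirtualAlloc(nil, LargeBlockSize,
  MEM_COMMIT {$ifdef AllocateLargeBlocksTopDown} or MEM_TOP_DOWN {$endif},
  PAGE_READWRITE);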
I expect two physical processors also slow memory allocations. It would be interesting to see other dual-processor machines (or more, if you have them). Back when I was working on multiprocessor code there was a real per-thread slow-down going from 4 GHz single-chip systems to 3.5 GHz multiprocessor ones, but the jump from 4 cores to 64 more than made up for it :)
Good point! Anyone who wants to contribute more info can send the resulting memory.csv file to me and I'll include it in the shared "raw data" table.
At the moment, this is the only dual-CPU machine I can test on.
Correction - I _do_ have another 2xE5 machine, but it is running Windows 2012 Server and that makes a BIG difference: https://www.dropbox.com/s/2vvjhsnzmjlrws5/VirtualAllocSpeed.jpg?dl=1
I think your last comment links to the wrong image. I guess you mean the one you posted to en.Delphi-PRAXIS.
Indeed I did, Rudy. Thank you for the correction!
This is the correct image:
https://www.dropbox.com/s/kpzl44ngvtnatb4/Windows2012Server.jpg?dl=1
You can go the opposite direction - turn your 2-CPU computer into a 1-CPU one for a while - and rerun your test.
It is VERY likely that the delay comes exactly from cache coherency between CPUs that have to rely on a relatively slow mainboard channel for communication.
Maybe your motherboard BIOS can do this, so you won't have to physically take one CPU out, but perhaps not.
A single E5-2620 chip seems to be a 6-core/12-thread part. https://xeon-e5450.ru/socket-2011/xeon-e5-2600-v1/
So another avenue would be to run the test with hyper-threading disabled (the BIOS should be able to do that) and maybe with the core count reduced to 4 or even 2 - but I do not think a server motherboard would allow that. Desktop CPU testers often do it, though, so who knows...
You can use SetPEFlags with IMAGE_FILE_LARGE_ADDRESS_AWARE to use up to 3.5 GB of memory in a 32-bit app on a 64-bit Windows OS.
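For example, a minimal sketch of that directive in a project file (assuming Winapi.Windows is in the program's uses clause so the constant is available):

// In the project's .dpr, after the uses clause:
{$SETPEFLAGS IMAGE_FILE_LARGE_ADDRESS_AWARE}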
ReplyDelete> The time has come to remove hard-coded MEM_TOP_DOWN from FastMM4
Actually that would potentially increase fragmentation instead.
Perhaps what should be done instead is INVERTING the strategies.
Allocate pools on top, and large chunks on the bottom.
Since pools are aggregated by FastMM4, the possibly worse Windows latency would be spread thinly over many more application-level allocations.
P.S. Now, to really go insane, it would be nice if that test were repeated not on MS Windows but on OS/2 Odin, Linux Wine and ReactOS.
I think this can only be proved/disproved by measuring real applications. I don't believe we are running any code that would be affected by the change. You can do some testing on your side and then we'll see.
If you don't believe it, then what is the point of the pull request?
Nice article, accurate and up to date. However, your measurements surprisingly differ from those published by Bruce Dawson in his "Making VirtualAlloc Arbitrarily Slower" article (https://randomascii.wordpress.com/2011/08/05/making-virtualalloc-arbitrarily-slower/) - please take a look.
Probably, VirtualAlloc's implementation of the MEM_TOP_DOWN strategy has been improved since then...
This article is actually linked from my post and was the main reason I researched the (supposed) problem :)
Indeed, something has changed in Windows between 2011 and 2019. Not surprising, really :)