[.NET Internals 08] What about Large Object Heap (LOH)?

So far in the .NET Internals series we've focused on the Small Object Heap (SOH). We know, for instance, that the LOH is not compacted (by default) during garbage collection. So how is it actually handled by the GC?

(De)allocating objects on LOH

As we know from the second post, during memory allocation only objects of size greater than 85,000 bytes are placed on the LOH. There are also some exceptions, like arrays of double, which (on 32-bit architectures only) are put on the LOH as soon as they reach 1000 or more elements (not around 10624 elements, as the 85,000-byte threshold would suggest). This is important to know in order to be aware of which kinds of objects affect heap fragmentation (more details below).
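
A quick way to observe this threshold is GC.GetGeneration, since objects allocated on the LOH report generation 2 right away. Here is a minimal sketch of such a check (the exact sizes near the boundary depend on the platform's object header size):

using System;

class LohThresholdDemo
{
    static void Main()
    {
        // Comfortably below the 85,000-byte threshold: lands on the SOH (gen 0).
        var small = new byte[84000];
        Console.WriteLine(GC.GetGeneration(small)); // 0

        // 85,000 bytes of data plus the object header exceeds the threshold,
        // so the array is allocated on the LOH and reports generation 2.
        var large = new byte[85000];
        Console.WriteLine(GC.GetGeneration(large)); // 2

        // Only ~8 KB of data, but on a 32-bit runtime double arrays move
        // to the LOH already at 1000 elements.
        var doubles = new double[1000];
        Console.WriteLine(GC.GetGeneration(doubles)); // 2 on 32-bit, 0 on 64-bit
    }
}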


So we know when objects are allocated on the LOH, but when are they deallocated?

The LOH is collected at the same time as a generation 2 collection occurs, which can be triggered when the memory threshold for either gen 2 or the LOH is exceeded. The conditions for garbage collection can be found in this post.

That's why a large LOH may affect the GC's – and the whole application's – performance.

Garbage collection on LOH

LOH fragmentation

The reason the Large Object Heap is not compacted (by default) is that it stores big objects (>85,000 bytes). Copying such amounts of data would seriously hurt the performance of the garbage collection process.

Still, the memory of objects allocated on the LOH is reclaimed, so the heap may eventually become fragmented:

LOH fragmentation

We'll see how below, but .NET keeps track of "free space" memory blocks to know which chunks are available for new allocations on the LOH. When allocating, it looks for a block large enough to store the whole object.


However, imagine that there are two free space blocks next to each other. Both were marked free, but they represented different objects (maybe they lay next to each other on the heap because one referenced the other). What do you think the GC will do? Will it treat them as two separate free memory blocks, leaving less chance for the next allocated object to fit into one of them?

Fortunately not. The GC has an optimization that "merges" such adjacent free memory chunks together:

Free memory chunks “merged” on LOH

How does the GC do it? Let's look at it in more detail.

Free memory representation on LOH

Instead of compacting the Large Object Heap, the garbage collector keeps the address ranges of unused large objects in a Free Space Table:

Free Space Table (source)

As you can see in the figure above, as soon as a gen 2 collection ran, the address ranges of the two unused objects were simply added to the Free Space Table.

Now you can see that "merging" two adjacent free memory chunks is just a matter of extending one range (modifying a single number in the table).
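
To make the idea concrete, here's a minimal sketch of such a table. It is purely illustrative (the FreeSpaceTable class and its MarkFree and FindFit methods are invented for this post), not the CLR's actual data structure:

using System;
using System.Collections.Generic;

class FreeSpaceTable
{
    // Maps the start address of a free range to its size, sorted by address.
    private readonly SortedDictionary<long, long> _free = new SortedDictionary<long, long>();

    public void MarkFree(long start, long size)
    {
        // If a free range begins exactly where this one ends, absorb it.
        if (_free.TryGetValue(start + size, out long nextSize))
        {
            _free.Remove(start + size);
            size += nextSize;
        }

        // If a free range ends exactly where this one starts, "merging"
        // is just extending that entry: a single number changes.
        long prevStart = -1;
        foreach (var range in _free)
            if (range.Key + range.Value == start)
            {
                prevStart = range.Key;
                break;
            }

        if (prevStart >= 0)
            _free[prevStart] += size;
        else
            _free[start] = size;
    }

    // Returns the start of the first free range that can hold 'size' bytes,
    // or null when no hole is big enough.
    public long? FindFit(long size)
    {
        foreach (var range in _free)
            if (range.Value >= size)
                return range.Key;
        return null;
    }
}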

Allocating memory on LOH

As soon as a new large object (>85,000 bytes, or an applicable array) is to be allocated on the managed heap, the GC looks for a single free space block able to hold it. However, it may happen that the object won't fit into any of the free memory chunks. In that case, the new object is allocated at the top of the heap (just after 'Object D' in the figure above).


It may also happen that the memory obtained from the operating system for the LOH is already fully used (read here for more info about memory). The garbage collector then asks the operating system to acquire more memory segments for the LOH. If that fails, a gen 2 collection is triggered in the hope that some memory blocks get freed, making the allocation possible.


Let's think about that for a moment. We said previously that the LOH is collected together with gen 2. So trying to clean up the Large Object Heap every time an allocation is made on it would be a potential performance killer.


How does the GC solve this issue? In fact, after a lot of optimizations introduced to LOH management in .NET 4.5, the GC takes the following actions, in order, to allocate a new large object (a simplified sketch follows the list):

  • first, the GC tries to fit the new object into one of the free space "holes" on the LOH (knowing the ranges of free blocks from the Free Space Table, it's simple to check whether any chunk is large enough to store the object);
  • if that fails, the garbage collector allocates the new large object at the end of the heap. Even though this may involve asking the OS for more memory segments, it has been found to be a cheaper operation than first performing a full GC in the hope of freeing some memory chunks on the LOH;
  • only if that also fails (the LOH cannot be "extended") does the GC trigger a gen 2 collection, hoping to free some additional space for the new allocation.
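
Here is a purely hypothetical simulation of that three-step order (invented for this post; the CLR's real allocator is far more involved). Addresses and segments are just numbers:

using System;
using System.Collections.Generic;

class LohAllocatorSketch
{
    // Free "holes" as (start, size) pairs, a simplified free space table.
    private readonly List<(long Start, long Size)> _holes = new List<(long Start, long Size)>();
    private long _top;                    // current end of the heap
    private long _committed = 1L << 20;   // pretend the OS gave us 1 MB
    private bool _osHasMoreMemory = true; // flip to false to simulate the OS refusing more

    public long Allocate(long size)
    {
        // 1. Prefer an existing free-space hole (reuse fragmentation first).
        for (int i = 0; i < _holes.Count; i++)
        {
            if (_holes[i].Size < size) continue;
            long start = _holes[i].Start;
            long rest = _holes[i].Size - size;
            if (rest > 0) _holes[i] = (start + size, rest);
            else _holes.RemoveAt(i);
            return start;
        }

        // 2. Otherwise allocate at the end of the heap, acquiring another
        //    "segment" from the OS whenever committed memory runs out.
        while (_top + size > _committed && _osHasMoreMemory)
            _committed += 1L << 20;

        if (_top + size <= _committed)
        {
            long address = _top;
            _top += size;
            return address;
        }

        // 3. Only as a last resort would a gen 2 collection be triggered,
        //    hoping to free enough space (not simulated here).
        throw new OutOfMemoryException("a full GC would run here");
    }
}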

What's worth noticing is that this order of actions is good for performance, but it can sometimes be a cause of memory fragmentation.

Manual LOH compaction

As you should already know from the previous article, the LOH can be compacted programmatically by setting the GCSettings.LargeObjectHeapCompactionMode property. The simplest way to force LOH compaction is shown in the following snippet:


// Request a one-off compaction of the LOH during the next full GC.
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
// A blocking gen 2 collection performs the requested compaction.
GC.Collect();
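
Note that after the compacting collection completes, the runtime resets the property back to GCLargeObjectHeapCompactionMode.Default, so the flag has to be set again before every forced compaction.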

There are some niche cases in which LOH compaction may be useful. More details and discussion can be found, for example, here.

Best practices for working with LOH objects

We can try to design our applications to use as few large objects as possible, but let's not exaggerate. We live in a world where 8 GB of 2133 MHz RAM sometimes costs less than $100, so memory is generally cheap 😉

In principle, the rule is simple: the large objects we allocate should be reused (e.g. cached or pooled) as much as possible.
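
For example (assuming a runtime where System.Buffers' ArrayPool<T> is available: built into .NET Core, and a NuGet package for .NET Framework), renting large buffers from a shared pool reuses the same LOH allocations instead of creating new ones:

using System;
using System.Buffers;

class PooledBufferDemo
{
    static void Main()
    {
        // Rent instead of allocating a fresh 1 MB array (which would land
        // on the LOH) for every operation.
        byte[] buffer = ArrayPool<byte>.Shared.Rent(1024 * 1024);
        try
        {
            // Use the buffer; note it may be larger than requested.
            Console.WriteLine($"Got a buffer of {buffer.Length} bytes");
        }
        finally
        {
            // Return it so subsequent Rent calls reuse the same memory.
            ArrayPool<byte>.Shared.Return(buffer);
        }
    }
}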

We should just keep in mind that allocating large objects can be costly, because in some cases a gen 2 collection has to be performed before the object can be allocated.

An example of a potentially problematic large object is the ViewState used in ASP.NET applications, whose size can easily exceed 85,000 bytes. There are some good articles explaining how not to needlessly hurt an ASP.NET app's performance with it, for instance this one.

There are also a lot of tools that can be used to measure the memory state and performance of our applications (including internal mechanisms like garbage collection, heap compaction, etc.), which we'll surely cover in one of the next posts in the series 🙂

Summary

Today we examined the – previously a bit neglected – Large Object Heap. We saw how .NET stores information about free memory blocks on it and how new objects are allocated there.


I think it's another .NET internal concept worth knowing and understanding, even though in common scenarios and business applications you probably won't run into trouble with the LOH. However, it may prove practical and useful when working with more memory-demanding applications such as games.

I hope this post clarifies some LOH topics for you.

Let me know if there are any topics you'd be interested in reading about. I'm here to provide value to you, so I'm open to your criticism and suggestions 🙂

Stay tuned!

Comments
Adam Furmanek · 5 years ago

> objects of size greater than 85 kilobytes are placed on LOH
It's not 85 kilobytes (= 85 * 1024 bytes), it's 85 thousand bytes (= 85 * 1000 bytes). See https://dotnetfiddle.net/Sw4TuJ

> not 10626 elements as could be expected
Just nitpicking 🙂 I think it should be 10624, which gives 10624 * 8 bytes of doubles + 4 for the sync block + 4 for the type handle + 4 for the size = 85,004 bytes.

Konrad Kokosa · 5 years ago · Reply to Adam Furmanek

Btw, such arrays of doubles are allocated on the LOH only in the case of the 32-bit framework, which is a small detail probably worth calling out.

Dawid Sibiński · 5 years ago · Reply to Konrad Kokosa

Thanks guys.
@adamfurmanek indeed, I should rather use the 85,000-byte terminology. Sometimes I forget we're on Windows and with Microsoft, so by 1 KB I should mean 1024 bytes 😉

Konrad Kokosa · 5 years ago

And again a nice article! Just a single note from my side:

"garbage collector prefers to allocate new large objects at the end of the heap"

This is generally quite misleading – in fact, the GC always prefers to make use of fragmentation first.

Dawid Sibiński · 5 years ago · Reply to Konrad Kokosa

Thanks Konrad.
Hmm, you're probably right; it wasn't very clear to me. I've just checked in "Under the Hood of .NET Memory Management", which says: "In fact, for performance reasons, .NET preferentially allocates large objects at the end of the heap". That's probably because it's quite an old book (2011), while LOH handling was optimized a lot in .NET 4.5.

However, the diagrams presented at https://docs.microsoft.com/en-us/dotnet/standard/garbage-collection/large-object-heap suggest it's as you said, so:
1. Try to allocate in the free memory blocks.
2. If no free memory chunk large enough is found, try to allocate at the end of the heap (requesting more segments from the OS if necessary).
3. If point 2 didn't work, perform a full GC hoping that some more objects get reclaimed.

Konrad Kokosa · 5 years ago

Right, such a statement could have been much more accurate in 2011 🙂
