Continuing the .NET Internals series on the blog, today we’re going to see what generational garbage collection is. You’ll also get to know what a card table data structure is and what it’s used for 🙂
As we know from this post, reference objects allocated by a .NET process land on different kinds of heaps. In the previous article we also got to know that only the Small Object Heap (SOH) is subject to the compaction process. The Large Object Heap (LOH) is not compacted, for performance reasons.
This is however not the only optimization introduced by GC contributors into Small Object Heap. SOH in .NET is also divided into 3 generations:
- Generation 0 – containing newly created objects, which did not have any collection performed on them so far,
- Generation 1 – storing objects that survived a single garbage collection (were not reclaimed during it because of still being in use),
- Generation 2 – keeping long-lived objects that survived two or more GC cycles.
LOH is not divided into generations. More details on LOH management in the article linked just before the “Summary” section below.
Thanks to such a division of managed heap, the actual collection is performed not on the whole heap, but only on one of the generations, where:
to perform a collection on generation X == to collect objects on generation X and all its younger generations
That’s why a generation 2 collection is known as a full garbage collection. During this process, objects in all generations (gen 0, gen 1 and gen 2) are examined and reclaimed. Obviously, this collection is also the most costly one.
At this point we should introduce the term collection survivor. It can be defined as an object that survives a collection (it is not reclaimed during the GC cycle because it is still referenced by something).
Knowing that, the following “promotion rules” can be defined:
- object survives generation 0 collection => object is promoted to generation 1,
- object survives generation 1 collection => object is promoted to generation 2,
- object survives generation 2 collection => object stays in generation 2.
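The promotion rules above can be observed with the real `GC.GetGeneration` and `GC.Collect` APIs. A minimal sketch, assuming a simple console app with workstation GC (exact promotion timing can vary by runtime and GC mode, so treat the comments as typical behavior, not a guarantee):

```csharp
using System;

class PromotionDemo
{
    static void Main()
    {
        object obj = new object();

        // Freshly allocated objects start in generation 0.
        Console.WriteLine(GC.GetGeneration(obj)); // 0

        // obj survives a full collection, so it typically moves to gen 1.
        GC.Collect();
        Console.WriteLine(GC.GetGeneration(obj));

        // Surviving another collection typically moves it to gen 2,
        // where it stays for any further collections it survives.
        GC.Collect();
        Console.WriteLine(GC.GetGeneration(obj));

        GC.KeepAlive(obj); // ensure obj stays referenced throughout the demo
    }
}
```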
Collection thresholds in generations
In the 5th post in the series, it was mentioned that one of the conditions that can trigger garbage collection is when “the memory used by the objects on the managed heap exceeds some defined threshold”. In fact, this is not a single threshold but several of them – more precisely, a separate one per generation. As soon as the total size of objects in a particular generation exceeds its threshold, a collection is started.
In the beginning, the threshold for each generation is initialized to the following value:
- gen 0: ~256 KB,
- gen 1: ~2 MB,
- gen 2: ~10 MB.
Nonetheless, these are only initial values and are adjusted by the GC at runtime. One of the conditions for increasing a particular threshold is when the survival rate in a generation is high (most objects from that generation are either promoted to the next one or stay in generation 2), which makes GC cycles run less frequently (the condition of exceeding the threshold is not met as often).
This approach seems reasonable, as the main goal of GC is to collect as many objects as possible and reclaim their memory, not to “waste time” only to get to know that most of the objects are still in use.
In general, CLR tries to find a balance between two factors:
- not letting the application’s working set grow too big,
- not allowing GC to take too much time.
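One consequence of this balance is that cheap gen 0 collections happen far more often than full collections. This can be observed with the real `GC.CollectionCount` API – a sketch, with the allocation loop and sizes chosen arbitrarily for illustration:

```csharp
using System;

class CollectionCountDemo
{
    static void Main()
    {
        // Churn through short-lived allocations; they die in gen 0,
        // repeatedly exceeding its (small) threshold.
        for (int i = 0; i < 100_000; i++)
        {
            var tmp = new byte[1024];
        }

        // Collecting generation X also collects all younger generations,
        // so the counters always satisfy gen0 >= gen1 >= gen2.
        Console.WriteLine($"gen0: {GC.CollectionCount(0)}, " +
                          $"gen1: {GC.CollectionCount(1)}, " +
                          $"gen2: {GC.CollectionCount(2)}");
    }
}
```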
Structure of generational heap
The diagram below shows what the SOH actually looks like when divided into the three generations described above:
As can be noticed, the next object pointer (you can read more about it in this post) refers to the place after the last object in generation 0. That’s why newly created objects are always allocated in generation 0.
Generational GC cycle
The following schema very clearly presents what a full generational collection looks like (click to enlarge to see it clearly):
The first step is the generation 0 collection. You can notice that objects H and D are not referenced by anything, so after the first cycle on gen 0 is performed, these objects are reclaimed and not promoted to gen 1. The same happens with object F, which is no longer referenced when the collection on generation 1 is performed – that’s why it’s reclaimed during this step and not promoted to generation 2. At the end of the full GC cycle, only still-referenced objects remain in generation 2.
What’s also important to notice is that after each collection generation 0 becomes empty (contains no objects). Generation 1 may still contain objects promoted from generation 0 and generation 2 may contain long-lived objects.
Why is static data harmful for GC?
With the knowledge presented in this post, we can deduce that the more objects a generation holds, the longer its collection lasts. What we should also already be able to see is that generation 2 contains long-lived objects, which are mostly static or global objects that survive garbage collection cycles (as there’s always a “live” reference to them in the application).
Of course, there are some cases where static data is justified (e.g. server-side apps storing some context), but normally static data should be avoided as much as possible and reasonable. I think you already know why – it simply hurts the application’s performance by making the garbage collector’s work harder and longer.
There’s also one very interesting issue in having the managed heap divided into generations. Can you imagine the case, when an object is already promoted to gen 2, but one of its reference-typed properties has just been initialized in gen 0? Consider the following code:
```csharp
public class GuyFromGen2
{
    private YoungObject _youngObject;

    public void CreateYoungObject()
    {
        _youngObject = new YoungObject();
    }
}
```
Let’s assume that an instance of GuyFromGen2 was created a long time ago and has already been promoted to generation 2. At this moment someone calls the CreateYoungObject() method, which initializes the _youngObject field. This makes the “parent” instance of GuyFromGen2 hold a reference to an object located in gen 0.
If a generation 0 collection were triggered at this exact moment (which, as we said, examines only objects in generation 0), it would find _youngObject allocated on the heap with no gen 0 references to it (as it doesn’t check gen 2 references during a gen 0 collection), so it would assume the object is no longer in use and reclaim its memory.
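The whole scenario can be sketched end-to-end. This is illustrative only: the `YoungObjectGeneration` property is a hypothetical helper added purely so the demo can inspect the child object, and the comments describe typical (not guaranteed) generation placement:

```csharp
using System;

public class YoungObject { }

public class GuyFromGen2
{
    private YoungObject _youngObject;

    public void CreateYoungObject()
    {
        _youngObject = new YoungObject();
    }

    // Hypothetical helper, added only for this demo.
    public int YoungObjectGeneration => GC.GetGeneration(_youngObject);
}

class CardTableDemo
{
    static void Main()
    {
        var parent = new GuyFromGen2();

        // Two full collections: parent survives both and typically lands in gen 2.
        GC.Collect();
        GC.Collect();
        Console.WriteLine(GC.GetGeneration(parent));

        // The freshly created child starts its life in gen 0 ...
        parent.CreateYoungObject();
        Console.WriteLine(parent.YoungObjectGeneration);

        // ... and survives a gen-0-only collection, because the card table
        // tells the GC about the gen 2 -> gen 0 reference.
        GC.Collect(0);
        Console.WriteLine(parent.YoungObjectGeneration);

        GC.KeepAlive(parent);
    }
}
```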
Fortunately, .NET has a solution for this issue – card table data structure.
Card table data structure
The card table is used by .NET’s memory management system to store information about older-generation objects (gen 2 in this case) that reference younger objects (in gen 0 or gen 1). The execution engine adds an entry to the card table as soon as any reference to a gen 0 or gen 1 object is stored “inside” a gen 2 object.
For performance optimization reasons, the card table stores this information per “bit”, where each bit represents 128 bytes (or 256 bytes on 64-bit architectures) of managed memory. That’s how the card table can be imagined:
Then, as soon as gen 0 or gen 1 collection takes place, apart from examining references only from the particular generation’s roots, GC also checks which memory ranges have their bits set to 1, finds out which objects are allocated within these memory ranges and treats them as potential GC roots (analyzing what these objects reference).
What’s more, as Konrad pointed out in his comment, because of a write barrier optimization, whole bytes (8-bit chunks, not single bits as presented in the figure above) are marked as potential root sources. So in fact, a single cross-generation reference can mark up to 2048 bytes (on 64-bit architectures: 256 bytes x 8 = 2048 bytes) of memory as a potential source of GC roots.
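The arithmetic above can be sketched in a few lines. Note this is illustration only – the real card table lives inside the CLR and is not exposed to managed code; the constants are just the sizes quoted in this post (256-byte cards on 64-bit):

```csharp
using System;

class CardMath
{
    const int BytesPerCard = 256;        // 64-bit card size quoted above
    const int CardsPerTableByte = 8;     // the write barrier marks whole bytes

    // Which card covers a given (hypothetical) managed address.
    static long CardIndex(long address) => address / BytesPerCard;

    // How much memory a single dirty card-table byte can flag
    // as a potential source of GC roots.
    static long DirtyByteSpan => BytesPerCard * CardsPerTableByte;

    static void Main()
    {
        Console.WriteLine(CardIndex(0));    // 0
        Console.WriteLine(CardIndex(512));  // 2
        Console.WriteLine(DirtyByteSpan);   // 2048
    }
}
```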
Every referenced object found this way is added to the list of objects still in use (if you still don’t know what GC roots and the still-in-use objects list are, read the previous post now).
but… what about LOH?
If you’re curious about the in-depth memory organization of the Large Object Heap (LOH), I recommend reading this article by Maoni Stephens. I can just mention that the LOH has its own memory threshold which can be exceeded and which also triggers a gen 2 collection – from the GC’s point of view, collection of objects allocated on the LOH happens only during a gen 2 collection.
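The “LOH is collected only with gen 2” fact is directly observable through `GC.GetGeneration`: objects on the LOH report the maximum generation right from allocation. A small sketch (the 100,000-byte size is chosen to be safely above the LOH allocation threshold of 85,000 bytes):

```csharp
using System;

class LohDemo
{
    static void Main()
    {
        // Arrays of 85,000 bytes or more are allocated directly on the LOH.
        var big = new byte[100_000];
        var small = new byte[1_000];

        // LOH objects report the maximum generation (2) from the start,
        // matching the fact that they are only collected during full GCs.
        Console.WriteLine(GC.GetGeneration(big)); // 2
        Console.WriteLine(GC.GetGeneration(small)); // typically 0

        GC.KeepAlive(big);
        GC.KeepAlive(small);
    }
}
```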
We went through the concept of the SOH part of the managed heap being divided into 3 generations. By understanding how objects are promoted between generations and the implications of being a long-lived object, we saw the potentially harmful impact of static data on our application’s performance.
Another interesting thing we pointed out was the card table data structure used to solve cross-generations references issue.
Next week we’ll see more on unmanaged resources and objects’ finalization.
Stay tuned! 😉