
LogEventInfo - Changed LayoutCache from dictionary to linked list#6025

Merged
snakefoot merged 1 commit into NLog:dev from snakefoot:dev on Jan 31, 2026

Conversation

@snakefoot (Contributor) commented Oct 13, 2025

The Dictionary needs to allocate 3 objects upfront, and additionally allocates 2 larger arrays when inserting more than 3 items:

  • Dictionary-object
  • integer-array (bonus allocation on resize)
  • value-array (bonus allocation on resize)

The linked list reduces allocations when only needing to cache 1-4 layout-outputs, and should still have fast lookup with fewer than 10 layout-outputs.

Since the implicit caching of layout-output on Layout.Render() has been removed, and caching now only happens on precalculate, the linked list should not grow aggressively with many duplicate entries.
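Condensed to its essentials, the lock-free cache described above looks roughly like the following sketch. It is a simplified illustration, not the exact NLog source: plain `object` keys stand in for `Layout` instances, and the real `AddCachedLayoutValue` also early-exits via `TryGetCachedLayoutValue` when the layout is already cached.

```csharp
using System;
using System.Threading;

// One cache entry: immutable key/value, mutable Next for prepending.
sealed class LayoutCacheLinkedNode
{
    public readonly object Key;      // the Layout instance (compared by reference)
    public readonly object? Value;   // the cached rendered output
    public LayoutCacheLinkedNode? Next;

    public LayoutCacheLinkedNode(object key, object? value)
    {
        Key = key;
        Value = value;
    }
}

class LayoutCacheDemo
{
    private LayoutCacheLinkedNode? _layoutCache;

    public void AddCachedLayoutValue(object layout, object? value)
    {
        var newNode = new LayoutCacheLinkedNode(layout, value);
        var nodeHead = _layoutCache;
        do
        {
            newNode.Next = nodeHead;
            // Prepend only succeeds if no other thread changed the head meanwhile;
            // CompareExchange returns the head it observed before the call.
            nodeHead = Interlocked.CompareExchange(ref _layoutCache, newNode, nodeHead);
        }
        while (!ReferenceEquals(nodeHead, newNode.Next));
    }

    public bool TryGetCachedLayoutValue(object layout, out object? value)
    {
        // Readers traverse without any lock; reference equality matches the key.
        var node = _layoutCache;
        while (node != null)
        {
            if (ReferenceEquals(node.Key, layout))
            {
                value = node.Value;
                return true;
            }
            node = node.Next;
        }
        value = null;
        return false;
    }

    static void Main()
    {
        var cache = new LayoutCacheDemo();
        var layout = new object();
        cache.AddCachedLayoutValue(layout, "rendered-output");
        Console.WriteLine(cache.TryGetCachedLayoutValue(layout, out var v) ? v : "miss"); // rendered-output
    }
}
```

For 1-4 entries this allocates exactly one small node per cached layout, versus the Dictionary's upfront object plus bucket/entry arrays.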

@coderabbitai bot commented Oct 13, 2025

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Replaces LogEventInfo's internal layout cache, swapping a lock-guarded Dictionary for a lock-free singly linked list built from a private LayoutCacheLinkedNode and Interlocked.CompareExchange; AddCachedLayoutValue prepends nodes, TryGetCachedLayoutValue traverses the list. Public API unchanged.

Changes

Cohort / File(s) Summary of Changes
Layout cache refactor
src/NLog/LogEventInfo.cs
Replaced dictionary-based, lock-guarded _layoutCache with a lock-free singly linked list implemented by a private LayoutCacheLinkedNode (stores Layout, Value, Next). Implemented AddCachedLayoutValue to prepend nodes using Interlocked.CompareExchange with retry and early-exit via TryGetCachedLayoutValue. Implemented TryGetCachedLayoutValue to traverse the linked list by reference equality. Removed dictionary and locking paths; no public API changes.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant Caller
  participant LogEventInfo
  participant Cache as LayoutCache (linked list)

  Caller->>LogEventInfo: TryGetCachedLayoutValue(layout)
  LogEventInfo->>Cache: Traverse head -> next -> ...
  alt Match found
    Cache-->>LogEventInfo: Return cached value
    LogEventInfo-->>Caller: Value
  else Not found
    LogEventInfo-->>Caller: Miss
  end
sequenceDiagram
  autonumber
  participant Caller
  participant LogEventInfo
  participant Interlocked
  participant Cache as LayoutCache (linked list)

  Caller->>LogEventInfo: AddCachedLayoutValue(layout, value)
  LogEventInfo->>Cache: Read current head
  loop Retry (CAS) or early-exit if already present
    LogEventInfo->>Interlocked: CompareExchange(ref head, newNode{Next=oldHead}, oldHead)
    alt CAS success
      Interlocked-->>LogEventInfo: Swapped (new head)
      LogEventInfo-->>Caller: Done
    else CAS failed
      Interlocked-->>LogEventInfo: No swap (reload head)
    end
  end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

I hop through nodes with whiskers bright,
I prepend a leaf with nimble delight.
A CAS click, the head turns new,
I guard no locks — just rhythm true.
— a rabbit, caching in the night 🥕

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, which is insufficient; the required threshold is 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Title Check ✅ Passed: The title "LogEventInfo - Changed LayoutCache from dictionary to linked list" is fully related to and clearly summarizes the main change in the pull request. It identifies the class being modified (LogEventInfo), the specific component affected (LayoutCache), and the core transformation (dictionary to linked list). The title is concise, specific, and directly reflects the primary objective of the changeset without unnecessary elaboration.
  • Description Check ✅ Passed: The pull request description is clearly related to the changeset. It explains the problem being addressed (unnecessary allocations required by Dictionary) and the motivation for the solution (reducing allocations for typical cache sizes of 1-4 layouts while maintaining reasonable lookup performance). The description also provides relevant context about the removal of implicit caching, demonstrating understanding of how this change fits into the broader system.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6350737 and 979ecc9.

📒 Files selected for processing (1)
  • src/NLog/LogEventInfo.cs (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/NLog/LogEventInfo.cs
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: task-list-completed



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
src/NLog/LogEventInfo.cs (2)

72-72: Strengthen memory visibility on lock‑free cache reads (optional)

To guarantee readers always see fully published nodes across CPUs, consider volatile semantics for the head and volatile reads during traversal.

Option A (simplest): make the head volatile.

-        private LayoutCacheLinkedNode? _layoutCache;
+        private volatile LayoutCacheLinkedNode? _layoutCache;

Option B: use Volatile.Read on head and next in TryGet.

-            var nodeHead = _layoutCache;
+            var nodeHead = Volatile.Read(ref _layoutCache);
             while (nodeHead != null)
             {
                 if (ReferenceEquals(nodeHead.Key, layout))
                 {
                     value = nodeHead.Value;
                     return true;
                 }
-                nodeHead = nodeHead.Next;
+                nodeHead = Volatile.Read(ref nodeHead.Next);
             }

Also applies to: 637-646


814-825: Invalidate layout cache when message/parameters change (avoid stale values)

If Message/Parameters/FormatProvider mutate after precalc, cached layout outputs can become stale. Consider clearing the cache in ResetFormattedMessage.

         private void ResetFormattedMessage(bool rebuildMessageTemplateParameters)
         {
             if (_messageFormatter is null || _messageFormatter.Target is ILogMessageFormatter)
             {
                 _formattedMessage = null;
+                // Cached layout values may depend on message/parameters/format-provider
+                Volatile.Write(ref _layoutCache, null);
             }

Please confirm this matches intended lifecycle (i.e., LogEventInfo not mutated after layouts start precalculating). If mutation never happens post-precalc, this can be skipped.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5df46fa and 32222e7.

📒 Files selected for processing (1)
  • src/NLog/LogEventInfo.cs (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
src/NLog/LogEventInfo.cs (1)
src/NLog/Layouts/SimpleLayout.cs (1)
  • Layout (57-487)
🔇 Additional comments (2)
src/NLog/LogEventInfo.cs (2)

625-633: CAS prepend loop looks correct

Lock-free insertion is correct and wait-free for writers. No correctness issues spotted.


841-852: Node type is minimal and appropriate

Immutable key/value, single Next pointer—fits the prepend pattern. LGTM.

@snakefoot snakefoot requested a review from Copilot October 13, 2025 14:44

Copilot AI left a comment


Pull Request Overview

Replaces Dictionary-based layout caching with a linked list implementation to reduce memory allocations for small cache sizes. The change aims to optimize performance when caching 1-4 layout outputs by eliminating the upfront allocation overhead of Dictionary objects.

  • Replaced IDictionary<Layout, object?> with custom LayoutCacheLinkedNode linked list
  • Implemented lock-free concurrent operations using Interlocked.CompareExchange
  • Added new LayoutCacheLinkedNode nested class for the linked list structure


@snakefoot snakefoot force-pushed the dev branch 2 times, most recently from 4b09018 to 85cb4a9 on October 18, 2025 15:31

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
src/NLog/LogEventInfo.cs (1)

625-636: Critical race condition allows duplicate cache entries.

The duplicate check on line 630 reads _layoutCache directly via TryGetCachedLayoutValue, but another thread can insert the same layout between that check and the CAS on line 634. This race condition will create duplicate entries in the linked list, defeating the memory optimization goal and degrading lookup performance.

Additionally, creating the new node (line 625) before checking for existence wastes an allocation when the layout is already cached.

Consider this approach to prevent duplicates:

 internal void AddCachedLayoutValue(Layout layout, object? value)
 {
-    var newNode = new LayoutCacheLinkedNode(layout, value);
-    var nodeHead = _layoutCache;
+    if (TryGetCachedLayoutValue(layout, out _))
+        return;
 
+    var newNode = new LayoutCacheLinkedNode(layout, value);
+    var nodeHead = _layoutCache;
     do
     {
-        if (nodeHead != null && TryGetCachedLayoutValue(layout, out _))
-            return;
-
+        // Check if layout exists in the current snapshot
+        var current = nodeHead;
+        while (current != null)
+        {
+            if (ReferenceEquals(current.Key, layout))
+                return;
+            current = current.Next;
+        }
+        
         newNode.Next = nodeHead;
         nodeHead = Interlocked.CompareExchange(ref _layoutCache, newNode, nodeHead);
     }
     while (!ReferenceEquals(nodeHead, newNode.Next));
 }

This checks the snapshot (nodeHead) rather than re-reading _layoutCache, reducing (though not eliminating) the race window. Note that duplicates may still occur under high concurrency, but this significantly reduces the likelihood.

Based on past review comments.

🧹 Nitpick comments (1)
src/NLog/LogEventInfo.cs (1)

641-654: Consider using volatile read for better memory visibility.

The plain read of _layoutCache (line 641) might not immediately see updates from other threads on architectures with weaker memory models. While .NET's memory model is relatively strong and this is unlikely to cause issues in practice, using Volatile.Read would provide stronger guarantees.

Apply this diff for explicit memory ordering:

 internal bool TryGetCachedLayoutValue(Layout layout, out object? value)
 {
-    var nodeHead = _layoutCache;
+    var nodeHead = Volatile.Read(ref _layoutCache);
     while (nodeHead != null)
     {
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4b09018 and 85cb4a9.

📒 Files selected for processing (1)
  • src/NLog/LogEventInfo.cs (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
src/NLog/LogEventInfo.cs (1)
src/NLog/Layouts/SimpleLayout.cs (1)
  • Layout (57-487)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: task-list-completed
🔇 Additional comments (2)
src/NLog/LogEventInfo.cs (2)

636-636: Loop termination logic is correct.

The condition !ReferenceEquals(nodeHead, newNode.Next) correctly implements the CAS retry loop:

  • On success: CompareExchange returns the old value (equals newNode.Next), so the condition is false and the loop exits.
  • On failure: CompareExchange returns the current value (differs from newNode.Next), so the condition is true and the loop retries.

The past review comment claiming this causes an infinite loop is incorrect.

Based on past review comments.
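The return-value behavior that makes this exit condition work can be shown in isolation. A minimal toy sketch (not NLog code), relying only on the documented semantics of Interlocked.CompareExchange, which always returns the value the location held before the call:

```csharp
using System;
using System.Threading;

class CasSemantics
{
    static object? _slot;

    static void Main()
    {
        var original = new object();
        _slot = original;

        var replacement = new object();
        // CompareExchange returns the value _slot held BEFORE the call,
        // whether or not the swap actually happened.
        var seen = Interlocked.CompareExchange(ref _slot, replacement, original);

        // On success, the returned value equals the comparand we passed in;
        // this mirrors the "nodeHead equals newNode.Next" exit condition.
        Console.WriteLine(ReferenceEquals(seen, original));     // True: swap happened
        Console.WriteLine(ReferenceEquals(_slot, replacement)); // True: new value installed
    }
}
```

On failure the returned value is whatever another thread installed, which differs from the comparand, so the retry loop runs again with the fresh head.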


845-856: Well-designed node class for the linked list cache.

The LayoutCacheLinkedNode class is appropriately designed:

  • Sealed to prevent inheritance overhead
  • Readonly Key and Value prevent accidental mutation
  • Mutable Next enables lock-free list construction
  • Minimal memory footprint

@snakefoot snakefoot force-pushed the dev branch 2 times, most recently from 3d0e014 to 6350737 on October 18, 2025 15:36
@sonarqubecloud

@snakefoot snakefoot merged commit b216b70 into NLog:dev Jan 31, 2026
5 of 6 checks passed
@snakefoot snakefoot added this to the 6.1.0 milestone Jan 31, 2026


2 participants