XmlParser - Handle XML comments after root-end-tag #5930
Conversation
**Walkthrough**

The changes update the XML parser to skip and ignore trailing XML comments after the root element's end tag. Corresponding unit tests are added to verify that XML documents with comments after the root element are parsed correctly as empty documents. Additionally, the file target's private timer interval logic is refined, and a test for concurrent file target flushing is improved with retry logic.
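The trailing-comment handling described in the walkthrough can be sketched in a few lines. This is a hypothetical, language-agnostic illustration in Python (the NLog parser itself is C#, and `skip_trailing` is an invented helper name, not NLog API), assuming the parser holds a position just past the root element's end tag:

```python
def skip_trailing(xml: str, pos: int) -> int:
    """Return the index just past trailing whitespace/comments, or -1 on unexpected content."""
    while pos < len(xml):
        if xml[pos].isspace():
            pos += 1
        elif xml.startswith("<!--", pos):
            end = xml.find("-->", pos + 4)
            if end < 0:
                return -1  # unterminated comment is still a parse error
            pos = end + 3
        else:
            return -1  # real content after the root element remains an error
    return pos

doc = "<root/> <!-- trailing comment --> "
after_root = doc.index("/>") + 2  # position just past the root end tag
print(skip_trailing(doc, after_root) == len(doc))  # True: comment and whitespace consumed
```

Note that only comments and whitespace are forgiven; any other content after the root element is still rejected, which matches the XML well-formedness rules for the document's trailing `Misc` section.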
Actionable comments posted: 0
🧹 Nitpick comments (2)
src/NLog/Targets/FileTarget.cs (1)
**462-473**: Add documentation for magic numbers and consider using named constants.

The conditional logic introduces hardcoded values (120, 240, 3600) without explanation. This makes the code harder to understand and maintain.
Consider adding constants and documentation to clarify the rationale:

```diff
+        // Timer interval constants (in seconds)
+        private const int OptimalMonitorInterval = 120;
+        private const int MinCacheTimeoutForOptimization = 240;
+        private const int MaxCacheTimeoutForOptimization = 3600;
+
         private int OpenFileMonitorTimerInterval
         {
             get
             {
                 if (OpenFileFlushTimeout <= 0 || AutoFlush || !KeepFileOpen)
-                    return (OpenFileCacheTimeout > 240 && OpenFileCacheTimeout < 3600) ? 120 : OpenFileCacheTimeout;
+                    return (OpenFileCacheTimeout > MinCacheTimeoutForOptimization && OpenFileCacheTimeout < MaxCacheTimeoutForOptimization) ? OptimalMonitorInterval : OpenFileCacheTimeout;
                 else if (OpenFileCacheTimeout <= 0)
                     return OpenFileFlushTimeout;
                 else
                     return Math.Min(OpenFileFlushTimeout, OpenFileCacheTimeout);
             }
         }
```

Additionally, consider adding a comment explaining why the 120-second interval is optimal for cache timeouts in the 240-3600 range.
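For intuition, here is a small Python sketch (hypothetical; the NLog source is C#) that mirrors the branch structure quoted in this nitpick and shows which interval wins for a few sample timeout values, all in seconds:

```python
def monitor_timer_interval(flush_timeout, auto_flush, keep_file_open, cache_timeout):
    # Mirrors the conditional quoted in the review comment (not the actual NLog code).
    if flush_timeout <= 0 or auto_flush or not keep_file_open:
        # Cache timeouts between 240 and 3600 are capped to a 120s monitor interval.
        return 120 if 240 < cache_timeout < 3600 else cache_timeout
    elif cache_timeout <= 0:
        return flush_timeout
    else:
        return min(flush_timeout, cache_timeout)

print(monitor_timer_interval(0, False, True, 300))   # 120: capped to the optimal interval
print(monitor_timer_interval(0, False, True, 7200))  # 7200: outside the 240-3600 range
print(monitor_timer_interval(30, False, True, 300))  # 30: min(flush, cache)
```

The sketch makes the reviewer's point concrete: without named constants, it is not obvious from the code why a 300-second cache timeout produces a 120-second timer while a 7200-second one does not.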
tests/NLog.Targets.ConcurrentFile.Tests/ConcurrentFileTargetTests.cs (1)
**1180-1186**: Consider using more robust retry logic with better error handling.

The retry mechanism improves test reliability, but there are several areas for improvement:

- Magic numbers: the retry count (3) and sleep multiplier (1.5) should be constants
- Zero timeout handling: if `autoFlushTimeout` is 0, the sleep will be 0, making the retry ineffective
- No failure indication: if all retries fail, the test continues without any indication of the retry failure

Consider this improved implementation:

```diff
-            for (int i = 0; i < 3; ++i)
-            {
-                Thread.Sleep(TimeSpan.FromSeconds(autoFlushTimeout * 1.5));
-                var fileInfo = new FileInfo(logFile);
-                if (fileInfo.Exists && fileInfo.Length > 0)
-                    break;
-            }
+            const int maxRetries = 3;
+            const double timeoutMultiplier = 1.5;
+            var retryDelay = TimeSpan.FromSeconds(Math.Max(autoFlushTimeout * timeoutMultiplier, 0.1));
+
+            bool fileReady = false;
+            for (int i = 0; i < maxRetries; ++i)
+            {
+                Thread.Sleep(retryDelay);
+                var fileInfo = new FileInfo(logFile);
+                if (fileInfo.Exists && fileInfo.Length > 0)
+                {
+                    fileReady = true;
+                    break;
+                }
+            }
+
+            if (!fileReady)
+            {
+                // Log or assert that retries failed for better debugging
+                Assert.True(false, $"File {logFile} was not ready after {maxRetries} retries");
+            }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- src/NLog/Internal/XmlParser.cs (1 hunks)
- src/NLog/Targets/FileTarget.cs (1 hunks)
- tests/NLog.Targets.ConcurrentFile.Tests/ConcurrentFileTargetTests.cs (1 hunks)
- tests/NLog.UnitTests/Internal/XmlParserTests.cs (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- src/NLog/Internal/XmlParser.cs
- tests/NLog.UnitTests/Internal/XmlParserTests.cs
Resolves #5928 - Followup to #5712