CRITICAL BUG FIX: ReadMessagesAtOffset returned an error instead of attempting disk I/O when data had been flushed from memory, causing massive message loss (6254 of 12192 messages, ~51%).

Problem:
In log_read_stateless.go, lines 120-131, when data had been flushed to disk (empty previous buffer), the code returned an "offset out of range" error instead of attempting disk I/O. Consumers therefore skipped over flushed data entirely, leading to catastrophic message loss. The bug occurred when:
1. Data was written to the LogBuffer
2. Data was flushed to disk due to buffer rotation
3. A consumer requested that offset range
4. The code found the offset within the expected range but not in memory
5. ❌ The code returned an error instead of reading from disk

Root Cause:
Lines 126-131 returned early with an error when the previous buffer was empty:

    // Data not in memory - for stateless fetch, we don't do disk I/O
    return messages, startOffset, highWaterMark, false, fmt.Errorf("offset %d out of range...")

The comment was incorrect - we DO need disk I/O for flushed data.

Fix:
1. Lines 120-132: Fall through to the disk read logic instead of returning an error when the previous buffer is empty.
2. Lines 137-177: Extend the disk read logic to handle TWO cases:
   - Historical data (offset < bufferStartOffset)
   - Flushed data (offset >= bufferStartOffset but not in memory)

Changes (sketched in the example below):
- Line 121: Log "attempting disk read" instead of breaking out
- Lines 130-132: Fall through to the disk read instead of returning an error
- Line 141: Change the condition from 'if startOffset < bufferStartOffset' to 'if startOffset < currentBufferEnd' to handle both cases
- Lines 143-149: Add context-aware logging for both historical and flushed data
- Lines 154-159: Add context-aware error messages

Expected Results:
- Before: ~51% message loss (6254 of 12192 messages missing)
- After: <1% message loss (only from rebalancing, which is already fixed)
- Duplicates: should remain around 47% (caused by rebalancing; expected until offsets are committed)

Testing:
- ✅ Compiles successfully
- Ready for integration testing with standard-test

Related Issues:
- This explains the massive data loss in recent load tests
- The disk I/O fallback was implemented but unreachable because of the early return
- The disk chunk cache is working but was never used for flushed data

Priority: CRITICAL - fixes a production-breaking data loss bug.
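To make the corrected control flow concrete, here is a minimal, self-contained Go sketch of the two-tier read path. The names here (tieredLog, readAt, readFromDisk, memEnd) are hypothetical stand-ins, not the actual identifiers in log_read_stateless.go, and the real function returns additional values (such as the high water mark) that the sketch omits.

    // Minimal sketch of the two-tier read: serve from the in-memory tail
    // when possible, otherwise fall through to disk for flushed offsets.
    package main

    import (
        "errors"
        "fmt"
    )

    type message struct {
        offset int64
        data   string
    }

    // tieredLog models a log whose tail lives in memory and whose older,
    // flushed entries live on disk. (Hypothetical stand-in for LogBuffer.)
    type tieredLog struct {
        memStart int64     // first offset still resident in memory
        memEnd   int64     // one past the newest offset (plays the role of currentBufferEnd)
        mem      []message // in-memory tail
        disk     []message // everything ever flushed
    }

    var errNotOnDisk = errors.New("offset not found on disk")

    func (l *tieredLog) readFromDisk(off int64) (message, error) {
        for _, m := range l.disk {
            if m.offset == off {
                return m, nil
            }
        }
        return message{}, errNotOnDisk
    }

    // readAt mirrors the fixed control flow: memory first, then disk for
    // BOTH historical offsets (off < memStart) and flushed offsets
    // (off >= memStart but no longer resident). The buggy version returned
    // "offset out of range" in the second case without trying disk I/O.
    func (l *tieredLog) readAt(off int64) (message, error) {
        for _, m := range l.mem {
            if m.offset == off {
                return m, nil // served from memory
            }
        }
        // Fall through to disk for anything below the buffer end, instead
        // of returning an error as the pre-fix code did.
        if off < l.memEnd {
            return l.readFromDisk(off)
        }
        return message{}, fmt.Errorf("offset %d beyond end of log", off)
    }

    func main() {
        l := &tieredLog{
            memStart: 2,
            memEnd:   4,
            mem:      []message{{2, "c"}, {3, "d"}},
            disk:     []message{{0, "a"}, {1, "b"}, {2, "c"}},
        }
        // Offsets 0 and 1 were flushed; the fixed path reads them from disk.
        for off := int64(0); off < 4; off++ {
            m, err := l.readAt(off)
            fmt.Println(off, m.data, err)
        }
    }

The essential change is the off < l.memEnd check, mirroring the switch from bufferStartOffset to currentBufferEnd in the real code: an offset below the end of the current buffer that is not resident in memory is treated as flushed and retried from disk rather than rejected as out of range.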