- Added centralized errors.go with complete Kafka error code definitions
- Implemented timeout detection and network error classification (see the sketch after this list)
- Enhanced connection handling with configurable timeouts and better error reporting
- Added comprehensive error handling test suite with 21 test cases
- Unified error code usage across all protocol handlers
- Improved request/response timeout handling with graceful fallbacks
- All protocol and E2E tests passing with robust error handling
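As a rough illustration of the classification above, here is one way to map connection-level errors onto Kafka error codes. This is a minimal sketch: the numeric codes 7 and 13 come from the Kafka protocol, but `classifyNetworkError` and the constant names are illustrative, not the actual identifiers in errors.go.

```go
package protocol

import (
	"errors"
	"net"
	"os"
)

// Subset of Kafka protocol error codes (numeric values are fixed by the protocol).
const (
	errNone             int16 = 0
	errRequestTimedOut  int16 = 7  // REQUEST_TIMED_OUT
	errNetworkException int16 = 13 // NETWORK_EXCEPTION
)

// classifyNetworkError maps a connection-level error to a Kafka error code.
func classifyNetworkError(err error) int16 {
	if err == nil {
		return errNone
	}
	// Timeouts, including missed read/write deadlines, map to REQUEST_TIMED_OUT.
	var netErr net.Error
	if errors.As(err, &netErr) && netErr.Timeout() {
		return errRequestTimedOut
	}
	if errors.Is(err, os.ErrDeadlineExceeded) {
		return errRequestTimedOut
	}
	// Everything else is reported as a generic network exception.
	return errNetworkException
}
```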
- Added flexible_versions.go with utilities for Kafka flexible versions (v3+)
- Implemented ParseRequestHeader for compact string and tagged-field parsing (see the sketch after this list)
- Added fallback mechanism in handler.go for backward compatibility
- Updated handleApiVersions to support flexible version responses
- Added comprehensive tests for flexible version utilities
- All protocol tests passing with robust error handling
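For context, flexible versions (KIP-482) encode strings as COMPACT_STRING: an unsigned varint holding length+1, followed by that many bytes, with tagged fields appended at the end of each structure. A minimal decoding sketch; the helper name is an assumption, not necessarily what flexible_versions.go uses.

```go
package protocol

import (
	"encoding/binary"
	"fmt"
)

// readCompactString decodes a COMPACT_STRING and returns the string plus
// the number of bytes consumed. A varint of 0 encodes a null string;
// otherwise the varint holds the string length plus one.
func readCompactString(buf []byte) (string, int, error) {
	lenPlusOne, n := binary.Uvarint(buf)
	if n <= 0 {
		return "", 0, fmt.Errorf("invalid compact string length varint")
	}
	if lenPlusOne == 0 {
		return "", n, nil // null string
	}
	strLen := int(lenPlusOne) - 1
	if len(buf) < n+strLen {
		return "", 0, fmt.Errorf("compact string truncated: need %d more bytes", n+strLen-len(buf))
	}
	return string(buf[n : n+strLen]), n + strLen, nil
}
```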
Multi-batch Fetch support completed:
## Core Features
- **MaxBytes compliance**: Respects fetch request MaxBytes limits to prevent oversized responses
- **Multi-batch concatenation**: Properly concatenates multiple record batches in single response
- **Size estimation**: Pre-estimates batch sizes to optimize MaxBytes usage before construction
- **Kafka-compliant behavior**: Always returns at least one batch even if it exceeds MaxBytes (first batch rule)
## Implementation Details
- **MultiBatchFetcher**: New dedicated type for multi-batch operations (see the sketch after this section)
- **Intelligent batching**: Adapts record count per batch based on available space (10-50 records)
- **Proper concatenation format**: Each batch maintains independent headers and structure
- **Fallback support**: Graceful fallback to single batch if multi-batch fails
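A sketch of how such an assembly loop can honor MaxBytes while keeping the first-batch rule. MultiBatchFetcher itself exists per this changelog, but the method, callback shape, and the 64 KiB threshold below are illustrative assumptions.

```go
// MultiBatchFetcher is shown as an empty stand-in; only the loop matters here.
type MultiBatchFetcher struct{}

// Assemble concatenates complete record batches, each keeping its own
// header, until the MaxBytes budget is spent. nextBatch builds a batch
// of up to count records and reports whether any records were left.
func (f *MultiBatchFetcher) Assemble(maxBytes int32, nextBatch func(count int) ([]byte, bool)) []byte {
	var response []byte
	remaining := int(maxBytes)
	for {
		// Adaptive sizing: request more records per batch while space is
		// plentiful, fewer as the budget runs out.
		count := 10
		if remaining > 64*1024 {
			count = 50
		}
		batch, ok := nextBatch(count)
		if !ok {
			break // no more records available
		}
		// First-batch rule: always return at least one batch, even if it
		// alone exceeds MaxBytes, so consumers can make progress.
		if len(response) > 0 && len(batch) > remaining {
			break
		}
		response = append(response, batch...)
		remaining -= len(batch)
		if remaining <= 0 {
			break
		}
	}
	return response
}
```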
## Advanced Features
- **Compression ready**: Basic support for compressed record batches (GZIP placeholder)
- **Size tracking**: Tracks total response size and batch count across operations
- **Edge case handling**: Handles large single batches, empty responses, partial batches
## Integration & Testing
- **Fetch API integration**: Seamlessly integrated with existing handleFetch pipeline
- **17 comprehensive tests**: Multi-batch scenarios, size limits, concatenation format validation
- **E2E compatibility**: Sarama tests pass with no regressions
- **Performance validation**: Benchmarks for batch construction and multi-fetch operations
## Performance Improvements
- **Better bandwidth utilization**: Fills available MaxBytes space efficiently
- **Reduced round trips**: Multiple batches in single response
- **Adaptive sizing**: Smaller batches when space limited, larger when space available
Ready for Phase 6: Basic flexible versions support
ApiVersions Matrix Accuracy completed:
## Critical Fixes
- **OffsetFetch API**: Updated the advertised range from v0-v2 to v0-v5 (MAJOR fix)
  - Implementation already supported v3+ throttle_time_ms and v5+ leader_epoch
  - Clients can now use these advanced OffsetFetch features
- **CreateTopics API**: Updated the advertised range from v0-v4 to v0-v5 (minor fix)
  - Implementation already routed v5 requests to the v2+ handler
  - Better client compatibility for v5 CreateTopics requests
## Implementation
- **handleApiVersions()**: Corrected advertised max versions
- **validateAPIVersion()**: Updated validation ranges to match the advertised versions (see the sketch after this list)
- **Consistency**: Eliminated the mismatch between advertised and implemented versions
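A minimal sketch of the corrected validation, assuming a map from API key to supported range. The API key numbers (OffsetFetch = 9, CreateTopics = 19) are fixed by the Kafka protocol; the map shape and error text are assumptions.

```go
package protocol

import "fmt"

// Advertised (min, max) version per API key; only the two corrected
// entries are shown here.
var supportedVersions = map[int16][2]int16{
	9:  {0, 5}, // OffsetFetch: raised from v0-v2 to v0-v5
	19: {0, 5}, // CreateTopics: raised from v0-v4 to v0-v5
}

func validateAPIVersion(apiKey, apiVersion int16) error {
	r, ok := supportedVersions[apiKey]
	if !ok {
		return fmt.Errorf("unsupported API key %d", apiKey)
	}
	if apiVersion < r[0] || apiVersion > r[1] {
		return fmt.Errorf("API key %d: version %d outside advertised range v%d-v%d",
			apiKey, apiVersion, r[0], r[1])
	}
	return nil
}
```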
## Testing & Verification
- **Comprehensive test suite**: 6 new tests in api_versions_test.go
- **Version validation tests**: OffsetFetch v3-v5 and CreateTopics v5 now accepted
- **End-to-end verification**: E2E tests still pass, no regressions
- **API audit documentation**: Complete version matrix in API_VERSION_MATRIX.md
## Impact
- **Client compatibility**: Higher-version clients can now connect properly
- **Feature utilization**: Advanced features like leader epoch, throttle time accessible
- **Protocol compliance**: Advertised versions now match actual implementation
- **Future-proofing**: Clear process for managing API version accuracy
Ready for Phase 4: Consumer group protocol metadata parsing
CreateTopics Protocol Compliance completed:
## Implementation
- Implement handleCreateTopicsV0V1() with proper v0/v1 request parsing
- Support the regular (non-compact) array/string format for v0/v1 (see the sketch after this list)
- Parse topic name, partitions, replication factor, assignments, configs
- Handle timeout_ms and validate_only fields correctly
- Maintain existing v2+ compact format support
- Wire to SeaweedMQ handler for actual topic creation
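For reference, v0/v1 use the classic wire format: STRING is an int16 length prefix (-1 for null) and ARRAY is an int32 count prefix, with no varints or tagged fields. A decoding sketch under those rules; the helper names are assumptions, not the actual code.

```go
package protocol

import (
	"encoding/binary"
	"fmt"
)

// readString decodes a classic (non-compact) STRING.
func readString(buf []byte) (string, int, error) {
	if len(buf) < 2 {
		return "", 0, fmt.Errorf("string length truncated")
	}
	n := int(int16(binary.BigEndian.Uint16(buf)))
	if n < 0 {
		return "", 2, nil // null string
	}
	if len(buf) < 2+n {
		return "", 0, fmt.Errorf("string body truncated")
	}
	return string(buf[2 : 2+n]), 2 + n, nil
}

// parseTopicV0 decodes the start of one topic entry: name (STRING),
// num_partitions (INT32), replication_factor (INT16).
func parseTopicV0(buf []byte) (name string, partitions int32, replication int16, err error) {
	name, off, err := readString(buf)
	if err != nil {
		return "", 0, 0, err
	}
	if len(buf) < off+6 {
		return "", 0, 0, fmt.Errorf("topic entry truncated")
	}
	partitions = int32(binary.BigEndian.Uint32(buf[off : off+4]))
	replication = int16(binary.BigEndian.Uint16(buf[off+4 : off+6]))
	return name, partitions, replication, nil
}
```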
## Key Features
- Full v0-v5 CreateTopics API version support
- Proper error handling (TOPIC_ALREADY_EXISTS, INVALID_PARTITIONS, etc.)
- Partition count validation and enforcement
- Compatible with existing SeaweedMQ topic management
## Tests
- Comprehensive unit tests for v0/v1/v2+ parsing
- Error condition testing (duplicate topics, invalid partitions)
- Multi-topic creation support
- Integration tests across all API versions
- Performance benchmarks for CreateTopics operations
## Verification
- All protocol tests pass (v0-v5 CreateTopics)
- E2E Sarama tests continue to work
- Real topics created with specified partition counts
- Proper error responses for edge cases
Ready for Phase 3: ApiVersions matrix accuracy
Core SeaweedMQ Integration completed:
## Implementation
- Implement SeaweedMQHandler.GetStoredRecords() to retrieve actual records from SeaweedMQ
- Add SeaweedSMQRecord wrapper implementing the offset.SMQRecord interface (see the sketch below)
- Wire Fetch API to use real SMQ records instead of synthetic batches
- Support both agent and broker client connections for record retrieval
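An illustrative shape for that wrapper; the offset.SMQRecord method set shown here is inferred from this changelog, not the verified interface definition.

```go
// SeaweedSMQRecord adapts a record retrieved from SeaweedMQ to the
// offset.SMQRecord interface consumed by the Fetch path.
type SeaweedSMQRecord struct {
	key, value  []byte
	timestampNs int64
	kafkaOffset int64 // Kafka offset mapped from the SeaweedMQ record
}

func (r *SeaweedSMQRecord) GetKey() []byte      { return r.key }
func (r *SeaweedSMQRecord) GetValue() []byte    { return r.value }
func (r *SeaweedSMQRecord) GetTimestamp() int64 { return r.timestampNs }
func (r *SeaweedSMQRecord) GetOffset() int64    { return r.kafkaOffset }
```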
## Key Features
- Proper Kafka offset mapping from SeaweedMQ records
- Respects maxRecords limit and batch size constraints
- Graceful error handling for missing topics/partitions
- High water mark boundary checking
## Tests
- Unit tests for SMQRecord interface compliance
- Edge case testing (empty topics, offset boundaries, limits)
- Integration with existing end-to-end Kafka tests
- Benchmark tests for record accessor performance
## Verification
- All integration tests pass
- E2E Sarama test shows 'Found X SMQ records' debug output
- GetStoredRecords now returns real data instead of a TODO placeholder
Ready for Phase 2: CreateTopics protocol compliance
Fixed GetHighWaterMark() to use the correct partition managers
Fixed GetPartitionOffsetInfo() to populate the proper struct fields
Fixed GetOffsetMetrics() to use the correct types and offset system
- Fix gateway tests: Replace AgentAddress with Masters in Options struct
- Fix consumer test: Correct GenerateMemberID test to expect deterministic behavior
- Fix schema tests: Remove incorrect error assertions for mock broker scenarios
- All core offset management and protocol tests now pass
- Gateway, consumer, protocol, and offset packages compile and test successfully
- Remove old SMQIntegratedStorage implementation from persistence.go
- Update all integration modules to use SMQOffsetStorage instead
- Add delegation methods to PersistentLedger for backward compatibility
- Fix method signatures and compilation errors
- Maintain support for legacy offset operations through SeaweedMQStorage
- Add end-to-end flow tests for Kafka OffsetCommit to SMQ storage
- Test multiple consumer groups with independent offset tracking
- Validate SMQ file path and format compatibility
- Test error handling and edge cases (negative, zero, max offsets)
- Verify offset encoding/decoding matches SMQ broker format
- Ensure consumer group isolation and proper key generation
- Update Kafka protocol handler to use SMQOffsetStorage for consumer offsets
- Modify OffsetCommit to save consumer offsets using SMQ's filer format
- Modify OffsetFetch to read consumer offsets from SMQ's filer location
- Add proper ConsumerOffsetKey creation with consumer group and instance ID (see the sketch after this list)
- Maintain backward compatibility with in-memory storage fallback
- Include comprehensive test coverage for offset handler integration
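An illustrative shape for that key and the filer path it resolves to; the field names and path helper are assumptions consistent with the storage layout described in the next entry.

```go
package offset

import "fmt"

// ConsumerOffsetKey identifies one committed consumer offset.
type ConsumerOffsetKey struct {
	Topic                 string
	Partition             int32
	ConsumerGroup         string
	ConsumerGroupInstance string // optional static-membership instance ID
}

// path maps the key onto the SMQ filer layout; topicDir and partitionDir
// stand in for the actual directory names.
func (k ConsumerOffsetKey) path(topicDir, partitionDir string) string {
	return fmt.Sprintf("%s/%s/%s.offset", topicDir, partitionDir, k.ConsumerGroup)
}
```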
- Add SMQOffsetStorage that uses same filer locations and format as SMQ brokers
- Store offsets in `<topic-dir>/<partition-dir>/<consumerGroup>.offset` files
- Use the 8-byte big-endian format matching the SMQ broker implementation (see the sketch below)
- Include comprehensive test coverage for core functionality
- Maintain backward compatibility through legacy method support
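A minimal sketch of that 8-byte big-endian encoding; the helper names are illustrative.

```go
package offset

import (
	"encoding/binary"
	"fmt"
)

// encodeOffset writes a committed offset in the same 8-byte big-endian
// layout the SMQ broker uses for .offset files.
func encodeOffset(offset int64) []byte {
	buf := make([]byte, 8)
	binary.BigEndian.PutUint64(buf, uint64(offset))
	return buf
}

// decodeOffset reads it back, rejecting malformed files.
func decodeOffset(buf []byte) (int64, error) {
	if len(buf) != 8 {
		return 0, fmt.Errorf("offset file must be 8 bytes, got %d", len(buf))
	}
	return int64(binary.BigEndian.Uint64(buf)), nil
}
```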