
debug: enable detailed logging for SeaweedFS client file operations

Enable DEBUG logging for:
- SeaweedRead: Shows fileSize calculations from chunks
- SeaweedOutputStream: Shows write/flush/close operations
- SeaweedInputStream: Shows read operations and content length

This will reveal:
1. What file size is calculated from Entry chunks metadata
2. What actual chunk sizes are written
3. If there's a mismatch between metadata and actual data
4. Whether the missing '78 bytes' is a consistent pattern

Looking for clues about the EOF exception root cause.
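As a minimal sketch of point 1 above: file size is presumably derived from the Entry's chunk metadata as the furthest extent any chunk covers, i.e. max(offset + size). The `Chunk` record and `fileSize` method below are hypothetical stand-ins, not the actual `seaweedfs.client` classes; the point is that if the metadata-declared size exceeds the bytes actually written, a reader trusting the metadata hits EOF early.

```java
import java.util.Arrays;
import java.util.List;

public class ChunkFileSize {
    // Hypothetical stand-in for a chunk's metadata (not the real SeaweedFS type).
    static final class Chunk {
        final long offset;
        final long size;
        Chunk(long offset, long size) { this.offset = offset; this.size = size; }
    }

    // Assumed calculation: the file ends where the furthest chunk ends.
    static long fileSize(List<Chunk> chunks) {
        long max = 0;
        for (Chunk c : chunks) {
            max = Math.max(max, c.offset + c.size);
        }
        return max;
    }

    public static void main(String[] args) {
        // Metadata declares two chunks totaling 1000 bytes...
        List<Chunk> metadata = Arrays.asList(new Chunk(0, 512), new Chunk(512, 488));
        long declared = fileSize(metadata);

        // ...but suppose the last chunk actually holds 78 fewer bytes on disk.
        long actuallyWritten = 512 + (488 - 78);

        System.out.println("declared=" + declared
                + " actual=" + actuallyWritten
                + " missing=" + (declared - actuallyWritten));
        // prints: declared=1000 actual=922 missing=78
    }
}
```

Comparing the DEBUG output of SeaweedRead (declared size) against SeaweedOutputStream (written chunk sizes) should show whether such a gap exists.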
pull/7526/head
chrislu · 6 days ago
parent commit 65d9aacceb

test/java/spark/src/test/resources/log4j.properties (+4)
@@ -11,6 +11,10 @@ log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}:
 log4j.logger.org.apache.spark=WARN
 log4j.logger.org.apache.hadoop=WARN
 log4j.logger.seaweed=INFO
+# Enable DEBUG for SeaweedFS client to see file size calculations
+log4j.logger.seaweedfs.client.SeaweedRead=DEBUG
+log4j.logger.seaweedfs.client.SeaweedOutputStream=DEBUG
+log4j.logger.seaweedfs.client.SeaweedInputStream=DEBUG
 # Suppress Parquet verbose DEBUG logging
 log4j.logger.org.apache.parquet=ERROR
