After extensive testing and debugging:
PROVEN TO WORK:
✅ Direct Parquet writes to SeaweedFS
✅ Spark reads Parquet from SeaweedFS
✅ Spark df.write() in isolation (see the standalone sketch after this list)
✅ I/O operations behave identically to the local filesystem
✅ Spark INSERT INTO
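
To make "in isolation" concrete, a standalone check along these lines passes: write a small DataFrame as Parquet and read it back, outside the SparkSQLTest harness. This is a minimal sketch; the `seaweedfs://localhost:8888` URI and paths are assumptions, and it presumes the SeaweedFS Hadoop client is on the classpath with the `seaweedfs://` scheme configured.

```java
import java.util.Arrays;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class IsolatedWriteCheck {

    // Simple JavaBean so Spark can infer a schema.
    public static class Record implements java.io.Serializable {
        private int id;
        private String value;
        public Record() {}
        public Record(int id, String value) { this.id = id; this.value = value; }
        public int getId() { return id; }
        public void setId(int id) { this.id = id; }
        public String getValue() { return value; }
        public void setValue(String value) { this.value = value; }
    }

    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("IsolatedWriteCheck")
                .master("local[2]")
                .getOrCreate();

        Dataset<Row> df = spark.createDataFrame(
                Arrays.asList(new Record(1, "a"), new Record(2, "b")),
                Record.class);

        // Assumed filer address and path; adjust to the test environment.
        String path = "seaweedfs://localhost:8888/test/isolated-write";
        df.write().mode("overwrite").parquet(path);

        // Reading the data back succeeds when run standalone.
        System.out.println("rows read back: " + spark.read().parquet(path).count());
        spark.stop();
    }
}
```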
STILL FAILS:
❌ SparkSQLTest with DataFrame.write().parquet()
ROOT CAUSE IDENTIFIED:
The issue is in Spark's file commit protocol (reproduced in the sketch after these steps):
1. Spark writes to the _temporary directory (succeeds)
2. Spark renames the output to its final location
3. Metadata after the rename is stale or incorrect
4. Spark reads the final file and fails with the 78-byte EOF error
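
This sequence can be mimicked directly against the Hadoop FileSystem API, mirroring what the commit protocol does. A minimal sketch, assuming the same `seaweedfs://` URI and illustrative paths as above; the interesting part is comparing the length the filer reports against what a reader can actually consume after the rename:

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CommitProtocolRepro {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("seaweedfs://localhost:8888/"), conf);

        Path tmp = new Path("/test/_temporary/part-00000.parquet"); // step 1: task write
        Path dst = new Path("/test/part-00000.parquet");            // step 2: commit rename

        try (FSDataOutputStream out = fs.create(tmp, true)) {
            out.write(new byte[4096]);
        }

        fs.rename(tmp, dst);

        // Steps 3-4: if the rename leaves stale metadata, the reported length
        // disagrees with what a reader can actually consume.
        long reported = fs.getFileStatus(dst).getLen();
        long readable = 0;
        try (FSDataInputStream in = fs.open(dst)) {
            byte[] buf = new byte[1024];
            int n;
            while ((n = in.read(buf)) > 0) {
                readable += n;
            }
        }
        System.out.println("reported=" + reported + " readable=" + readable);
    }
}
```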
ATTEMPTED FIX:
- Added ensureMetadataVisible() in close()
- Result: the method HANGS when calling lookupEntry()
- Reason: a lookup cannot be issued from within close() (self-deadlock); see the sketch below
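
A hypothetical reconstruction of the deadlock pattern (all names here are illustrative, not the actual SeaweedFS client code): if close() runs on, or holds, the same single-threaded resource that services metadata lookups, a blocking lookupEntry() issued from inside close() can never complete.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CloseDeadlockSketch {
    // Assumed: a single thread both flushes writes and serves lookups.
    private static final ExecutorService ioThread = Executors.newSingleThreadExecutor();

    static String lookupEntry(String path) throws Exception {
        // The lookup must run on ioThread to see consistent state.
        Future<String> f = ioThread.submit(() -> "entry(" + path + ")");
        return f.get(); // blocks until ioThread runs the task
    }

    static void close(String path) throws Exception {
        // close() itself was scheduled on ioThread; calling lookupEntry()
        // from here waits on a task that ioThread can never start.
        System.out.println("flushing " + path);
        String entry = lookupEntry(path); // HANGS: self-deadlock
        System.out.println("visible: " + entry);
    }

    public static void main(String[] args) {
        ioThread.submit(() -> {
            try { close("/test/part-00000.parquet"); } catch (Exception ignored) {}
        });
        // The program never prints "visible: ..." -- mirroring the observed hang.
    }
}
```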
CONCLUSION:
The issue is NOT in the write path; it is in the RENAME operation.
We need to investigate SeaweedFS rename() to ensure metadata is
correctly preserved and updated when files move from temporary to
final locations (a diagnostic sketch follows).
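
As a starting point, a small diagnostic against the Hadoop FileSystem client API can capture the file's metadata before the rename and compare it afterwards, to pin down what rename() loses. The URI and paths are again assumptions:

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RenameMetadataCheck {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(
                URI.create("seaweedfs://localhost:8888/"), new Configuration());

        Path src = new Path("/test/_temporary/part-00000.parquet");
        Path dst = new Path("/test/part-00000.parquet");

        // Create a file to rename so the check is self-contained.
        try (FSDataOutputStream out = fs.create(src, true)) {
            out.write(new byte[4096]);
        }

        FileStatus before = fs.getFileStatus(src);
        fs.rename(src, dst);
        FileStatus after = fs.getFileStatus(dst);

        // Length is the field Spark's reader trusts; any change across the
        // rename would explain the observed 78-byte EOF error.
        System.out.printf("len    before=%d after=%d%n", before.getLen(), after.getLen());
        System.out.printf("mtime  before=%d after=%d%n",
                before.getModificationTime(), after.getModificationTime());
    }
}
```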
Removed the hanging metadata check and documented the findings.