- * @return true if the file has been truncated to the desired
- * newLength and is immediately available to be reused for
- * write operations such as append, or
- * false if a background process of adjusting the length of
- * the last block has been started, and clients should wait for it to
- * complete before proceeding with further file updates.
- * @throws IOException IO failure
- * @throws UnsupportedOperationException if the operation is unsupported
- * (default).
- */
- @Override
- public boolean truncate(Path f, long newLength) throws IOException {
- throw new UnsupportedOperationException("Not implemented by the " +
- getClass().getSimpleName() + " FileSystem implementation");
- }
-
- @Override
- public void createSymlink(final Path target, final Path link,
- final boolean createParent) throws IOException {
- // Supporting filesystems should override this method
- throw new UnsupportedOperationException(
- "Filesystem does not support symlinks!");
- }
-
- public boolean supportsSymlinks() {
- return false;
- }
-
- /**
- * Create a snapshot.
- *
- * @param path The directory where snapshots will be taken.
- * @param snapshotName The name of the snapshot
- * @return the snapshot path.
- * @throws IOException IO failure
- * @throws UnsupportedOperationException if the operation is unsupported
- */
- @Override
- public Path createSnapshot(Path path, String snapshotName)
- throws IOException {
- throw new UnsupportedOperationException(getClass().getSimpleName()
- + " doesn't support createSnapshot");
- }
-
- /**
- * Rename a snapshot.
- *
- * @param path The directory path where the snapshot was taken
- * @param snapshotOldName Old name of the snapshot
- * @param snapshotNewName New name of the snapshot
- * @throws IOException IO failure
- * @throws UnsupportedOperationException if the operation is unsupported
- * (default outcome).
- */
- @Override
- public void renameSnapshot(Path path, String snapshotOldName,
- String snapshotNewName) throws IOException {
- throw new UnsupportedOperationException(getClass().getSimpleName()
- + " doesn't support renameSnapshot");
- }
-
- /**
- * Delete a snapshot of a directory.
- *
- * @param path The directory that the to-be-deleted snapshot belongs to
- * @param snapshotName The name of the snapshot
- * @throws IOException IO failure
- * @throws UnsupportedOperationException if the operation is unsupported
- * (default outcome).
- */
- @Override
- public void deleteSnapshot(Path path, String snapshotName)
- throws IOException {
- throw new UnsupportedOperationException(getClass().getSimpleName()
- + " doesn't support deleteSnapshot");
- }
-
- /**
- * Modifies ACL entries of files and directories. This method can add new ACL
- * entries or modify the permissions on existing ACL entries. All existing
- * ACL entries that are not specified in this call are retained without
- * changes. (Modifications are merged into the current ACL.)
- *
- * @param path Path to modify
- * @param aclSpec List<AclEntry> describing modifications
- * @throws IOException if an ACL could not be modified
- * @throws UnsupportedOperationException if the operation is unsupported
- * (default outcome).
- */
- @Override
- public void modifyAclEntries(Path path, List<AclEntry> aclSpec)
-     throws IOException {
-   throw new UnsupportedOperationException(getClass().getSimpleName()
-       + " doesn't support modifyAclEntries");
- }
-
- /**
-  * Set an xattr of a file or directory.
-  * The name must be prefixed with the namespace followed by ".". For example,
-  * "user.attr".
-  *
-  * Refer to the HDFS extended attributes user documentation for details.
- *
- * @param path Path to modify
- * @param name xattr name.
- * @param value xattr value.
- * @param flag xattr set flag
- * @throws IOException IO failure
- * @throws UnsupportedOperationException if the operation is unsupported
- * (default outcome).
- */
- @Override
- public void setXAttr(Path path, String name, byte[] value,
-     EnumSet<XAttrSetFlag> flag) throws IOException {
-   throw new UnsupportedOperationException(getClass().getSimpleName()
-       + " doesn't support setXAttr");
- }
-
- /**
-  * Get an xattr name and value for a file or directory.
-  * The name must be prefixed with the namespace followed by ".". For example,
-  * "user.attr".
-  *
-  * Refer to the HDFS extended attributes user documentation for details.
- *
- * @param path Path to get extended attribute
- * @param name xattr name.
- * @return byte[] xattr value.
- * @throws IOException IO failure
- * @throws UnsupportedOperationException if the operation is unsupported
- * (default outcome).
- */
- @Override
- public byte[] getXAttr(Path path, String name) throws IOException {
- throw new UnsupportedOperationException(getClass().getSimpleName()
- + " doesn't support getXAttr");
- }
-
- /**
- * Get all of the xattr name/value pairs for a file or directory.
- * Only those xattrs which the logged-in user has permissions to view
- * are returned.
- *
- * Refer to the HDFS extended attributes user documentation for details.
- *
- * @param path Path to get extended attributes
- * @return Map describing the XAttrs of the file or directory
- * @throws IOException IO failure
- * @throws UnsupportedOperationException if the operation is unsupported
- * (default outcome).
- */
- @Override
- public Map<String, byte[]> getXAttrs(Path path) throws IOException {
-   throw new UnsupportedOperationException(getClass().getSimpleName()
-       + " doesn't support getXAttrs");
- }
-
- /**
-  * Get all of the xattrs name/value pairs for a file or directory.
-  * Only those xattrs which the logged-in user has permissions to view
-  * are returned.
-  *
-  * Refer to the HDFS extended attributes user documentation for details.
- *
- * @param path Path to get extended attributes
- * @param names XAttr names.
- * @return Map describing the XAttrs of the file or directory
- * @throws IOException IO failure
- * @throws UnsupportedOperationException if the operation is unsupported
- * (default outcome).
- */
- @Override
- public Map<String, byte[]> getXAttrs(Path path, List<String> names)
-     throws IOException {
-   throw new UnsupportedOperationException(getClass().getSimpleName()
-       + " doesn't support getXAttrs");
- }
-
- /**
-  * Get all of the xattr names for a file or directory.
-  * Only those xattr names which the logged-in user has permissions to view
-  * are returned.
-  *
-  * Refer to the HDFS extended attributes user documentation for details.
- *
- * @param path Path to get extended attributes
- * @return List{@literal <String>} of the XAttr names of the file or directory
- * @throws IOException IO failure
- * @throws UnsupportedOperationException if the operation is unsupported
- * (default outcome).
- */
- @Override
- public List<String> listXAttrs(Path path) throws IOException {
-   throw new UnsupportedOperationException(getClass().getSimpleName()
-       + " doesn't support listXAttrs");
- }
-
- /**
-  * Remove an xattr of a file or directory.
-  * The name must be prefixed with the namespace followed by ".". For example,
-  * "user.attr".
-  *
-  * Refer to the HDFS extended attributes user documentation for details.
- *
- * @param path Path to remove extended attribute
- * @param name xattr name
- * @throws IOException IO failure
- * @throws UnsupportedOperationException if the operation is unsupported
- * (default outcome).
- */
- @Override
- public void removeXAttr(Path path, String name) throws IOException {
- throw new UnsupportedOperationException(getClass().getSimpleName()
- + " doesn't support removeXAttr");
- }
-
-}
diff --git a/other/java/hdfs2/src/main/java/seaweed/hdfs/SeaweedFileSystemStore.java b/other/java/hdfs2/src/main/java/seaweed/hdfs/SeaweedFileSystemStore.java
deleted file mode 100644
index f65c1961b..000000000
--- a/other/java/hdfs2/src/main/java/seaweed/hdfs/SeaweedFileSystemStore.java
+++ /dev/null
@@ -1,291 +0,0 @@
-package seaweed.hdfs;
-
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.FSInputStream;
-import org.apache.hadoop.fs.FileStatus;
-import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.fs.permission.FsPermission;
-import org.apache.hadoop.security.UserGroupInformation;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-import seaweedfs.client.*;
-
-import java.io.FileNotFoundException;
-import java.io.IOException;
-import java.io.OutputStream;
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.List;
-
-import static seaweed.hdfs.SeaweedFileSystem.*;
-
-public class SeaweedFileSystemStore {
-
- private static final Logger LOG = LoggerFactory.getLogger(SeaweedFileSystemStore.class);
-
- private FilerClient filerClient;
- private Configuration conf;
-
- public SeaweedFileSystemStore(String host, int port, int grpcPort, String cn, Configuration conf) {
- filerClient = new FilerClient(host, port, grpcPort, cn);
- this.conf = conf;
- String volumeServerAccessMode = this.conf.get(FS_SEAWEED_VOLUME_SERVER_ACCESS, "direct");
- if (volumeServerAccessMode.equals("publicUrl")) {
- filerClient.setAccessVolumeServerByPublicUrl();
- } else if (volumeServerAccessMode.equals("filerProxy")) {
- filerClient.setAccessVolumeServerByFilerProxy();
- }
- }
-
- public void close() {
- try {
- this.filerClient.shutdown();
- } catch (InterruptedException e) {
- e.printStackTrace();
- }
- }
-
- public static String getParentDirectory(Path path) {
- return path.isRoot() ? "/" : path.getParent().toUri().getPath();
- }
-
- static int permissionToMode(FsPermission permission, boolean isDirectory) {
- int p = permission.toShort();
- if (isDirectory) {
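-            // set bit 31 (Go's os.ModeDir) so the filer records this entry as a directory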
- p = p | 1 << 31;
- }
- return p;
- }
-
- public boolean createDirectory(final Path path, UserGroupInformation currentUser,
- final FsPermission permission, final FsPermission umask) {
-
- LOG.debug("createDirectory path: {} permission: {} umask: {}",
- path,
- permission,
- umask);
-
- return filerClient.mkdirs(
- path.toUri().getPath(),
- permissionToMode(permission, true),
- currentUser.getUserName(),
- currentUser.getGroupNames()
- );
- }
-
- public FileStatus[] listEntries(final Path path) throws IOException {
- LOG.debug("listEntries path: {}", path);
-
- FileStatus pathStatus = getFileStatus(path);
-
- if (pathStatus == null) {
- return new FileStatus[0];
- }
-
- if (!pathStatus.isDirectory()) {
- return new FileStatus[]{pathStatus};
- }
-
-        List<FileStatus> fileStatuses = new ArrayList<FileStatus>();
diff --git a/other/java/hdfs2/src/main/java/seaweed/hdfs/SeaweedHadoopInputStream.java b/other/java/hdfs2/src/main/java/seaweed/hdfs/SeaweedHadoopInputStream.java
deleted file mode 100644
--- a/other/java/hdfs2/src/main/java/seaweed/hdfs/SeaweedHadoopInputStream.java
+++ /dev/null
- /**
-  * Return the size of the remaining available bytes
-  * if the size is less than or equal to {@link Integer#MAX_VALUE},
-  * otherwise, return {@link Integer#MAX_VALUE}.
-  *
- * This is to match the behavior of DFSInputStream.available(),
- * which some clients may rely on (HBase write-ahead log reading in
- * particular).
- */
- @Override
- public synchronized int available() throws IOException {
- return seaweedInputStream.available();
- }
-
- /**
- * Returns the length of the file that this stream refers to. Note that the length returned is the length
- * as of the time the Stream was opened. Specifically, if there have been subsequent appends to the file,
- * they won't be reflected in the returned length.
- *
- * @return length of the file.
- * @throws IOException if the stream is closed
- */
- public long length() throws IOException {
- return seaweedInputStream.length();
- }
-
- /**
- * Return the current offset from the start of the file
- *
- * @throws IOException throws {@link IOException} if there is an error
- */
- @Override
- public synchronized long getPos() throws IOException {
- return seaweedInputStream.getPos();
- }
-
- /**
- * Seeks a different copy of the data. Returns true if
- * found a new source, false otherwise.
- *
- * @throws IOException throws {@link IOException} if there is an error
- */
- @Override
- public boolean seekToNewSource(long l) throws IOException {
- return false;
- }
-
- @Override
- public synchronized void close() throws IOException {
- seaweedInputStream.close();
- }
-
- /**
- * Not supported by this stream. Throws {@link UnsupportedOperationException}
- *
- * @param readlimit ignored
- */
- @Override
- public synchronized void mark(int readlimit) {
- throw new UnsupportedOperationException("mark()/reset() not supported on this stream");
- }
-
- /**
- * Not supported by this stream. Throws {@link UnsupportedOperationException}
- */
- @Override
- public synchronized void reset() throws IOException {
- throw new UnsupportedOperationException("mark()/reset() not supported on this stream");
- }
-
- /**
- * Gets whether mark and reset are supported by this stream. Always returns false.
- *
- * @return always {@code false}
- */
- @Override
- public boolean markSupported() {
- return false;
- }
-}
diff --git a/other/java/hdfs2/src/main/java/seaweed/hdfs/SeaweedHadoopOutputStream.java b/other/java/hdfs2/src/main/java/seaweed/hdfs/SeaweedHadoopOutputStream.java
deleted file mode 100644
index da5b56bbc..000000000
--- a/other/java/hdfs2/src/main/java/seaweed/hdfs/SeaweedHadoopOutputStream.java
+++ /dev/null
@@ -1,16 +0,0 @@
-package seaweed.hdfs;
-
-// adapted from org.apache.hadoop.fs.azurebfs.services.AbfsOutputStream
-
-import seaweedfs.client.FilerClient;
-import seaweedfs.client.FilerProto;
-import seaweedfs.client.SeaweedOutputStream;
-
-public class SeaweedHadoopOutputStream extends SeaweedOutputStream {
-
- public SeaweedHadoopOutputStream(FilerClient filerClient, final String path, FilerProto.Entry.Builder entry,
- final long position, final int bufferSize, final String replication) {
- super(filerClient, path, entry, position, bufferSize, replication);
- }
-
-}
diff --git a/other/java/hdfs2/src/test/java/seaweed/hdfs/SeaweedFileSystemConfigTest.java b/other/java/hdfs2/src/test/java/seaweed/hdfs/SeaweedFileSystemConfigTest.java
deleted file mode 100644
index bcc08b8e2..000000000
--- a/other/java/hdfs2/src/test/java/seaweed/hdfs/SeaweedFileSystemConfigTest.java
+++ /dev/null
@@ -1,90 +0,0 @@
-package seaweed.hdfs;
-
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.Path;
-import org.junit.Before;
-import org.junit.Test;
-
-import static org.junit.Assert.*;
-
-/**
- * Unit tests for SeaweedFileSystem configuration that don't require a running SeaweedFS instance.
- *
- * These tests verify basic properties and constants.
- */
-public class SeaweedFileSystemConfigTest {
-
- private SeaweedFileSystem fs;
- private Configuration conf;
-
- @Before
- public void setUp() {
- fs = new SeaweedFileSystem();
- conf = new Configuration();
- }
-
- @Test
- public void testScheme() {
- assertEquals("seaweedfs", fs.getScheme());
- }
-
- @Test
- public void testConstants() {
- // Test that constants are defined correctly
- assertEquals("fs.seaweed.filer.host", SeaweedFileSystem.FS_SEAWEED_FILER_HOST);
- assertEquals("fs.seaweed.filer.port", SeaweedFileSystem.FS_SEAWEED_FILER_PORT);
- assertEquals("fs.seaweed.filer.port.grpc", SeaweedFileSystem.FS_SEAWEED_FILER_PORT_GRPC);
- assertEquals(8888, SeaweedFileSystem.FS_SEAWEED_DEFAULT_PORT);
- assertEquals("fs.seaweed.buffer.size", SeaweedFileSystem.FS_SEAWEED_BUFFER_SIZE);
- assertEquals(4 * 1024 * 1024, SeaweedFileSystem.FS_SEAWEED_DEFAULT_BUFFER_SIZE);
- assertEquals("fs.seaweed.replication", SeaweedFileSystem.FS_SEAWEED_REPLICATION);
- assertEquals("fs.seaweed.volume.server.access", SeaweedFileSystem.FS_SEAWEED_VOLUME_SERVER_ACCESS);
- assertEquals("fs.seaweed.filer.cn", SeaweedFileSystem.FS_SEAWEED_FILER_CN);
- }
-
- @Test
- public void testWorkingDirectoryPathOperations() {
- // Test path operations that don't require initialization
- Path testPath = new Path("/test/path");
- assertTrue("Path should be absolute", testPath.isAbsolute());
- assertEquals("/test/path", testPath.toUri().getPath());
-
- Path childPath = new Path(testPath, "child");
- assertEquals("/test/path/child", childPath.toUri().getPath());
- }
-
- @Test
- public void testConfigurationProperties() {
- // Test that configuration can be set and read
- conf.set(SeaweedFileSystem.FS_SEAWEED_FILER_HOST, "testhost");
- assertEquals("testhost", conf.get(SeaweedFileSystem.FS_SEAWEED_FILER_HOST));
-
- conf.setInt(SeaweedFileSystem.FS_SEAWEED_FILER_PORT, 9999);
- assertEquals(9999, conf.getInt(SeaweedFileSystem.FS_SEAWEED_FILER_PORT, 0));
-
- conf.setInt(SeaweedFileSystem.FS_SEAWEED_BUFFER_SIZE, 8 * 1024 * 1024);
- assertEquals(8 * 1024 * 1024, conf.getInt(SeaweedFileSystem.FS_SEAWEED_BUFFER_SIZE, 0));
-
- conf.set(SeaweedFileSystem.FS_SEAWEED_REPLICATION, "001");
- assertEquals("001", conf.get(SeaweedFileSystem.FS_SEAWEED_REPLICATION));
-
- conf.set(SeaweedFileSystem.FS_SEAWEED_VOLUME_SERVER_ACCESS, "publicUrl");
- assertEquals("publicUrl", conf.get(SeaweedFileSystem.FS_SEAWEED_VOLUME_SERVER_ACCESS));
-
- conf.set(SeaweedFileSystem.FS_SEAWEED_FILER_CN, "test-cn");
- assertEquals("test-cn", conf.get(SeaweedFileSystem.FS_SEAWEED_FILER_CN));
- }
-
- @Test
- public void testDefaultBufferSize() {
- // Test default buffer size constant
- int expected = 4 * 1024 * 1024; // 4MB
- assertEquals(expected, SeaweedFileSystem.FS_SEAWEED_DEFAULT_BUFFER_SIZE);
- }
-
- @Test
- public void testDefaultPort() {
- // Test default port constant
- assertEquals(8888, SeaweedFileSystem.FS_SEAWEED_DEFAULT_PORT);
- }
-}
diff --git a/other/java/hdfs2/src/test/java/seaweed/hdfs/SeaweedFileSystemTest.java b/other/java/hdfs2/src/test/java/seaweed/hdfs/SeaweedFileSystemTest.java
deleted file mode 100644
index ec43b3481..000000000
--- a/other/java/hdfs2/src/test/java/seaweed/hdfs/SeaweedFileSystemTest.java
+++ /dev/null
@@ -1,379 +0,0 @@
-package seaweed.hdfs;
-
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.FSDataInputStream;
-import org.apache.hadoop.fs.FSDataOutputStream;
-import org.apache.hadoop.fs.FileStatus;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.fs.permission.FsPermission;
-import org.junit.After;
-import org.junit.Before;
-import org.junit.Test;
-
-import java.io.IOException;
-import java.net.URI;
-
-import static org.junit.Assert.*;
-
-/**
- * Unit tests for SeaweedFileSystem.
- *
- * These tests verify basic FileSystem operations against a SeaweedFS backend.
- * Note: These tests require a running SeaweedFS filer instance.
- *
- * To run tests, ensure SeaweedFS is running with default ports:
- * - Filer HTTP: 8888
- * - Filer gRPC: 18888
- *
- * Set environment variable SEAWEEDFS_TEST_ENABLED=true to enable these tests.
- */
-public class SeaweedFileSystemTest {
-
- private SeaweedFileSystem fs;
- private Configuration conf;
- private static final String TEST_ROOT = "/test-hdfs2";
- private static final boolean TESTS_ENABLED =
- "true".equalsIgnoreCase(System.getenv("SEAWEEDFS_TEST_ENABLED"));
-
- @Before
- public void setUp() throws Exception {
- if (!TESTS_ENABLED) {
- return;
- }
-
- conf = new Configuration();
- conf.set("fs.seaweed.filer.host", "localhost");
- conf.setInt("fs.seaweed.filer.port", 8888);
- conf.setInt("fs.seaweed.filer.port.grpc", 18888);
-
- fs = new SeaweedFileSystem();
- URI uri = new URI("seaweedfs://localhost:8888/");
- fs.initialize(uri, conf);
-
- // Clean up any existing test directory
- Path testPath = new Path(TEST_ROOT);
- if (fs.exists(testPath)) {
- fs.delete(testPath, true);
- }
- }
-
- @After
- public void tearDown() throws Exception {
- if (!TESTS_ENABLED || fs == null) {
- return;
- }
-
- // Clean up test directory
- Path testPath = new Path(TEST_ROOT);
- if (fs.exists(testPath)) {
- fs.delete(testPath, true);
- }
-
- fs.close();
- }
-
- @Test
- public void testInitialization() throws Exception {
- if (!TESTS_ENABLED) {
- System.out.println("Skipping test - SEAWEEDFS_TEST_ENABLED not set");
- return;
- }
-
- assertNotNull(fs);
- assertEquals("seaweedfs", fs.getScheme());
- assertNotNull(fs.getUri());
- assertEquals("/", fs.getWorkingDirectory().toUri().getPath());
- }
-
- @Test
- public void testMkdirs() throws Exception {
- if (!TESTS_ENABLED) {
- System.out.println("Skipping test - SEAWEEDFS_TEST_ENABLED not set");
- return;
- }
-
- Path testDir = new Path(TEST_ROOT + "/testdir");
- assertTrue("Failed to create directory", fs.mkdirs(testDir));
- assertTrue("Directory should exist", fs.exists(testDir));
-
- FileStatus status = fs.getFileStatus(testDir);
- assertTrue("Path should be a directory", status.isDirectory());
- }
-
- @Test
- public void testCreateAndReadFile() throws Exception {
- if (!TESTS_ENABLED) {
- System.out.println("Skipping test - SEAWEEDFS_TEST_ENABLED not set");
- return;
- }
-
- Path testFile = new Path(TEST_ROOT + "/testfile.txt");
- String testContent = "Hello, SeaweedFS!";
-
- // Create and write to file
- FSDataOutputStream out = fs.create(testFile, FsPermission.getDefault(),
- false, 4096, (short) 1, 4 * 1024 * 1024, null);
- assertNotNull("Output stream should not be null", out);
- out.write(testContent.getBytes());
- out.close();
-
- // Verify file exists
- assertTrue("File should exist", fs.exists(testFile));
-
- // Read and verify content
- FSDataInputStream in = fs.open(testFile, 4096);
- assertNotNull("Input stream should not be null", in);
- byte[] buffer = new byte[testContent.length()];
- int bytesRead = in.read(buffer);
- in.close();
-
- assertEquals("Should read all bytes", testContent.length(), bytesRead);
- assertEquals("Content should match", testContent, new String(buffer));
- }
-
- @Test
- public void testFileStatus() throws Exception {
- if (!TESTS_ENABLED) {
- System.out.println("Skipping test - SEAWEEDFS_TEST_ENABLED not set");
- return;
- }
-
- Path testFile = new Path(TEST_ROOT + "/statustest.txt");
- String content = "test content";
-
- FSDataOutputStream out = fs.create(testFile);
- out.write(content.getBytes());
- out.close();
-
- FileStatus status = fs.getFileStatus(testFile);
- assertNotNull("FileStatus should not be null", status);
- assertFalse("Should not be a directory", status.isDirectory());
- assertTrue("Should be a file", status.isFile());
- assertEquals("File length should match", content.length(), status.getLen());
- assertNotNull("Path should not be null", status.getPath());
- }
-
- @Test
- public void testListStatus() throws Exception {
- if (!TESTS_ENABLED) {
- System.out.println("Skipping test - SEAWEEDFS_TEST_ENABLED not set");
- return;
- }
-
- Path testDir = new Path(TEST_ROOT + "/listtest");
- fs.mkdirs(testDir);
-
- // Create multiple files
- for (int i = 0; i < 3; i++) {
- Path file = new Path(testDir, "file" + i + ".txt");
- FSDataOutputStream out = fs.create(file);
- out.write(("content" + i).getBytes());
- out.close();
- }
-
- FileStatus[] statuses = fs.listStatus(testDir);
- assertNotNull("List should not be null", statuses);
- assertEquals("Should have 3 files", 3, statuses.length);
- }
-
- @Test
- public void testRename() throws Exception {
- if (!TESTS_ENABLED) {
- System.out.println("Skipping test - SEAWEEDFS_TEST_ENABLED not set");
- return;
- }
-
- Path srcFile = new Path(TEST_ROOT + "/source.txt");
- Path dstFile = new Path(TEST_ROOT + "/destination.txt");
- String content = "rename test";
-
- // Create source file
- FSDataOutputStream out = fs.create(srcFile);
- out.write(content.getBytes());
- out.close();
-
- assertTrue("Source file should exist", fs.exists(srcFile));
-
- // Rename
- assertTrue("Rename should succeed", fs.rename(srcFile, dstFile));
-
- // Verify
- assertFalse("Source file should not exist", fs.exists(srcFile));
- assertTrue("Destination file should exist", fs.exists(dstFile));
-
- // Verify content preserved
- FSDataInputStream in = fs.open(dstFile);
- byte[] buffer = new byte[content.length()];
- in.read(buffer);
- in.close();
- assertEquals("Content should be preserved", content, new String(buffer));
- }
-
- @Test
- public void testDelete() throws Exception {
- if (!TESTS_ENABLED) {
- System.out.println("Skipping test - SEAWEEDFS_TEST_ENABLED not set");
- return;
- }
-
- Path testFile = new Path(TEST_ROOT + "/deletetest.txt");
-
- // Create file
- FSDataOutputStream out = fs.create(testFile);
- out.write("delete me".getBytes());
- out.close();
-
- assertTrue("File should exist before delete", fs.exists(testFile));
-
- // Delete
- assertTrue("Delete should succeed", fs.delete(testFile, false));
- assertFalse("File should not exist after delete", fs.exists(testFile));
- }
-
- @Test
- public void testDeleteDirectory() throws Exception {
- if (!TESTS_ENABLED) {
- System.out.println("Skipping test - SEAWEEDFS_TEST_ENABLED not set");
- return;
- }
-
- Path testDir = new Path(TEST_ROOT + "/deletedir");
- Path testFile = new Path(testDir, "file.txt");
-
- // Create directory with file
- fs.mkdirs(testDir);
- FSDataOutputStream out = fs.create(testFile);
- out.write("content".getBytes());
- out.close();
-
- assertTrue("Directory should exist", fs.exists(testDir));
- assertTrue("File should exist", fs.exists(testFile));
-
- // Recursive delete
- assertTrue("Recursive delete should succeed", fs.delete(testDir, true));
- assertFalse("Directory should not exist after delete", fs.exists(testDir));
- assertFalse("File should not exist after delete", fs.exists(testFile));
- }
-
- @Test
- public void testAppend() throws Exception {
- if (!TESTS_ENABLED) {
- System.out.println("Skipping test - SEAWEEDFS_TEST_ENABLED not set");
- return;
- }
-
- Path testFile = new Path(TEST_ROOT + "/appendtest.txt");
- String initialContent = "initial";
- String appendContent = " appended";
-
- // Create initial file
- FSDataOutputStream out = fs.create(testFile);
- out.write(initialContent.getBytes());
- out.close();
-
- // Append
- FSDataOutputStream appendOut = fs.append(testFile, 4096, null);
- assertNotNull("Append stream should not be null", appendOut);
- appendOut.write(appendContent.getBytes());
- appendOut.close();
-
- // Verify combined content
- FSDataInputStream in = fs.open(testFile);
- byte[] buffer = new byte[initialContent.length() + appendContent.length()];
- int bytesRead = in.read(buffer);
- in.close();
-
- String expected = initialContent + appendContent;
- assertEquals("Should read all bytes", expected.length(), bytesRead);
- assertEquals("Content should match", expected, new String(buffer));
- }
-
- @Test
- public void testSetWorkingDirectory() throws Exception {
- if (!TESTS_ENABLED) {
- System.out.println("Skipping test - SEAWEEDFS_TEST_ENABLED not set");
- return;
- }
-
- Path originalWd = fs.getWorkingDirectory();
- assertEquals("Original working directory should be /", "/", originalWd.toUri().getPath());
-
- Path newWd = new Path(TEST_ROOT);
- fs.mkdirs(newWd);
- fs.setWorkingDirectory(newWd);
-
- Path currentWd = fs.getWorkingDirectory();
- assertTrue("Working directory should be updated",
- currentWd.toUri().getPath().contains(TEST_ROOT));
- }
-
- @Test
- public void testSetPermission() throws Exception {
- if (!TESTS_ENABLED) {
- System.out.println("Skipping test - SEAWEEDFS_TEST_ENABLED not set");
- return;
- }
-
- Path testFile = new Path(TEST_ROOT + "/permtest.txt");
-
- // Create file
- FSDataOutputStream out = fs.create(testFile);
- out.write("permission test".getBytes());
- out.close();
-
- // Set permission
- FsPermission newPerm = new FsPermission((short) 0644);
- fs.setPermission(testFile, newPerm);
-
- FileStatus status = fs.getFileStatus(testFile);
- assertNotNull("Permission should not be null", status.getPermission());
- }
-
- @Test
- public void testSetOwner() throws Exception {
- if (!TESTS_ENABLED) {
- System.out.println("Skipping test - SEAWEEDFS_TEST_ENABLED not set");
- return;
- }
-
- Path testFile = new Path(TEST_ROOT + "/ownertest.txt");
-
- // Create file
- FSDataOutputStream out = fs.create(testFile);
- out.write("owner test".getBytes());
- out.close();
-
- // Set owner - the call may succeed even if ownership changes are not fully implemented
- fs.setOwner(testFile, "testuser", "testgroup");
-
- // Just verify the call doesn't throw an exception
- FileStatus status = fs.getFileStatus(testFile);
- assertNotNull("FileStatus should not be null", status);
- }
-
- @Test
- public void testRenameToExistingDirectory() throws Exception {
- if (!TESTS_ENABLED) {
- System.out.println("Skipping test - SEAWEEDFS_TEST_ENABLED not set");
- return;
- }
-
- Path srcFile = new Path(TEST_ROOT + "/movefile.txt");
- Path dstDir = new Path(TEST_ROOT + "/movedir");
-
- // Create source file and destination directory
- FSDataOutputStream out = fs.create(srcFile);
- out.write("move test".getBytes());
- out.close();
- fs.mkdirs(dstDir);
-
- // Rename file to existing directory (should move file into directory)
- assertTrue("Rename to directory should succeed", fs.rename(srcFile, dstDir));
-
- // File should be moved into the directory
- Path expectedLocation = new Path(dstDir, srcFile.getName());
- assertTrue("File should exist in destination directory", fs.exists(expectedLocation));
- assertFalse("Source file should not exist", fs.exists(srcFile));
- }
-}
-
diff --git a/other/java/hdfs3/src/main/java/seaweed/hdfs/SeaweedAtomicOutputStream.java b/other/java/hdfs3/src/main/java/seaweed/hdfs/SeaweedAtomicOutputStream.java
deleted file mode 100644
index ed42af0a9..000000000
--- a/other/java/hdfs3/src/main/java/seaweed/hdfs/SeaweedAtomicOutputStream.java
+++ /dev/null
@@ -1,109 +0,0 @@
-package seaweed.hdfs;
-
-import org.apache.hadoop.fs.Syncable;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-import seaweedfs.client.FilerClient;
-import seaweedfs.client.FilerProto;
-
-import java.io.ByteArrayOutputStream;
-import java.io.IOException;
-
-/**
- * Atomic output stream for Parquet files.
- *
- * Buffers all writes in memory and writes atomically on close().
- * This ensures that getPos() always returns accurate positions that match
- * the final file layout, which is required for Parquet's footer metadata.
- */
-public class SeaweedAtomicOutputStream extends SeaweedHadoopOutputStream implements Syncable {
-
- private static final Logger LOG = LoggerFactory.getLogger(SeaweedAtomicOutputStream.class);
-
- private final ByteArrayOutputStream memoryBuffer;
- private final String filePath;
- private boolean closed = false;
-
- public SeaweedAtomicOutputStream(FilerClient filerClient, String path, FilerProto.Entry.Builder entry,
- long position, int maxBufferSize, String replication) {
- super(filerClient, path, entry, position, maxBufferSize, replication);
- this.filePath = path;
- this.memoryBuffer = new ByteArrayOutputStream(maxBufferSize);
- LOG.info("[ATOMIC] Created atomic output stream for: {} (maxBuffer={})", path, maxBufferSize);
- }
-
- @Override
- public synchronized void write(int b) throws IOException {
- if (closed) {
- throw new IOException("Stream is closed");
- }
- memoryBuffer.write(b);
- }
-
- @Override
- public synchronized void write(byte[] b, int off, int len) throws IOException {
- if (closed) {
- throw new IOException("Stream is closed");
- }
- memoryBuffer.write(b, off, len);
- }
-
- @Override
- public synchronized long getPos() throws IOException {
- // Return the current size of the memory buffer
- // This is always accurate since nothing is flushed until close()
- long pos = memoryBuffer.size();
-
- // Log getPos() calls around the problematic positions
- if (pos >= 470 && pos <= 476) {
- LOG.error("[ATOMIC-GETPOS] getPos() returning pos={}", pos);
- }
-
- return pos;
- }
-
- @Override
- public synchronized void flush() throws IOException {
- // No-op for atomic writes - everything is flushed on close()
- LOG.debug("[ATOMIC] flush() called (no-op for atomic writes)");
- }
-
- @Override
- public synchronized void hsync() throws IOException {
- // No-op for atomic writes
- LOG.debug("[ATOMIC] hsync() called (no-op for atomic writes)");
- }
-
- @Override
- public synchronized void hflush() throws IOException {
- // No-op for atomic writes
- LOG.debug("[ATOMIC] hflush() called (no-op for atomic writes)");
- }
-
- @Override
- public synchronized void close() throws IOException {
- if (closed) {
- return;
- }
-
- try {
- byte[] data = memoryBuffer.toByteArray();
- int size = data.length;
-
- LOG.info("[ATOMIC] Closing atomic stream: {} ({} bytes buffered)", filePath, size);
-
- if (size > 0) {
- // Write all data at once using the parent's write method
- super.write(data, 0, size);
- }
-
- // Now close the parent stream which will flush and write metadata
- super.close();
-
- LOG.info("[ATOMIC] Successfully wrote {} bytes atomically to: {}", size, filePath);
- } finally {
- closed = true;
- memoryBuffer.reset();
- }
- }
-}
diff --git a/test/java/spark/COMMIT_SUMMARY.md b/test/java/spark/COMMIT_SUMMARY.md
deleted file mode 100644
index a8b405f55..000000000
--- a/test/java/spark/COMMIT_SUMMARY.md
+++ /dev/null
@@ -1,132 +0,0 @@
-# Fix Parquet EOF Error by Removing ByteBufferReadable Interface
-
-## Summary
-
-Fixed `EOFException: Reached the end of stream. Still have: 78 bytes left` error when reading Parquet files with complex schemas in Spark.
-
-## Root Cause
-
-`SeaweedHadoopInputStream` declared the `ByteBufferReadable` interface but did not implement it correctly, which caused an incorrect buffering strategy and position-tracking issues during positioned reads (critical for Parquet).
-
-## Solution
-
-Removed `ByteBufferReadable` interface from `SeaweedHadoopInputStream` to match Hadoop's `RawLocalFileSystem` pattern, which uses `BufferedFSInputStream` for proper position tracking.
-
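-For illustration, a minimal sketch of the buffered `open()` shape after the fix. The `openFileForRead` helper, the `statistics` field, and the `4 * bufferSize` sizing are assumptions for this sketch, not the verbatim patch:
-
-```java
-import java.io.IOException;
-
-import org.apache.hadoop.fs.BufferedFSInputStream;
-import org.apache.hadoop.fs.FSDataInputStream;
-import org.apache.hadoop.fs.FSInputStream;
-import org.apache.hadoop.fs.Path;
-
-public FSDataInputStream open(Path path, int bufferSize) throws IOException {
-    // hypothetical store call returning the raw, seekable SeaweedFS stream
-    FSInputStream raw = seaweedFileSystemStore.openFileForRead(path, statistics);
-    // No ByteBufferReadable wrapper: buffer with BufferedFSInputStream, as
-    // RawLocalFileSystem does, so positioned reads restore the stream position.
-    return new FSDataInputStream(new BufferedFSInputStream(raw, 4 * bufferSize));
-}
-```
-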
-## Changes
-
-### Core Fix
-
-1. **`SeaweedHadoopInputStream.java`**:
- - Removed `ByteBufferReadable` interface
- - Removed `read(ByteBuffer)` method
- - Cleaned up debug logging
- - Added documentation explaining the design choice
-
-2. **`SeaweedFileSystem.java`**:
- - Changed from `BufferedByteBufferReadableInputStream` to `BufferedFSInputStream`
- - Applies to all streams uniformly
- - Cleaned up debug logging
-
-3. **`SeaweedInputStream.java`**:
- - Cleaned up debug logging
-
-### Cleanup
-
-4. **Deleted debug-only files**:
- - `DebugDualInputStream.java`
- - `DebugDualInputStreamWrapper.java`
- - `DebugDualOutputStream.java`
- - `DebugMode.java`
- - `LocalOnlyInputStream.java`
- - `ShadowComparisonStream.java`
-
-5. **Reverted**:
- - `SeaweedFileSystemStore.java` (removed all debug mode logic)
-
-6. **Cleaned**:
- - `docker-compose.yml` (removed debug environment variables)
- - All `.md` documentation files in `test/java/spark/`
-
-## Testing
-
-All Spark integration tests pass:
-- ✅ `SparkSQLTest.testCreateTableAndQuery` (complex 4-column schema)
-- ✅ `SimpleOneColumnTest` (basic operations)
-- ✅ All other Spark integration tests
-
-## Technical Details
-
-### Why This Works
-
-Hadoop's `RawLocalFileSystem` uses the exact same pattern:
-- Does NOT implement `ByteBufferReadable`
-- Uses `BufferedFSInputStream` for buffering
-- Properly handles positioned reads with automatic position restoration
-
-### Position Tracking
-
-`BufferedFSInputStream` implements positioned reads correctly:
-```java
-public int read(long position, byte[] buffer, int offset, int length)
-    throws IOException {
- long oldPos = getPos();
- try {
- seek(position);
- return read(buffer, offset, length);
- } finally {
- seek(oldPos); // Restores position!
- }
-}
-```
-
-This ensures buffered reads don't permanently change the stream position, which is critical for Parquet's random access pattern.
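-
-As a hedged usage example (the path and the 8-byte tail read are illustrative of Parquet's footer access, not taken from the test suite):
-
-```java
-Path p = new Path("/warehouse/t/part-00000.parquet");  // made-up path
-FSDataInputStream in = fs.open(p);
-long fileLength = fs.getFileStatus(p).getLen();
-long before = in.getPos();
-byte[] tail = new byte[8];                   // Parquet footer length + magic bytes
-in.readFully(fileLength - 8, tail, 0, 8);    // positioned read near EOF
-assert in.getPos() == before;                // position restored by the buffer layer
-in.close();
-```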
-
-### Performance Impact
-
-Minimal to none:
-- Network latency dominates for remote storage
-- Buffering is still active (4x buffer size)
-- Extra byte[] copy is negligible compared to network I/O
-
-## Commit Message
-
-```
-Fix Parquet EOF error by removing ByteBufferReadable interface
-
-SeaweedHadoopInputStream incorrectly declared ByteBufferReadable interface
-without proper implementation, causing position tracking issues during
-positioned reads. This resulted in "78 bytes left" EOF errors when reading
-Parquet files with complex schemas in Spark.
-
-Solution: Remove ByteBufferReadable and use BufferedFSInputStream (matching
-Hadoop's RawLocalFileSystem pattern) which properly handles position
-restoration for positioned reads.
-
-Changes:
-- Remove ByteBufferReadable interface from SeaweedHadoopInputStream
-- Change SeaweedFileSystem to use BufferedFSInputStream for all streams
-- Clean up debug logging
-- Delete debug-only classes and files
-
-Tested: All Spark integration tests pass
-```
-
-## Files Changed
-
-### Modified
-- `other/java/hdfs3/src/main/java/seaweed/hdfs/SeaweedHadoopInputStream.java`
-- `other/java/hdfs3/src/main/java/seaweed/hdfs/SeaweedFileSystem.java`
-- `other/java/client/src/main/java/seaweedfs/client/SeaweedInputStream.java`
-- `test/java/spark/docker-compose.yml`
-
-### Reverted
-- `other/java/hdfs3/src/main/java/seaweed/hdfs/SeaweedFileSystemStore.java`
-
-### Deleted
-- `other/java/hdfs3/src/main/java/seaweed/hdfs/DebugDualInputStream.java`
-- `other/java/hdfs3/src/main/java/seaweed/hdfs/DebugDualInputStreamWrapper.java`
-- `other/java/hdfs3/src/main/java/seaweed/hdfs/DebugDualOutputStream.java`
-- `other/java/hdfs3/src/main/java/seaweed/hdfs/DebugMode.java`
-- `other/java/hdfs3/src/main/java/seaweed/hdfs/LocalOnlyInputStream.java`
-- `other/java/hdfs3/src/main/java/seaweed/hdfs/ShadowComparisonStream.java`
-- All `.md` files in `test/java/spark/` (debug documentation)
-