Replace ssh.InsecureIgnoreHostKey() with ssh.FixedHostKey(), which verifies
that the server's host key matches the known test key we generated.
This addresses CodeQL warning go/insecure-hostkeycallback.
Also updates go.mod to specify go 1.24.0 explicitly.
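A minimal sketch of the pinned host-key callback described above, assuming the generated test host key is available as an ssh.Signer (the helper name and package are illustrative):

    package sftptest // hypothetical test helper package

    import "golang.org/x/crypto/ssh"

    // pinnedHostKeyCallback accepts only the known test host key, unlike
    // ssh.InsecureIgnoreHostKey, which accepts any key the server presents.
    func pinnedHostKeyCallback(hostSigner ssh.Signer) ssh.HostKeyCallback {
        return ssh.FixedHostKey(hostSigner.PublicKey())
    }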
Fix path joining for client paths that start with '/': joining the home
directory '/sftp/user' with '/file' previously resolved to '/file' instead of
'/sftp/user/file'. Now we strip the leading '/' before joining.
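A sketch of the join with the leading slash stripped (the helper name is hypothetical, not the actual code):

    package sftpd // illustrative placement

    import (
        "path"
        "strings"
    )

    // joinUnderHome joins a client-supplied path under the user's home directory,
    // stripping the leading '/' so the result stays under homeDir.
    func joinUnderHome(homeDir, clientPath string) string {
        return path.Join(homeDir, strings.TrimPrefix(clientPath, "/"))
    }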
Test improvements:
- Update go.mod to Go 1.24
- Fix weed binary discovery to prefer local build over PATH
- Add stabilization delay after service startup
- All 8 SFTP integration tests pass locally
Add GitHub Actions workflow for SFTP tests:
- Runs on push/PR affecting sftpd code or tests
- Tests HomeDir path translation, file ops, directory ops
- Covers issue #7470 fix verification
Add comprehensive integration tests for the SFTP server including:
- HomeDir path translation tests (verifies fix for issue #7470)
- Basic file upload/download operations
- Directory operations (mkdir, rmdir, list)
- Large file handling (1MB test)
- File rename operations
- Stat/Lstat operations
- Path edge cases (trailing slashes, .., unicode filenames)
- Admin root access verification
The test framework starts a complete SeaweedFS cluster with:
- Master server
- Volume server
- Filer server
- SFTP server with test user credentials
Test users are configured in testdata/userstore.json:
- admin: HomeDir=/ with full access
- testuser: HomeDir=/sftp/testuser with access to home
- readonly: HomeDir=/public with read-only access
When users have a non-root HomeDir (e.g., '/sftp/user'), their SFTP
operations should be relative to that directory. Previously, when a
user uploaded to '/' via SFTP, the path was not translated to their
home directory, causing 'permission denied for / for permission write'.
This fix adds a toAbsolutePath() method that implements chroot-like
behavior where the user's HomeDir becomes their root. All file and
directory operations now translate paths through this method.
Example: User with HomeDir='/sftp/user' uploading to '/' now correctly
maps to '/sftp/user'.
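A minimal sketch of the chroot-like mapping, assuming a user struct with a HomeDir field; it mirrors the described behavior of toAbsolutePath(), not its exact code:

    package sftpd // illustrative; receiver and field names are assumptions

    import (
        "path"
        "strings"
    )

    type user struct {
        HomeDir string // e.g. "/sftp/user"
    }

    // toAbsolutePath treats the user's HomeDir as the root: "/" maps to HomeDir,
    // "/file" maps to HomeDir/file, and ".." cannot escape above HomeDir.
    func (u *user) toAbsolutePath(clientPath string) string {
        cleaned := path.Clean("/" + strings.TrimPrefix(clientPath, "/"))
        if u.HomeDir == "" || u.HomeDir == "/" {
            return cleaned
        }
        return path.Join(u.HomeDir, cleaned)
    }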
Fixes: https://github.com/seaweedfs/seaweedfs/issues/7470
This fixes issue #6823 where a single volume server shutdown would cause
other healthy volume servers to fail their health checks and get restarted
by Kubernetes, causing a cascading failure.
Previously, the healthz handler checked if all replicated volumes could
reach their remote replicas via GetWritableRemoteReplications(). When a
volume server went down, the master would remove it from the volume
location list. Other volume servers would then fail their healthz checks
because they couldn't find all required replicas, causing Kubernetes to
restart them.
The healthz endpoint now only checks local conditions:
1. Is the server shutting down?
2. Is the server heartbeating with the master?
This follows the principle that a health check should only verify the
health of THIS server, not the overall cluster state.
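A sketch of a local-only health handler built on those two conditions; the probe functions are hypothetical stand-ins for the server's internal state:

    package healthcheck // standalone sketch, not the actual handler

    import "net/http"

    func healthzHandler(isShuttingDown, isHeartbeatingWithMaster func() bool) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            // Only local conditions are consulted; replica reachability
            // across the cluster is intentionally not part of the check.
            if isShuttingDown() || !isHeartbeatingWithMaster() {
                w.WriteHeader(http.StatusServiceUnavailable)
                return
            }
            w.WriteHeader(http.StatusOK)
        }
    }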
Fixes #6823
* pb: add id field to Heartbeat message for stable volume server identification
This adds an 'id' field to the Heartbeat protobuf message that allows
volume servers to identify themselves independently of their IP:port address.
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
* storage: add Id field to Store struct
Add Id field to Store struct and include it in CollectHeartbeat().
The Id field provides a stable volume server identity independent of IP:port.
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
* topology: support id-based DataNode identification
Update GetOrCreateDataNode to accept an id parameter for stable node
identification. When id is provided, the DataNode can maintain its identity
even when its IP address changes (e.g., in Kubernetes pod reschedules).
For backward compatibility:
- If id is provided, use it as the node ID
- If id is empty, fall back to ip:port
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
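The fallback itself is small; a sketch of the intended id handling (not the real GetOrCreateDataNode signature):

    package topology // illustrative only

    import "fmt"

    // dataNodeId picks the stable node id: the explicit id when provided,
    // otherwise the legacy ip:port form.
    func dataNodeId(id, ip string, port int) string {
        if id != "" {
            return id // survives IP changes, e.g. Kubernetes pod reschedules
        }
        return fmt.Sprintf("%s:%d", ip, port)
    }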
* volume: add -id flag for stable volume server identity
Add -id command line flag to volume server that allows specifying a stable
identifier independent of the IP address. This is useful for Kubernetes
deployments with hostPath volumes where pods can be rescheduled to different
nodes while the persisted data remains on the original node.
Usage: weed volume -id=node-1 -ip=10.0.0.1 ...
If -id is not specified, it defaults to ip:port for backward compatibility.
Fixes https://github.com/seaweedfs/seaweedfs/issues/7487
* server: add -volume.id flag to weed server command
Support the -volume.id flag in the all-in-one 'weed server' command,
consistent with the standalone 'weed volume' command.
Usage: weed server -volume.id=node-1 ...
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
* topology: add test for id-based DataNode identification
Test the key scenarios:
1. Create DataNode with explicit id
2. Same id with different IP returns same DataNode (K8s reschedule)
3. IP/PublicUrl are updated when node reconnects with new address
4. Different id creates new DataNode
5. Empty id falls back to ip:port (backward compatibility)
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
* pb: add address field to DataNodeInfo for proper node addressing
Previously, DataNodeInfo.Id was used as the node address, which worked
when Id was always ip:port. Now that Id can be an explicit string,
we need a separate Address field for connection purposes.
Changes:
- Add 'address' field to DataNodeInfo protobuf message
- Update ToDataNodeInfo() to populate the address field
- Update NewServerAddressFromDataNode() to use Address (with Id fallback)
- Fix LookupEcVolume to use dn.Url() instead of dn.Id()
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
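A sketch of the described fallback, using a local struct that stands in for the two DataNodeInfo fields involved:

    package pbsketch // hypothetical; not the generated protobuf types

    // dataNodeInfo mirrors the DataNodeInfo fields discussed above.
    type dataNodeInfo struct {
        Id      string // explicit id, or legacy ip:port
        Address string // connection address (new field)
    }

    // addressOf prefers the explicit Address and falls back to Id for older
    // nodes that still report ip:port as their Id.
    func addressOf(dn dataNodeInfo) string {
        if dn.Address != "" {
            return dn.Address
        }
        return dn.Id
    }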
* fix: trim whitespace from volume server id and fix test
- Trim whitespace from -id flag to treat ' ' as empty
- Fix store_load_balancing_test.go to include id parameter in NewStore call
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
* refactor: extract GetVolumeServerId to util package
Move the volume server ID determination logic to a shared utility function
to avoid code duplication between volume.go and rack.go.
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
* fix: improve transition logic for legacy nodes
- Use exact ip:port match instead of net.SplitHostPort heuristic
- Update GrpcPort and PublicUrl during transition for consistency
- Remove unused net import
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
* fix: add id normalization and address change logging
- Normalize id parameter at function boundary (trim whitespace)
- Log when DataNode IP:Port changes (helps debug K8s pod rescheduling)
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
fix: EC volume deletion issues
Fixes #7489
1. Skip cookie check for EC volume deletion when SkipCookieCheck is set
When batch deleting files from EC volumes with SkipCookieCheck=true
(e.g., orphan file cleanup), the cookie is not available. The deletion
was failing with 'unexpected cookie 0' because DeleteEcShardNeedle
always validated the cookie.
2. Optimize doDeleteNeedleFromAtLeastOneRemoteEcShards to return early
Return immediately when a deletion succeeds, instead of continuing
to try all parity shards unnecessarily.
3. Remove useless log message that always logged nil error
The log at V(1) was logging err after checking it was nil.
Regression introduced in commit 7bdae5172 (Jan 3, 2023) when EC batch
delete support was added.
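A sketch of the cookie guard from item 1 above; names are illustrative, and the real check lives inside the EC deletion path:

    package ecsketch // illustrative only

    import "fmt"

    // checkCookie skips validation when the caller has no cookie to offer
    // (e.g. orphan-file cleanup with SkipCookieCheck=true).
    func checkCookie(stored, requested uint32, skipCookieCheck bool) error {
        if skipCookieCheck {
            return nil
        }
        if stored != requested {
            return fmt.Errorf("unexpected cookie %d", requested)
        }
        return nil
    }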
* Add placement package for EC shard placement logic
- Consolidate EC shard placement algorithm for reuse across shell and worker tasks
- Support multi-pass selection: racks, then servers, then disks
- Include proper spread verification and scoring functions
- Comprehensive test coverage for various cluster topologies
* Make ec.balance disk-aware for multi-disk servers
- Add EcDisk struct to track individual disks on volume servers
- Update EcNode to maintain per-disk shard distribution
- Parse disk_id from EC shard information during topology collection
- Implement pickBestDiskOnNode() for selecting best disk per shard
- Add diskDistributionScore() for tie-breaking node selection
- Update all move operations to specify target disk in RPC calls
- Improves shard balance within multi-disk servers, not just across servers
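A sketch of per-node disk selection in the spirit of pickBestDiskOnNode(): pick the disk currently holding the fewest EC shards (the struct fields are assumptions, not the actual EcDisk definition):

    package balancesketch // illustrative only

    type ecDisk struct {
        DiskId     uint32
        ShardCount int // EC shards currently on this disk
    }

    // pickBestDisk returns the least-loaded disk on a node, or ok=false if
    // the node reports no disks.
    func pickBestDisk(disks []ecDisk) (best ecDisk, ok bool) {
        for i, d := range disks {
            if i == 0 || d.ShardCount < best.ShardCount {
                best, ok = d, true
            }
        }
        return best, ok
    }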
* Use placement package in EC detection for consistent disk-level placement
- Replace custom EC disk selection logic with shared placement package
- Convert topology DiskInfo to placement.DiskCandidate format
- Use SelectDestinations() for multi-rack/server/disk spreading
- Convert placement results back to topology DiskInfo for task creation
- Ensures EC detection uses same placement logic as shell commands
* Make volume server evacuation disk-aware
- Use pickBestDiskOnNode() when selecting evacuation target disk
- Specify target disk in evacuation RPC requests
- Maintains balanced disk distribution during server evacuations
* Rename PlacementConfig to PlacementRequest for clarity
PlacementRequest better reflects that this is a request for placement
rather than a configuration object. This improves API semantics.
* Rename DefaultConfig to DefaultPlacementRequest
Aligns with the PlacementRequest type naming for consistency
* Address review comments from Gemini and CodeRabbit
Fix HIGH issues:
- Fix empty disk discovery: Now discovers all disks from VolumeInfos,
not just from EC shards. This ensures disks without EC shards are
still considered for placement.
- Fix EC shard count calculation in detection.go: Now correctly filters
by DiskId and sums actual shard counts using ShardBits.ShardIdCount()
instead of just counting EcShardInfo entries.
Fix MEDIUM issues:
- Add disk ID to evacuation log messages for consistency with other logging
- Remove unused serverToDisks variable in placement.go
- Fix comment that incorrectly said 'ascending' when sorting is 'descending'
* add ec tests
* Update ec-integration-tests.yml
* Update ec_integration_test.go
* Fix EC integration tests CI: build weed binary and update actions
- Add 'Build weed binary' step before running tests
- Update actions/setup-go from v4 to v6 (Node20 compatibility)
- Update actions/checkout from v2 to v4 (Node20 compatibility)
- Move working-directory to test step only
* Add disk-aware EC rebalancing integration tests
- Add TestDiskAwareECRebalancing test with multi-disk cluster setup
- Test EC encode with disk awareness (shows disk ID in output)
- Test EC balance with disk-level shard distribution
- Add helper functions for disk-level verification:
- startMultiDiskCluster: 3 servers x 4 disks each
- countShardsPerDisk: track shards per disk per server
- calculateDiskShardVariance: measure distribution balance
- Verify no single disk is overloaded with shards
Prevents potential screen garbling when operations are parallelized.
Also simplifies logging by automatically adding newlines on output, if necessary.
Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com>
* Fix SSE-S3 copy: preserve encryption metadata and set chunk SSE type
Fixes GitHub #7562: Copying objects between encrypted buckets was failing.
Root causes:
1. processMetadataBytes was re-adding SSE headers from source entry, undoing
the encryption header filtering. Now uses dstEntry.Extended which is
already filtered.
2. SSE-S3 streaming copy returned nil metadata. Now properly generates and
returns SSE-S3 destination metadata (SeaweedFSSSES3Key, AES256 header)
via ExecuteStreamingCopyWithMetadata.
3. Chunks created during streaming copy didn't have SseType set. Now sets
SseType and per-chunk SseMetadata with chunk-specific IVs for SSE-S3,
enabling proper decryption on GetObject.
* Address review: make SSE-S3 metadata serialization failures fatal errors
- In executeEncryptCopy: return error instead of just logging if
SerializeSSES3Metadata fails
- In createChunkFromData: return error if chunk SSE-S3 metadata
serialization fails
This ensures objects/chunks are never created without proper encryption
metadata, preventing unreadable/corrupted data.
* fmt
* Refactor: reuse function names instead of creating WithMetadata variants
- Change ExecuteStreamingCopy to return (*EncryptionSpec, error) directly
- Remove ExecuteStreamingCopyWithMetadata wrapper
- Change executeStreamingReencryptCopy to return (*EncryptionSpec, error)
- Remove executeStreamingReencryptCopyWithMetadata wrapper
- Update callers to ignore encryption spec with _ where not needed
* Add TODO documenting large file SSE-S3 copy limitation
The streaming copy approach encrypts the entire stream with a single IV
but stores data in chunks with per-chunk IVs. This causes decryption
issues for large files. Small inline files work correctly.
This is a known architectural issue that needs separate work to fix.
* Use chunk-by-chunk encryption for SSE-S3 copy (consistent with SSE-C/SSE-KMS)
Instead of streaming encryption (which had IV mismatch issues for multi-chunk
files), SSE-S3 now uses the same chunk-by-chunk approach as SSE-C and SSE-KMS:
1. Extended copyMultipartCrossEncryption to handle SSE-S3:
- Added SSE-S3 source decryption in copyCrossEncryptionChunk
- Added SSE-S3 destination encryption with per-chunk IVs
- Added object-level metadata generation for SSE-S3 destinations
2. Updated routing in executeEncryptCopy/executeDecryptCopy/executeReencryptCopy
to use copyMultipartCrossEncryption for all SSE-S3 scenarios
3. Removed streaming copy functions (shouldUseStreamingCopy,
executeStreamingReencryptCopy) as they're no longer used
4. Added large file (1MB) integration test to verify chunk-by-chunk copy works
This ensures consistent behavior across all SSE types and fixes data corruption
that occurred with large files in the streaming copy approach.
* fmt
* fmt
* Address review: fail explicitly if SSE-S3 metadata is missing
Instead of silently ignoring missing SSE-S3 metadata (which could create
unreadable objects), now explicitly fail the copy operation with a clear
error message if:
- First chunk is missing
- First chunk doesn't have SSE-S3 type
- First chunk has empty SSE metadata
- Deserialization fails
* Address review: improve comment to reflect full scope of chunk creation
* Address review: fail explicitly if baseIV is empty for SSE-S3 chunk encryption
If DestinationIV is not set when encrypting SSE-S3 chunks, the chunk would
be created without SseMetadata, causing GetObject decryption to fail later.
Now fails explicitly with a clear error message.
Note: calculateIVWithOffset returns ([]byte, int) not ([]byte, error) - the
int is a skip amount for intra-block alignment, not an error code.
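A sketch of that shape for AES-CTR: advance the IV by whole 16-byte blocks and report the remaining intra-block skip (this follows the description above, not necessarily the exact arithmetic in the codebase):

    package ivsketch // illustrative only

    // ivWithOffset derives the IV for data starting at offset: the base IV is
    // advanced by offset/16 blocks (big-endian add), and the returned int is
    // offset%16, the bytes to discard from the first keystream block.
    func ivWithOffset(baseIV []byte, offset int64) ([]byte, int) {
        const blockSize = 16
        iv := make([]byte, len(baseIV))
        copy(iv, baseIV)
        add := uint64(offset / blockSize)
        for i := len(iv) - 1; i >= 0 && add > 0; i-- {
            sum := uint64(iv[i]) + (add & 0xff)
            iv[i] = byte(sum)
            add = (add >> 8) + (sum >> 8)
        }
        return iv, int(offset % blockSize)
    }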
* Address review: handle 0-byte files in SSE-S3 copy
For 0-byte files, there are no chunks to get metadata from. Generate an IV
for the object-level metadata to ensure even empty files are properly marked
as SSE-S3 encrypted.
Also validate that we don't have a non-empty file with no chunks (which
would indicate an internal error).
* Fix issue #6847: S3 chunked encoding includes headers in stored content
- Add hasTrailer flag to s3ChunkedReader to track trailer presence
- Update state transition logic to properly handle trailers in unsigned streaming
- Enhance parseChunkChecksum to handle multiple trailer lines
- Skip checksum verification for unsigned streaming uploads
- Add test case for mixed format handling (unsigned headers with signed chunks)
- Remove redundant CRLF reading in trailer processing
This fixes the issue where chunk-signature and x-amz headers were appearing
in stored file content when using chunked encoding with newer AWS SDKs.
* Fix checksum validation for unsigned streaming uploads
- Always validate checksum for data integrity regardless of signing
- Correct checksum value in test case
- Addresses PR review feedback about checksum verification
* Add warning log when multiple checksum headers found in trailer
- Log a warning when multiple valid checksum headers appear in trailers
- Uses last checksum header as suggested by CodeRabbit reviewer
- Improves debugging for edge cases with multiple checksum algorithms
* Improve trailer parsing robustness in parseChunkChecksum
- Remove redundant trimTrailingWhitespace call since readChunkLine already trims
- Use bytes.TrimSpace for both key and value to handle whitespace around colon separator
- Follows HTTP header specifications for optional whitespace around separators
- Addresses Gemini Code Assist review feedback
- Modified test/s3/tagging/s3_tagging_test.go to use environment variables for configurable endpoint and credentials
- Added s3-tagging-tests job to .github/workflows/s3-go-tests.yml to run tagging tests in CI
- Tests will now run automatically on pull requests
- Add X-Amz-Tagging header parsing in putToFiler function for PUT object operations
- Store tags with X-Amz-Tagging- prefix in entry.Extended metadata
- Add comprehensive test suite for S3 object tagging functionality
- Tests cover upload tagging, API operations, special characters, and edge cases
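A sketch of the tag parsing described above, assuming the header uses the standard 'k1=v1&k2=v2' form; the map layout (X-Amz-Tagging- prefixed keys) follows the commit text:

    package tagsketch // illustrative only

    import "net/url"

    // parseTaggingHeader turns "color=red&env=prod" into prefixed metadata
    // entries suitable for entry.Extended.
    func parseTaggingHeader(header string) (map[string][]byte, error) {
        tags, err := url.ParseQuery(header)
        if err != nil {
            return nil, err
        }
        extended := make(map[string][]byte, len(tags))
        for k, v := range tags {
            if len(v) > 0 {
                extended["X-Amz-Tagging-"+k] = []byte(v[0])
            }
        }
        return extended, nil
    }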
* filer: add username and keyPrefix support for Redis stores
Addresses https://github.com/seaweedfs/seaweedfs/issues/7299
- Add username config option to redis2, redis_cluster2, redis_lua, and
redis_lua_cluster stores (sentinel stores already had it)
- Add keyPrefix config option to all Redis stores to prefix all keys,
useful for Envoy Redis Proxy or multi-tenant Redis setups
* refactor: reduce duplication in redis.NewClient creation
Address code review feedback by defining redis.Options once and
conditionally setting TLSConfig instead of duplicating the entire
NewClient call.
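A sketch of that pattern, assuming the go-redis v9 client; keyPrefix handling is omitted here:

    package redissketch // illustrative only

    import (
        "crypto/tls"

        "github.com/redis/go-redis/v9"
    )

    // newRedisClient builds the options once and only layers on TLS when
    // configured, instead of duplicating the redis.NewClient call.
    func newRedisClient(addr, username, password string, tlsCfg *tls.Config) *redis.Client {
        opts := &redis.Options{
            Addr:     addr,
            Username: username, // supports Redis ACL users
            Password: password,
        }
        if tlsCfg != nil {
            opts.TLSConfig = tlsCfg
        }
        return redis.NewClient(opts)
    }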
* filer.toml: add username and keyPrefix to redis2.tmp example
* Fix #7575: Correct interface check for filer address function in admin server
Problem:
User creation in object store was failing with error:
'filer_etc: filer address function not configured'
Root Cause:
In admin_server.go, the code checked for incorrect interface method
SetFilerClient(string, grpc.DialOption) instead of the actual
SetFilerAddressFunc(func() pb.ServerAddress, grpc.DialOption)
This interface mismatch prevented the filer address function from being
configured, causing user creation operations to fail.
Solution:
- Fixed interface check to use SetFilerAddressFunc
- Updated function call to properly configure filer address function
- Function now dynamically returns current active filer address
Tests Added:
- Unit tests in weed/admin/dash/user_management_test.go
- Integration tests in test/admin/user_creation_integration_test.go
- Documentation in test/admin/README.md
All tests pass successfully.
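A sketch of the corrected type assertion; the interface shape follows the signature quoted above, while the surrounding function is illustrative:

    package adminsketch // illustrative only

    import (
        "google.golang.org/grpc"

        "github.com/seaweedfs/seaweedfs/weed/pb"
    )

    // filerAddressSetter is the interface the credential store must satisfy.
    type filerAddressSetter interface {
        SetFilerAddressFunc(func() pb.ServerAddress, grpc.DialOption)
    }

    // configureCredentialStore wires the filer address function into the store
    // when (and only when) the store implements the correct interface.
    func configureCredentialStore(store interface{}, activeFiler func() pb.ServerAddress, dialOpt grpc.DialOption) {
        if s, ok := store.(filerAddressSetter); ok {
            s.SetFilerAddressFunc(activeFiler, dialOpt)
        }
    }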
* Fix #7575: Correct interface check for filer address function in admin UI
Problem:
User creation in Admin UI was failing with error:
'filer_etc: filer address function not configured'
Root Cause:
In admin_server.go, the code checked for incorrect interface method
SetFilerClient(string, grpc.DialOption) instead of the actual
SetFilerAddressFunc(func() pb.ServerAddress, grpc.DialOption)
This interface mismatch prevented the filer address function from being
configured, causing user creation operations to fail in the Admin UI.
Note: This bug only affects the Admin UI. The S3 API and weed shell
commands (s3.configure) were unaffected as they use the correct interface
or bypass the credential manager entirely.
Solution:
- Fixed interface check in admin_server.go to use SetFilerAddressFunc
- Updated function call to properly configure filer address function
- Function now dynamically returns current active filer (HA-aware)
- Cleaned up redundant comments in the code
Tests Added:
- Unit tests in weed/admin/dash/user_management_test.go
* TestFilerAddressFunctionInterface - verifies correct interface
* TestGenerateAccessKey - tests key generation
* TestGenerateSecretKey - tests secret generation
* TestGenerateAccountId - tests account ID generation
All tests pass and will run automatically in CI.
* Fix #7575: Correct interface check for filer address function in admin UI
Problem:
User creation in Admin UI was failing with error:
'filer_etc: filer address function not configured'
Root Cause:
1. In admin_server.go, the code checked for incorrect interface method
SetFilerClient(string, grpc.DialOption) instead of the actual
SetFilerAddressFunc(func() pb.ServerAddress, grpc.DialOption)
2. The admin command was missing the filer_etc import, so the store
was never registered
This interface mismatch prevented the filer address function from being
configured, causing user creation operations to fail in the Admin UI.
Note: This bug only affects the Admin UI. The S3 API and weed shell
commands (s3.configure) were unaffected as they use the correct interface
or bypass the credential manager entirely.
Solution:
- Added filer_etc import to weed/command/admin.go to register the store
- Fixed interface check in admin_server.go to use SetFilerAddressFunc
- Updated function call to properly configure filer address function
- Function now dynamically returns current active filer (HA-aware)
- Hoisted credentialManager assignment to reduce code duplication
Tests Added:
- Unit tests in weed/admin/dash/user_management_test.go
* TestFilerAddressFunctionInterface - verifies correct interface
* TestGenerateAccessKey - tests key generation
* TestGenerateSecretKey - tests secret generation
* TestGenerateAccountId - tests account ID generation
All tests pass and will run automatically in CI.
* Enable FIPS 140-3 compliant crypto by default
Addresses #6889
- Enable GOEXPERIMENT=systemcrypto by default in all Makefiles
- Enable GOEXPERIMENT=systemcrypto by default in all Dockerfiles
- Go 1.24+ has native FIPS 140-3 support via this setting
- Users can disable by setting GOEXPERIMENT= (empty)
Algorithms used (all FIPS approved):
- AES-256-GCM for data encryption
- AES-256-CTR for SSE-C
- HMAC-SHA256 for S3 signatures
- TLS 1.2/1.3 for transport encryption
* Fix: Remove invalid GOEXPERIMENT=systemcrypto
Go 1.24 uses GODEBUG=fips140=on at runtime, not GOEXPERIMENT at build time.
- Remove GOEXPERIMENT=systemcrypto from all Makefiles
- Remove GOEXPERIMENT=systemcrypto from all Dockerfiles
FIPS 140-3 mode can be enabled at runtime:
GODEBUG=fips140=on ./weed server ...
* Add FIPS 140-3 support enabled by default
Addresses #6889
- FIPS 140-3 mode is ON by default in Docker containers
- Sets GODEBUG=fips140=on via entrypoint.sh
- To disable: docker run -e GODEBUG=fips140=off ...
Have `volume.check.disk` select a random (healthy) source volume when repairing read-only volumes.
This ensures uniform load across the topology when the command is run. Also remove a lingering
TODO about ignoring full volumes; not only is there no way to tell whether a read-only volume is
full or damaged, we ultimately want to check the former anyway.
* Add link to wiki installation page in README
* Add building for docker in weed/Makefile
Building without `CGO_ENABLED=0` and running the resulting executable in Docker can cause the container to exit with an error
* Update README.md
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* Add GOOS=linux to build_docker target for cross-compilation
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Chris Lu <chris.lu@gmail.com>
* mount: improve read throughput with parallel chunk fetching
This addresses issue #7504 where a single weed mount FUSE instance
does not fully utilize node network bandwidth when reading large files.
Changes:
- Add -concurrentReaders mount option (default: 16) to control the
maximum number of parallel chunk fetches during read operations
- Implement parallel section reading in ChunkGroup.ReadDataAt() using
errgroup for better throughput when reading across multiple sections
- Enhance ReaderCache with MaybeCacheMany() to prefetch multiple chunks
ahead in parallel during sequential reads (now prefetches 4 chunks)
- Increase ReaderCache limit dynamically based on concurrentReaders
to support higher read parallelism
The bottleneck was that chunks were being read sequentially even when
they reside on different volume servers. By introducing parallel chunk
fetching, a single mount instance can now better saturate available
network bandwidth.
Fixes: #7504
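A sketch of bounded parallel fetching with errgroup; the real ChunkGroup.ReadDataAt also handles section lookup, offsets, and caching:

    package readsketch // illustrative only

    import "golang.org/x/sync/errgroup"

    // readSectionsParallel runs the per-section reads concurrently, capped at
    // concurrentReaders (the -concurrentReaders mount option), and returns the
    // first error encountered.
    func readSectionsParallel(sectionReads []func() error, concurrentReaders int) error {
        g := new(errgroup.Group)
        g.SetLimit(concurrentReaders)
        for _, read := range sectionReads {
            g.Go(read)
        }
        return g.Wait()
    }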
* fmt
* Address review comments: make prefetch configurable, improve error handling
Changes:
1. Add DefaultPrefetchCount constant (4) to reader_at.go
2. Add GetPrefetchCount() method to ChunkGroup that derives prefetch count
from concurrentReaders (1/4 ratio, min 1, max 8)
3. Pass prefetch count through NewChunkReaderAtFromClient
4. Fix error handling in readDataAtParallel to prioritize errgroup error
5. Update all callers to use DefaultPrefetchCount constant
For mount operations, prefetch scales with -concurrentReaders:
- concurrentReaders=16 (default) -> prefetch=4
- concurrentReaders=32 -> prefetch=8 (capped)
- concurrentReaders=4 -> prefetch=1
For non-mount paths (WebDAV, query engine, MQ), uses DefaultPrefetchCount.
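The derivation is just a clamped ratio; a sketch matching the examples above:

    package readsketch // illustrative only

    // prefetchCount derives the prefetch depth from concurrentReaders:
    // one quarter, clamped to the range [1, 8].
    func prefetchCount(concurrentReaders int) int {
        n := concurrentReaders / 4
        if n < 1 {
            n = 1
        }
        if n > 8 {
            n = 8
        }
        return n
    }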
* fmt
* Refactor: use variadic parameter instead of new function name
Use NewChunkGroup with optional concurrentReaders parameter instead of
creating a separate NewChunkGroupWithConcurrency function.
This maintains backward compatibility - existing callers without the
parameter get the default of 16 concurrent readers.
* Use explicit concurrentReaders parameter instead of variadic
* Refactor: use MaybeCache with count parameter instead of new MaybeCacheMany function
* Address nitpick review comments
- Add upper bound (128) on concurrentReaders to prevent excessive goroutine fan-out
- Cap readerCacheLimit at 256 accordingly
- Fix SetChunks: use Lock() instead of RLock() since we are writing to group.sections
* filer use context without cancellation
* pass along context
* fix: copy to bucket with default SSE-S3 encryption fails (#7562)
When copying an object from an encrypted bucket to a temporary unencrypted
bucket, then to another bucket with default SSE-S3 encryption, the operation
fails with 'invalid SSE-S3 source key type' error.
Root cause:
When objects are copied from an SSE-S3 encrypted bucket to an unencrypted
bucket, the 'X-Amz-Server-Side-Encryption: AES256' header is preserved but
the actual encryption key (SeaweedFSSSES3Key) is stripped. This creates an
'orphaned' SSE-S3 header that causes IsSSES3EncryptedInternal() to return
true, triggering decryption logic with a nil key.
Fix:
1. Modified IsSSES3EncryptedInternal() to require BOTH the AES256 header
AND the SeaweedFSSSES3Key to be present before returning true
2. Added isOrphanedSSES3Header() to detect orphaned SSE-S3 headers
3. Updated copy handler to strip orphaned headers during copy operations
Fixes #7562
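A sketch of the two checks described above; the header and key names follow the commit text, while the exact constants in the codebase may differ:

    package ssesketch // illustrative only

    // isSSES3Encrypted requires BOTH the AES256 header and the internal key,
    // so an orphaned header alone no longer triggers decryption.
    func isSSES3Encrypted(metadata map[string][]byte) bool {
        _, hasHeader := metadata["X-Amz-Server-Side-Encryption"]
        _, hasKey := metadata["SeaweedFSSSES3Key"]
        return hasHeader && hasKey
    }

    // isOrphanedSSES3Header flags the header-without-key case so the copy
    // handler can strip it from the destination metadata.
    func isOrphanedSSES3Header(metadata map[string][]byte) bool {
        _, hasHeader := metadata["X-Amz-Server-Side-Encryption"]
        _, hasKey := metadata["SeaweedFSSSES3Key"]
        return hasHeader && !hasKey
    }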
* fmt
* refactor: simplify isOrphanedSSES3Header function logic
Remove redundant existence check since the caller iterates through
metadata map, making the check unnecessary. Improves readability
while maintaining the same functionality.