Added call to iamApiServer.Shutdown() to ensure cleanup happens when possible,
and documented the limitations of the current approach.
Problem:
The Shutdown() method was defined in IamApiServer but never called anywhere,
meaning the KeepConnectedToMaster goroutine would continue running even when
the IAM server stopped, causing resource leaks.
Changes:
1. Store iamApiServer instance in weed/command/iam.go
- Changed: _, iamApiServer_err := iamapi.NewIamApiServer(...)
- To: iamApiServer, iamApiServer_err := iamapi.NewIamApiServer(...)
2. Added defer call for best-effort cleanup
- defer iamApiServer.Shutdown()
- This will execute if startIamServer() returns normally
3. Added logging in Shutdown() method
- Log when shutdown is triggered for visibility
4. Documented limitations and future improvements
- Added note that defer only works for normal function returns
- SeaweedFS commands don't currently have signal handling
- Suggested future enhancement: add SIGTERM/SIGINT handling
Current behavior:
- ✓ Cleanup happens if Serve() returns with an error and startIamServer() returns normally (unlikely)
- ✗ Cleanup does NOT happen if startup fails via glog.Fatalf (os.Exit bypasses deferred calls; see the demo below)
- ✗ Cleanup does NOT happen on SIGTERM/SIGINT (process killed)
The SIGTERM/SIGINT case is a limitation of the current command architecture - all
SeaweedFS commands (s3, filer, volume, master, iam) lack signal handling
for graceful shutdown. This is a systemic issue that affects all services.
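A tiny demo of the defer caveat (generic Go, not SeaweedFS code): deferred
functions run on normal return but not when os.Exit is called, which is what
glog.Fatalf does after logging.

package main

import (
    "fmt"
    "os"
)

func main() {
    defer fmt.Println("cleanup ran") // runs only on normal return
    if len(os.Args) > 1 {
        // glog.Fatalf behaves like this: log, then os.Exit,
        // which terminates without running deferred calls.
        os.Exit(1)
    }
    // normal return path: the deferred cleanup prints
}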
Future enhancement:
To properly handle SIGTERM/SIGINT, the command layer would need:
sigChan := make(chan os.Signal, 1)
signal.Notify(sigChan, syscall.SIGTERM, syscall.SIGINT)
go func() {
    httpServer.Serve(listener) // Serve blocks, so run it in a goroutine
}()
<-sigChan // block until a shutdown signal arrives
glog.V(0).Infof("Received shutdown signal")
iamApiServer.Shutdown()
httpServer.Shutdown(context.Background())
This would require refactoring the command structure for all services,
which is out of scope for this change.
Benefits of current approach:
✓ Best-effort cleanup (better than nothing)
✓ Cleanup in error paths that return normally (rather than exiting via glog.Fatalf)
✓ Documented for future improvement
✓ Consistent with how other SeaweedFS services handle lifecycle
Fixed several code quality issues identified during review:
1. Corrected backoff algorithm description in filer_client.go:
- Changed "Exponential backoff" to "Multiplicative backoff with 1.5x factor"
- The formula waitTime * 3 / 2 grows by a 1.5x factor (1s, 1.5s, 2.25s, ...), not the doubling (2^n) that "exponential backoff" usually implies
- More accurate terminology prevents confusion (runnable sketch after this list)
2. Removed redundant nil check in vidmap_client.go:
- After the for loop, node is guaranteed to be non-nil
- Loop either returns early or assigns non-nil value to node
- Simplified: if node != nil { node.cache.Store(nil) } → node.cache.Store(nil) (see the invariant sketch after this list)
3. Added startup logging to IAM server for consistency:
- Log when master client connection starts
- Matches pattern in S3ApiServer (line 100 in s3api_server.go)
- Improves operational visibility during startup
- Added missing glog import
4. Fixed indentation in filer/reader_at.go:
- Lines 76-91 had incorrect indentation (extra tab level)
- Line 93 also misaligned
- Now properly aligned with surrounding code
5. Updated deprecation comment to follow Go convention:
- Changed "DEPRECATED:" to "Deprecated:" (standard Go format)
- Tools like staticcheck and IDEs recognize the standard format
- Enables automated deprecation warnings in tooling (example after this list)
- Better developer experience
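A minimal runnable sketch of the renamed backoff from item 1 (variable names
are illustrative, not the actual filer_client.go code):

package main

import (
    "fmt"
    "time"
)

func main() {
    // Multiplicative backoff with a 1.5x factor: each retry waits
    // waitTime * 3 / 2, producing 1s, 1.5s, 2.25s, 3.375s, ...
    waitTime := time.Second
    for i := 0; i < 4; i++ {
        fmt.Println(waitTime)
        waitTime = waitTime * 3 / 2
    }
}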
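A schematic sketch of the invariant behind item 2 (hypothetical types, not the
actual vidmap_client.go code): every path either returns from the function
early or leaves node holding a non-nil value, so a post-loop nil check is dead
code.

package main

import "sync/atomic"

type vidMap struct{}

type vidMapNode struct {
    next  *vidMapNode
    cache atomic.Pointer[vidMap]
}

func resetCaches(head *vidMapNode) {
    if head == nil {
        return // early return: the nil case never reaches the loop
    }
    node := head
    for node.next != nil {
        node = node.next // only ever reassigned to a non-nil next
    }
    // node is provably non-nil here, so the former
    // `if node != nil { ... }` guard was redundant.
    node.cache.Store(nil)
}

func main() {
    resetCaches(&vidMapNode{next: &vidMapNode{}})
}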
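An example of the convention from item 5 (hypothetical function names): tools
only recognize a doc-comment paragraph that begins exactly with "Deprecated:".

package example

// Deprecated: Use NewThing instead.
// staticcheck (check SA1019) and IDEs key off the "Deprecated:"
// paragraph prefix and warn at every call site of OldThing.
func OldThing() {}

func NewThing() {}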
All changes are cosmetic and do not affect functionality.
The IAM server's KeepConnectedToMaster goroutine used context.Background(),
which is non-cancellable, causing the goroutine and its gRPC connections
to leak on server shutdown.
Problem:
go masterClient.KeepConnectedToMaster(context.Background())
- context.Background() never cancels
- KeepConnectedToMaster goroutine runs forever
- gRPC connection to master stays open
- No way to stop cleanly on server shutdown
Result: Resource leaks when IAM server is stopped
Fix:
1. Added shutdownContext and shutdownCancel to IamApiServer struct
2. Created cancellable context in NewIamApiServerWithStore:
shutdownCtx, shutdownCancel := context.WithCancel(context.Background())
3. Pass shutdownCtx to KeepConnectedToMaster:
go masterClient.KeepConnectedToMaster(shutdownCtx)
4. Added Shutdown() method to invoke cancel:
func (iama *IamApiServer) Shutdown() {
    if iama.shutdownCancel != nil {
        iama.shutdownCancel()
    }
}
5. Stored masterClient reference on IamApiServer for future use
Benefits:
- Goroutine stops cleanly when Shutdown() is called
- gRPC connections are closed properly
- No resource leaks on server restart/stop
- Shutdown() is idempotent (safe to call multiple times; demonstrated in the sketch below)
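A self-contained sketch of the lifecycle above (stand-in names; the real code
passes shutdownCtx to masterClient.KeepConnectedToMaster instead of the
anonymous worker below):

package main

import (
    "context"
    "fmt"
    "time"
)

// server is a hypothetical stand-in for IamApiServer.
type server struct {
    shutdownCtx    context.Context
    shutdownCancel context.CancelFunc
}

func newServer() *server {
    ctx, cancel := context.WithCancel(context.Background())
    s := &server{shutdownCtx: ctx, shutdownCancel: cancel}
    go func() {
        <-ctx.Done() // returns once Shutdown cancels the context
        fmt.Println("background worker stopped:", ctx.Err())
    }()
    return s
}

// Shutdown is idempotent: a context.CancelFunc may be called any
// number of times; calls after the first are no-ops.
func (s *server) Shutdown() {
    if s.shutdownCancel != nil {
        s.shutdownCancel()
    }
}

func main() {
    s := newServer()
    s.Shutdown()
    s.Shutdown() // safe: second call does nothing
    time.Sleep(100 * time.Millisecond) // give the worker time to print
}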
Usage (for future graceful shutdown):
iamServer, _ := iamapi.NewIamApiServer(...)
defer iamServer.Shutdown()
// or in signal handler:
sigChan := make(chan os.Signal, 1)
signal.Notify(sigChan, syscall.SIGTERM, syscall.SIGINT)
go func() {
    <-sigChan
    iamServer.Shutdown()
    os.Exit(0)
}()
Note: Current command implementations (weed/command/iam.go) don't have
shutdown paths yet, but this makes IAM server ready for proper lifecycle
management when that infrastructure is added.
The IAM server creates and uses a MasterClient but never starts
KeepConnectedToMaster, which could cause blocking if IAM config files
have chunks requiring volume lookups.
Problem flow:
NewIamApiServerWithStore()
→ creates masterClient
→ ❌ NEVER starts KeepConnectedToMaster
GetS3ApiConfigurationFromFiler()
→ filer.ReadEntry(iama.masterClient, ...)
→ StreamContent(masterClient, ...) if file has chunks
→ masterClient.GetLookupFileIdFunction()
→ GetMaster(ctx) ← BLOCKS indefinitely waiting for connection!
While IAM config files (identity & policies) are typically small and
stored inline without chunks, the code path exists and would block
if the files ever had chunks.
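A schematic of the blocking call (a simplified stand-in, not the actual
wdclient source): GetMaster polls for a master address that only the
KeepConnectedToMaster loop records, so with that loop never started and a
non-cancellable context, the wait in the flow above never ends.

package main

import (
    "context"
    "fmt"
    "sync/atomic"
    "time"
)

type masterClient struct {
    currentMaster atomic.Value // a string, set by KeepConnectedToMaster
}

// GetMaster blocks until some other goroutine records a master address.
// If KeepConnectedToMaster was never started, only ctx can end the wait.
func (mc *masterClient) GetMaster(ctx context.Context) (string, error) {
    for {
        if m, ok := mc.currentMaster.Load().(string); ok && m != "" {
            return m, nil
        }
        select {
        case <-ctx.Done():
            return "", ctx.Err()
        case <-time.After(200 * time.Millisecond):
            // keep polling; nothing will ever set currentMaster
        }
    }
}

func main() {
    mc := &masterClient{}
    ctx, cancel := context.WithTimeout(context.Background(), time.Second)
    defer cancel()
    _, err := mc.GetMaster(ctx) // with context.Background() this would hang forever
    fmt.Println(err)            // context.DeadlineExceeded
}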
Fix:
Start KeepConnectedToMaster in background goroutine right after
creating masterClient, following the documented pattern:
mc := wdclient.NewMasterClient(...)
go mc.KeepConnectedToMaster(ctx)
This ensures masterClient is usable if ReadEntry ever needs to
stream chunked content from volume servers.
Note: This bug was dormant because IAM config files are small (<256 bytes)
and SeaweedFS stores small files inline in Entry.Content, not as chunks.
The bug would only manifest if:
- IAM config grew > 256 bytes (the inline threshold)
- Config was therefore stored as chunks on volume servers
at which point ReadEntry would call StreamContent and GetMaster would
block indefinitely.
Now all 9 production MasterClient instances correctly follow the pattern.
Replace the read/write actions with readAcp/writeAcp for handlers that use the acl query parameter
https://github.com/seaweedfs/seaweedfs/issues/4519
Co-authored-by: Konstantin Lebedev <9497591+kmlebedev@users.noreply.github.com>