Tree: e86e65e5ab

Branches:
add-ec-vacuum
add-filer-iam-grpc
add-iam-grpc-management
add_fasthttp_client
add_remote_storage
adding-message-queue-integration-tests
adjust-fsck-cutoff-default
also-delete-parent-directory-if-empty
avoid_releasing_temp_file_on_write
changing-to-zap
collect-public-metrics
copilot/fix-helm-chart-installation
copilot/fix-s3-object-tagging-issue
copilot/make-renew-interval-configurable
copilot/make-renew-interval-configurable-again
copilot/sub-pr-7677
create-table-snapshot-api-design
data_query_pushdown
dependabot/maven/other/java/client/com.google.protobuf-protobuf-java-3.25.5
dependabot/maven/other/java/examples/org.apache.hadoop-hadoop-common-3.4.0
detect-and-plan-ec-tasks
do-not-retry-if-error-is-NotFound
ec-disk-type-support
enhance-erasure-coding
fasthttp
feature/mini-port-detection
feature/modernize-s3-tests
feature/s3-multi-cert-support
filer1_maintenance_branch
fix-GetObjectLockConfigurationHandler
fix-bucket-name-case-7910
fix-helm-fromtoml-compatibility
fix-mount-http-parallelism
fix-mount-read-throughput-7504
fix-pr-7909
fix-s3-configure-consistency
fix-s3-object-tagging-issue-7589
fix-sts-session-token-7941
fix-versioning-listing-only
fix/windows-test-file-cleanup
ftp
gh-pages
iam-multi-file-migration
iam-permissions-and-api
improve-fuse-mount
improve-fuse-mount2
logrus
master
message_send
mount2
mq-subscribe
mq2
nfs-cookie-prefix-list-fixes
optimize-delete-lookups
original_weed_mount
pr-7412
pr/7984
pr/8140
raft-dual-write
random_access_file
refactor-needle-read-operations
refactor-volume-write
remote_overlay
remove-implicit-directory-handling
revert-5134-patch-1
revert-5819-patch-1
revert-6434-bugfix-missing-s3-audit
s3-remote-cache-singleflight
s3-select
s3tables-by-claude
sub
tcp_read
test-reverting-lock-table
test_udp
testing
testing-sdx-generation
tikv
track-mount-e2e
upgrade-versions-to-4.00
volume_buffered_writes
worker-execute-ec-tasks

Tags:
0.72
0.72.release
0.73
0.74
0.75
0.76
0.77
0.90
0.91
0.92
0.93
0.94
0.95
0.96
0.97
0.98
0.99
1.00
1.01
1.02
1.03
1.04
1.05
1.06
1.07
1.08
1.09
1.10
1.11
1.12
1.14
1.15
1.16
1.17
1.18
1.19
1.20
1.21
1.22
1.23
1.24
1.25
1.26
1.27
1.28
1.29
1.30
1.31
1.32
1.33
1.34
1.35
1.36
1.37
1.38
1.40
1.41
1.42
1.43
1.44
1.45
1.46
1.47
1.48
1.49
1.50
1.51
1.52
1.53
1.54
1.55
1.56
1.57
1.58
1.59
1.60
1.61
1.61RC
1.62
1.63
1.64
1.65
1.66
1.67
1.68
1.69
1.70
1.71
1.72
1.73
1.74
1.75
1.76
1.77
1.78
1.79
1.80
1.81
1.82
1.83
1.84
1.85
1.86
1.87
1.88
1.90
1.91
1.92
1.93
1.94
1.95
1.96
1.97
1.98
1.99
1;70
2.00
2.01
2.02
2.03
2.04
2.05
2.06
2.07
2.08
2.09
2.10
2.11
2.12
2.13
2.14
2.15
2.16
2.17
2.18
2.19
2.20
2.21
2.22
2.23
2.24
2.25
2.26
2.27
2.28
2.29
2.30
2.31
2.32
2.33
2.34
2.35
2.36
2.37
2.38
2.39
2.40
2.41
2.42
2.43
2.47
2.48
2.49
2.50
2.51
2.52
2.53
2.54
2.55
2.56
2.57
2.58
2.59
2.60
2.61
2.62
2.63
2.64
2.65
2.66
2.67
2.68
2.69
2.70
2.71
2.72
2.73
2.74
2.75
2.76
2.77
2.78
2.79
2.80
2.81
2.82
2.83
2.84
2.85
2.86
2.87
2.88
2.89
2.90
2.91
2.92
2.93
2.94
2.95
2.96
2.97
2.98
2.99
3.00
3.01
3.02
3.03
3.04
3.05
3.06
3.07
3.08
3.09
3.10
3.11
3.12
3.13
3.14
3.15
3.16
3.18
3.19
3.20
3.21
3.22
3.23
3.24
3.25
3.26
3.27
3.28
3.29
3.30
3.31
3.32
3.33
3.34
3.35
3.36
3.37
3.38
3.39
3.40
3.41
3.42
3.43
3.44
3.45
3.46
3.47
3.48
3.50
3.51
3.52
3.53
3.54
3.55
3.56
3.57
3.58
3.59
3.60
3.61
3.62
3.63
3.64
3.65
3.66
3.67
3.68
3.69
3.71
3.72
3.73
3.74
3.75
3.76
3.77
3.78
3.79
3.80
3.81
3.82
3.83
3.84
3.85
3.86
3.87
3.88
3.89
3.90
3.91
3.92
3.93
3.94
3.95
3.96
3.97
3.98
3.99
4.00
4.01
4.02
4.03
4.04
4.05
4.06
4.07
dev
helm-3.65.1
v0.69
v0.70beta
v3.33
108 Commits (e86e65e5ab0a1603fb1031c03d39dbe49222296f)

20952aa514 (2 days ago): Fix jwt error in admin UI (#8140)
* add jwt token in weed admin headers requests
* add jwt token to header for download
* :s/upload/download
* filer_signing.read despite of filer_signing key
* finalize filer_browser_handlers.go
* admin: add JWT authorization to file browser handlers
* security: fix typos in JWT read validation descriptions
* Move security.toml to example and secure keys
* security: address PR feedback on JWT enforcement and example keys
* security: refactor JWT logic and improve example keys readability
* Update docker/Dockerfile.local
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
---------
Co-authored-by: Chris Lu <chris.lu@gmail.com>
Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

41d079a316 (2 days ago): Fix Javascript merge issue and UI worker detail display bug (#8135)
* Fix previous merge issues in Javascript
Signed-off-by: Alasdair Macmillan <aimmac23@gmail.com>
* Fix issue where worker detail doesn't display without tasks
---------
Signed-off-by: Alasdair Macmillan <aimmac23@gmail.com>

c5b53397c6 (3 days ago): templ

5a7c74feac (3 days ago): migrate IAM policies to multi-file storage (#8114)
* Add IAM gRPC service definition - Add GetConfiguration/PutConfiguration for config management - Add CreateUser/GetUser/UpdateUser/DeleteUser/ListUsers for user management - Add CreateAccessKey/DeleteAccessKey/GetUserByAccessKey for access key management - Methods mirror existing IAM HTTP API functionality
* Add IAM gRPC handlers on filer server - Implement IamGrpcServer with CredentialManager integration - Handle configuration get/put operations - Handle user CRUD operations - Handle access key create/delete operations - All methods delegate to CredentialManager for actual storage
* Wire IAM gRPC service to filer server - Add CredentialManager field to FilerOption and FilerServer - Import credential store implementations in filer command - Initialize CredentialManager from credential.toml if available - Register IAM gRPC service on filer gRPC server - Enable credential management via gRPC alongside existing filer services
* Regenerate IAM protobuf with gRPC service methods
* fix: compilation error in DeleteUser
* fix: address code review comments for IAM migration
* feat: migrate policies to multi-file layout and fix identity duplicated content
* refactor: remove configuration.json and migrate Service Accounts to multi-file layout
* refactor: standardize Service Accounts as distinct store entities and fix Admin Server persistence
* config: set ServiceAccountsDirectory to /etc/iam/service_accounts
* Fix Chrome dialog auto-dismiss with Bootstrap modals - Add modal-alerts.js library with Bootstrap modal replacements - Replace all 15 confirm() calls with showConfirm/showDeleteConfirm - Auto-override window.alert() for all alert() calls - Fixes Chrome 132+ aggressively blocking native dialogs
* Upgrade Bootstrap from 5.3.2 to 5.3.8
* Fix syntax error in object_store_users.templ - remove duplicate closing braces
* create policy
* display errors
* migrate to multi-file policies
* address PR feedback: use showDeleteConfirm and showErrorMessage in policies.templ, refine migration check
* Update policies_templ.go
* add service account to iam grpc
* iam: fix potential path traversal in policy names by validating name pattern
* iam: add GetServiceAccountByAccessKey to CredentialStore interface
* iam: implement service account support for PostgresStore Includes full CRUD operations and efficient lookup by access key.
* iam: implement GetServiceAccountByAccessKey for filer_etc, grpc, and memory stores Provides efficient lookup of service accounts by access key where possible, with linear scan fallbacks for file-based stores.
* iam: remove filer_multiple support Deleted its implementation and references in imports, scaffold config, and core interface constants. Redundant with filer_etc.
* clear comment
* dash: robustify service account construction - Guard against nil sa.Credential when constructing responses - Fix Expiration logic to only set if > 0, avoiding Unix epoch 1970 - Ensure consistency across Get, Create, and Update handlers
* credential/filer_etc: improve error propagation in configuration handlers - Return error from loadServiceAccountsFromMultiFile to callers - Ensure listEntries errors in SaveConfiguration (cleanup logic) are propagated unless they are "not found" failures. - Fixes potential silent failures during IAM configuration sync.
* credential/filer_etc: add existence check to CreateServiceAccount Ensures consistency with other stores by preventing accidental overwrite of existing service accounts during creation.
* credential/memory: improve store robustness and Reset logic - Enforce ID immutability in UpdateServiceAccount to prevent orphans - Update Reset() to also clear the policies map, ensuring full state cleanup for tests.
* dash: improve service account robustness and policy docs - Wrap parent user lookup errors to preserve context - Strictly validate Status field in UpdateServiceAccount - Add deprecation comments to legacy policy management methods
* credential/filer_etc: protect against path traversal in service accounts Implemented ID validation (alphanumeric, underscores, hyphens) and applied it to Get, Save, and Delete operations to ensure no directory traversal via saId.json filenames.
* credential/postgres: improve robustness and cleanup comments - Removed brainstorming comments in GetServiceAccountByAccessKey - Added missing rows.Err() check during iteration - Properly propagate Scan and Unmarshal errors instead of swallowing them
* admin: unify UI alerts and confirmations using Bootstrap modals - Updated modal-alerts.js with improved automated alert type detection - Replaced native alert() and confirm() with showAlert(), showConfirm(), and showDeleteConfirm() across various Templ components - Improved UX for delete operations by providing better context and styling - Ensured consistent error reporting across IAM and Maintenance views
* admin: additional UI consistency fixes for alerts and confirmations - Replaced native alert() and confirm() with Bootstrap modals in: - EC volumes (repair flow) - Collection details (repair flow) - File browser (properties and delete) - Maintenance config schema (save and reset) - Improved delete confirmation in file browser with item context - Ensured consistent success/error/info styling for all feedbacks
* make
* iam: add GetServiceAccountByAccessKey RPC and update GetConfiguration
* iam: implement GetServiceAccountByAccessKey on server and client
* iam: centralize policy and service account validation
* iam: optimize MemoryStore service account lookups with indexing
* iam: fix postgres service_accounts table and optimize lookups
* admin: refactor modal alerts and clean up dashboard logic
* admin: fix EC shards table layout mismatch
* admin: URL-encode IAM path parameters for safety
* admin: implement pauseWorker logic in maintenance view
* iam: add rows.Err() check to postgres ListServiceAccounts
* iam: standardize ErrServiceAccountNotFound across credential stores
* iam: map ErrServiceAccountNotFound to codes.NotFound in DeleteServiceAccount
* iam: refine service account store logic, errors and schema
* iam: add validation to GetServiceAccountByAccessKey
* admin: refine modal titles and ensure URL safety
* admin: address bot review comments for alerts and async usage
* iam: fix syntax error by restoring missing function declaration
* [FilerEtcStore] improve error handling in CreateServiceAccount Refine error handling to provide clearer messages when checking for existing service accounts.
* [PostgresStore] add nil guards and validation to service account methods Ensure input parameters are not nil and required IDs are present to prevent runtime panics and ensure data integrity.
* [JS] add shared IAM utility script Consolidate common IAM operations like deleteUser and deleteAccessKey into a shared utility script for better maintainability.
* [View] include shared IAM utilities in layout Include iam-utils.js in the main layout to make IAM functions available across all administrative pages.
* [View] refactor IAM logic and restore async in EC Shards view Remove redundant local IAM functions and ensure that delete confirmation callbacks are properly marked as async.
* [View] consolidate IAM logic in Object Store Users view Remove redundant local definitions of deleteUser and deleteAccessKey, relying on the shared utilities instead.
* [View] update generated templ files for UI consistency
* credential/postgres: remove redundant name column from service_accounts table The id is already used as the unique identifier and was being copied to the name column. This removes the name column from the schema and updates the INSERT/UPDATE queries.
* credential/filer_etc: improve logging for policy migration failures Added Errorf log if AtomicRenameEntry fails during migration to ensure visibility of common failure points.
* credential: allow uppercase characters in service account ID username Updated ServiceAccountIdPattern to allow [A-Za-z0-9_-]+ for the username component, matching the actual service account creation logic which uses the parent user name directly.
* Update object_store_users_templ.go
* admin: fix ec_shards pagination to handle numeric page arguments Updated goToPage in cluster_ec_shards.templ to accept either an Event or a numeric page argument. This prevents errors when goToPage(1) is called directly. Corrected both the .templ source and generated Go code.
* credential/filer_etc: improve service account storage robustness Added nil guard to saveServiceAccount, updated GetServiceAccount to return ErrServiceAccountNotFound for empty data, and improved deleteServiceAccount to handle response-level Filer errors.

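Several of the commits above harden file-backed stores against path traversal by validating IDs before they are used as `<saId>.json` filenames, with the pattern eventually widened to `[A-Za-z0-9_-]+`. A minimal sketch of that check (helper name is illustrative, not the actual SeaweedFS identifier):

```go
package main

import (
	"fmt"
	"regexp"
)

// serviceAccountIDPattern allows only letters, digits, underscores and
// hyphens, so a validated ID can never contain "/", "\", or ".."
// sequences when it is later embedded in a filename.
var serviceAccountIDPattern = regexp.MustCompile(`^[A-Za-z0-9_-]+$`)

func isValidServiceAccountID(id string) bool {
	return serviceAccountIDPattern.MatchString(id)
}

func main() {
	fmt.Println(isValidServiceAccountID("Backup_SA-01"))  // true
	fmt.Println(isValidServiceAccountID("../etc/passwd")) // false
}
```

Anchoring the regexp with `^` and `$` is the important part: without the anchors, a malicious ID could merely contain a valid substring.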
7e3bb4016e (4 days ago): Fix syntax error in object_store_users.templ - remove duplicate closing braces

1e09950ea7 (4 days ago): Upgrade Bootstrap from 5.3.2 to 5.3.8

74c7b10bc7 (4 days ago): Fix Chrome dialog auto-dismiss with Bootstrap modals
- Add modal-alerts.js library with Bootstrap modal replacements
- Replace all 15 confirm() calls with showConfirm/showDeleteConfirm
- Auto-override window.alert() for all alert() calls
- Fixes Chrome 132+ aggressively blocking native dialogs

6bf088cec9 (4 days ago): IAM Policy Management via gRPC (#8109)
* Add IAM gRPC service definition - Add GetConfiguration/PutConfiguration for config management - Add CreateUser/GetUser/UpdateUser/DeleteUser/ListUsers for user management - Add CreateAccessKey/DeleteAccessKey/GetUserByAccessKey for access key management - Methods mirror existing IAM HTTP API functionality
* Add IAM gRPC handlers on filer server - Implement IamGrpcServer with CredentialManager integration - Handle configuration get/put operations - Handle user CRUD operations - Handle access key create/delete operations - All methods delegate to CredentialManager for actual storage
* Wire IAM gRPC service to filer server - Add CredentialManager field to FilerOption and FilerServer - Import credential store implementations in filer command - Initialize CredentialManager from credential.toml if available - Register IAM gRPC service on filer gRPC server - Enable credential management via gRPC alongside existing filer services
* Regenerate IAM protobuf with gRPC service methods
* iam_pb: add Policy Management to protobuf definitions
* credential: implement PolicyManager in credential stores
* filer: implement IAM Policy Management RPCs
* shell: add s3.policy command
* test: add integration test for s3.policy
* test: fix compilation errors in policy_test
* pb
* fmt
* test
* weed shell: add -policies flag to s3.configure This allows linking/unlinking IAM policies to/from identities directly from the s3.configure command.
* test: verify s3.configure policy linking and fix port allocation - Added test case for linking policies to users via s3.configure - Implemented findAvailablePortPair to ensure HTTP and gRPC ports are both available, avoiding conflicts with randomized port assignments. - Updated assertion to match jsonpb output (policyNames)
* credential: add StoreTypeGrpc constant
* credential: add IAM gRPC store boilerplate
* credential: implement identity methods in gRPC store
* credential: implement policy methods in gRPC store
* admin: use gRPC credential store for AdminServer This ensures that all IAM and policy changes made through the Admin UI are persisted via the Filer's IAM gRPC service instead of direct file manipulation.
* shell: s3.configure use granular IAM gRPC APIs instead of full config patching
* shell: s3.configure use granular IAM gRPC APIs
* shell: replace deprecated ioutil with os in s3.policy
* filer: use gRPC FailedPrecondition for unconfigured credential manager
* test: improve s3.policy integration tests and fix error checks
* ci: add s3 policy shell integration tests to github workflow
* filer: fix LoadCredentialConfiguration error handling
* credential/grpc: propagate unmarshal errors in GetPolicies
* filer/grpc: improve error handling and validation
* shell: use gRPC status codes in s3.configure
* credential: document PutPolicy as create-or-replace
* credential/postgres: reuse CreatePolicy in PutPolicy to deduplicate logic
* shell: add timeout context and strictly enforce flags in s3.policy
* iam: standardize policy content field naming in gRPC and proto
* shell: extract slice helper functions in s3.configure
* filer: map credential store errors to gRPC status codes
* filer: add input validation for UpdateUser and CreateAccessKey
* iam: improve validation in policy and config handlers
* filer: ensure IAM service registration by defaulting credential manager
* credential: add GetStoreName method to manager
* test: verify policy deletion in integration test

57a16b0b87 (6 days ago): Improve error handling in GetObjectStoreUsers per PR review

e559b8df37 (6 days ago): Refactor Admin UI to use unified IAM storage and add Shutdown hook

3f879b8d2b (1 week ago): copy the aws keys

13dcf445a4 (1 week ago): Fix maintenance worker panic and add EC integration tests (#8068)
* Fix nil pointer panic in maintenance worker when receiving empty task assignment When a worker requests a task and none are available, the admin server sends an empty TaskAssignment message. The worker was attempting to log the task details without checking if the TaskId was empty, causing a nil pointer dereference when accessing taskAssign.Params.VolumeId. This fix adds a check for empty TaskId before processing the assignment, preventing worker crashes and improving stability in production environments.
* Add EC integration test for admin-worker maintenance system Adds comprehensive integration test that verifies the end-to-end flow of erasure coding maintenance tasks: - Admin server detects volumes needing EC encoding - Workers register and receive task assignments - EC encoding is executed and verified in master topology - File read-back validation confirms data integrity The test uses unique absolute working directories for each worker to prevent ID conflicts and ensure stable worker registration. Includes proper cleanup and process management for reliable test execution.
* Improve maintenance system stability and task deduplication - Add cross-type task deduplication to prevent concurrent maintenance operations on the same volume (EC, balance, vacuum) - Implement HasAnyTask check in ActiveTopology for better coordination - Increase RequestTask timeout from 5s to 30s to prevent unnecessary worker reconnections - Add TaskTypeNone sentinel for generic task checks - Update all task detectors to use HasAnyTask for conflict prevention - Improve config persistence and schema handling
* Add GitHub Actions workflow for EC integration tests Adds CI workflow that runs EC integration tests on push and pull requests to master branch. The workflow: - Triggers on changes to admin, worker, or test files - Builds the weed binary - Runs the EC integration test suite - Uploads test logs as artifacts on failure for debugging This ensures the maintenance system remains stable and worker-admin integration is validated in CI.
* go version 1.24
* address comments
* Update maintenance_integration.go
* support seconds
* ec prioritize over balancing in tests

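The nil-pointer panic described above is the classic "empty protobuf message" trap: when no task is available the assignment arrives with zero-value fields, so `Params` is nil and must not be dereferenced. A minimal sketch of the guard (simplified struct shapes and a hypothetical `handleAssignment` name, not the actual worker code):

```go
package main

import "fmt"

// TaskParams and TaskAssignment mirror the shape described above: an
// empty assignment has TaskId == "" and a nil Params pointer.
type TaskParams struct{ VolumeId uint32 }

type TaskAssignment struct {
	TaskId string
	Params *TaskParams
}

// handleAssignment returns false for empty assignments instead of
// dereferencing a nil Params, which is the panic the commit fixes.
func handleAssignment(a *TaskAssignment) bool {
	if a == nil || a.TaskId == "" {
		return false // no task available; nothing to do
	}
	// Safe: a non-empty assignment is expected to carry its params.
	fmt.Printf("starting task %s on volume %d\n", a.TaskId, a.Params.VolumeId)
	return true
}

func main() {
	handleAssignment(&TaskAssignment{}) // ignored safely, no panic
	handleAssignment(&TaskAssignment{TaskId: "t1", Params: &TaskParams{VolumeId: 7}})
}
```

Checking the cheap sentinel field (`TaskId`) before touching nested pointers is the general pattern for any optional protobuf sub-message.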
ce23c4fca7 (2 weeks ago): missing changes

6bc5a64a98 (2 weeks ago): Add access key status management to Admin UI (#8050)
* Add access key status management to Admin UI - Add Status field to AccessKeyInfo struct - Implement UpdateAccessKeyStatus API endpoint - Add status dropdown in access keys modal - Fix modal backdrop issue by using refreshAccessKeysList helper - Status can be toggled between Active and Inactive
* Replace magic strings with constants for access key status - Define AccessKeyStatusActive and AccessKeyStatusInactive constants in admin_data.go - Define STATUS_ACTIVE and STATUS_INACTIVE constants in JavaScript - Replace all hardcoded 'Active' and 'Inactive' strings with constants - Update error messages to use constants for consistency
* Remove duplicate manageAccessKeys function definition
* Add security improvements to access key status management - Add status validation in UpdateAccessKeyStatus to prevent invalid values - Fix XSS vulnerability by replacing inline onchange with data attributes - Add delegated event listener for status select changes - Add URL encoding to API request path segments

dbde8983a7 (2 weeks ago): Fix bucket permission persistence in Admin UI (#8049)
Fix bucket permission persistence and security issues (#7226)
Security Fixes:
- Fix XSS vulnerability in showModal by using DOM methods instead of template strings for title
- Add escapeHtmlForAttribute helper to properly escape all HTML entities (&, <, >, ", ')
- Fix XSS in showSecretKey and showNewAccessKeyModal by using proper HTML escaping
- Fix XSS in createAccessKeysContent by replacing inline onclick with data attributes and event delegation
Code Cleanup:
- Remove debug label "(DEBUG)" from page header
- Remove debug console.log statements from buildBucketPermissionsNew
- Remove dead functions: addBucketPermissionRow, removeBucketPermissionRow, parseBucketPermissions, buildBucketPermissions
Validation Improvements:
- Add validation in handleUpdateUser to prevent empty permissions submission
- Update buildBucketPermissionsNew to return null when no buckets selected (instead of empty array)
- Add proper error messages for validation failures
UI Improvements:
- Enhanced access key management with proper modals and copy buttons
- Improved copy-to-clipboard functionality with fallbacks
Fixes #7226

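The `escapeHtmlForAttribute` helper above escapes exactly five characters so untrusted values are safe inside HTML attribute values. A Go rendition of the same table for illustration (the original helper is JavaScript; this port is an assumption about its behavior based on the characters listed in the commit):

```go
package main

import (
	"fmt"
	"strings"
)

// escapeHTMLForAttribute escapes &, <, >, " and ' so untrusted values
// cannot break out of an HTML attribute. strings.NewReplacer works in
// a single pass over the input, so replacement output is never
// re-scanned and entities are not double-escaped.
func escapeHTMLForAttribute(s string) string {
	return strings.NewReplacer(
		"&", "&amp;",
		"<", "&lt;",
		">", "&gt;",
		`"`, "&quot;",
		"'", "&#39;",
	).Replace(s)
}

func main() {
	fmt.Println(escapeHTMLForAttribute(`key" onmouseover="alert('x')`))
}
```

Escaping both quote styles matters because attribute values may be delimited with either `"` or `'`; missing one of them is the usual source of attribute-context XSS.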
6bf0c16862 (3 weeks ago): fix admin copy text functions

e67973dc53 (3 weeks ago): Support Policy Attachment for Object Store Users (#7981)
* Implement Policy Attachment support for Object Store Users
- Added policy_names field to iam.proto and regenerated protos.
- Updated S3 API and IAM integration to support direct policy evaluation for users.
- Enhanced Admin UI to allow attaching policies to users via modals.
- Renamed 'policies' to 'policy_names' to clarify that it stores identifiers.
- Fixed syntax error in user_management.go.
* Fix policy dropdown not populating
The API returns {policies: [...]} but JavaScript was treating response as direct array.
Updated loadPolicies() to correctly access data.policies property.
* Add null safety checks for policy dropdowns
Added checks to prevent "undefined" errors when:
- Policy select elements don't exist
- Policy dropdowns haven't loaded yet
- User is being edited before policies are loaded
* Fix policy dropdown by using correct JSON field name
JSON response has lowercase 'name' field but JavaScript was accessing 'Name'.
Changed policy.Name to policy.name to match the IAMPolicy JSON structure.
* Fix policy names not being saved on user update
Changed condition from len(req.PolicyNames) > 0 to req.PolicyNames != nil
to ensure policy names are always updated when present in the request,
even if it's an empty array (to allow clearing policies).
* Add debug logging for policy names update flow
Added console.log in frontend and glog in backend to trace
policy_names data through the update process.
* Temporarily disable auto-reload for debugging
Commented out window.location.reload() so console logs are visible
when updating a user.
* Add detailed debug logging and alert for policy selection
Added console.log for each step and an alert to show policy_names value
to help diagnose why it's not being included in the request.
* Regenerate templ files for object_store_users
Ran templ generate to ensure _templ.go files are up to date with
the latest .templ changes including debug logging.
* Remove debug logging and restore normal functionality
Cleaned up temporary debug code (console.log and alert statements)
and re-enabled automatic page reload after user update.
* Add step-by-step alert debugging for policy update
Added 5 alert checkpoints to trace policy data through the update flow:
1. Check if policiesSelect element exists
2. Show selected policy values
3. Show userData.policy_names
4. Show full request body
5. Confirm server response
Temporarily disabled auto-reload to see alerts.
* Add version check alert on page load
Added alert on DOMContentLoaded to verify new JavaScript is being executed
and not cached by the browser.
* Compile templates using make
Ran make to compile all template files and install the weed binary.
* Add button click detection and make handleUpdateUser global
- Added inline alert on button click to verify click is detected
- Made handleUpdateUser a window-level function to ensure it's accessible
- Added alert at start of handleUpdateUser function
* Fix handleUpdateUser scope issue - remove duplicate definition
Removed duplicate function definition that was inside DOMContentLoaded.
Now handleUpdateUser is defined only once in global scope (line 383)
making it accessible when button onclick fires.
* Remove all duplicate handleUpdateUser definitions
Now handleUpdateUser is defined only once at the very top of the script
block (line 352), before DOMContentLoaded, ensuring it's available when
the button onclick fires.
* Add function existence check and error catching
Added alerts to check if handleUpdateUser is defined and wrapped
the function call in try-catch to capture any JavaScript errors.
Also added console.log statements to verify function definition.
* Simplify handleUpdateUser to non-async for testing
Removed async/await and added early return to test if function
can be called at all. This will help identify if async is causing
the issue.
* Add cache-control headers to prevent browser caching
Added no-cache headers to ShowObjectStoreUsers handler to prevent
aggressive browser caching of inline JavaScript in the HTML page.
* Fix syntax error - make handleUpdateUser async
Changed function back to async to fix 'await is only valid in async functions' error.
The cache-control headers are working - browser is now loading new code.
* Update version check to v3 to verify cache busting
Changed version alert to 'v3 - WITH EARLY RETURN' to confirm
the new code with early return statement is being loaded.
* Remove all debug code - clean implementation
Removed all alerts, console.logs, and test code.
Implemented clean policy update functionality with proper error handling.
* Add ETag header for cache-busting and update walkthrough
* Fix policy pre-selection in Edit User modal
- Updated admin.js editUser function to pre-select policies
- Root cause: duplicate editUser in admin.js overwrote inline version
- Added policy pre-selection logic to match inline template
- Verified working in browser: policies now pre-select correctly
* Fix policy persistence in handleUpdateUser
- Added policy_names field to userData payload in handleUpdateUser
- Policies were being lost because handleUpdateUser only sent email and actions
- Now collects selected policies from editPolicies dropdown
- Verified working: policies persist correctly across updates
* Fix XSS vulnerability in access keys display
- Escape HTML in access key display using escapeHtml utility
- Replace inline onclick handlers with data attributes
- Add event delegation for delete access key buttons
- Prevents script injection via malicious access key values
* Fix additional XSS vulnerabilities in user details display
- Escape HTML in actions badges (line 626)
- Escape HTML in policy_names badges (line 636)
- Prevents script injection via malicious action or policy names
* Fix XSS vulnerability in loadPolicies function
- Replace innerHTML string concatenation with DOM API
- Use createElement and textContent for safe policy name insertion
- Prevents script injection via malicious policy names
- Apply same pattern to both create and edit select elements
* Remove debug logging from UpdateObjectStoreUser
- Removed glog.V(0) debug statements
- Clean up temporary debugging code before production
* Remove duplicate handleUpdateUser function
- Removed inline handleUpdateUser that duplicated admin.js logic
- Removed debug console.log statement
- admin.js version is now the single source of truth
- Eliminates maintenance burden of keeping two versions in sync
* Refine user management and address code review feedback
- Preserve PolicyNames in UpdateUserPolicies
- Allow clearing actions in UpdateObjectStoreUser by checking for nil
- Remove version comment from object_store_users.templ
- Refactor loadPolicies for DRYness using cloneNode while keeping DOM API security
* IAM Authorization for Static Access Keys
* verified XSS Fixes in Templates
* fix div

d15f32ae46 (3 weeks ago): feat: add flags to disable WebDAV and Admin UI in weed mini (#7971)
* feat: add flags to disable WebDAV and Admin UI in weed mini - Add -webdav flag (default: true) to optionally disable WebDAV server - Add -admin.ui flag (default: true) to optionally disable Admin UI only (server still runs) - Conditionally skip WebDAV service startup based on flag - Pass disableUI flag to SetupRoutes to skip UI route registration - Admin server still runs for gRPC and API access when UI is disabled Addresses issue from https://github.com/seaweedfs/seaweedfs/pull/7833#issuecomment-3711924150
* refactor: use positive enableUI parameter instead of disableUI across admin server and handlers
* docs: update mini welcome message to list enabled components
* chore: remove unused welcomeMessageTemplate constant
* docs: split S3 credential message into separate sb.WriteString calls

24556ebdcc (4 weeks ago): Refine Bucket Size Metrics: Logical and Physical Size (#7943)
* refactor: implement logical size calculation with replication factor using dedicated helper
* ui: update bucket list to show logical/physical size

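The logical/physical distinction above is just the replication factor applied in reverse: physical bytes count every replica on disk, logical bytes count the data once. A sketch of the dedicated helper the commit mentions (the function name and the plain integer replica count are assumptions; the real code derives the count from the bucket's replication setting):

```go
package main

import "fmt"

// logicalSize derives the logical (user-visible) size from physical
// bytes on disk, given how many copies of each byte exist.
func logicalSize(physicalBytes uint64, replicaCount int) uint64 {
	if replicaCount <= 0 {
		replicaCount = 1 // guard against unset replication config
	}
	return physicalBytes / uint64(replicaCount)
}

func main() {
	// 3 GiB on disk with 3 copies is 1 GiB of logical data.
	fmt.Println(logicalSize(3<<30, 3)) // 1073741824
}
```

Showing both numbers side by side, as the UI change does, avoids the common confusion where a bucket appears several times larger than the data actually written to it.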
31a4f57cd9 (4 weeks ago): Fix: Add -admin.grpc flag to worker for explicit gRPC port (#7926) (#7927)
* Fix: Add -admin.grpc flag to worker for explicit gRPC port configuration
* Fix(helm): Add adminGrpcServer to worker configuration
* Refactor: Support host:port.grpcPort address format, revert -admin.grpc flag
* Helm: Conditionally append grpcPort to worker admin address
* weed/admin: fix "send on closed channel" panic in worker gRPC server Make unregisterWorker connection-aware to prevent closing channels belonging to newer connections.
* weed/worker: improve gRPC client stability and logging - Fix goroutine leak in reconnection logic - Refactor reconnection loop to exit on success and prevent busy-waiting - Add session identification and enhanced logging to client handlers - Use constant for internal reset action and remove unused variables
* weed/worker: fix worker state initialization and add lifecycle logs - Revert workerState to use running boolean correctly - Prevent handleStart failing by checking running state instead of startTime - Add more detailed logs for worker startup events

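The `host:port.grpcPort` format above needs careful parsing because hostnames themselves contain dots. One way to disambiguate is to treat a trailing `.grpcPort` as present only when the last dot comes after the last colon. A sketch under assumptions: the helper name is hypothetical, and the port+10000 fallback mirrors SeaweedFS's usual HTTP-to-gRPC port convention:

```go
package main

import (
	"fmt"
	"net"
	"strconv"
	"strings"
)

// splitAdminAddress parses "host:port" or "host:port.grpcPort".
// A dot after the last colon marks an explicit gRPC port; otherwise
// the gRPC port defaults to the HTTP port + 10000.
func splitAdminAddress(addr string) (hostPort string, grpcPort int, err error) {
	colon := strings.LastIndex(addr, ":")
	dot := strings.LastIndex(addr, ".")
	if dot > colon { // explicit "host:port.grpcPort" form
		grpcPort, err = strconv.Atoi(addr[dot+1:])
		if err != nil {
			return "", 0, fmt.Errorf("bad grpc port in %q: %v", addr, err)
		}
		return addr[:dot], grpcPort, nil
	}
	_, portStr, err := net.SplitHostPort(addr)
	if err != nil {
		return "", 0, err
	}
	port, err := strconv.Atoi(portStr)
	if err != nil {
		return "", 0, err
	}
	return addr, port + 10000, nil
}

func main() {
	hp, gp, _ := splitAdminAddress("worker-admin.local:23646.33646")
	fmt.Println(hp, gp) // worker-admin.local:23646 33646
}
```

The dot-after-colon rule is what lets `admin.example.com:23646` (dotted hostname, no explicit gRPC port) and `admin.example.com:23646.33646` both parse correctly, which is why the address format could replace the reverted `-admin.grpc` flag.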
|
|
b6d99f1c9e
|
Admin: Add Service Account Management UI (#7902)
* admin: add Service Account management UI Add admin UI for managing service accounts: New files: - handlers/service_account_handlers.go - HTTP handlers - dash/service_account_management.go - CRUD operations - view/app/service_accounts.templ - UI template Changes: - dash/types.go - Add ServiceAccount and related types - handlers/admin_handlers.go - Register routes and handlers - view/layout/layout.templ - Add sidebar navigation link Service accounts are stored as special identities with "sa:" prefix in their name, using ABIA access key prefix. They can be created, listed, enabled/disabled, and deleted through the admin UI. Features: - Create service accounts linked to parent users - View and manage service account status - Delete service accounts - Service accounts inherit parent user permissions Note: STS configuration is read-only (configured via JSON file). Full STS integration requires changes from PR #7901.
* admin: use dropdown for parent user selection Change the Parent User field from text input to dropdown when creating a service account. The dropdown is populated with all existing Object Store users. Changes: - Add AvailableUsers field to ServiceAccountsData type - Populate available users in getServiceAccountsData handler - Update template to use <select> element with user options
* admin: show secret access key on service account creation Display both access key and secret access key when creating a service account, with proper AWS CLI usage instructions. Changes: - Add SecretAccessKey field to ServiceAccount type (only populated on creation) - Return secret key from CreateServiceAccount - Add credentials modal with copy-to-clipboard buttons - Show AWS CLI usage example with actual credentials - Modal is non-dismissible until user confirms they saved credentials The secret key is only shown once during creation for security. After creation, only the access key ID is visible in the list.
* admin: address code review comments for service account management - Persist creation dates in identity actions (createdAt:timestamp) - Replace magic number slicing with len(accessKeyPrefix) - Add bounds checking after strings.SplitN - Use accessKeyPrefix constant instead of hardcoded "ABIA" Creation dates are now stored as actions (e.g., "createdAt:1735473600") and will persist across restarts. Helper functions getCreationDate() and setCreationDate() manage the timestamp storage. Addresses review comments from gemini-code-assist[bot] and coderabbitai[bot]
* admin: fix XSS vulnerabilities in service account details Replace innerHTML with template literals with safe DOM creation. The createSADetailsContent function now uses createElement and textContent to prevent XSS attacks from malicious service account data (id, description, parent_user, etc.). Also added try-catch for date parsing to prevent exceptions on malformed input. Addresses security review comments from coderabbitai[bot]
* admin: add context.Context to service account management methods Addressed PR #7902 review feedback: 1. All service account management methods now accept context.Context as first parameter to enable cancellation, deadlines, and tracing 2. Removed all context.Background() calls 3. Updated handlers to pass c.Request.Context() from HTTP requests Methods updated: - GetServiceAccounts - GetServiceAccountDetails - CreateServiceAccount - UpdateServiceAccount - DeleteServiceAccount - GetServiceAccountByAccessKey Note: Creation date persistence was already implemented using the createdAt:<timestamp> action pattern as suggested in the review.
* admin: fix render flow to prevent partial HTML writes Fixed ShowServiceAccounts handler to render template to an in-memory buffer first before writing to the response. This prevents partial HTML writes followed by JSON error responses, which would result in invalid mixed content. Changes: - Render to bytes.Buffer first - Only write to c.Writer if render succeeds - Use c.AbortWithStatus on error instead of attempting JSON response - Prevents any additional headers/body writes after partial write
* admin: fix error handling, date validation, and event parameters Addressed multiple code review issues: 1. Proper 404 vs 500 error handling: - Added ErrServiceAccountNotFound sentinel error - GetServiceAccountDetails now wraps errors with sentinel - Handler uses errors.Is() to distinguish not-found from internal errors - Returns 404 only for missing resources, 500 for other errors - Logs internal errors before returning 500 2. Date validation in JavaScript: - Validate expiration date before using it - Check !isNaN(date.getTime()) to ensure valid date - Return validation error if date is invalid - Prevents invalid Date construction 3. Event parameter handling: - copyToClipboard now accepts event parameter - Updated onclick attributes to pass event object - Prevents reliance on window.event - More explicit and reliable event handling
* admin: replace deprecated execCommand with Clipboard API Replaced deprecated document.execCommand('copy') with modern navigator.clipboard.writeText() API for better security and UX. Changes: - Made copyToClipboard async to support Clipboard API - Use navigator.clipboard.writeText() as primary method - Fallback to execCommand if Clipboard API fails (older browsers) - Added console warning when fallback is used - Maintains same visual feedback behavior
* admin: improve security and UX for error handling Addressed code review feedback: 1. Security: Remove sensitive error details from API responses - CreateServiceAccount: Return generic error message - UpdateServiceAccount: Return generic error message - DeleteServiceAccount: Return generic error message - Detailed errors still logged server-side via glog.Errorf() - Prevents exposure of internal system details to clients 2. UX: Replace alert() with Bootstrap toast notifications - Implemented showToast() function using Bootstrap 5 toasts - Non-blocking, modern notification system - Auto-dismiss after 5 seconds - Proper HTML escaping to prevent XSS - Toast container positioned at top-right - Success (green) and error (red) variants
* admin: complete error handling improvements Addressed remaining security review feedback: 1. GetServiceAccounts: Remove error details from response - Log errors server-side via glog.Errorf() - Return generic error message to client 2. UpdateServiceAccount & DeleteServiceAccount: - Wrap not-found errors with ErrServiceAccountNotFound sentinel - Enables proper 404 vs 500 distinction in handlers 3. Update & Delete handlers: - Added errors.Is() check for ErrServiceAccountNotFound - Return 404 for missing resources - Return 500 for internal errors with logging - Consistent with GetServiceAccountDetails behavior All handlers now properly distinguish not-found (404) from internal errors (500) and never expose sensitive error details to clients.
* admin: implement expiration support and improve code quality Addressed final code review feedback: 1. Expiration Support: - Added expiration helper functions (getExpiration, setExpiration) - Implemented expiration in CreateServiceAccount - Implemented expiration in UpdateServiceAccount - Added Expiration field to ServiceAccount struct - Parse and validate RFC3339 expiration dates 2. Constants for Magic Strings: - Added StatusActive, StatusInactive constants - Added disabledAction, serviceAccountPrefix constants - Replaced all magic strings with constants throughout - Improves maintainability and prevents typos 3. Helper Function to Reduce Duplication: - Created identityToServiceAccount() helper - Reduces code duplication across Get/Update/Delete methods - Centralizes ServiceAccount struct building logic 4. Fixed time.Now() Fallback: - Changed from time.Now() to time.Time{} for legacy accounts - Prevents creation date from changing on each fetch - UI can display zero time as "N/A" or blank All code quality issues addressed!
* admin: fix StatusActive reference in handler Use dash.StatusActive to properly reference the constant from the dash package.
* admin: regenerate templ files Regenerated all templ Go files after recent template changes. The AWS CLI usage example already uses proper <pre><code> formatting which preserves line breaks for better readability.
* admin: add explicit white-space CSS to AWS CLI example Added style="white-space: pre-wrap;" to the pre tag to ensure line breaks are preserved and displayed correctly in all browsers. This forces the browser to respect the newlines in the code block.
* admin: fix AWS CLI example to display on separate lines Replaced pre/code block with individual div elements for each line. This ensures each command displays on its own line regardless of how templ processes whitespace. Each line is now a separate div with font-monospace styling for code appearance.
* make
* admin: filter service accounts from parent user dropdown Service accounts should not appear as selectable parent users when creating new service accounts. Added filter to GetObjectStoreUsers() to skip identities with "sa:" prefix, ensuring only actual IAM users are shown in the parent user dropdown.
* admin: address code review feedback - Use constants for magic strings in service account management - Add Expiration field to service account responses - Add nil checks and context propagation - Improve templates (date validation, async clipboard, toast notifications)
* Update service_accounts_templ.go
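The commits above describe two storage conventions: service accounts are ordinary identities whose name carries an "sa:" prefix, and the creation date is persisted as a "createdAt:<timestamp>" action so it survives restarts (with the zero time, not time.Now(), as the fallback for legacy accounts). A minimal Go sketch of that scheme — helper and constant names here are illustrative, not the actual admin dash code:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// Hypothetical constants mirroring the conventions described in the commits.
const (
	serviceAccountPrefix = "sa:"
	createdAtPrefix      = "createdAt:"
)

// isServiceAccount reports whether an identity name denotes a service account.
func isServiceAccount(identityName string) bool {
	return strings.HasPrefix(identityName, serviceAccountPrefix)
}

// getCreationDate scans an identity's actions for a "createdAt:<unix>" entry.
// It returns the zero time.Time when no valid entry exists, so legacy accounts
// render as "N/A" instead of picking up a fresh time.Now() on every fetch.
func getCreationDate(actions []string) time.Time {
	for _, a := range actions {
		if !strings.HasPrefix(a, createdAtPrefix) {
			continue
		}
		ts, err := strconv.ParseInt(a[len(createdAtPrefix):], 10, 64)
		if err != nil {
			continue // malformed entry: keep scanning
		}
		return time.Unix(ts, 0).UTC()
	}
	return time.Time{}
}

func main() {
	actions := []string{"Read", "createdAt:1735473600"}
	fmt.Println(isServiceAccount("sa:backup-bot"))          // true
	fmt.Println(getCreationDate(actions).Year())            // 2024
	fmt.Println(getCreationDate([]string{"Read"}).IsZero()) // true: legacy account
}
```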
1 month ago

6b98b52acc
Fix reporting of EC shard sizes from nodes to masters. (#7835)
SeaweedFS tracks EC shard sizes on topology data structures, but this information is never
relayed to master servers. As a result, commands reporting disk usage, such
as `volume.list` and `cluster.status`, yield incorrect figures when EC shards are present.
As an example, for a simple 5-node test cluster, before...
```
> volume.list
Topology volumeSizeLimit:30000 MB hdd(volume:6/40 active:6 free:33 remote:0)
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9001 hdd(volume:1/8 active:1 free:7 remote:0)
Disk hdd(volume:1/8 active:1 free:7 remote:0) id:0
volume id:3 size:88967096 file_count:172 replica_placement:2 version:3 modified_at_second:1766349617
ec volume id:1 collection: shards:[1 5]
Disk hdd total size:88967096 file_count:172
DataNode 192.168.10.111:9001 total size:88967096 file_count:172
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9002 hdd(volume:2/8 active:2 free:6 remote:0)
Disk hdd(volume:2/8 active:2 free:6 remote:0) id:0
volume id:2 size:77267536 file_count:166 replica_placement:2 version:3 modified_at_second:1766349617
volume id:3 size:88967096 file_count:172 replica_placement:2 version:3 modified_at_second:1766349617
ec volume id:1 collection: shards:[0 4]
Disk hdd total size:166234632 file_count:338
DataNode 192.168.10.111:9002 total size:166234632 file_count:338
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9003 hdd(volume:1/8 active:1 free:7 remote:0)
Disk hdd(volume:1/8 active:1 free:7 remote:0) id:0
volume id:2 size:77267536 file_count:166 replica_placement:2 version:3 modified_at_second:1766349617
ec volume id:1 collection: shards:[2 6]
Disk hdd total size:77267536 file_count:166
DataNode 192.168.10.111:9003 total size:77267536 file_count:166
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9004 hdd(volume:2/8 active:2 free:6 remote:0)
Disk hdd(volume:2/8 active:2 free:6 remote:0) id:0
volume id:2 size:77267536 file_count:166 replica_placement:2 version:3 modified_at_second:1766349617
volume id:3 size:88967096 file_count:172 replica_placement:2 version:3 modified_at_second:1766349617
ec volume id:1 collection: shards:[3 7]
Disk hdd total size:166234632 file_count:338
DataNode 192.168.10.111:9004 total size:166234632 file_count:338
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9005 hdd(volume:0/8 active:0 free:8 remote:0)
Disk hdd(volume:0/8 active:0 free:8 remote:0) id:0
ec volume id:1 collection: shards:[8 9 10 11 12 13]
Disk hdd total size:0 file_count:0
Rack DefaultRack total size:498703896 file_count:1014
DataCenter DefaultDataCenter total size:498703896 file_count:1014
total size:498703896 file_count:1014
```
...and after:
```
> volume.list
Topology volumeSizeLimit:30000 MB hdd(volume:6/40 active:6 free:33 remote:0)
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9001 hdd(volume:1/8 active:1 free:7 remote:0)
Disk hdd(volume:1/8 active:1 free:7 remote:0) id:0
volume id:2 size:81761800 file_count:161 replica_placement:2 version:3 modified_at_second:1766349495
ec volume id:1 collection: shards:[1 5 9] sizes:[1:8.00 MiB 5:8.00 MiB 9:8.00 MiB] total:24.00 MiB
Disk hdd total size:81761800 file_count:161
DataNode 192.168.10.111:9001 total size:81761800 file_count:161
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9002 hdd(volume:1/8 active:1 free:7 remote:0)
Disk hdd(volume:1/8 active:1 free:7 remote:0) id:0
volume id:3 size:88678712 file_count:170 replica_placement:2 version:3 modified_at_second:1766349495
ec volume id:1 collection: shards:[11 12 13] sizes:[11:8.00 MiB 12:8.00 MiB 13:8.00 MiB] total:24.00 MiB
Disk hdd total size:88678712 file_count:170
DataNode 192.168.10.111:9002 total size:88678712 file_count:170
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9003 hdd(volume:2/8 active:2 free:6 remote:0)
Disk hdd(volume:2/8 active:2 free:6 remote:0) id:0
volume id:2 size:81761800 file_count:161 replica_placement:2 version:3 modified_at_second:1766349495
volume id:3 size:88678712 file_count:170 replica_placement:2 version:3 modified_at_second:1766349495
ec volume id:1 collection: shards:[0 4 8] sizes:[0:8.00 MiB 4:8.00 MiB 8:8.00 MiB] total:24.00 MiB
Disk hdd total size:170440512 file_count:331
DataNode 192.168.10.111:9003 total size:170440512 file_count:331
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9004 hdd(volume:2/8 active:2 free:6 remote:0)
Disk hdd(volume:2/8 active:2 free:6 remote:0) id:0
volume id:2 size:81761800 file_count:161 replica_placement:2 version:3 modified_at_second:1766349495
volume id:3 size:88678712 file_count:170 replica_placement:2 version:3 modified_at_second:1766349495
ec volume id:1 collection: shards:[2 6 10] sizes:[2:8.00 MiB 6:8.00 MiB 10:8.00 MiB] total:24.00 MiB
Disk hdd total size:170440512 file_count:331
DataNode 192.168.10.111:9004 total size:170440512 file_count:331
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9005 hdd(volume:0/8 active:0 free:8 remote:0)
Disk hdd(volume:0/8 active:0 free:8 remote:0) id:0
ec volume id:1 collection: shards:[3 7] sizes:[3:8.00 MiB 7:8.00 MiB] total:16.00 MiB
Disk hdd total size:0 file_count:0
Rack DefaultRack total size:511321536 file_count:993
DataCenter DefaultDataCenter total size:511321536 file_count:993
total size:511321536 file_count:993
```
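After the fix, each node line carries per-shard sizes ("sizes:[1:8.00 MiB …]") and a per-node total. The aggregation can be sketched as a per-shard size map summed per node — the type and method names below are illustrative, not SeaweedFS's actual topology structures:

```go
package main

import (
	"fmt"
	"sort"
)

// ecShardSizes is a hypothetical map from EC shard id to its size in bytes,
// standing in for the per-shard sizes that volume servers now report to the
// master alongside the shard bitmap.
type ecShardSizes map[int]int64

// totalSize sums all reported shard sizes, as used to print the per-node
// "total:24.00 MiB" figure.
func (s ecShardSizes) totalSize() int64 {
	var total int64
	for _, sz := range s {
		total += sz
	}
	return total
}

// shardIDs returns the sorted shard ids, matching the "shards:[1 5 9]" output.
func (s ecShardSizes) shardIDs() []int {
	ids := make([]int, 0, len(s))
	for id := range s {
		ids = append(ids, id)
	}
	sort.Ints(ids)
	return ids
}

func main() {
	const mib = int64(1 << 20)
	node := ecShardSizes{1: 8 * mib, 5: 8 * mib, 9: 8 * mib}
	fmt.Println(node.shardIDs()) // [1 5 9]
	fmt.Printf("%.2f MiB\n", float64(node.totalSize())/float64(mib)) // 24.00 MiB
}
```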
1 month ago

a3c090e606
adjust layout
1 month ago

6de6061ce9
admin: add cursor-based pagination to file browser (#7891)
* adjust menu items
* admin: add cursor-based pagination to file browser - Implement cursor-based pagination using lastFileName parameter - Add customizable page size selector (20/50/100/200 entries) - Add compact pagination controls in header and footer - Remove summary cards for cleaner UI - Make directory names clickable to return to first page - Support forward-only navigation (Next button) - Preserve cursor position when changing page size - Remove sorting to align with filer's storage order approach
* Update file_browser_templ.go
* admin: remove directory icons from breadcrumbs
* Update file_browser_templ.go
* admin: address PR comments - Fix fragile EOF check: use io.EOF instead of string comparison - Cap page size at 200 to prevent potential DoS - Remove unused helper functions from template - Use safer templ script for page size selector to prevent XSS
* admin: cleanup redundant first button
* Update file_browser_templ.go
* admin: remove entry counting logic
* admin: remove unused variables in file browser data
* admin: remove unused logic for FirstFileName and HasPrevPage
* admin: remove unused TotalEntries and TotalSize fields
* Update file_browser_data.go
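The scheme described above is cursor-based: the client passes the last file name it saw, and the server resumes listing after it in the filer's stable storage order, with forward-only navigation. A rough, self-contained Go sketch of the idea over an in-memory slice — function and parameter names are illustrative, not the actual filer API:

```go
package main

import "fmt"

// listPage returns the next pageSize entries after lastFileName. An empty
// cursor (or a cursor that no longer exists, e.g. the file was deleted)
// restarts from the beginning in this simplified sketch.
func listPage(entries []string, lastFileName string, pageSize int) (page []string, nextCursor string, hasMore bool) {
	start := 0
	if lastFileName != "" {
		// Entries are assumed to be in the filer's storage order; scan past
		// the cursor instead of using a numeric offset.
		for i, name := range entries {
			if name == lastFileName {
				start = i + 1
				break
			}
		}
	}
	end := start + pageSize
	if end > len(entries) {
		end = len(entries)
	}
	page = entries[start:end]
	if len(page) > 0 {
		nextCursor = page[len(page)-1]
	}
	return page, nextCursor, end < len(entries)
}

func main() {
	files := []string{"a.txt", "b.txt", "c.txt", "d.txt", "e.txt"}
	page, cursor, more := listPage(files, "", 2)
	fmt.Println(page, cursor, more) // [a.txt b.txt] b.txt true
	page, cursor, more = listPage(files, cursor, 2)
	fmt.Println(page, cursor, more) // [c.txt d.txt] d.txt true
}
```

Because only the last name is carried between requests, there is no stable notion of a "previous page" — which is why the UI supports Next-only navigation.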
1 month ago

c260e6a22e
Fix issue #7880: Tasks use Volume IDs instead of ip:port (#7881)
* Fix issue #7880: Tasks use Volume IDs instead of ip:port When volume servers are registered with custom IDs, tasks were attempting to connect using the ID instead of the actual ip:port address, causing connection failures. Modified task detection logic in balance, erasure coding, and vacuum tasks to resolve volume server IDs to their actual ip:port addresses using ActiveTopology information.
* Use server addresses directly instead of translating from IDs Modified VolumeHealthMetrics to include ServerAddress field populated directly from topology DataNodeInfo.Address. Updated task detection logic to use addresses directly without runtime lookups. Changes: - Added ServerAddress field to VolumeHealthMetrics - Updated maintenance scanner to populate ServerAddress - Modified task detection to use ServerAddress for Node fields - Updated DestinationPlan to include TargetAddress - Removed runtime address lookups in favor of direct address usage
* Address PR comments: add ServerAddress field, improve error handling - Add missing ServerAddress field to VolumeHealthMetrics struct - Add warning in vacuum detection when server not found in topology - Improve error handling in erasure coding to abort task if sources missing - Make vacuum task stricter by skipping if server not found in topology
* Refactor: Extract common address resolution logic into shared utility - Created weed/worker/tasks/util/address.go with ResolveServerAddress function - Updated balance, erasure_coding, and vacuum detection to use the shared utility - Removed code duplication and improved maintainability - Consistent error handling across all task types
* Fix critical issues in task address resolution - Vacuum: Require topology availability and fail if server not found (no fallback to ID) - Ensure all task types consistently fail early when topology is incomplete - Prevent creation of tasks that would fail due to missing server addresses
* Address additional PR feedback - Add validation for empty addresses in ResolveServerAddress - Remove redundant serverAddress variable in vacuum detection - Improve robustness of address resolution
* Improve error logging in vacuum detection - Include actual error details in log message for better diagnostics - Make error messages consistent with other task types
1 month ago

225e3d0302
Add read only user (#7862)
* add readonly user
* add args
* address comments
* avoid same user name
* Prevents timing attacks
* doc

---------

Co-authored-by: Chris Lu <chris.lu@gmail.com>
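"Prevents timing attacks" typically means comparing presented credentials against stored ones in constant time, so an attacker cannot learn a secret byte-by-byte from response latency. A hedged Go sketch of one common approach — not necessarily the exact comparison used in this PR:

```go
package main

import (
	"crypto/sha256"
	"crypto/subtle"
	"fmt"
)

// checkCredential compares a presented secret against the stored one in
// constant time. Hashing both sides first also hides length differences,
// since subtle.ConstantTimeCompare returns early for unequal-length inputs.
func checkCredential(presented, stored string) bool {
	p := sha256.Sum256([]byte(presented))
	s := sha256.Sum256([]byte(stored))
	return subtle.ConstantTimeCompare(p[:], s[:]) == 1
}

func main() {
	fmt.Println(checkCredential("s3cret", "s3cret")) // true
	fmt.Println(checkCredential("guess", "s3cret"))  // false
}
```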
1 month ago

1261e93ef2
fix: comprehensive go vet error fixes and add CI enforcement (#7861)
* fix: use keyed fields in struct literals - Replace unsafe reflect.StringHeader/SliceHeader with safe unsafe.String/Slice (weed/query/sqltypes/unsafe.go) - Add field names to Type_ScalarType struct literals (weed/mq/schema/schema_builder.go) - Add Duration field name to FlexibleDuration struct literals across test files - Add field names to bson.D struct literals (weed/filer/mongodb/mongodb_store_kv.go) Fixes go vet warnings about unkeyed struct literals.
* fix: remove unreachable code - Remove unreachable return statements after infinite for loops - Remove unreachable code after if/else blocks where all paths return - Simplify recursive logic by removing unnecessary for loop (inode_to_path.go) - Fix Type_ScalarType literal to use enum value directly (schema_builder.go) - Call onCompletionFn on stream error (subscribe_session.go) Files fixed: - weed/query/sqltypes/unsafe.go - weed/mq/schema/schema_builder.go - weed/mq/client/sub_client/connect_to_sub_coordinator.go - weed/filer/redis3/ItemList.go - weed/mq/client/agent_client/subscribe_session.go - weed/mq/broker/broker_grpc_pub_balancer.go - weed/mount/inode_to_path.go - weed/util/skiplist/name_list.go
* fix: avoid copying lock values in protobuf messages - Use proto.Merge() instead of direct assignment to avoid copying sync.Mutex in S3ApiConfiguration (iamapi_server.go) - Add explicit comments noting that channel-received values are already copies before taking addresses (volume_grpc_client_to_master.go) The protobuf messages contain sync.Mutex fields from the message state, which should not be copied. Using proto.Merge() properly merges messages without copying the embedded mutex.
* fix: correct byte array size for uint32 bit shift operations The generateAccountId() function only needs 4 bytes to create a uint32 value. Changed from allocating 8 bytes to 4 bytes to match the actual usage. This fixes go vet warning about shifting 8-bit values (bytes) by more than 8 bits.
* fix: ensure context cancellation on all error paths In broker_client_subscribe.go, ensure subscriberCancel() is called on all error return paths: - When stream creation fails - When partition assignment fails - When sending initialization message fails This prevents context leaks when an error occurs during subscriber creation.
* fix: ensure subscriberCancel called for CreateFreshSubscriber stream.Send error Ensure subscriberCancel() is called when stream.Send fails in CreateFreshSubscriber.
* ci: add go vet step to prevent future lint regressions - Add go vet step to GitHub Actions workflow - Filter known protobuf lock warnings (MessageState sync.Mutex) These are expected in generated protobuf code and are safe - Prevents accumulation of go vet errors in future PRs - Step runs before build to catch issues early
* fix: resolve remaining syntax and logic errors in vet fixes - Fixed syntax errors in filer_sync.go caused by missing closing braces - Added missing closing brace for if block and function - Synchronized fixes to match previous commits on branch
* fix: add missing return statements to daemon functions - Add 'return false' after infinite loops in filer_backup.go and filer_meta_backup.go - Satisfies declared bool return type signatures - Maintains consistency with other daemon functions (runMaster, runFilerSynchronize, runWorker) - While unreachable, explicitly declares the return satisfies function signature contract
* fix: add nil check for onCompletionFn in SubscribeMessageRecord - Check if onCompletionFn is not nil before calling it - Prevents potential panic if nil function is passed - Matches pattern used in other callback functions
* docs: clarify unreachable return statements in daemon functions - Add comments documenting that return statements satisfy function signature - Explains that these returns follow infinite loops and are unreachable - Improves code clarity for future maintainers
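The reflect.StringHeader/SliceHeader replacement mentioned in the first bullet corresponds to the zero-copy conversion helpers added in Go 1.20. A sketch of the safe pattern — not the actual weed/query/sqltypes code:

```go
package main

import (
	"fmt"
	"unsafe"
)

// bytesToString converts a byte slice to a string without copying, using the
// Go 1.20+ unsafe.String helper instead of the deprecated, go vet-flagged
// reflect.StringHeader pattern. The caller must not mutate b afterwards,
// since strings are assumed immutable.
func bytesToString(b []byte) string {
	if len(b) == 0 {
		return ""
	}
	return unsafe.String(&b[0], len(b))
}

// stringToBytes is the reverse, via unsafe.StringData and unsafe.Slice.
// The returned slice must be treated as read-only.
func stringToBytes(s string) []byte {
	if len(s) == 0 {
		return nil
	}
	return unsafe.Slice(unsafe.StringData(s), len(s))
}

func main() {
	b := []byte("hello")
	fmt.Println(bytesToString(b))            // hello
	fmt.Println(string(stringToBytes("ok"))) // ok
}
```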
1 month ago

9c784cf9e2
fix: use path to handle urls in weed admin file browser (#7858)
* fix: use path instead of filepath to handle urls in weed admin file browser
* test: add comprehensive tests for file browser path handling - Test breadcrumb generation for various path scenarios - Test path handling with forward slashes (URL compatibility) - Test parent path calculation for Windows compatibility - Test file extension handling using path.Ext - Test bucket path detection logic These tests verify that the switch from filepath to path package works correctly and handles URLs properly across all platforms.
* refactor: simplify fullPath construction using path.Join Replace verbose manual path construction with path.Join which: - Handles trailing slashes automatically - Is more concise and readable - Is more robust for edge cases
* fix: normalize path in ShowFileBrowser and rename generateBreadcrumbs parameter Critical fix: - Add util.CleanWindowsPath() normalization to path parameter in ShowFileBrowser handler, matching the pattern used in other file operation handlers (lines 273, 464) - This ensures Windows-style backslashes are converted to forward slashes before processing, fixing path handling issues on Windows Consistency improvement: - Rename path parameter to dir in generateBreadcrumbs function - Aligns with parameter rename in GetFileBrowser for consistent naming throughout the file
* test: improve coverage for Windows path handling and production code behavior Address reviewer feedback by enhancing test quality: 1. Improved test documentation: - Added clear comments explaining what each test validates - Clarified that some tests validate expected behavior vs production code - Documented the Windows path normalization flow 2. Enhanced actual production code testing: - TestGenerateBreadcrumbs: Calls actual production function - TestBreadcrumbPathFormatting: Validates production output format - TestDirectoryNavigation: Integration-style test for complete flow 3. Added new test functions for better coverage: - TestPathJoinHandlesEdgeCases: Verifies path.Join behavior - TestWindowsPathNormalizationBehavior: Documents expected normalization - TestDirectoryNavigation: Complete navigation flow test 4. Improved test organization: - Fixed duplicate field naming issues - Better test names for clarity - More comprehensive edge case coverage These improvements ensure the fix for issue #7628 (Windows path handling) is properly validated across the complete flow from handler to path logic.
* test: use actual util.CleanWindowsPath function in Windows path normalization test Address reviewer feedback by testing the actual production function: - Import util package for CleanWindowsPath - Call the real util.CleanWindowsPath() instead of reimplementing logic - Ensures test validates actual implementation, not just expected behavior - Added more test cases for edge cases (simple path, deep nesting) This change validates that the Windows path normalization in the ShowFileBrowser handler (handlers/file_browser_handlers.go:64) works correctly with the actual util.CleanWindowsPath function.
* style: fix indentation in TestPathJoinHandlesEdgeCases Align t.Errorf statement inside the if block with proper indentation. The error message now correctly aligns with the if block body, maintaining consistent indentation throughout the function.
* test: restore backslash validation check in TestPathJoinHandlesEdgeCases

---------

Co-authored-by: Chris Lu <chris.lu@gmail.com>
1 month ago

289ec5e2f5
Fix SeaweedFS S3 bucket extended attributes handling (#7854)
* refactor: Convert versioning to three-state string model matching AWS S3 - Change VersioningEnabled bool to VersioningStatus string in S3Bucket struct - Add GetVersioningStatus() function returning empty string (never enabled), 'Enabled', or 'Suspended' - Update StoreVersioningInExtended() to delete key instead of setting 'Suspended' - Ensures Admin UI and S3 API use consistent versioning state representation
* fix: Add validation for bucket quota and Object Lock configuration - Prevent buckets with quota enabled but size=0 (validation check) - Fix Object Lock mode handling to only pass mode when setDefaultRetention is true - Ensures proper extended attribute storage for Object Lock configuration - Matches AWS S3 behavior for Object Lock setup
* feat: Handle versioned objects in bucket details view - Recognize .versions directories as versioned objects in listBucketObjects() - Extract size and mtime from extended attribute metadata (ExtLatestVersionSizeKey, ExtLatestVersionMtimeKey) - Add length validation (8 bytes) before parsing extended attribute byte arrays - Update GetBucketDetails() and GetS3Buckets() to use new GetVersioningStatus() - Properly display versioned objects without .versions suffix in bucket details
* ui: Update bucket management UI to show three-state versioning and Object Lock - Change versioning display from binary (Enabled/Disabled) to three-state (Not configured/Enabled/Suspended) - Update Object Lock display to show 'Not configured' instead of 'Disabled' - Fix bucket details modal to use bucket.versioning_status instead of bucket.versioning_enabled - Update displayBucketDetails() JavaScript to handle three versioning states
* chore: Regenerate template code for bucket UI changes - Generated from updated s3_buckets.templ - Reflects three-state versioning and Object Lock UI improvements
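The three-state model mirrors AWS S3's GetBucketVersioning semantics: an empty status for a bucket whose versioning was never configured, "Enabled", or "Suspended". A minimal sketch of the lookup against a bucket's extended attributes — the key name here is made up for illustration, not the actual constant:

```go
package main

import "fmt"

// xattrVersioningKey is an illustrative extended-attribute key; the commit
// describes a model where the key is simply absent for buckets whose
// versioning was never configured.
const xattrVersioningKey = "versioning"

// getVersioningStatus returns "" (never configured), "Enabled", or
// "Suspended", mirroring AWS S3's three-state GetBucketVersioning response.
func getVersioningStatus(extended map[string][]byte) string {
	v, ok := extended[xattrVersioningKey]
	if !ok {
		return "" // key absent: versioning was never configured
	}
	return string(v)
}

func main() {
	fresh := map[string][]byte{}
	enabled := map[string][]byte{xattrVersioningKey: []byte("Enabled")}
	suspended := map[string][]byte{xattrVersioningKey: []byte("Suspended")}
	fmt.Printf("%q\n", getVersioningStatus(fresh))     // ""
	fmt.Println(getVersioningStatus(enabled))          // Enabled
	fmt.Println(getVersioningStatus(suspended))        // Suspended
}
```

Collapsing this into a boolean, as the old VersioningEnabled field did, cannot distinguish "never configured" from "suspended", which is what the refactor corrects.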
1 month ago

5b86d33c3c
Fix worker reconnection race condition causing context canceled errors (#7825)
* Fix worker reconnection race condition causing context canceled errors Fixes #7824 This commit fixes critical connection stability issues between admin server and workers that manifested as rapid reconnection cycles with 'context canceled' errors, particularly after 24+ hours of operation in containerized environments. Root Cause: ----------- Race condition where TWO goroutines were calling stream.Recv() on the same gRPC bidirectional stream concurrently: 1. sendRegistrationSync() started a goroutine that calls stream.Recv() 2. handleIncoming() also calls stream.Recv() in a loop Per gRPC specification, only ONE goroutine can call Recv() on a stream at a time. Concurrent Recv() calls cause undefined behavior, manifesting as 'context canceled' errors and stream corruption. The race occurred during worker reconnection: - Sometimes sendRegistrationSync goroutine read the registration response first (success) - Sometimes handleIncoming read it first, causing sendRegistrationSync to timeout - This left the stream in an inconsistent state, triggering 'context canceled' error - The error triggered rapid reconnection attempts, creating a reconnection storm Why it happened after 24 hours: Container orchestration systems (Docker Swarm/Kubernetes) periodically restart pods. Over time, workers reconnect multiple times. Each reconnection had a chance of hitting the race condition. Eventually the race manifested and caused the connection storm. 
Changes: -------- weed/worker/client.go: - Start handleIncoming and handleOutgoing goroutines BEFORE sending registration - Use sendRegistration() instead of sendRegistrationSync() - Ensures only ONE goroutine (handleIncoming) calls stream.Recv() - Eliminates race condition entirely weed/admin/dash/worker_grpc_server.go: - Clean up old connection when worker reconnects with same ID - Cancel old connection context to stop its goroutines - Prevents resource leaks and stale connection accumulation Impact: ------- Before: Random 'context canceled' errors during reconnection, rapid reconnection cycles, resource leaks, requires manual restart to recover After: Reliable reconnection, single Recv() goroutine, proper cleanup, stable operation over 24+ hours Testing: -------- Build verified successful with no compilation errors. How to reproduce the bug: 1. Start admin server and worker 2. Restart admin server (simulates container recreation) 3. Worker reconnects 4. Race condition may manifest, causing 'context canceled' error 5. Observe rapid reconnection cycles in logs The fix is backward compatible and requires no configuration changes. 
* Add MaxConnectionAge to gRPC server for Docker Swarm DNS handling - Configure MaxConnectionAge and MaxConnectionAgeGrace for gRPC server - Expand error detection in shouldInvalidateConnection for better cache invalidation - Add connection lifecycle logging for debugging * Add topology validation and nil-safety checks - Add validation guards in UpdateTopology to prevent invalid updates - Add nil-safety checks in rebuildIndexes - Add GetDiskCount method for diagnostic purposes * Fix worker registration race condition - Reorder goroutine startup in WorkerStream to prevent race conditions - Add defensive cleanup in unregisterWorker with panic-safe channel closing * Add comprehensive topology update logging - Enhance UpdateTopologyInfo with detailed logging of datacenter/node/disk counts - Add metrics logging for topology changes * Add periodic diagnostic status logging - Implement topologyStatusLoop running every 5 minutes - Add logTopologyStatus function reporting system metrics - Run as background goroutine in maintenance manager * Enhance master client connection logging - Add connection timing logs in tryConnectToMaster - Add reconnection attempt counting in KeepConnectedToMaster - Improve diagnostic visibility for connection issues * Remove unused sendRegistrationSync function - Function is no longer called after switching to asynchronous sendRegistration - Contains the problematic concurrent stream.Recv() pattern that caused race conditions - Cleanup as suggested in PR review * Clarify comment for channel closing during disconnection - Improve comment to explain why channels are closed and their effect - Make the code more self-documenting as suggested in PR review * Address code review feedback: refactor and improvements - Extract topology counting logic to shared helper function CountTopologyResources() to eliminate duplication between topology_management.go and maintenance_integration.go - Use gRPC status codes for more robust error detection in 
shouldInvalidateConnection(), falling back to string matching for transport-level errors
- Add recover wrapper for channel close consistency in cleanupStaleConnections() to match unregisterWorker() pattern
* Update grpc_client_server.go
* Fix data race on lastSeen field access
- Add mutex protection around conn.lastSeen = time.Now() in WorkerStream method
- Ensures thread-safe access consistent with cleanupStaleConnections
* Fix goroutine leaks in worker reconnection logic
- Close streamExit in reconnect() before creating new connection
- Close streamExit in attemptConnection() when sendRegistration fails
- Prevents orphaned handleOutgoing/handleIncoming goroutines from previous connections
- Ensures proper cleanup of goroutines competing for shared outgoing channel
* Minor cleanup improvements for consistency and clarity
- Remove redundant string checks in shouldInvalidateConnection that overlap with gRPC status codes
- Add recover block to Stop() method for consistency with other channel close operations
- Maintains valuable DNS and transport-specific error detection while eliminating redundancy
* Improve topology update error handling
- Return descriptive errors instead of silently preserving topology for invalid updates
- Change nil topologyInfo case to return 'rejected invalid topology update: nil topologyInfo'
- Change empty DataCenterInfos case to return 'rejected invalid topology update: empty DataCenterInfos (had X nodes, Y disks)'
- Keep existing glog.Warningf calls but append error details to logs before returning errors
- Allows callers to distinguish rejected updates and handle them appropriately
* Refactor safe channel closing into helper method
- Add safeCloseOutgoingChannel helper method to eliminate code duplication
- Replace repeated recover blocks in Stop, unregisterWorker, and cleanupStaleConnections
- Improves maintainability and ensures consistent error handling across all channel close operations
- Maintains same panic recovery behavior with contextual source identification
* Make connection invalidation string matching case-insensitive
- Convert error string to lowercase once for all strings.Contains checks
- Improves robustness by catching error message variations from different sources
- Eliminates need for separate 'DNS resolution' and 'dns' checks
- Maintains same error detection coverage with better reliability
* Clean up warning logs in UpdateTopology to avoid duplicating error text
- Remove duplicated error phrases from glog.Warningf messages
- Keep concise contextual warnings that don't repeat the fmt.Errorf content
- Maintain same error returns for backward compatibility
* Add robust validation to prevent topology wipeout during master restart
- Reject topology updates with 0 nodes when current topology has nodes
- Prevents transient empty topology from overwriting valid state
- Improves resilience during master restart scenarios
- Maintains backward compatibility for legitimate empty topology updates
|
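The safeCloseOutgoingChannel refactor described above relies on the fact that closing an already-closed Go channel panics, so the close must be wrapped in a recover. A minimal sketch of that pattern (function name and log text are illustrative, not the actual SeaweedFS identifiers):

```go
package main

import "fmt"

// safeCloseChannel closes ch, recovering if another caller
// (Stop, unregisterWorker, cleanupStaleConnections) closed it first.
// The source tag identifies which caller hit the already-closed case.
func safeCloseChannel(ch chan struct{}, source string) {
	defer func() {
		if r := recover(); r != nil {
			fmt.Printf("%s: channel already closed\n", source)
		}
	}()
	close(ch)
}

func main() {
	ch := make(chan struct{})
	safeCloseChannel(ch, "Stop")             // first close succeeds silently
	safeCloseChannel(ch, "unregisterWorker") // second close recovers instead of panicking
	fmt.Println("done")
}
```

Centralizing this in one helper is what removes the repeated inline recover blocks the commit mentions.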
1 month ago |
|
|
9e9c97ec61 |
fix bucket link
|
1 month ago |
|
|
93499cd944
|
Fix admin GUI list ordering on refresh (#7782)
Sort lists of filers, volume servers, masters, and message brokers by address to ensure consistent ordering on page refresh. This fixes the non-deterministic ordering caused by iterating over Go maps with range. Fixes #7781 |
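Go deliberately randomizes map iteration order, so building a display list with `range` over a map produces a different order on every refresh. The fix is simply to sort the collected addresses; a minimal sketch under assumed types:

```go
package main

import (
	"fmt"
	"sort"
)

// sortedAddresses collects map keys (node addresses) and sorts them,
// giving the page a deterministic order regardless of map layout.
func sortedAddresses(nodes map[string]bool) []string {
	addrs := make([]string, 0, len(nodes))
	for addr := range nodes {
		addrs = append(addrs, addr)
	}
	sort.Strings(addrs)
	return addrs
}

func main() {
	filers := map[string]bool{"filer-2:8888": true, "filer-1:8888": true}
	fmt.Println(sortedAddresses(filers)) // [filer-1:8888 filer-2:8888]
}
```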
1 month ago |
|
|
44cd07f835 |
Update cluster_ec_volumes_templ.go
|
1 month ago |
|
|
95ef041bfb
|
Fix EC Volumes page header styling to match admin theme (#7780)
* Fix EC Volumes page header styling to match admin theme
Fixes #7779
The EC Volumes page was rendering with bright Bootstrap default colors instead of the admin dark theme because it was structured as a standalone HTML document with its own DOCTYPE, head, and body tags. This fix converts the template to be a content fragment (like other properly styled templates such as cluster_ec_shards.templ) so it correctly inherits the admin.css styling when rendered within the layout.
* Address review comments: fix URL interpolation and falsy value check
- Fix collection filter link to use templ.URL() for proper interpolation
- Change updateUrl() falsy check from 'if (params[key])' to 'if (params[key] != null)' to handle 0 and false values correctly
* Address additional review comments
- Use erasure_coding.TotalShardsCount constant instead of hardcoded '14' for shard count displays (lines 88 and 214)
- Improve error handling in repairVolume() to check response.ok before parsing JSON, preventing confusing errors on non-JSON responses
- Remove unused totalSize variable in formatShardRangesWithSizes()
- Simplify redundant pagination conditions
* Remove unused code: displayShardLocationsHTML, groupShardsByServerWithSizes, formatShardRangesWithSizes
These functions and templates were defined but never called anywhere in the codebase. Removing them reduces code maintenance burden.
* Address review feedback: improve code quality
- Add defensive JSON response validation in repairVolume function
- Replace O(n²) bubble sorts with Go's standard sort.Ints and sort.Slice
- Document volume status thresholds explaining EC recovery logic:
  - Critical: unrecoverable (more than DataShardsCount missing)
  - Degraded: high risk (more than half DataShardsCount missing)
  - Incomplete: reduced redundancy (more than half ParityShardsCount missing)
  - Minor: fully recoverable with good margin
* Fix redundant shard count display in Healthy Volumes card
Changed from 'Complete (14/14 shards)' to 'All 14 shards present' since the numerator and denominator were always the same value.
* Use templ.URL for default collection link for consistency
* Fix Clear Filter link to stay on EC Volumes page
Changed href from /cluster/ec-shards to /cluster/ec-volumes so clearing the filter stays on the current page instead of navigating away.
|
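The volume status thresholds documented in this commit can be sketched as a small classifier. This is an illustrative reading of the commit message using the default 10+4 layout, not the actual admin UI code:

```go
package main

import "fmt"

// Default erasure coding layout: 10 data shards + 4 parity shards.
const (
	dataShards   = 10
	parityShards = 4
)

// volumeStatus mirrors the thresholds described in the commit message.
func volumeStatus(missingShards int) string {
	switch {
	case missingShards > dataShards:
		return "critical" // unrecoverable
	case missingShards > dataShards/2:
		return "degraded" // high risk
	case missingShards > parityShards/2:
		return "incomplete" // reduced redundancy
	default:
		return "minor" // fully recoverable with good margin
	}
}

func main() {
	for _, missing := range []int{0, 3, 6, 12} {
		fmt.Printf("%d missing -> %s\n", missing, volumeStatus(missing))
	}
}
```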
1 month ago |
|
|
51c2ab0107
|
fix: admin UI bucket deletion with filer group configured (#7735)
|
2 months ago |
|
|
f77e6ed2d4
|
fix: admin UI bucket delete now properly deletes collection and checks Object Lock (#7734)
* fix: admin UI bucket delete now properly deletes collection and checks Object Lock
Fixes #7711
The admin UI's DeleteS3Bucket function was missing two critical behaviors:
1. It did not delete the collection from the master (unlike the s3.bucket.delete shell command), leaving orphaned volume data that caused fs.verify errors.
2. It did not check for Object Lock protections before deletion, potentially allowing deletion of buckets with locked objects.
Changes:
- Add shared Object Lock checking utilities to object_lock_utils.go:
  - EntryHasActiveLock: standalone function to check if an entry has an active lock
  - HasObjectsWithActiveLocks: shared function to scan a bucket for locked objects
- Refactor S3 API entryHasActiveLock to use the shared EntryHasActiveLock function
- Update admin UI DeleteS3Bucket to:
  - Check Object Lock using the shared HasObjectsWithActiveLocks utility
  - Delete the collection before deleting filer entries (matching s3.bucket.delete)
* refactor: S3 API uses shared Object Lock utilities
Removes 114 lines of duplicated code from s3api_bucket_handlers.go by having hasObjectsWithActiveLocks delegate to the shared HasObjectsWithActiveLocks function in object_lock_utils.go. Now both the S3 API and the Admin UI use the same shared utilities:
- EntryHasActiveLock
- HasObjectsWithActiveLocks
- recursivelyCheckLocksWithClient
- checkVersionsForLocksWithClient
* feat: s3.bucket.delete shell command now checks Object Lock
Add Object Lock protection to the s3.bucket.delete shell command. If the bucket has Object Lock enabled and contains objects with active retention or legal hold, deletion is prevented. Also refactors Object Lock checking utilities into a new s3_objectlock package to avoid import cycles between shell, s3api, and admin packages. All three components now share the same logic:
- S3 API (DeleteBucketHandler)
- Admin UI (DeleteS3Bucket)
- Shell command (s3.bucket.delete)
* refactor: unified Object Lock checking and consistent deletion parameters
1. Add CheckBucketForLockedObjects() - a unified function that combines:
   - Bucket entry lookup
   - Object Lock enabled check
   - Scan for locked objects
2. All three components now use this single function:
   - S3 API (via s3api.CheckBucketForLockedObjects)
   - Admin UI (via s3api.CheckBucketForLockedObjects)
   - Shell command (via s3_objectlock.CheckBucketForLockedObjects)
3. Aligned deletion parameters across all components:
   - isDeleteData: false (collection already deleted separately)
   - isRecursive: true
   - ignoreRecursiveError: true
* fix: properly handle non-EOF errors in Recv() loops
The Recv() loops in recursivelyCheckLocksWithClient and checkVersionsForLocksWithClient were breaking on any error, which could hide real stream errors and incorrectly report 'no locks found'. Now:
- io.EOF: break the loop (normal end of stream)
- any other error: return it so the caller knows the stream failed
* fix: address PR review comments
1. Add path traversal protection - validate entry names before building subdirectory paths. Skip entries with empty names, '.', '..', or containing path separators.
2. Use an exact match for the .versions folder instead of HasSuffix() to avoid matching unrelated directories like 'foo.versions'.
3. Replace path.Join with simple string concatenation since entry names are now validated.
* refactor: extract paginateEntries helper to reduce duplication
The recursivelyCheckLocksWithClient and checkVersionsForLocksWithClient functions shared significant structural similarity. Extracted a generic paginateEntries helper that:
- Handles pagination logic (lastFileName tracking, Limit)
- Handles stream receiving with proper EOF vs error handling
- Validates entry names (path traversal protection)
- Calls a processEntry callback for business logic
This centralizes pagination logic and makes the code more maintainable.
* feat: add context propagation for timeout and cancellation support
All Object Lock checking functions now accept a context.Context parameter:
- paginateEntries(ctx, client, dir, processEntry)
- recursivelyCheckLocksWithClient(ctx, client, dir, hasLocks, currentTime)
- checkVersionsForLocksWithClient(ctx, client, versionsDir, hasLocks, currentTime)
- HasObjectsWithActiveLocks(ctx, client, bucketPath)
- CheckBucketForLockedObjects(ctx, client, bucketsPath, bucketName)
This enables:
- Timeout support for large bucket scans
- Cancellation propagation from HTTP requests
- The S3 API handler now uses r.Context() for proper request lifecycle handling
* fix: address PR review comments
1. Add a DefaultBucketsPath constant in admin_server.go instead of hardcoding "/buckets" in multiple places.
2. Add defensive normalization in EntryHasActiveLock:
   - TrimSpace to handle whitespace around values
   - ToUpper for case-insensitive comparison of legal hold and retention mode values
   - TrimSpace on the retention date before parsing
* fix: use the ctx variable consistently instead of context.Background()
In both DeleteS3Bucket and command_s3_bucket_delete, use the ctx variable defined at the start of the function for all gRPC calls instead of creating new context.Background() instances.
|
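The paginateEntries refactor above pairs pagination with entry-name validation and a per-entry callback. A simplified sketch of that shape, with a plain slice standing in for the filer gRPC stream and all names illustrative:

```go
package main

import (
	"fmt"
	"strings"
)

// validEntryName rejects the path-traversal candidates the commit lists:
// empty names, ".", "..", and names containing path separators.
func validEntryName(name string) bool {
	return name != "" && name != "." && name != ".." &&
		!strings.ContainsAny(name, "/\\")
}

// paginateEntries walks entries page by page, skips invalid names,
// and hands each surviving entry to the process callback.
func paginateEntries(entries []string, pageSize int, process func(string) error) error {
	for start := 0; start < len(entries); start += pageSize {
		end := start + pageSize
		if end > len(entries) {
			end = len(entries)
		}
		for _, name := range entries[start:end] {
			if !validEntryName(name) {
				continue // path traversal protection
			}
			if err := process(name); err != nil {
				return err // propagate callback errors to the caller
			}
		}
	}
	return nil
}

func main() {
	names := []string{"a.txt", "..", "b.txt", "c/d"}
	_ = paginateEntries(names, 2, func(n string) error {
		fmt.Println(n)
		return nil
	})
}
```

The real helper additionally tracks lastFileName across stream pages and distinguishes io.EOF from genuine stream errors.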
2 months ago |
|
|
a1eab5ff99
|
shell: add -owner flag to s3.bucket.create command (#7728)
* shell: add -owner flag to s3.bucket.create command
This fixes an issue where buckets created via weed shell cannot be accessed
by non-admin S3 users because the bucket has no owner set.
When using S3 IAM authentication, non-admin users can only access buckets
they own. Buckets created via lazy S3 creation automatically have their
owner set from the request context, but buckets created via weed shell
had no owner, making them inaccessible to non-admin users.
The new -owner flag allows setting the bucket owner identity (s3-identity-id)
at creation time:
s3.bucket.create -name my-bucket -owner my-identity-name
Fixes: https://github.com/seaweedfs/seaweedfs/discussions/7599
* shell: add s3.bucket.owner command to view/change bucket ownership
This command allows viewing and changing the owner of an S3 bucket,
making it easier to manage bucket access for IAM users.
Usage:
# View the current owner of a bucket
s3.bucket.owner -name my-bucket
# Set or change the owner of a bucket
s3.bucket.owner -name my-bucket -set -owner new-identity
# Remove the owner (make bucket admin-only)
s3.bucket.owner -name my-bucket -set -owner ""
* shell: show bucket owner in s3.bucket.list output
Display the bucket owner (s3-identity-id) when listing buckets,
making it easier to see which identity owns each bucket.
Example output:
my-bucket size:1024 chunk:5 owner:my-identity
* admin: add bucket owner support to admin UI
- Add Owner field to S3Bucket struct for displaying bucket ownership
- Add Owner field to CreateBucketRequest for setting owner at creation
- Add UpdateBucketOwner API endpoint (PUT /api/s3/buckets/:bucket/owner)
- Add SetBucketOwner function for updating bucket ownership
- Update GetS3Buckets to populate owner from s3-identity-id extended attribute
- Update CreateS3BucketWithObjectLock to set owner when creating bucket
This allows the admin UI to display bucket owners and supports creating/
editing bucket ownership, which is essential for S3 IAM authentication
where non-admin users can only access buckets they own.
* admin: show bucket owner in buckets list and create form
- Add Owner column to buckets table to display bucket ownership
- Add Owner field to create bucket form for setting owner at creation
- Show owner in bucket details modal
- Update JavaScript to include owner when creating buckets
This makes bucket ownership visible and configurable from the admin UI,
which is essential for S3 IAM authentication where non-admin users can
only access buckets they own.
* admin: add bucket owner management with user dropdown
- Add 'Manage Owner' button to bucket actions
- Add modal with dropdown to select owner from existing users
- Fetch users from /api/users endpoint to populate dropdown
- Update create bucket form to use dropdown for owner selection
- Allow setting owner to empty (no owner = admin-only access)
This provides a user-friendly way to manage bucket ownership by selecting
from existing S3 identities rather than manually typing identity names.
* fix: use username instead of name for user dropdown
The /api/users endpoint returns 'username' field, not 'name'.
Fixed both the manage owner modal and create bucket form.
* Update s3_buckets_templ.go
* fix: address code review feedback for s3.bucket.create
- Check if entry.Extended is nil before making a new map to prevent
overwriting any previously set extended attributes
- Use fmt.Fprintln(writer, ...) instead of println() for consistent
output handling across the shell command framework
* fix: improve help text and validate owner input
- Add note that -owner value should match identity name in s3.json
- Trim whitespace from owner and treat whitespace-only as empty
* fix: address code review feedback for list and owner commands
- s3.bucket.list: Use %q to escape owner value and prevent malformed
tabular output from special characters (tabs/newlines/control chars)
- s3.bucket.owner: Use neutral error message for lookup failures since
they can occur for reasons other than missing bucket (e.g., permission)
* fix: improve s3.bucket.owner CLI UX
- Remove confusing -set flag that was required but not shown in examples
- Add explicit -delete flag to remove owner (safer than empty string)
- Presence of -owner now implies set operation (no extra flag needed)
- Validate that -owner and -delete cannot be used together
- Trim whitespace from owner value
- Update help text with correct examples and add note about identity name
- Clearer success messages for each operation
* fix: address code review feedback for admin UI
- GetBucketDetails: Extract and return owner from extended attributes
- CSV export: Fix column indices after adding Owner column, add Owner to header
- XSS prevention: Add escapeHtml() function to sanitize user data in innerHTML
(bucket.name, bucket.owner, bucket.object_lock_mode, obj.key, obj.storage_class)
* fix: address additional code review feedback
- types.go: Add omitempty to Owner JSON tag, update comment
- bucket_management.go: Trim and validate owner (max 256 chars) in CreateBucket
- bucket_management.go: Use neutral error message in SetBucketOwner lookup
* fix: improve owner field handling and error recovery
bucket_management.go:
- Use *string pointer for Owner to detect if field was explicitly provided
- Return HTTP 400 if owner field is missing (use empty string to clear)
- Trim and validate owner (max 256 chars) in UpdateBucketOwner
s3_buckets.templ:
- Re-enable owner select dropdown on fetch error
- Reset dropdown to default 'No owner' option on error
- Allow users to retry or continue without selecting an owner
* fix: move modal instance variables to global scope
Move deleteModalInstance, quotaModalInstance, ownerModalInstance,
detailsModalInstance, and cachedUsers to global scope so they are
accessible from both DOMContentLoaded handlers and global functions
like deleteBucket(). This fixes the undefined variable issue.
* refactor: improve modal handling and avoid global window properties
- Initialize modal instances once on DOMContentLoaded and reuse with show()
- Replace window.currentBucket* global properties with data attributes on forms
- Remove modal dispose/recreate pattern and unnecessary cleanup code
- Scope state to relevant DOM elements instead of global namespace
* Update s3_buckets_templ.go
* fix: define MaxOwnerNameLength constant and implement RFC 4180 CSV escaping
bucket_management.go:
- Add MaxOwnerNameLength constant (256) with documentation
- Replace magic number 256 with constant in both validation checks
s3_buckets.templ:
- Add escapeCsvField() helper for RFC 4180 compliant CSV escaping
- Properly handle commas, double quotes, and newlines in field values
- Escape internal quotes by doubling them: " becomes ""
* Update s3_buckets_templ.go
* refactor: use direct gRPC client methods for consistency
- command_s3_bucket_create.go: Use client.CreateEntry instead of filer_pb.CreateEntry
- command_s3_bucket_owner.go: Use client.LookupDirectoryEntry instead of filer_pb.LookupEntry
- command_s3_bucket_owner.go: Use client.UpdateEntry instead of filer_pb.UpdateEntry
This aligns with the pattern used in weed/admin/dash/bucket_management.go
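The escapeCsvField() helper this PR adds lives in the templ file as JavaScript; the same RFC 4180 logic, sketched here in Go for illustration, quotes a field containing a comma, quote, or newline and doubles any internal quotes:

```go
package main

import (
	"fmt"
	"strings"
)

// escapeCsvField applies RFC 4180 escaping: fields containing commas,
// double quotes, or line breaks are wrapped in quotes, with internal
// quotes doubled.
func escapeCsvField(field string) string {
	if strings.ContainsAny(field, ",\"\n\r") {
		return `"` + strings.ReplaceAll(field, `"`, `""`) + `"`
	}
	return field
}

func main() {
	fmt.Println(escapeCsvField(`my-bucket`))
	fmt.Println(escapeCsvField(`owner "admin", team A`))
}
```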
|
2 months ago |
|
|
28ac536280
|
fix: normalize Windows backslash paths in weed admin file uploads (#7636)
fix: normalize Windows backslash paths in file uploads
When uploading files from a Windows client to a Linux server, file paths containing backslashes were not being properly interpreted as directory separators. This caused files intended for subdirectories to be created in the root directory with backslashes in their filenames.
Changes:
- Add util.CleanWindowsPath and util.CleanWindowsPathBase helper functions in weed/util/fullpath.go for reusable path normalization
- Use path.Join/path.Clean/path.Base instead of filepath equivalents for URL path semantics (filepath is OS-specific)
- Apply normalization in weed admin handlers and filer upload parsing
Fixes #7628 |
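A plausible sketch of the normalization described above: client-sent backslashes are treated as directory separators and the result is cleaned with the OS-independent path package (filepath would apply server-OS semantics). The function name comes from the commit; the body is an assumption, and the actual SeaweedFS implementation may differ:

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// CleanWindowsPath converts Windows-style backslashes to forward slashes
// and cleans the result with URL path semantics (package path, not filepath).
func CleanWindowsPath(p string) string {
	return path.Clean(strings.ReplaceAll(p, `\`, "/"))
}

func main() {
	// Without normalization, this would become a single root-level file
	// literally named "photos\2024\cat.jpg".
	fmt.Println(CleanWindowsPath(`photos\2024\cat.jpg`)) // photos/2024/cat.jpg
}
```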
2 months ago |
|
|
f1384108e8
|
fix: Admin UI file browser uses https.client TLS config for filer communication (#7633)
* fix: Admin UI file browser uses https.client TLS config for filer communication
When the filer is configured with HTTPS (https.filer section in security.toml), the Admin UI file browser was still using plain HTTP for file uploads, downloads, and viewing. This caused TLS handshake errors: 'http: TLS handshake error: client sent an HTTP request to an HTTPS server'
This fix:
- Updates FileBrowserHandlers to use the HTTPClient from weed/util/http/client, which properly loads TLS configuration from the https.client section
- The HTTPClient automatically uses HTTPS when https.client.enabled=true
- All file operations (upload, download, view) now respect TLS configuration
- Falls back to plain HTTP if TLS client creation fails
Fixes #7631
* fix: Address code review comments
- Fix fallback client Transport wiring (properly assign the transport to http.Client)
- Use per-operation timeouts instead of a unified 60s timeout:
  - uploadFileToFiler: 60s (for large file uploads)
  - ViewFile: 30s (original timeout)
  - isLikelyTextFile: 10s (original timeout)
* fix: Proxy file downloads through Admin UI for mTLS support
The DownloadFile function previously used a browser redirect, which would fail when the filer requires mutual TLS (client certificates) since the browser doesn't have these certificates. Now the Admin UI server proxies the download, using its TLS-aware HTTP client with the configured client certificates, then streams the response to the browser.
* fix: Ensure HTTP response body is closed on non-200 responses
In ViewFile, the response body was only closed on 200 OK paths, which could leak connections on non-200 responses. Now the body is always closed via defer immediately after checking err == nil, before checking the status code.
* refactor: Extract fetchFileContent helper to reduce nesting in ViewFile
Extracted the deeply nested file fetch logic (7+ levels) into a separate fetchFileContent helper method. This improves readability while maintaining the same TLS-aware behavior and error handling.
* refactor: Use idiomatic Go error handling in fetchFileContent
Changed fetchFileContent to return (string, error) instead of (content string, reason string) for idiomatic Go error handling. This enables error wrapping and standard 'if err != nil' checks. Also improved error messages to be more descriptive for debugging, including the HTTP status code and response body on non-200 responses.
* refactor: Extract newClientWithTimeout helper to reduce code duplication
- Added a newClientWithTimeout() helper method that creates a temporary http.Client with the specified timeout, reusing the TLS transport
- Updated uploadFileToFiler, fetchFileContent, DownloadFile, and isLikelyTextFile to use the new helper
- Improved the error message in DownloadFile to include the response body for better debuggability (consistent with fetchFileContent)
* fix: Address CodeRabbit review comments
- Fix connection leak in isLikelyTextFile: ensure resp.Body.Close() is called even when the status code is not 200
- Use http.NewRequestWithContext in DownloadFile so the filer request is cancelled when the client disconnects, improving resource cleanup
* fix: Escape Content-Disposition filename per RFC 2616
Filenames containing quotes, backslashes, or special characters could break the Content-Disposition header or cause client-side parsing issues. Now properly escapes these characters before including them in the header.
* fix: Handle io.ReadAll errors when reading error response bodies
In fetchFileContent and DownloadFile, the error from io.ReadAll was ignored when reading the filer's error response body. Now properly handles these errors to provide complete error messages.
* fix: Fail fast when TLS client creation fails
If TLS is enabled (https.client.enabled=true) but misconfigured, fail immediately with glog.Fatalf rather than silently falling back to plain HTTP. This prevents confusing runtime errors when the filer only accepts HTTPS connections.
* fix: Use mime.FormatMediaType for RFC 6266 compliant Content-Disposition
Replace manual escaping with mime.FormatMediaType, which properly handles non-ASCII characters and special characters per RFC 6266, ensuring correct filename display for international users.
|
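The final Content-Disposition fix replaces hand-rolled escaping with the standard library's mime.FormatMediaType, which handles quoting and non-ASCII filenames per RFC 6266/2231. A minimal sketch (the wrapper function is illustrative):

```go
package main

import (
	"fmt"
	"mime"
)

// contentDisposition builds an attachment header value, letting the
// standard library escape quotes, backslashes, and non-ASCII characters.
func contentDisposition(filename string) string {
	return mime.FormatMediaType("attachment", map[string]string{
		"filename": filename,
	})
}

func main() {
	// A filename with embedded quotes would break naive string concatenation.
	fmt.Println(contentDisposition(`report "Q3".pdf`))
}
```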
2 months ago |
|
|
4cc6a2a4e5
|
fix: Admin UI user creation fails before filer discovery (#7624) (#7625)
* fix: Admin UI user creation fails before filer discovery (#7624)
The credential manager's filer address function was not configured quickly enough after admin server startup, causing 'filer address function not configured' errors when users tried to create users immediately.
Changes:
- Use exponential backoff (200ms -> 5s) instead of fixed 5s polling for faster filer discovery on startup
- Improve error messages to be more user-friendly and actionable
Fixes #7624
* Add more debug logging to help diagnose filer discovery issues
* fix: Use dynamic filer address function to eliminate race condition
Instead of using a goroutine to wait for filer discovery before setting the filer address function, we now set a dynamic function immediately that returns the current filer address whenever it's called. This eliminates the race condition where users could create users before the goroutine completed, and provides clearer error messages when no filer is available. The dynamic function is HA-aware - it automatically returns whatever filer is currently available, adapting to filer failovers.
|
2 months ago |
|
|
f6f3859826
|
Fix #7575: Correct interface check for filer address function in admin server (#7588)
* Fix #7575: Correct interface check for filer address function in admin server
Problem: User creation in the object store was failing with the error 'filer_etc: filer address function not configured'.
Root Cause: In admin_server.go, the code checked for the incorrect interface method SetFilerClient(string, grpc.DialOption) instead of the actual SetFilerAddressFunc(func() pb.ServerAddress, grpc.DialOption). This interface mismatch prevented the filer address function from being configured, causing user creation operations to fail.
Solution:
- Fixed the interface check to use SetFilerAddressFunc
- Updated the function call to properly configure the filer address function
- The function now dynamically returns the current active filer address
Tests Added:
- Unit tests in weed/admin/dash/user_management_test.go
- Integration tests in test/admin/user_creation_integration_test.go
- Documentation in test/admin/README.md
All tests pass successfully.
* Fix #7575: Correct interface check for filer address function in admin UI
Problem: User creation in the Admin UI was failing with the error 'filer_etc: filer address function not configured'.
Root Cause: In admin_server.go, the code checked for the incorrect interface method SetFilerClient(string, grpc.DialOption) instead of the actual SetFilerAddressFunc(func() pb.ServerAddress, grpc.DialOption). This interface mismatch prevented the filer address function from being configured, causing user creation operations to fail in the Admin UI.
Note: This bug only affects the Admin UI. The S3 API and weed shell commands (s3.configure) were unaffected, as they use the correct interface or bypass the credential manager entirely.
Solution:
- Fixed the interface check in admin_server.go to use SetFilerAddressFunc
- Updated the function call to properly configure the filer address function
- The function now dynamically returns the current active filer (HA-aware)
- Cleaned up redundant comments in the code
Tests Added:
- Unit tests in weed/admin/dash/user_management_test.go
  - TestFilerAddressFunctionInterface - verifies the correct interface
  - TestGenerateAccessKey - tests key generation
  - TestGenerateSecretKey - tests secret generation
  - TestGenerateAccountId - tests account ID generation
All tests pass and will run automatically in CI.
* Fix #7575: Correct interface check for filer address function in admin UI
Problem: User creation in the Admin UI was failing with the error 'filer_etc: filer address function not configured'.
Root Cause:
1. In admin_server.go, the code checked for the incorrect interface method SetFilerClient(string, grpc.DialOption) instead of the actual SetFilerAddressFunc(func() pb.ServerAddress, grpc.DialOption)
2. The admin command was missing the filer_etc import, so the store was never registered
This interface mismatch prevented the filer address function from being configured, causing user creation operations to fail in the Admin UI.
Note: This bug only affects the Admin UI. The S3 API and weed shell commands (s3.configure) were unaffected, as they use the correct interface or bypass the credential manager entirely.
Solution:
- Added the filer_etc import to weed/command/admin.go to register the store
- Fixed the interface check in admin_server.go to use SetFilerAddressFunc
- Updated the function call to properly configure the filer address function
- The function now dynamically returns the current active filer (HA-aware)
- Hoisted the credentialManager assignment to reduce code duplication
Tests Added:
- Unit tests in weed/admin/dash/user_management_test.go
  - TestFilerAddressFunctionInterface - verifies the correct interface
  - TestGenerateAccessKey - tests key generation
  - TestGenerateSecretKey - tests secret generation
  - TestGenerateAccountId - tests account ID generation
All tests pass and will run automatically in CI.
|
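The bug pattern here is a type assertion probing for a method the store never implements, so the configuration branch is silently skipped. A simplified sketch (ServerAddress and the store type are stand-ins for the SeaweedFS types, and the real method also takes a grpc.DialOption):

```go
package main

import "fmt"

type ServerAddress string

// filerEtcStore stands in for the filer_etc credential store.
type filerEtcStore struct{ addrFunc func() ServerAddress }

func (s *filerEtcStore) SetFilerAddressFunc(f func() ServerAddress) { s.addrFunc = f }

// configure probes the store for the method it actually implements.
// The bug: the old code asserted a SetFilerClient(string, ...) method
// the store never had, so this branch was never taken.
func configure(store interface{}) {
	if s, ok := store.(interface {
		SetFilerAddressFunc(func() ServerAddress)
	}); ok {
		// HA-aware: the function is consulted on every call, so it can
		// return whichever filer is currently available.
		s.SetFilerAddressFunc(func() ServerAddress { return "filer-1:8888" })
		fmt.Println("filer address function configured")
	}
}

func main() {
	store := &filerEtcStore{}
	configure(store)
	fmt.Println(store.addrFunc())
}
```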
2 months ago |
|
|
cc444b1868 |
muted texts
|
3 months ago |
|
|
ca8cd631ff |
Update admin.css
|
3 months ago |
|
|
82f2c3757f |
muted admin UI color
|
3 months ago |
|
|
b2fd31c08b |
fix volume utilization icon rendering
|
3 months ago |
|
|
c56a0a0ebd |
fix: handle 'default' collection filter in cluster volumes page
- Update matchesCollection to recognize 'default' as a filter for the empty collection
- Remove the incorrect conversion of 'default' to an empty string in handlers
- Fixes the issue where ?collection=default would show all collections instead of just the default collection |
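Volumes with no collection name belong to the implicit "default" collection, so a ?collection=default filter must match the empty string rather than being converted to it (which made the filter match everything). A sketch of the corrected matching logic, with the empty-filter behavior assumed:

```go
package main

import "fmt"

// matchesCollection reports whether a volume's collection passes the
// page filter. 'default' selects volumes with an empty collection name.
func matchesCollection(volumeCollection, filter string) bool {
	if filter == "" {
		return true // no filter: show everything
	}
	if filter == "default" {
		return volumeCollection == ""
	}
	return volumeCollection == filter
}

func main() {
	fmt.Println(matchesCollection("", "default"))       // true
	fmt.Println(matchesCollection("photos", "default")) // false
}
```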
3 months ago |
|
|
fb46a8a61f |
adjust volume server link
|
3 months ago |
|
|
b7ba6785a2 |
go fmt
|
3 months ago |
|
|
208d7f24f4
|
Erasure Coding: Ec refactoring (#7396)
* refactor: add ECContext structure to encapsulate EC parameters
- Create ec_context.go with ECContext struct
- NewDefaultECContext() creates context with default 10+4 configuration
- Helper methods: CreateEncoder(), ToExt(), String()
- Foundation for cleaner function signatures
- No behavior change, still uses hardcoded 10+4
* refactor: update ec_encoder.go to use ECContext
- Add WriteEcFilesWithContext() and RebuildEcFilesWithContext() functions
- Keep old functions for backward compatibility (call new versions)
- Update all internal functions to accept ECContext parameter
- Use ctx.DataShards, ctx.ParityShards, ctx.TotalShards consistently
- Use ctx.CreateEncoder() instead of hardcoded reedsolomon.New()
- Use ctx.ToExt() for shard file extensions
- No behavior change, still uses default 10+4 configuration
* refactor: update ec_volume.go to use ECContext
- Add ECContext field to EcVolume struct
- Initialize ECContext with default configuration in NewEcVolume()
- Update LocateEcShardNeedleInterval() to use ECContext.DataShards
- Phase 1: Always uses default 10+4 configuration
- No behavior change
* refactor: add EC shard count fields to VolumeInfo protobuf
- Add data_shards_count field (field 8) to VolumeInfo message
- Add parity_shards_count field (field 9) to VolumeInfo message
- Fields are optional, 0 means use default (10+4)
- Backward compatible: fields added at end
- Phase 1: Foundation for future customization
* refactor: regenerate protobuf Go files with EC shard count fields
- Regenerated volume_server_pb/*.go with new EC fields
- DataShardsCount and ParityShardsCount accessors added to VolumeInfo
- No behavior change, fields not yet used
* refactor: update VolumeEcShardsGenerate to use ECContext
- Create ECContext with default configuration in VolumeEcShardsGenerate
- Use ecCtx.TotalShards and ecCtx.ToExt() in cleanup
- Call WriteEcFilesWithContext() instead of WriteEcFiles()
- Save EC configuration (DataShardsCount, ParityShardsCount) to VolumeInfo
- Log EC context being used
- Phase 1: Always uses default 10+4 configuration
- No behavior change
* fmt
* refactor: update ec_test.go to use ECContext
- Update TestEncodingDecoding to create and use ECContext
- Update validateFiles() to accept ECContext parameter
- Update removeGeneratedFiles() to use ctx.TotalShards and ctx.ToExt()
- Test passes with default 10+4 configuration
* refactor: use EcShardConfig message instead of separate fields
* optimize: pre-calculate row sizes in EC encoding loop
* refactor: replace TotalShards field with Total() method
- Remove TotalShards field from ECContext to avoid field drift
- Add Total() method that computes DataShards + ParityShards
- Update all references to use ctx.Total() instead of ctx.TotalShards
- Read EC config from VolumeInfo when loading EC volumes
- Read data shard count from .vif in VolumeEcShardsToVolume
- Use >= instead of > for exact boundary handling in encoding loops
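The ECContext shape described in this PR can be sketched as below. Field and method names (DataShards, ParityShards, Total(), ToExt(), NewDefaultECContext()) follow the commit messages; the bodies are illustrative:

```go
package main

import "fmt"

// ECContext carries the erasure coding parameters together instead of
// scattering hardcoded 10+4 constants through function signatures.
type ECContext struct {
	DataShards   int
	ParityShards int
}

// Total is computed rather than stored, avoiding field drift between
// a TotalShards field and the two counts it would duplicate.
func (c *ECContext) Total() int { return c.DataShards + c.ParityShards }

// ToExt builds the shard file extension, e.g. ".ec03" for shard 3.
func (c *ECContext) ToExt(shardId int) string { return fmt.Sprintf(".ec%02d", shardId) }

// NewDefaultECContext returns the default 10+4 configuration.
func NewDefaultECContext() *ECContext {
	return &ECContext{DataShards: 10, ParityShards: 4}
}

func main() {
	ctx := NewDefaultECContext()
	fmt.Println(ctx.Total())  // 14
	fmt.Println(ctx.ToExt(3)) // .ec03
}
```

In the real code, CreateEncoder() would build the Reed-Solomon encoder from these counts instead of hardcoded values.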
* optimize: simplify VolumeEcShardsToVolume to use existing EC context
- Remove redundant CollectEcShards call
- Remove redundant .vif file loading
- Use v.ECContext.DataShards directly (already loaded by NewEcVolume)
- Slice tempShards instead of collecting again
* refactor: rename MaxShardId to MaxShardCount for clarity
- Change from MaxShardId=31 to MaxShardCount=32
- Eliminates confusing +1 arithmetic (MaxShardId+1)
- More intuitive: MaxShardCount directly represents the limit
* fix: support custom EC ratios beyond 14 shards in VolumeEcShardsToVolume
- Add MaxShardId constant (31, since ShardBits is uint32)
- Use MaxShardId+1 (32) instead of TotalShardsCount (14) for tempShards buffer
- Prevents panic when slicing for volumes with >14 total shards
- Critical fix for custom EC configurations like 20+10
* fix: add validation for EC shard counts from VolumeInfo
- Validate DataShards/ParityShards are positive and within MaxShardCount
- Prevent zero or invalid values that could cause divide-by-zero
- Fallback to defaults if validation fails, with warning log
- VolumeEcShardsGenerate now preserves existing EC config when regenerating
- Critical safety fix for corrupted or legacy .vif files
* fix: RebuildEcFiles now loads EC config from .vif file
- Critical: RebuildEcFiles was always using default 10+4 config
- Now loads actual EC config from .vif file when rebuilding shards
- Validates config before use (positive shards, within MaxShardCount)
- Falls back to default if .vif missing or invalid
- Prevents data corruption when rebuilding custom EC volumes
* add: defensive validation for dataShards in VolumeEcShardsToVolume
- Validate dataShards > 0 and <= MaxShardCount before use
- Prevents panic from corrupted or uninitialized ECContext
- Returns clear error message instead of panic
- Defense-in-depth: validates even though upstream should catch issues
* fix: replace TotalShardsCount with MaxShardCount for custom EC ratio support
Critical fixes to support custom EC ratios > 14 shards:
disk_location_ec.go:
- validateEcVolume: Check shards 0-31 instead of 0-13 during validation
- removeEcVolumeFiles: Remove shards 0-31 instead of 0-13 during cleanup
ec_volume_info.go ShardBits methods:
- ShardIds(): Iterate up to MaxShardCount (32) instead of TotalShardsCount (14)
- ToUint32Slice(): Iterate up to MaxShardCount (32)
- IndexToShardId(): Iterate up to MaxShardCount (32)
- MinusParityShards(): Remove shards 10-31 instead of 10-13 (added note about Phase 2)
- Minus() shard size copy: Iterate up to MaxShardCount (32)
- resizeShardSizes(): Iterate up to MaxShardCount (32)
Without these changes:
- Custom EC ratios > 14 total shards would fail validation on startup
- Shards 14-31 would never be discovered or cleaned up
- ShardBits operations would miss shards >= 14
These changes are backward compatible - MaxShardCount (32) includes
the default TotalShardsCount (14), so existing 10+4 volumes work as before.
* fix: replace TotalShardsCount with MaxShardCount in critical data structures
Critical fixes for buffer allocations and loops that must support
custom EC ratios up to 32 shards:
Data Structures:
- store_ec.go:354: Buffer allocation for shard recovery (bufs array)
- topology_ec.go:14: EcShardLocations.Locations fixed array size
- command_ec_rebuild.go:268: EC shard map allocation
- command_ec_common.go:626: Shard-to-locations map allocation
Shard Discovery Loops:
- ec_task.go:378: Loop to find generated shard files
- ec_shard_management.go: All 8 loops that check/count EC shards
These changes are critical because:
1. Buffer allocations sized to 14 would cause index-out-of-bounds panics
when accessing shards 14-31
2. Fixed arrays sized to 14 would truncate shard location data
3. Loops limited to 0-13 would never discover/manage shards 14-31
Note: command_ec_encode.go:208 intentionally NOT changed - it creates
shard IDs to mount after encoding. In Phase 1 we always generate 14
shards, so this remains TotalShardsCount and will be made dynamic in
Phase 2 based on actual EC context.
Without these fixes, custom EC ratios > 14 total shards would cause:
- Runtime panics (array index out of bounds)
- Data loss (shards 14-31 never discovered/tracked)
- Incomplete shard management (missing shards not detected)
* refactor: move MaxShardCount constant to ec_encoder.go
Moved MaxShardCount from ec_volume_info.go to ec_encoder.go to group it
with other shard count constants (DataShardsCount, ParityShardsCount,
TotalShardsCount). This improves code organization and makes it easier
to understand the relationship between these constants.
Location: ec_encoder.go line 22, between TotalShardsCount and MinTotalDisks
* improve: add defensive programming and better error messages for EC
Code review improvements from CodeRabbit:
1. ShardBits Guardrails (ec_volume_info.go):
- AddShardId, RemoveShardId: Reject shard IDs >= MaxShardCount
- HasShardId: Return false for out-of-range shard IDs
- Prevents silent no-ops from bit shifts with invalid IDs
2. Future-Proof Regex (disk_location_ec.go):
- Updated regex from \.ec[0-9][0-9] to \.ec\d{2,3}
- Now matches .ec00 through .ec999 (currently .ec00-.ec31 used)
- Supports future increases to MaxShardCount beyond 99
3. Better Error Messages (volume_grpc_erasure_coding.go):
- Include valid range (1..32) in dataShards validation error
- Helps operators quickly identify the problem
4. Validation Before Save (volume_grpc_erasure_coding.go):
- Validate ECContext (DataShards > 0, ParityShards > 0, Total <= MaxShardCount)
- Log EC config being saved to .vif for debugging
- Prevents writing invalid configs to disk
These changes improve robustness and debuggability without changing
core functionality.
* fmt
* fix: critical bugs from code review + clean up comments
Critical bug fixes:
1. command_ec_rebuild.go: Fixed indentation causing compilation error
- Properly nested if/for blocks in registerEcNode
2. ec_shard_management.go: Fixed isComplete logic incorrectly using MaxShardCount
- Changed from MaxShardCount (32) back to TotalShardsCount (14)
- Default 10+4 volumes were being incorrectly reported as incomplete
- Missing shards 14-31 were being incorrectly reported as missing
- Fixed in 4 locations: volume completeness checks and getMissingShards
3. ec_volume_info.go: Fixed MinusParityShards removing too many shards
- Changed from MaxShardCount (32) back to TotalShardsCount (14)
- Was incorrectly removing shard IDs 10-31 instead of just 10-13
Comment cleanup:
- Removed Phase 1/Phase 2 references (development plan context)
- Replaced with clear statements about default 10+4 configuration
- SeaweedFS repo uses fixed 10+4 EC ratio, no phases needed
Root cause: Over-aggressive replacement of TotalShardsCount with MaxShardCount.
MaxShardCount (32) is the limit for buffer allocations and shard ID loops,
but TotalShardsCount (14) must be used for default EC configuration logic.
* fix: add defensive bounds checks and compute actual shard counts
Critical fixes from code review:
1. topology_ec.go: Add defensive bounds checks to AddShard/DeleteShard
- Prevent panic when shardId >= MaxShardCount (32)
- Return false instead of crashing on out-of-range shard IDs
2. command_ec_common.go: Fix doBalanceEcShardsAcrossRacks
- Was using hardcoded TotalShardsCount (14) for all volumes
- Now computes actual totalShardsForVolume from rackToShardCount
- Fixes incorrect rebalancing for volumes with custom EC ratios
- Example: a 5+2 volume (7 shards) would incorrectly use 14 when computing the per-rack average
These fixes improve robustness and prepare for future custom EC ratios
without changing current behavior for default 10+4 volumes.
Note: MinusParityShards and ec_task.go intentionally NOT changed for
seaweedfs repo - these will be enhanced in seaweed-enterprise repo
where custom EC ratio configuration is added.
* fmt
* style: make MaxShardCount type casting explicit in loops
Improved code clarity by explicitly casting MaxShardCount to the
appropriate type when used in loop comparisons:
- ShardId comparisons: Cast to ShardId(MaxShardCount)
- uint32 comparisons: Cast to uint32(MaxShardCount)
Changed in 5 locations:
- Minus() loop (line 90)
- ShardIds() loop (line 143)
- ToUint32Slice() loop (line 152)
- IndexToShardId() loop (line 219)
- resizeShardSizes() loop (line 248)
This makes the intent explicit and improves the readability of the type conversions.
No functional changes - purely a style improvement.
3 months ago
9f4075441c
[Admin UI] Login not possible due to securecookie error (#7374)
* [Admin UI] Login not possible due to securecookie error
* avoid 404 favicon
* Update weed/admin/dash/auth_middleware.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* address comments
* avoid variable over shadowing
* log session save error
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
3 months ago