* fix: volume balance detection now returns multiple tasks per run (#8551)
Previously, detectForDiskType() returned at most 1 balance task per disk
type, making the MaxJobsPerDetection setting ineffective. The detection
loop now iterates within each disk type, planning multiple moves until
the imbalance drops below threshold or maxResults is reached. Effective
volume counts are adjusted after each planned move so the algorithm
correctly re-evaluates which server is overloaded.
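For illustration, a minimal sketch of the reworked loop under these assumptions: the helper names, the imbalance metric, and the threshold value are placeholders, not the real detectForDiskType internals.
```go
// Placeholder names and metric; not the real detectForDiskType implementation.
package balance

const imbalanceThreshold = 0.1 // placeholder threshold

type move struct{ from, to string }

func planMovesForDiskType(counts map[string]int, maxResults int) []move {
    effective := make(map[string]int, len(counts))
    for s, c := range counts {
        effective[s] = c
    }
    var results []move
    for len(results) < maxResults {
        maxS, minS := pickMaxMin(effective)
        if maxS == "" || maxS == minS {
            break
        }
        total := 0
        for _, c := range effective {
            total += c
        }
        avg := float64(total) / float64(len(effective))
        if avg == 0 || float64(effective[maxS]-effective[minS])/avg < imbalanceThreshold {
            break // balanced enough: stop before reaching maxResults
        }
        results = append(results, move{from: maxS, to: minS})
        // Adjust effective counts so the next iteration re-evaluates which
        // server is now the most overloaded.
        effective[maxS]--
        effective[minS]++
    }
    return results
}

func pickMaxMin(counts map[string]int) (maxS, minS string) {
    for s, c := range counts {
        if maxS == "" || c > counts[maxS] {
            maxS = s
        }
        if minS == "" || c < counts[minS] {
            minS = s
        }
    }
    return
}
```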
* fix: factor pending tasks into destination scoring and use UnixNano for task IDs
- Use UnixNano instead of Unix for task IDs to avoid collisions when
multiple tasks are created within the same second
- Adjust calculateBalanceScore to include LoadCount (pending + assigned
tasks) in the utilization estimate, so the destination picker avoids
stacking multiple planned moves onto the same target disk
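A rough sketch of both changes; the ID format, the scoring weight, and the diskInfo fields are illustrative stand-ins for the real ActiveTopology disk data.
```go
// Illustrative only: field names and weight are placeholders.
package balance

import (
    "fmt"
    "time"
)

type diskInfo struct {
    VolumeCount int64
    LoadCount   int // pending + assigned tasks already targeting this disk
    MaxVolumes  int64
}

func taskID(volumeID uint32) string {
    // UnixNano keeps IDs unique even when several tasks are created in one second.
    return fmt.Sprintf("balance-%d-%d", volumeID, time.Now().UnixNano())
}

func balanceScore(d diskInfo) float64 {
    if d.MaxVolumes <= 0 {
        return 0
    }
    // Count planned moves as if they had already landed, so the destination
    // picker does not stack several moves onto the same disk.
    effective := float64(d.VolumeCount) + float64(d.LoadCount)
    utilization := effective / float64(d.MaxVolumes)
    return 40 * (1 - utilization) // weight is illustrative; more headroom scores higher
}
```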
* test: add comprehensive balance detection tests for complex scenarios
Cover multi-server convergence, max-server shifting, destination
spreading, pre-existing pending task skipping, no-duplicate-volume
invariant, and parameterized convergence verification across different
cluster shapes and thresholds.
* fix: address PR review findings in balance detection
- hasMore flag: compute from len(results) >= maxResults so the scheduler
  knows more pages may exist, matching the vacuum/EC handler pattern
- Exhausted server fallthrough: when no eligible volumes remain on the
current maxServer (all have pending tasks) or destination planning
fails, mark the server as exhausted and continue to the next
overloaded server instead of stopping the entire detection loop
- Return canonical destination server ID directly from createBalanceTask
instead of resolving via findServerIDByAddress, eliminating the
fragile address→ID lookup for adjustment tracking
- Fix bestScore sentinel: use math.Inf(-1) instead of -1.0 so disks
with negative scores (high pending load, same rack/DC) are still
selected as the best available destination
- Add TestDetection_ExhaustedServerFallsThrough covering the scenario
where the top server's volumes are all blocked by pre-existing tasks
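The sentinel change, sketched with a simplified candidate type; the real code scores ActiveTopology disks, but the fix is the same.
```go
// diskCandidate is a trimmed stand-in for the real destination candidate.
package balance

import "math"

type diskCandidate struct {
    NodeID string
    DiskID uint32
    Score  float64 // may be negative after rack/DC and pending-load penalties
}

func pickDestination(candidates []diskCandidate) *diskCandidate {
    var best *diskCandidate
    bestScore := math.Inf(-1) // sits below any real score, including negative ones
    for i := range candidates {
        if candidates[i].Score > bestScore {
            bestScore = candidates[i].Score
            best = &candidates[i]
        }
    }
    return best
}
```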
* test: fix computeEffectiveCounts and add len guard in no-duplicate test
- computeEffectiveCounts now takes a servers slice to seed counts for all
known servers (including empty ones) and uses an address→ID map from
the topology spec instead of scanning metrics, so destination servers
with zero initial volumes are tracked correctly
- TestDetection_NoDuplicateVolumesAcrossIterations now asserts len > 1
before checking duplicates, so the test actually fails if Detection
regresses to returning a single task
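A trimmed-down sketch of the helper after the fix; the metric and planned-task types are simplified for illustration.
```go
// Simplified types; the real helper reads the topology spec and task results.
package balancetest

type volumeMetric struct {
    Server string // server ID
}

type plannedTask struct {
    SourceAddr string
    DestAddr   string
}

func computeEffectiveCounts(servers []string, addrToID map[string]string,
    metrics []volumeMetric, tasks []plannedTask) map[string]int {
    counts := make(map[string]int, len(servers))
    for _, s := range servers {
        counts[s] = 0 // seed every known server, including empty ones
    }
    for _, m := range metrics {
        counts[m.Server]++
    }
    for _, t := range tasks {
        counts[addrToID[t.SourceAddr]]--
        counts[addrToID[t.DestAddr]]++
    }
    return counts
}
```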
* fix: remove redundant HasAnyTask check in createBalanceTask
The HasAnyTask check in createBalanceTask duplicated the same check
already performed in detectForDiskType's volume selection loop.
Since detection runs single-threaded (MaxDetectionConcurrency: 1),
no race can occur between the two points.
* fix: consistent hasMore pattern and remove double-counted LoadCount in scoring
- Adopt vacuum_handler's hasMore pattern: over-fetch by 1, check
len > maxResults, and truncate — consistent truncation semantics
- Remove direct LoadCount penalty in calculateBalanceScore since
LoadCount is already factored into effectiveVolumeCount for
utilization scoring; bump utilization weight from 40 to 50 to
compensate for the removed 10-point load penalty
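The adopted over-fetch pattern, sketched with a stand-in detect callback (a later entry below drops the over-fetch again for stateful detection).
```go
// detect is a stand-in for the underlying detection call.
package balance

type proposal struct{ VolumeID uint32 }

func detectWithHasMore(detect func(limit int) []proposal, maxResults int) ([]proposal, bool) {
    results := detect(maxResults + 1) // ask for one extra result purely as a probe
    hasMore := len(results) > maxResults
    if hasMore {
        results = results[:maxResults] // trim the probe before returning
    }
    return results, hasMore
}
```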
* fix: handle zero maxResults as no-cap, emit trace after trim, seed empty servers
- When MaxResults is 0 (omitted), treat as no explicit cap instead of
  defaulting to 1; only apply the +1 over-fetch probe when the caller
  supplies a positive limit
- Move decision trace emission after hasMore/trim so the trace
accurately reflects the returned proposals
- Seed serverVolumeCounts from ActiveTopology so servers that have a
matching disk type but zero volumes are included in the imbalance
calculation and MinServerCount check
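The limit handling, sketched; the helper name is illustrative.
```go
// resolveLimit is a hypothetical helper mapping the configured MaxResults onto
// the detection request: zero (or negative) means "no explicit cap", and the
// +1 over-fetch probe applies only when the caller supplies a positive limit.
package balance

func resolveLimit(maxResults int) (requested int, probing bool) {
    if maxResults <= 0 {
        return 0, false // 0 propagates as "unlimited"
    }
    return maxResults + 1, true
}
```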
* fix: nil-guard clusterInfo, uncap legacy DetectionFunc, deterministic disk type order
- Add early nil guard for clusterInfo in Detection to prevent panics
in downstream helpers (detectForDiskType, createBalanceTask)
- Change register.go DetectionFunc wrapper from maxResults=1 to 0
(no cap) so the legacy code path returns all detected tasks
- Sort disk type keys before iteration so results are deterministic
when maxResults spans multiple disk types (HDD/SSD)
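A sketch of the deterministic disk-type iteration, with the grouped metrics reduced to volume IDs.
```go
// volumesByDiskType is a stand-in for the grouped metrics used by Detection.
package balance

import "sort"

func detectAcrossDiskTypes(volumesByDiskType map[string][]uint32, maxResults int) []uint32 {
    diskTypes := make([]string, 0, len(volumesByDiskType))
    for dt := range volumesByDiskType {
        diskTypes = append(diskTypes, dt)
    }
    sort.Strings(diskTypes) // map iteration order is random; sort for determinism

    var results []uint32
    for _, dt := range diskTypes {
        for _, v := range volumesByDiskType[dt] {
            if maxResults > 0 && len(results) >= maxResults {
                return results // a cap that spans HDD and SSD now trims predictably
            }
            results = append(results, v)
        }
    }
    return results
}
```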
* fix: don't over-fetch in stateful detection to avoid orphaned pending tasks
Detection registers planned moves in ActiveTopology via AddPendingTask,
so requesting maxResults+1 would create an extra pending task that gets
discarded during trim. Use len(results) >= maxResults as the hasMore
signal instead, which is correct since Detection already caps internally.
* fix: return explicit truncated flag from Detection instead of approximating
Detection now returns (results, truncated, error) where truncated is true
only when the loop stopped because it hit maxResults, not when it ran out
of work naturally. This eliminates false hasMore signals when detection
resolves the imbalance with exactly maxResults results.
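Roughly, the new shape of Detection; the planner callback is a stand-in, and a later entry refines the exact-cap edge case with a balanced flag.
```go
// planNextMove is a stand-in for the real per-iteration planning step.
package balance

type proposal struct{ VolumeID uint32 }

func Detection(planNextMove func() (proposal, bool), maxResults int) ([]proposal, bool, error) {
    var results []proposal
    truncated := false
    for {
        if maxResults > 0 && len(results) >= maxResults {
            truncated = true // stopped because of the cap, not because work ran out
            break
        }
        p, ok := planNextMove()
        if !ok {
            break // imbalance resolved: truncated stays false
        }
        results = append(results, p)
    }
    return results, truncated, nil
}
```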
* cleanup: simplify detection logic and remove redundancies
- Remove redundant clusterInfo nil check in detectForDiskType since
Detection already guards against nil clusterInfo
- Remove adjustments loop for destination servers not in
serverVolumeCounts — topology seeding ensures all servers with
matching disk type are already present
- Merge two-loop min/max calculation into a single loop: min across
all servers, max only among non-exhausted servers
- Replace magic number 100 with len(metrics) for minC initialization
in convergence test
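The merged pass, sketched; exhausted marks servers whose eligible volumes are all blocked or whose destinations could not be planned.
```go
// Simplified stand-in for the single min/max loop described above.
package balance

func minMaxServers(counts map[string]int, exhausted map[string]bool) (minS, maxS string) {
    for s, c := range counts {
        if minS == "" || c < counts[minS] {
            minS = s // min considers every server, including exhausted ones
        }
        if exhausted[s] {
            continue // exhausted servers can still receive, but never donate
        }
        if maxS == "" || c > counts[maxS] {
            maxS = s
        }
    }
    return minS, maxS
}
```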
* fix: accurate truncation flag, deterministic server order, indexed volume lookup
- Track balanced flag to distinguish "hit maxResults cap" from "cluster
balanced at exactly maxResults" — truncated is only true when there's
genuinely more work to do
- Sort servers for deterministic iteration and tie-breaking when
multiple servers have equal volume counts
- Pre-index volumes by server with per-server cursors to avoid
O(maxResults * volumes) rescanning on each iteration
- Add truncation flag assertions to RespectsMaxResults test: true when
capped, false when detection finishes naturally
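A sketch of the per-server index with cursors; the metric type and index shape are simplified stand-ins.
```go
// volumeIndex is an illustrative structure, not the real detection state.
package balance

type volumeMetric struct {
    Server   string
    VolumeID uint32
}

type volumeIndex struct {
    byServer map[string][]volumeMetric
    cursor   map[string]int // next candidate per server; never rewinds
}

func newVolumeIndex(metrics []volumeMetric) *volumeIndex {
    idx := &volumeIndex{byServer: map[string][]volumeMetric{}, cursor: map[string]int{}}
    for _, m := range metrics {
        idx.byServer[m.Server] = append(idx.byServer[m.Server], m)
    }
    return idx
}

// next returns the next unconsidered volume on a server, advancing the cursor
// so each volume is scanned at most once across all detection iterations.
func (idx *volumeIndex) next(server string) (volumeMetric, bool) {
    vols := idx.byServer[server]
    if idx.cursor[server] >= len(vols) {
        return volumeMetric{}, false
    }
    v := vols[idx.cursor[server]]
    idx.cursor[server]++
    return v, true
}
```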
* fix: seed trace server counts from ActiveTopology to match detection logic
The decision trace was building serverVolumeCounts only from metrics,
missing zero-volume servers seeded from ActiveTopology by Detection.
This could cause the trace to report wrong server counts, incorrect
imbalance ratios, or spurious "too few servers" messages. Pass
activeTopology into the trace function and seed server counts the
same way Detection does.
* fix: don't exhaust server on per-volume planning failure, sort volumes by ID
- When createBalanceTask returns nil, continue to the next volume on
the same server instead of marking the entire server as exhausted.
The failure may be volume-specific (not found in topology, pending
task registration failed) and other volumes on the server may still
be viable candidates.
- Sort each server's volume slice by VolumeID after pre-indexing so
volume selection is fully deterministic regardless of input order.
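Both changes sketched together with simplified types; createBalanceTask is reduced to a callback here.
```go
// Illustrative only; the real loop lives inside detectForDiskType.
package balance

import "sort"

type volumeMetric struct {
    Server   string
    VolumeID uint32
}

type proposal struct{ VolumeID uint32 }

func sortServerVolumes(byServer map[string][]volumeMetric) {
    for _, vols := range byServer {
        sort.Slice(vols, func(i, j int) bool { return vols[i].VolumeID < vols[j].VolumeID })
    }
}

func planFromServer(vols []volumeMetric, createBalanceTask func(volumeMetric) *proposal) *proposal {
    for _, v := range vols {
        if p := createBalanceTask(v); p != nil {
            return p
        }
        // The failure may be specific to this volume (missing from topology,
        // pending-task registration failed); try the next one rather than
        // marking the whole server exhausted.
    }
    return nil // only now does the caller treat the server as exhausted
}
```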
* fix: use require instead of assert to prevent nil dereference panic in CORS test
The test used assert.NoError (non-fatal) for GetBucketCors, then
immediately accessed getResp.CORSRules. When the API returns an error,
getResp is nil, causing a panic. Switch to require.NoError/NotNil/Len
so the test stops before dereferencing a nil response.
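The resulting pattern, with a stand-in response type in place of the real SDK output and the client call elided.
```go
// getBucketCorsOutput is a simplified stand-in for the real response type.
package s3tests

import (
    "testing"

    "github.com/stretchr/testify/require"
)

type getBucketCorsOutput struct {
    CORSRules []string // simplified stand-in for the real rule type
}

func checkCORSRules(t *testing.T, getResp *getBucketCorsOutput, err error) {
    // require stops the test on failure, so a nil getResp is never dereferenced.
    require.NoError(t, err)
    require.NotNil(t, getResp)
    require.Len(t, getResp.CORSRules, 1)
}
```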
* fix: deterministic disk tie-breaking and stronger pre-existing task test
- Sort available disks by NodeID then DiskID before scoring so
destination selection is deterministic when two disks score equally
- Add task count bounds assertion to SkipsPreExistingPendingTasks test:
with 15 of 20 volumes already having pending tasks, at most 5 new
tasks should be created and at least 1 (imbalance still exists)
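The tie-breaking sort, sketched with a trimmed candidate type.
```go
// diskCandidate keeps only the fields relevant to the ordering.
package balance

import "sort"

type diskCandidate struct {
    NodeID string
    DiskID uint32
}

func sortDisks(disks []diskCandidate) {
    sort.Slice(disks, func(i, j int) bool {
        if disks[i].NodeID != disks[j].NodeID {
            return disks[i].NodeID < disks[j].NodeID
        }
        return disks[i].DiskID < disks[j].DiskID
    })
    // Scoring now sees candidates in a stable order, so equal scores always
    // resolve to the same destination.
}
```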
* fix: seed adjustments from existing pending/assigned tasks to prevent over-scheduling
Detection now calls ActiveTopology.GetTaskServerAdjustments() to
initialize the adjustments map with source/destination deltas from
existing pending and assigned balance tasks. This ensures
effectiveCounts reflects in-flight moves, preventing the algorithm
from planning additional moves in the same direction when prior
moves already address the imbalance.
Added GetTaskServerAdjustments(taskType) to ActiveTopology, which
iterates pending and assigned tasks, decrementing source servers
and incrementing destination servers for the given task type.
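A sketch of the accessor, with ActiveTopology's task bookkeeping reduced to two slices of a simplified task record.
```go
// trackedTask and activeTopology are illustrative stand-ins for the real types.
package balance

type taskType string

type trackedTask struct {
    Type     taskType
    SourceID string
    DestID   string
}

type activeTopology struct {
    pendingTasks  []trackedTask
    assignedTasks []trackedTask
}

// GetTaskServerAdjustments returns per-server volume-count deltas implied by
// in-flight tasks of the given type: -1 on each source, +1 on each destination.
func (a *activeTopology) GetTaskServerAdjustments(tt taskType) map[string]int {
    adjustments := make(map[string]int)
    for _, list := range [][]trackedTask{a.pendingTasks, a.assignedTasks} {
        for _, t := range list {
            if t.Type != tt {
                continue
            }
            adjustments[t.SourceID]--
            adjustments[t.DestID]++
        }
    }
    return adjustments
}
```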