
Add volume dir tags and EC placement priority (#8472)

* Add volume dir tags to topology

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Add preferred tag config for EC

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Prioritize EC destinations by tags

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Add EC placement planner tag tests

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Refactor EC placement tests to reuse buildActiveTopology

Remove buildActiveTopologyWithDiskTags helper function and consolidate
tag setup inline in test cases. Tests now use UpdateTopology to apply
tags after topology creation, reusing the existing buildActiveTopology
function rather than duplicating its logic.

All tag scenario tests pass:
- TestECPlacementPlannerPrefersTaggedDisks
- TestECPlacementPlannerFallsBackWhenTagsInsufficient

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Consolidate normalizeTagList into shared util package

Extract normalizeTagList from three locations (volume.go,
detection.go, erasure_coding_handler.go) into a new weed/util/tag.go
as the exported NormalizeTagList function. Replace all duplicate
implementations with imports and calls to util.NormalizeTagList.

This improves code reuse and maintainability by centralizing
tag normalization logic.
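
As a quick reference, a minimal usage sketch of the new helper (based on the
weed/util/tag.go implementation included in this commit; the input values are made up):

package main

import (
	"fmt"

	"github.com/seaweedfs/seaweedfs/weed/util"
)

func main() {
	// Lowercases, trims, de-duplicates, and drops empty entries.
	fmt.Println(util.NormalizeTagList([]string{" Fast", "SSD", "fast", ""}))
	// Output: [fast ssd]
}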

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Add PreferredTags to EC config persistence

Add preferred_tags field to ErasureCodingTaskConfig protobuf with field
number 5. Update GetConfigSpec to include preferred_tags field in the
UI configuration schema. Add PreferredTags to ToTaskPolicy to serialize
config to protobuf. Add PreferredTags to FromTaskPolicy to deserialize
from protobuf with defensive copy to prevent external mutation.

This allows EC preferred tags to be persisted and restored across
worker restarts.
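
A rough round-trip sketch of how this is expected to behave (using the
erasure_coding task package import alias seen elsewhere in this commit;
this is an illustration, not code taken from the change itself):

package main

import (
	erasurecodingtask "github.com/seaweedfs/seaweedfs/weed/worker/tasks/erasure_coding"
)

func main() {
	cfg := erasurecodingtask.NewDefaultConfig()
	cfg.PreferredTags = []string{"fast", "ssd"} // illustrative tag values

	// Serialize to protobuf; PreferredTags is copied defensively.
	policy := cfg.ToTaskPolicy()

	// Restore into a fresh config, as would happen after a worker restart.
	restored := erasurecodingtask.NewDefaultConfig()
	if err := restored.FromTaskPolicy(policy); err == nil {
		_ = restored.PreferredTags // []string{"fast", "ssd"}
	}
}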

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Add defensive copy for Tags slice in DiskLocation

Copy the incoming tags slice in NewDiskLocation instead of storing it
by reference. This prevents external callers from mutating the
DiskLocation.Tags slice after construction, improving encapsulation
and preventing unexpected changes to disk metadata.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Add doc comment to buildCandidateSets method

Document the tiered candidate selection and fallback behavior. Explain
that for a planner with preferredTags, it accumulates disks matching
each tag in order into progressively larger tiers, emits a candidate
set once a tier reaches shardsNeeded, and finally falls back to the
full candidates set if preferred-tag tiers are insufficient.

This clarifies the intended semantics for future maintainers.
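
A condensed standalone sketch of that tiering behavior (a simplified
illustration, not the planner code itself; disk IDs and tags are invented):

package main

import "fmt"

type disk struct {
	id   string
	tags map[string]bool
}

// buildTiers mirrors the documented behavior: walk preferredTags in order,
// accumulate matching disks into a growing tier, emit a candidate set once
// the tier reaches shardsNeeded, and fall back to the full disk list last.
func buildTiers(disks []disk, preferredTags []string, shardsNeeded int) [][]disk {
	seen := map[string]bool{}
	var tier []disk
	var sets [][]disk
	for _, tag := range preferredTags {
		for _, d := range disks {
			if !seen[d.id] && d.tags[tag] {
				seen[d.id] = true
				tier = append(tier, d)
			}
		}
		if len(tier) >= shardsNeeded {
			sets = append(sets, append([]disk(nil), tier...))
		}
	}
	if len(tier) < len(disks) || len(sets) == 0 {
		sets = append(sets, disks)
	}
	return sets
}

func main() {
	disks := []disk{
		{"a", map[string]bool{"fast": true}},
		{"b", map[string]bool{"ssd": true}},
		{"c", nil},
	}
	for _, set := range buildTiers(disks, []string{"fast", "ssd"}, 2) {
		for _, d := range set {
			fmt.Print(d.id, " ")
		}
		fmt.Println()
	}
	// Prints "a b" (the fast+ssd tier reaches 2 shards), then "a b c" (full fallback).
}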

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Apply final PR review fixes

1. Update parseVolumeTags to replicate a single tag entry to all folders
   instead of leaving some folders with nil tags. This prevents nil
   pointer dereferences when processing folders without explicit tags.

2. Add defensive copy in ToTaskPolicy for PreferredTags slice to match
   the pattern used in FromTaskPolicy, preventing external mutation of
   the returned TaskPolicy.

3. Add clarifying comment in buildCandidateSets explaining that the
   shardsNeeded <= 0 branch is a defensive check for direct callers,
   since selectDestinations guarantees shardsNeeded > 0.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Fix nil pointer dereference in parseVolumeTags

Ensure all folder tags are initialized to either normalized tags or
empty slices, not nil. When multiple tag entries are provided and there
are more folders than entries, the remaining folders now get empty slices
instead of nil, preventing nil pointer dereference in downstream code.
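
Illustrative mapping (directory names are made up; this follows the semantics
described above and the parseVolumeTags code in the diff below):

weed volume -dir=/data1,/data2,/data3 -tags=fast:ssd
  -> /data1=[fast ssd]  /data2=[fast ssd]  /data3=[fast ssd]   (single entry replicated)

weed volume -dir=/data1,/data2,/data3 -tags=fast:ssd,archive
  -> /data1=[fast ssd]  /data2=[archive]   /data3=[]           (missing entries become empty slices)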

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Fix NormalizeTagList to return empty slice instead of nil

Change NormalizeTagList to always return a non-nil slice. When all tags
are empty or whitespace after normalization, return an empty slice
instead of nil. This prevents nil pointer dereferences in downstream
code that expects a valid (possibly empty) slice.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Add nil safety check for v.tags pointer

Add a safety check to handle the case where v.tags might be nil,
preventing a nil pointer dereference. If v.tags is nil, use an empty
string instead. This is defensive programming to prevent panics in
edge cases.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Add volume.tags flag to weed server and weed mini commands

Add the volume.tags CLI option to both the 'weed server' and 'weed mini'
commands. This allows users to specify disk tags when running the
combined server modes, just like they can with 'weed volume'.

The flag uses the same format and description as the volume command:
comma-separated tag groups per data dir with ':' separators
(e.g. fast:ssd,archive).
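
For example (an illustrative invocation; the data directories are made up):

weed server -dir=/data1,/data2 -volume.tags=fast:ssd,archive

weed mini accepts the same -volume.tags flag.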

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <copilot@github.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Branch: pull/8475/head
Chris Lu committed 2 days ago (via GitHub)
Commit: f5c35240be
19 changed files (per-file changed line count in parentheses):
  1. weed/admin/topology/capacity.go (2)
  2. weed/command/mini.go (1)
  3. weed/command/server.go (1)
  4. weed/command/volume.go (40)
  5. weed/pb/master.proto (10)
  6. weed/pb/master_pb/master.pb.go (810)
  7. weed/pb/worker.proto (1)
  8. weed/pb/worker_pb/worker.pb.go (13)
  9. weed/plugin/worker/erasure_coding_handler.go (16)
  10. weed/server/master_grpc_server.go (1)
  11. weed/server/volume_server.go (4)
  12. weed/storage/disk_location.go (10)
  13. weed/storage/store.go (15)
  14. weed/storage/store_load_balancing_test.go (2)
  15. weed/topology/data_node.go (33)
  16. weed/util/tag.go (25)
  17. weed/worker/tasks/erasure_coding/config.go (27)
  18. weed/worker/tasks/erasure_coding/detection.go (108)
  19. weed/worker/tasks/erasure_coding/detection_test.go (70)

weed/admin/topology/capacity.go (2)

@ -115,6 +115,7 @@ func (at *ActiveTopology) GetDisksWithEffectiveCapacity(taskType TaskType, exclu
RemoteVolumeCount: disk.DiskInfo.DiskInfo.RemoteVolumeCount,
ActiveVolumeCount: disk.DiskInfo.DiskInfo.ActiveVolumeCount,
FreeVolumeCount: disk.DiskInfo.DiskInfo.FreeVolumeCount,
Tags: append([]string(nil), disk.DiskInfo.DiskInfo.Tags...),
}
diskCopy.DiskInfo = diskInfoCopy
diskCopy.DiskInfo.MaxVolumeCount = disk.DiskInfo.DiskInfo.MaxVolumeCount // Ensure Max is set
@ -178,6 +179,7 @@ func (at *ActiveTopology) GetDisksForPlanning(taskType TaskType, excludeNodeID s
RemoteVolumeCount: disk.DiskInfo.DiskInfo.RemoteVolumeCount,
ActiveVolumeCount: disk.DiskInfo.DiskInfo.ActiveVolumeCount,
FreeVolumeCount: disk.DiskInfo.DiskInfo.FreeVolumeCount,
Tags: append([]string(nil), disk.DiskInfo.DiskInfo.Tags...),
}
diskCopy.DiskInfo = diskInfoCopy

weed/command/mini.go (1)

@ -204,6 +204,7 @@ func initMiniVolumeFlags() {
miniOptions.v.publicUrl = cmdMini.Flag.String("volume.publicUrl", "", "publicly accessible address")
miniOptions.v.indexType = cmdMini.Flag.String("volume.index", "memory", "Choose [memory|leveldb|leveldbMedium|leveldbLarge] mode for memory~performance balance.")
miniOptions.v.diskType = cmdMini.Flag.String("volume.disk", "", "[hdd|ssd|<tag>] hard drive or solid state drive or any tag")
miniOptions.v.tags = cmdMini.Flag.String("volume.tags", "", "comma-separated tag groups per data dir; each group uses ':' (e.g. fast:ssd,archive)")
miniOptions.v.fixJpgOrientation = cmdMini.Flag.Bool("volume.images.fix.orientation", false, "Adjust jpg orientation when uploading.")
miniOptions.v.readMode = cmdMini.Flag.String("volume.readMode", "proxy", "[local|proxy|redirect] how to deal with non-local volume: 'not found|read in remote node|redirect volume location'.")
miniOptions.v.compactionMBPerSecond = cmdMini.Flag.Int("volume.compactionMBps", 0, "limit compaction speed in mega bytes per second")

weed/command/server.go (1)

@ -137,6 +137,7 @@ func init() {
serverOptions.v.id = cmdServer.Flag.String("volume.id", "", "volume server id. If empty, default to ip:port")
serverOptions.v.indexType = cmdServer.Flag.String("volume.index", "memory", "Choose [memory|leveldb|leveldbMedium|leveldbLarge] mode for memory~performance balance.")
serverOptions.v.diskType = cmdServer.Flag.String("volume.disk", "", "[hdd|ssd|<tag>] hard drive or solid state drive or any tag")
serverOptions.v.tags = cmdServer.Flag.String("volume.tags", "", "comma-separated tag groups per data dir; each group uses ':' (e.g. fast:ssd,archive)")
serverOptions.v.fixJpgOrientation = cmdServer.Flag.Bool("volume.images.fix.orientation", false, "Adjust jpg orientation when uploading.")
serverOptions.v.readMode = cmdServer.Flag.String("volume.readMode", "proxy", "[local|proxy|redirect] how to deal with non-local volume: 'not found|read in remote node|redirect volume location'.")
serverOptions.v.compactionMBPerSecond = cmdServer.Flag.Int("volume.compactionMBps", 0, "limit compaction speed in mega bytes per second")

weed/command/volume.go (40)

@ -53,6 +53,7 @@ type VolumeServerOptions struct {
whiteList []string
indexType *string
diskType *string
tags *string
fixJpgOrientation *bool
readMode *string
cpuProfile *string
@ -94,6 +95,7 @@ func init() {
v.rack = cmdVolume.Flag.String("rack", "", "current volume server's rack name")
v.indexType = cmdVolume.Flag.String("index", "memory", "Choose [memory|leveldb|leveldbMedium|leveldbLarge] mode for memory~performance balance.")
v.diskType = cmdVolume.Flag.String("disk", "", "[hdd|ssd|<tag>] hard drive or solid state drive or any tag")
v.tags = cmdVolume.Flag.String("tags", "", "comma-separated tag groups per data dir; each group uses ':' (e.g. fast:ssd,archive)")
v.fixJpgOrientation = cmdVolume.Flag.Bool("images.fix.orientation", false, "Adjust jpg orientation when uploading.")
v.readMode = cmdVolume.Flag.String("readMode", "proxy", "[local|proxy|redirect] how to deal with non-local volume: 'not found|proxy to remote node|redirect volume location'.")
v.cpuProfile = cmdVolume.Flag.String("cpuprofile", "", "cpu profile output file")
@ -219,6 +221,12 @@ func (v VolumeServerOptions) startVolumeServer(volumeFolders, maxVolumeCounts, v
glog.Fatalf("%d directories by -dir, but only %d disk types is set by -disk", len(v.folders), len(diskTypes))
}
var tagsArg string
if v.tags != nil {
tagsArg = *v.tags
}
folderTags := parseVolumeTags(tagsArg, len(v.folders))
// security related white list configuration
v.whiteList = util.StringSplit(volumeWhiteListOption, ",")
@ -269,7 +277,7 @@ func (v VolumeServerOptions) startVolumeServer(volumeFolders, maxVolumeCounts, v
volumeServer := weed_server.NewVolumeServer(volumeMux, publicVolumeMux,
*v.ip, *v.port, *v.portGrpc, *v.publicUrl, volumeServerId,
v.folders, v.folderMaxLimits, minFreeSpaces, diskTypes,
v.folders, v.folderMaxLimits, minFreeSpaces, diskTypes, folderTags,
*v.idxFolder,
volumeNeedleMapKind,
v.masters, constants.VolumePulsePeriod, *v.dataCenter, *v.rack,
@ -334,6 +342,36 @@ func (v VolumeServerOptions) startVolumeServer(volumeFolders, maxVolumeCounts, v
}
func parseVolumeTags(tagsArg string, folderCount int) [][]string {
if folderCount <= 0 {
return nil
}
tagEntries := []string{}
if strings.TrimSpace(tagsArg) != "" {
tagEntries = strings.Split(tagsArg, ",")
}
folderTags := make([][]string, folderCount)
// If exactly one tag entry provided, replicate it to all folders
if len(tagEntries) == 1 {
normalized := util.NormalizeTagList(strings.Split(tagEntries[0], ":"))
for i := 0; i < folderCount; i++ {
folderTags[i] = append([]string(nil), normalized...)
}
} else {
// Otherwise, assign tags to folders that have explicit entries
for i := 0; i < folderCount; i++ {
if i < len(tagEntries) {
folderTags[i] = util.NormalizeTagList(strings.Split(tagEntries[i], ":"))
} else {
// Initialize remaining folders with empty tag slice
folderTags[i] = []string{}
}
}
}
return folderTags
}
func shutdown(publicHttpDown httpdown.Server, clusterHttpServer httpdown.Server, grpcS *grpc.Server, volumeServer *weed_server.VolumeServer) {
// firstly, stop the public http service to prevent from receiving new user request

weed/pb/master.proto (10)

@ -61,6 +61,11 @@ service Seaweed {
//////////////////////////////////////////////////
message DiskTag {
uint32 disk_id = 1;
repeated string tags = 2;
}
message Heartbeat {
string ip = 1;
uint32 port = 2;
@ -89,6 +94,8 @@ message Heartbeat {
// state flags
volume_server_pb.VolumeServerState state = 23;
repeated DiskTag disk_tags = 24;
}
message HeartbeatResponse {
@ -292,6 +299,7 @@ message DiskInfo {
repeated VolumeEcShardInformationMessage ec_shard_infos = 7;
int64 remote_volume_count = 8;
uint32 disk_id = 9;
repeated string tags = 10;
}
message DataNodeInfo {
string id = 1;
@ -460,4 +468,4 @@ message RaftLeadershipTransferResponse {
}
message VolumeGrowResponse {
}
}

weed/pb/master_pb/master.pb.go (810)
File diff suppressed because it is too large

weed/pb/worker.proto (1)

@ -314,6 +314,7 @@ message ErasureCodingTaskConfig {
int32 quiet_for_seconds = 2; // Minimum quiet time before EC
int32 min_volume_size_mb = 3; // Minimum volume size for EC
string collection_filter = 4; // Only process volumes from specific collections
repeated string preferred_tags = 5; // Disk tags to prioritize for EC shard placement
}
// BalanceTaskConfig contains balance-specific configuration

weed/pb/worker_pb/worker.pb.go (13)

@ -2589,6 +2589,7 @@ type ErasureCodingTaskConfig struct {
QuietForSeconds int32 `protobuf:"varint,2,opt,name=quiet_for_seconds,json=quietForSeconds,proto3" json:"quiet_for_seconds,omitempty"` // Minimum quiet time before EC
MinVolumeSizeMb int32 `protobuf:"varint,3,opt,name=min_volume_size_mb,json=minVolumeSizeMb,proto3" json:"min_volume_size_mb,omitempty"` // Minimum volume size for EC
CollectionFilter string `protobuf:"bytes,4,opt,name=collection_filter,json=collectionFilter,proto3" json:"collection_filter,omitempty"` // Only process volumes from specific collections
PreferredTags []string `protobuf:"bytes,5,rep,name=preferred_tags,json=preferredTags,proto3" json:"preferred_tags,omitempty"` // Disk tags to prioritize for EC shard placement
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
@ -2651,6 +2652,13 @@ func (x *ErasureCodingTaskConfig) GetCollectionFilter() string {
return ""
}
func (x *ErasureCodingTaskConfig) GetPreferredTags() []string {
if x != nil {
return x.PreferredTags
}
return nil
}
// BalanceTaskConfig contains balance-specific configuration
type BalanceTaskConfig struct {
state protoimpl.MessageState `protogen:"open.v1"`
@ -3559,12 +3567,13 @@ const file_worker_proto_rawDesc = "" +
"\x10VacuumTaskConfig\x12+\n" +
"\x11garbage_threshold\x18\x01 \x01(\x01R\x10garbageThreshold\x12/\n" +
"\x14min_volume_age_hours\x18\x02 \x01(\x05R\x11minVolumeAgeHours\x120\n" +
"\x14min_interval_seconds\x18\x03 \x01(\x05R\x12minIntervalSeconds\"\xc6\x01\n" +
"\x14min_interval_seconds\x18\x03 \x01(\x05R\x12minIntervalSeconds\"\xed\x01\n" +
"\x17ErasureCodingTaskConfig\x12%\n" +
"\x0efullness_ratio\x18\x01 \x01(\x01R\rfullnessRatio\x12*\n" +
"\x11quiet_for_seconds\x18\x02 \x01(\x05R\x0fquietForSeconds\x12+\n" +
"\x12min_volume_size_mb\x18\x03 \x01(\x05R\x0fminVolumeSizeMb\x12+\n" +
"\x11collection_filter\x18\x04 \x01(\tR\x10collectionFilter\"n\n" +
"\x11collection_filter\x18\x04 \x01(\tR\x10collectionFilter\x12%\n" +
"\x0epreferred_tags\x18\x05 \x03(\tR\rpreferredTags\"n\n" +
"\x11BalanceTaskConfig\x12/\n" +
"\x13imbalance_threshold\x18\x01 \x01(\x01R\x12imbalanceThreshold\x12(\n" +
"\x10min_server_count\x18\x02 \x01(\x05R\x0eminServerCount\"I\n" +

weed/plugin/worker/erasure_coding_handler.go (16)

@ -11,6 +11,7 @@ import (
"github.com/seaweedfs/seaweedfs/weed/glog"
"github.com/seaweedfs/seaweedfs/weed/pb/plugin_pb"
"github.com/seaweedfs/seaweedfs/weed/pb/worker_pb"
"github.com/seaweedfs/seaweedfs/weed/util"
ecstorage "github.com/seaweedfs/seaweedfs/weed/storage/erasure_coding"
"github.com/seaweedfs/seaweedfs/weed/util/wildcard"
erasurecodingtask "github.com/seaweedfs/seaweedfs/weed/worker/tasks/erasure_coding"
@ -128,6 +129,14 @@ func (h *ErasureCodingHandler) Descriptor() *plugin_pb.JobTypeDescriptor {
Required: true,
MinValue: &plugin_pb.ConfigValue{Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 0}},
},
{
Name: "preferred_tags",
Label: "Preferred Tags",
Description: "Comma-separated disk tags to prioritize for EC shard placement, ordered by preference.",
Placeholder: "fast,ssd",
FieldType: plugin_pb.ConfigFieldType_CONFIG_FIELD_TYPE_STRING,
Widget: plugin_pb.ConfigWidget_CONFIG_WIDGET_TEXT,
},
},
},
},
@ -144,6 +153,9 @@ func (h *ErasureCodingHandler) Descriptor() *plugin_pb.JobTypeDescriptor {
"min_interval_seconds": {
Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 60},
},
"preferred_tags": {
Kind: &plugin_pb.ConfigValue_StringValue{StringValue: ""},
},
},
},
AdminRuntimeDefaults: &plugin_pb.AdminRuntimeDefaults{
@ -169,6 +181,9 @@ func (h *ErasureCodingHandler) Descriptor() *plugin_pb.JobTypeDescriptor {
"min_interval_seconds": {
Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 60},
},
"preferred_tags": {
Kind: &plugin_pb.ConfigValue_StringValue{StringValue: ""},
},
},
}
}
@ -601,6 +616,7 @@ func deriveErasureCodingWorkerConfig(values map[string]*plugin_pb.ConfigValue) *
if minIntervalSeconds < 0 {
minIntervalSeconds = 0
}
taskConfig.PreferredTags = util.NormalizeTagList(readStringListConfig(values, "preferred_tags"))
return &erasureCodingWorkerConfig{
TaskConfig: taskConfig,

weed/server/master_grpc_server.go (1)

@ -162,6 +162,7 @@ func (ms *MasterServer) SendHeartbeat(stream master_pb.Seaweed_SendHeartbeatServ
}
dn.AdjustMaxVolumeCounts(heartbeat.MaxVolumeCounts)
dn.UpdateDiskTags(heartbeat.DiskTags)
glog.V(4).Infof("master received heartbeat %s", heartbeat.String())
stats.MasterReceivedHeartbeatCounter.WithLabelValues("total").Inc()

weed/server/volume_server.go (4)

@ -58,7 +58,7 @@ type VolumeServer struct {
func NewVolumeServer(adminMux, publicMux *http.ServeMux, ip string,
port int, grpcPort int, publicUrl string, id string,
folders []string, maxCounts []int32, minFreeSpaces []util.MinFreeSpace, diskTypes []types.DiskType,
folders []string, maxCounts []int32, minFreeSpaces []util.MinFreeSpace, diskTypes []types.DiskType, diskTags [][]string,
idxFolder string,
needleMapKind storage.NeedleMapKind,
masterNodes []pb.ServerAddress, pulsePeriod time.Duration,
@ -118,7 +118,7 @@ func NewVolumeServer(adminMux, publicMux *http.ServeMux, ip string,
vs.checkWithMaster()
vs.store = storage.NewStore(vs.grpcDialOption, ip, port, grpcPort, publicUrl, id, folders, maxCounts, minFreeSpaces, idxFolder, vs.needleMapKind, diskTypes, ldbTimeout)
vs.store = storage.NewStore(vs.grpcDialOption, ip, port, grpcPort, publicUrl, id, folders, maxCounts, minFreeSpaces, idxFolder, vs.needleMapKind, diskTypes, diskTags, ldbTimeout)
vs.guard = security.NewGuard(whiteList, signingKey, expiresAfterSec, readSigningKey, readExpiresAfterSec)
handleStaticResources(adminMux)

weed/storage/disk_location.go (10)

@ -31,6 +31,7 @@ type DiskLocation struct {
DirectoryUuid string
IdxDirectory string
DiskType types.DiskType
Tags []string
MaxVolumeCount int32
OriginalMaxVolumeCount int32
MinFreeSpace util.MinFreeSpace
@ -76,7 +77,7 @@ func writeNewUuid(fileName string) (string, error) {
return dirUuidString, nil
}
func NewDiskLocation(dir string, maxVolumeCount int32, minFreeSpace util.MinFreeSpace, idxDir string, diskType types.DiskType) *DiskLocation {
func NewDiskLocation(dir string, maxVolumeCount int32, minFreeSpace util.MinFreeSpace, idxDir string, diskType types.DiskType, tags []string) *DiskLocation {
glog.V(4).Infof("Added new Disk %s: maxVolumes=%d", dir, maxVolumeCount)
dir = util.ResolvePath(dir)
if idxDir == "" {
@ -88,11 +89,18 @@ func NewDiskLocation(dir string, maxVolumeCount int32, minFreeSpace util.MinFree
if err != nil {
glog.Fatalf("cannot generate uuid of dir %s: %v", dir, err)
}
// Defensive copy of tags to prevent external mutation
var copiedTags []string
if len(tags) > 0 {
copiedTags = make([]string, len(tags))
copy(copiedTags, tags)
}
location := &DiskLocation{
Directory: dir,
DirectoryUuid: dirUuid,
IdxDirectory: idxDir,
DiskType: diskType,
Tags: copiedTags,
MaxVolumeCount: maxVolumeCount,
OriginalMaxVolumeCount: maxVolumeCount,
MinFreeSpace: minFreeSpace,

weed/storage/store.go (15)

@ -92,6 +92,7 @@ func NewStore(
idxFolder string,
needleMapKind NeedleMapKind,
diskTypes []DiskType,
diskTags [][]string,
ldbTimeout int64,
) (s *Store) {
s = &Store{
@ -113,7 +114,11 @@ func NewStore(
var wg sync.WaitGroup
for i := 0; i < len(dirnames); i++ {
location := NewDiskLocation(dirnames[i], int32(maxVolumeCounts[i]), minFreeSpaces[i], idxFolder, diskTypes[i])
var tags []string
if i < len(diskTags) {
tags = diskTags[i]
}
location := NewDiskLocation(dirnames[i], int32(maxVolumeCounts[i]), minFreeSpaces[i], idxFolder, diskTypes[i], tags)
s.Locations = append(s.Locations, location)
stats.VolumeServerMaxVolumeCounter.Add(float64(maxVolumeCounts[i]))
@ -474,6 +479,13 @@ func (s *Store) CollectHeartbeat() *master_pb.Heartbeat {
for _, loc := range s.Locations {
uuidList = append(uuidList, loc.DirectoryUuid)
}
var diskTags []*master_pb.DiskTag
for diskID, loc := range s.Locations {
diskTags = append(diskTags, &master_pb.DiskTag{
DiskId: uint32(diskID),
Tags: append([]string(nil), loc.Tags...),
})
}
for col, size := range collectionVolumeSize {
stats.VolumeServerDiskSizeGauge.WithLabelValues(col, "normal").Set(float64(size))
@ -504,6 +516,7 @@ func (s *Store) CollectHeartbeat() *master_pb.Heartbeat {
HasNoVolumes: len(volumeMessages) == 0,
HasNoEcShards: len(ecVolumeMessages) == 0,
LocationUuids: uuidList,
DiskTags: diskTags,
}
}

weed/storage/store_load_balancing_test.go (2)

@ -32,7 +32,7 @@ func newTestStore(t *testing.T, numDirs int) *Store {
}
store := NewStore(nil, "localhost", 8080, 18080, "http://localhost:8080", "",
dirs, maxCounts, minFreeSpaces, "", NeedleMapInMemory, diskTypes, 3)
dirs, maxCounts, minFreeSpaces, "", NeedleMapInMemory, diskTypes, nil, 3)
// Consume channel messages to prevent blocking
done := make(chan bool)

weed/topology/data_node.go (33)

@ -25,6 +25,7 @@ type DataNode struct {
IsTerminating bool
MaintenanceMode bool
diskTags map[uint32][]string
}
func NewDataNode(id string) *DataNode {
@ -291,9 +292,41 @@ func (dn *DataNode) ToDataNodeInfo() *master_pb.DataNodeInfo {
disk := c.(*Disk)
m.DiskInfos[string(disk.Id())] = disk.ToDiskInfo()
}
dn.RLock()
diskTags := make(map[uint32][]string, len(dn.diskTags))
for diskID, tags := range dn.diskTags {
diskTags[diskID] = append([]string(nil), tags...)
}
dn.RUnlock()
for _, diskInfo := range m.DiskInfos {
if diskInfo == nil {
continue
}
if tags, found := diskTags[diskInfo.DiskId]; found {
diskInfo.Tags = append([]string(nil), tags...)
}
}
return m
}
func (dn *DataNode) UpdateDiskTags(tags []*master_pb.DiskTag) {
if len(tags) == 0 {
return
}
dn.Lock()
if dn.diskTags == nil {
dn.diskTags = make(map[uint32][]string, len(tags))
}
for _, tagInfo := range tags {
if tagInfo == nil {
continue
}
dn.diskTags[tagInfo.DiskId] = append([]string(nil), tagInfo.Tags...)
}
dn.Unlock()
}
// GetVolumeIds returns the human readable volume ids limited to count of max 100.
func (dn *DataNode) GetVolumeIds() string {
dn.RLock()

weed/util/tag.go (25)

@ -0,0 +1,25 @@
package util
import "strings"
// NormalizeTagList normalizes a list of tags by converting to lowercase,
// trimming whitespace, removing duplicates, and filtering empty strings.
func NormalizeTagList(tags []string) []string {
normalized := make([]string, 0, len(tags))
seen := make(map[string]struct{}, len(tags))
for _, tag := range tags {
tag = strings.ToLower(strings.TrimSpace(tag))
if tag == "" {
continue
}
if _, exists := seen[tag]; exists {
continue
}
seen[tag] = struct{}{}
normalized = append(normalized, tag)
}
if len(normalized) == 0 {
return []string{}
}
return normalized
}

weed/worker/tasks/erasure_coding/config.go (27)

@ -12,10 +12,11 @@ import (
// Config extends BaseConfig with erasure coding specific settings
type Config struct {
base.BaseConfig
QuietForSeconds int `json:"quiet_for_seconds"`
FullnessRatio float64 `json:"fullness_ratio"`
CollectionFilter string `json:"collection_filter"`
MinSizeMB int `json:"min_size_mb"`
QuietForSeconds int `json:"quiet_for_seconds"`
FullnessRatio float64 `json:"fullness_ratio"`
CollectionFilter string `json:"collection_filter"`
MinSizeMB int `json:"min_size_mb"`
PreferredTags []string `json:"preferred_tags"`
}
// NewDefaultConfig creates a new default erasure coding configuration
@ -30,6 +31,7 @@ func NewDefaultConfig() *Config {
FullnessRatio: 0.8, // 80%
CollectionFilter: "",
MinSizeMB: 30, // 30MB (more reasonable than 100MB)
PreferredTags: nil,
}
}
@ -142,12 +144,27 @@ func GetConfigSpec() base.ConfigSpec {
InputType: "number",
CSSClasses: "form-control",
},
{
Name: "preferred_tags",
JSONName: "preferred_tags",
Type: config.FieldTypeString,
DefaultValue: "",
Required: false,
DisplayName: "Preferred Disk Tags",
Description: "Comma-separated disk tags to prioritize for EC shard placement",
HelpText: "EC shards will be placed on disks with these tags first, then fall back to other disks if needed",
Placeholder: "fast,ssd",
InputType: "text",
CSSClasses: "form-control",
},
},
}
}
// ToTaskPolicy converts configuration to a TaskPolicy protobuf message
func (c *Config) ToTaskPolicy() *worker_pb.TaskPolicy {
// Defensive copy of PreferredTags to prevent external mutation
preferredTagsCopy := append([]string(nil), c.PreferredTags...)
return &worker_pb.TaskPolicy{
Enabled: c.Enabled,
MaxConcurrent: int32(c.MaxConcurrent),
@ -159,6 +176,7 @@ func (c *Config) ToTaskPolicy() *worker_pb.TaskPolicy {
QuietForSeconds: int32(c.QuietForSeconds),
MinVolumeSizeMb: int32(c.MinSizeMB),
CollectionFilter: c.CollectionFilter,
PreferredTags: preferredTagsCopy,
},
},
}
@ -181,6 +199,7 @@ func (c *Config) FromTaskPolicy(policy *worker_pb.TaskPolicy) error {
c.QuietForSeconds = int(ecConfig.QuietForSeconds)
c.MinSizeMB = int(ecConfig.MinVolumeSizeMb)
c.CollectionFilter = ecConfig.CollectionFilter
c.PreferredTags = append([]string(nil), ecConfig.PreferredTags...)
}
return nil

weed/worker/tasks/erasure_coding/detection.go (108)

@ -11,9 +11,10 @@ import (
"github.com/seaweedfs/seaweedfs/weed/pb/worker_pb"
"github.com/seaweedfs/seaweedfs/weed/storage/erasure_coding"
"github.com/seaweedfs/seaweedfs/weed/storage/erasure_coding/placement"
"github.com/seaweedfs/seaweedfs/weed/util"
"github.com/seaweedfs/seaweedfs/weed/util/wildcard"
"github.com/seaweedfs/seaweedfs/weed/worker/tasks/base"
"github.com/seaweedfs/seaweedfs/weed/worker/tasks/util"
workerutil "github.com/seaweedfs/seaweedfs/weed/worker/tasks/util"
"github.com/seaweedfs/seaweedfs/weed/worker/types"
)
@ -148,7 +149,7 @@ func Detection(ctx context.Context, metrics []*types.VolumeHealthMetrics, cluste
glog.Infof("EC Detection: ActiveTopology available, planning destinations for volume %d", metric.VolumeID)
if planner == nil {
planner = newECPlacementPlanner(clusterInfo.ActiveTopology)
planner = newECPlacementPlanner(clusterInfo.ActiveTopology, ecConfig.PreferredTags)
}
multiPlan, err := planECDestinations(planner, metric, ecConfig)
if err != nil {
@ -344,21 +345,27 @@ type ecPlacementPlanner struct {
candidates []*placement.DiskCandidate
candidateByKey map[string]*placement.DiskCandidate
diskStates map[string]*ecDiskState
diskTags map[string][]string
preferredTags []string
}
func newECPlacementPlanner(activeTopology *topology.ActiveTopology) *ecPlacementPlanner {
func newECPlacementPlanner(activeTopology *topology.ActiveTopology, preferredTags []string) *ecPlacementPlanner {
if activeTopology == nil {
return nil
}
disks := activeTopology.GetDisksWithEffectiveCapacity(topology.TaskTypeErasureCoding, "", 0)
candidates := diskInfosToCandidates(disks)
tagsByKey := collectDiskTags(disks)
normalizedPreferredTags := util.NormalizeTagList(preferredTags)
if len(candidates) == 0 {
return &ecPlacementPlanner{
activeTopology: activeTopology,
candidates: candidates,
candidateByKey: map[string]*placement.DiskCandidate{},
diskStates: map[string]*ecDiskState{},
diskTags: tagsByKey,
preferredTags: normalizedPreferredTags,
}
}
@ -377,6 +384,8 @@ func newECPlacementPlanner(activeTopology *topology.ActiveTopology) *ecPlacement
candidates: candidates,
candidateByKey: candidateByKey,
diskStates: diskStates,
diskTags: tagsByKey,
preferredTags: normalizedPreferredTags,
}
}
@ -397,11 +406,21 @@ func (p *ecPlacementPlanner) selectDestinations(sourceRack, sourceDC string, sha
PreferDifferentRacks: true,
}
result, err := placement.SelectDestinations(p.candidates, config)
if err != nil {
return nil, err
var lastErr error
for _, candidates := range p.buildCandidateSets(shardsNeeded) {
if len(candidates) == 0 {
continue
}
result, err := placement.SelectDestinations(candidates, config)
if err == nil {
return result.SelectedDisks, nil
}
lastErr = err
}
return result.SelectedDisks, nil
if lastErr == nil {
lastErr = fmt.Errorf("no EC placement candidates available")
}
return nil, lastErr
}
func (p *ecPlacementPlanner) applyTaskReservations(volumeSize int64, sources []topology.TaskSourceSpec, destinations []topology.TaskDestinationSpec) {
@ -501,6 +520,77 @@ func ecDiskKey(nodeID string, diskID uint32) string {
return fmt.Sprintf("%s:%d", nodeID, diskID)
}
func collectDiskTags(disks []*topology.DiskInfo) map[string][]string {
tagMap := make(map[string][]string, len(disks))
for _, disk := range disks {
if disk == nil || disk.DiskInfo == nil {
continue
}
key := ecDiskKey(disk.NodeID, disk.DiskID)
tags := util.NormalizeTagList(disk.DiskInfo.Tags)
if len(tags) > 0 {
tagMap[key] = tags
}
}
return tagMap
}
func diskHasTag(tags []string, tag string) bool {
if tag == "" || len(tags) == 0 {
return false
}
for _, candidate := range tags {
if candidate == tag {
return true
}
}
return false
}
// buildCandidateSets builds tiered candidate sets for preferred-tag prioritized placement.
// For a planner with preferredTags, it accumulates disks matching each tag in order into
// progressively larger tiers. It emits a candidate set once a tier reaches shardsNeeded,
// then continues accumulating for subsequent tags. Finally, it falls back to the full
// p.candidates set if preferred-tag tiers are insufficient. This ensures tagged disks
// are selected first before falling back to all available candidates.
func (p *ecPlacementPlanner) buildCandidateSets(shardsNeeded int) [][]*placement.DiskCandidate {
if p == nil {
return nil
}
if len(p.preferredTags) == 0 {
return [][]*placement.DiskCandidate{p.candidates}
}
selected := make(map[string]bool, len(p.candidates))
var tier []*placement.DiskCandidate
var candidateSets [][]*placement.DiskCandidate
for _, tag := range p.preferredTags {
for _, candidate := range p.candidates {
key := ecDiskKey(candidate.NodeID, candidate.DiskID)
if selected[key] {
continue
}
if diskHasTag(p.diskTags[key], tag) {
selected[key] = true
tier = append(tier, candidate)
}
}
if shardsNeeded > 0 && len(tier) >= shardsNeeded {
candidateSets = append(candidateSets, append([]*placement.DiskCandidate(nil), tier...))
}
}
// Defensive check: selectDestinations always ensures shardsNeeded > 0 before calling
// buildCandidateSets, but this branch handles direct callers and edge cases.
if shardsNeeded <= 0 && len(tier) > 0 {
candidateSets = append(candidateSets, append([]*placement.DiskCandidate(nil), tier...))
}
if len(tier) < len(p.candidates) {
candidateSets = append(candidateSets, p.candidates)
} else if len(candidateSets) == 0 {
candidateSets = append(candidateSets, p.candidates)
}
return candidateSets
}
// planECDestinations plans the destinations for erasure coding operation
// This function implements EC destination planning logic directly in the detection phase
func planECDestinations(planner *ecPlacementPlanner, metric *types.VolumeHealthMetrics, ecConfig *Config) (*topology.MultiDestinationPlan, error) {
@ -550,7 +640,7 @@ func planECDestinations(planner *ecPlacementPlanner, metric *types.VolumeHealthM
for _, disk := range selectedDisks {
// Get the target server address
targetAddress, err := util.ResolveServerAddress(disk.NodeID, planner.activeTopology)
targetAddress, err := workerutil.ResolveServerAddress(disk.NodeID, planner.activeTopology)
if err != nil {
return nil, fmt.Errorf("failed to resolve address for target server %s: %v", disk.NodeID, err)
}
@ -654,7 +744,7 @@ func convertTaskSourcesToProtobuf(sources []topology.TaskSourceSpec, volumeID ui
var protobufSources []*worker_pb.TaskSource
for _, source := range sources {
serverAddress, err := util.ResolveServerAddress(source.ServerID, activeTopology)
serverAddress, err := workerutil.ResolveServerAddress(source.ServerID, activeTopology)
if err != nil {
return nil, fmt.Errorf("failed to resolve address for source server %s: %v", source.ServerID, err)
}

weed/worker/tasks/erasure_coding/detection_test.go (70)

@ -17,7 +17,7 @@ import (
func TestECPlacementPlannerApplyReservations(t *testing.T) {
activeTopology := buildActiveTopology(t, 1, []string{"hdd"}, 10, 0)
planner := newECPlacementPlanner(activeTopology)
planner := newECPlacementPlanner(activeTopology, nil)
require.NotNil(t, planner)
key := ecDiskKey("10.0.0.1:8080", 0)
@ -47,7 +47,7 @@ func TestECPlacementPlannerApplyReservations(t *testing.T) {
func TestPlanECDestinationsUsesPlanner(t *testing.T) {
activeTopology := buildActiveTopology(t, 7, []string{"hdd", "ssd"}, 100, 0)
planner := newECPlacementPlanner(activeTopology)
planner := newECPlacementPlanner(activeTopology, nil)
require.NotNil(t, planner)
metric := &types.VolumeHealthMetrics{
@ -63,6 +63,70 @@ func TestPlanECDestinationsUsesPlanner(t *testing.T) {
assert.Equal(t, erasure_coding.TotalShardsCount, len(plan.Plans))
}
func TestECPlacementPlannerPrefersTaggedDisks(t *testing.T) {
activeTopology := buildActiveTopology(t, 3, []string{"hdd"}, 10, 0)
topo := activeTopology.GetTopologyInfo()
for _, dc := range topo.DataCenterInfos {
for _, rack := range dc.RackInfos {
for k, node := range rack.DataNodeInfos {
for diskType := range node.DiskInfos {
if k < 2 {
node.DiskInfos[diskType].Tags = []string{"fast"}
} else {
node.DiskInfos[diskType].Tags = []string{"slow"}
}
}
}
}
}
require.NoError(t, activeTopology.UpdateTopology(topo))
planner := newECPlacementPlanner(activeTopology, []string{"fast"})
require.NotNil(t, planner)
selected, err := planner.selectDestinations("", "", 2)
require.NoError(t, err)
require.Len(t, selected, 2)
for _, candidate := range selected {
key := ecDiskKey(candidate.NodeID, candidate.DiskID)
assert.True(t, diskHasTag(planner.diskTags[key], "fast"))
}
}
func TestECPlacementPlannerFallsBackWhenTagsInsufficient(t *testing.T) {
activeTopology := buildActiveTopology(t, 3, []string{"hdd"}, 10, 0)
topo := activeTopology.GetTopologyInfo()
for _, dc := range topo.DataCenterInfos {
for _, rack := range dc.RackInfos {
for i, node := range rack.DataNodeInfos {
for diskType := range node.DiskInfos {
if i == 0 {
node.DiskInfos[diskType].Tags = []string{"fast"}
}
}
}
}
}
require.NoError(t, activeTopology.UpdateTopology(topo))
planner := newECPlacementPlanner(activeTopology, []string{"fast"})
require.NotNil(t, planner)
selected, err := planner.selectDestinations("", "", 3)
require.NoError(t, err)
require.Len(t, selected, 3)
taggedCount := 0
for _, candidate := range selected {
key := ecDiskKey(candidate.NodeID, candidate.DiskID)
if diskHasTag(planner.diskTags[key], "fast") {
taggedCount++
}
}
assert.Less(t, taggedCount, len(selected))
}
func TestDetectionContextCancellation(t *testing.T) {
activeTopology := buildActiveTopology(t, 5, []string{"hdd", "ssd"}, 50, 0)
clusterInfo := &types.ClusterInfo{ActiveTopology: activeTopology}
@ -88,7 +152,7 @@ func TestDetectionMaxResultsHonorsLimit(t *testing.T) {
func TestPlanECDestinationsFailsWithInsufficientCapacity(t *testing.T) {
activeTopology := buildActiveTopology(t, 1, []string{"hdd"}, 1, 1)
planner := newECPlacementPlanner(activeTopology)
planner := newECPlacementPlanner(activeTopology, nil)
require.NotNil(t, planner)
metric := &types.VolumeHealthMetrics{
