
Helm Charts: add admin and worker to helm charts (#7688)

* add admin and worker to helm charts

* workers are stateless, admin is stateful

* removed the duplicate admin-deployment.yaml

* address comments

* address comments

* purge

* Update README.md

* Update k8s/charts/seaweedfs/templates/admin/admin-ingress.yaml

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* address comments

* address comments

* supports Kubernetes versions from v1.14 to v1.30+, ensuring broad compatibility

* add probe for workers

* address comments

* add a todo

* chore: trigger CI

* use port name for probes in admin statefulset

* fix: remove trailing blank line in values.yaml

* address code review feedback

- Quote admin credentials in shell command to handle special characters
- Remove unimplemented capabilities (remote, replication) from worker defaults
- Add security note about admin password character restrictions

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Chris Lu, 20 hours ago, committed by GitHub (commit 80c7de8d76)
1. k8s/charts/seaweedfs/README.md (185 lines changed)
2. k8s/charts/seaweedfs/templates/admin/admin-ingress.yaml (52 lines changed)
3. k8s/charts/seaweedfs/templates/admin/admin-service.yaml (39 lines changed)
4. k8s/charts/seaweedfs/templates/admin/admin-servicemonitor.yaml (33 lines changed)
5. k8s/charts/seaweedfs/templates/admin/admin-statefulset.yaml (319 lines changed)
6. k8s/charts/seaweedfs/templates/shared/_helpers.tpl (29 lines changed)
7. k8s/charts/seaweedfs/templates/worker/worker-deployment.yaml (271 lines changed)
8. k8s/charts/seaweedfs/templates/worker/worker-service.yaml (26 lines changed)
9. k8s/charts/seaweedfs/templates/worker/worker-servicemonitor.yaml (33 lines changed)
10. k8s/charts/seaweedfs/values.yaml (234 lines changed)
11. weed/command/worker.go (5 lines changed)

185
k8s/charts/seaweedfs/README.md

@@ -145,6 +145,191 @@ stringData:
seaweedfs_s3_config: '{"identities":[{"name":"anvAdmin","credentials":[{"accessKey":"snu8yoP6QAlY0ne4","secretKey":"PNzBcmeLNEdR0oviwm04NQAicOrDH1Km"}],"actions":["Admin","Read","Write"]},{"name":"anvReadOnly","credentials":[{"accessKey":"SCigFee6c5lbi04A","secretKey":"kgFhbT38R8WUYVtiFQ1OiSVOrYr3NKku"}],"actions":["Read"]}]}'
```
## Admin Component
The admin component provides a modern web-based administration interface for managing SeaweedFS clusters. It includes:
- **Dashboard**: Real-time cluster status and metrics
- **Volume Management**: Monitor volume servers, capacity, and health
- **File Browser**: Browse and manage files in the filer
- **Maintenance Operations**: Trigger maintenance tasks via workers
- **Object Store Management**: Create and manage buckets with web interface
### Enabling Admin
To enable the admin interface, add the following to your values.yaml:
```yaml
admin:
  enabled: true
  port: 23646
  grpcPort: 33646 # For worker connections
  adminUser: "admin"
  adminPassword: "your-secure-password" # Leave empty to disable auth
  # Optional: persist admin data
  data:
    type: "persistentVolumeClaim"
    size: "10Gi"
    storageClass: "your-storage-class"
  # Optional: enable ingress
  ingress:
    enabled: true
    host: "admin.seaweedfs.local"
    className: "nginx"
```
The admin interface will be available at `http://<admin-service>:23646` (or via ingress). Workers connect to the admin server via gRPC on port `33646`.
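As a quick way to reach the dashboard without an ingress, the admin service can be port-forwarded. The service and namespace names below are illustrative; the chart names the service `<release-name>-admin`:

```bash
# Forward the admin HTTP port to localhost (service name assumed: seaweedfs-admin)
kubectl -n seaweedfs port-forward svc/seaweedfs-admin 23646:23646
# Then open http://localhost:23646 in a browser
```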
### Admin Authentication
If `adminPassword` is set, the admin interface requires authentication:
- Username: Value of `adminUser` (default: `admin`)
- Password: Value of `adminPassword`
If `adminPassword` is empty or not set, the admin interface runs without authentication (not recommended for production).
### Admin Data Persistence
The admin component can store configuration and maintenance data. You can configure storage in several ways:
- **emptyDir** (default): Data is lost when pod restarts
- **persistentVolumeClaim**: Data persists across pod restarts
- **hostPath**: Data stored on the host filesystem
- **existingClaim**: Use an existing PVC
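For example, to reuse a PVC provisioned outside the chart (the claim name below is hypothetical):

```yaml
admin:
  data:
    type: "existingClaim"
    claimName: "my-admin-data" # hypothetical, pre-provisioned PVC
```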
## Worker Component
Workers are maintenance agents that execute cluster maintenance tasks such as vacuum, volume balancing, and erasure coding. Workers connect to the admin server via gRPC and receive task assignments.
### Enabling Workers
To enable workers, add the following to your values.yaml:
```yaml
worker:
  enabled: true
  replicas: 2 # Scale based on workload
  capabilities: "vacuum,balance,erasure_coding" # Tasks this worker can handle
  maxConcurrent: 3 # Maximum concurrent tasks per worker
  # Working directory for task execution
  # Default: "/tmp/seaweedfs-worker"
  # Note: /tmp is ephemeral - use persistent storage (hostPath/existingClaim) for long-running tasks
  workingDir: "/tmp/seaweedfs-worker"
  # Optional: configure admin server address
  # If not specified, auto-discovers from admin service in the same namespace by looking for
  # a service named "<release-name>-admin" (e.g., "seaweedfs-admin").
  # Auto-discovery only works if the admin is in the same namespace and same Helm release.
  # For cross-namespace or separate release scenarios, explicitly set this value.
  # Example: If the main SeaweedFS is deployed in the "production" namespace:
  # adminServer: "seaweedfs-admin.production.svc:33646"
  adminServer: ""
  # Workers need storage for task execution
  # Note: Workers use a Deployment, which does not support `volumeClaimTemplates`
  # for dynamic PVC creation per pod. To use persistent storage, you must
  # pre-provision a PersistentVolumeClaim and use `type: "existingClaim"`.
  data:
    type: "emptyDir" # Options: "emptyDir", "hostPath", or "existingClaim"
    hostPathPrefix: /storage # For hostPath
    # claimName: "worker-pvc" # For existingClaim with pre-provisioned PVC
  # Resource limits for worker pods
  resources:
    requests:
      cpu: "500m"
      memory: "512Mi"
    limits:
      cpu: "2"
      memory: "2Gi"
```
### Worker Capabilities
Workers can be configured with different capabilities:
- **vacuum**: Reclaim deleted file space
- **balance**: Balance volumes across volume servers
- **erasure_coding**: Handle erasure coding operations
You can configure workers with all capabilities or create specialized worker pools with specific capabilities.
### Worker Deployment Strategy
For production deployments, consider:
1. **Multiple Workers**: Deploy 2+ worker replicas for high availability
2. **Resource Allocation**: Workers need sufficient CPU/memory for maintenance tasks
3. **Storage**: Workers need temporary storage for vacuum and balance operations (size depends on volume size)
4. **Specialized Workers**: Create separate worker deployments for different capabilities if needed
For specialized worker pools, deploy separate Helm releases with different capabilities:
**values-worker-vacuum.yaml** (for vacuum operations):
```yaml
# Disable all other components, enable only workers
master:
  enabled: false
volume:
  enabled: false
filer:
  enabled: false
s3:
  enabled: false
admin:
  enabled: false
worker:
  enabled: true
  replicas: 2
  capabilities: "vacuum"
  maxConcurrent: 2
  # REQUIRED: Point to the admin service of your main SeaweedFS release
  # Replace <namespace> with the namespace where your main seaweedfs is deployed
  # Example: If deploying in namespace "production":
  # adminServer: "seaweedfs-admin.production.svc:33646"
  adminServer: "seaweedfs-admin.<namespace>.svc:33646"
```
**values-worker-balance.yaml** (for balance operations):
```yaml
# Disable all other components, enable only workers
master:
  enabled: false
volume:
  enabled: false
filer:
  enabled: false
s3:
  enabled: false
admin:
  enabled: false
worker:
  enabled: true
  replicas: 1
  capabilities: "balance"
  maxConcurrent: 1
  # REQUIRED: Point to the admin service of your main SeaweedFS release
  # Replace <namespace> with the namespace where your main seaweedfs is deployed
  # Example: If deploying in namespace "production":
  # adminServer: "seaweedfs-admin.production.svc:33646"
  adminServer: "seaweedfs-admin.<namespace>.svc:33646"
```
Deploy the specialized workers as separate releases:
```bash
# Deploy vacuum workers
helm install seaweedfs-worker-vacuum seaweedfs/seaweedfs -f values-worker-vacuum.yaml
# Deploy balance workers
helm install seaweedfs-worker-balance seaweedfs/seaweedfs -f values-worker-balance.yaml
```
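To verify that the workers registered with the admin server, their logs can be inspected; the label selector below assumes the component labels this chart applies:

```bash
# Tail worker logs; the component label is set by the chart templates
kubectl logs -l app.kubernetes.io/component=worker --tail=50
```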
## Enterprise
For enterprise users, please visit [seaweedfs.com](https://seaweedfs.com) for the SeaweedFS Enterprise Edition,

52
k8s/charts/seaweedfs/templates/admin/admin-ingress.yaml

@@ -0,0 +1,52 @@
{{- if and .Values.admin.enabled .Values.admin.ingress.enabled }}
{{- if semverCompare ">=1.19-0" .Capabilities.KubeVersion.GitVersion }}
apiVersion: networking.k8s.io/v1
{{- else if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion }}
apiVersion: networking.k8s.io/v1beta1
{{- else }}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: ingress-{{ template "seaweedfs.name" . }}-admin
  namespace: {{ .Release.Namespace }}
  annotations:
    {{- if and (not (semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion)) .Values.admin.ingress.className }}
    kubernetes.io/ingress.class: {{ .Values.admin.ingress.className }}
    {{- end }}
    {{- with .Values.admin.ingress.annotations }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
  labels:
    app.kubernetes.io/name: {{ template "seaweedfs.name" . }}
    helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/component: admin
spec:
  {{- if and (semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion) .Values.admin.ingress.className }}
  ingressClassName: {{ .Values.admin.ingress.className | quote }}
  {{- end }}
  tls:
    {{ .Values.admin.ingress.tls | default list | toYaml | nindent 6 }}
  rules:
    - {{- if .Values.admin.ingress.host }}
      host: {{ .Values.admin.ingress.host | quote }}
      {{- end }}
      http:
        paths:
          - path: {{ .Values.admin.ingress.path | quote }}
            {{- if semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion }}
            pathType: {{ .Values.admin.ingress.pathType | quote }}
            {{- end }}
            backend:
              {{- if semverCompare ">=1.19-0" .Capabilities.KubeVersion.GitVersion }}
              service:
                name: {{ template "seaweedfs.name" . }}-admin
                port:
                  number: {{ .Values.admin.port }}
              {{- else }}
              serviceName: {{ template "seaweedfs.name" . }}-admin
              servicePort: {{ .Values.admin.port }}
              {{- end }}
{{- end }}

39
k8s/charts/seaweedfs/templates/admin/admin-service.yaml

@@ -0,0 +1,39 @@
{{- if .Values.admin.enabled }}
apiVersion: v1
kind: Service
metadata:
  name: {{ template "seaweedfs.name" . }}-admin
  namespace: {{ .Release.Namespace }}
  labels:
    app.kubernetes.io/name: {{ template "seaweedfs.name" . }}
    helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/component: admin
  {{- if .Values.admin.service.annotations }}
  annotations:
    {{- toYaml .Values.admin.service.annotations | nindent 4 }}
  {{- end }}
spec:
  type: {{ .Values.admin.service.type }}
  ports:
    - name: "http"
      port: {{ .Values.admin.port }}
      targetPort: {{ .Values.admin.port }}
      protocol: TCP
    - name: "grpc"
      port: {{ .Values.admin.grpcPort }}
      targetPort: {{ .Values.admin.grpcPort }}
      protocol: TCP
    {{- if .Values.admin.metricsPort }}
    - name: "metrics"
      port: {{ .Values.admin.metricsPort }}
      targetPort: {{ .Values.admin.metricsPort }}
      protocol: TCP
    {{- end }}
  selector:
    app.kubernetes.io/name: {{ template "seaweedfs.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/component: admin
{{- end }}

33
k8s/charts/seaweedfs/templates/admin/admin-servicemonitor.yaml

@@ -0,0 +1,33 @@
{{- if .Values.admin.enabled }}
{{- if .Values.admin.metricsPort }}
{{- if .Values.global.monitoring.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: {{ template "seaweedfs.name" . }}-admin
  namespace: {{ .Release.Namespace }}
  labels:
    app.kubernetes.io/name: {{ template "seaweedfs.name" . }}
    helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/component: admin
    {{- with .Values.global.monitoring.additionalLabels }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
  {{- with .Values.admin.serviceMonitor.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  endpoints:
    - interval: 30s
      port: metrics
      scrapeTimeout: 5s
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ template "seaweedfs.name" . }}
      app.kubernetes.io/component: admin
{{- end }}
{{- end }}
{{- end }}

319
k8s/charts/seaweedfs/templates/admin/admin-statefulset.yaml

@@ -0,0 +1,319 @@
{{- if .Values.admin.enabled }}
{{- if and (not .Values.admin.masters) (not .Values.global.masterServer) (not .Values.master.enabled) }}
{{- fail "admin.masters or global.masterServer must be set if master.enabled is false" -}}
{{- end }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ template "seaweedfs.name" . }}-admin
  namespace: {{ .Release.Namespace }}
  labels:
    app.kubernetes.io/name: {{ template "seaweedfs.name" . }}
    helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/component: admin
  {{- if .Values.admin.annotations }}
  annotations:
    {{- toYaml .Values.admin.annotations | nindent 4 }}
  {{- end }}
spec:
  serviceName: {{ template "seaweedfs.name" . }}-admin
  podManagementPolicy: {{ .Values.admin.podManagementPolicy }}
  replicas: {{ .Values.admin.replicas }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ template "seaweedfs.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
      app.kubernetes.io/component: admin
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ template "seaweedfs.name" . }}
        helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
        app.kubernetes.io/instance: {{ .Release.Name }}
        app.kubernetes.io/component: admin
        {{ with .Values.podLabels }}
        {{- toYaml . | nindent 8 }}
        {{- end }}
        {{- with .Values.admin.podLabels }}
        {{- toYaml . | nindent 8 }}
        {{- end }}
      annotations:
        {{ with .Values.podAnnotations }}
        {{- toYaml . | nindent 8 }}
        {{- end }}
        {{- with .Values.admin.podAnnotations }}
        {{- toYaml . | nindent 8 }}
        {{- end }}
    spec:
      restartPolicy: {{ default .Values.global.restartPolicy .Values.admin.restartPolicy }}
      {{- if .Values.admin.affinity }}
      affinity:
        {{ tpl .Values.admin.affinity . | nindent 8 | trim }}
      {{- end }}
      {{- if .Values.admin.topologySpreadConstraints }}
      topologySpreadConstraints:
        {{ tpl .Values.admin.topologySpreadConstraints . | nindent 8 | trim }}
      {{- end }}
      {{- if .Values.admin.tolerations }}
      tolerations:
        {{ tpl .Values.admin.tolerations . | nindent 8 | trim }}
      {{- end }}
      {{- include "seaweedfs.imagePullSecrets" . | nindent 6 }}
      terminationGracePeriodSeconds: 30
      {{- if .Values.admin.priorityClassName }}
      priorityClassName: {{ .Values.admin.priorityClassName | quote }}
      {{- end }}
      enableServiceLinks: false
      {{- if .Values.admin.serviceAccountName }}
      serviceAccountName: {{ .Values.admin.serviceAccountName | quote }}
      {{- end }}
      {{- if .Values.admin.initContainers }}
      initContainers:
        {{ tpl .Values.admin.initContainers . | nindent 8 | trim }}
      {{- end }}
      {{- if .Values.admin.podSecurityContext.enabled }}
      securityContext: {{- omit .Values.admin.podSecurityContext "enabled" | toYaml | nindent 8 }}
      {{- end }}
      containers:
        - name: seaweedfs
          image: {{ template "admin.image" . }}
          imagePullPolicy: {{ default "IfNotPresent" .Values.global.imagePullPolicy }}
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: SEAWEEDFS_FULLNAME
              value: "{{ template "seaweedfs.name" . }}"
            {{- if .Values.admin.extraEnvironmentVars }}
            {{- range $key, $value := .Values.admin.extraEnvironmentVars }}
            - name: {{ $key }}
              {{- if kindIs "string" $value }}
              value: {{ $value | quote }}
              {{- else }}
              valueFrom:
                {{ toYaml $value | nindent 16 | trim }}
              {{- end -}}
            {{- end }}
            {{- end }}
            {{- if .Values.global.extraEnvironmentVars }}
            {{- range $key, $value := .Values.global.extraEnvironmentVars }}
            - name: {{ $key }}
              {{- if kindIs "string" $value }}
              value: {{ $value | quote }}
              {{- else }}
              valueFrom:
                {{ toYaml $value | nindent 16 | trim }}
              {{- end -}}
            {{- end }}
            {{- end }}
          command:
            - "/bin/sh"
            - "-ec"
            - |
              exec /usr/bin/weed \
              {{- if or (eq .Values.admin.logs.type "hostPath") (eq .Values.admin.logs.type "persistentVolumeClaim") (eq .Values.admin.logs.type "emptyDir") (eq .Values.admin.logs.type "existingClaim") }}
              -logdir=/logs \
              {{- else }}
              -logtostderr=true \
              {{- end }}
              {{- if .Values.admin.loggingOverrideLevel }}
              -v={{ .Values.admin.loggingOverrideLevel }} \
              {{- else }}
              -v={{ .Values.global.loggingLevel }} \
              {{- end }}
              admin \
              -port={{ .Values.admin.port }} \
              -port.grpc={{ .Values.admin.grpcPort }} \
              {{- if or (eq .Values.admin.data.type "hostPath") (eq .Values.admin.data.type "persistentVolumeClaim") (eq .Values.admin.data.type "emptyDir") (eq .Values.admin.data.type "existingClaim") }}
              -dataDir=/data \
              {{- else if .Values.admin.dataDir }}
              -dataDir={{ .Values.admin.dataDir }} \
              {{- end }}
              {{- if .Values.admin.adminPassword }}
              -adminUser='{{ .Values.admin.adminUser }}' \
              -adminPassword='{{ .Values.admin.adminPassword }}' \
              {{- end }}
              {{- if .Values.admin.masters }}
              -masters={{ .Values.admin.masters }}{{- if .Values.admin.extraArgs }} \{{ end }}
              {{- else if .Values.global.masterServer }}
              -masters={{ .Values.global.masterServer }}{{- if .Values.admin.extraArgs }} \{{ end }}
              {{- else }}
              -masters={{ range $index := until (.Values.master.replicas | int) }}${SEAWEEDFS_FULLNAME}-master-{{ $index }}.${SEAWEEDFS_FULLNAME}-master.{{ $.Release.Namespace }}:{{ $.Values.master.port }}{{ if lt $index (sub ($.Values.master.replicas | int) 1) }},{{ end }}{{ end }}{{- if .Values.admin.extraArgs }} \{{ end }}
              {{- end }}
              {{- range $index, $arg := .Values.admin.extraArgs }}
              {{ $arg }}{{- if lt $index (sub (len $.Values.admin.extraArgs) 1) }} \{{ end }}
              {{- end }}
          volumeMounts:
            {{- if or (eq .Values.admin.data.type "hostPath") (eq .Values.admin.data.type "persistentVolumeClaim") (eq .Values.admin.data.type "emptyDir") (eq .Values.admin.data.type "existingClaim") }}
            - name: admin-data
              mountPath: /data
            {{- end }}
            {{- if or (eq .Values.admin.logs.type "hostPath") (eq .Values.admin.logs.type "persistentVolumeClaim") (eq .Values.admin.logs.type "emptyDir") (eq .Values.admin.logs.type "existingClaim") }}
            - name: admin-logs
              mountPath: /logs
            {{- end }}
            {{- if .Values.global.enableSecurity }}
            - name: security-config
              readOnly: true
              mountPath: /etc/seaweedfs/security.toml
              subPath: security.toml
            - name: ca-cert
              readOnly: true
              mountPath: /usr/local/share/ca-certificates/ca/
            - name: master-cert
              readOnly: true
              mountPath: /usr/local/share/ca-certificates/master/
            - name: volume-cert
              readOnly: true
              mountPath: /usr/local/share/ca-certificates/volume/
            - name: filer-cert
              readOnly: true
              mountPath: /usr/local/share/ca-certificates/filer/
            - name: client-cert
              readOnly: true
              mountPath: /usr/local/share/ca-certificates/client/
            {{- end }}
            {{ tpl .Values.admin.extraVolumeMounts . | nindent 12 | trim }}
          ports:
            - containerPort: {{ .Values.admin.port }}
              name: http
            - containerPort: {{ .Values.admin.grpcPort }}
              name: grpc
            {{- if .Values.admin.metricsPort }}
            - containerPort: {{ .Values.admin.metricsPort }}
              name: metrics
            {{- end }}
          {{- if .Values.admin.readinessProbe.enabled }}
          readinessProbe:
            httpGet:
              path: {{ .Values.admin.readinessProbe.httpGet.path }}
              port: http
              scheme: {{ .Values.admin.readinessProbe.httpGet.scheme }}
            initialDelaySeconds: {{ .Values.admin.readinessProbe.initialDelaySeconds }}
            periodSeconds: {{ .Values.admin.readinessProbe.periodSeconds }}
            successThreshold: {{ .Values.admin.readinessProbe.successThreshold }}
            failureThreshold: {{ .Values.admin.readinessProbe.failureThreshold }}
            timeoutSeconds: {{ .Values.admin.readinessProbe.timeoutSeconds }}
          {{- end }}
          {{- if .Values.admin.livenessProbe.enabled }}
          livenessProbe:
            httpGet:
              path: {{ .Values.admin.livenessProbe.httpGet.path }}
              port: http
              scheme: {{ .Values.admin.livenessProbe.httpGet.scheme }}
            initialDelaySeconds: {{ .Values.admin.livenessProbe.initialDelaySeconds }}
            periodSeconds: {{ .Values.admin.livenessProbe.periodSeconds }}
            successThreshold: {{ .Values.admin.livenessProbe.successThreshold }}
            failureThreshold: {{ .Values.admin.livenessProbe.failureThreshold }}
            timeoutSeconds: {{ .Values.admin.livenessProbe.timeoutSeconds }}
          {{- end }}
          {{- with .Values.admin.resources }}
          resources:
            {{- toYaml . | nindent 12 }}
          {{- end }}
          {{- if .Values.admin.containerSecurityContext.enabled }}
          securityContext: {{- omit .Values.admin.containerSecurityContext "enabled" | toYaml | nindent 12 }}
          {{- end }}
        {{- if .Values.admin.sidecars }}
        {{- include "common.tplvalues.render" (dict "value" .Values.admin.sidecars "context" $) | nindent 8 }}
        {{- end }}
      volumes:
        {{- if eq .Values.admin.data.type "hostPath" }}
        - name: admin-data
          hostPath:
            path: {{ .Values.admin.data.hostPathPrefix }}/seaweedfs-admin-data
            type: DirectoryOrCreate
        {{- end }}
        {{- if eq .Values.admin.data.type "emptyDir" }}
        - name: admin-data
          emptyDir: {}
        {{- end }}
        {{- if eq .Values.admin.data.type "existingClaim" }}
        - name: admin-data
          persistentVolumeClaim:
            claimName: {{ .Values.admin.data.claimName }}
        {{- end }}
        {{- if eq .Values.admin.logs.type "hostPath" }}
        - name: admin-logs
          hostPath:
            path: {{ .Values.admin.logs.hostPathPrefix }}/logs/seaweedfs/admin
            type: DirectoryOrCreate
        {{- end }}
        {{- if eq .Values.admin.logs.type "emptyDir" }}
        - name: admin-logs
          emptyDir: {}
        {{- end }}
        {{- if eq .Values.admin.logs.type "existingClaim" }}
        - name: admin-logs
          persistentVolumeClaim:
            claimName: {{ .Values.admin.logs.claimName }}
        {{- end }}
        {{- if .Values.global.enableSecurity }}
        - name: security-config
          configMap:
            name: {{ template "seaweedfs.name" . }}-security-config
        - name: ca-cert
          secret:
            secretName: {{ template "seaweedfs.name" . }}-ca-cert
        - name: master-cert
          secret:
            secretName: {{ template "seaweedfs.name" . }}-master-cert
        - name: volume-cert
          secret:
            secretName: {{ template "seaweedfs.name" . }}-volume-cert
        - name: filer-cert
          secret:
            secretName: {{ template "seaweedfs.name" . }}-filer-cert
        - name: client-cert
          secret:
            secretName: {{ template "seaweedfs.name" . }}-client-cert
        {{- end }}
        {{ tpl .Values.admin.extraVolumes . | indent 8 | trim }}
      {{- if .Values.admin.nodeSelector }}
      nodeSelector:
        {{ tpl .Values.admin.nodeSelector . | indent 8 | trim }}
      {{- end }}
  {{- $pvc_exists := include "admin.pvc_exists" . -}}
  {{- if $pvc_exists }}
  volumeClaimTemplates:
    {{- if eq .Values.admin.data.type "persistentVolumeClaim" }}
    - metadata:
        name: admin-data
        {{- with .Values.admin.data.annotations }}
        annotations:
          {{- toYaml . | nindent 10 }}
        {{- end }}
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: {{ .Values.admin.data.storageClass }}
        resources:
          requests:
            storage: {{ .Values.admin.data.size }}
    {{- end }}
    {{- if eq .Values.admin.logs.type "persistentVolumeClaim" }}
    - metadata:
        name: admin-logs
        {{- with .Values.admin.logs.annotations }}
        annotations:
          {{- toYaml . | nindent 10 }}
        {{- end }}
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: {{ .Values.admin.logs.storageClass }}
        resources:
          requests:
            storage: {{ .Values.admin.logs.size }}
    {{- end }}
  {{- end }}
{{- end }}

29
k8s/charts/seaweedfs/templates/shared/_helpers.tpl

@@ -83,6 +83,26 @@ Inject extra environment vars in the format key:value, if populated
{{- end -}}
{{- end -}}
{{/* Return the proper admin image */}}
{{- define "admin.image" -}}
{{- if .Values.admin.imageOverride -}}
{{- $imageOverride := .Values.admin.imageOverride -}}
{{- printf "%s" $imageOverride -}}
{{- else -}}
{{- include "common.image" . }}
{{- end -}}
{{- end -}}
{{/* Return the proper worker image */}}
{{- define "worker.image" -}}
{{- if .Values.worker.imageOverride -}}
{{- $imageOverride := .Values.worker.imageOverride -}}
{{- printf "%s" $imageOverride -}}
{{- else -}}
{{- include "common.image" . }}
{{- end -}}
{{- end -}}
{{/* Return the proper volume image */}}
{{- define "volume.image" -}}
{{- if .Values.volume.imageOverride -}}
@@ -136,6 +156,15 @@ Inject extra environment vars in the format key:value, if populated
{{- end -}}
{{- end -}}
{{/* check if any Admin PVC exists */}}
{{- define "admin.pvc_exists" -}}
{{- if or (eq .Values.admin.data.type "persistentVolumeClaim") (eq .Values.admin.logs.type "persistentVolumeClaim") -}}
{{- printf "true" -}}
{{- else -}}
{{- printf "" -}}
{{- end -}}
{{- end -}}
{{/* check if any InitContainers exist for Volumes */}}
{{- define "volume.initContainers_exists" -}}
{{- if or (not (empty .Values.volume.idx )) (not (empty .Values.volume.initContainers )) -}}

271
k8s/charts/seaweedfs/templates/worker/worker-deployment.yaml

@@ -0,0 +1,271 @@
{{- if .Values.worker.enabled }}
{{- if and (not .Values.worker.adminServer) (not .Values.admin.enabled) }}
{{- fail "worker.adminServer must be set if admin.enabled is false within the same release" -}}
{{- end }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "seaweedfs.name" . }}-worker
namespace: {{ .Release.Namespace }}
labels:
app.kubernetes.io/name: {{ template "seaweedfs.name" . }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/component: worker
{{- if .Values.worker.annotations }}
annotations:
{{- toYaml .Values.worker.annotations | nindent 4 }}
{{- end }}
spec:
replicas: {{ .Values.worker.replicas }}
selector:
matchLabels:
app.kubernetes.io/name: {{ template "seaweedfs.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/component: worker
template:
metadata:
labels:
app.kubernetes.io/name: {{ template "seaweedfs.name" . }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/component: worker
{{ with .Values.podLabels }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.worker.podLabels }}
{{- toYaml . | nindent 8 }}
{{- end }}
annotations:
{{ with .Values.podAnnotations }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.worker.podAnnotations }}
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
restartPolicy: {{ default .Values.global.restartPolicy .Values.worker.restartPolicy }}
{{- if .Values.worker.affinity }}
affinity:
{{ tpl .Values.worker.affinity . | nindent 8 | trim }}
{{- end }}
{{- if .Values.worker.topologySpreadConstraints }}
topologySpreadConstraints:
{{ tpl .Values.worker.topologySpreadConstraints . | nindent 8 | trim }}
{{- end }}
{{- if .Values.worker.tolerations }}
tolerations:
{{ tpl .Values.worker.tolerations . | nindent 8 | trim }}
{{- end }}
{{- include "seaweedfs.imagePullSecrets" . | nindent 6 }}
terminationGracePeriodSeconds: 60
{{- if .Values.worker.priorityClassName }}
priorityClassName: {{ .Values.worker.priorityClassName | quote }}
{{- end }}
enableServiceLinks: false
{{- if .Values.worker.serviceAccountName }}
serviceAccountName: {{ .Values.worker.serviceAccountName | quote }}
{{- end }}
{{- if .Values.worker.initContainers }}
initContainers:
{{ tpl .Values.worker.initContainers . | nindent 8 | trim }}
{{- end }}
{{- if .Values.worker.podSecurityContext.enabled }}
securityContext: {{- omit .Values.worker.podSecurityContext "enabled" | toYaml | nindent 8 }}
{{- end }}
containers:
- name: seaweedfs
image: {{ template "worker.image" . }}
imagePullPolicy: {{ default "IfNotPresent" .Values.global.imagePullPolicy }}
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: SEAWEEDFS_FULLNAME
value: "{{ template "seaweedfs.name" . }}"
{{- if .Values.worker.extraEnvironmentVars }}
{{- range $key, $value := .Values.worker.extraEnvironmentVars }}
- name: {{ $key }}
{{- if kindIs "string" $value }}
value: {{ $value | quote }}
{{- else }}
valueFrom:
{{ toYaml $value | nindent 16 | trim }}
{{- end -}}
{{- end }}
{{- end }}
{{- if .Values.global.extraEnvironmentVars }}
{{- range $key, $value := .Values.global.extraEnvironmentVars }}
- name: {{ $key }}
{{- if kindIs "string" $value }}
value: {{ $value | quote }}
{{- else }}
valueFrom:
{{ toYaml $value | nindent 16 | trim }}
{{- end -}}
{{- end }}
{{- end }}
command:
- "/bin/sh"
- "-ec"
- |
exec /usr/bin/weed \
{{- if or (eq .Values.worker.logs.type "hostPath") (eq .Values.worker.logs.type "emptyDir") (eq .Values.worker.logs.type "existingClaim") }}
-logdir=/logs \
{{- else }}
-logtostderr=true \
{{- end }}
{{- if .Values.worker.loggingOverrideLevel }}
-v={{ .Values.worker.loggingOverrideLevel }} \
{{- else }}
-v={{ .Values.global.loggingLevel }} \
{{- end }}
worker \
{{- if .Values.worker.adminServer }}
-adminServer={{ .Values.worker.adminServer }} \
{{- else }}
-adminServer={{ template "seaweedfs.name" . }}-admin.{{ .Release.Namespace }}:{{ .Values.admin.grpcPort }} \
{{- end }}
-capabilities={{ .Values.worker.capabilities }} \
-maxConcurrent={{ .Values.worker.maxConcurrent }} \
-workingDir={{ .Values.worker.workingDir }}{{- if .Values.worker.extraArgs }} \{{ end }}
{{- range $index, $arg := .Values.worker.extraArgs }}
{{ $arg }}{{- if lt $index (sub (len $.Values.worker.extraArgs) 1) }} \{{ end }}
{{- end }}
volumeMounts:
{{- if or (eq .Values.worker.data.type "hostPath") (eq .Values.worker.data.type "emptyDir") (eq .Values.worker.data.type "existingClaim") }}
- name: worker-data
mountPath: {{ .Values.worker.workingDir }}
{{- end }}
{{- if or (eq .Values.worker.logs.type "hostPath") (eq .Values.worker.logs.type "emptyDir") (eq .Values.worker.logs.type "existingClaim") }}
- name: worker-logs
mountPath: /logs
{{- end }}
{{- if .Values.global.enableSecurity }}
- name: security-config
readOnly: true
mountPath: /etc/seaweedfs/security.toml
subPath: security.toml
- name: ca-cert
readOnly: true
mountPath: /usr/local/share/ca-certificates/ca/
- name: master-cert
readOnly: true
mountPath: /usr/local/share/ca-certificates/master/
- name: volume-cert
readOnly: true
mountPath: /usr/local/share/ca-certificates/volume/
- name: filer-cert
readOnly: true
mountPath: /usr/local/share/ca-certificates/filer/
- name: client-cert
readOnly: true
mountPath: /usr/local/share/ca-certificates/client/
{{- end }}
{{ tpl .Values.worker.extraVolumeMounts . | nindent 12 | trim }}
ports:
{{- if .Values.worker.metricsPort }}
- containerPort: {{ .Values.worker.metricsPort }}
name: metrics
{{- end }}
{{- with .Values.worker.resources }}
resources:
{{- toYaml . | nindent 12 }}
{{- end }}
{{- if .Values.worker.livenessProbe.enabled }}
livenessProbe:
{{- with .Values.worker.livenessProbe.tcpSocket }}
tcpSocket:
port: {{ .port }}
{{- end }}
initialDelaySeconds: {{ .Values.worker.livenessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.worker.livenessProbe.periodSeconds }}
successThreshold: {{ .Values.worker.livenessProbe.successThreshold }}
failureThreshold: {{ .Values.worker.livenessProbe.failureThreshold }}
timeoutSeconds: {{ .Values.worker.livenessProbe.timeoutSeconds }}
{{- end }}
{{- if .Values.worker.readinessProbe.enabled }}
readinessProbe:
{{- with .Values.worker.readinessProbe.tcpSocket }}
tcpSocket:
port: {{ .port }}
{{- end }}
initialDelaySeconds: {{ .Values.worker.readinessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.worker.readinessProbe.periodSeconds }}
successThreshold: {{ .Values.worker.readinessProbe.successThreshold }}
failureThreshold: {{ .Values.worker.readinessProbe.failureThreshold }}
timeoutSeconds: {{ .Values.worker.readinessProbe.timeoutSeconds }}
{{- end }}
{{- if .Values.worker.containerSecurityContext.enabled }}
securityContext: {{- omit .Values.worker.containerSecurityContext "enabled" | toYaml | nindent 12 }}
{{- end }}
{{- if .Values.worker.sidecars }}
{{- include "common.tplvalues.render" (dict "value" .Values.worker.sidecars "context" $) | nindent 8 }}
{{- end }}
volumes:
{{- if eq .Values.worker.data.type "hostPath" }}
- name: worker-data
hostPath:
path: {{ .Values.worker.data.hostPathPrefix }}/seaweedfs-worker-data
type: DirectoryOrCreate
{{- end }}
{{- if eq .Values.worker.data.type "emptyDir" }}
- name: worker-data
emptyDir: {}
{{- end }}
{{- if eq .Values.worker.data.type "existingClaim" }}
- name: worker-data
persistentVolumeClaim:
claimName: {{ .Values.worker.data.claimName }}
{{- end }}
{{- if eq .Values.worker.logs.type "hostPath" }}
- name: worker-logs
hostPath:
path: {{ .Values.worker.logs.hostPathPrefix }}/logs/seaweedfs/worker
type: DirectoryOrCreate
{{- end }}
{{- if eq .Values.worker.logs.type "emptyDir" }}
- name: worker-logs
emptyDir: {}
{{- end }}
{{- if eq .Values.worker.logs.type "existingClaim" }}
- name: worker-logs
persistentVolumeClaim:
claimName: {{ .Values.worker.logs.claimName }}
{{- end }}
{{- if .Values.global.enableSecurity }}
- name: security-config
configMap:
name: {{ template "seaweedfs.name" . }}-security-config
- name: ca-cert
secret:
secretName: {{ template "seaweedfs.name" . }}-ca-cert
- name: master-cert
secret:
secretName: {{ template "seaweedfs.name" . }}-master-cert
- name: volume-cert
secret:
secretName: {{ template "seaweedfs.name" . }}-volume-cert
- name: filer-cert
secret:
secretName: {{ template "seaweedfs.name" . }}-filer-cert
- name: client-cert
secret:
secretName: {{ template "seaweedfs.name" . }}-client-cert
{{- end }}
{{ tpl .Values.worker.extraVolumes . | indent 8 | trim }}
{{- if .Values.worker.nodeSelector }}
nodeSelector:
{{ tpl .Values.worker.nodeSelector . | indent 8 | trim }}
{{- end }}
{{- end }}
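The deployment above renders `worker.extraVolumes` and `worker.extraVolumeMounts` through `tpl`, so in values they are supplied as templated YAML strings rather than lists. A minimal sketch of what that looks like in a values override (the `extra-config` volume name and the ConfigMap it references are hypothetical, not part of the chart):

```yaml
worker:
  extraVolumes: |
    - name: extra-config
      configMap:
        name: {{ .Release.Name }}-worker-extra   # hypothetical ConfigMap
  extraVolumeMounts: |
    - name: extra-config
      readOnly: true
      mountPath: /etc/seaweedfs/extra
```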

26
k8s/charts/seaweedfs/templates/worker/worker-service.yaml

@ -0,0 +1,26 @@
{{- if .Values.worker.enabled }}
apiVersion: v1
kind: Service
metadata:
name: {{ template "seaweedfs.name" . }}-worker
namespace: {{ .Release.Namespace }}
labels:
app.kubernetes.io/name: {{ template "seaweedfs.name" . }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/component: worker
spec:
clusterIP: None # Headless service
{{- if .Values.worker.metricsPort }}
ports:
- name: "metrics"
port: {{ .Values.worker.metricsPort }}
targetPort: {{ .Values.worker.metricsPort }}
protocol: TCP
{{- end }}
selector:
app.kubernetes.io/name: {{ template "seaweedfs.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/component: worker
{{- end }}

33
k8s/charts/seaweedfs/templates/worker/worker-servicemonitor.yaml

@ -0,0 +1,33 @@
{{- if .Values.worker.enabled }}
{{- if .Values.worker.metricsPort }}
{{- if .Values.global.monitoring.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ template "seaweedfs.name" . }}-worker
namespace: {{ .Release.Namespace }}
labels:
app.kubernetes.io/name: {{ template "seaweedfs.name" . }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/component: worker
{{- with .Values.global.monitoring.additionalLabels }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.worker.serviceMonitor.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
endpoints:
- interval: 30s
port: metrics
scrapeTimeout: 5s
selector:
matchLabels:
app.kubernetes.io/name: {{ template "seaweedfs.name" . }}
app.kubernetes.io/component: worker
{{- end }}
{{- end }}
{{- end }}
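The ServiceMonitor above is only rendered when `worker.enabled`, `worker.metricsPort`, and `global.monitoring.enabled` are all set, and many Prometheus Operator installs only pick up ServiceMonitors carrying a specific label. A hedged values sketch (the `release: prometheus` label is an assumption about a typical kube-prometheus-stack install, not something this chart requires):

```yaml
global:
  monitoring:
    enabled: true
    additionalLabels:
      release: prometheus   # assumed selector label of your Prometheus Operator
worker:
  enabled: true
  metricsPort: 9327
```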

234
k8s/charts/seaweedfs/values.yaml

@ -1088,6 +1088,240 @@ sftp:
failureThreshold: 100
timeoutSeconds: 10
admin:
enabled: false
imageOverride: null
restartPolicy: null
replicas: 1
port: 23646 # Default admin port
grpcPort: 33646 # Default gRPC port for worker connections
metricsPort: 9327
loggingOverrideLevel: null
# Admin authentication
# Note: Avoid special shell characters in password ($ \ " ' ( ) [ ] { } ; | & < >)
# For production, consider using Kubernetes Secrets (future enhancement)
adminUser: "admin"
adminPassword: "" # If empty, auth is disabled
# Data directory for admin configuration and maintenance data
dataDir: "" # If empty, configuration is kept in memory only
# Master servers to connect to
# If empty, uses global.masterServer or auto-discovers from master statefulset
masters: ""
# Custom command line arguments to add to the admin command
# Example: ["-customFlag", "value", "-anotherFlag"]
extraArgs: []
# Storage configuration
data:
type: "emptyDir" # Options: "hostPath", "persistentVolumeClaim", "emptyDir", "existingClaim"
size: "10Gi"
storageClass: ""
hostPathPrefix: /storage
claimName: ""
annotations: {}
logs:
type: "emptyDir" # Options: "hostPath", "persistentVolumeClaim", "emptyDir", "existingClaim"
size: "5Gi"
storageClass: ""
hostPathPrefix: /storage
claimName: ""
annotations: {}
# Additional resources
sidecars: []
initContainers: ""
extraVolumes: ""
extraVolumeMounts: ""
podLabels: {}
podAnnotations: {}
annotations: {}
## Set podManagementPolicy
podManagementPolicy: Parallel
# Affinity Settings
# Commenting out or leaving the affinity variable empty allows
# deployment to single-node clusters such as Minikube
affinity: |
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app.kubernetes.io/name: {{ template "seaweedfs.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/component: admin
topologyKey: kubernetes.io/hostname
# Topology Spread Constraints Settings
# This should map directly to the value of the topologySpreadConstraints
# for a PodSpec. By default, no constraints are set.
topologySpreadConstraints: ""
resources: {}
tolerations: ""
nodeSelector: ""
priorityClassName: ""
serviceAccountName: ""
podSecurityContext: {}
containerSecurityContext: {}
extraEnvironmentVars: {}
# Health checks
livenessProbe:
enabled: true
httpGet:
path: /health
scheme: HTTP
initialDelaySeconds: 20
periodSeconds: 60
successThreshold: 1
failureThreshold: 5
timeoutSeconds: 10
readinessProbe:
enabled: true
httpGet:
path: /health
scheme: HTTP
initialDelaySeconds: 15
periodSeconds: 15
successThreshold: 1
failureThreshold: 3
timeoutSeconds: 10
ingress:
enabled: false
className: "nginx"
# Set host to false to serve all hostnames ("*")
host: "admin.seaweedfs.local"
path: "/"
pathType: Prefix
annotations: {}
tls: []
service:
type: ClusterIP
annotations: {}
# ServiceMonitor annotations (separate from pod/deployment annotations)
serviceMonitor:
annotations: {}
worker:
enabled: false
imageOverride: null
restartPolicy: null
replicas: 1
loggingOverrideLevel: null
metricsPort: 9327
# Admin server to connect to
# Format: "host:port". If empty, auto-discovered from the admin service
adminServer: ""
# Worker capabilities - comma-separated list
# Available: vacuum, balance, ec (erasure_coding)
# Default: "vacuum,ec,balance"
capabilities: "vacuum,ec,balance"
# Maximum number of concurrent tasks
maxConcurrent: 3
# Working directory for task execution
workingDir: "/tmp/seaweedfs-worker"
# Custom command line arguments to add to the worker command
# Example: ["-customFlag", "value", "-anotherFlag"]
extraArgs: []
# Storage configuration for working directory
# Note: Workers use Deployment, so use "emptyDir", "hostPath", or "existingClaim"
# Do NOT use "persistentVolumeClaim" - use "existingClaim" with pre-provisioned PVC instead
data:
type: "emptyDir" # Options: "hostPath", "emptyDir", "existingClaim"
hostPathPrefix: /storage
claimName: "" # For existingClaim type
logs:
type: "emptyDir" # Options: "hostPath", "emptyDir", "existingClaim"
hostPathPrefix: /storage
claimName: "" # For existingClaim type
# Additional resources
sidecars: []
initContainers: ""
extraVolumes: ""
extraVolumeMounts: ""
podLabels: {}
podAnnotations: {}
annotations: {}
# Affinity Settings
# Commenting out or leaving the affinity variable empty allows
# deployment to single-node clusters such as Minikube
affinity: |
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app.kubernetes.io/name: {{ template "seaweedfs.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/component: worker
topologyKey: kubernetes.io/hostname
# Topology Spread Constraints Settings
# This should map directly to the value of the topologySpreadConstraints
# for a PodSpec. By default, no constraints are set.
topologySpreadConstraints: ""
resources:
requests:
cpu: "500m"
memory: "512Mi"
limits:
cpu: "2"
memory: "2Gi"
tolerations: ""
nodeSelector: ""
priorityClassName: ""
serviceAccountName: ""
podSecurityContext: {}
containerSecurityContext: {}
extraEnvironmentVars: {}
# Health checks for worker pods
# Since workers do not have an HTTP endpoint, a tcpSocket probe on the metrics port is recommended.
livenessProbe:
enabled: true
tcpSocket:
port: metrics
initialDelaySeconds: 30
periodSeconds: 60
successThreshold: 1
failureThreshold: 5
timeoutSeconds: 10
readinessProbe:
enabled: true
tcpSocket:
port: metrics
initialDelaySeconds: 20
periodSeconds: 15
successThreshold: 1
failureThreshold: 3
timeoutSeconds: 10
# ServiceMonitor annotations (separate from pod/deployment annotations)
serviceMonitor:
annotations: {}
# All-in-one deployment configuration
allInOne:
enabled: false
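Putting the two new components together, a minimal values override enabling both might look like the sketch below (the password is a placeholder and, per the note above, must avoid the listed shell-special characters; `dataDir` is optional and shown only to persist maintenance state instead of keeping it in memory):

```yaml
admin:
  enabled: true
  adminUser: "admin"
  adminPassword: "change-me"   # placeholder; avoid $ \ " ' ( ) [ ] { } ; | & < >
  dataDir: "/data"             # optional: persist admin/maintenance config
worker:
  enabled: true
  replicas: 2
  capabilities: "vacuum,ec,balance"
  maxConcurrent: 3
```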

5
weed/command/worker.go

@ -19,6 +19,9 @@ import (
_ "github.com/seaweedfs/seaweedfs/weed/worker/tasks/balance"
_ "github.com/seaweedfs/seaweedfs/weed/worker/tasks/erasure_coding"
_ "github.com/seaweedfs/seaweedfs/weed/worker/tasks/vacuum"
// TODO: Implement additional task packages (add to default capabilities when ready):
// _ "github.com/seaweedfs/seaweedfs/weed/worker/tasks/remote" - for uploading volumes to remote/cloud storage
// _ "github.com/seaweedfs/seaweedfs/weed/worker/tasks/replication" - for fixing replication issues and maintaining data consistency
)
var cmdWorker = &Command{
@ -41,7 +44,7 @@ Examples:
var (
workerAdminServer = cmdWorker.Flag.String("admin", "localhost:23646", "admin server address")
-	workerCapabilities = cmdWorker.Flag.String("capabilities", "vacuum,ec,remote,replication,balance", "comma-separated list of task types this worker can handle")
+	workerCapabilities = cmdWorker.Flag.String("capabilities", "vacuum,ec,balance", "comma-separated list of task types this worker can handle")
workerMaxConcurrent = cmdWorker.Flag.Int("maxConcurrent", 2, "maximum number of concurrent tasks")
workerHeartbeatInterval = cmdWorker.Flag.Duration("heartbeat", 30*time.Second, "heartbeat interval")
workerTaskRequestInterval = cmdWorker.Flag.Duration("taskInterval", 5*time.Second, "task request interval")
