# Available parameters and their default values for the SeaweedFS chart.

global:
  createClusterRole: true
  registry: ""
  repository: ""
  imageName: chrislusf/seaweedfs
  imagePullPolicy: IfNotPresent
  imagePullSecrets: ""
  restartPolicy: Always
  loggingLevel: 1
  enableSecurity: false
  masterServer: null
  securityConfig:
    jwtSigning:
      volumeWrite: true
      volumeRead: false
      filerWrite: false
      filerRead: false
  # we will use this serviceAccountName for all ClusterRoles/ClusterRoleBindings
  serviceAccountName: "seaweedfs"
  certificates:
    alphacrds: false
  monitoring:
    enabled: false
    gatewayHost: null
    gatewayPort: null
    additionalLabels: {}
  # if enabled will use global.replicationPlacment and override master & filer defaultReplicaPlacement config
  enableReplication: false
  # replication type is XYZ:
  # X number of replica in other data centers
  # Y number of replica in other racks in the same data center
  # Z number of replica in other servers in the same rack
  replicationPlacment: "001"
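  # Worked examples, purely illustrative, based on the XYZ digits above: "000" keeps a
  # single copy; "001" adds one replica on another server in the same rack; "010" adds
  # one replica in another rack of the same data center; "100" adds one replica in
  # another data center. A value such as "110" would combine one replica in another
  # data center with one in another rack, e.g.:
  # replicationPlacment: "110"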
  extraEnvironmentVars:
    WEED_CLUSTER_DEFAULT: "sw"
    WEED_CLUSTER_SW_MASTER: "seaweedfs-master.seaweedfs:9333"
    WEED_CLUSTER_SW_FILER: "seaweedfs-filer-client.seaweedfs:8888"
    # WEED_JWT_SIGNING_KEY:
    #   secretKeyRef:
    #     name: seaweedfs-signing-key
    #     key: signingKey

image:
  registry: ""
  repository: ""
master:
  enabled: true
  repository: null
  imageName: null
  imageTag: null
  imageOverride: null
  restartPolicy: null
  replicas: 1
  port: 9333
  grpcPort: 19333
  metricsPort: 9327
  ipBind: "0.0.0.0"
  volumePreallocate: false
  volumeSizeLimitMB: 1000
  loggingOverrideLevel: null
  # number of seconds between heartbeats, default 5
  pulseSeconds: null
  # threshold to vacuum and reclaim spaces, default 0.3 (30%)
  garbageThreshold: null
  # Prometheus push interval in seconds, default 15
  metricsIntervalSec: 15
  # replication type is XYZ:
  # X number of replica in other data centers
  # Y number of replica in other racks in the same data center
  # Z number of replica in other servers in the same rack
  defaultReplication: "000"
  # Disable HTTP requests; only gRPC operations are allowed
  disableHttp: false
  config: |-
    # Enter any extra configuration for master.toml here.
    # It may be a multi-line string.

  # You may use ANY storage-class, example with local-path-provisioner
  # Annotations are optional.
  # data:
  #   type: "persistentVolumeClaim"
  #   size: "24Ti"
  #   storageClass: "local-path-provisioner"
  #   annotations:
  #     "key": "value"
  #
  # You may also specify an existing claim:
  # data:
  #   type: "existingClaim"
  #   claimName: "my-pvc"
  #
  # You can also use emptyDir storage:
  # data:
  #   type: "emptyDir"
  data:
    type: "hostPath"
    storageClass: ""
    hostPathPrefix: /ssd

  # You can also use emptyDir storage:
  # logs:
  #   type: "emptyDir"
  logs:
    type: "hostPath"
    size: ""
    storageClass: ""
    hostPathPrefix: /storage

  ## @param master.sidecars Add additional sidecar containers to the master pod(s)
  ## e.g:
  ## sidecars:
  ##   - name: your-image-name
  ##     image: your-image
  ##     imagePullPolicy: Always
  ##     ports:
  ##       - name: portname
  ##         containerPort: 1234
  ##
  sidecars: []

  initContainers: ""

  extraVolumes: ""
  extraVolumeMounts: ""

  # Labels to be added to the master pods
  podLabels: {}

  # Annotations to be added to the master pods
  podAnnotations: {}

  ## Set podManagementPolicy
  podManagementPolicy: Parallel

  # Resource requests, limits, etc. for the master cluster placement. This
  # should map directly to the value of the resources field for a PodSpec,
  # formatted as a multi-line string. By default no direct resource request
  # is made.
  resources: {}

  # updatePartition is used to control a careful rolling update of SeaweedFS
  # masters.
  updatePartition: 0

  # Affinity Settings
  # Commenting out the affinity variable, or setting it to an empty value, allows
  # deployment to single-node clusters such as Minikube
  affinity: |
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/name: {{ template "seaweedfs.name" . }}
              app.kubernetes.io/instance: {{ .Release.Name }}
              app.kubernetes.io/component: master
          topologyKey: kubernetes.io/hostname

  # Toleration Settings for master pods
  # This should be a multi-line string matching the Toleration array
  # in a PodSpec.
  tolerations: ""

  # nodeSelector labels for master pod assignment, formatted as a multi-line string.
  # ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
  # Example:
  nodeSelector: |
    kubernetes.io/arch: amd64
  # nodeSelector: |
  #   sw-backend: "true"

  # used to assign priority to master pods
  # ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
  priorityClassName: ""

  # used to assign a service account.
  # ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
  serviceAccountName: ""

  # Configure security context for Pod
  # ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
  # Example:
  # podSecurityContext:
  #   enabled: true
  #   runAsUser: 1000
  #   runAsGroup: 3000
  #   fsGroup: 2000
  podSecurityContext: {}

  # Configure security context for Container
  # ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
  # Example:
  # containerSecurityContext:
  #   enabled: true
  #   runAsUser: 2000
  #   allowPrivilegeEscalation: false
  containerSecurityContext: {}

  ingress:
    enabled: false
    className: "nginx"
    # host: false for "*" hostname
    host: "master.seaweedfs.local"
    annotations:
      nginx.ingress.kubernetes.io/auth-type: "basic"
      nginx.ingress.kubernetes.io/auth-secret: "default/ingress-basic-auth-secret"
      nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - SW-Master'
      nginx.ingress.kubernetes.io/service-upstream: "true"
      nginx.ingress.kubernetes.io/rewrite-target: /$1
      nginx.ingress.kubernetes.io/use-regex: "true"
      nginx.ingress.kubernetes.io/enable-rewrite-log: "true"
      nginx.ingress.kubernetes.io/ssl-redirect: "false"
      nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
      nginx.ingress.kubernetes.io/configuration-snippet: |
        sub_filter '<head>' '<head> <base href="/sw-master/">'; #add base url
        sub_filter '="/' '="./'; #make absolute paths to relative
        sub_filter '=/' '=./';
        sub_filter '/seaweedfsstatic' './seaweedfsstatic';
        sub_filter_once off;
    tls: []

  extraEnvironmentVars:
    WEED_MASTER_VOLUME_GROWTH_COPY_1: '7'
    WEED_MASTER_VOLUME_GROWTH_COPY_2: '6'
    WEED_MASTER_VOLUME_GROWTH_COPY_3: '3'
    WEED_MASTER_VOLUME_GROWTH_COPY_OTHER: '1'
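  # Rough sketch of what the defaults above mean (they appear to mirror the
  # [master.volume_growth] section of master.toml): when a new writable volume is
  # needed, the master pre-creates several at once so writes can spread out.
  # With a single copy it pre-creates 7 actual volumes, with 2 copies 6 actual
  # volumes (3 logical x 2), with 3 copies 3 actual volumes (1 logical x 3), and
  # 1 otherwise. Lower these values on clusters with few volume servers or little disk.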
  # used to configure livenessProbe on master-server containers
  #
  livenessProbe:
    enabled: true
    httpGet:
      path: /cluster/status
      scheme: HTTP
    initialDelaySeconds: 20
    periodSeconds: 30
    successThreshold: 1
    failureThreshold: 4
    timeoutSeconds: 10

  # used to configure readinessProbe on master-server containers
  #
  readinessProbe:
    enabled: true
    httpGet:
      path: /cluster/status
      scheme: HTTP
    initialDelaySeconds: 10
    periodSeconds: 45
    successThreshold: 2
    failureThreshold: 100
    timeoutSeconds: 10
volume:
  enabled: true
  repository: null
  imageName: null
  imageTag: null
  imageOverride: null
  restartPolicy: null
  port: 8080
  grpcPort: 18080
  metricsPort: 9327
  ipBind: "0.0.0.0"
  replicas: 1
  loggingOverrideLevel: null
  # number of seconds between heartbeats, must be smaller than or equal to the master's setting
  pulseSeconds: null
  # Choose [memory|leveldb|leveldbMedium|leveldbLarge] mode for the memory/performance balance, default memory
  index: null
  # limit file size to avoid out-of-memory errors, default 256MB
  fileSizeLimitMB: null
  # minimum free disk space (in percent). If free disk space drops below this value, all volumes are marked as read-only
  minFreeSpacePercent: 7

  # For each data disk you may use ANY storage-class, example with local-path-provisioner
  # Annotations are optional.
  # dataDirs:
  #   - name: data
  #     type: "persistentVolumeClaim"
  #     size: "24Ti"
  #     storageClass: "local-path-provisioner"
  #     annotations:
  #       "key": "value"
  #     maxVolumes: 0 # If set to zero on non-windows OS, the limit will be auto configured. (default "7")
  #
  # You may also specify an existing claim:
  #   - name: data
  #     type: "existingClaim"
  #     claimName: "my-pvc"
  #     maxVolumes: 0 # If set to zero on non-windows OS, the limit will be auto configured. (default "7")
  #
  # You can also use emptyDir storage:
  #   - name: data
  #     type: "emptyDir"
  #     maxVolumes: 0 # If set to zero on non-windows OS, the limit will be auto configured. (default "7")
  dataDirs:
    - name: data1
      type: "hostPath"
      hostPathPrefix: /ssd
      maxVolumes: 0
    # - name: data2
    #   type: "persistentVolumeClaim"
    #   storageClass: "yourClassNameOfChoice"
    #   size: "800Gi"
    #   maxVolumes: 0

  # idx can be defined by:
  #
  # idx:
  #   type: "hostPath"
  #   hostPathPrefix: /ssd
  #
  # or
  #
  # idx:
  #   type: "persistentVolumeClaim"
  #   size: "20Gi"
  #   storageClass: "local-path-provisioner"
  #
  # or
  #
  # idx:
  #   type: "existingClaim"
  #   claimName: "myClaim"
  #
  # or
  #
  # idx:
  #   type: "emptyDir"

  # same applies to "logs"
  idx: {}

  logs: {}

  # limit background compaction or copying speed in megabytes per second
  compactionMBps: "50"

  # Volume server's rack name
  rack: null

  # Volume server's data center name
  dataCenter: null

  # Redirect moved or non-local volumes. (default proxy)
  readMode: proxy

  # Comma-separated IP addresses that have write permission. No limit if empty.
  whiteList: null

  # Adjust jpg orientation when uploading.
  imagesFixOrientation: false

  ## @param volume.sidecars Add additional sidecar containers to the volume pod(s)
  ## e.g:
  ## sidecars:
  ##   - name: your-image-name
  ##     image: your-image
  ##     imagePullPolicy: Always
  ##     ports:
  ##       - name: portname
  ##         containerPort: 1234
  ##
  sidecars: []

  initContainers: ""

  extraVolumes: ""
  extraVolumeMounts: ""

  # Labels to be added to the volume pods
  podLabels: {}

  # Annotations to be added to the volume pods
  podAnnotations: {}

  ## Set podManagementPolicy
  podManagementPolicy: Parallel

  # Affinity Settings
  # Commenting out the affinity variable, or setting it to an empty value, allows
  # deployment to single-node clusters such as Minikube
  affinity: |
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/name: {{ template "seaweedfs.name" . }}
              app.kubernetes.io/instance: {{ .Release.Name }}
              app.kubernetes.io/component: volume
          topologyKey: kubernetes.io/hostname

  # Resource requests, limits, etc. for the server cluster placement. This
  # should map directly to the value of the resources field for a PodSpec,
  # formatted as a multi-line string. By default no direct resource request
  # is made.
  resources: {}

  # Toleration Settings for server pods
  # This should be a multi-line string matching the Toleration array
  # in a PodSpec.
  tolerations: ""

  # nodeSelector labels for server pod assignment, formatted as a multi-line string.
  # ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
  # Example:
  nodeSelector: |
    kubernetes.io/arch: amd64
  # nodeSelector: |
  #   sw-volume: "true"

  # used to assign priority to server pods
  # ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
  priorityClassName: ""

  # used to assign a service account.
  # ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
  serviceAccountName: ""

  extraEnvironmentVars:

  # Configure security context for Pod
  # ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
  # Example:
  # podSecurityContext:
  #   enabled: true
  #   runAsUser: 1000
  #   runAsGroup: 3000
  #   fsGroup: 2000
  podSecurityContext: {}

  # Configure security context for Container
  # ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
  # Example:
  # containerSecurityContext:
  #   enabled: true
  #   runAsUser: 2000
  #   allowPrivilegeEscalation: false
  containerSecurityContext: {}

  # used to configure livenessProbe on volume-server containers
  #
  livenessProbe:
    enabled: true
    httpGet:
      path: /status
      scheme: HTTP
    initialDelaySeconds: 20
    periodSeconds: 90
    successThreshold: 1
    failureThreshold: 4
    timeoutSeconds: 30

  # used to configure readinessProbe on volume-server containers
  #
  readinessProbe:
    enabled: true
    httpGet:
      path: /status
      scheme: HTTP
    initialDelaySeconds: 15
    periodSeconds: 15
    successThreshold: 1
    failureThreshold: 100
    timeoutSeconds: 30
filer:
  enabled: true
  repository: null
  imageName: null
  imageTag: null
  imageOverride: null
  restartPolicy: null
  replicas: 1
  port: 8888
  grpcPort: 18888
  metricsPort: 9327
  loggingOverrideLevel: null
  filerGroup: ""
  # replication type is XYZ:
  # X number of replica in other data centers
  # Y number of replica in other racks in the same data center
  # Z number of replica in other servers in the same rack
  defaultReplicaPlacement: "000"
  # turn off directory listing
  disableDirListing: false
  # split files larger than the limit, default 32
  maxMB: null
  # encrypt data on volume servers
  encryptVolumeData: false
  # Whether to proxy or redirect to the volume server during file GET requests
  redirectOnRead: false

  # Limit sub dir listing size (default 100000)
  dirListLimit: 100000

  # Disable HTTP requests; only gRPC operations are allowed
  disableHttp: false

  # DEPRECATED: enablePVC, storage, storageClass
  # Consider replacing with the filer.data section below instead.
  # Settings for configuring stateful storage of filer pods.
  # enablePVC will create a pvc for filer for data persistence.
  enablePVC: false
  # storage should be set to the disk size of the attached volume.
  storage: 25Gi
  # storageClass is the class of storage which defaults to null (the Kube cluster will pick the default).
  storageClass: null

  # You may use ANY storage-class, example with local-path-provisioner
  # Annotations are optional.
  # data:
  #   type: "persistentVolumeClaim"
  #   size: "24Ti"
  #   storageClass: "local-path-provisioner"
  #   annotations:
  #     "key": "value"
  #
  # You may also specify an existing claim:
  # data:
  #   type: "existingClaim"
  #   claimName: "my-pvc"
  #
  # You can also use emptyDir storage:
  # data:
  #   type: "emptyDir"
  data:
    type: "hostPath"
    size: ""
    storageClass: ""
    hostPathPrefix: /storage

  # You can also use emptyDir storage:
  # logs:
  #   type: "emptyDir"
  logs:
    type: "hostPath"
    size: ""
    storageClass: ""
    hostPathPrefix: /storage

  ## @param filer.sidecars Add additional sidecar containers to the filer pod(s)
  ## e.g:
  ## sidecars:
  ##   - name: your-image-name
  ##     image: your-image
  ##     imagePullPolicy: Always
  ##     ports:
  ##       - name: portname
  ##         containerPort: 1234
  ##
  sidecars: []

  initContainers: ""

  extraVolumes: ""
  extraVolumeMounts: ""

  # Labels to be added to the filer pods
  podLabels: {}

  # Annotations to be added to the filer pods
  podAnnotations: {}

  ## Set podManagementPolicy
  podManagementPolicy: Parallel

  # Affinity Settings
  # Commenting out the affinity variable, or setting it to an empty value, allows
  # deployment to single-node clusters such as Minikube
  affinity: |
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/name: {{ template "seaweedfs.name" . }}
              app.kubernetes.io/instance: {{ .Release.Name }}
              app.kubernetes.io/component: filer
          topologyKey: kubernetes.io/hostname

  # updatePartition is used to control a careful rolling update of SeaweedFS
  # filer pods.
  updatePartition: 0

  # Resource requests, limits, etc. for the server cluster placement. This
  # should map directly to the value of the resources field for a PodSpec,
  # formatted as a multi-line string. By default no direct resource request
  # is made.
  resources: {}

  # Toleration Settings for server pods
  # This should be a multi-line string matching the Toleration array
  # in a PodSpec.
  tolerations: ""

  # nodeSelector labels for server pod assignment, formatted as a multi-line string.
  # ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
  # Example:
  nodeSelector: |
    kubernetes.io/arch: amd64
  # nodeSelector: |
  #   sw-backend: "true"

  # used to assign priority to server pods
  # ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
  priorityClassName: ""

  # used to assign a service account.
  # ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
  serviceAccountName: ""

  # Configure security context for Pod
  # ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
  # Example:
  # podSecurityContext:
  #   enabled: true
  #   runAsUser: 1000
  #   runAsGroup: 3000
  #   fsGroup: 2000
  podSecurityContext: {}

  # Configure security context for Container
  # ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
  # Example:
  # containerSecurityContext:
  #   enabled: true
  #   runAsUser: 2000
  #   allowPrivilegeEscalation: false
  containerSecurityContext: {}

  ingress:
    enabled: false
    className: "nginx"
    # host: false for "*" hostname
    host: "seaweedfs.cluster.local"
    annotations:
      nginx.ingress.kubernetes.io/backend-protocol: GRPC
      nginx.ingress.kubernetes.io/auth-type: "basic"
      nginx.ingress.kubernetes.io/auth-secret: "default/ingress-basic-auth-secret"
      nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - SW-Filer'
      nginx.ingress.kubernetes.io/service-upstream: "true"
      nginx.ingress.kubernetes.io/rewrite-target: /$1
      nginx.ingress.kubernetes.io/use-regex: "true"
      nginx.ingress.kubernetes.io/enable-rewrite-log: "true"
      nginx.ingress.kubernetes.io/ssl-redirect: "false"
      nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
      nginx.ingress.kubernetes.io/configuration-snippet: |
        sub_filter '<head>' '<head> <base href="/sw-filer/">'; #add base url
        sub_filter '="/' '="./'; #make absolute paths to relative
        sub_filter '=/' '=./';
        sub_filter '/seaweedfsstatic' './seaweedfsstatic';
        sub_filter_once off;

  # extraEnvironmentVars is a map of extra environment variables to set on the stateful set.
  extraEnvironmentVars:
    WEED_MYSQL_ENABLED: "false"
    WEED_MYSQL_HOSTNAME: "mysql-db-host"
    WEED_MYSQL_PORT: "3306"
    WEED_MYSQL_DATABASE: "sw_database"
    WEED_MYSQL_CONNECTION_MAX_IDLE: "5"
    WEED_MYSQL_CONNECTION_MAX_OPEN: "75"
    # "refresh" the connection every 10 minutes, eliminating MySQL closing "old" connections
    WEED_MYSQL_CONNECTION_MAX_LIFETIME_SECONDS: "600"
    # enable usage of memsql as filer backend
    WEED_MYSQL_INTERPOLATEPARAMS: "true"
    # if you want to use leveldb2, you should enable "enablePVC", or you may lose your data.
    WEED_LEVELDB2_ENABLED: "true"
    # with http DELETE, by default the filer would check whether a folder is empty.
    # recursive_delete will delete all sub folders and files, similar to "rm -Rf"
    WEED_FILER_OPTIONS_RECURSIVE_DELETE: "false"
    # directories under this folder will automatically become separate buckets
    WEED_FILER_BUCKETS_FOLDER: "/buckets"
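    # Illustrative sketch only: to switch the filer store from the embedded leveldb2
    # to the MySQL backend configured above, flip the two toggles, for example
    #   WEED_LEVELDB2_ENABLED: "false"
    #   WEED_MYSQL_ENABLED: "true"
    # and point WEED_MYSQL_HOSTNAME / WEED_MYSQL_PORT / WEED_MYSQL_DATABASE at a
    # reachable MySQL instance (credentials can be supplied through
    # secretExtraEnvironmentVars below).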
  # used to configure livenessProbe on filer containers
  #
  livenessProbe:
    enabled: true
    httpGet:
      path: /
      scheme: HTTP
    initialDelaySeconds: 20
    periodSeconds: 30
    successThreshold: 1
    failureThreshold: 5
    timeoutSeconds: 10

  # used to configure readinessProbe on filer containers
  #
  readinessProbe:
    enabled: true
    httpGet:
      path: /
      scheme: HTTP
    initialDelaySeconds: 10
    periodSeconds: 15
    successThreshold: 1
    failureThreshold: 100
    timeoutSeconds: 10

  # secret env variables
  secretExtraEnvironmentVars: {}
  # WEED_POSTGRES_USERNAME:
  #   secretKeyRef:
  #     name: postgres-credentials
  #     key: username
  # WEED_POSTGRES_PASSWORD:
  #   secretKeyRef:
  #     name: postgres-credentials
  #     key: password
  s3:
    enabled: false
    port: 8333
    # add additional https port
    httpsPort: 0
    # allow empty folders
    allowEmptyFolder: false
    # Suffix of the host name, {bucket}.{domainName}
    domainName: ""
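    # For example (illustrative): with domainName: "s3.example.com", a bucket named
    # "media" would be addressed as media.s3.example.com.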
    # enable user & permission to s3 (need to inject to all services)
    enableAuth: false
    # set to the name of an existing kubernetes Secret with the s3 json config file;
    # it should have a secret key called seaweedfs_s3_config with an inline json config
    existingConfigSecret: null
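    # Rough sketch of such a Secret, assuming the usual identity-based json layout;
    # the secret name "my-s3-config" is hypothetical, the key name comes from the
    # comment above:
    #   apiVersion: v1
    #   kind: Secret
    #   metadata:
    #     name: my-s3-config
    #   stringData:
    #     seaweedfs_s3_config: |
    #       {"identities":[{"name":"admin","credentials":[{"accessKey":"<access>","secretKey":"<secret>"}],"actions":["Admin","Read","Write","List"]}]}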
    auditLogConfig: {}
    # You may specify buckets to be created during the install process.
    # Buckets may be exposed publicly by setting `anonymousRead` to `true`
    # createBuckets:
    #   - name: bucket-a
    #     anonymousRead: true
    #   - name: bucket-b
    #     anonymousRead: false
s3:
  enabled: false
  repository: null
  imageName: null
  imageTag: null
  restartPolicy: null
  replicas: 1
  bindAddress: 0.0.0.0
  port: 8333
  # add additional https port
  httpsPort: 0
  metricsPort: 9327
  loggingOverrideLevel: null
  # allow empty folders
  allowEmptyFolder: true
  # enable user & permission to s3 (need to inject to all services)
  enableAuth: false
  # set to the name of an existing kubernetes Secret with the s3 json config file;
  # it should have a secret key called seaweedfs_s3_config with an inline json config
  existingConfigSecret: null
  auditLogConfig: {}
  # Suffix of the host name, {bucket}.{domainName}
  domainName: ""

  ## @param s3.sidecars Add additional sidecar containers to the s3 pod(s)
  ## e.g:
  ## sidecars:
  ##   - name: your-image-name
  ##     image: your-image
  ##     imagePullPolicy: Always
  ##     ports:
  ##       - name: portname
  ##         containerPort: 1234
  ##
  sidecars: []

  initContainers: ""

  extraVolumes: ""
  extraVolumeMounts: ""

  # Labels to be added to the s3 pods
  podLabels: {}

  # Annotations to be added to the s3 pods
  podAnnotations: {}

  # Resource requests, limits, etc. for the server cluster placement. This
  # should map directly to the value of the resources field for a PodSpec,
  # formatted as a multi-line string. By default no direct resource request
  # is made.
  resources: {}

  # Toleration Settings for server pods
  # This should be a multi-line string matching the Toleration array
  # in a PodSpec.
  tolerations: ""

  # nodeSelector labels for server pod assignment, formatted as a multi-line string.
  # ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
  # Example:
  nodeSelector: |
    kubernetes.io/arch: amd64
  # nodeSelector: |
  #   sw-backend: "true"

  # used to assign priority to server pods
  # ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
  priorityClassName: ""

  # used to assign a service account.
  # ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
  serviceAccountName: ""

  # Configure security context for Pod
  # ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
  # Example:
  # podSecurityContext:
  #   enabled: true
  #   runAsUser: 1000
  #   runAsGroup: 3000
  #   fsGroup: 2000
  podSecurityContext: {}

  # Configure security context for Container
  # ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
  # Example:
  # containerSecurityContext:
  #   enabled: true
  #   runAsUser: 2000
  #   allowPrivilegeEscalation: false
  containerSecurityContext: {}

  # You can also use emptyDir storage:
  # logs:
  #   type: "emptyDir"
  logs:
    type: "hostPath"
    size: ""
    storageClass: ""
    hostPathPrefix: /storage

  extraEnvironmentVars:

  # used to configure livenessProbe on s3 containers
  #
  livenessProbe:
    enabled: true
    httpGet:
      path: /status
      scheme: HTTP
    initialDelaySeconds: 20
    periodSeconds: 60
    successThreshold: 1
    failureThreshold: 20
    timeoutSeconds: 10

  # used to configure readinessProbe on s3 containers
  #
  readinessProbe:
    enabled: true
    httpGet:
      path: /status
      scheme: HTTP
    initialDelaySeconds: 15
    periodSeconds: 15
    successThreshold: 1
    failureThreshold: 100
    timeoutSeconds: 10

  ingress:
    enabled: false
    className: "nginx"
    # host: false for "*" hostname
    host: "seaweedfs.cluster.local"
    # additional ingress annotations for the s3 endpoint
    annotations: {}
    tls: []
certificates:
  commonName: "SeaweedFS CA"
  ipAddresses: []
  keyAlgorithm: rsa
  keySize: 2048
  duration: 2160h # 90d
  renewBefore: 360h # 15d

externalCertificates:
  # Enabling this avoids the need for cert-manager and relies on you providing your own
  # external certificates and CA. You will need to store the provided certificates in the
  # secrets read by the different services: seaweedfs-master-cert, seaweedfs-filer-cert,
  # etc. See any statefulset definition for the secret names.
  enabled: false

# Labels to be added to all the created pods
podLabels: {}

# Annotations to be added to all the created pods
podAnnotations: {}