
Support configMapRef or SecretRef for volumeAttributes.source and volumeAttributes.subDir #890

Open

aDisplayName opened this issue Dec 17, 2024 · 4 comments

aDisplayName commented Dec 17, 2024

Is your feature request related to a problem?/Why is this needed
We are using csi-driver-smb to let workloads access data from a local shared folder at the edge, in different locations. Each deployment on an edge device needs to connect to a different local network shared folder with a different name.

We are using HashiCorp Vault to manage the per-device settings as a secret, and use the external-secrets controller to deploy the secret via its CRD. The secret includes the following information (the values vary per edge device):

  • The network share source
  • The username and password of the network share credential
  • The subfolder of the network share

We deploy the managed workload to each edge device using Rancher Fleet. The Rancher Fleet GitRepo does not have access to the HashiCorp Vault, but the fleet bundle does know the name of the secret carrying the values of the source and credentials.

To extract the value of the source from the Secret object into the pv.yaml manifest, we could use the helm lookup function in helm templating, provided the secret already exists before the fleet bundle is deployed.

### By injecting an externalsecret.external-secrets.io object, a Secret will be created once by the external-secrets controller after the helm chart is deployed.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: network-folder-secret
spec:
  refreshInterval: "24h"
  secretStoreRef:
    name: cluster-secrets-vault-backend
    kind: ClusterSecretStore
  target:
    name: network-folder-secret
  dataFrom:
  - extract:
      key: vault/secret/locationid/1234
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: network-data-pv
spec:
  accessModes:
    - ReadOnlyMany
  capacity:
    storage: 200Gi
  persistentVolumeReclaimPolicy: Delete
  storageClassName: smb
  mountOptions:
    - dir_mode=0444
    - file_mode=0444
  csi:
    driver: smb.csi.k8s.io
    readOnly: true
    volumeHandle: town-ehc-data-extractor-jafolder-pv-handle
    volumeAttributes:  # The lookup will not succeed if the secret does not exist before the helm chart is being installed.
      {{- $namespace := .Release.Namespace }}
      {{- $secret := (lookup "v1" "Secret" $namespace "network-folder-secret") }}
      {{- if $secret }}
      source: {{ index $secret.data "share-source" | b64dec | quote }}
      {{- end }}
    nodeStageSecretRef:
      name: network-folder-secret
      namespace: {{ .Release.Namespace }}

But that is not the case in our scenario: we only deploy the secret via the external-secrets CRD within the same fleet bundle. Helm templating with the lookup function can therefore never retrieve the correct information, because the secret network-folder-secret will not exist until AFTER the helm app is deployed.

Describe the solution you'd like in detail
We would like to be able to use a secretRef or configMapRef to specify the network share source and subdirectory, so that we do not have to rely on the helm lookup function, which cannot help when the secret does not yet exist.

In the following example, we propose adding these attributes:

  • source-secret-name
  • source-secret-namespace
  • source-secret-key
  • source-configmap-name
  • source-configmap-namespace
  • source-configmap-key
  • subDir-secret-name
  • subDir-secret-namespace
  • subDir-secret-key
  • subDir-configmap-name
  • subDir-configmap-namespace
  • subDir-configmap-key

Here is an example, in terms of a storage class.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: smb
provisioner: smb.csi.k8s.io
parameters:
  source-secret-name: network-folder-secret
  source-secret-namespace: {{ .Release.Namespace }}
  source-secret-key: share-source # key of the source data from the secret referenced by source-secret-name
  # if csi.storage.k8s.io/provisioner-secret is provided, will create a sub directory
  # with PV name under source
  csi.storage.k8s.io/provisioner-secret-name: network-folder-secret
  csi.storage.k8s.io/provisioner-secret-namespace: {{ .Release.Namespace }}
  csi.storage.k8s.io/node-stage-secret-name: network-folder-secret
  csi.storage.k8s.io/node-stage-secret-namespace: {{ .Release.Namespace }}
reclaimPolicy: Delete  # available values: Delete, Retain
volumeBindingMode: Immediate
allowVolumeExpansion: true
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=1001
  - gid=1001
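
Under this proposal, the referenced secret might look like the following sketch. The key names `share-source` and `share-subdir` are illustrative, and the `source-secret-*`/`subDir-secret-*` attributes above are proposed, not implemented:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: network-folder-secret
type: Opaque
stringData:
  # key named by the proposed source-secret-key attribute (hypothetical)
  share-source: //edge-server.local/sharename
  # key named by the proposed subDir-secret-key attribute (hypothetical)
  share-subdir: site-1234
```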

Describe alternatives you've considered

Additional context

andyzhangx (Member) commented Dec 18, 2024

@aDisplayName csi.storage.k8s.io/provisioner-secret-name is equal to source-secret-name, and csi.storage.k8s.io/provisioner-secret-namespace is equal to source-secret-namespace in your case. Why do you need new source-xx-xx fields, and what is source-secret-key? We could not store the key as plain text in a storage class.

And subDir belongs to source, so you don't actually need two separate secret-name and secret-namespace parameters.

aDisplayName (Author) commented Dec 18, 2024

> @aDisplayName csi.storage.k8s.io/provisioner-secret-name is equal to source-secret-name and csi.storage.k8s.io/provisioner-secret-namespace is equal to source-secret-namespace in your case. why do you need a new source-xx-xx field, and what is source-secret-key, we could not store key as plain text in storage class.

@andyzhangx, thank you for the info. If I understand correctly, you are saying we can use the following manifest to achieve the stated purpose:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: smb
provisioner: smb.csi.k8s.io
parameters:
  # source is not defined, but referenced in csi.storage.k8s.io/provisioner-secret-name
  csi.storage.k8s.io/provisioner-secret-name: pv-provision-secret
  csi.storage.k8s.io/provisioner-secret-namespace: {{ .Release.Namespace }}
  csi.storage.k8s.io/node-stage-secret-name: network-folder-secret
  csi.storage.k8s.io/node-stage-secret-namespace: {{ .Release.Namespace }}
reclaimPolicy: Delete  # available values: Delete, Retain
volumeBindingMode: Immediate
allowVolumeExpansion: true
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=1001
  - gid=1001

How exactly does one specify the value of csi.volumeAttributes.source using the secret named by csi.storage.k8s.io/provisioner-secret-name? What should the key for that value be in such a secret?

apiVersion: v1
kind: Secret
metadata:
  name: pv-provision-secret
type: Opaque
stringData:
  <key-name>: '//server-in-a-secure-location-only-accessible-on-edge/sharename' # What should the '<key-name>' be?

andyzhangx (Member) commented
I have implemented inline volume feature in master branch now, will that solve the issue? in pod config, you could specify source, secretName to specify those info:

csi:
  driver: smb.csi.k8s.io
  volumeAttributes:
    source: //smb-server.default.svc.cluster.local/share  # required
    secretName: smbcreds  # required, secretNamespace is the same as the pod
    mountOptions: "dir_mode=0777,file_mode=0777,cache=strict,actimeo=30,nosharesock"  # optional
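
For context, a complete pod spec embedding such an inline volume might look like the following sketch. The pod name, image, and mount path are illustrative; the volume attributes come from the snippet above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: smb-inline-demo        # illustrative name
spec:
  containers:
    - name: app
      image: busybox           # illustrative image
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: share
          mountPath: /mnt/share
  volumes:
    - name: share
      csi:
        driver: smb.csi.k8s.io
        volumeAttributes:
          source: //smb-server.default.svc.cluster.local/share  # required
          secretName: smbcreds  # required; secretNamespace defaults to the pod's namespace
          mountOptions: "dir_mode=0777,file_mode=0777"          # optional
```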

aDisplayName (Author) commented

@andyzhangx

> I have implemented inline volume feature in master branch now, will that solve the issue? in pod config, you could specify source, secretName to specify those info:

  1. What should the properties in the smbcreds data be?

  2. Unfortunately, it does not solve our deployment problem when an external-secret object is used.

We are not able to hardcode csi.volumeAttributes.source, nor can we provide the value via helm values, because that information (both the source share and the credentials) is stored in the Key Vault and is not available to the helm chart.

In our case, the only way the value of source is retrieved is via the external-secrets operator, by deploying an externalsecrets.external-secrets.io CRD object. Based on that CRD, the operator creates or updates a normal Secret.

As a result, even the lookup template function in helm charts cannot retrieve this information, because the secret does not exist while the helm release templates are being rendered during installation.

If csi.volumeAttributes.source could be retrieved from a secret, for example via a source.fromSecret reference, then the following sequence of actions would happen:

| N | external-secret object | source secret object | persistent volume claim object |
|---|------------------------|----------------------|--------------------------------|
| 1 | deployed, not updated  | does not exist       | created, not updated, missing source secret (referenced by secretRef) |
| 2 | deployed, updated      | created by external-secret operator | created, not provisioned, missing source secret |
| 3 | deployed, updated      | created by external-secret operator | created, provisioned using the source from the source-secret-ref |
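
To make the request concrete, a PV using the requested secret-backed source might look like the following sketch. The `source-secret-*` attributes are the proposed syntax from this issue and do not exist in csi-driver-smb today; the names and namespace are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: network-data-pv
spec:
  accessModes: ["ReadOnlyMany"]
  capacity:
    storage: 200Gi
  storageClassName: smb
  csi:
    driver: smb.csi.k8s.io
    volumeHandle: network-data-pv-handle   # illustrative handle
    volumeAttributes:
      # hypothetical, proposed attributes -- not implemented by csi-driver-smb
      source-secret-name: network-folder-secret
      source-secret-namespace: default
      source-secret-key: share-source
    nodeStageSecretRef:
      name: network-folder-secret
      namespace: default
```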
