Support configMapRef or SecretRef for volumeAttributes.source and volumeAttributes.subDir #890
Comments
@aDisplayName subDir belongs to source, so actually you don't need two separate secret-name and secret-namespace settings.
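(For context, a minimal sketch of how source and subDir sit together in a volume's volumeAttributes; the share path, sub-directory, and secret name below are placeholders, not values from this issue:)

```yaml
  csi:
    driver: smb.csi.k8s.io
    volumeHandle: unique-volume-id                 # placeholder
    volumeAttributes:
      source: //smb-server.example.com/share       # the share itself (placeholder)
      subDir: per-device-folder                    # resolved relative to source (placeholder)
    nodeStageSecretRef:
      name: network-folder-secret
      namespace: default
```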
@andyzhangx , thank you for the info. If I understand correctly, you are saying we can use the following manifest to achieve the said purpose:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: smb
provisioner: smb.csi.k8s.io
parameters:
  # source is not defined, but referenced in csi.storage.k8s.io/provisioner-secret-name
  csi.storage.k8s.io/provisioner-secret-name: pv-provision-secret
  csi.storage.k8s.io/provisioner-secret-namespace: {{ .Release.Namespace }}
  csi.storage.k8s.io/node-stage-secret-name: network-folder-secret
  csi.storage.k8s.io/node-stage-secret-namespace: {{ .Release.Namespace }}
reclaimPolicy: Delete # available values: Delete, Retain
volumeBindingMode: Immediate
allowVolumeExpansion: true
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=1001
  - gid=1001
```

How exactly do we specify the value of the source in that secret? For example:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: pv-provision-secret
type: Opaque
stringData:  # plaintext value, so stringData rather than data (which requires base64)
  <key-name>: '//server-in-a-secure-location-only-accessible-on-edge/sharename' # What should the '<key-name>' be?
```
I have implemented the inline volume feature in the master branch now; will that solve the issue? In the pod config, you could specify the volume as in csi-driver-smb/deploy/example/nginx-pod-smb-inline-volume.yaml (lines 22 to 27 at commit 613018d).
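(A minimal sketch of what such an inline CSI volume looks like in a pod spec; the share path and secret name below are placeholders, not the exact contents of the referenced lines:)

```yaml
  volumes:
    - name: smb-share
      csi:
        driver: smb.csi.k8s.io
        volumeAttributes:
          source: //smb-server.example.com/share   # placeholder share path
        nodePublishSecretRef:
          name: smbcreds                           # placeholder secret holding username/password
```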
We are not able to hardcode csi.volumeAttributes.source, nor can we provide the value via Helm values, as that information (both the source share and the credentials) is saved in Key Vault and is not available to the Helm chart. The only way the value can reach the cluster is through the secret that gets deployed there. As a result, even using the inline volume feature does not help, since the source still has to be written literally into the pod config. If csi.volumeAttributes.source could be retrieved from a secret, for example via a secretRef, that would solve our problem.
Is your feature request related to a problem?/Why is this needed
We are using csi-driver-smb to allow workloads to access data from a local shared folder on the edge side, in different locations. Each deployment on an edge device needs to connect to a different local network shared folder with a different name.
We are using HashiCorp Vault to manage the per-device settings via a secret for each edge device, and use the external-secrets controller to deploy the secret via its CRD. The secret includes the network share source and the access credentials (the values vary on each edge device).
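(A sketch of such a per-device secret, assuming the username/password keys commonly used by the SMB CSI driver's node-stage secret plus a source key; all key names and values here are illustrative placeholders:)

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: network-folder-secret
type: Opaque
stringData:
  source: //edge-site-42.local/sharename   # differs per edge device (placeholder)
  username: edge-user                       # placeholder credentials
  password: edge-password
```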
We are deploying the managed workload to each edge device using Rancher Fleet. The Rancher Fleet GitRepo does not have access to HashiCorp Vault; the Fleet bundle only knows the name of the secret carrying the values of the source and credentials.
To extract the value of the source from the secret object into the pv.yaml manifest, we could use the lookup function in Helm templating, if the secret were already available prior to the fleet bundle deployment, for example:
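(A sketch of such a lookup, assuming the secret stores the share path under a key named source:)

```yaml
# pv.yaml (Helm template) -- lookup-based approach; the key name "source" is an assumption
{{- $secret := lookup "v1" "Secret" .Release.Namespace "network-folder-secret" }}
apiVersion: v1
kind: PersistentVolume
metadata:
  name: smb-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  csi:
    driver: smb.csi.k8s.io
    volumeHandle: smb-pv-handle                    # placeholder
    volumeAttributes:
      {{- if $secret }}
      source: {{ index $secret.data "source" | b64dec | quote }}
      {{- end }}
    nodeStageSecretRef:
      name: network-folder-secret
      namespace: {{ .Release.Namespace }}
```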
But that is not the case in our scenario: we only deploy the secret via the external-secrets CRD within the same fleet bundle, so Helm templating with the lookup function will never be able to get the correct info, because the secret network-folder-secret will not be available until AFTER the Helm app is deployed.

Describe the solution you'd like in detail
We would like to be able to use a secretRef or configMapRef to specify the network share source and the network share subdir, so that we don't have to rely on the Helm templating lookup function, which cannot work when the secret is not yet available. In the following example, we propose adding more attributes.
Here is the example, in terms of the storage class.
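(The original example manifest was not captured in this thread; the sketch below is a hypothetical illustration of the idea. The sourceSecretRef/subDirSecretRef parameter names are invented placeholders, not attributes the driver currently supports:)

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: smb
provisioner: smb.csi.k8s.io
parameters:
  # hypothetical: resolve 'source' from a key in an existing Secret instead of hardcoding it
  sourceSecretRef.name: network-folder-secret
  sourceSecretRef.namespace: {{ .Release.Namespace }}
  sourceSecretRef.key: source
  # hypothetical: resolve 'subDir' the same way
  subDirSecretRef.name: network-folder-secret
  subDirSecretRef.namespace: {{ .Release.Namespace }}
  subDirSecretRef.key: subDir
  csi.storage.k8s.io/node-stage-secret-name: network-folder-secret
  csi.storage.k8s.io/node-stage-secret-namespace: {{ .Release.Namespace }}
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
mountOptions:
  - dir_mode=0777
  - file_mode=0777
```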
Describe alternatives you've considered
Additional context