diff --git a/README.md b/README.md
index 2fddb41..5c67630 100644
--- a/README.md
+++ b/README.md
@@ -1 +1,79 @@
-# Scality COSI driver
+# Scality COSI Driver
+
+The **Scality COSI Driver** integrates Scality RING Object Storage with Kubernetes, leveraging the Kubernetes Container Object Storage Interface (COSI) to enable seamless object storage provisioning and management. This repository provides all necessary resources to deploy, use, and contribute to the Scality COSI Driver.
+
+---
+
+## Features
+
+| Category                | Feature                        | Notes                                                                                          |
+|-------------------------|--------------------------------|------------------------------------------------------------------------------------------------|
+| **Bucket Provisioning** | Greenfield bucket provisioning | Creates a new S3 bucket with default settings.                                                   |
+|                         | Brownfield bucket provisioning | Leverages an existing bucket in S3 storage within Kubernetes workflows.                          |
+|                         | Delete Bucket                  | Deletes an S3 bucket, but only if it is empty.                                                   |
+| **Access Management**   | Grant Bucket Access            | Provides full access to a bucket by creating new IAM credentials with access and secret keys.    |
+|                         | Revoke Bucket Access           | Removes access by deleting the IAM credentials associated with the bucket.                       |
+
+---
+
+## Getting Started
+
+### Installation
+
+Use the [Quickstart](#quickstart-guide) or follow the [installation guide](docs/installation/install-helm.md) to deploy the Scality COSI Driver using Helm.
+
+### Quickstart Guide
+
+To quickly deploy and test the Scality COSI Driver:
+
+1. Ensure your Kubernetes cluster is properly configured and [Helm v3+ is installed](https://helm.sh/docs/intro/install/). The COSI specification was introduced in Kubernetes 1.25. We recommend using [one of the latest supported Kubernetes versions](https://kubernetes.io/releases/).
+2. Create the `container-object-storage-system` namespace and install the COSI controller deployment and COSI CRDs:
+
+   ```bash
+   kubectl create -k github.com/kubernetes-sigs/container-object-storage-interface
+   ```
+
+3. Deploy the driver (the `container-object-storage-system` namespace was created in step 2):
+
+   ```bash
+   helm install scality-cosi-driver oci://ghcr.io/scality/cosi-driver/helm-charts/scality-cosi-driver \
+      --namespace container-object-storage-system
+   ```
+
+4. Verify the deployment:
+
+   ```bash
+   kubectl get pods -n container-object-storage-system
+   ```
+
+   There should be 2 pods `Running` in the `container-object-storage-system` namespace:
+
+   ```sh
+   $ kubectl get pods -n container-object-storage-system
+   NAME                                                   READY   STATUS    RESTARTS   AGE
+   container-object-storage-controller-7f9f89fd45-h7jtn   1/1     Running   0          25h
+   scality-cosi-driver-67d96bf8ff-9f59l                   2/2     Running   0          20h
+   ```
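+
+If the pods do not reach `Running`, confirm that the CRDs from step 2 are registered. A quick check (this assumes the default CRD API group used by the upstream COSI project):
+
+```sh
+kubectl get crds | grep objectstorage.k8s.io
+```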
+
+To learn how to use the COSI driver, refer to the [Usage documentation](./docs/Usage.md).
+
+---
+
+## Documentation
+
+The following sections provide detailed documentation for deploying, configuring, and developing with the Scality COSI Driver:
+
+- **[Installation Guide](docs/installation/install-helm.md):** Step-by-step instructions for deploying the driver.
+- **[Driver Parameters](docs/driver-params.md):** Configuration options for bucket classes and access credentials.
+- **[Metrics Overview](docs/metrics-overview.md):** Prometheus metrics exposed by the driver.
+- **[Feature Usage](docs/Usage.md):** Detailed guides on bucket provisioning and access control with the COSI driver.
+- **[Development Documentation](docs/development):**
+  - [Dev Container Setup](docs/development/dev-container-setup.md)
+  - [Remote Debugging](docs/development/remote-debugging-golang-on-kubernetes.md)
+  - [Running Locally](docs/development/run-cosi-driver-locally.md)
+
+---
+
+## Support
+
+For issues, please create a ticket in the [GitHub Issues](https://github.com/scality/cosi-driver/issues) section.
diff --git a/docs/Usage.md b/docs/Usage.md
new file mode 100644
index 0000000..f9d3f09
--- /dev/null
+++ b/docs/Usage.md
@@ -0,0 +1,325 @@
+# Scality COSI Driver Usage Guide: Bucket Provisioning & Access Control
+
+This document provides an overview and step-by-step guidance for implementing **Bucket Provisioning** (both Greenfield and Brownfield) and **Access Control** using the Scality COSI Driver. Example YAML manifests can be found in the [cosi-examples](../cosi-examples/) folder.
+
+> [!NOTE]
+> The Scality COSI Driver supports AWS S3- and IAM-compatible storage solutions, including Scality RING, Scality ARTESCA, and AWS S3/IAM.
+
+## Prerequisites
+
+Before proceeding, ensure that the following components are installed on your cluster:
+
+1. **Kubernetes and Helm**: Ensure your Kubernetes cluster is properly configured and [Helm v3+ is installed](https://helm.sh/docs/intro/install/). The COSI specification was introduced in Kubernetes 1.25. We recommend using [one of the latest supported Kubernetes versions](https://kubernetes.io/releases/).
+2. **Kubernetes Container Object Storage Interface (COSI) CRDs**
+3. **Container Object Storage Interface Controller**
+
+Refer to the quick start guide in the [README](../README.md#quickstart-guide) for installation instructions.
+
+### Common Setup Steps
+
+1. **Create the IAM User (by the Storage Administrator)**
+
+   Create an IAM user and a pair of Access Key ID and Secret Access Key. This user will be used by the COSI driver. Assign S3/IAM permissions that allow bucket creation and user management (see the example policy after these steps). Permissions needed by the COSI driver:
+   - `s3:CreateBucket`
+   - `s3:DeleteBucket`
+   - `iam:GetUser`
+   - `iam:CreateUser`
+   - `iam:DeleteUser`
+   - `iam:PutUserPolicy`
+   - `iam:DeleteUserPolicy`
+   - `iam:ListAccessKeys`
+   - `iam:CreateAccessKey`
+   - `iam:DeleteAccessKey`
+
+2. **Collect Access & Endpoint Details**
+
+   The Storage Administrator provides the following details to the Kubernetes Administrator:
+
+   - S3 endpoint (and IAM endpoint, if different)
+   - Region
+   - Access Key ID & Secret Key
+   - `tlsCert`, if needed. If no certificate is provided, the COSI driver uses the AWS setting `InsecureSkipVerify` for HTTPS endpoints.
+
+3. **Create a Kubernetes Secret (by the Kubernetes Administrator)**
+
+   The Kubernetes Administrator creates a secret containing the above credentials and configuration details:
+
+   ```bash
+   cat <<EOF | kubectl apply -f -
+   apiVersion: v1
+   kind: Secret
+   metadata:
+     name: s3-secret-for-cosi
+     namespace: default
+   type: Opaque
+   stringData:
+     accessKeyId: <ACCESS-KEY-ID>
+     secretAccessKey: <SECRET-ACCESS-KEY>
+     endpoint: <S3-ENDPOINT>
+     region: <REGION>
+     iamEndpoint: <IAM-ENDPOINT>
+     tlsCert: |-
+       -----BEGIN CERTIFICATE-----
+       ...
+       -----END CERTIFICATE-----
+   EOF
+   ```
+
+   > [!NOTE]
+   > Update `<ACCESS-KEY-ID>`, `<SECRET-ACCESS-KEY>`, `<S3-ENDPOINT>`, and `<REGION>` with valid values for your environment. If your endpoint does not require a TLS certificate, you can remove `tlsCert`. Similarly, set `iamEndpoint` only if it differs from the S3 endpoint; otherwise remove it.
+   > If using a TLS certificate, include the certificate content (PEM-encoded) in the `stringData` section of the Secret. Use a multi-line block scalar (`|-`) in YAML so that the certificate (with newlines) is preserved correctly.
+   > For HTTPS endpoints without a TLS certificate, the COSI driver will use the `InsecureSkipVerify` flag.
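+
+The permissions listed in step 1 can be granted as an inline user policy. Below is a minimal sketch using the AWS CLI; the user name `cosi-provisioner`, the policy name, and the broad `"Resource": "*"` are illustrative assumptions, so scope them to your environment:
+
+```bash
+# Attach an inline policy covering the S3/IAM actions the driver needs.
+aws iam put-user-policy \
+  --user-name cosi-provisioner \
+  --policy-name cosi-driver-permissions \
+  --endpoint-url <IAM-ENDPOINT> \
+  --policy-document '{
+    "Version": "2012-10-17",
+    "Statement": [{
+      "Effect": "Allow",
+      "Action": [
+        "s3:CreateBucket", "s3:DeleteBucket",
+        "iam:GetUser", "iam:CreateUser", "iam:DeleteUser",
+        "iam:PutUserPolicy", "iam:DeleteUserPolicy",
+        "iam:ListAccessKeys", "iam:CreateAccessKey", "iam:DeleteAccessKey"
+      ],
+      "Resource": "*"
+    }]
+  }'
+```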
+
+---
+
+## 1. Bucket Provisioning
+
+In the **Scality COSI Driver**, both **Greenfield** and **Brownfield** provisioning share similar steps, with minor differences in how resources (Bucket, BucketClaim) are created.
+
+> Note:
+> For **fully working** examples, see the YAMLs in the [cosi-examples/brownfield](../cosi-examples/brownfield/) and [cosi-examples/greenfield](../cosi-examples/greenfield/) directories.
+> For the brownfield scenario, it is mandatory to create the COSI CRs in the same namespace as the COSI driver and controller.
+
+### 1.1 Greenfield: Creating a New Bucket
+
+Greenfield provisioning creates a brand-new S3 bucket in your object store, managed by Kubernetes. Examples can be found [here](../cosi-examples/greenfield/).
+
+1. **Create a BucketClass**
+   A `BucketClass` defines how buckets should be provisioned or deleted. COSI uses the bucket class name as a prefix for the bucket name:
+
+   ```bash
+   cat <<EOF | kubectl apply -f -
+   apiVersion: objectstorage.k8s.io/v1alpha1
+   kind: BucketClass
+   metadata:
+     name: my-greenfield-bucketclass
+   driverName: cosi.scality.com
+   deletionPolicy: Delete
+   parameters:
+     objectStorageSecretName: s3-secret-for-cosi
+     objectStorageSecretNamespace: default
+   EOF
+   ```
+
+2. **Create a BucketClaim**
+   A `BucketClaim` requests a new bucket from the `BucketClass`:
+
+   ```bash
+   cat <<EOF | kubectl apply -f -
+   apiVersion: objectstorage.k8s.io/v1alpha1
+   kind: BucketClaim
+   metadata:
+     name: my-greenfield-bucketclaim
+     namespace: default
+   spec:
+     bucketClassName: my-greenfield-bucketclass
+     protocols:
+       - S3
+   EOF
+   ```
+
+   - `deletionPolicy` in the `BucketClass` controls whether the bucket is removed (`Delete`) or kept (`Retain`) when the `BucketClaim` is deleted.
+   - COSI generates the bucket name from the bucket class name and a random suffix (`<bucket-class-name>-<random-suffix>`).
+   - Only the `S3` protocol is supported at the moment.
+
+### 1.2 Brownfield: Using an Existing Bucket
+
+Brownfield provisioning allows you to manage an **already-existing** S3 bucket in Kubernetes.
+
+> Note: For the brownfield scenario, the COSI CRs for bucket and access provisioning must be created in the same namespace as the COSI driver and controller.
+
+1. **Verify Existing Bucket**
+
+   Ensure the bucket already exists in S3, either by checking with the Storage Administrator or by running the following AWS CLI command:
+
+   ```bash
+   aws s3api head-bucket --bucket <bucket-name> --endpoint-url <s3-endpoint>
+   ```
+
+2. **Create a BucketClass**
+
+   Similar to Greenfield, but you will typically still reference the same secret:
+
+   ```bash
+   cat <<EOF | kubectl apply -f -
+   apiVersion: objectstorage.k8s.io/v1alpha1
+   kind: BucketClass
+   metadata:
+     name: brownfield-bucketclass
+   driverName: cosi.scality.com
+   deletionPolicy: Retain
+   parameters:
+     objectStorageSecretName: s3-secret-for-cosi
+     objectStorageSecretNamespace: default
+   EOF
+   ```
+
+   > [!NOTE]
+   > Existing buckets imported using the steps below do not follow the `deletionPolicy`, even if it is set to `Delete`. All buckets created from this bucket class in a greenfield scenario will still respect the `deletionPolicy`.
+
+3. **Create the Bucket Resource/Instance**
+   This is where we tell Kubernetes about the existing bucket:
+
+   ```bash
+   cat <<EOF | kubectl apply -f -
+   apiVersion: objectstorage.k8s.io/v1alpha1
+   kind: Bucket
+   metadata:
+     name: "<existing-bucket-name>"
+     namespace: container-object-storage-system
+   spec:
+     bucketClaim: {}
+     bucketClassName: brownfield-bucketclass
+     driverName: cosi.scality.com
+     deletionPolicy: Retain
+     existingBucketID: "<existing-bucket-name>"
+     parameters:
+       objectStorageSecretName: s3-secret-for-cosi
+       objectStorageSecretNamespace: default
+     protocols:
+       - S3
+   EOF
+   ```
+
+   - `name` and `existingBucketID` should be the same as the existing bucket name in S3 storage.
+
+4. **Create the BucketClaim**
+   Reference the existing `Bucket` resource/instance by name via `existingBucketName`:
+
+   ```bash
+   cat <<EOF | kubectl apply -f -
+   apiVersion: objectstorage.k8s.io/v1alpha1
+   kind: BucketClaim
+   metadata:
+     name: my-brownfield-bucketclaim
+     namespace: container-object-storage-system
+   spec:
+     bucketClassName: brownfield-bucketclass
+     existingBucketName: "<existing-bucket-name>"
+     protocols:
+       - S3
+   EOF
+   ```
+
+   - `existingBucketName` should match the `name` of the `Bucket` resource/instance created in the previous step.
+
+### Bucket Provisioning Cleanup
+
+To remove the buckets and associated Kubernetes resources:
+
+- **Greenfield**:
+
+  ```bash
+  kubectl delete bucketclaim my-greenfield-bucketclaim
+  ```
+
+  - Deleting the `BucketClaim` will remove the underlying bucket only if:
+    - `deletionPolicy` was set to `Delete` in the `BucketClass`.
+    - The bucket is empty at the time of deletion.
+
+- **Brownfield**:
+
+  ```bash
+  kubectl delete bucketclaim my-brownfield-bucketclaim
+  ```
+
+  - Deleting the `BucketClaim` and `Bucket` resources in Kubernetes **does not** delete the actual pre-existing bucket in S3, even if `deletionPolicy` is `Delete`.
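+
+At any point you can inspect the provisioning state of the COSI resources before cleaning up. A quick check (resource names follow the examples above):
+
+```bash
+# BucketClaims are namespaced; Buckets are managed by the driver/controller.
+kubectl get bucketclaims --all-namespaces
+kubectl get buckets
+```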
+
+---
+
+## 2. Access Control (Common to Greenfield & Brownfield)
+
+Access Control configuration is effectively the same for both Greenfield and Brownfield. Once the `BucketClaim` is ready, you can request credentials for the bucket via a `BucketAccess` resource.
+
+### 2.1 Create a BucketAccessClass
+
+A `BucketAccessClass` defines how access (IAM policy or S3 keys) is granted:
+
+```bash
+cat <<EOF | kubectl apply -f -
+apiVersion: objectstorage.k8s.io/v1alpha1
+kind: BucketAccessClass
+metadata:
+  name: my-bucketaccessclass
+driverName: cosi.scality.com
+authenticationType: Key
+parameters:
+  objectStorageSecretName: s3-secret-for-cosi
+  objectStorageSecretNamespace: default
+EOF
+```
+
+> [!NOTE]
+>
+> - `authenticationType` is `Key` for basic S3 key-based credentials.
+> - `objectStorageSecretName` and `objectStorageSecretNamespace` reference the secret you created earlier in [Common Setup Steps](#common-setup-steps).
+
+### 2.2 Request Bucket Access
+
+Once the `BucketClaim` is bound (Greenfield or Brownfield), create a `BucketAccess` to generate a credential secret in the cluster:
+
+```bash
+cat <<EOF | kubectl apply -f -
+apiVersion: objectstorage.k8s.io/v1alpha1
+kind: BucketAccess
+metadata:
+  name: my-bucketaccess
+  namespace: default
+spec:
+  bucketAccessClassName: my-bucketaccessclass
+  bucketClaimName: my-greenfield-bucketclaim
+  credentialsSecretName: my-s3-credentials
+  protocol: S3
+EOF
+```
+
+When access is granted, the driver creates IAM credentials for the bucket and stores them in the secret named by `credentialsSecretName`, which workloads can then consume. See the [cosi-examples](../cosi-examples/) folder for fully working manifests.
diff --git a/docs/driver-params.md b/docs/driver-params.md
--- a/docs/driver-params.md
+++ b/docs/driver-params.md
-| **Parameter**                  | **Description**                                                | **Default Value**                | **Required** |
-|--------------------------------|----------------------------------------------------------------|----------------------------------|--------------|
-| `driver-address`               | The socket file address for the COSI driver.                   | `unix:///var/lib/cosi/cosi.sock` | Yes          |
-| `driver-prefix`                | The prefix for the COSI driver (e.g., `<prefix>.scality.com`). | `cosi`                           | No           |
-| `driver-metrics-address`       | The address to expose Prometheus metrics.                      | `:8080`                          | No           |
-| `driver-metrics-path`          | The HTTP path for exposing metrics.                            | `/metrics`                       | No           |
-| `driver-custom-metrics-prefix` | The prefix for metrics collected by the COSI driver.           | `scality_cosi_driver`            | No           |
+| **Parameter**                  | **Description**                                                                            | **Default Value**                    | **Required** |
+|--------------------------------|--------------------------------------------------------------------------------------------|--------------------------------------|--------------|
+| `driver-address`               | The socket file address for the COSI driver.                                                | `unix:///var/lib/cosi/cosi.sock`     | Yes          |
+| `driver-prefix`                | The prefix for the COSI driver (e.g., `<prefix>.scality.com`).                              | `cosi`                               | No           |
+| `driver-metrics-address`       | The address (hostname:port) for exposing Prometheus metrics.                                | `:8080`                              | No           |
+| `driver-metrics-path`          | The HTTP path for exposing metrics.                                                         | `/metrics`                           | No           |
+| `driver-custom-metrics-prefix` | The prefix for metrics collected by the COSI driver.                                        | `scality_cosi_driver`                | No           |
+| `driver-otel-endpoint`         | The OpenTelemetry (OTEL) endpoint for exporting traces (if `driver-otel-stdout` is false).  | `""` (empty string disables tracing) | No           |
+| `driver-otel-stdout`           | Enable OpenTelemetry trace export to stdout. Disables the OTEL endpoint if set to `true`.   | `false`                              | No           |
+| `driver-otel-service-name`     | The service name reported in OpenTelemetry traces.                                          | `cosi.scality.com`                   | No           |
+
+For Helm deployments, these parameters can be set in the [values.yaml](../helm/scality-cosi-driver/values.yaml) file or passed as flags during installation.
+
+## Notes on OpenTelemetry Parameters
+
+- **`driver-otel-endpoint`**:
+  Use this to specify an OTEL collector endpoint such as `otel-collector.local:4318`.
+  If `driver-otel-stdout` is set to `true`, this endpoint is ignored.
+
+- **`driver-otel-stdout`**:
+  If set, trace data is printed to stdout in addition to any logging.
+  This is useful for local debugging but should generally be disabled in production.
+
+- **`driver-otel-service-name`**:
+  Defines how the service is labeled in OTEL-based observability platforms (e.g., Jaeger).
 
 ### Notes
 
-- If driver-metrics-path does not start with /, it will automatically prepend /.
+- If `driver-metrics-path` does not end with `/`, it will automatically append `/`.
 - Prometheus metrics are exposed for monitoring at the address and path specified.
+- Generation of traces is disabled by default. To enable tracing, set `driver-otel-endpoint` to the desired OTEL collector endpoint, or set `driver-otel-stdout` to `true` to print traces to stdout.
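+
+To spot-check the metrics endpoint using the defaults above, port-forward to the driver and scrape it. This is a sketch; the deployment name and namespace follow the Helm installation examples in this repository:
+
+```bash
+# Forward the default metrics port, then look for the driver's metric prefix.
+kubectl port-forward deploy/scality-cosi-driver 8080:8080 \
+  --namespace container-object-storage-system &
+curl -s http://localhost:8080/metrics | grep scality_cosi_driver
+```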
diff --git a/docs/installation/install-helm.md b/docs/installation/install-helm.md
index a4e4a6b..c810cf9 100644
--- a/docs/installation/install-helm.md
+++ b/docs/installation/install-helm.md
@@ -4,22 +4,6 @@ This guide provides step-by-step instructions for installing the Scality COSI Dr
 
 ---
 
-## Table of Contents
-
-- [Installing the Scality COSI Driver with Helm](#installing-the-scality-cosi-driver-with-helm)
-  - [Table of Contents](#table-of-contents)
-  - [Prerequisites](#prerequisites)
-  - [Installation Methods](#installation-methods)
-    - [Install locally without helm package](#install-locally-without-helm-package)
-    - [Package locally and install](#package-locally-and-install)
-    - [Install from OCI Registry with Helm](#install-from-oci-registry-with-helm)
-  - [Verifying the Installation](#verifying-the-installation)
-  - [Uninstalling the Chart](#uninstalling-the-chart)
-  - [Troubleshooting](#troubleshooting)
-  - [Additional Resources](#additional-resources)
-
----
-
 ## Prerequisites
 
 - **Kubernetes Cluster**: Ensure you have access to a running Kubernetes cluster (v1.23 or later).
@@ -31,12 +15,21 @@
 
 ## Installation Methods
 
+You can install the Scality COSI Driver using Helm in multiple ways. Choose the method that best suits your environment and requirements.
+It is recommended to deploy the COSI controller first, which creates the `container-object-storage-system` namespace, and then install the COSI driver. If the namespace has not been created, the COSI driver installation will fail; use the `--create-namespace` flag to create the namespace if it does not exist.
+
+### Deploy COSI controller and related CRDs
+
+```bash
+kubectl create -k github.com/kubernetes-sigs/container-object-storage-interface
+```
+
 ### Install locally without helm package
 
   ```bash
   git clone https://github.com/scality/cosi-driver.git
   cd cosi-driver
-  helm install scality-cosi-driver ./helm/scality-cosi-driver --namespace container-object-storage-system --create-namespace --set image.tag=0.1.0
+  helm install scality-cosi-driver ./helm/scality-cosi-driver --namespace container-object-storage-system --create-namespace --set image.tag=1.0.0
  ```
 
 ### Package locally and install
 
  ```bash
  git clone https://github.com/scality/cosi-driver.git
  cd cosi-driver
-  helm package ./helm/scality-cosi-driver --version 0.1.0
-  helm install scality-cosi-driver ./scality-cosi-driver-0.1.0.tgz --namespace container-object-storage-system --create-namespace --set image.tag=0.1.0
+  helm package ./helm/scality-cosi-driver --version 1.0.0
+  helm install scality-cosi-driver ./scality-cosi-driver-1.0.0.tgz --namespace container-object-storage-system --create-namespace --set image.tag=1.0.0
  ```
 
 ### Install from OCI Registry with Helm
 
  ```bash
-  helm install scality-cosi-driver oci://ghcr.io/scality/cosi-driver/helm-charts/scality-cosi-driver --version 0.0.1 --namespace scality-cosi --create-namespace --set image.tag=0.1.0
+  helm install scality-cosi-driver oci://ghcr.io/scality/cosi-driver/helm-charts/scality-cosi-driver --namespace container-object-storage-system --create-namespace --set image.tag=1.0.0
  ```
 
 ---
 
@@ -63,29 +56,23 @@ After installing the chart using either method, verify that the Scality COSI Dri
 1. **Check the Pods in the Namespace**
 
    ```bash
-   kubectl get pods -n scality-cosi
+   kubectl get pods -n container-object-storage-system
    ```
 
-2. **Check the COSI Driver Registration**
+   You should see a pod for `scality-cosi-driver`.
 
-   ```bash
-   kubectl get csidrivers
-   ```
-
-   You should see an entry for `scality-cosi-driver`.
+2. **Check Logs and Deployment Events**
 
-3. **Describe the Deployment**
+   If there are issues, check the events of the Deployment for errors:
 
    ```bash
-   kubectl describe deployment scality-cosi-driver -n scality-cosi
+   kubectl describe deployment scality-cosi-driver --namespace container-object-storage-system
    ```
 
-4. **Check Logs**
-
-   If there are issues, check the logs of the driver pod:
+   If the pod is running, check the logs for any errors:
 
    ```bash
-   kubectl logs -l app.kubernetes.io/name=scality-cosi-driver -n scality-cosi
+   kubectl logs -l app.kubernetes.io/name=scality-cosi-driver --namespace container-object-storage-system
    ```
 
 ---
 
@@ -95,13 +82,7 @@
 To uninstall the Scality COSI Driver and remove all associated resources:
 
 ```bash
-helm uninstall scality-cosi-driver --namespace scality-cosi
-```
-
-Optionally, delete the namespace if it's no longer needed:
-
-```bash
-kubectl delete namespace scality-cosi
+helm uninstall scality-cosi-driver --namespace container-object-storage-system
 ```
 
 ---
 
@@ -113,9 +94,8 @@
 - **Network Issues**: Ensure your network allows access to the OCI registry.
 - **Resource Conflicts**: Check for existing resources that might conflict with the installation.
 - **Logs**: Always check the pod logs for error messages if the driver is not running as expected.
-- **Log in to the OCI Registry**: Log in to the `ghcr.io` using Helm: `helm registry login -u <username> -p <password> ghcr.io`
-- **Chart debuggeing**: View chart details using `helm show all oci://ghcr.io/scality/cosi-driver/helm-charts/scality-cosi-driver --version <version>`
-- Templating the chart**: To render the Helm templates and see the Kubernetes resources that will be created: `helm template scality-cosi-driver oci://ghcr.io/scality/cosi-driver/helm-charts/scality-cosi-driver --version <version>`
+- **Chart debugging**: View chart details using `helm show all oci://ghcr.io/scality/cosi-driver/helm-charts/scality-cosi-driver --version <version>`
+- **Templating the chart**: To render the Helm templates and see the Kubernetes resources that will be created: `helm template scality-cosi-driver oci://ghcr.io/scality/cosi-driver/helm-charts/scality-cosi-driver --version <version>`
 
 ---
 
@@ -132,5 +112,3 @@ When a new release of the Scality COSI Driver is published, it includes:
 
 - A Docker image pushed to `ghcr.io/scality/cosi-driver:<version>`
 - A Helm chart available in the OCI registry `ghcr.io/scality/cosi-driver/helm-charts/scality-cosi-driver`
-
-**Note**: Always replace placeholders like `<username>`, `<password>`, and `<version>` with your actual credentials and desired versions.
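+
+When a new chart version is published, a running installation can be moved to it with `helm upgrade`. A sketch, assuming the release name and namespace used throughout this guide; replace `<version>` with the published chart version:
+
+```bash
+helm upgrade scality-cosi-driver \
+  oci://ghcr.io/scality/cosi-driver/helm-charts/scality-cosi-driver \
+  --version <version> \
+  --namespace container-object-storage-system
+```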
diff --git a/helm/scality-cosi-driver/Chart.yaml b/helm/scality-cosi-driver/Chart.yaml index 4abd59d..128a293 100644 --- a/helm/scality-cosi-driver/Chart.yaml +++ b/helm/scality-cosi-driver/Chart.yaml @@ -1,5 +1,5 @@ apiVersion: v2 name: scality-cosi-driver description: A Helm chart for deploying the Scality COSI Driver -version: 0.1.0-beta +version: 1.0.0 appVersion: "1.0" diff --git a/helm/scality-cosi-driver/templates/deployment.yaml b/helm/scality-cosi-driver/templates/deployment.yaml index e0b3336..583d922 100644 --- a/helm/scality-cosi-driver/templates/deployment.yaml +++ b/helm/scality-cosi-driver/templates/deployment.yaml @@ -2,6 +2,7 @@ apiVersion: apps/v1 kind: Deployment metadata: name: {{ include "scality-cosi-driver.fullname" . }} + namespace: {{ .Values.namespace }} labels: app.kubernetes.io/name: {{ include "scality-cosi-driver.name" . }} app.kubernetes.io/instance: {{ .Release.Name }} diff --git a/helm/scality-cosi-driver/values.yaml b/helm/scality-cosi-driver/values.yaml index 9fe636a..4244170 100644 --- a/helm/scality-cosi-driver/values.yaml +++ b/helm/scality-cosi-driver/values.yaml @@ -1,6 +1,6 @@ image: repository: ghcr.io/scality/cosi-driver - tag: latest + tag: 1.0.0 pullPolicy: IfNotPresent @@ -68,4 +68,4 @@ env: fieldPath: metadata.namespace -version: 0.1.0 +version: 1.0.0