
Commit

incorporate with review changes
Signed-off-by: Anisur Rahman <[email protected]>
anisurrahman75 committed Mar 19, 2024
1 parent f6cc5cd commit 5b0ed8f
Showing 27 changed files with 251 additions and 188 deletions.
11 changes: 0 additions & 11 deletions docs/concepts/crds/_index.md

This file was deleted.

2 changes: 1 addition & 1 deletion docs/guides/backends/azure/index.md
@@ -37,7 +37,7 @@ secret/azure-secret created

### Create BackupStorage

Now, you have to create a `BackupStorage` crd. You have to provide the storage secret that we have created earlier in `spec.storage.azure.secretName` field.
Now, you have to create a `BackupStorage` CR. You have to specify the name of the storage secret that we created earlier in the `spec.storage.azure.secretName` field.
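As a minimal sketch, such a `BackupStorage` might look like the following (the storage account, container, and prefix values are placeholders; adjust them to your environment):

```yaml
apiVersion: storage.kubestash.com/v1alpha1
kind: BackupStorage
metadata:
  name: azure-storage
  namespace: demo
spec:
  storage:
    provider: azure
    azure:
      storageAccount: my-storage-account   # placeholder: your Azure storage account
      container: kubestash-backup          # placeholder: your blob container
      prefix: demo                         # optional sub-path inside the container
      secretName: azure-secret             # the storage secret created above
  usagePolicy:
    allowedNamespaces:
      from: Same
  deletionPolicy: Delete
```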

Following parameters are available for `azure` backend.

2 changes: 1 addition & 1 deletion docs/guides/backends/gcs/index.md
@@ -42,7 +42,7 @@ secret/gcs-secret created

### Create BackupStorage

Now, you have to create a `BackupStorage` crd. You have to provide the storage secret that we have created earlier in `spec.storage.gcs.SecretName` field.
Now, you have to create a `BackupStorage` CR. You have to specify the name of the storage secret that we created earlier in the `spec.storage.gcs.secretName` field.

Following parameters are available for `gcs` backend.

14 changes: 6 additions & 8 deletions docs/guides/backends/local/index.md
@@ -16,7 +16,9 @@ section_menu_id: guides

### What is Local Backend

`Local` backend refers to a local path inside a container. KubeStash runs `Job` to handle various backend interactions for the local backend. These interactions include tasks such as initializing `BackupStorage`, initializing `Repository`, uploading `Snapshots`, cleanup `Repository`, and cleanup `BackupStorage`. Any Kubernetes supported [volumes](https://kubernetes.io/docs/concepts/storage/volumes/) such as [PersistentVolumeClaim](https://kubernetes.io/docs/concepts/storage/volumes/#persistentvolumeclaim), [HostPath](https://kubernetes.io/docs/concepts/storage/volumes/#hostpath), [EmptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) (for testing only), [NFS](https://kubernetes.io/docs/concepts/storage/volumes/#nfs), [gcePersistentDisk](https://kubernetes.io/docs/concepts/storage/volumes/#gcepersistentdisk) etc. can be used as local backend.
KubeStash supports any Kubernetes supported [volume](https://kubernetes.io/docs/concepts/storage/volumes/), such as [PersistentVolumeClaim](https://kubernetes.io/docs/concepts/storage/volumes/#persistentvolumeclaim), [HostPath](https://kubernetes.io/docs/concepts/storage/volumes/#hostpath), [EmptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) (for testing only), [NFS](https://kubernetes.io/docs/concepts/storage/volumes/#nfs), [gcePersistentDisk](https://kubernetes.io/docs/concepts/storage/volumes/#gcepersistentdisk), etc., as a local backend.

> Unlike other backend options that allow the KubeStash operator to interact directly with storage, it cannot do so with the local backend because the backend volume is not mounted in the operator pod. Therefore, it needs to execute jobs to initialize the BackupStorage and Repository, as well as upload Snapshot metadata to the local backend.
### Create BackupStorage

@@ -30,9 +32,7 @@ Following parameters are available for `Local` backend.
| `local.subPath` | `Optional` | Sub-path inside the referenced volume where the backed up data will be stored instead of its root. |
| `local.VolumeSource` | `Required` | Any Kubernetes volume. Can be specified inlined. Example: `hostPath`. |


> Note that by default, KubeStash run an initializer job for the local backend, which doesn’t have file write permission. So, in order to achieve that you must give file system group permission, achieved by specifying `spec.runtimeSettings.pod.securityContext.fsGroup` in the `BackupStorage` configuration.
> By default, KubeStash runs an initializer job as user `65534` for the local backend. However, this user might lack write permission on the backend volume. To address this, you can specify a different `fsGroup` or `runAsUser` in the `.spec.runtimeSettings.pod.securityContext` section of the `BackupStorage`.
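Such a security-context override might be sketched like this inside the `BackupStorage` spec (the numeric IDs are illustrative; use whatever user/group owns your backend volume):

```yaml
spec:
  runtimeSettings:
    pod:
      securityContext:
        runAsUser: 65534   # illustrative: user the initializer job runs as
        fsGroup: 65534     # illustrative: group ownership applied to mounted volume files
```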
Here, we are going to show some sample `BackupStorage` objects that uses different Kubernetes volume as a backend.

@@ -71,9 +71,7 @@ $ kubectl apply -f https://github.com/kubestash/docs/raw/{{< param "info.version
backupstorage.storage.kubestash.com/local-storage-with-hostpath created
```

> Note that by default, Kubestash run `BackupStorage` initializer job with a `non-root` user. `hostPath` volume is writable only for the `root` user. So, in order to use `hostPath` volume as a backend, you must either run initializer job as the `root` user, achieved by specifying `spec.runtimeSettings.pod.securityContext.runAsUser` in the `BackupStorage` configuration, or adjust the permissions of the `hostPath` to allow write access for `non-root` users.
> Since a `hostPath` volume is typically writable only by the `root` user, you'll need to either run the initializer job as `root` by setting `spec.runtimeSettings.pod.securityContext.runAsUser` in the `BackupStorage` configuration, or adjust permissions on the host filesystem to enable non-root write access.
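A rough sketch of the first option, running the initializer job as `root` for a `hostPath` backend (both paths below are hypothetical placeholders, not values from this guide):

```yaml
spec:
  storage:
    provider: local
    local:
      mountPath: /kubestash/data       # placeholder: where the volume is mounted inside the job
      hostPath:
        path: /var/kubestash/backup    # placeholder: backup directory on the host
  runtimeSettings:
    pod:
      securityContext:
        runAsUser: 0                   # run the initializer job as root
```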

### PersistentVolumeClaim as Backend

@@ -146,7 +144,7 @@ $ kubectl apply -f https://github.com/kubestash/docs/raw/{{< param "info.version
backupstorage.storage.kubestash.com/local-storage-with-nfs created
```

>For NFS backend, KubeStash may have to run the network volume accessor deployments in privileged mode to provide KubeStash CLI facilities. In this case, please configure network volume accessors by following the instruction [here](/docs/setup/install/troubleshooting/index.md#configuring-network-volume-accessor).
> For network volumes such as NFS, KubeStash deploys a helper network volume accessor `Deployment` in the same namespace as the `BackupStorage`. This deployment mounts the NFS volume, allowing the KubeStash CLI to interact with the backend. You can configure the network volume accessor by following the instructions [here](/docs/setup/install/troubleshooting/index.md#configuring-network-volume-accessor).

## Next Steps

2 changes: 1 addition & 1 deletion docs/guides/backends/s3/index.md
@@ -58,7 +58,7 @@ secret/minio-secret created

### Create BackupStorage

Now, you have to create a `BackupStorage` object. You have to provide the storage secret that we have created earlier in `spec.storage.s3.secretName` field.
Now, you have to create a `BackupStorage` CR. You have to specify the name of the storage secret that we created earlier in the `spec.storage.s3.secretName` field.
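For an S3-compatible store such as MinIO, the backend section might be sketched as follows (the endpoint, bucket, and region values are hypothetical placeholders):

```yaml
spec:
  storage:
    provider: s3
    s3:
      endpoint: http://minio.demo.svc.cluster.local   # placeholder: your MinIO/S3 endpoint
      bucket: kubestash-backup                        # placeholder: your bucket
      region: us-east-1                               # region, if your provider requires one
      prefix: demo                                    # optional sub-path inside the bucket
      secretName: minio-secret                        # the storage secret created above
```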

Following parameters are available for `S3` backend.

10 changes: 0 additions & 10 deletions docs/guides/cli/_index.md

This file was deleted.

Binary file not shown.
Binary file not shown.
Binary file not shown.
42 changes: 23 additions & 19 deletions docs/guides/kubedump/application/index.md
@@ -94,6 +94,7 @@ $ kubectl apply -f https://github.com/kubestash/docs/raw/{{< param "info.version
backupstorage.storage.kubestash.com/gcs-repo created
```

Now, we are ready to back up our application YAML resources.

**Create RetentionPolicy:**

@@ -119,7 +120,7 @@ spec:
from: Same
```

Notice the `spec.usagePolicy` that allows referencing the `RetentionPolicy` from all namespaces.For more details on configuring it for specific namespaces, please refer to the following [RetentionPolicy usage policy](/docs/concepts/crds/retentionpolicy/index.md).
Notice the `spec.usagePolicy` that allows referencing the `RetentionPolicy` from all namespaces. For more details on configuring it for specific namespaces, please refer to this [link](/docs/concepts/crds/retentionpolicy/index.md).
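The alternatives can be sketched roughly like this (the label selector below is illustrative, not part of this guide's manifests):

```yaml
usagePolicy:
  allowedNamespaces:
    from: All            # any namespace may reference this RetentionPolicy
# or restrict to selected namespaces:
# usagePolicy:
#   allowedNamespaces:
#     from: Selector
#     selector:
#       matchLabels:
#         kubernetes.io/metadata.name: demo
```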

Let's create the `RetentionPolicy` object that we have shown above,

@@ -194,9 +195,9 @@ Here, we are going to take backup YAMLs for `kubestash-kubestash-operator` Deplo

**Create Secret:**

We also have to create another `Secret` with an encryption key `RESTIC_PASSWORD` for `Restic`. This secret will be used by `Restic` for both encrypting and decrypting the backup data during backup & restore.
We also have to create another `Secret` with an encryption key `RESTIC_PASSWORD` for `Restic`. This secret will be used by `Restic` for encrypting the backup data.

Let's create a secret named `encry-secret` with the Restic password.
Let's create a secret named `encrypt-secret` with the Restic password.

```bash
$ echo -n 'changeit' > RESTIC_PASSWORD
@@ -256,9 +257,9 @@ spec:
```

Here,
- `spec.sessions[*].addon.name` specifies the name of the `Addon` object that specifies addon configuration that will be used to perform backup of a stand-alone PVC.
- `spec.sessions[*].addon.tasks[*].name` specifies the name of the `Task` that holds the `Function` and their order of execution to perform backup of a stand-alone PVC.
- `spec.sessions[*].addon.jobTemplate.runtimeSettings.pod.serviceAccountName` specifies the ServiceAccount name that we have created earlier with cluster-wide resource reading permission.
- `spec.sessions[*].addon.name` specifies the name of the `Addon`.
- `spec.sessions[*].addon.tasks[*].name` specifies the name of the backup task.
- `spec.sessions[*].addon.jobTemplate.spec.serviceAccountName` specifies the `ServiceAccount` name that we have created earlier with cluster-wide resource reading permission.

Let's create the `BackupConfiguration` object we have shown above,

@@ -279,6 +280,18 @@ NAME PHASE PAUSED AGE
application-manifest-backup Ready 19s
```

**Verify Repository:**

Verify that the `Repository` specified in the `BackupConfiguration` has been created using the following command,

```bash
$ kubectl get repositories -n demo
NAME INTEGRITY SNAPSHOT-COUNT SIZE PHASE LAST-SUCCESSFUL-BACKUP AGE
gcs-repository Ready 28s
```

KubeStash also keeps a backup of the `Repository` YAML. If we navigate to the GCS bucket, we will see the `Repository` YAML stored in the `demo/deployment-manifests` directory.

**Verify CronJob:**

Verify that KubeStash has created a `CronJob` with the schedule specified in `spec.sessions[*].scheduler.schedule` field of `BackupConfiguration` object.
@@ -306,33 +319,24 @@ application-manifest-backup-frequent-backup-1708677300 BackupConfiguration a

**Verify Backup:**

When backup session is created, KubeStash operator creates `Snapshot` which represents the state of backup run for each `Repository` which are provided in `BackupConfiguration`.

```bash
$ kubectl get repositories -n demo
NAME INTEGRITY SNAPSHOT-COUNT SIZE PHASE LAST-SUCCESSFUL-BACKUP AGE
gcs-repository true 1 2.262 KiB Ready 103s 8m
```

At this moment we have one `Snapshot`. Run the following command to check the respective `Snapshot`.
When a `BackupSession` is created, the KubeStash operator creates a `Snapshot` for each `Repository` listed in the respective session of the `BackupConfiguration`. Since we have specified only one repository in the session, we should have one `Snapshot` at this moment.

Verify created `Snapshot` object by the following command,
Run the following command to check the respective `Snapshot`,

```bash
$ kubectl get snapshots -n demo
NAME REPOSITORY SESSION SNAPSHOT-TIME DELETION-POLICY PHASE AGE
gcs-repository-application-manifckup-frequent-backup-1708677300 gcs-repository frequent-backup 2024-02-23T08:35:00Z Delete Succeeded 43s
```

Now, If we navigate to `kubestash-qa/demo/deployment-manifests/repository/v1/frequent-backup/manifest` directory of our GCS bucket, we are going to see that the snapshot has been stored there.
Now, if we navigate to the GCS bucket, we will see the backed up data stored in the `demo/deployment-manifests/repository/v1/frequent-backup/manifest` directory. KubeStash also keeps a backup of the `Snapshot` YAMLs, which can be found in the `demo/deployment-manifests/repository/snapshots` directory.

<figure align="center">
  <img alt="Backup YAMLs data of an Application in GCS storage" src="/docs/guides/kubedump/application/images/application_manifest_backup.png">
<figcaption align="center">Fig: Backup YAMLs data of an Application in GCS backend</figcaption>
</figure>

> KubeStash keeps all backup data encrypted. So, snapshot files in the bucket will not contain any meaningful data until they are decrypted.
> Note: KubeStash keeps all the dumped data encrypted in the backup directory, meaning the dumped files won't contain any readable data until they are decrypted.

## Restore

36 changes: 14 additions & 22 deletions docs/guides/kubedump/cluster/index.md
@@ -97,7 +97,7 @@ $ kubectl apply -f https://github.com/kubestash/docs/raw/{{< param "info.version
backupstorage.storage.kubestash.com/gcs-repo created
```

Now, we are ready to backup our target volume to this backend.
Now, we are ready to back up our cluster YAML resources.

**Create RetentionPolicy:**

@@ -123,17 +123,15 @@ spec:
from: Same
```

Notice the `spec.usagePolicy` that allows referencing the `RetentionPolicy` from all namespaces. For more details on configuring it for specific namespaces, please refer to this [link](/docs/concepts/crds/retentionpolicy/index.md).

Let's create the `RetentionPolicy` object that we have shown above,

```bash
$ kubectl apply -f https://github.com/kubestash/docs/raw/{{< param "info.version" >}}/docs/guides/kubedump/cluster/examples/retentionpolicy.yaml
retentionpolicy.storage.kubestash.com/demo-retention created
```

Notice the `spec.usagePolicy` that allows referencing the `RetentionPolicy` from all namespaces.For more details on configuring it for specific namespaces, please refer to the following [RetentionPolicy usage policy](/docs/concepts/crds/retentionpolicy/index.md).

Let's create the `RetentionPolicy` object that we have shown above,


#### Create RBAC

@@ -186,10 +184,9 @@ Now, we are ready for backup. In the next section, we are going to schedule a ba

To schedule a backup, we have to create a `BackupConfiguration` object.


**Create Secret:**

We also have to create another `Secret` with an encryption key `RESTIC_PASSWORD` for `Restic`. This secret will be used by `Restic` for both encrypting and decrypting the backup data during backup & restore.
We also have to create another `Secret` with an encryption key `RESTIC_PASSWORD` for `Restic`. This secret will be used by `Restic` for encrypting the backup data.

Let's create a secret named `encry-secret` with the Restic password.

@@ -244,8 +241,9 @@ spec:
```

Here,
- `spec.sessions[*].addon.name` specifies the name of the `Addon` object that specifies addon configuration that will be used to perform backup of a stand-alone PVC.
- `spec.sessions[*].addon.tasks[*].name` specifies the name of the `Task` that holds the `Function` and their order of execution to perform backup of a stand-alone PVC.
- `spec.sessions[*].addon.name` specifies the name of the `Addon`.
- `spec.sessions[*].addon.tasks[*].name` specifies the name of the backup task.
- `spec.sessions[*].addon.jobTemplate.spec.serviceAccountName` specifies the `ServiceAccount` name that we have created earlier with cluster-wide resource reading permission.

Let's create the `BackupConfiguration` object we have shown above,

@@ -269,14 +267,16 @@ cluster-resources-backup Ready 79s

**Verify Repository:**

Verify that KubeStash has created `Repositories` that holds the `BackupStorage` information by the following command,
Verify that the `Repository` specified in the `BackupConfiguration` has been created using the following command,

```bash
$ kubectl get repositories -n demo
NAME INTEGRITY SNAPSHOT-COUNT SIZE PHASE LAST-SUCCESSFUL-BACKUP AGE
gcs-repository Ready 28s
```

KubeStash also keeps a backup of the `Repository` YAML. If we navigate to the GCS bucket, we will see the `Repository` YAML stored in the `demo/cluster-manifests` directory.

**Verify CronJob:**

Verify that KubeStash has created a `CronJob` with the schedule specified in `spec.sessions[*].scheduler.schedule` field of `BackupConfiguration` object.
@@ -302,32 +302,24 @@ cluster-resources-backup-frequent-backup-1708694700 BackupConfiguration clus

**Verify Backup:**

When backup session is created, KubeStash operator creates `Snapshot` which represents the state of backup run for each `Repository` which are provided in `BackupConfiguration`.

```bash
$ kubectl get repositories -n demo
NAME INTEGRITY SNAPSHOT-COUNT SIZE PHASE LAST-SUCCESSFUL-BACKUP AGE
gcs-repository Ready 28s
```

At this moment we have one `Snapshot`. Run the following command to check the respective `Snapshot`.
When a `BackupSession` is created, the KubeStash operator creates a `Snapshot` for each `Repository` listed in the respective session of the `BackupConfiguration`. Since we have specified only one repository in the session, we should have one `Snapshot` at this moment.

Verify created `Snapshot` object by the following command,
Run the following command to check the respective `Snapshot`,

```bash
$ kubectl get snapshots -n demo
NAME REPOSITORY SESSION SNAPSHOT-TIME DELETION-POLICY PHASE AGE
gcs-repository-cluster-resourcesckup-frequent-backup-1708694700 gcs-repository frequent-backup 2024-02-23T13:25:00Z Delete Succeeded 22m
```

Now, if we navigate to the GCS bucket, we will see the backed up data has been stored in `/manifest/cluster` directory as specified by `.spec.backend.gcs.prefix` field of the `Repository` object.
Now, if we navigate to the GCS bucket, we will see the backed up data stored in the `demo/cluster-manifests/repository/v1/frequent-backup/manifest` directory. KubeStash also keeps a backup of the `Snapshot` YAMLs, which can be found in the `demo/cluster-manifests/repository/snapshots` directory.

<figure align="center">
<img alt="Backup data in GCS Bucket" src="/docs/guides/kubedump/cluster/images/cluster_manifests_backup.png">
<figcaption align="center">Fig: Backup data in GCS Bucket</figcaption>
</figure>

> Note: KubeStash keeps all the backed-up data encrypted. So, data in the backend will not make any sense until they are decrypted.
> Note: KubeStash keeps all the dumped data encrypted in the backup directory, meaning the dumped files won't contain any readable data until they are decrypted.

## Restore
