= SDD 0029 - Local Commodore Development Environment

:sdd_author: Simon Gerber
:sdd_owner: VSHN Tech Alignment WG
:sdd_reviewers: Reviewer names
:sdd_date: 2023-XX-XX
:sdd_status: draft
include::partial$meta-info-table.adoc[]

[NOTE]
.Summary
====
This SDD outlines a local development environment which is easy to use, quick to bootstrap, and which supports local development backed by a full cluster catalog.
====

== Motivation

While we have a development environment for working on a single component in isolation (`commodore component compile`), we don't currently have a simple development environment for working on complex configurations which consist of multiple components.

Additionally, the currently available `commodore component compile` hits limits when working on more complex components, for example when advanced ArgoCD features are required for correct installation of the software managed by a component.

Finally, a more full-fledged local development environment also enables us to provide a better getting-started experience with Commodore, and lets us explore new avenues for integration testing in component CI pipelines.

=== Goals

We want the local development environment to

* Support quick edit-compile-deploy cycles for single components
* Enable users to test single- or multi-component configurations in a local Kubernetes cluster (e.g. `kind` or `k3d`) without having to register the cluster in a Lieutenant instance
* ...
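
Such a throwaway local cluster can already be created with the upstream tooling directly. As a sketch using `kind` (the cluster name is arbitrary):

```shell
# Create a disposable local cluster; kind also switches the
# current kubectl context to the new cluster
kind create cluster --name syn-dev

# Throw the cluster away when done
kind delete cluster --name syn-dev
```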

=== Non-Goals

* TBD

== User Stories

=== As a *component developer* I want *quick and easy edit-compile-deploy cycles* so that *I can quickly iterate on my changes*

==== Background

We've seen Helm charts which make heavy use of install hooks to perform some initialization steps in the right order. Such hooks aren't supported at all with the currently available tools.
We also develop multiple components which use ArgoCD's sync job or sync wave mechanisms to orchestrate more complex installations or upgrades.
Some changes to these components can't be tested without having a fully Syn-managed cluster available.

Depending on how many components are installed on a shared dev cluster, compile times for the dev cluster's cluster catalog may be in the range of multiple minutes, which isn't acceptable for iterating on changes.

==== With the current tooling

Jane tests components by compiling them locally with `commodore component compile` and then applies the resulting manifests to her kind cluster with `kubectl apply -Rf compiled/`.
However, because the component she's trying to test installs a Helm chart which uses Helm hooks to perform some in-cluster configuration that needs to happen in a specific order, she isn't able to get the software installed by the Helm chart to work properly.
At this point, Jane may start chasing red herrings, such as "The installation works fine with `helm install`, so something in Commodore's Helm rendering must be wrong".
Alternatively, if she's seen a similar issue previously, she'll either test the component on a shared dev cluster or she'll register her local cluster in a Lieutenant instance, since she's aware that ArgoCD will execute Helm hooks in the correct order.
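
Before resorting to those workarounds, Jane's fast local loop is essentially the following (exact `component compile` flags depend on the component):

```shell
# Compile the component in the current working directory
commodore component compile .

# Apply the rendered manifests to the currently selected cluster
kubectl apply -Rf compiled/
```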

With either of those approaches, Jane's compile-deploy cycle has jumped from a few seconds to tens of seconds, since she now needs a full `commodore catalog compile` followed by either manually syncing her component's App in ArgoCD or waiting for ArgoCD's sync cycle.
If she's decided to test her component on the shared dev cluster because she doesn't want to spend upwards of 15 minutes getting her local cluster Syn-managed, the compile-deploy cycle may even take multiple minutes, depending on how many other components are installed on the shared cluster.

==== With the proposed tooling

Jane tests components locally by running `commodore component install` while her `kubectl` config points to her local kind cluster.
After a few seconds, she sees her changes applied in her local kind cluster.
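
Note that `commodore component install` doesn't exist yet; the intended loop might look like this (the command name and behavior are part of this proposal, not an existing CLI):

```shell
# Make sure kubectl points at the local cluster
kubectl config use-context kind-syn-dev

# Hypothetical: compile the component in the current working directory
# and apply it to the cluster selected in the current kubectl context
commodore component install
```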

=== As a *component maintainer* I want *integration tests in the component CI pipeline* so that *I can see issues with dependency updates when looking at PRs*

==== With the current tooling

We've not engineered any support for running integration tests (i.e. installing components in a kind cluster) in CI pipelines.
Therefore, John needs to set up a local kind cluster, make it Syn-managed, and install the dependency update branch of the component he's reviewing by adding the component and all of its dependencies to the newly created Syn cluster config for his local cluster.
Alternatively, John can update the component to the dependency branch on the shared dev cluster without any local tests, and accept that he may break the dev environment with a so-far completely untested dependency update.

==== With the proposed tooling

John can create a local kind cluster and install the dependency update branch of the component into his local cluster by running `commodore component install`.
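
A CI job building on this could look roughly as follows. This is a sketch only: it assumes GitHub Actions and the existing `helm/kind-action`, while `commodore component install` is the proposed, not yet existing, command.

```yaml
# Hypothetical CI job; the commodore command below doesn't exist yet
integration-test:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    # Spin up a throwaway kind cluster inside the CI runner
    - uses: helm/kind-action@v1
    - name: Install component into the kind cluster
      run: commodore component install
```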

=== As a *system engineer* I want to *locally test configuration changes touching multiple components* so that *I can confidently say that the changes are safe to roll out*

==== With the current tooling

Project Syn currently has no support for cleanly testing configuration changes which touch multiple components without compiling a full cluster catalog.
Therefore, Jane usually tests such changes on the dev cluster, accepting that she'll wait multiple minutes for each iteration of her change before she sees the catalog diff.
If she wants to actually test iterations of the configuration change on the cluster, she has to push or merge each iteration of the change into the master branch of the dev cluster's tenant repo, and then compile and push the catalog changes.

==== With the proposed tooling

Jane sets up a local kind cluster and ensures her current `kubectl` context points to it.
She then defines a local cluster spec which installs all the components affected by her configuration change.
She copies or references the current configuration from the regular config hierarchy and runs `commodore catalog install cluster-spec.yml` to render a local catalog and install it into her local kind cluster.
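
The format of such a cluster spec is still open; one possible shape, with all keys and values purely illustrative, could be:

```yaml
# cluster-spec.yml -- hypothetical format, to be defined by this SDD
components:
  - name: cert-manager
  - name: my-component
    url: ~/code/component-my-component
    version: feature/my-change
# Optional references to existing config repos
globalRepo: https://github.com/projectsyn/commodore-defaults.git
tenantRepo: ~/code/tenant-repo
parameters:
  facts:
    distribution: k3d
```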

== Design Proposal

=== Design ideas

==== Local cluster spec in Reclass

* Have a local "cluster spec" Reclass class which is used in place of `global.commodore` in the Kapitan targets and which specifies everything that's required to render a local catalog
* Render the catalog into a local Git repo -> TODO: how to handle secret references?
* Engineer a Steward variant which either spins up a kind/k3d cluster and bootstraps it from the local catalog, or which expects a local kind/k3d cluster to be available.
* Easy to support multi-component testing, aka `commodore catalog install cluster-spec.yml` in the user stories.
** We can engineer optional support for referencing remote global or tenant repos in the local cluster spec.
* The edit-compile-deploy cycle may become slower, especially if the cluster spec gets big or references a lot of external config repos.
* A single command `commodore component install` may need some extra inputs
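
As an illustration of the first bullet, such a local cluster-spec class might look roughly like this, reusing the existing `parameters.components` and `parameters.facts` conventions (all concrete values hypothetical):

```yaml
# classes/local-cluster.yml -- hypothetical Reclass class used in
# place of global.commodore when rendering a local catalog
parameters:
  components:
    my-component:
      # Point at a local working copy instead of a remote repo
      url: ~/code/component-my-component
      version: feature/my-change
  facts:
    distribution: k3d
    cloud: local
```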

==== Run `commodore component compile` in ArgoCD

* Have tooling to bootstrap an ArgoCD which is configured with an App pointing to the developer's working directory and which runs `commodore component compile` during apply
* `commodore component install` in the user stories is basically this
* Unclear how we can offer local multi-component testing ("local catalog") with this approach
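
One way to wire this up would be an ArgoCD config management plugin. The Application below uses the real `argoproj.io/v1alpha1` schema, but the plugin name, repo URL, and namespaces are assumptions, not a settled design:

```yaml
# Hypothetical ArgoCD Application pointing at the developer's working
# copy, rendered through a custom config management plugin that runs
# `commodore component compile`. How ArgoCD reaches the local working
# directory (shown here as a file:// URL) is an open question.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-component-dev
  namespace: argocd
spec:
  project: default
  source:
    repoURL: file:///home/jane/code/component-my-component
    path: .
    plugin:
      name: commodore-component-compile
  destination:
    server: https://kubernetes.default.svc
    namespace: syn-my-component
  syncPolicy:
    automated: {}
```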

==== Fake Lieutenant API managed by Commodore

* Commodore provides a fake Lieutenant API based on additional flags for `catalog compile`
* Just having the fake Lieutenant API doesn't address the remaining aspect of getting a local cluster bootstrapped with ArgoCD linked to a repo on the developer's laptop.
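
To illustrate, such an invocation might look like the following; `commodore catalog compile` exists today, but the fake-Lieutenant flags shown here are purely hypothetical:

```shell
# Hypothetical flags -- compile a catalog for a made-up cluster ID
# against a fake, Commodore-managed Lieutenant API
commodore catalog compile my-local-cluster \
  --fake-lieutenant \
  --cluster-spec cluster-spec.yml
```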

=== Implementation Details/Notes/Constraints [optional]

=== Risks and Mitigations [optional]

== Drawbacks [optional]

== Alternatives

== References