Stack recipe CLI improvements (#872)
* fix string indices bug

* highlight that pull will delete tfstate files

* add check for kubectl, helm and docker

* add option to skip checking locals.tf file

* update zenml version automatically in recipe

* Apply Hamza's suggestions from code review

Co-authored-by: Hamza Tahir <[email protected]>

* Apply Alex's suggestions from code review

Co-authored-by: Alex Strick van Linschoten <[email protected]>

* add filename flag to stack import

* Fix failing test

* add describe function

* Show help message on stack recipe subcommands

* check if prerequisites are met

* remove pager

* add command to check version

* Fix docstring and formatting

* Add missing return docstring

* Fix remaining linting issues after merge

* Fix all links

* Make return type optional

Co-authored-by: Hamza Tahir <[email protected]>
Co-authored-by: Alex Strick van Linschoten <[email protected]>
Co-authored-by: Michael Schuster <[email protected]>
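The "add check for kubectl, helm and docker" and "check if prerequisites are met" changes suggest a tool-availability check before running a recipe. A minimal sketch of how such a check could look (function and message are illustrative assumptions, not the actual ZenML implementation):

```python
import shutil


def missing_tools(required=("kubectl", "helm", "docker")):
    """Return the subset of required CLI tools not found on the PATH."""
    return [tool for tool in required if shutil.which(tool) is None]


missing = missing_tools()
if missing:
    # Abort early with an actionable message instead of failing mid-deploy.
    print(f"Please install before proceeding: {', '.join(missing)}")
```

`shutil.which` mirrors the shell's `which`, so the check works without spawning subprocesses.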
4 people authored Sep 9, 2022
1 parent 8322f68 commit e35cb26
Showing 28 changed files with 276 additions and 90 deletions.
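One of the listed changes updates the ZenML version automatically in each recipe. A hedged sketch of such a rewrite over a `locals.tf`-style file (the `zenml-version` variable name and file layout are assumptions, not the actual recipe format):

```python
import re


def update_zenml_version(locals_tf: str, new_version: str) -> str:
    """Replace the zenml-version value in a locals.tf-style string."""
    return re.sub(
        r'(zenml-version\s*=\s*")[^"]*(")',
        rf"\g<1>{new_version}\g<2>",
        locals_tf,
    )


sample = 'locals {\n  zenml-version = "0.13.0"\n}\n'
print(update_zenml_version(sample, "0.13.2"))
```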
3 changes: 3 additions & 0 deletions .gitignore
@@ -180,3 +180,6 @@ zenml_examples/

# some examples have yamls; don't ignore them
!examples/**/*.yaml

# stack recipes
test_terraform/
4 changes: 2 additions & 2 deletions .pyspelling-ignore-words
@@ -120,6 +120,7 @@ DataFrame's
DataFrames
DataLoader
DataQualityProfileSection
Databricks
DatasetProfile
DatasetProfileView
Deepchecks
@@ -734,6 +735,7 @@ daemonization
daemonize
daemonizer
dango
databricks
datadir
datadrift
dataframe
@@ -1282,5 +1284,3 @@ zenml
zenmldocker
zenmlregistry
zenserver
databricks
Databricks
2 changes: 1 addition & 1 deletion docs/book/mlops-stacks/model-deployers/model-deployers.md
@@ -187,7 +187,7 @@ Both pre- and post-processing are very essential for the model deployment proces
The custom model deployment support is available only for the following integrations:
* [KServe Custom Predictor](./kserve.md#custom-model-deployment)
* [Seldon Core Custom Python Model](./seldon-core.md#custom-model-deployment)
* [Seldon Core Custom Python Model](./seldon.md#custom-model-deployment)
{% endhint %}
### How to Interact with model deployer after deployment?
20 changes: 10 additions & 10 deletions docs/book/stack-deployment-guide/manual-deployments/aws/aws.md
@@ -12,42 +12,42 @@ This is a list of all supported AWS services that you can use as ZenML stack com
Amazon Elastic Kubernetes Service (Amazon EKS) is a managed container service to run and scale Kubernetes applications in the cloud or on-premises. [Learn more here](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html).

* An EKS cluster can be used to run multiple **orchestrators**.
* [A Kubernetes-native orchestrator.](../../mlops-stacks/orchestrators/kubernetes.md)
* [A Kubeflow orchestrator.](../../mlops-stacks/orchestrators/kubeflow.md)
* [A Kubernetes-native orchestrator.](../../../mlops-stacks/orchestrators/kubernetes.md)
* [A Kubeflow orchestrator.](../../../mlops-stacks/orchestrators/kubeflow.md)
* You can host **model deployers** on the cluster.
* [A Seldon model deployer.](../../mlops-stacks/model-deployers/seldon.md)
* [An MLflow model deployer.](../../mlops-stacks/model-deployers/mlflow.md)
* [A Seldon model deployer.](../../../mlops-stacks/model-deployers/seldon.md)
* [An MLflow model deployer.](../../../mlops-stacks/model-deployers/mlflow.md)
* Experiment trackers can also be hosted on the cluster.
* [An MLflow experiment tracker](../../mlops-stacks/experiment-trackers/mlflow.md)
* [An MLflow experiment tracker](../../../mlops-stacks/experiment-trackers/mlflow.md)

## Simple Storage Service (S3)

Amazon Simple Storage Service (Amazon S3) is an object storage service that offers scalability, data availability, security, and performance. [Learn more here](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html).

* You can use an [S3 bucket as an artifact store](../../mlops-stacks/artifact-stores/amazon-s3.md) to hold files from our pipeline runs like models, data and more.
* You can use an [S3 bucket as an artifact store](../../../mlops-stacks/artifact-stores/amazon-s3.md) to hold files from our pipeline runs like models, data and more.

## Elastic Container Registry (ECR)

Amazon Elastic Container Registry (Amazon ECR) is an AWS managed container image registry service that is secure, scalable, and reliable. [Learn more here](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html).

* An [ECS registry can be used as a container registry](../../mlops-stacks/container-registries/amazon-ecr.md) stack component to host images of your pipelines.
* An [ECS registry can be used as a container registry](../../../mlops-stacks/container-registries/amazon-ecr.md) stack component to host images of your pipelines.

## SageMaker

Amazon SageMaker is a fully managed machine learning service. With SageMaker, data scientists and developers can quickly build and train machine learning models, and then directly deploy them into a production-ready hosted environment. [Learn more here](https://docs.aws.amazon.com/sagemaker/latest/dg/whatis.html).

* You can use [SageMaker as a step operator](../../mlops-stacks/step-operators/amazon-sagemaker.md) to run specific steps from your pipeline using it as the backend.
* You can use [SageMaker as a step operator](../../../mlops-stacks/step-operators/amazon-sagemaker.md) to run specific steps from your pipeline using it as the backend.

## Relational Database Service (RDS)

Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the AWS Cloud. [Learn more here](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html).

* You can use [Amazon RDS as a metadata store](../../mlops-stacks/metadata-stores/mysql.md) to track metadata from your pipeline runs.
* You can use [Amazon RDS as a metadata store](../../../mlops-stacks/metadata-stores/mysql.md) to track metadata from your pipeline runs.

## Secrets Manager

Secrets Manager enables you to replace hardcoded credentials in your code, including passwords, with an API call to Secrets Manager to retrieve the secret programmatically. [Learn more here](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html).

* You can store your secrets to be used inside a pipeline by registering the [AWS Secrets Manager as a ZenML secret manager](../../mlops-stacks/secrets-managers/aws.md) stack component.
* You can store your secrets to be used inside a pipeline by registering the [AWS Secrets Manager as a ZenML secret manager](../../../mlops-stacks/secrets-managers/aws.md) stack component.

In the following pages, you will find step-by-step guides for setting up some common stacks using the AWS console and the CLI. More combinations and components are progressively updated in the form of new pages.
@@ -13,31 +13,31 @@ Azure Kubernetes Service (AKS) is a managed Kubernetes service with hardened sec


* An AKS cluster can be used to run multiple **orchestrators**.
* [A Kubernetes-native orchestrator.](../../mlops-stacks/orchestrators/kubernetes.md)
* [A Kubeflow orchestrator.](../../mlops-stacks/orchestrators/kubeflow.md)
* [A Kubernetes-native orchestrator.](../../../mlops-stacks/orchestrators/kubernetes.md)
* [A Kubeflow orchestrator.](../../../mlops-stacks/orchestrators/kubeflow.md)
* You can host **model deployers** on the cluster.
* [A Seldon model deployer.](../../mlops-stacks/model-deployers/seldon.md)
* [An MLflow model deployer.](../../mlops-stacks/model-deployers/mlflow.md)
* [A Seldon model deployer.](../../../mlops-stacks/model-deployers/seldon.md)
* [An MLflow model deployer.](../../../mlops-stacks/model-deployers/mlflow.md)
* Experiment trackers can also be hosted on the cluster.
* [An MLflow experiment tracker](../../mlops-stacks/model-deployers/mlflow.md)
* [An MLflow experiment tracker](../../../mlops-stacks/model-deployers/mlflow.md)

## Azure Blob Storage

Azure Blob storage is Microsoft's object storage solution for the cloud. Blob storage is optimized for storing massive amounts of unstructured data. Blob storage offers three types of resources: the storage account, a container in the storage account and a blob in a container. [Learn more here](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction).

* You can use an [Azure Blob Storage Container as an artifact store](../../mlops-stacks/artifact-stores/azure-blob-storage.md) to hold files from our pipeline runs like models, data and more.
* You can use an [Azure Blob Storage Container as an artifact store](../../../mlops-stacks/artifact-stores/azure-blob-storage.md) to hold files from our pipeline runs like models, data and more.

## Azure Container Registry

Azure Container Registry is a managed registry service based on the open-source Docker Registry 2.0. Create and maintain Azure container registries to store and manage your container images and related artifacts. [Learn more here](https://docs.microsoft.com/en-us/azure/container-registry/container-registry-intro).

* An [Azure container registry can be used as a ZenML container registry](../../mlops-stacks/container-registries/azure.md) stack component to host images of your pipelines.
* An [Azure container registry can be used as a ZenML container registry](../../../mlops-stacks/container-registries/azure.md) stack component to host images of your pipelines.

## AzureML

Azure Machine Learning is a cloud service for accelerating and managing the machine learning project lifecycle. Machine learning professionals, data scientists, and engineers can use it in their day-to-day workflows to train and deploy models, and manage MLOps. [Learn more here](https://docs.microsoft.com/en-us/azure/machine-learning/overview-what-is-azure-machine-learning).

* You can use [AzureML compute as a step operator](../../mlops-stacks/step-operators/azureml.md) to run specific steps from your pipeline using it as the backend.
* You can use [AzureML compute as a step operator](../../../mlops-stacks/step-operators/azureml.md) to run specific steps from your pipeline using it as the backend.

## Azure SQL server

@@ -430,7 +430,7 @@ appropriate rights after the first run fails.
At the end of the logs you should be seeing a link to the Vertex AI dashboard.
It should look something like this:

![Finished Run](../../assets/VertexAiRun.png)
![Finished Run](../../../assets/VertexAiRun.png)

In case you get an error message like this:
```shell
@@ -473,8 +473,8 @@ Now rerun your pipeline, it should work now.

Within this guide you have set up and used a stack on GCP using the Vertex AI
orchestrator. For more guides on different cloud set-ups, check out the
[Kubeflow](../../mlops-stacks/orchestrators/kubeflow.md) and
[Kubernetes](../../mlops-stacks/orchestrators/kubernetes.md) orchestrators
[Kubeflow](../../../mlops-stacks/orchestrators/kubeflow.md) and
[Kubernetes](../../../mlops-stacks/orchestrators/kubernetes.md) orchestrators
respectively and find out if these are a better fit for you.


22 changes: 11 additions & 11 deletions docs/book/stack-deployment-guide/manual-deployments/gcp/gcp.md
@@ -13,45 +13,45 @@ Google Kubernetes Engine (GKE) provides a managed environment for deploying, man


* An GKE cluster can be used to run multiple **orchestrators**.
* [A Kubernetes-native orchestrator.](../../mlops-stacks/orchestrators/kubernetes.md)
* [A Kubeflow orchestrator.](../../mlops-stacks/orchestrators/kubeflow.md)
* [A Kubernetes-native orchestrator.](../../../mlops-stacks/orchestrators/kubernetes.md)
* [A Kubeflow orchestrator.](../../../mlops-stacks/orchestrators/kubeflow.md)
* You can host **model deployers** on the cluster.
* [A Seldon model deployer.](../../mlops-stacks/model-deployers/seldon.md)
* [An MLflow model deployer.](../../mlops-stacks/model-deployers/mlflow.md)
* [A Seldon model deployer.](../../../mlops-stacks/model-deployers/seldon.md)
* [An MLflow model deployer.](../../../mlops-stacks/model-deployers/mlflow.md)
* Experiment trackers can also be hosted on the cluster.
* [An MLflow experiment tracker](../../mlops-stacks/experiment-trackers/mlflow.md)
* [An MLflow experiment tracker](../../../mlops-stacks/experiment-trackers/mlflow.md)

## Cloud Storage Bucket (GCS)

Cloud Storage is a service for storing your objects in Google Cloud. An object is an immutable piece of data consisting of a file of any format. You store objects in containers called buckets. [Learn more here](https://cloud.google.com/storage/docs/introduction).

* You can use a [GCS bucket as an artifact store](../../mlops-stacks/artifact-stores/gcloud-gcs.md) to hold files from our pipeline runs like models, data and more.
* You can use a [GCS bucket as an artifact store](../../../mlops-stacks/artifact-stores/gcloud-gcs.md) to hold files from our pipeline runs like models, data and more.

## Google Container Registry (GCR)

Container Registry is a service for storing private container images. It is being deprecated in favor of Artifact Registry, support for which will be coming soon to ZenML!

* A [GCR registry can be used as a container registry](../../mlops-stacks/container-registries/gcloud.md) stack component to host images of your pipelines.
* A [GCR registry can be used as a container registry](../../../mlops-stacks/container-registries/gcloud.md) stack component to host images of your pipelines.

## Vertex AI

Vertex AI brings together the Google Cloud services for building ML under one, unified UI and API. In Vertex AI, you can now train and compare models using AutoML or custom code training and all your models are stored in one central model repository. [Learn more here](https://cloud.google.com/vertex-ai).

* You can use [Vertex AI as a step operator](../../mlops-stacks/step-operators/gcloud-vertexai.md) to run specific steps from your pipeline using it as the backend.
* You can use [Vertex AI as a step operator](../../../mlops-stacks/step-operators/gcloud-vertexai.md) to run specific steps from your pipeline using it as the backend.

* [Vertex AI can also be used as an orchestrator](../../mlops-stacks/orchestrators/gcloud-vertexai.md) for your pipelines.
* [Vertex AI can also be used as an orchestrator](../../../mlops-stacks/orchestrators/gcloud-vertexai.md) for your pipelines.

## CloudSQL

Cloud SQL is a fully-managed database service that helps you set up, maintain, manage, and administer your relational databases on Google Cloud Platform.
You can use Cloud SQL with a MySQL server in ZenML. [Learn more here](https://cloud.google.com/sql/docs).

* You can use a [CloudSQL MySQL instance as a metadata store](../../mlops-stacks/metadata-stores/mysql.md) to track metadata from your pipeline runs.
* You can use a [CloudSQL MySQL instance as a metadata store](../../../mlops-stacks/metadata-stores/mysql.md) to track metadata from your pipeline runs.

## Secret Manager

Secret Manager is a secure and convenient storage system for API keys, passwords, certificates, and other sensitive data. Secret Manager provides a central place and single source of truth to manage, access, and audit secrets across Google Cloud. [Learn more here](https://cloud.google.com/secret-manager/docs).

* You can store your secrets to be used inside a pipeline by registering the [Google Secret Manager as a ZenML secret manager](../../mlops-stacks/secrets-managers/gcp.md) stack component.
* You can store your secrets to be used inside a pipeline by registering the [Google Secret Manager as a ZenML secret manager](../../../mlops-stacks/secrets-managers/gcp.md) stack component.

In the following pages, you will find step-by-step guides for setting up some common stacks using the GCP console and the CLI. More combinations and components are progressively updated in the form of new pages.
2 changes: 1 addition & 1 deletion docs/book/stack-deployment-guide/stack-recipes.md
@@ -19,7 +19,7 @@ To use the stack recipe CLI commands, you will have to install some optional dep
Run `pip install "zenml[stacks]"` to get started!
{% endhint %}

Detailed steps are available in the READMEs of respective recipes but here's what a simple flow could look like:
Detailed steps are available in the README of the respective recipe but here's what a simple flow could look like:

1. 📃 List the available recipes in the repository.

6 changes: 3 additions & 3 deletions examples/custom_code_deployment/README.md
@@ -172,7 +172,7 @@ The next sections cover how to setup the GCP Artifact Store credentials for the
Please look up the variables relevant to your use case in the
[official KServe Storage Credentials](https://kserve.github.io/website/0.8/sdk_docs/docs/KServeClient/#parameters)
and set them accordingly for your ZenML secrets schemas already built for each storage_type.
You can find the relevant variables in the [Kserve integration secret schemas docs](https://apidocs.zenml.io/0.13.0/api_docs/integrations/#zenml.integrations.kserve.secret_schemas.secret_schemas).
You can find the relevant variables in the [Kserve integration secret schemas docs](https://apidocs.zenml.io/latest/api_docs/integrations/#zenml.integrations.kserve.secret_schemas.secret_schemas).

#### GCP Authentication with kserve_gs secret schema

@@ -389,7 +389,7 @@ The next sections cover how to set GCP Artifact Store credentials for the Seldon
Please look up the variables relevant to your use case in the
[official Seldon Core Storage Credentials](https://kserve.github.io/website/0.8/sdk_docs/docs/KServeClient/#parameters)
and set them accordingly for your ZenML secrets schemas already built for each storage_type.
You can find the relevant variables in the [Seldon Integration secret schema](https://apidocs.zenml.io/0.13.0/api_docs/integrations/#zenml.integrations.seldon.secret_schemas.secret_schemas).
You can find the relevant variables in the [Seldon Integration secret schema](https://apidocs.zenml.io/latest/api_docs/integrations/#zenml.integrations.seldon.secret_schemas.secret_schemas).

#### GCP Authentication with seldon_s3 secret schema

@@ -621,7 +621,7 @@ rm -rf zenml_examples

# 📜 Learn more

Our docs regarding the custom model deployment can be found [here](https://docs.zenml.io/mlops-stacks/model-deployers/custom-pre-processing-and-post-processing).
Our docs regarding the custom model deployment can be found [here](https://docs.zenml.io/mlops-stacks/model-deployers#custom-pre-processing-and-post-processing).

If you want to learn more about the deployment in ZenML in general or about how to build your deployer steps in ZenML
check out our [docs](https://docs.zenml.io/mlops-stacks/model-deployers/custom).
5 changes: 2 additions & 3 deletions src/zenml/cli/stack.py
@@ -1142,16 +1142,15 @@ def _import_stack_component(

@stack.command("import", help="Import a stack from YAML.")
@click.argument("stack_name", type=str, required=True)
@click.argument("filename", type=str, required=False)
@click.option("--filename", "-f", type=str, required=False)
@click.option(
"--ignore-version-mismatch",
is_flag=True,
help="Import stack components even if the installed version of ZenML "
"is different from the one specified in the stack YAML file",
)
@click.option(
"--decouple_stores",
"decouple_stores",
"--decouple-stores",
is_flag=True,
help="Decouple the given artifact/metadata store from prior associations.",
type=click.BOOL,
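The `stack.py` diff above converts `filename` from a positional `click.argument` into a `--filename`/`-f` option and renames `--decouple_stores` to the conventional dashed `--decouple-stores`. A stdlib `argparse` analogue of the resulting interface (a sketch for illustration only; the actual ZenML CLI uses Click):

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="zenml stack import")
    # stack_name stays a required positional argument.
    parser.add_argument("stack_name")
    # filename is now an optional flag rather than a second positional.
    parser.add_argument("--filename", "-f", default=None)
    # Flags conventionally use dashes; argparse (like Click) maps
    # --decouple-stores to the attribute name decouple_stores.
    parser.add_argument("--decouple-stores", action="store_true")
    return parser


args = build_parser().parse_args(["dev", "-f", "stack.yaml"])
```

Making `filename` a flag lets `zenml stack import dev` fall back to a default file while `-f stack.yaml` overrides it explicitly.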