From bc01aa8e24718048609b97caef614f39c7f64115 Mon Sep 17 00:00:00 2001
From: Chad Crum
Date: Wed, 16 Oct 2024 10:26:48 -0400
Subject: [PATCH] Bug ECOPROJECT-2275 - Cleanup some grammatical errors in
 build and deploy service pod docs

Signed-off-by: Chad Crum
---
 doc/agentservice.md |  6 +++---
 doc/agentvm.md      | 32 ++++++++++++++++----------------
 doc/deployment.md   | 20 ++++++++++----------
 3 files changed, 29 insertions(+), 29 deletions(-)

diff --git a/doc/agentservice.md b/doc/agentservice.md
index 0bc52c6..5e589e1 100644
--- a/doc/agentservice.md
+++ b/doc/agentservice.md
@@ -1,11 +1,11 @@
 # Agent service
-Agent service is responsible for serving collected data to the user. Once user create a source for his vCenter environment the Agent service provide a streaming service to download OVA image that is ready to be booted on the vCenter enviroment to run the collection of the data.
+The Agent service is responsible for receiving and serving the collected vCenter data to the user. Once the user creates a source for their vCenter environment, the Agent service will provide a streaming service to download an OVA image. The OVA image can be booted on the vCenter environment to perform the collection of the vCenter data.
 
 ## Agent API
 There are two APIs related to the Agent.
 
 ### Internal API
-Internal Agent API exposed for the UI. This API contains operations to create source, download OVA, etc. By default running on port 3443. This API is not exposed externaly to users, it's used only internally by UI.
+The API contains operations to create a source, download the OVA image, etc. By default, it runs on TCP port 3443. The API is not exposed externally to users, as it is only used internally by the UI.
 
 ### Agent API
-The Agent API is exposed for the communication with the Agent VM. The only operation is to update the status of the source. By default running on port 7443. This API must be externally exposed, so agent VM can send over data.
+The Agent API is exposed to communicate with the Agent VM. Its only operation is to update the status of the source. By default, it runs on TCP port 7443. This API must be externally exposed so that the agent VM can initiate communication with it.
diff --git a/doc/agentvm.md b/doc/agentvm.md
index b46e34b..33f9dbc 100644
--- a/doc/agentvm.md
+++ b/doc/agentvm.md
@@ -1,31 +1,31 @@
 # Agent virtual machine
-The agent, based on Red Hat CoreOS (RHCOS), communicates with the Agent Service and reports its status.
-The agent virtual machine is initialized using ignition, which configures multiple containers that run as systemd services. Each of these services is dedicated to a specific function.
+The agent virtual machine, based on Red Hat CoreOS (RHCOS), communicates with the Agent Service and reports its status.
+The VM is initialized using ignition, which configures multiple containers that run as systemd services. Each of these services is dedicated to a specific function.
 
 ## Systemd services
-Follows the list of systemd services that can be found on agent virtual machine. All of the services
+The following is a list of systemd services that can be found on agent virtual machines. All of the services
 are defined as quadlets. Quadlet configuration can be found in the [ignition template file](../data/config.ign.template).
-Agent dockerfile can be found [here](../Containerfile.agent), the collector containerfile is [here](../Containerfile.collector).
+The Agent containerfile can be found [here](../Containerfile.agent). The collector containerfile is [here](../Containerfile.collector).
 
 ### planner-setup
-Planner-setup service is responsible for inicializing the volume with data, that are shared between `planner-agent` and `planner-agent-collector`.
+Planner-setup service is responsible for initializing the volume with data that is shared between the `planner-agent` and the `planner-agent-collector`.
 
 ### planner-agent
-Planner-agent is a service that reports the status to the Agent service. The URL of the Agent service is configured in `$HOME/vol/config.yaml` file, which is injected via ignition.
+Planner-agent is a service that reports the status to the Agent service. The URL of the Agent service is configured in the file `$HOME/vol/config.yaml`, which is injected via ignition.
 
-Planner-agent contains web application that is exposed via port 3333. Once user access the web app and enter the credentials of the vCenter, `credentials.json` file is created in the shared volume, and `planner-agent-collector` can be spawned.
+The Planner-agent contains a web application that is exposed via TCP port 3333. Once the user accesses the web application and enters the credentials of their vCenter, the `credentials.json` file is created on the shared volume and the `planner-agent-collector` container is spawned.
 
 ### planner-agent-opa
-Planner-agent-opa is a service that re-uses [forklift validation](https://github.com/kubev2v/forklift/blob/main/validation/README.adoc) container. The forklift validation container is responsible for vCenter data validation. When `planner-agent-collector` fetch vCenter data it's validated against the OPA server and report is shared back to Agent Service.
+Planner-agent-opa is a service that re-uses the [forklift validation](https://github.com/kubev2v/forklift/blob/main/validation/README.adoc) container. The forklift validation container is responsible for vCenter data validation. When the `planner-agent-collector` fetches vCenter data, it's validated against the OPA server and the report is shared back to the Agent Service.
 
 ### planner-agent-collector
-Planner-agent-collector service waits until user enter vCenter credentials, once credentials are entered the vCenter data are collected. The data are stored in `$HOME/vol/data/inventory.json`. Once `invetory.json` is created `planner-agent` service send the data over to Agent service.
+Planner-agent-collector service waits until the user enters the vCenter credentials in the `planner-agent` web application. Once the credentials are entered, the vCenter data is collected. The data is stored in `$HOME/vol/data/inventory.json`. Once `inventory.json` is created, the `planner-agent` service sends the data over to the Agent service.
 
 ### podman-auto-update
-Podman auto update is responsible for updating the image of containers in case there is a new release of the image. We use default `podman-auto-update.timer`, which executes `podman-auto-update` every 24hours.
+Podman auto update is responsible for updating the image of the containers in case there is a new image release. The default `podman-auto-update.timer` is used, which executes `podman-auto-update` every 24 hours.
 
 ## Troubleshooting Agent VM services
-Usefull commands to troubleshoot Agent VM. Note that all the containers are running under `core` user.
+Useful commands to troubleshoot the Agent VM. Note that all the containers are running under the `core` user.
 
 ### Listing the running podman containers
 ```
@@ -38,13 +38,13 @@ $ systemctl --user status planner-*
 ```
 
 ### Inspecting the shared volume
-We create a shared volume between containers, so we can share information between collector and agent container.
-In order to expore the data stored in the volume find the mountpoint of the volume:
+A shared volume is created between containers, so that information can be shared between the `planner-agent-collector` and `planner-agent` containers.
+In order to explore the data stored in the volume, find the mountpoint of the volume:
 ```
 $ podman volume inspect planner.volume | jq .[0].Mountpoint
 ```
-And then you can explore relevant data. Like `config.yaml`, `credentials.json`, `inventory.json`, etc.
+And then the relevant data can be explored, such as `config.yaml`, `credentials.json`, `inventory.json`, etc.
 ```
 $ ls /var/home/core/.local/share/containers/storage/volumes/planner.volume/_data
 $ cat /var/home/core/.local/share/containers/storage/volumes/planner.volume/_data/config.yaml
@@ -65,7 +65,7 @@ $ journalctl --user -f -u planner-*
 ```
 
 ### Status is `Not connected` after VM is booted.
-This isually indicates that `planner-agent` service can't communicate with the Agent service.
+This usually indicates that the `planner-agent` service can't communicate with the Agent service.
 Check the logs of the `planner-agent` service:
 ```
 journalctl --user -f -u planner-agent
@@ -74,4 +74,4 @@ And search for the error in the log:
 ```
 level=error msg="failed connecting to migration planner: dial tcp: http://non-working-ip:7443
 ```
-Make sure `non-working-ip` has properly setup Agent service and is listening on port `7443`.
+Make sure `non-working-ip` has a properly set up Agent service and is listening on port `7443`.
diff --git a/doc/deployment.md b/doc/deployment.md
index 544c1e7..951e4ab 100644
--- a/doc/deployment.md
+++ b/doc/deployment.md
@@ -1,23 +1,23 @@
-# Deployment of the agent service
-The project contains yaml files for Openshift deployment. This document describes the deployment process.
-By default we deploy images from `quay.io/kubev2v` namespace. We push latest images after every merge of the PRs.
+# Deployment of the Agent service on OpenShift
+The project contains yaml files for deploying the Agent service on OpenShift. This document describes the deployment process.
+By default, images are deployed from the `quay.io/kubev2v` namespace. New images are built and pushed to quay.io after each PR is merged in this repo.
 
-## Deploy on openshift
-In order to deploy the Agent service on top of Openshift there is Makefile target called `deploy-on-openshift`.
+## Deploy on OpenShift
+In order to deploy the Agent service on top of OpenShift, there is a Makefile target called `deploy-on-openshift`.
 ```
 $ oc login --token=$TOKEN --server=$SERVER
 $ make deploy-on-openshift
 ```
-The deployment process deploys all relevant parts of the project including the UI and database.
+The deployment process deploys all relevant parts of the project, including the UI and database.
 
-To undeploy the project, which removes all the relevent parts run:
+To undeploy the project, which removes all the relevant parts, run:
 ```
 make undeploy-on-openshift
 ```
 
-## Using custom images of API/UI
+## Using custom images for the Agent Service API and UI
 
 If you want to deploy the project with your own images you can specify custom enviroment variables:
 ```
@@ -26,8 +26,8 @@ export MIGRATION_PLANNER_UI_IMAGE=quay.io/$USER/migration-planner-ui
 
 make deploy-on-openshift
 ```
 
-## Using custom images of Agent
-Agent images are defined in the ignition file. So in order to modify the images of the Agent you need to pass the specific environment variables to the deployment of API service. Modify `deploy/k8s/migration-planner.yaml` and add relevant env variable for example:
+## Using custom Agent images in the Agent OVA
+Agent images are defined in the ignition file. In order to modify the images of the Agent, you need to pass the specific environment variables to the deployment of the API service. Modify `deploy/k8s/migration-planner.yaml` and add relevant environment variables to the deployment manifest. For example:
 ```
 env: