Merge pull request #47 from chadcrum/ECOPROJECT-2275-cleanup-some-grammatical-errors-in-build-and-deploy-service-pods

Cleanup some grammatical errors in build and deploy service pod docs
tupyy authored Oct 16, 2024
2 parents 9dbf908 + bc01aa8 commit d44ee36
Showing 3 changed files with 29 additions and 29 deletions.
6 changes: 3 additions & 3 deletions doc/agentservice.md
@@ -1,11 +1,11 @@
# Agent service
Agent service is responsible for serving collected data to the user. Once user create a source for his vCenter environment the Agent service provide a streaming service to download OVA image that is ready to be booted on the vCenter enviroment to run the collection of the data.
The Agent service is responsible for receiving and serving the collected vCenter data to the user. Once the user creates a source for their vCenter environment, the Agent service will provide a streaming service to download an OVA image. The OVA image can be booted on the vCenter environment to perform the collection of the vCenter data.

## Agent API
There are two APIs related to the Agent.

### Internal API
Internal Agent API exposed for the UI. This API contains operations to create source, download OVA, etc. By default running on port 3443. This API is not exposed externaly to users, it's used only internally by UI.
The API contains operations to create a source, download the OVA image, etc. By default it runs on tcp port 3443. The API is not exposed externally to users, as it is only used internally by the UI.

### Agent API
The Agent API is exposed for the communication with the Agent VM. The only operation is to update the status of the source. By default running on port 7443. This API must be externally exposed, so agent VM can send over data.
The Agent API is exposed to communicate with the Agent VM. Its only operation is to update the status of the source. By default it runs on tcp port 7443. This API must be externally exposed so that the agent VM can initiate communication with it.
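As a rough sanity check, reachability of the externally exposed Agent API port can be probed from the network where the agent VM runs. This is a minimal sketch; the hostname is a placeholder and the exact routes and TLS setup are deployment-specific.
```
# Placeholder hostname; substitute the address where the Agent service is exposed
$ nc -zv agent-service.example.com 7443
$ curl -ksv https://agent-service.example.com:7443/ -o /dev/null
```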
32 changes: 16 additions & 16 deletions doc/agentvm.md
@@ -1,31 +1,31 @@
# Agent virtual machine
The agent, based on Red Hat CoreOS (RHCOS), communicates with the Agent Service and reports its status.
The agent virtual machine is initialized using ignition, which configures multiple containers that run as systemd services. Each of these services is dedicated to a specific function.
The agent virtual machine, based on Red Hat CoreOS (RHCOS), communicates with the Agent Service and reports its status.
The VM is initialized using ignition, which configures multiple containers that run as systemd services. Each of these services is dedicated to a specific function.

## Systemd services
Follows the list of systemd services that can be found on agent virtual machine. All of the services
The following is a list of the systemd services found on the agent virtual machine. All of the services
are defined as quadlets. Quadlet configuration can be found in the [ignition template file](../data/config.ign.template).
Agent dockerfile can be found [here](../Containerfile.agent), the collector containerfile is [here](../Containerfile.collector).
The Agent containerfile can be found [here](../Containerfile.agent). The collector containerfile is [here](../Containerfile.collector).
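For orientation, the generated units can be inspected directly on the agent VM. The unit name and quadlet path below are assumptions based on common rootless Podman locations; check the ignition template for the exact names.
```
# List the quadlet definitions written by ignition (the location may differ)
$ ls ~/.config/containers/systemd/
# Show the systemd unit that the quadlet generator produced for one of the services
$ systemctl --user cat planner-agent.service
```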

### planner-setup
Planner-setup service is responsible for inicializing the volume with data, that are shared between `planner-agent` and `planner-agent-collector`.
Planner-setup service is responsible for initializing the volume with data that is shared between the `planner-agent` and the `planner-agent-collector`.
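A quick way to confirm the shared volume was created (the volume name matches the one used in the troubleshooting section below):
```
# Exit code 0 means the volume exists
$ podman volume exists planner.volume && echo "planner.volume is present"
```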

### planner-agent
Planner-agent is a service that reports the status to the Agent service. The URL of the Agent service is configured in `$HOME/vol/config.yaml` file, which is injected via ignition.
Planner-agent is a service that reports the status to the Agent service. The URL of the Agent service is configured in the file `$HOME/vol/config.yaml`, which is injected via ignition.

Planner-agent contains web application that is exposed via port 3333. Once user access the web app and enter the credentials of the vCenter, `credentials.json` file is created in the shared volume, and `planner-agent-collector` can be spawned.
The Planner-agent contains a web application that is exposed via tcp port 3333. Once the user accesses the web application and enters the credentials of their vCenter, the `credentials.json` file is created on the shared volume and the `planner-agent-collector` container is spawned.
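To verify this flow on the agent VM, assuming the web application answers plain HTTP on port 3333 and using the volume mountpoint from the troubleshooting section below (the exact location of the file inside the volume may vary):
```
# Probe the planner-agent web application
$ curl -s -o /dev/null -w '%{http_code}\n' http://localhost:3333/
# After the vCenter credentials are submitted, the file should appear on the shared volume
$ ls /var/home/core/.local/share/containers/storage/volumes/planner.volume/_data/credentials.json
```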

### planner-agent-opa
Planner-agent-opa is a service that re-uses [forklift validation](https://github.com/kubev2v/forklift/blob/main/validation/README.adoc) container. The forklift validation container is responsible for vCenter data validation. When `planner-agent-collector` fetch vCenter data it's validated against the OPA server and report is shared back to Agent Service.
Planner-agent-opa is a service that re-uses the [forklift validation](https://github.com/kubev2v/forklift/blob/main/validation/README.adoc) container. The forklift validation container is responsible for vCenter data validation. When the `planner-agent-collector` fetches vCenter data, it's validated against the OPA server and the report is shared back to the Agent Service.
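To check that the OPA server is up on the agent VM, its health endpoint can be queried. The port is an assumption based on OPA's default (8181); the forklift validation image may use a different one.
```
# OPA answers on its API port; an empty JSON object means healthy
$ curl -s http://localhost:8181/health
```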

### planner-agent-collector
Planner-agent-collector service waits until user enter vCenter credentials, once credentials are entered the vCenter data are collected. The data are stored in `$HOME/vol/data/inventory.json`. Once `invetory.json` is created `planner-agent` service send the data over to Agent service.
Planner-agent-collector service waits until the user enters the vCenter credentials in the `planner-agent` web application. Once the credentials are entered, the vCenter data is collected. The data is stored in `$HOME/vol/data/inventory.json`. Once `inventory.json` is created, the `planner-agent` service sends the data over to the Agent service.
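Once collection has finished, the result can be inspected from the host side of the shared volume. The relative path is an assumption based on the `$HOME/vol/data/inventory.json` location mentioned above (the volume is expected to be mounted at `$HOME/vol` inside the container).
```
# Locate the volume on the host, then pretty-print the collected inventory
$ MP=$(podman volume inspect planner.volume | jq -r .[0].Mountpoint)
$ jq . "$MP/data/inventory.json" | head
```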

### podman-auto-update
Podman auto update is responsible for updating the image of containers in case there is a new release of the image. We use default `podman-auto-update.timer`, which executes `podman-auto-update` every 24hours.
Podman auto update is responsible for updating the image of the containers in case there is a new image release. The default `podman-auto-update.timer` is used, which executes `podman-auto-update` every 24 hours.
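Both the timer schedule and the pending updates can be checked without triggering an update. Note that auto-update only affects containers whose image is labelled for it.
```
# Show when the auto-update timer fires next
$ systemctl --user list-timers podman-auto-update.timer
# Report which containers would be updated, without updating them
$ podman auto-update --dry-run
```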

## Troubleshooting Agent VM services
Usefull commands to troubleshoot Agent VM. Note that all the containers are running under `core` user.
Useful commands to troubleshoot the Agent VM. Note that all the containers are running under the `core` user.

### Listing the running podman containers
```
@@ -38,13 +38,13 @@ $ systemctl --user status planner-*
```

### Inspecting the shared volume
We create a shared volume between containers, so we can share information between collector and agent container.
In order to expore the data stored in the volume find the mountpoint of the volume:
A shared volume is created between containers, so that information can be shared between the `planner-agent-collector` and `planner-agent` containers.
In order to explore the data stored in the volume, find the mountpoint of the volume:
```
$ podman volume inspect planner.volume | jq .[0].Mountpoint
```

And then you can explore relevant data. Like `config.yaml`, `credentials.json`, `inventory.json`, etc.
And then the relevant data can be explored, such as: `config.yaml`, `credentials.json`, `inventory.json`, etc.
```
$ ls /var/home/core/.local/share/containers/storage/volumes/planner.volume/_data
$ cat /var/home/core/.local/share/containers/storage/volumes/planner.volume/_data/config.yaml
@@ -65,7 +65,7 @@ $ journalctl --user -f -u planner-*
```

### Status is `Not connected` after VM is booted.
This isually indicates that `planner-agent` service can't communicate with the Agent service.
This usually indicates that the `planner-agent` service can't communicate with the Agent service.
Check the logs of the `planner-agent` service:
```
journalctl --user -f -u planner-agent
@@ -74,4 +74,4 @@ And search for the error in the log:
```
level=error msg="failed connecting to migration planner: dial tcp: http://non-working-ip:7443
```
Make sure `non-working-ip` has properly setup Agent service and is listening on port `7443`.
Make sure the Agent service at `non-working-ip` is properly set up and listening on port `7443`.
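Basic reachability from the agent VM can be tested before digging further. `non-working-ip` is the placeholder taken from the log line above; whether the endpoint speaks plain HTTP or TLS depends on the deployment.
```
# Confirm the Agent service port is reachable from the agent VM
$ nc -zv non-working-ip 7443
$ curl -kv https://non-working-ip:7443/ -o /dev/null
```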
20 changes: 10 additions & 10 deletions doc/deployment.md
@@ -1,23 +1,23 @@
# Deployment of the agent service
The project contains yaml files for Openshift deployment. This document describes the deployment process.
By default we deploy images from `quay.io/kubev2v` namespace. We push latest images after every merge of the PRs.
# Deployment of the Agent service on OpenShift
The project contains yaml files for deploying the Agent service on OpenShift. This document describes the deployment process.
By default images are deployed from the `quay.io/kubev2v` namespace. New images are built and pushed to quay after each PR is merged in this repo.

## Deploy on openshift
In order to deploy the Agent service on top of Openshift there is Makefile target called `deploy-on-openshift`.
## Deploy on OpenShift
In order to deploy the Agent service on top of OpenShift, there is a Makefile target called `deploy-on-openshift`.

```
$ oc login --token=$TOKEN --server=$SERVER
$ make deploy-on-openshift
```

The deployment process deploys all relevant parts of the project including the UI and database.
The deployment process deploys all relevant parts of the project, including the UI and database.
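After the Makefile target finishes, a quick check that everything came up. The namespace below is an assumption; use whichever project the manifests were applied to.
```
# The API, UI and database pods should all reach the Running state
$ oc get pods -n migration-planner
```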

To undeploy the project, which removes all the relevent parts run:
To undeploy the project, which removes all the relevant parts, run:
```
make undeploy-on-openshift
```

## Using custom images of API/UI
## Using custom images for the Agent Service API and UI
If you want to deploy the project with your own images, you can specify custom environment variables:

```
@@ -26,8 +26,8 @@ export MIGRATION_PLANNER_UI_IMAGE=quay.io/$USER/migration-planner-ui
make deploy-on-openshift
```

## Using custom images of Agent
Agent images are defined in the ignition file. So in order to modify the images of the Agent you need to pass the specific environment variables to the deployment of API service. Modify `deploy/k8s/migration-planner.yaml` and add relevant env variable for example:
## Using custom Agent images in the Agent OVA
Agent images are defined in the ignition file. In order to modify the images of the Agent you need to pass the specific environment variables to the deployment of the API service. Modify `deploy/k8s/migration-planner.yaml` and add relevant environment variables to the deployment manifest. For example:

```
env:
