Add documentation of service and agent
Signed-off-by: Ondra Machacek <[email protected]>
machacekondra authored and bardielle committed Oct 14, 2024
1 parent 10ea4ca commit e40b64c
Showing 3 changed files with 126 additions and 0 deletions.
11 changes: 11 additions & 0 deletions doc/agentservice.md
@@ -0,0 +1,11 @@
# Agent service
The Agent service is responsible for serving the collected data to the user. Once a user creates a source for their vCenter environment, the Agent service provides a streaming service to download an OVA image that is ready to be booted in the vCenter environment to run the data collection.

## Agent API
There are two APIs related to the Agent.

### Internal API
The internal Agent API is exposed for the UI. It contains operations to create a source, download the OVA, etc. By default it runs on port 3443. This API is not exposed externally to users; it is used only internally by the UI.

### Agent API
The Agent API is exposed for communication with the Agent VM. Its only operation is updating the status of a source. By default it runs on port 7443. This API must be exposed externally so that the agent VM can send its data over.
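
As a quick sanity check, you can verify that both APIs are listening on their default ports from the host running the Agent service. This is only a connectivity sketch: the root path below is a placeholder rather than a documented API route, and the scheme and ports may differ in your deployment.

```
# Internal API (UI-facing), default port 3443 -- placeholder path, only checks reachability
$ curl -ks -o /dev/null -w '%{http_code}\n' https://localhost:3443/
# Agent API (agent VM-facing), default port 7443
$ curl -ks -o /dev/null -w '%{http_code}\n' https://localhost:7443/
```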
77 changes: 77 additions & 0 deletions doc/agentvm.md
@@ -0,0 +1,77 @@
# Agent virtual machine
The agent, based on Red Hat CoreOS (RHCOS), communicates with the Agent service and reports its status.
The agent virtual machine is initialized using ignition, which configures multiple containers that run as systemd services. Each of these services is dedicated to a specific function.

## Systemd services
The following systemd services can be found on the agent virtual machine. All of the services
are defined as Quadlets. The Quadlet configuration can be found in the [ignition template file](../data/config.ign.template).
The agent Containerfile can be found [here](../Containerfile.agent); the collector Containerfile is [here](../Containerfile.collector).
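
A Quadlet is just a systemd unit generated from a `*.container` file. The sketch below illustrates the general shape of such a file; the image, volume, and description values are examples, not copied from the real ignition template.

```
# Illustrative *.container Quadlet file -- values are examples only
[Unit]
Description=Planner agent
Wants=network-online.target
After=network-online.target

[Container]
Image=quay.io/kubev2v/migration-planner-agent:latest
ContainerName=planner-agent
Volume=planner.volume:/vol
AutoUpdate=registry

[Install]
WantedBy=default.target
```

On the agent VM, the unit that podman generates from a Quadlet can be inspected with `systemctl --user cat planner-agent`.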

### planner-setup
The planner-setup service is responsible for initializing the volume with the data that is shared between `planner-agent` and `planner-agent-collector`.

### planner-agent
Planner-agent is a service that reports the status to the Agent service. The URL of the Agent service is configured in the `$HOME/vol/config.yaml` file, which is injected via ignition.

Planner-agent contains a web application that is exposed on port 3333. Once the user accesses the web app and enters the vCenter credentials, a `credentials.json` file is created in the shared volume and `planner-agent-collector` can be spawned.

### planner-agent-opa
Planner-agent-opa is a service that re-uses the [forklift validation](https://github.com/kubev2v/forklift/blob/main/validation/README.adoc) container. The forklift validation container is responsible for vCenter data validation. When `planner-agent-collector` fetches vCenter data, it is validated against the OPA server and the report is shared back to the Agent service.

### planner-agent-collector
The planner-agent-collector service waits until the user enters the vCenter credentials; once they are entered, the vCenter data is collected. The data is stored in `$HOME/vol/data/inventory.json`. Once `inventory.json` is created, the `planner-agent` service sends the data over to the Agent service.

### podman-auto-update
Podman auto-update is responsible for updating the container images whenever a new release of an image is published. We use the default `podman-auto-update.timer`, which executes `podman-auto-update` every 24 hours.
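
To see when the next automatic update will run, or to preview which containers would be updated, the standard Podman commands can be used (run as the `core` user):

```
$ systemctl --user list-timers podman-auto-update.timer
$ podman auto-update --dry-run
```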

## Troubleshooting Agent VM services
Useful commands to troubleshoot the Agent VM. Note that all of the containers run under the `core` user.

### Listing the running podman containers
```
$ podman ps
```

### Checking the status of all our services
```
$ systemctl --user status planner-*
```

### Inspecting the shared volume
We create a shared volume between the containers, so information can be shared between the collector and the agent container.
In order to explore the data stored in the volume, find the mountpoint of the volume:
```
$ podman volume inspect planner.volume | jq .[0].Mountpoint
```

Then you can explore the relevant data, such as `config.yaml`, `credentials.json`, `inventory.json`, etc.
```
$ ls /var/home/core/.local/share/containers/storage/volumes/planner.volume/_data
$ cat /var/home/core/.local/share/containers/storage/volumes/planner.volume/_data/config.yaml
$ cat /var/home/core/.local/share/containers/storage/volumes/planner.volume/_data/data/credentials.json
$ cat /var/home/core/.local/share/containers/storage/volumes/planner.volume/_data/data/inventory.json
```
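
If `jq` is available on the host, the collected inventory can also be pretty-printed or queried directly. The structure of `inventory.json` is not documented here, so which keys are present depends on the collector output:

```
$ jq . /var/home/core/.local/share/containers/storage/volumes/planner.volume/_data/data/inventory.json
```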

### Inspecting the host directory with data
The ignition creates a `vol` directory in the `core` user's home directory.
This directory should contain all relevant data, so in order to find a misconfiguration, please search this directory.
```
$ ls -l vol
```

### Check logs of the services
```
$ journalctl --user -f -u planner-*
```

### Status is `Not connected` after the VM is booted
This usually indicates that the `planner-agent` service can't communicate with the Agent service.
Check the logs of the `planner-agent` service:
```
journalctl --user -f -u planner-agent
```
And search for the error in the log:
```
level=error msg="failed connecting to migration planner: dial tcp: http://non-working-ip:7443
```
Make sure the Agent service is properly set up at `non-working-ip` and is listening on port `7443`.
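
To distinguish a networking problem from a misconfigured Agent service, a basic connectivity check from the agent VM can help. This is only a sketch; replace the address with the value configured in `config.yaml` and adjust the scheme if your deployment does not use TLS:

```
$ curl -kv https://non-working-ip:7443
```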
38 changes: 38 additions & 0 deletions doc/deployment.md
@@ -0,0 +1,38 @@
# Deployment of the agent service
The project contains YAML files for OpenShift deployment. This document describes the deployment process.
By default we deploy images from the `quay.io/kubev2v` namespace. We push the latest images after every merged PR.

## Deploy on openshift
In order to deploy the Agent service on top of OpenShift, there is a Makefile target called `deploy-on-openshift`.

```
$ oc login --token=$TOKEN --server=$SERVER
$ make deploy-on-openshift
```

The deployment process deploys all relevant parts of the project, including the UI and the database.
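
After the target finishes, you can check that the pods came up and that a route to the service was created. The exact pod and route names depend on the manifests in `deploy/k8s`, so this is only a sketch:

```
$ oc get pods
$ oc get routes
```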

To undeploy the project, which removes all the relevant parts, run:
```
make undeploy-on-openshift
```

## Using custom images of API/UI
If you want to deploy the project with your own images, you can specify custom environment variables:

```
export MIGRATION_PLANNER_API_IMAGE=quay.io/$USER/migration-planner-api
export MIGRATION_PLANNER_UI_IMAGE=quay.io/$USER/migration-planner-ui
make deploy-on-openshift
```
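
The images referenced by these variables must be pushed to a registry that the cluster can pull from. A minimal sketch of building and pushing a custom API image is shown below; the Containerfile name is an assumption, so adjust it to the repository's actual Containerfile:

```
# Containerfile.api is an assumed name -- use the repository's real Containerfile for the API
$ podman build -t quay.io/$USER/migration-planner-api -f Containerfile.api .
$ podman push quay.io/$USER/migration-planner-api
```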

## Using custom images of Agent
Agent images are defined in the ignition file, so in order to modify the Agent images you need to pass specific environment variables to the deployment of the API service. Modify `deploy/k8s/migration-planner.yaml` and add the relevant environment variables, for example:

```
env:
- name: MIGRATION_PLANNER_COLLECTOR_IMAGE
value: quay.io/$USER/migration-planner-collector
- name: MIGRATION_PLANNER_AGENT_IMAGE
value: quay.io/$USER/migration-planner-agent
```
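
Alternatively, the variables can be set on an already-deployed API service without editing the manifest, assuming the Deployment is named `migration-planner` (check the actual name with `oc get deployments`):

```
$ oc set env deployment/migration-planner \
    MIGRATION_PLANNER_COLLECTOR_IMAGE=quay.io/$USER/migration-planner-collector \
    MIGRATION_PLANNER_AGENT_IMAGE=quay.io/$USER/migration-planner-agent
```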
