diff --git a/.gitignore b/.gitignore
index d6667caa..9d2f041f 100644
--- a/.gitignore
+++ b/.gitignore
@@ -31,7 +31,10 @@ output/**
 # Testing purposes
 sandbox/cluster_up/test.sh
+
+# Python
 .snowdrop-venv/
+.*env
 
 #######
 # IDE #
diff --git a/README.adoc b/README.adoc
index 698f8512..32d370dd 100644
--- a/README.adoc
+++ b/README.adoc
@@ -14,6 +14,7 @@ endif::[]
 
 == Introduction
 
+[.lead]
 This project details the `prerequisites` and `steps` necessary to automate the installation of a Kubernetes (aka k8s) cluster or Openshift 4 top of one of the following cloud provider:
 
 * Red Hat https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/[OpenStack] (RHOS-PSI)
@@ -31,13 +32,13 @@ NOTE: kind is not a cloud provider but a tool able to run a k8s cluster on a con
 All the commands mentioned on this project are to be executed at the root folder of the repository, except if stated otherwise.
 ====
 
-This project uses Ansible. Check the link:ansible/README.adoc[Ansible Document] for the
-installation and usage instrutions.
-
 == Prerequisites
 
+This project uses Ansible. Check the link:ansible/README.adoc[Ansible Document] for the
+installation and usage instructions.
+
 To use the scripts, playbooks, part of this project, some prerequisites are needed. It is not mandatory to install
-all of them and the following chapters will mention which ones are needed.
+all of them and the following chapters will mention which ones are needed.
 
 * https://kind.sigs.k8s.io/docs/user/quick-start/#installation[kind]
 * https://docs.docker.com/engine/install/[Docker] or https://podman.io/docs/installation[podman]
@@ -45,14 +46,52 @@ all of them and the following chapters will mention which ones are needed.
 * https://www.python.org/downloads/[Python]. Version >= 3.11
 * https://www.passwordstore.org/[passwordstore]
 * https://github.com/hetznercloud/cli[hcloud] (optional)
+
+=== Python
+
+Several requirements are provided as Python libraries, including Ansible,
+ and are identified in the link:requirements.txt[] file.
+
+Using a Python Virtual Environment is recommended and can be created using
+ the following command:
+
+[source,bash]
+----
+python3 -m venv .snowdrop-venv
+----
+
+After creating the virtual environment, activate it with the following command:
+
+[source,bash]
+----
+source .snowdrop-venv/bin/activate
+----
+
+The venv will be in use when the `(.snowdrop-venv)` prefix is shown on the bash prompt.
+
+The Python requirements can be installed by executing:
+
+[source,bash]
+----
+pip3 install -r requirements.txt
+----
+
+[NOTE]
+====
+For more information check the link:ansible/README.adoc#python-venv[Python Virtual Env] section on our Ansible README.
+====
 
-== Locally
+=== Ansible
 
-The word `locally` should be understood as the process to run on your developer laptop the cluster, using also a CI/CD platform
-such as GitHub Actions, etc.
+Several Ansible Galaxy collections are used as part of this project and
+ are listed in the link:collections/requirements.yml[] file.
+ To install them, execute the following command.
-We recommend to use 2 tools to run locally a kubernetes cluster: kind or minikube
+[source,bash]
+----
+ansible-galaxy collection install -r ./collections/requirements.yml --upgrade
+----
 
 === Kind
 
@@ -76,7 +115,7 @@ The provisioning process towards the cloud providers relies on the following ass
 - Password store is installed/configured and needed k/v created
 - Flavor, volume, capacity (cpu/ram/volume) and OS can be mapped with the playbook of the target cloud provider
 - Permissions have been set to allow to provision a VM top of the target cloud provider
-- Ssh key exist and has been imported (or could be created during provisioning process)
+- SSH key exists and has been imported (or could be created during the provisioning process)
 
 and will include the following basic steps:
 
@@ -96,7 +135,8 @@ This section details how to provision an Openshift 4 cluster using one of Red Ha
 
 _Tools: password store, ansible_
 
-The link:openstack/README.adoc[OpenStack] page explains how to create an OpenStack cloud vm using
+The link:openstack/README.adoc[OpenStack] page explains the process using
+ the RHOS cloud provider.
 
 ==== https://resourcehub.redhat.com/[Resource Hub]
 
diff --git a/ansible.cfg b/ansible.cfg
index b62f16cf..71cd5bdb 100644
--- a/ansible.cfg
+++ b/ansible.cfg
@@ -7,6 +7,9 @@ hash_behaviour = merge
 roles_path = ansible/roles/
 callback_whitelist = profile_tasks
-gather_timeout = 5000
+gather_timeout = 30000
 log_path=/opt/log/ansible.log
+
+ansible_python_interpreter=/usr/bin/python3
+interpreter_python=auto
diff --git a/ansible/README.adoc b/ansible/README.adoc
index f7f29bfc..e958a4bb 100644
--- a/ansible/README.adoc
+++ b/ansible/README.adoc
@@ -1,12 +1,9 @@
 = Ansible
 Snowdrop Team (Antonio costa)
-Snowdrop Team (Antonio costa)
 :icons: font
 :revdate: {docdate}
-:revdate: {docdate}
 :toc: left
 :toclevels: 3
-:toclevels: 3
 :description: This document introduces some of the key concepts that you should be aware when you play with Ansible in order to configure the environment to let Ansible to access the different machines.
 ifdef::env-github[]
 :tip-caption: :bulb:
@@ -15,13 +12,6 @@ ifdef::env-github[]
 :caution-caption: :fire:
 :warning-caption: :warning:
 endif::[]
-ifdef::env-github[]
-:tip-caption: :bulb:
-:note-caption: :information_source:
-:important-caption: :heavy_exclamation_mark:
-:caution-caption: :fire:
-:warning-caption: :warning:
-endif::[]
 
 == Conventions
 
@@ -32,6 +22,11 @@ The exception goes to the playbooks that are executed against `localhost`. This
 
 NOTE: Check the Ansible https://docs.ansible.com/ansible/latest/reference_appendices/release_and_maintenance.html#ansible-core-support-matrix[requirement] page for Python compatibility !
 
+[#ansible-inventory]
+== Ansible Inventory
+
+The Ansible Inventory is managed by a mix of a passwordstore database and static files. Check the link:ansible-inventory.adoc[Ansible Inventory] document for more details.
+
 == Installation guide
 
 In order to play with the playbooks/roles of this project, it is needed to:
@@ -48,14 +43,15 @@ In order to play with the playbooks/roles of this project, it is needed to:
 
 [NOTE]
 ====
-Since passwordstore is integrated with [git](https://git-scm.com/), all changes made locally to a pass repository are automatically committed to the local git repo.
+Since passwordstore is integrated with link:https://git-scm.com/[git], all changes made locally to a pass repository are automatically committed to the local git repo.
 ====
 
-[NOTE]
+[WARNING]
 ====
 Don't forget to `git push` and `git pull` often in order to have your local repository synchronized with other team members as well as publishing to the team your changes.
====
 
+[#python-venv]
 === Python Virtual Environments
 
 This project suggests using a link:https://docs.python.org/3/library/venv.html[python virtual environment]
@@ -282,7 +278,7 @@ Because a host can already be defined under the store, prior to execute the play
 
 [source,bash]
 ----
-$ pass hetzner
+pass hetzner
 hetzner
 ├── ...
 ├── host-1
@@ -312,7 +308,7 @@ If a host has already been created, it can be imported within the inventory usin
 
 [source,bash]
 ----
-$ ansible-playbook ansible/playbook/passstore_controller_inventory.yml -e vm_name= -e pass_provider=hetzner
+ansible-playbook ansible/playbook/passstore_controller_inventory.yml -e vm_name= -e pass_provider=hetzner
 ----
 
 where `` corresponds to the host key created under `hetzner`.
@@ -334,7 +330,7 @@ This is done using the `passstore_controller_inventory_remove` playbook. More in
 
 [source,bash]
 ----
-$ ansible-playbook ansible/playbook/passstore_controller_inventory_remove.yml -e vm_name= -e pass_provider=
+ansible-playbook ansible/playbook/passstore_controller_inventory_remove.yml -e vm_name= -e pass_provider=
 ----
 
 === Create Server
diff --git a/ansible/ansible-inventory.adoc b/ansible/ansible-inventory.adoc
index 25c58d12..2ca06820 100644
--- a/ansible/ansible-inventory.adoc
+++ b/ansible/ansible-inventory.adoc
@@ -1,6 +1,25 @@
 = Ansible Inventory
+:icons: font
+:revdate: {docdate}
 :toc: left
-:description: This document describes the Ansible inventory implementation.
+:toclevels: 3
+:description: Ansible Inventory
+ifdef::env-github[]
+:tip-caption: :bulb:
+:note-caption: :information_source:
+:important-caption: :heavy_exclamation_mark:
+:caution-caption: :fire:
+:warning-caption: :warning:
+endif::[]
+
+== Introduction
+
+[.lead]
+This document describes the implementation of the Ansible Inventory on this
+ project.
+
+It uses a mix of a passwordstore database and static files to maintain all
+ host information and properties.
 
 == Introduction to Ansible Inventory
 
@@ -14,7 +33,7 @@ The two most important files are:
 
 **Remark**: More information on the Ansible Inventory and how to build it is defined within the [Ansible User Guide](https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html).
 
-=== `hosts.yml`
+== `hosts.yml` file
 
 This file contains static information for the inventory such as:
 
 * Group
@@ -24,7 +43,8 @@ This file contains static information for the inventory such as:
 
 Here is a sample of a *hosts yaml* file designed using YAML.
 
-```yaml
+[source,yaml]
+----
 all: # keys must be unique, i.e. only one 'hosts' per group
     hosts:
         host1:
@@ -51,15 +71,17 @@ all: # keys must be unique, i.e. only one 'hosts' per group
                 host1:
         vars:
             group_last_var: value
-```
+----
 
 More information on these documents are available:
 
-* [yaml – Uses a specific YAML file as an inventory source](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/yaml_inventory.html)
-* [How to build your inventory](https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html)
-This project already includes a static inventory, at [../inventory/hosts.yml](../inventory/hosts.yml) file.
+* link:https://docs.ansible.com/ansible/latest/collections/ansible/builtin/yaml_inventory.html[yaml – Uses a specific YAML file as an inventory source]
+* link:https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html[How to build your inventory]
 
-=== Groups
+This project already includes a static inventory, at the
+ link:../inventory/hosts.yml[] file.
+
+== Groups
 
 Ansible hosts can be grouped into...well groups.
This allows the execution of playbooks and the definition of variables in a common matter for different hosts.
 
@@ -68,7 +90,8 @@ values assigned to each group.
 
 Host group assignment is made in `passstore` by managing entries in the `provider/host/groups` folder being each entry a group to which the host belongs.
 
-```text
+[source]
+----
 ├── provider
 |   ├── host_1
 │   │   ├── groups
 │   │   │   ├── group_1
 │   │   │   ├── group_2
 |   ├── host_2
 │   │   ├── groups
 │   │   │   ├── group_2
 │   │   │   ├── group_3
-```
+----
 
 For instance, we wanted to define the ports that a k8s master needs to open. This has been done in the `hosts.yml` file having the following variable assigned to the `masters` group, which is also inside a group structure so other variables are inherited.
 
-```
+[source,yml]
+----
 firewalld_public_ports:
   - 6443/tcp
   - 10250/tcp
   - 10255/tcp
   - 8472/udp
   - 30000-32767/tcp
-```
+----
 
 For information regarding actually managing host-group assignment check the [`passstore_manage_host_groups` section](#passstore_manage_host_groups).
 
+== `pass_inventory.py` Inventory Python script
+
+This Python script builds the Ansible Inventory from the passwordstore database.
+
+To collect information on a host, execute the following command.
+
+[source,bash]
+----
+./ansible/inventory/pass_inventory.py --host <1>
+----
+<1> Name of the host in the Ansible inventory.
+
+.Example
+[source,bash]
+----
+./ansible/inventory/pass_inventory.py --host ocp-xyz-tmp-bootstrap-server
+----
+
+To list the whole inventory, simply execute it with the `--list` attribute.
+
+[source,bash]
+----
+./ansible/inventory/pass_inventory.py --list
+----
+
diff --git a/ansible/ansible_collections/snowdrop/godaddy/playbooks/roles b/ansible/ansible_collections/snowdrop/godaddy/playbooks/roles
new file mode 120000
index 00000000..7b9ade87
--- /dev/null
+++ b/ansible/ansible_collections/snowdrop/godaddy/playbooks/roles
@@ -0,0 +1 @@
+../roles/
\ No newline at end of file
diff --git a/ansible/inventory/pass_inventory.py b/ansible/inventory/pass_inventory.py
index 0d80b5f0..195a5a82 100755
--- a/ansible/inventory/pass_inventory.py
+++ b/ansible/inventory/pass_inventory.py
@@ -77,9 +77,20 @@
             passLines = pipe.stdout.readlines()
             passEntry = passLines[0].replace('\n', '')
             if ('os_user' == passEntryName):
+                host_vars.update({passEntryName:passEntry})
                 host_vars.update({'ansible_user':passEntry})
             elif ('ip_address' == passEntryName):
+                host_vars.update({passEntryName:passEntry})
+                if (not 'floating_ip' in host_vars):
+                    host_vars.update({'ansible_ssh_host':passEntry})
+            elif ('floating_ip' == passEntryName):
+                host_vars.update({passEntryName:passEntry})
+                # floating_ip overrides any other host variable
                 host_vars.update({'ansible_ssh_host':passEntry})
+            elif ('ansible_ssh_host' == passEntryName):
+                if (not 'ansible_ssh_host' in host_vars):
+                    host_vars.update({'ansible_ssh_host':passEntry})
             # elif ('ssh_port' == passEntryName):
             #     host_vars.update({'ansible_ssh_port':passEntry})
             else:
diff --git a/ansible/playbook/README.adoc b/ansible/playbook/README.adoc
index e30f3f84..6134ad2f 100644
--- a/ansible/playbook/README.adoc
+++ b/ansible/playbook/README.adoc
@@ -28,7 +28,7 @@ This playbook will create a passwordstore folder structure that will be the base
 An example:
 
 ```bash
-$ ansible-playbook ansible/playbook/passstore_controller_inventory.yml -e vm_name=my-host -e pass_provider=hetzner -e k8s_type=masters -e k8s_version=115 --tags create
+ansible-playbook ansible/playbook/passstore_controller_inventory.yml -e vm_name=my-host -e pass_provider=hetzner -e k8s_type=masters -e k8s_version=115 --tags create
```
 
 This execution would generate the following `pass` structure:
 
@@ -61,7 +61,7 @@ $ ls -l ~/.ssh/
 
 This playbook will remove the records and files created by the [`passstore_controller_inventory`](#passstore_controller_inventory) playbook.
 
 ```bash
-$ ansible-playbook ansible/playbook/passstore_controller_inventory_remove.yml -e vm_name=my-host -e pass_provider=hetzner
+ansible-playbook ansible/playbook/passstore_controller_inventory_remove.yml -e vm_name=my-host -e pass_provider=hetzner
 ```
 
 Variables:
@@ -126,25 +126,25 @@ This playbook allows to easily add and remove hosts from an ansible group manage
 
 **WARNING**: No entries will be added or removed using this playbook within the `hosts.yaml` file !
 
 ```bash
-$ ansible-playbook ansible/playbook/passstore_manage_host_groups.yml -e operation=add -e group_name= -e vm_name=
+ansible-playbook ansible/playbook/passstore_manage_host_groups.yml -e operation=add -e group_name= -e vm_name=
 ```
 
 For instance, adding a host named `n01-k115` to the `k8s_115` group would be done the following way:
 
 ```bash
-$ ansible-playbook ansible/playbook/passstore_manage_host_groups.yml -e operation=add -e group_name=k8s_115 -e vm_name=n01-k115
+ansible-playbook ansible/playbook/passstore_manage_host_groups.yml -e operation=add -e group_name=k8s_115 -e vm_name=n01-k115
 ```
 
 To remove the host from the group just remove the entry from the group folder as following...
 
 ```bash
-$ ansible-playbook ansible/playbook/passstore_manage_host_groups.yml -e operation=remove -e group_name= -e vm_name=
+ansible-playbook ansible/playbook/passstore_manage_host_groups.yml -e operation=remove -e group_name= -e vm_name=
 ```
 
 For instance, to undo the previous host operation:
 
 ```bash
-$ ansible-playbook ansible/playbook/passstore_manage_host_groups.yml -e operation=remove -e group_name=k8s_115 -e vm_name=n01-k115
+ansible-playbook ansible/playbook/passstore_manage_host_groups.yml -e operation=remove -e group_name=k8s_115 -e vm_name=n01-k115
 ```
 
 == Modules
diff --git a/ansible/playbook/edu/misc.yml b/ansible/playbook/edu/misc.yml
new file mode 100644
index 00000000..7f435dfd
--- /dev/null
+++ b/ansible/playbook/edu/misc.yml
@@ -0,0 +1,19 @@
+---
+- name: "Print stuff"
+  hosts: localhost
+  gather_facts: false
+
+  tasks:
+    - name: "Print ansible_python_interpreter dictionary"
+      debug:
+        var: ansible_python_interpreter
+
+    - name: "Print hostvars[inventory_hostname] dictionary"
+      debug:
+        var: hostvars[inventory_hostname]
+
+    - name: "Print hostvars['charles-vm'] dictionary"
+      debug:
+        var: hostvars['charles-vm']
+
+...
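+# Example invocation (assumes a host named "charles-vm" exists in the inventory,
+# since the last task prints that host's hostvars):
+# ansible-playbook ansible/playbook/edu/misc.yml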
diff --git a/ansible/playbook/edu/parse_json.json b/ansible/playbook/edu/parse_json.json
new file mode 100644
index 00000000..f375a08d
--- /dev/null
+++ b/ansible/playbook/edu/parse_json.json
@@ -0,0 +1,184 @@
+{
+  "servers": [
+    {
+      "access_ipv4": "192.168.99.111",
+      "access_ipv6": "aaaa:000:aaaa:0:aaaa:aaaa:aaaa:aaaa",
+      "addresses": {
+        "private": [
+          {
+            "OS-EXT-IPS-MAC:mac_addr": "aa:aa:aa:aa:aa:aa",
+            "OS-EXT-IPS:type": "fixed",
+            "addr": "aaaa:000:aaaa:0:aaaa:aaaa:aaaa:aaaa",
+            "version": 6
+          },
+          {
+            "OS-EXT-IPS-MAC:mac_addr": "aa:aa:aa:aa:aa:aa",
+            "OS-EXT-IPS:type": "fixed",
+            "addr": "10.0.0.10",
+            "version": 4
+          },
+          {
+            "OS-EXT-IPS-MAC:mac_addr": "aa:aa:aa:aa:aa:aa",
+            "OS-EXT-IPS:type": "floating",
+            "addr": "192.168.99.111",
+            "version": 4
+          }
+        ]
+      },
+      "admin_password": null,
+      "attached_volumes": [],
+      "availability_zone": "nova",
+      "block_device_mapping": null,
+      "compute_host": "fv-az1493-571",
+      "config_drive": "",
+      "created_at": "2023-12-12T11:48:56Z",
+      "description": null,
+      "disk_config": "MANUAL",
+      "fault": null,
+      "flavor": {
+        "description": null,
+        "disk": 20,
+        "ephemeral": 0,
+        "extra_specs": {
+          "hw_rng:allowed": "True"
+        },
+        "id": "m1.small",
+        "is_disabled": null,
+        "is_public": true,
+        "location": null,
+        "name": "m1.small",
+        "original_name": "m1.small",
+        "ram": 2048,
+        "rxtx_factor": null,
+        "swap": 0,
+        "vcpus": 1
+      },
+      "flavor_id": null,
+      "has_config_drive": "",
+      "host_id": "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
+      "host_status": "UP",
+      "hostname": "ansible-molecule-snowdrop-openstack-test",
+      "hypervisor_hostname": "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa.localdomain",
+      "id": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa",
+      "image": {
+        "architecture": null,
+        "checksum": null,
+        "container_format": null,
+        "created_at": null,
+        "direct_url": null,
+        "disk_format": null,
+        "file": null,
+        "has_auto_disk_config": null,
+        "hash_algo": null,
+        "hash_value": null,
+        "hw_cpu_cores": null,
+        "hw_cpu_policy": null,
+        "hw_cpu_sockets": null,
+        "hw_cpu_thread_policy": null,
+        "hw_cpu_threads": null,
+        "hw_disk_bus": null,
+        "hw_machine_type": null,
+        "hw_qemu_guest_agent": null,
+        "hw_rng_model": null,
+        "hw_scsi_model": null,
+        "hw_serial_port_count": null,
+        "hw_video_model": null,
+        "hw_video_ram": null,
+        "hw_vif_model": null,
+        "hw_watchdog_action": null,
+        "hypervisor_type": null,
+        "id": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa",
+        "instance_type_rxtx_factor": null,
+        "instance_uuid": null,
+        "is_hidden": null,
+        "is_hw_boot_menu_enabled": null,
+        "is_hw_vif_multiqueue_enabled": null,
+        "is_protected": null,
+        "kernel_id": null,
+        "location": null,
+        "locations": null,
+        "metadata": null,
+        "min_disk": null,
+        "min_ram": null,
+        "name": null,
+        "needs_config_drive": null,
+        "needs_secure_boot": null,
+        "os_admin_user": null,
+        "os_command_line": null,
+        "os_distro": null,
+        "os_require_quiesce": null,
+        "os_shutdown_timeout": null,
+        "os_type": null,
+        "os_version": null,
+        "owner": null,
+        "owner_id": null,
+        "properties": {
+          "links": [
+            {
+              "href": "http://10.1.0.5/compute/images/aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa",
+              "rel": "bookmark"
+            }
+          ]
+        },
+        "ramdisk_id": null,
+        "schema": null,
+        "size": null,
+        "status": null,
+        "store": null,
+        "tags": [],
+        "updated_at": null,
+        "url": null,
+        "virtual_size": null,
+        "visibility": null,
+        "vm_mode": null,
+        "vmware_adaptertype": null,
+        "vmware_ostype": null
+      },
+      "image_id": null,
+      "instance_name": "instance-00000001",
+      "is_locked": false,
+      "kernel_id": "",
+      "key_name": "ansible_molecule_snowdrop_openstack_test",
"ansible_molecule_snowdrop_openstack_test", + "launch_index": 0, + "launched_at": "2023-12-12T11:49:01.000000", + "links": [ + { + "href": "http://10.1.0.5/compute/v2.1/servers/aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa", + "rel": "self" + }, + { + "href": "http://10.1.0.5/compute/servers/aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa", + "rel": "bookmark" + } + ], + "max_count": null, + "metadata": {}, + "min_count": null, + "name": "ansible_molecule_snowdrop_openstack_test", + "networks": null, + "power_state": 1, + "progress": 0, + "project_id": "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", + "ramdisk_id": "", + "reservation_id": "r-aaaaaa", + "root_device_name": "/dev/vda", + "scheduler_hints": null, + "security_groups": [ + { + "name": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER" + } + ], + "server_groups": null, + "status": "ACTIVE", + "tags": [], + "task_state": null, + "terminated_at": null, + "trusted_image_certificates": null, + "updated_at": "2023-12-12T11:49:01Z", + "user_data": "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", + "user_id": "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", + "vm_state": "active", + "volumes": [] + } + ] +} diff --git a/ansible/playbook/edu/parse_json.yml b/ansible/playbook/edu/parse_json.yml new file mode 100644 index 00000000..f40bc2bd --- /dev/null +++ b/ansible/playbook/edu/parse_json.yml @@ -0,0 +1,51 @@ +--- +- name: "Parse json" + hosts: localhost + gather_facts: false + + tasks: + - name: "Print ansible_python_interpreter dictionary" + ansible.builtin.set_fact: + json_contents: "{{ lookup('ansible.builtin.file', 'parse_json.json') }}" + + - name: "Show json_contents" + ansible.builtin.debug: + msg: + - "json_contents: {{ json_contents }}" + - "json_contents private: {{ json_contents | community.general.json_query('servers[0].addresses.private') }}" + + - name: "Show IPV4" + ansible.builtin.debug: + msg: + - "server_name_query: {{ json_contents | community.general.json_query(server_name_query) }}" + - "server_name_query_2: {{ json_contents | community.general.json_query(server_name_query_2) }}" + - "type_n_version_query: {{ json_contents | community.general.json_query(type_n_version_query) }}" + - "type_n_version_query 2 : {{ json_contents.servers[0].addresses.private | community.general.json_query(type_n_version_query_2) }}" + vars: + server_name_query: "servers[0].addresses.private[?\"OS-EXT-IPS:type\" == 'fixed']" + server_name_query_2: "servers[0].addresses.private[?version == `4`]" + type_n_version_query: "servers[0].addresses.private[?\"OS-EXT-IPS:type\" == 'fixed' && version == `4`]" + type_n_version_query_2: "[?\"OS-EXT-IPS:type\" == 'fixed' && version == `4`]" + + - name: "Collect host fixed IP v4" + ansible.builtin.set_fact: + openstack_vm_ipv4_address: "{{ json_contents.servers[0].addresses.private | community.general.json_query(ipv4_fixed_address) }}" + vars: + ipv4_fixed_address: "[?\"OS-EXT-IPS:type\" == 'fixed' && version == `4`]" + + - name: "Show openstack_vm_ipv4_address" + ansible.builtin.debug: + msg: + - "openstack_vm_ipv4_address: {{ openstack_vm_ipv4_address }}" + + - name: "Collect host fixed IP v4" + ansible.builtin.set_fact: + openstack_vm_ipv4: "{{ openstack_vm_ipv4_address[0].addr }}" + + - name: "Show openstack_vm_ipv4" + ansible.builtin.debug: + msg: + - "openstack_vm_ipv4: {{ openstack_vm_ipv4 }}" + +... 
+# ansible-playbook ansible/playbook/edu/parse_json.yml
diff --git a/ansible/playbook/godaddy/godaddy_dns_create_passwordstore.yml b/ansible/playbook/godaddy/godaddy_dns_create_passwordstore.yml
index 9ada3c1b..bfbdaebf 100644
--- a/ansible/playbook/godaddy/godaddy_dns_create_passwordstore.yml
+++ b/ansible/playbook/godaddy/godaddy_dns_create_passwordstore.yml
@@ -1,8 +1,4 @@
 ---
-# Requires:
-# . api_key: GoDaddy API Key for authentication
-# . api_secret: GoDaddy API KeySecret for authentication
-
 - name: "Validate passwordstore"
   import_playbook: "../passstore/passstore_controller_check.yml"
 
@@ -14,25 +10,25 @@
 
 - name: "GoDaddy DNS create"
   hosts: localhost
-  gather_facts: True
-
+  gather_facts: true
+
   tasks:
     - name: "Create DNS record"
-      include_role:
+      ansible.builtin.include_role:
         name: "snowdrop.godaddy.dns"
-      vars:
+      vars:
         state: "present"
 
     - name: "Print Create result"
-      debug:
+      ansible.builtin.debug:
         var: godaddy_dns
 
     - name: "Get DNS information"
-      include_role:
+      ansible.builtin.include_role:
         name: "snowdrop.godaddy.dns_info"
 
     - name: "Print GET result"
-      debug:
+      ansible.builtin.debug:
         var: godaddy_dns_info
 ...
 # ansible-playbook ansible/playbook/godaddy/godaddy_dns_create_passwordstore.yml -e domain_name="snowdrop.dev" -e record_type=A -e record_name="apps.ocp" -e '{"dns": {"data": "10.0.215.34"}}'
diff --git a/ansible/playbook/godaddy/godaddy_dns_info_passwordstore.yml b/ansible/playbook/godaddy/godaddy_dns_info_passwordstore.yml
index 70540dd0..4bdc71e7 100644
--- a/ansible/playbook/godaddy/godaddy_dns_info_passwordstore.yml
+++ b/ansible/playbook/godaddy/godaddy_dns_info_passwordstore.yml
@@ -1,8 +1,4 @@
 ---
-# Requires:
-# . api_key: GoDaddy API Key for authentication
-# . api_secret: GoDaddy API KeySecret for authentication
-
 - name: "Validate passwordstore"
   import_playbook: "../passstore/passstore_controller_check.yml"
 
@@ -14,15 +10,15 @@
 
 - name: "GoDaddy DNS info"
   hosts: localhost
-  gather_facts: True
+  gather_facts: true
 
   tasks:
     - name: "Get DNS record for domain"
-      include_role:
+      ansible.builtin.include_role:
         name: "snowdrop.godaddy.dns_info"
 
     - name: "Print DNS information"
-      debug:
+      ansible.builtin.debug:
         var: godaddy_dns_info
 ...
 # ansible-playbook ansible/playbook/godaddy/godaddy_dns_info_passwordstore.yml -e domain_name="snowdrop.dev" -e api_environment=prod
diff --git a/ansible/playbook/ocp/README.adoc b/ansible/playbook/ocp/README.adoc
index b5e62bd8..fe01eaab 100644
--- a/ansible/playbook/ocp/README.adoc
+++ b/ansible/playbook/ocp/README.adoc
@@ -1,26 +1,43 @@
-= OCP Ansible Playbooks
+= OCP on RHOS Ansible Playbooks
+Snowdrop Team
 :icons: font
+:revdate: {docdate}
 :toc: left
-:description: This document describes OCP specific playbooks.
+:toclevels: 3
+:description: Deploying OCP on RHOS
+ifdef::env-github[]
+:tip-caption: :bulb:
+:note-caption: :information_source:
+:important-caption: :heavy_exclamation_mark:
+:caution-caption: :fire:
+:warning-caption: :warning:
+endif::[]
+
+== Introduction
+
+[.lead]
+This document describes the process to deploy an OCP cluster on a
+ RHOS infrastructure.
 
-== Before you start
+The installation process uses the _OpenShift Container Platform installer_
+ obtained from https://mirror.openshift.com/.
 
-[IMPORTANT]
-====
-The Ansible commands should be executed within the ansible folder !
-====
+[glossary]
+== Terminology
 
-== Other information
+Glossary of terms used.
 
-The installation process uses the _OpenShift Container Platform installer_
- obtained from https://mirror.openshift.com/.
+[glossary]
+OCP:: OpenShift Container Platform
+RHCOS:: Red Hat Enterprise Linux CoreOS
+RHOSP:: Red Hat OpenStack Platform
 
 == OCP On OpenStack
 
 Playbooks to deploy and remove an OCP cluster to RHOS.
 
 .List of OCP RHOS playbooks
-[cols="2m,1m,5"]
+[cols="30%m,70%"]
 |===
 |Playbook File |Description
 
@@ -31,23 +48,174 @@ Playbooks to deploy and remove an OCP cluster to RHOS.
 | Remove an OCP cluster on RHOS.
 
 | ocp_openstack_info.yml
-| Print information from the OCP cluster, for dev/testing purposes.
+a| Print information from the deployed OCP cluster.
+
+This playbook will print cluster information such as Console URL, kubeadmin password, ...
 
-| ocp_openstack_test.yml
-| Test some functionalities, for dev/testing purposes only!
 |===
 
+== Preparing the deployment
+
+The OCP installation process requires the use of the OCP
+ pull secret. This secret can be obtained from https://console.redhat.com/openshift/install/pull-secret.
+
+As part of the installation process, this information will be added
+ to the `install-config.yaml` and used in the OCP installation
+ process.
+
+.Sample OCP pull secret JSON
+[source,json]
+----
+{
+  "auths": {
+    "cloud.openshift.com": {"auth": "wwwwwwwwww", "email": "antcosta@redhat.com"}
+    ,"quay.io": {"auth": "xxxxxxxxxxxxx", "email": "janedoe@example.com"}
+    ,"registry.connect.redhat.com": {"auth": "yyyyyyyyyyyyy", "email": "janedoe@example.com"}
+    ,"registry.redhat.io": {"auth": "zzzzzzzzzzzzzzzzz", "email": "janedoe@example.com"}
+  }
+}
+----
+
+[NOTE]
+====
+The commands described hereafter use the `OCP_PULL_SECRET` environment
+ variable to pass the credentials to the playbook.
+====
+
+== Using a Bootstrap host
+
+During the installation process several files are downloaded, amongst
+ which are the OCP installer software and an RHCOS image. During the execution
+ of the OCP installer the downloaded RHCOS image must be uploaded into RHOSP.
+ Although the image is cached locally, this part of the process takes a
+ non-negligible amount of time.
+
+The default installation process uses the controller (`localhost`) as the
+ installation executor. This means all files are downloaded/uploaded to/from
+ the local workstation. This approach has several drawbacks, such as having
+ to rely on the network infrastructure of the workstation and being limited by
+ its bandwidth (icon:download[alt=download] and icon:upload[alt=upload]).
+
+To mitigate this problem the installation process can, and we suggest should,
+ be executed from a remote host, which might be a temporary host. This host will
+ be referred to as the bootstrap host hereafter.
+
+To be able to use a temporary bootstrap host it must be created prior to the
+ execution of the installation process. The name of this RHOSP Host can be
+ any name, although we recommend including the name of the cluster as a prefix
+ and adding a suffix such as `-tmp-bootstrap-server`.
+
+To create the bootstrap host, execute the RHOSP playbook created for that purpose.
+
+.Sample execution of creating a bootstrap host.
+[source,bash]
+----
+ansible-playbook ansible/playbook/openstack/openstack_vm_create_passwordstore.yml -e '{"openstack": {"vm": {"network": "provider_net_shared","image": "Fedora-Cloud-Base-37", "flavor": "m1.small"}}}' -e vm_name=ocp-xyz-tmp-bootstrap-server
+----
+
+After creating the bootstrap host, execute the steps provided in the
+ <<deploy-ocp-on-rhos>> section.
+
+[NOTE]
+====
+The bootstrap server could be removed once the OCP cluster has been
+ created (optional).
+====
+
 // tag::deploy_ocp_on_rhos[]
-=== Deploy OCP Cluster on RHOS
+[#deploy-ocp-on-rhos]
+== Deploy OCP Cluster on RHOS
 
-First obtain the OCP pull secret which can be obained from https://console.redhat.com/openshift/install/pull-secret.
+The deployment playbook supports the following variables.
 
-Execute the playbook.
+.Script options
+[%header,cols="25%,75%"]
+|===
+| Variable | Description
+
+| `ocp_bootstrap_host`
+
+[.fuchsia]#string#
+
+a| VM name for the bootstrap host.
+
+If defined, the installation process will be performed not on the
+ `localhost` controller but on the identified VM.
+
+| `ocp_cluster_name`
+
+[.fuchsia]#string# / [.red]#required#
+
+a| Name to be assigned to the OCP cluster.
+
+[NOTE]
+====
+Will also be applied as a prefix to all the RHOS VM instances created as well
+ as other RHOS resources.
+====
+
+| `ocp_root_directory`
+
+[.fuchsia]#string# / [.red]#required#
+
+a| Root folder for the installation. A new sub-folder with the
+ `ocp_cluster_name` name will be created and will serve as the
+ installation folder.
+
+| `openshift_pull_secret`
+
+[.fuchsia]#json# / [.red]#required#
+
+a| String of the OCP pull secret for the user.
+
+| `openstack_flavor_control_plane`
+
+[.fuchsia]#string#
+
+a| Flavor to be used on the Control Plane hosts.
+
+*Default => `ocp4.control`*
+
+| `openstack_flavor_compute`
+
+[.fuchsia]#string#
+
+a| Flavor to be used on the Compute hosts.
+
+*Default => `ocp4.compute`*
+
+|===
+
+Execute the playbook. Please note that this playbook uses `sudo`
+ to create several folders, so the Ansible user must have `sudo` permission.
+ We're using the `-K` switch to ask for the `become` password, which is only
+ required if the user has sudo permission with password. The folders created
+ will be associated (`uid:gid`) with the Ansible user used to connect to the
+ host.
+
+.Explanation of the playbook parameter execution
+[source]
+----
+ansible-playbook ansible/playbook/ocp/ocp_openstack_install.yml <1>
+  -e ocp_root_directory=<2>
+  -e ocp_cluster_name=<3>
+  -e openshift_pull_secret=<4>
+  -K <5>
+----
+<1> Playbook that implements the OCP deployment.
+<2> Root directory for the installation.
+<3> Name to be given to the cluster.
+<4> OCP pull secret for the user.
+<5> Ask for the become password.
 
-.Command to execute the OCP deployment playbook.
+.Command to execute the OCP deployment playbook
 [source,bash]
 ----
-ansible-playbook -i inventory/ playbook/ocp/ocp_openstack_install.yml -e work_directory=/opt/ocp -e openshift_pull_secret=${OCP_PULL_SECRET} -K
+ansible-playbook ansible/playbook/ocp/ocp_openstack_install.yml \
+  -e ocp_root_directory=/opt/ocp \
+  -e ocp_cluster_name=ocp-sdev \
+  -e openshift_pull_secret=${OCP_PULL_SECRET} \
+  -K
 ----
 
 The playbook will result on the deployment of several RHOS VMs for control plane and worker nodes.
@@ -65,12 +233,28 @@ variables, having as default the values from the role defaults file.
 include::../../roles/ocp_cluster/defaults/main.yml[tag=rhos_default_flavors]
 ----
 
-The list of flavors is identified on the link:../../../openstack/README.adoc#Flavors[OpenStack README file].
+Instructions on how to obtain the list of available flavors are described in
+ our link:../../../openstack/openstack-cli.adoc#flavors[OpenStack CLI README file].
====
+
+The result of the deployment process is the following:
+
+* OCP cluster deployed on RHOS instances, as defined by the number and flavor of the control plane and worker nodes
+* RHOS instance that will serve as jump server to the OCP cluster
+* Installation directory stored on the passwordstore and copied to the jump server
+* OCP authentication information stored on the passwordstore
 // end::deploy_ocp_on_rhos[]
 
+[CAUTION]
+====
+At this point the _bootstrap server_, if used, is no longer required.
+
+[.lead]
+Check that the installation folder is safely stored both on the jump server as well as on the local passwordstore before removing it.
+====
+
 // tag::undeploy_ocp_on_rhos[]
-=== Undeploy OCP Cluster on RHOS
+== Undeploy OCP Cluster on RHOS
 
 [WARNING]
 ====
@@ -79,166 +263,165 @@ For the removal process to be successfull the OCP installation directory
 objects associated to the project.
 ====
 
+The removal playbook supports the following variables.
+
+.Script options
+[%header,cols="25%,75%"]
+|===
+| Variable | Description
+
+| `ocp_bootstrap_host`
+
+[.fuchsia]#string#
+
+a| VM name for the host that contains the OCP installation folder.
+
+If defined, the removal process will be performed not on the
+ `localhost` controller but on the identified VM.
+
+| `ocp_cluster_name`
+
+[.fuchsia]#string# / [.red]#required#
+
+a| Name of the OCP cluster.
+
+| `ocp_root_directory`
+
+[.fuchsia]#string# / [.red]#required#
+
+a| Root folder for the installation.
+
+*TODO: must be added as part of the ansible inventory*
+
+|===
+
 .Command to execute the OCP cluster removal playbook.
 [source,bash]
 ----
-ansible-playbook -i inventory/ playbook/ocp/ocp_openstack_remove.yml \
-    -e work_directory=/opt/ocp \
-    -e installation_dir=/opt/ocp/openshift-data/
+ansible-playbook ansible/playbook/ocp/ocp_openstack_remove.yml \
+  -e ocp_root_directory=/opt/ocp \
+  -e ocp_cluster_name=ocp-sdev \
+  -e ocp_bootstrap_host=ocp-sdev-xxxxx-jump-server
 ----
 // end::undeploy_ocp_on_rhos[]
 
-=== Other OCP RHOS Playbooks
+== Other OCP RHOS Playbooks
 
-[source,bash]
-----
-ansible-playbook playbook/ocp/ocp_openstack_info.yml -e work_directory=/opt/ocp -e installation_dir=/opt/ocp/openshift-data/ -e ocp_cluster_name=ocp -e snowdrop_domain="snowdrop.dev" -vv
-----
+=== Get information from the OCP cluster
+
+To collect information on the OCP cluster, execute the
+ `ocp_openstack_info` playbook located in the `ansible/playbook/ocp/`
+ folder.
 
-== Playbooks
+.Playbook parameters
+[%header,cols="25%,75%"]
+|===
+| Variable | Description
 
-=== PasswordStore
+| `ocp_root_directory`
 
-Create OpenStack instance based on passwordstore
+[.fuchsia]#string# / [.red]#required#
 
-.openstack_vm_create_paswordstore parameters
-[cols="2m,1m,5"]
-|===
-|Field name |Mandatory |Description
+a| Root folder for the OCP installation.
+
+Either define the `ocp_root_directory` and `ocp_cluster_name` variables
+ or the `installation_dir` one.
+
+| `ocp_cluster_name`
+
+[.fuchsia]#string# / [.red]#required#
+
+a| Name of the OCP cluster.
 
-| vm_name
-| x
-| Name of the VM being created. Will be used both as hostname as well as Ansible Inventory name.
+Either define the `ocp_root_directory` and `ocp_cluster_name` variables
+ or the `installation_dir` one.
 
-| openstack.vm.network
-| x
-| Value for the OpenStack provider network. `provider_net_shared`
+| `installation_dir`
 
-| openstack.vm.image
-| x
-| OpenStack VM image, e.g. `Fedora-Cloud-Base-35`.
+[.fuchsia]#string#
 
-| openstack.vm.flavor"
-| x
-| OpenStack VM flavor (size), e.g. `m1.medium`.
+a| Location of the installation directory.
 
-| key_name
-| -
-| Use an existing SSH key (value) instead of creating one for the VM.
+Either define the `ocp_root_directory` and `ocp_cluster_name` variables
+ or the `installation_dir` one.
 
-| k8s_type
-| *for k8s hosts.*
-| Kubernetes host type [master,worker].
 
-| k8s_version
-| *for k8s hosts.*
-| Kubernetes version to be associated with the host, e.g. for version `1.23` use `123`. This is actually an Ansible Inventory group having definitions associated with each of the Kubernetes version.
+| `vm_name`
+
+[.fuchsia]#string# / [.red]#required#
+
+a| Name of the VM that hosts the OCP installation folder and from which
+ the cluster information is collected (e.g. the jump server).
+
 |===
 
 [source,bash]
 ----
-$ VM_NAME=vm20210221-t01
+ansible-playbook ansible/playbook/ocp/ocp_openstack_info.yml \
+  -e ocp_root_directory=/opt/ocp \
+  -e ocp_cluster_name=ocp-sdev \
+  -e vm_name=ocp-sdev-zzzzz-jump-server -vv
 ----
 
-[source,bash]
-----
-$ ansible-playbook playbook/openstack/openstack_vm_create_paswordstore.yml -e k8s_type=masters -e k8s_version=123 -e '{"openstack": {"vm": {"network": "provider_net_shared","image": "Fedora-Cloud-Base-35", "flavor": "m1.medium"}}}' -e key_name=snowdrop-adm-key -e vm_name=${VM_NAME}
-----
+=== Init jump server
 
-This playbook should finish with something like:
+This playbook initializes a jump server by performing the following tasks:
 
-[source]
-....
-PLAY RECAP **********************************************************************************************************************************************************************************************************************
-localhost : ok=68 changed=20 unreachable=0 failed=0 skipped=13 rescued=0 ignored=1
-vm20210221-t01 : ok=32 changed=20 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
-
-Monday 21 February 2022 13:01:53 +0100 (0:00:05.011) 0:12:51.042 *******
-===============================================================================
-openstack/init_vm : Upgrade all packages ------------------------------------------------- 305.39s
-openstack/vm : Create VM instance -------------------------------------------------------- 121.94s
-sec/firewalld : Install firewalld --------------------------------------------------------- 47.60s
-openstack/init_vm : Install packages ------------------------------------------------------ 47.22s
-openstack/init_vm : Reboot instance ------------------------------------------------------- 32.76s
-Refresh the inventory so the newly added host is available -------------------------------- 21.10s
-sec/sshd_port : Change SELINUX settings to allow connections to the new port --------------- 9.14s
-sec/motd : Config | Install custom `/etc/motd` file ---------------------------------------- 8.24s
-sec/audit : Apply auditd configuration ----------------------------------------------------- 8.06s
-openstack/vm : Gather information about a previously created image with same name ---------- 7.85s
-Wait for connection to host ---------------------------------------------------------------- 7.02s
-openstack/vm : Wait for boot --------------------------------------------------------------- 6.55s
-Gathering Facts ---------------------------------------------------------------------------- 5.77s
-sec/firewalld : Enable and start firewalld ------------------------------------------------- 5.53s
-Gathering Facts ---------------------------------------------------------------------------- 5.08s
-sec/update : Update all packages ----------------------------------------------------------- 5.01s
-sec/firewalld : firewalld - Manage firewall ports ------------------------------------------ 4.96s
-sec/sshd_port : Change the ssh port number ------------------------------------------------- 4.60s
-sec/firewalld : firewalld - Manage firewall services --------------------------------------- 4.58s
-sec/firewalld : Restart firewalld ---------------------------------------------------------- 4.51s
-....
-
-The playbook also uses the variables defined in `roles/openstack/vm/defaults/main.yml`. Those variables can also be overridden using the syntax above.
+* Downloads the OCP and k8s CLI binaries into the jump server
+* Copies the OCP cluster installation folder from the passwordstore
+ into the jump server.
 
-[source,yaml]
-----
-include::../../roles/openstack/vm/defaults/main.yml[]
-----
+.Playbook parameters
+[%header,cols="25%,75%"]
+|===
+| Variable | Description
 
-=== Delete a VM
+| `ocp_root_directory`
 
-To delete a VM, simply execute the `openstack_vm_remove_aggregate` playbook.
+[.fuchsia]#string# / [.red]#required#
 
-[source,bash]
-----
-ansible-playbook -i inventory/ playbook/ocp/ocp_openstack_install.yml -e target_dir=/home/ajc102/docs/redhat/_tmp/ocp
-----
+a| Root folder for the OCP installation.
 
-[source]
-....
-PLAY RECAP **********************************************************************************************************************************************************************************************************************
-localhost : ok=17 changed=5 unreachable=0 failed=0 skipped=1 rescued=0 ignored=2
-
-Monday 21 February 2022 13:07:58 +0100 (0:00:02.485) 0:00:30.900 *******
-===============================================================================
-openstack/vm : Gather information about a previously created image named ------------------ 8.16s
-openstack/vm : Delete --------------------------------------------------------------------- 3.91s
-openstack/vm : Delete VM volume ------------------------------------------------------------ 3.41s
-openstack/vm : Delete key from server ----------------------------------------------------- 2.93s
-Push changes to the pass git database ------------------------------------------------------ 2.49s
-Pull pass git database --------------------------------------------------------------------- 2.16s
-openstack/vm : Set pass facts from passwordstore ------------------------------------------- 1.70s
-openstack/vm : Remove existing SSH key to use with instance -------------------------------- 1.55s
-openstack/vm : Find admin user home folder ------------------------------------------------- 0.98s
-openstack/vm : Remove the host from the known_hosts file ----------------------------------- 0.95s
-openstack/vm : stat ------------------------------------------------------------------------ 0.88s
-Remove passstore entries ------------------------------------------------------------------- 0.74s
-Remove local ssh keys ---------------------------------------------------------------------- 0.57s
-openstack/vm : include_tasks --------------------------------------------------------------- 0.14s
-Validate required variables ---------------------------------------------------------------- 0.08s
-openstack/vm : Print Openstack output ------------------------------------------------------ 0.07s
-openstack/vm : include_tasks --------------------------------------------------------------- 0.07s
-....
-
-=== Connect to the new instance
-
-Since all the information related to the host will be managed by our ansible passwordstore roles, which also stores the ssh public and secret keys locally on the `~/.ssh` folder, to login to the newly created VM is as simple as launching the following command.
+Either define the `ocp_root_directory` and `ocp_cluster_name` variables
+ or the `installation_dir` one.
 
-[source,bash]
-----
-$ ssh -i ~/.ssh/id_rsa_snowdrop_openstack_${VM_NAME} `pass show openstack/${VM_NAME}/os_user | head -n 1`@`pass show openstack/${VM_NAME}/ansible_ssh_host | head -n 1` -p `pass show openstack/${VM_NAME}/ansible_ssh_port | head -n 1`
-----
+| `ocp_cluster_name`
 
-This should connect ot the newly created VM.
+[.fuchsia]#string# / [.red]#required#
 
-[source]
-====
-Last login: Thu Jan 1 00:00:00 1970 from x.x.x.x
-------------------
+a| Name of the OCP cluster.
 
-This machine is property of RedHat.
-Access is forbidden to all unauthorized person.
-All activity is being monitored.
+Either define the `ocp_root_directory` and `ocp_cluster_name` variables
+ or the `installation_dir` one.
 
-Welcome to vm20210221-t01..
-====
+| `ocp_cluster_bin_directory`
+
+[.fuchsia]#string#
+
+a| Folder that will contain the OCP and k8s CLI binaries
+
+*Default => `/bin`*
+
+| `vm_name`
+
+[.fuchsia]#string# / [.red]#required#
+
+a| Name of the jump server VM in the Ansible inventory.
+
+|===
+
+[source,bash]
+----
+ansible-playbook ansible/playbook/ocp/rhosp_init_jump_server_pass.yml \
+  -e ocp_root_directory=/home/snowdrop/ocp \
+  -e ocp_cluster_name=ocp-sdev \
+  -e vm_name=ocp-jump-server \
+  -e ocp_cluster_bin_directory=/home/snowdrop/.local/bin \
+  -vv
+----
diff --git a/ansible/playbook/ocp/ocp_openstack_create_jump_server.yml b/ansible/playbook/ocp/ocp_openstack_create_jump_server.yml
new file mode 100644
index 00000000..1088fde1
--- /dev/null
+++ b/ansible/playbook/ocp/ocp_openstack_create_jump_server.yml
@@ -0,0 +1,265 @@
+---
+# Requires:
+#   vars:
+#     tmp_directory: temporary directory =/opt/ocp/_tmp/ansible.yxam0y7mbuild
+#     ocp_root_directory: /opt/ocp
+#     ocp_cluster_name: ocp-sdev
+#     vm_name: ocp-sdev-p75fs-jump-server
+- name: "Build OpenStack authentication for v3password"
+  ansible.builtin.import_playbook: "../openstack/openstack_auth_passstore_v3password.yml"
+
+- name: "Load the metadata from the OCP installation directory"
+  hosts: "{{ ocp_bootstrap_host | default('localhost') }}"
+  gather_facts: true
+
+  tasks:
+
+    - name: Calculate installation folder
+      ansible.builtin.set_fact:
+        installation_dir: "{{ ocp_root_directory }}/{{ ocp_cluster_name }}"
+
+    # - name: "Load metadata from ansible installation folder"
+    #   ansible.builtin.set_fact:
+    #     ocp_cluster_metadata: "{{ lookup('file', installation_dir + '/metadata.json') | from_json }}"
+
+    - name: Get OCP cluster metadata
+      ansible.builtin.import_role:
+        name: ocp_cluster
+        tasks_from: get_metadata.yml
+
+    # - name: "Print facts"
+    #   ansible.builtin.debug:
+    #     msg:
+    #       # - "ocp_cluster_metadata: {{ ocp_cluster_metadata }}"
+    #       # - "ansible_installation_folder_base64: {{ ansible_installation_folder_base64 }}"
+    #       # - "kubeadmin_password: {{ kubeadmin_password }}"
+    #       - "ocp_cluster_metadata: {{ hostvars[ocp_bootstrap_host]['ocp_cluster_metadata'] }}"
+    #       - "ansible_installation_folder_base64: {{ hostvars[ocp_bootstrap_host]['install_dir'] }}"
+    #       - "kubeadmin_password: {{ hostvars[ocp_bootstrap_host]['kubeadmin_password'] }}"
"kubeadmin_password: {{ hostvars[ocp_bootstrap_host]['kubeadmin_password'] }}" + # verbosity: 1 + + - name: "Collect bootstrap host facts into localhost" + ansible.builtin.set_fact: + ocp_cluster_metadata: "{{ hostvars[ocp_bootstrap_host]['ocp_cluster_metadata'] }}" + # ansible_installation_folder_base64: "{{ query('passwordstore', 'openstack/' + ocp_bootstrap_host + '/install_dir' )[0] }}" + ansible_installation_folder_base64: "{{ query('passwordstore', 'openstack/' + ocp_cluster_metadata.infraID + '-jump-server/install_dir' )[0] }}" + # kubeadmin_password: "{{ query('passwordstore', 'openstack/' + ocp_bootstrap_host + '/kubeadmin' )[0] }}" + kubeadmin_password: "{{ query('passwordstore', 'openstack/' + ocp_cluster_metadata.infraID + '-jump-server/kubeadmin' )[0] }}" + when: ocp_bootstrap_host is defined + delegate_facts: True + delegate_to: localhost + + - name: "Print facts" + ansible.builtin.debug: + msg: + - "ocp_cluster_metadata: {{ ocp_cluster_metadata }}" + - "ocp_cluster_metadata: {{ hostvars[ocp_bootstrap_host]['ocp_cluster_metadata'] }}" + - "ocp_cluster_metadata: {{ hostvars['localhost']['ocp_cluster_metadata'] }}" + +- name: "Deploy Jump Server" + ansible.builtin.import_playbook: "../openstack/openstack_vm_create_passwordstore.yml" + vars: + state: present + openstack: + timeout: 300 + vm: + network: "{{ ocp_cluster_metadata.infraID }}-openshift" + image: "Fedora-Cloud-Base-37" + flavor: "m1.small" + vm_name: "{{ ocp_cluster_metadata.infraID }}-jump-server" + skip_post_installation: true + +- name: "Create floating IP for Jump Server" + hosts: localhost + gather_facts: true + + tasks: + + - name: Getting fip by associated fixed IP address. + openstack.cloud.floating_ip_info: + auth: "{{ rhos_auth }}" + auth_type: "{{ rhos_auth_type }}" + fixed_ip_address: "{{ query('passwordstore', 'openstack/' + vm_name + '/ansible_ssh_host' )[0] }}" + register: fip + ignore_errors: true + + - name: "Print FIP query" + ansible.builtin.debug: + msg: + - "fip: {{ fip }}" + verbosity: 1 + + - name: "Set floating ip variable is already created" + ansible.builtin.set_fact: + jump_server_floating_ip: "{{ fip.floating_ips[0].floating_ip_address }}" + when: not fip.failed and (fip.floating_ips | length > 0) + + - name: "Create Floating IP for Jump Server" + openstack.cloud.floating_ip: + auth: "{{ rhos_auth }}" + auth_type: "{{ rhos_auth_type }}" + state: present + reuse: true + server: "{{ ocp_cluster_metadata.infraID }}-jump-server" + network: "{{ rhos_network | default('provider_net_cci_13') }}" + # fixed_address: 192.0.2.3 + wait: true + timeout: 180 + # ansible.builtin.shell: + # cmd: | + # openstack --os-cloud openstack floating ip create --description "OCP API {{ ocp_cluster_name }}.{{ snowdrop_domain }}" -f value -c floating_ip_address {{ openstack_network_provider }} + # args: + # chdir: "{{ work_directory }}" + register: rhos_floating_ip_jump_server_res + when: jump_server_floating_ip is not defined + + - name: "Set floating ip variable is already created" + ansible.builtin.set_fact: + jump_server_floating_ip: "{{ rhos_floating_ip_jump_server_res.floating_ip.fixed_ip_address }}" + when: jump_server_floating_ip is not defined + + - name: "Store Floating IP on the passwordstore" + ansible.builtin.set_fact: + ansible_installation_folder_passwordstore: "{{ query('passwordstore', 'openstack/' + vm_name + '/floating_ip create=True userpass=' + jump_server_floating_ip )[0] }}" + +- name: "Wait for the VM to boot and we can ssh" + hosts: "{{ vm_name | default([]) }}" + gather_facts: no + + tasks: + 
+    - name: "Wait for connection to host"
+      ansible.builtin.wait_for:
+        host: "{{ query('passwordstore', 'openstack/' + inventory_hostname + '/floating_ip')[0] }}"
+        port: "{{ query('passwordstore', 'openstack/' + inventory_hostname + '/ansible_ssh_port')[0] }}"
+        timeout: 120
+      vars:
+        ansible_connection: local
+      register: wait_for_connection_reg
+
+  post_tasks:
+
+    - name: Refresh the inventory so the newly added host is available
+      meta: refresh_inventory
+
+    - name: "DON'T FORGET TO SECURE YOUR SERVER"
+      ansible.builtin.debug:
+        msg:
+          - "DON'T FORGET TO SECURE YOUR SERVER!!!"
+          - ""
+          - "Trying to start server securization automatically."
+          - "For manual execution: $ ansible-playbook ansible/playbook/sec_host.yml -e vm_name={{ vm_name }} -e provider=openstack"
+
+- name: "Add to known hosts"
+  hosts: localhost
+  gather_facts: true
+
+  tasks:
+
+    - name: "Add host Floating IP to known hosts {{ hostvars[vm_name]['floating_ip'] }}"
+      ansible.builtin.known_hosts:
+        name: "{{ hostvars[vm_name]['floating_ip'] }}"
+        key: "{{ lookup('pipe', 'ssh-keyscan {{ hostvars[vm_name].floating_ip }}') }}"
+        hash_host: true
+
+- name: "Add to known hosts"
+  hosts: "{{ vm_name | default([]) }}"
+  gather_facts: true
+
+  tasks:
+
+    - name: "Add host Floating IP to known hosts {{ hostvars[vm_name]['floating_ip'] }}"
+      ansible.builtin.known_hosts:
+        name: "{{ hostvars[vm_name]['floating_ip'] }}"
+        key: "{{ lookup('pipe', 'ssh-keyscan {{ hostvars[vm_name].floating_ip }}') }}"
+        hash_host: true
+
+- name: "Extract installation directory from passwordstore"
+  hosts: "localhost"
+  gather_facts: true
+
+  tasks:
+
+    - name: "Extract installation directory from passwordstore"
+      ansible.builtin.copy:
+        content: "{{ query('passwordstore', 'openstack/' + ocp_cluster_metadata.infraID + '/install_dir')[0] | b64decode }}"
+        dest: /tmp/ocp-installation.tgz
+      when: ocp_bootstrap_host is not defined
+
+    - name: "Extract installation directory from passwordstore"
+      ansible.builtin.copy:
+        content: "{{ query('passwordstore', 'openstack/' + ocp_bootstrap_host + '/install_dir')[0] | b64decode }}"
+        dest: /tmp/ocp-installation.tgz
+      when: ocp_bootstrap_host is defined
+
+- name: "Post Jump Server OCP"
+  hosts: "{{ vm_name | default([]) }}"
+  gather_facts: true
+
+  pre_tasks:
+    - name: Set required variables
+      ansible.builtin.set_fact:
+        remote_bin_folder: /home/snowdrop/.local/bin
+
+  tasks:
+    - name: Create home .local/bin folder
+      ansible.builtin.file:
+        path: "{{ remote_bin_folder }}"
+        recurse: true
+        state: directory
+        owner: "{{ ansible_user_id }}"
+        group: "{{ ansible_user_id }}"
+        mode: '0755'
+
+    - name: Copy installation folder to remote host
+      ansible.builtin.copy:
+        src: /tmp/ocp-installation.tgz
+        dest: /tmp/ocp-installation.tgz
+
+    - name: Extract OCP installation into /home/snowdrop
+      ansible.builtin.unarchive:
+        src: /tmp/ocp-installation.tgz
+        dest: /home/snowdrop
+
+    # - name: Download OCP files
+    #   ansible.builtin.import_role:
+    #     name: ocp_cluster
+    #     tasks_from: install_prepare.yml
+    #   vars:
+    #     ocp_bin_directory: "{{ remote_bin_folder }}"
+
+    - name: Download OCP files
+      ansible.builtin.import_role:
+        name: ocp_cluster
+        tasks_from: download_installation_files.yml
+      vars:
+        ocp_cluster_bin_directory: "{{ remote_bin_folder }}"
+
+  post_tasks:
+
+    - name: "DON'T FORGET TO SECURE YOUR SERVER"
+      ansible.builtin.debug:
+        msg:
+          - "DON'T FORGET TO SECURE YOUR SERVER!!!"
+          - ""
+          - "Trying to start server securization automatically."
+          - "For manual execution: $ ansible-playbook ansible/playbook/sec_host.yml -e vm_name={{ vm_name }} -e provider=openstack"
+
+- name: "Init Jump Server"
+  hosts: "{{ vm_name | default([]) }}"
+  gather_facts: yes
+
+  tasks:
+    - name: Init RHOS VM
+      ansible.builtin.include_role:
+        name: "openstack/init_vm"
+
+- name: "Secure Jump Server"
+  ansible.builtin.import_playbook: "../sec_host.yml"
+  vars:
+    provider: "openstack"
+    hosts: "{{ vm_name | default([]) }}"
+
+...
+# ansible-playbook ansible/playbook/ocp/ocp_openstack_create_jump_server.yml -e tmp_directory=/opt/ocp/_tmp/ansible.yxam0y7mbuild -e ocp_root_directory=/opt/ocp -e ocp_cluster_name=ocp-sdev -e vm_name=ocp-sdev-p75fs-jump-server
diff --git a/ansible/playbook/ocp/ocp_openstack_info.yml b/ansible/playbook/ocp/ocp_openstack_info.yml
index 5c00a384..fd4b4e2b 100644
--- a/ansible/playbook/ocp/ocp_openstack_info.yml
+++ b/ansible/playbook/ocp/ocp_openstack_info.yml
@@ -1,66 +1,184 @@
+# Get OCP cluster information from the installation folder.
 ---
-- name: "INFO OCP"
-  hosts: localhost
+- name: "Build OpenStack authentication for v3password"
+  ansible.builtin.import_playbook: "../openstack/openstack_auth_passstore_v3password.yml"
+
+- name: "Get OCP installation information"
+  hosts: "{{ vm_name | default(['localhost']) }}"
   gather_facts: true
 
   pre_tasks:
-    - name: "Set openstack_auth facts"
-      set_fact:
-        openstack_auth:
-          openstack_project_name: "{{ query('passwordstore', 'openstack/host/project_name')[0] }}"
-          openstack_console_user: "{{ query('passwordstore', 'openstack/host/console_user')[0] }}"
-          openstack_console_password: "{{ query('passwordstore', 'openstack/host/console_pw')[0] }}"
-          openstack_user_domain: "{{ query('passwordstore', 'openstack/host/console_domain')[0] }}"
-          openstack_project_domain: "{{ query('passwordstore', 'openstack/host/os_domain')[0] }}"
-          openstack_os_auth_url: "{{ query('passwordstore', 'openstack/host/os_auth_url')[0] }}"
-
+    - name: Check required variables
+      ansible.builtin.assert:
+        that:
+          - "installation_dir is defined or (ocp_root_directory is defined and ocp_cluster_name is defined)"
+        msg:
+          - "Either define:"
+          - " - installation_dir"
+          - " or"
+          - " - ocp_root_directory and ocp_cluster_name"
+
+    - name: "Set installation_dir folder"
+      ansible.builtin.set_fact:
+        installation_dir: "{{ ocp_root_directory }}/{{ ocp_cluster_name }}"
+      when: installation_dir is undefined
+
   tasks:
+    - name: "Get OCP cluster metadata"
+      ansible.builtin.import_role:
+        name: 'ocp_cluster'
+        tasks_from: get_metadata
+      when: ocp_cluster_metadata is undefined
+
+    - name: "Print OCP information"
+      ansible.builtin.debug:
+        msg: "ocp_cluster_metadata: {{ ocp_cluster_metadata }}"
+        verbosity: 2
+
     - name: "Get information from OCP cluster"
       ansible.builtin.import_role:
         name: 'ocp_cluster'
         tasks_from: openshift_install_state
+      when: openshift_install_state is undefined
+
+    - name: "Print OCP installation information"
+      ansible.builtin.debug:
+        msg: "openshift_install_state: {{ openshift_install_state }}"
+        verbosity: 2
+
+    - name: "Read kubeadmin-password"
+      ansible.builtin.slurp:
+        src: "{{ installation_dir + '/auth/kubeadmin-password' }}"
+      register: ocp_cluster_kubeadmin_pw_slurp
+
+    - name: "Set rhos_ocp_facts facts"
+      ansible.builtin.set_fact:
+        rhos_ocp_facts:
+          api_floating_ip: "{{ openshift_install_state['*installconfig.InstallConfig'].config.platform.openstack.apiFloatingIP }}"
+          cluster_name: "{{ ocp_cluster_metadata.clusterName }}"
+          ingress_fixed_ip: "{{ openshift_install_state['*installconfig.InstallConfig'].config.platform.openstack.ingressVIPs[0] }}"
openshift_install_state['*installconfig.InstallConfig'].config.platform.openstack.ingressVIPs[0] }}" + jump_server_vm_name: "{{ ocp_cluster_metadata.infraID }}-jump-server" + ocp_cluster_kubeadmin_pw: "{{ ocp_cluster_kubeadmin_pw_slurp.content | b64decode }}" + +- name: "RHOS information" + hosts: localhost + gather_facts: true + + pre_tasks: + + - name: "Recover rhos_ocp_facts from VM if required" + ansible.builtin.set_fact: + rhos_ocp_facts: "{{ hostvars[vm_name]['rhos_ocp_facts'] }}" + when: rhos_ocp_facts is undefined + + tasks: + # Jump Server + - name: "Get information from Jump Server" + openstack.cloud.server_info: + auth_type: "{{ rhos_auth_type }}" + auth: "{{ rhos_auth }}" + name: "{{ rhos_ocp_facts.jump_server_vm_name }}" + register: jump_server_info + + - name: "Print ump Server information" + ansible.builtin.debug: + msg: "jump_server_info: {{ jump_server_info }}" + verbosity: 2 - - name: "Get Ingress Floating IP information" + - name: "Get Jump Server Floating IP information" openstack.cloud.floating_ip_info: - auth: - project_name: "{{ openstack_auth.openstack_project_name }}" - username: "{{ openstack_auth.openstack_console_user }}" - password: "{{ openstack_auth.openstack_console_password }}" - user_domain_name: "{{ openstack_auth.openstack_user_domain }}" - project_domain_name: "{{ openstack_auth.openstack_project_domain }}" - auth_url: "{{ openstack_auth.openstack_os_auth_url }}" - floating_ip_address: "{{ floating_ip_ingress }}" - register: rhos_floating_ip_ingress_info_res - - - name: "Get Ingress Floating IP information" - debug: - msg: "rhos_floating_ip_ingress_info_res: {{ rhos_floating_ip_ingress_info_res }}" - verbosity: 0 + auth: "{{ rhos_auth }}" + auth_type: "{{ rhos_auth_type }}" + floating_ip_address: "{{ jump_server_info.servers[0].access_ipv4 }}" + register: rhos_jump_server_floating_ip - - name: "Get Ingress Port information" - openstack.cloud.port_info: - auth: - project_name: "{{ openstack_auth.openstack_project_name }}" - username: "{{ openstack_auth.openstack_console_user }}" - password: "{{ openstack_auth.openstack_console_password }}" - user_domain_name: "{{ openstack_auth.openstack_user_domain }}" - project_domain_name: "{{ openstack_auth.openstack_project_domain }}" - auth_url: "{{ openstack_auth.openstack_os_auth_url }}" - filters: - name: "{{ ocp_cluster_id }}-ingress-port" - register: rhos_ocp_cluster_ingress_port - - - name: "Print Ingress Port details" - debug: - msg: "{{item}}" - verbosity: 0 - loop: - - "rhos_ocp_cluster_ingress_port: {{ rhos_ocp_cluster_ingress_port }}" - - "rhos_ocp_cluster_ingress_port.openstack_ports[0].fixed_ips: {{ rhos_ocp_cluster_ingress_port.openstack_ports[0].fixed_ips }}" - - "rhos_ocp_cluster_ingress_port.openstack_ports[0].fixed_ips[0].ip_address: {{ rhos_ocp_cluster_ingress_port.openstack_ports[0].fixed_ips[0].ip_address }}" - - - name: "Print server details" - debug: - msg: "openstack --os-cloud openstack floating ip set --fixed-ip-address {{ rhos_ocp_cluster_ingress_port.openstack_ports[0].fixed_ips[0].ip_address }} --port {{ rhos_ocp_cluster_ingress_port.openstack_ports[0].id }} {{ rhos_floating_ip_ingress_info_res.floating_ips[0].id }}" + - name: "Print Jump Server Floating IP information" + ansible.builtin.debug: + msg: "rhos_jump_server_floating_ip: {{ rhos_jump_server_floating_ip }}" + verbosity: 2 + + # Ingress + - name: "Get Cluster Ingress Floating IP information" + openstack.cloud.floating_ip_info: + auth: "{{ rhos_auth }}" + auth_type: "{{ rhos_auth_type }}" + # floating_ip_address: "{{ 
openshift_install_state['*installconfig.InstallConfig'].config.platform.openstack.apiFloatingIP }}" + fixed_ip_address: "{{ rhos_ocp_facts.ingress_fixed_ip }}" + register: rhos_ocp_ingress_floating_ip + + - name: "Print Cluster Ingress Floating IP information" + ansible.builtin.debug: + msg: "rhos_ocp_ingress_floating_ip: {{ rhos_ocp_ingress_floating_ip }}" + verbosity: 2 + + # - name: "Get Cluster Ingress Port information" + # openstack.cloud.port_info: + # auth: "{{ rhos_auth }}" + # auth_type: "{{ rhos_auth_type }}" + # filters: + # name: "{{ ocp_cluster_id }}-ingress-port" + # register: rhos_ocp_cluster_ingress_port + + # - name: "Print Cluster Ingress Port details" + # ansible.builtin.debug: + # msg: "{{ item }}" + # verbosity: 2 + # loop: + # - "rhos_ocp_cluster_ingress_port: {{ rhos_ocp_cluster_ingress_port }}" + # - "rhos_ocp_cluster_ingress_port.openstack_ports[0].fixed_ips: {{ rhos_ocp_cluster_ingress_port.openstack_ports[0].fixed_ips }}" + # - "rhos_ocp_cluster_ingress_port.openstack_ports[0].fixed_ips[0].ip_address: {{ rhos_ocp_cluster_ingress_port.openstack_ports[0].fixed_ips[0].ip_address }}" + + # API + - name: "Get Cluster API Floating IP information" + openstack.cloud.floating_ip_info: + auth: "{{ rhos_auth }}" + auth_type: "{{ rhos_auth_type }}" + floating_ip_address: "{{ rhos_ocp_facts.api_floating_ip }}" + register: rhos_ocp_api_floating_ip + + - name: "Print API Ingress Floating IP information" + ansible.builtin.debug: + msg: "rhos_ocp_api_floating_ip: {{ rhos_ocp_api_floating_ip }}" + verbosity: 2 + + # - name: "Get API Ingress Port information" + # openstack.cloud.port_info: + # auth: "{{ rhos_auth }}" + # auth_type: "{{ rhos_auth_type }}" + # filters: + # name: "{{ ocp_cluster_id }}-ingress-port" + # register: rhos_ocp_cluster_ingress_port + + # - name: "Print API Ingress Port details" + # ansible.builtin.debug: + # msg: "{{ item }}" + # verbosity: 2 + # loop: + # - "rhos_ocp_cluster_ingress_port: {{ rhos_ocp_cluster_ingress_port }}" + # - "rhos_ocp_cluster_ingress_port.openstack_ports[0].fixed_ips: {{ rhos_ocp_cluster_ingress_port.openstack_ports[0].fixed_ips }}" + # - "rhos_ocp_cluster_ingress_port.openstack_ports[0].fixed_ips[0].ip_address: {{ rhos_ocp_cluster_ingress_port.openstack_ports[0].fixed_ips[0].ip_address }}" + + # - name: "Print server details" + # ansible.builtin.debug: + # msg: "openstack --os-cloud openstack floating ip set --fixed-ip-address {{ rhos_ocp_cluster_ingress_port.openstack_ports[0].fixed_ips[0].ip_address }} --port {{ rhos_ocp_cluster_ingress_port.openstack_ports[0].id }} {{ rhos_floating_ip_ingress_info_res.floating_ips[0].id }}" + # verbosity: 2 + + - name: "Installation resume" + ansible.builtin.debug: + msg: + - "OCP Resources:" + - " kubeadmin password: {{ rhos_ocp_facts.ocp_cluster_kubeadmin_pw }}" + - " Console: https://console-openshift-console.apps.{{ rhos_ocp_facts.cluster_name }}.snowdrop.dev/" + - " oc login token at: https://oauth-openshift.apps.{{ rhos_ocp_facts.cluster_name }}.snowdrop.dev/oauth/token/request" + - "" + - "Jump Server:" + - " Floating IP: {{ jump_server_info.servers[0].access_ipv4 }}" + - "" + - "API:" + - " Floating IP: {{ rhos_ocp_api_floating_ip.floating_ips[0].floating_ip_address }}" + - "" + - "Ingress:" + - " Floating IP: {{ rhos_ocp_ingress_floating_ip.floating_ips[0].floating_ip_address }}" verbosity: 0 + ... 
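The refactored `ocp_openstack_info.yml` above leans on a pattern worth calling out: facts set in a play that runs on one host are later read from a play on another host through `hostvars`. A minimal, self-contained sketch of that hand-off, where the inventory name `my-vm` is hypothetical:

[source,yaml]
----
---
- name: "Record a fact on the remote host"
  hosts: my-vm
  gather_facts: false
  tasks:
    - name: "Set a fact on my-vm"
      ansible.builtin.set_fact:
        example_fact: "some value"

- name: "Consume the fact on localhost"
  hosts: localhost
  gather_facts: false
  tasks:
    - name: "Read the fact set on my-vm through hostvars"
      ansible.builtin.debug:
        msg: "{{ hostvars['my-vm']['example_fact'] }}"
----

Because `set_fact` results persist in `hostvars` for the rest of the run, the localhost play can consume them without gathering anything itself, which is exactly how `rhos_ocp_facts` travels from the bootstrap VM to the "RHOS information" play.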
diff --git a/ansible/playbook/ocp/ocp_openstack_install.yml b/ansible/playbook/ocp/ocp_openstack_install.yml index 3c9363e8..f2a8dcb6 100644 --- a/ansible/playbook/ocp/ocp_openstack_install.yml +++ b/ansible/playbook/ocp/ocp_openstack_install.yml @@ -1,26 +1,113 @@ --- - name: "Build OpenStack authentication for v3password" - import_playbook: "../openstack/openstack_auth_passstore_v3password.yml" + ansible.builtin.import_playbook: "../openstack/openstack_auth_passstore_v3password.yml" + +- name: "Build GoDaddy authentication, if not provided" + import_playbook: "../godaddy/godaddy_auth_passwordstore.yml" + when: use_dns and dns_provider == 'godaddy' + +- name: "Get localhost user home" + hosts: "localhost" + gather_facts: true + + tasks: + + - name: Get localhost user home + ansible.builtin.set_fact: + localhost_user_home: "{{ ansible_env.HOME }}" + +- name: "Install host requirements" + hosts: "{{ ocp_bootstrap_host | default('localhost') }}" + gather_facts: true + vars: + ansible_remote_tmp: /tmp + + tasks: + + - name: Copy public ssh key + ansible.builtin.copy: + src: "{{ hostvars['localhost']['localhost_user_home'] }}/.ssh/id_rsa_snowdrop_openstack.pub" + dest: "{{ ansible_env.HOME }}/.ssh/id_rsa_snowdrop_openstack.pub" + mode: '0600' + + - name: Copy facts from localhost if using bootstrap host + ansible.builtin.set_fact: + rhos_auth: "{{ hostvars['localhost']['rhos_auth'] }}" + rhos_auth_type: "{{ hostvars['localhost']['rhos_auth_type'] }}" + when: ocp_bootstrap_host is defined + + - name: Copy requirements files to host + ansible.builtin.copy: + src: "{{ requirements_file.file_location }}/{{ requirements_file.file_name }}" + dest: "/tmp/{{ requirements_file.file_name }}" + mode: '0644' + loop: + - {file_name: "requirements.txt", file_location: "../../.."} + - {file_name: "requirements.yml", file_location: "../../../collections"} + loop_control: + loop_var: requirements_file + + # - name: Install required packages + # ansible.builtin.package: + # name: "{{ package_to_install }}" + # state: present + # become: true + # loop: + # - httpd-tools + # - python3-pip + # loop_control: + # loop_var: package_to_install + # when: ocp_bootstrap_host is defined + + - name: Install specified python requirements + ansible.builtin.pip: + requirements: /tmp/requirements.txt + + - name: Install collections and roles together + community.general.ansible_galaxy_install: + type: both + requirements_file: /tmp/requirements.yml - name: "Install OCP" - hosts: localhost + hosts: "{{ ocp_bootstrap_host | default('localhost') }}" gather_facts: true + vars: + ansible_remote_tmp: /tmp - # pre_tasks: - # - name: "Set openstack_auth facts" - # set_fact: - # openstack_auth: - # openstack_project_name: "{{ query('passwordstore', 'openstack/host/project_name')[0] }}" - # openstack_console_user: "{{ query('passwordstore', 'openstack/host/console_user')[0] }}" - # openstack_console_password: "{{ query('passwordstore', 'openstack/host/console_pw')[0] }}" - # openstack_user_domain: "{{ query('passwordstore', 'openstack/host/console_domain')[0] }}" - # openstack_project_domain: "{{ query('passwordstore', 'openstack/host/os_domain')[0] }}" - # openstack_os_auth_url: "{{ query('passwordstore', 'openstack/host/os_auth_url')[0] }}" - tasks: - name: "Deploy OCP" - import_role: + ansible.builtin.import_role: name: 'ocp_cluster' vars: state: present + + post_tasks: + + - name: "Executing the post-installation steps" + ansible.builtin.debug: + msg: + - "Executing the post installation steps. 
If it fails these steps can be executed manually:"
+          - "$ ansible-playbook ansible/playbook/ocp/ocp_openstack_install_post.yml -e tmp_directory={{ ocp_cluster.tmp_directory }} -e ocp_root_directory={{ ocp_cluster.ocp_root_directory }} -e ocp_cluster_name={{ ocp_cluster_name }} {% if ocp_bootstrap_host is defined %}-e ocp_bootstrap_host={{ ocp_bootstrap_host }}{% endif %}"
+
+  # - name: "Base64 encode OCP installation folder"
+  #   ansible.builtin.set_fact:
+  #     ansible_installation_folder_base64: "{{ lookup('ansible.builtin.file', ocp_cluster.tmp_directory + '/' + ocp_cluster_name + '-data.tar.gz') | b64encode }}"
+
+  # - name: "Store the OCP installation folder on the passwordstore"
+  #   ansible.builtin.set_fact:
+  #     ansible_installation_folder_passwordstore: "{{ query('passwordstore', 'openstack/' + ocp_cluster.metadata.infraID + '/install_dir create=True userpass=' + ansible_installation_folder_base64 )[0] }}"
+
+- name: "Post deployment steps"
+  ansible.builtin.import_playbook: "ocp_openstack_install_post.yml"
+
+- name: "Deploy Jump Server"
+  ansible.builtin.import_playbook: "ocp_openstack_create_jump_server.yml"
+  vars:
+    vm_name: "{{ ocp_cluster_metadata.infraID }}-jump-server"
+
+- name: "Print cluster info"
+  ansible.builtin.import_playbook: "ocp_openstack_info.yml"
+  vars:
+    vm_name: "{{ ocp_cluster_metadata.infraID }}-jump-server"
 ...
+# ansible-playbook ansible/playbook/ocp/ocp_openstack_install.yml -e ocp_root_directory=/opt/ocp -e ocp_cluster_name=ocp-sdev -e openshift_pull_secret=${OCP_PULL_SECRET} -K
diff --git a/ansible/playbook/ocp/ocp_openstack_install_post.yml b/ansible/playbook/ocp/ocp_openstack_install_post.yml
new file mode 100644
index 00000000..d6afa7a8
--- /dev/null
+++ b/ansible/playbook/ocp/ocp_openstack_install_post.yml
@@ -0,0 +1,122 @@
+---
+- name: "Build OpenStack authentication for v3password"
+  ansible.builtin.import_playbook: "../openstack/openstack_auth_passstore_v3password.yml"
+
+- name: "Load the metadata from the OCP installation directory"
+  hosts: "{{ ocp_bootstrap_host | default('localhost') }}"
+  gather_facts: true
+
+  tasks:
+
+    - name: "Archive the installation data directory"
+      ansible.builtin.shell:
+        cmd: |
+          tar -czf {{ tmp_directory }}/{{ ocp_cluster_name }}-data.tar.gz {{ ocp_cluster_name }}/
+      args:
+        chdir: "{{ ocp_root_directory }}"
+
+    - name: Calculate installation folder
+      ansible.builtin.set_fact:
+        installation_dir: "{{ ocp_root_directory }}/{{ ocp_cluster_name }}"
+
+    - name: Get OCP cluster metadata
+      ansible.builtin.import_role:
+        name: ocp_cluster
+        tasks_from: get_metadata.yml
+
+    - name: "Slurp kubeadmin password"
+      ansible.builtin.slurp:
+        src: "{{ installation_dir }}/auth/kubeadmin-password"
+      register: kubeadmin_password_slurp
+
+    - name: "Transform kubeadmin password slurp"
+      ansible.builtin.set_fact:
+        kubeadmin_password: "{{ kubeadmin_password_slurp.content | b64decode }}"
+
+    - name: "Slurp OCP installation folder"
+      ansible.builtin.slurp:
+        src: "{{ tmp_directory }}/{{ ocp_cluster_name }}-data.tar.gz"
+      register: ansible_installation_folder_base64_slurp
+
+    - name: "Base64 encode OCP installation folder"
+      ansible.builtin.set_fact:
+        ansible_installation_folder_base64: "{{ ansible_installation_folder_base64_slurp.content }}"
+
+- name: "Get OCP installation information"
+  ansible.builtin.import_playbook: "ocp_openstack_info.yml"
+  vars:
+    # ocp_cluster_name: "{{ ocp_cluster_name }}"
+    # ocp_root_directory: "{{ ocp_root_directory }}"
+    vm_name: "{{ ocp_bootstrap_host | default(['localhost']) }}"
+
+- name: "Store the OCP information on the passwordstore"
+  hosts: "localhost"
+  gather_facts: true
+
+  tasks:
+
+    - name: "Print installation variables from the remote host"
+      ansible.builtin.debug:
+        msg:
+          - "ocp_cluster_metadata.infraID: {{ hostvars[ocp_bootstrap_host]['ocp_cluster_metadata']['infraID'] }}"
+          - "ansible_installation_folder_base64: {{ hostvars[ocp_bootstrap_host]['ansible_installation_folder_base64'] }}"
+          - "kubeadmin_password: {{ hostvars[ocp_bootstrap_host]['kubeadmin_password'] }}"
+      when: ocp_bootstrap_host is defined
+
+    - name: "Set localhost facts"
+      ansible.builtin.set_fact:
+        ocp_cluster_metadata: "{{ hostvars[ocp_bootstrap_host]['ocp_cluster_metadata'] }}"
+        ansible_installation_folder_base64: "{{ hostvars[ocp_bootstrap_host]['ansible_installation_folder_base64'] }}"
+        kubeadmin_password: "{{ hostvars[ocp_bootstrap_host]['kubeadmin_password'] }}"
+      when: ocp_bootstrap_host is defined
+
+    - name: "Store the OCP information on the bootstrap host passwordstore"
+      ansible.builtin.set_fact:
+        ansible_installation_folder_passwordstore: "{{ query('passwordstore', 'openstack/' + ocp_cluster_metadata.infraID + '-jump-server/install_dir create=True userpass=' + ansible_installation_folder_base64 )[0] }}"
+        console_user: "{{ query('passwordstore', 'openstack/' + ocp_cluster_metadata.infraID + '-jump-server/console_user create=True userpass=kubeadmin' )[0] }}"
+        console_pwd: "{{ query('passwordstore', 'openstack/' + ocp_cluster_metadata.infraID + '-jump-server/console_pwd create=True userpass=' + kubeadmin_password )[0] }}"
+        console_url: "{{ query('passwordstore', 'openstack/' + ocp_cluster_metadata.infraID + '-jump-server/console_url create=True userpass=https://console-openshift-console.apps.' + ocp_cluster_name + '.snowdrop.dev/' )[0] }}"
+        api_url: "{{ query('passwordstore', 'openstack/' + ocp_cluster_metadata.infraID + '-jump-server/api_url create=True userpass=api.' + ocp_cluster_name + '.snowdrop.dev' )[0] }}"
+
+    - name: "Store the OCP API user information on the bootstrap host passwordstore"
+      ansible.builtin.set_fact:
+        admin_user: "{{ query('passwordstore', 'openstack/' + ocp_cluster_metadata.infraID + '-jump-server/admin_user create=True userpass=' + ocp_cluster_user_admin_name )[0] }}"
+        admin_pwd: "{{ query('passwordstore', 'openstack/' + ocp_cluster_metadata.infraID + '-jump-server/admin_pwd create=True userpass=' + ocp_cluster_user_admin_pw )[0] }}"
+        dev_user: "{{ query('passwordstore', 'openstack/' + ocp_cluster_metadata.infraID + '-jump-server/dev_user create=True userpass=' + ocp_cluster_user_dev_name )[0] }}"
+        dev_pw: "{{ query('passwordstore', 'openstack/' + ocp_cluster_metadata.infraID + '-jump-server/dev_pw create=True userpass=' + ocp_cluster_user_dev_pw )[0] }}"
+      when: ocp_cluster_user_admin_name is defined and ocp_cluster_user_admin_pw is defined and ocp_cluster_user_dev_name is defined and ocp_cluster_user_dev_pw is defined
+
+  post_tasks:
+
+    - name: "Deploying jump server"
+      ansible.builtin.debug:
+        msg:
+          - "Deploying the jump server. If it fails these steps can be executed manually:"
+          - "$ ansible-playbook ansible/playbook/ocp/ocp_openstack_create_jump_server.yml -e tmp_directory={{ tmp_directory }} -e ocp_root_directory={{ ocp_root_directory }} -e ocp_cluster_name={{ ocp_cluster_name }} -e vm_name={{ ocp_cluster_metadata.infraID }}-jump-server {% if ocp_bootstrap_host is defined %}-e ocp_bootstrap_host={{ ocp_bootstrap_host }}{% endif %}"
+
+- name: "Publish cluster Console DNS records"
+  ansible.builtin.import_playbook: "../godaddy/godaddy_dns_create_passwordstore.yml"
+  vars:
+    api_environment: prod
+    dns:
+      data: "{{ ocp_cluster.floating_ip_ingress_address }}"
+      domain_name: snowdrop.dev
+      record_name: "*.apps.{{ ocp_cluster_name }}"
+      record_type: A
+
+- name: "Publish cluster API DNS records"
+  ansible.builtin.import_playbook: "../godaddy/godaddy_dns_create_passwordstore.yml"
+  vars:
+    api_environment: prod
+    dns:
+      data: "{{ ocp_cluster.floating_ip_api_address }}"
+      domain_name: snowdrop.dev
+      record_name: "api.{{ ocp_cluster_name }}"
+      record_type: A
+
+- name: "Deploy Jump Server"
+  ansible.builtin.import_playbook: "ocp_openstack_create_jump_server.yml"
+  vars:
+    vm_name: "{{ ocp_cluster_metadata.infraID }}-jump-server"
+...
+# ansible-playbook ansible/playbook/ocp/ocp_openstack_install_post.yml -e tmp_directory=/opt/ocp/_tmp/ansible.yxam0y7mbuild -e ocp_root_directory=/opt/ocp -e ocp_cluster_name=ocp-sdev
diff --git a/ansible/playbook/ocp/ocp_openstack_remove.yml b/ansible/playbook/ocp/ocp_openstack_remove.yml
index 13b00690..19edd1be 100644
--- a/ansible/playbook/ocp/ocp_openstack_remove.yml
+++ b/ansible/playbook/ocp/ocp_openstack_remove.yml
@@ -1,24 +1,32 @@
 ---
+- name: "Build OpenStack authentication for v3password"
+  import_playbook: "../openstack/openstack_auth_passstore_v3password.yml"
+
 - name: "Remove OCP"
-  hosts: localhost
+  hosts: "{{ ocp_bootstrap_host | default('localhost') }}"
   gather_facts: true
 
   pre_tasks:
-    - name: "Set openstack_auth facts"
-      set_fact:
-        openstack_auth:
-          openstack_project_name: "{{ query('passwordstore', 'openstack/host/project_name')[0] }}"
-          openstack_console_user: "{{ query('passwordstore', 'openstack/host/console_user')[0] }}"
-          openstack_console_password: "{{ query('passwordstore', 'openstack/host/console_pw')[0] }}"
-          openstack_user_domain: "{{ query('passwordstore', 'openstack/host/console_domain')[0] }}"
-          openstack_project_domain: "{{ query('passwordstore', 'openstack/host/os_domain')[0] }}"
-          openstack_os_auth_url: "{{ query('passwordstore', 'openstack/host/os_auth_url')[0] }}"
-
+    - name: Check required variables
+      ansible.builtin.assert:
+        that:
+          - "(ocp_root_directory is defined) and ( ocp_root_directory | length >= 1)"
+          - "(ocp_cluster_name is defined) and ( ocp_cluster_name | length >= 1)"
+        msg:
+          - "ocp_root_directory is required: Please specify the OCP cluster installation root directory"
+          - "ocp_cluster_name is required: Please specify the OCP cluster name"
+
+    - name: Copy facts from localhost if using bootstrap host
+      ansible.builtin.set_fact:
+        rhos_auth: "{{ hostvars['localhost']['rhos_auth'] }}"
+        rhos_auth_type: "{{ hostvars['localhost']['rhos_auth_type'] }}"
+      when: ocp_bootstrap_host is defined
+
   tasks:
     - name: "Remove OCP installation and work folders"
-      import_role:
+      ansible.builtin.import_role:
        name: 'ocp_cluster'
       vars:
         state: absent
 ...
-# ansible-playbook -i inventory/ playbook/ocp/ocp_openstack_remove.yml -e work_directory=/opt/ocp -e installation_dir=/opt/ocp/openshift-data +# ansible-playbook ansible/playbook/ocp/ocp_openstack_remove.yml -e ocp_root_directory=/opt/ocp -e ocp_cluster_name=ocp-sdev -e openshift_pull_secret=${OCP_PULL_SECRET} -K diff --git a/ansible/playbook/ocp/rhosp_init_jump_server_pass.yml b/ansible/playbook/ocp/rhosp_init_jump_server_pass.yml new file mode 100644 index 00000000..2113e361 --- /dev/null +++ b/ansible/playbook/ocp/rhosp_init_jump_server_pass.yml @@ -0,0 +1,97 @@ +--- +# Requires: +# vars: +# tmp_directory: temporary directory =/opt/ocp/_tmp/ansible.yxam0y7mbuild +# ocp_root_directory: /opt/ocp +# ocp_cluster_name: ocp-sdev +# vm_name: ocp-sdev-p75fs-jump-server +- name: "Build OpenStack authentication for v3password" + ansible.builtin.import_playbook: "../openstack/openstack_auth_passstore_v3password.yml" + +- name: "Create directory structure" + hosts: "{{ vm_name | default([]) }}" + gather_facts: true + vars: + remote_bin_folder: /home/snowdrop/.local/bin + + tasks: + # - name: Create home .local/bin folder + # ansible.builtin.file: + # path: "{{ remote_bin_folder }}" + # recurse: true + # state: directory + # owner: "{{ ansible_user_id }}" + # group: "{{ ansible_user_id }}" + # mode: '0755' + + # - name: Copy installation folder to remote host + # ansible.builtin.copy: + # src: /tmp/ocp-installation.tgz + # dest: /tmp/ocp-installation.tgz + + # - name: Extract OCP installation into /home/snowdrop + # ansible.builtin.unarchive: + # src: /tmp/ocp-installation.tgz + # dest: /home/snowdrop + + - name: Download OCP files + ansible.builtin.import_role: + name: ocp_cluster + tasks_from: install_prepare.yml + vars: + state: absent + # ocp_bin_directory: "{{ remote_bin_folder }}" + + - name: Download OCP files + ansible.builtin.import_role: + name: ocp_cluster + tasks_from: download_installation_files.yml + # vars: + # ocp_cluster_bin_directory: "{{ remote_bin_folder }}" + +- name: "Extract installation directory from passwordstore" + hosts: "localhost" + gather_facts: false + + tasks: + + - name: "Extract installation directory from passwordstore" + ansible.builtin.copy: + content: "{{ query('passwordstore', 'openstack/' + ocp_cluster_name + '/install_dir')[0] | b64decode }}" + dest: /tmp/ocp-installation-{{ ocp_cluster_name }}.tgz + when: ocp_bootstrap_host is not defined + + - name: "Extract installation directory from passwordstore" + ansible.builtin.copy: + content: "{{ query('passwordstore', 'openstack/' + ocp_bootstrap_host + '/install_dir')[0] | b64decode }}" + dest: /tmp/ocp-installation-{{ ocp_cluster_name }}.tgz + when: ocp_bootstrap_host is defined + +- name: "Expand installation directory on Jump Server" + hosts: "{{ vm_name | default([]) }}" + gather_facts: true + vars: + remote_bin_folder: /home/snowdrop/.local/bin + + tasks: + + - name: Copy installation folder to remote host + ansible.builtin.copy: + src: /tmp/ocp-installation-{{ ocp_cluster_name }}.tgz + dest: /tmp/ocp-installation-{{ ocp_cluster_name }}.tgz + + - name: "Create OCP cluster directory" + ansible.builtin.file: + path: "{{ ocp_root_directory }}/{{ ocp_cluster_name }}" + state: directory + owner: "{{ ansible_user_id }}" + group: "{{ ansible_user_id }}" + mode: '0755' + + - name: Extract OCP installation into /home/snowdrop + ansible.builtin.unarchive: + src: /tmp/ocp-installation-{{ ocp_cluster_name }}.tgz + dest: "{{ ocp_root_directory }}/{{ ocp_cluster_name }}" + +... 
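Nearly every piece of host state in these playbooks flows through the `passwordstore` lookup, both reading existing entries and creating them with `create=True userpass=...`. A minimal sketch of the two usages, under the assumption of an initialised pass store and the `community.general` collection (which provides the lookup); the entry names below are illustrative:

[source,yaml]
----
---
- name: "Passwordstore read and create demo"
  hosts: localhost
  gather_facts: false
  tasks:
    - name: "Read an existing entry"
      ansible.builtin.debug:
        msg: "{{ query('passwordstore', 'openstack/demo-vm/ansible_ssh_host')[0] }}"

    - name: "Create (or reuse) an entry with a fixed value"
      ansible.builtin.set_fact:
        demo_user: "{{ query('passwordstore', 'openstack/demo-vm/os_user create=True userpass=snowdrop')[0] }}"
----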
+
diff --git a/ansible/playbook/openstack/README.adoc b/ansible/playbook/openstack/README.adoc
index e7dbc124..0bdfff0e 100644
--- a/ansible/playbook/openstack/README.adoc
+++ b/ansible/playbook/openstack/README.adoc
@@ -1,9 +1,7 @@
 = OpenStack Ansible Playbooks
 Snowdrop Team (Antonio Costa)
-Snowdrop Team (Antonio Costa)
 :icons: font
 :revdate: {docdate}
-:revdate: {docdate}
 :toc: left
 :description: This document describes OpenStack specific playbooks.
 ifdef::env-github[]
@@ -28,26 +26,26 @@ NOTE: The list of flavors is identified on the link:../../../openstack/README.ad
 == Playbooks
 
-=== Create a VM
+=== Create a VM with Passwordstore
 
 Create an OpenStack instance (aka a VM) using link:../../../passwordstore/README.adoc[passwordstore] as tools to manage the credentials, information.
 
-After creating the VM this playbook also executes the VM secure host playbook. More information link:../README.adoc#secure-host[here].
+[NOTE]
+====
+The playbook also uses the variables defined in the link:https://github.com/snowdrop/ansible-collection-cloud-infra/blob/main/roles/openstack_vm/defaults/main.yml[`openstack_vm` role of the Snowdrop Cloud Infra Ansible Collection],
+ which can also be overridden.
+====
 
-.openstack_vm_create_passwordstore parameters
+.OpenStack VM Create Passwordstore parameters
 [cols="20%,80%"]
 |===
 |Field name | Description
 
-| `vm_name`
+| `default_generic_key_name`
 
 [.fuchsia]#string#
 
-[.red]#required#
-
-a| Name of the VM being created.
-
-This name will be used both as hostname as well as Ansible Inventory name.
+a| Generic key name
 
 | `openstack.vm`
 
@@ -57,10 +55,33 @@
 
 a| Map with required attributes for RHOS.
 
-Check below for more details.
+Check the <<openstack-vm-map-param-table>> table below for more details.
+
+| `rhos_auth_type`
+
+[.fuchsia]#string#
+
+a| RHOSP Authentication type
+
+Check the `openstack` CLI man page (`man openstack`) for available types, or
+ our link:../../../openstack/README.adoc#rhosp-authentication[Red Hat Open Stack document].
+
+* *`v3password` <= Default*
+* ...
+
+| `vm_name`
+
+[.fuchsia]#string#
+
+[.red]#required#
+
+a| Name of the VM being created.
+
+This name will be used both as hostname as well as Ansible Inventory name.
 
 |===
 
+[#openstack-vm-map-param-table,reftext="`openstack.vm` parameter map"]
 .openstack.vm map parameter
 [cols="20%,80%"]
 |===
@@ -92,35 +113,45 @@
 
 a| Network provider in RHOS
 
 |===
 
-[source,bash]
-----
-$ VM_NAME=vm20210221-t01
-----
+To create the RHOSP VM, launch the `openstack_vm_create_passwordstore.yml` Ansible Playbook
+ using the following command.
 
 [source,bash]
 ----
-$ ansible-playbook playbook/openstack/openstack_vm_create_passwordstore.yml -e '{"openstack": {"vm": {"network": "provider_net_shared","image": "Fedora-Cloud-Base-35", "flavor": "m1.medium"}}}' -e key_name=test-adm-key -e vm_name=${VM_NAME}
+ansible-playbook ansible/playbook/openstack/openstack_vm_create_passwordstore.yml -e '{"openstack": {"vm": {"network": "provider_net_shared","image": "Fedora-Cloud-Base-37", "flavor": "m1.medium"}}}' -e vm_name=snowdrop_sample_vm
 ----
 
-Although some failures might occur some might be ignored which shouldn't affect thhe process. This playbook should finish with no failed tasks.
+[NOTE]
+====
+Some error messages might show up during the installation and be ignored by the process. Nevertheless, the playbook should finish with no failed tasks.
+====
+
+This is a sample result of the playbook execution.
 
 [source]
-....
-PLAY RECAP **********************************************************************************************************************************************************************************************************************
+----
+PLAY RECAP *******************************************************************************************************
 localhost                  : ok=68   changed=20   unreachable=0    failed=0    skipped=13   rescued=0    ignored=1
 vm20210221-t01             : ok=32   changed=20   unreachable=0    failed=0    skipped=1    rescued=0    ignored=0
+----
 
-....
+After creating the VM this playbook also executes the VM secure host playbook. This is an independent playbook that can be executed against any host. More information link:../README.adoc#secure-host[here].
 
-The playbook also uses the variables defined in `roles/openstack/vm/defaults/main.yml`. Those variables can also be overridden using the syntax above.
+Besides creating the VM the playbook also performs the following operations:
 
-=== Delete a VM
+* Store the host SSH keys on the controller device.
+* Add entries to the `~/.ssh/known_hosts` file for this host.
+* Add several entries to the passwordstore database in order to build the Ansible Inventory.
 
-To delete a VM, simply execute the `openstack_vm_remove_aggregate` playbook.
+=== Delete a VM with Passwordstore
+
+To delete a VM, simply execute the `openstack_vm_remove_passwordstore` playbook. Besides
+ removing the VM from RHOSP, it will also remove the entries from the passwordstore
+ database as well as any VM-local SSH keys and entries from known hosts.
 
 [source,bash]
 ----
-$ ansible-playbook ansible/playbook/openstack/openstack_vm_remove_passwordstore.yml -e vm_name=${VM_NAME}
+ansible-playbook ansible/playbook/openstack/openstack_vm_remove_passwordstore.yml -e vm_name=snowdrop_sample_vm
 ----
 
 Although some failures might occur, some might be ignored, which shouldn't affect the process.
@@ -131,66 +162,3 @@
 PLAY RECAP **********************************************************************************************************************************************************************************************************************
 localhost                  : ok=17   changed=5    unreachable=0    failed=0    skipped=1    rescued=0    ignored=2
 
 ....
-
-=== Connect to the new instance
-
-All the information related to the host will be managed by our ansible passwordstore link:../../roles/passstore[roles] and link:../passstore[playbooks]. The current implementation also stores the ssh public and secret keys locally on each `~/.ssh` folder. To improve usability link:../../../tools/passstore-vm-ssh.sh[this] bash script has been created to make it easier to perform this connection. More documentation on the bash script can be found link:../../../tools/README.md[here].
-
-To SSH connect to a VM use the `tools/passstore-vm-ssh.sh` bash script.
-
-The 3 arguments to pass to the script are the following.
-
-.Script options
-[%header,cols="2,4"]
-|===
-| Command | Description
-
-| 1: `VM_PROVIDER`
-
-[.fuchsia]#string# / [.red]#required#
-a| Cloud provider
-
-Choices:
-
-* `hetzner`
-* `openstack`
-
-| 2: `VM_NAME`
-
-[.fuchsia]#string# / [.red]#required#
-a| Name of the VM to connect to.
-
-This is the inventory name of the VM.
-
-| 3: `PASSWORD_STORE_DIR`
-
-[.fuchsia]#string#
-a| Folder where the PASSWORDSTORE database is located
-
-*Default*: `PASSWORD_STORE_DIR` environment variable, if set.
-If this parameter is not provided and no `PASSWORD_STORE_DIR` env
-variable is set the script will fail as it doesn't know the location
-of the passwordstore project.
-
-|===
-
-
-.Connect to a passwordstore VM
-[source,bash]
-----
-./tools/passstore-vm-ssh.sh openstack ${VM_NAME}
-----
-
-This should connect ot the newly created VM.
-
-[source,bash]
-======
-Last login: Thu Jan  1 00:00:00 1970 from x.x.x.x
-------------------
-
-This machine is property of RedHat.
-Access is forbidden to all unauthorized person.
-All activity is being monitored.
-
-Welcome to vm20210221-t01..
-======
diff --git a/ansible/playbook/openstack/floating_ip_create_passstore.yml b/ansible/playbook/openstack/floating_ip_create_passstore.yml
new file mode 100644
index 00000000..768cf6bf
--- /dev/null
+++ b/ansible/playbook/openstack/floating_ip_create_passstore.yml
@@ -0,0 +1,123 @@
+---
+# Requires:
+#   vars:
+#     tmp_directory: temporary directory, e.g. /opt/ocp/_tmp/ansible.yxam0y7mbuild
+#     ocp_root_directory: /opt/ocp
+#     ocp_cluster_name: ocp-sdev
+#     vm_name: ocp-sdev-p75fs-jump-server
+- name: "Build OpenStack authentication for v3password"
+  ansible.builtin.import_playbook: "../openstack/openstack_auth_passstore_v3password.yml"
+
+- name: "Create floating IP for Jump Server"
+  hosts: localhost
+  gather_facts: true
+
+  tasks:
+
+    - name: "Get floating IP by associated fixed IP address"
+      openstack.cloud.floating_ip_info:
+        auth: "{{ rhos_auth }}"
+        auth_type: "{{ rhos_auth_type }}"
+        fixed_ip_address: "{{ query('passwordstore', 'openstack/' + vm_name + '/ansible_ssh_host' )[0] }}"
+      register: fip
+      ignore_errors: true
+
+    - name: "Print FIP query"
+      ansible.builtin.debug:
+        msg:
+          - "fip: {{ fip }}"
+        verbosity: 1
+
+    - name: "Set floating IP variable if already created"
+      ansible.builtin.set_fact:
+        jump_server_floating_ip: "{{ fip.floating_ips[0].floating_ip_address }}"
+      when: not fip.failed and (fip.floating_ips | length > 0)
+
+    - name: "Create Floating IP for Jump Server"
+      openstack.cloud.floating_ip:
+        auth: "{{ rhos_auth }}"
+        auth_type: "{{ rhos_auth_type }}"
+        state: present
+        reuse: true
+        server: "{{ vm_name }}"
+        network: "{{ rhos_network | default('provider_net_cci_13') }}"
+        wait: true
+        timeout: 180
+      register: rhos_floating_ip_jump_server_res
+      when: jump_server_floating_ip is not defined
+
+    - name: "Print rhos_floating_ip_jump_server_res"
+      ansible.builtin.debug:
+        msg:
+          - "rhos_floating_ip_jump_server_res: {{ rhos_floating_ip_jump_server_res }}"
+        verbosity: 1
+
+    - name: "Set floating IP variable if newly created"
+      ansible.builtin.set_fact:
+        jump_server_floating_ip: "{{ rhos_floating_ip_jump_server_res.floating_ip.floating_ip_address }}"
+      when: jump_server_floating_ip is not defined
+
+    - name: "Print jump_server_floating_ip"
+      ansible.builtin.debug:
+        msg:
+          - "jump_server_floating_ip: {{ jump_server_floating_ip }}"
+        verbosity: 1
+
+    - name: "Store Floating IP on the passwordstore"
+      ansible.builtin.set_fact:
+        jump_server_floating_ip_passwordstore: "{{ query('passwordstore', 'openstack/' + vm_name + '/floating_ip create=True userpass=' + jump_server_floating_ip )[0] }}"
+
+- name: "Wait for the VM to boot so we can ssh"
+  hosts: "{{ vm_name | default([]) }}"
+  gather_facts: no
+
+  tasks:
+
+    - name: "Wait for connection to host"
+      ansible.builtin.wait_for:
+        host: "{{ query('passwordstore', 'openstack/' + inventory_hostname + '/floating_ip')[0] }}"
+        port: "{{ query('passwordstore', 'openstack/' + inventory_hostname + '/ansible_ssh_port')[0] }}"
+        timeout: 120
+      vars:
+        ansible_connection: local
+      register: wait_for_connection_reg
+
+  post_tasks:
+
+    - name: Refresh the inventory so the newly added host is available
+      meta: refresh_inventory
+
+    - name: "DON'T FORGET TO SECURE YOUR SERVER"
+      ansible.builtin.debug:
+        msg:
+          - "DON'T FORGET TO SECURE YOUR SERVER!!!"
+          - ""
+          - "Trying to automatically start securing the server."
+          - "For manual execution: $ ansible-playbook ansible/playbook/sec_host.yml -e vm_name={{ vm_name }} -e provider=openstack"
+
+- name: "Add to known hosts"
+  hosts: localhost
+  gather_facts: true
+
+  tasks:
+
+    - name: "Add host Floating IP to known hosts {{ hostvars[vm_name]['floating_ip'] }}"
+      ansible.builtin.known_hosts:
+        name: "{{ hostvars[vm_name]['floating_ip'] }}"
+        key: "{{ lookup('pipe', 'ssh-keyscan {{ hostvars[vm_name].floating_ip }}') }}"
+        hash_host: true
+
+- name: "Add to known hosts"
+  hosts: "{{ vm_name | default([]) }}"
+  gather_facts: true
+
+  tasks:
+
+    - name: "Add host Floating IP to known hosts {{ hostvars[vm_name]['floating_ip'] }}"
+      ansible.builtin.known_hosts:
+        name: "{{ hostvars[vm_name]['floating_ip'] }}"
+        key: "{{ lookup('pipe', 'ssh-keyscan {{ hostvars[vm_name].floating_ip }}') }}"
+        hash_host: true
+
+...
+# ansible-playbook ansible/playbook/openstack/floating_ip_create_passstore.yml -e rhos_network=provider_net_cci_13 -e vm_name=ocp-jump-server
diff --git a/ansible/playbook/openstack/openstack_auth_passstore_v3applicationcredential.yml b/ansible/playbook/openstack/openstack_auth_passstore_v3applicationcredential.yml
index 39af40c9..6ee4f311 100644
--- a/ansible/playbook/openstack/openstack_auth_passstore_v3applicationcredential.yml
+++ b/ansible/playbook/openstack/openstack_auth_passstore_v3applicationcredential.yml
@@ -4,6 +4,11 @@
 
   tasks:
 
+    - name: Check if RHOSP authentication host is available
+      ansible.builtin.uri:
+        url: "{{ query('passwordstore', 'openstack/host/os_auth_url')[0] }}"
+        method: GET
+
     - name: "Set facts"
       ansible.builtin.set_fact:
         rhos_authentication_type: v3applicationcredential
diff --git a/ansible/playbook/openstack/openstack_auth_passstore_v3password.yml b/ansible/playbook/openstack/openstack_auth_passstore_v3password.yml
index 146b7bfb..582e06d0 100644
--- a/ansible/playbook/openstack/openstack_auth_passstore_v3password.yml
+++ b/ansible/playbook/openstack/openstack_auth_passstore_v3password.yml
@@ -4,15 +4,11 @@
   gather_facts: false
 
   tasks:
-    # - name: "Set openstack_auth facts"
-    #   ansible.builtin.set_fact:
-    #     openstack_auth:
-    #       openstack_project_name: "{{ query('passwordstore', 'openstack/host/project_name')[0] }}"
-    #       openstack_console_user: "{{ query('passwordstore', 'openstack/host/console_user')[0] }}"
-    #       openstack_console_password: "{{ query('passwordstore', 'openstack/host/console_pw')[0] }}"
-    #       openstack_user_domain: "{{ query('passwordstore', 'openstack/host/console_domain')[0] }}"
-    #       openstack_project_domain: "{{ query('passwordstore', 'openstack/host/os_domain')[0] }}"
-    #       openstack_os_auth_url: "{{ query('passwordstore', 'openstack/host/os_auth_url')[0] }}"
+
+    - name: Check if RHOSP authentication host is available
+      ansible.builtin.uri:
+        url: "{{ query('passwordstore', 'openstack/host/os_auth_url')[0] }}"
+        method: GET
 
     - name: "Set authentication vars"
       ansible.builtin.set_fact:
@@ -22,12 +18,6 @@
         project_domain_name: "{{ query('passwordstore', 'openstack/host/os_domain')[0] }}"
         project_name: "{{ query('passwordstore', 'openstack/host/project_name')[0] }}"
         username: "{{ query('passwordstore', 'openstack/host/console_user')[0] }}"
-        user_domain_name: "{{ query('passwordstore', 'openstack/host/console_domain')[0] }}"
-        # auth_url: "{{ openstack_auth.openstack_os_auth_url }}"
-        # password: "{{ openstack_auth.openstack_console_password }}"
-        # project_domain_name: "{{ openstack_auth.openstack_project_domain }}"
-        # project_name: "{{ openstack_auth.openstack_project_name }}"
-        # username: "{{ openstack_auth.openstack_console_user }}"
-        # user_domain_name: "{{ openstack_auth.openstack_user_domain }}"
+        user_domain_name: "{{ query('passwordstore', 'openstack/host/console_domain')[0] }}"
         rhos_auth_type: v3password
 ...
diff --git a/ansible/playbook/openstack/openstack_vm_create.yml b/ansible/playbook/openstack/openstack_vm_create.yml
index bc64c0bd..fbf85f20 100644
--- a/ansible/playbook/openstack/openstack_vm_create.yml
+++ b/ansible/playbook/openstack/openstack_vm_create.yml
@@ -27,11 +27,11 @@
 
   pre_tasks:
 
-    - name: "Validate OpenStack required variables"
-      assert:
-        that:
-          - "openstack_security_group is defined"
-        fail_msg: "Missing mandatory variables: openstack_security_group"
+    # - name: "Validate OpenStack required variables"
+    #   assert:
+    #     that:
+    #       - "openstack_security_group is defined"
+    #     fail_msg: "Missing mandatory variables: openstack_security_group"
 
     # - name: "Confirm the Config File exists"
     #   stat: path="{{ config_file }}"
@@ -44,7 +44,7 @@
     # - name: "Print variables"
     #   debug:
     #     msg: "{{ item }}"
-    #   with_items:
+    #   with_items:
     #     - "hostvars[inventory_hostname]: {{ hostvars[inventory_hostname] }}"
     #     - "ansible_env: {{ ansible_env }}"
 
@@ -52,7 +52,7 @@
 
     # - name: "Set openstack_auth facts"
     #   set_fact:
-    #     openstack_auth:
+    #     openstack_auth:
     #       openstack_project_name: "{{ clouds.devstack.auth.project_name }}"
     #       openstack_console_user: "{{ clouds.devstack.auth.username }}"
     #       openstack_console_password: "{{ clouds.devstack.auth.password }}"
@@ -60,48 +60,35 @@
     #       openstack_project_domain: "{{ clouds.devstack.auth.project_domain_name }}"
    #       openstack_os_auth_url: "{{ clouds.devstack.auth.auth_url }}"
 
+    # - name: "Execute create inventory, if tagged as so"
+    #   include_role:
+    #     name: "openstack/vm"
+    #   apply:
+    #     tags:
+    #       - always
+    #   vars:
+    #     state: "present"
+
     - name: "Execute create inventory, if tagged as so"
-      include_role:
-        name: "openstack/vm"
+      ansible.builtin.include_role:
+        name: "snowdrop.cloud_infra.openstack_vm"
       apply:
         tags:
           - always
-      vars:
+      vars:
         state: "present"
-
-  post_tasks:
-    - name: Refresh the inventory so the newly added host is available
-      meta: refresh_inventory
-
-- name: "Wait for the VM to boot and we can ssh"
-  hosts: "{{ vm_name }}"
-  gather_facts: no
-
-  tasks:
-    - name: "Show 'Wait for connection to host' output"
-      debug:
-        msg:
-          - "ip : {{ query('passwordstore', 'openstack/' + vm_name + '/ansible_ssh_host')[0] }}"
-          - "port : {{ query('passwordstore', 'openstack/' + vm_name + '/ansible_ssh_port')[0] }}"
-
-    - name: "Wait for connection to host"
-      ansible.builtin.wait_for:
-        host: "{{ query('passwordstore', 'openstack/' + vm_name + '/ansible_ssh_host')[0] }}"
-        port: "{{ query('passwordstore', 'openstack/' + vm_name + '/ansible_ssh_port')[0] }}"
-        timeout: 120
-      register: wait_for_connection_reg
-
   post_tasks:
     - name: "DON'T FORGET TO SECURE YOUR SERVER"
-      debug:
-        msg: "Trying to start start server securization automatically For manual execution: $ ansible-playbook ansible/playbook/sec_host.yml -e vm_name={{ vm_name }} -e provider=openstack"
+      ansible.builtin.debug:
+        msg:
+          - "Trying to automatically start securing the server. For manual execution:"
+          - "$ ansible-playbook ansible/playbook/sec_host.yml -e vm_name={{ vm_name }} -e provider=openstack"
 
-- name: "Openstack VM init"
-  hosts: "{{ vm_name }}"
-  gather_facts: yes
-
-  roles:
-    - role: "openstack/init_vm"
+    - name:
"Print VM IP address" + ansible.builtin.debug: + msg: + - "openstack_output: {{ openstack_output }}" + - "openstack_output.server: {{ openstack_output.server }}" -... \ No newline at end of file +... diff --git a/ansible/playbook/openstack/openstack_vm_create_passwordstore.yml b/ansible/playbook/openstack/openstack_vm_create_passwordstore.yml index 16f9caf7..4419fa21 100644 --- a/ansible/playbook/openstack/openstack_vm_create_passwordstore.yml +++ b/ansible/playbook/openstack/openstack_vm_create_passwordstore.yml @@ -10,14 +10,13 @@ # . k8s_version: Kubernetes version [117 ... 121], empty for no k8s installation - name: "Init passwordstore on controller" - import_playbook: "../passstore/passstore_controller_init.yml" + ansible.builtin.import_playbook: "../passstore/passstore_controller_init.yml" vars: pass_provider: "openstack" - name: "Validate passwordstore" ansible.builtin.import_playbook: "../passstore/passstore_controller_check.yml" -# tag::initialize_passwordstore_inventory[] # tag::initialize_passwordstore_inventory[] - name: "Initialize passwordstore inventory" ansible.builtin.import_playbook: "../passstore/passstore_controller_inventory.yml" @@ -55,34 +54,45 @@ - name: "Print VM IP address" ansible.builtin.debug: msg: + - "openstack_output: {{ openstack_output }}" + - "openstack_output.server: {{ openstack_output.server }}" + - "openstack_output.server.admin_password: {{ openstack_output.server.admin_password }}" - "openstack_vm_ipv4: {{ openstack_vm_ipv4 }}" - - "VM IPV4: {{ openstack_output.server.addresses[ openstack.vm.network ][0].addr }} }}" - - "key name: {{ openstack_output.server.key_name }} }}" + - "VM IPV4: {{ openstack_output.server.addresses[ openstack.vm.network ][0].addr }}" + - "key name: {{ openstack_output.server.key_name }}" - name: "Store Host information on passwordstore" ansible.builtin.set_fact: openstack_vm_ipv4: "{{ query('passwordstore', 'openstack/' + vm_name + '/ansible_ssh_host create=True userpass=' + openstack_vm_ipv4 )[0] }}" openstack_vm_ssh_port: "{{ query('passwordstore', 'openstack/' + vm_name + '/ansible_ssh_port create=True userpass=22')[0] }}" openstack_vm_ssh_user: "{{ query('passwordstore', 'openstack/' + vm_name + '/os_user create=True userpass=snowdrop')[0] }}" + + - name: "Store optional Host information on passwordstore" + ansible.builtin.set_fact: openstack_vm_admin_password: "{{ query('passwordstore', 'openstack/' + vm_name + '/admin_password create=True userpass=' + openstack_output.server.admin_password)[0] }}" + when: openstack_output is defined and openstack_output.server is defined and openstack_output.server.admin_password is defined and openstack_output.server.admin_password - name: Refresh the inventory so the newly added host is available meta: refresh_inventory - name: "Wait for the VM to boot and we can ssh" - hosts: "{{ vm_name | default([]) }}" - gather_facts: no + # hosts: "{{ vm_name | default([]) }}" + hosts: localhost + gather_facts: False tasks: - name: "Wait for connection to host" ansible.builtin.wait_for: - host: "{{ query('passwordstore', 'openstack/' + inventory_hostname + '/ansible_ssh_host')[0] }}" - port: "{{ query('passwordstore', 'openstack/' + inventory_hostname + '/ansible_ssh_port')[0] }}" + # host: "{{ query('passwordstore', 'openstack/' + inventory_hostname + '/ansible_ssh_host')[0] }}" + # port: "{{ query('passwordstore', 'openstack/' + inventory_hostname + '/ansible_ssh_port')[0] }}" + host: "{{ query('passwordstore', 'openstack/' + vm_name + '/ansible_ssh_host')[0] }}" + port: "{{ 
query('passwordstore', 'openstack/' + vm_name + '/ansible_ssh_port')[0] }}"
         timeout: 120
-      vars:
-        ansible_connection: local
+      # vars:
+      #   ansible_connection: local
       register: wait_for_connection_reg
+      when: skip_post_installation is undefined or not skip_post_installation
 
   post_tasks:
     - name: "DON'T FORGET TO SECURE YOUR SERVER"
@@ -91,7 +101,10 @@
           - "DON'T FORGET TO SECURE YOUR SERVER!!!"
           - ""
           - "Trying to automatically start securing the server."
-          - "For manual execution: $ ansible-playbook ansible/playbook/sec_host.yml -e vm_name={{ vm_name }} -e provider=openstack"
+          - ""
+          - "For manual execution:"
+          - "$ ansible-playbook ansible/playbook/openstack/openstack_vm_init.yml -e vm_name={{ vm_name }}"
+          - "$ ansible-playbook ansible/playbook/sec_host.yml -e vm_name={{ vm_name }} -e provider=openstack"
 
 - name: "Add to known hosts"
   hosts: localhost
@@ -103,18 +116,19 @@
         name: "{{ hostvars[vm_name]['ansible_ssh_host'] }}"
         key: "{{ lookup('pipe', 'ssh-keyscan {{ hostvars[vm_name].ansible_ssh_host }}') }}"
         hash_host: true
+      when: skip_post_installation is undefined or not skip_post_installation
 
 - name: "Openstack VM init"
-  hosts: "{{ vm_name | default([]) }}"
-  gather_facts: yes
-
-  roles:
-    - role: "openstack/init_vm"
+  ansible.builtin.import_playbook: "openstack_vm_init.yml"
+  when: skip_post_installation is undefined or not skip_post_installation
+  # vars:
+  #   ansible_python_interpreter: !!null
 
 - name: "Secure new server"
   ansible.builtin.import_playbook: "../sec_host.yml"
   vars:
     provider: "openstack"
-    hosts: "{{ vm_name | default([]) }}"
+    # hosts: "{{ vm_name | default([]) }}"
   tags: [always]
+  when: skip_post_installation is undefined or not skip_post_installation
 ...
diff --git a/ansible/playbook/openstack/openstack_vm_init.yml b/ansible/playbook/openstack/openstack_vm_init.yml
index 515a3ccf..a8c0d180 100644
--- a/ansible/playbook/openstack/openstack_vm_init.yml
+++ b/ansible/playbook/openstack/openstack_vm_init.yml
@@ -1,11 +1,15 @@
 ---
 - name: "Openstack VM init"
-  hosts: "{{ vm_name }}"
-  gather_facts: yes
+  hosts: "{{ vm_name | default([]) }}"
+  gather_facts: "{{ vm_name is defined and (skip_post_installation is not defined or not skip_post_installation | bool) }}"
+  module_defaults:
+    ansible.builtin.setup:
+      gather_timeout: 45000
 
-  roles:
-    - role: "openstack/init_vm"
-      vars:
-        state: "present"
-...
\ No newline at end of file
+  tasks:
+    - name: Init RHOS VM
+      ansible.builtin.include_role:
+        name: "openstack/init_vm"
+
+...
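The reworked `openstack_vm_init.yml` above raises the fact-gathering timeout through `module_defaults` instead of setting it per task. A minimal sketch of that pattern in isolation:

[source,yaml]
----
---
- name: "Fact gathering with a raised timeout"
  hosts: localhost
  gather_facts: true
  module_defaults:
    # Applies to every ansible.builtin.setup invocation in this play,
    # including the implicit fact-gathering step.
    ansible.builtin.setup:
      gather_timeout: 45000
  tasks:
    - name: "Facts are available as usual"
      ansible.builtin.debug:
        msg: "Gathered facts for {{ ansible_hostname }}"
----

The play-level default keeps the timeout in one place, so slow hosts such as freshly booted VMs no longer fail fact gathering without every task repeating the setting.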
diff --git a/ansible/playbook/openstack/openstack_vm_delete.yml b/ansible/playbook/openstack/openstack_vm_remove.yml
similarity index 100%
rename from ansible/playbook/openstack/openstack_vm_delete.yml
rename to ansible/playbook/openstack/openstack_vm_remove.yml
diff --git a/ansible/playbook/openstack/openstack_vm_remove_awx.yml b/ansible/playbook/openstack/openstack_vm_remove_awx.yml
index 28108a97..31517dea 100644
--- a/ansible/playbook/openstack/openstack_vm_remove_awx.yml
+++ b/ansible/playbook/openstack/openstack_vm_remove_awx.yml
@@ -50,7 +50,7 @@
         fail_msg: "IaaS Provider is either undefined or not 'openstack'"
 
 - name: "Delete Server on Openstack"
-  import_playbook: "openstack_vm_delete.yml"
+  import_playbook: "openstack_vm_remove.yml"
   when: "iaas_provider is not defined or iaas_provider == 'openstack'"
 
 - name: "Remove credentials from AWX"
diff --git a/ansible/playbook/openstack/openstack_vm_remove_passwordstore.yml b/ansible/playbook/openstack/openstack_vm_remove_passwordstore.yml
index 660a8ff8..a483117c 100644
--- a/ansible/playbook/openstack/openstack_vm_remove_passwordstore.yml
+++ b/ansible/playbook/openstack/openstack_vm_remove_passwordstore.yml
@@ -9,7 +9,7 @@
   ansible.builtin.import_playbook: "../passstore/passstore_controller_check.yml"
 
 - name: "Delete Server on Openstack"
-  ansible.builtin.import_playbook: "openstack_vm_delete.yml"
+  ansible.builtin.import_playbook: "openstack_vm_remove.yml"
   # vars:
   #   openstack_auth:
   #     openstack_project_name: "{{ query('passwordstore', 'openstack/host/project_name')[0] }}"
diff --git a/ansible/playbook/openstack/vm_init.yml b/ansible/playbook/openstack/vm_init.yml
new file mode 100644
index 00000000..8d660edd
--- /dev/null
+++ b/ansible/playbook/openstack/vm_init.yml
@@ -0,0 +1,4 @@
+---
+- name: "Openstack VM init"
+  ansible.builtin.import_playbook: "openstack_vm_init.yml"
+...
diff --git a/ansible/roles/ocp_cluster/README.adoc b/ansible/roles/ocp_cluster/README.adoc
index a55a510b..1ac01deb 100644
--- a/ansible/roles/ocp_cluster/README.adoc
+++ b/ansible/roles/ocp_cluster/README.adoc
@@ -18,6 +18,101 @@
 |===
 | Parameter | Comments
 
+| `installation_dir`
+
+[.fuchsia]#string#
+a| Installation folder for the OCP files.
+
+_Default:_ `{{ ocp_root_directory }}/{{ ocp_cluster_name }}`
+
+| `ocp_cluster_name`
+
+[.fuchsia]#string#
+a| Name to be applied to the OCP cluster. It will be used as a prefix in the VM names.
+
+_Default:_ `ocp`
+
+| `ocp_cluster_user_admin_name`
+
+[.fuchsia]#string#
+a| Admin user to be created in the OCP cluster.
+
+_Default:_ `admin`
+
+| `ocp_cluster_user_admin_pw`
+
+[.fuchsia]#string#
+a| Password for the admin user.
+
+_Default:_ `admin`
+
+| `ocp_cluster_user_dev_name`
+
+[.fuchsia]#string#
+a| Developer user to be created in the OCP cluster.
+
+_Default:_ `snowdrop`
+
+| `ocp_cluster_user_dev_pw`
+
+[.fuchsia]#string#
+a| Password for the developer user.
+
+_Default:_ `snowdrop`
+
+| `ocp_master_nodes`
+
+[.fuchsia]#int#
+a| Number of master nodes in the OCP cluster.
+
+_Default:_ `3`
+
+| `ocp_root_directory`
+
+[.fuchsia]#string#
+a| Root folder for the installation. Under this folder 2 subfolders will be created:
+
+* `bin`: will store the executables for the installation which are `openshift-install`, `oc` and `kubectl`.
+* `<ocp_cluster_name>`: will store the installation data
+
+_Default:_ `/opt/ocp`
+
+| `ocp_version`
+
+[.fuchsia]#string#
+a| OCP version to install
+
+_Default:_ `4.13.9`
+
+| `ocp_worker_nodes`
+
+[.fuchsia]#int#
+a| Number of worker nodes in the OCP cluster.
+
+* Can be 0
+* *_Default:_ `3`*
+
+| `openstack_flavor_compute`
+
+[.fuchsia]#string#
+a| Flavor to be used on the compute nodes.
+
+_Default:_ `ocp4.compute`
+
+| `openstack_flavor_control_plane`
+
+[.fuchsia]#string#
+a| Flavor to be used on the control plane nodes.
+
+_Default:_ `ocp4.control`
+
+| `openstack_network_provider`
+
+[.fuchsia]#string#
+a| Network provider.
+
+_Default:_ `provider_net_cci_13`
+
 | `state`
 
 [.fuchsia]#string# / [.red]#required#
@@ -28,19 +123,8 @@
 Choices:
 
 * `present` to install the cluster
 * `absent` to remove the cluster
 
-| `work_directory`
-
-[.fuchsia]#string# / [.red]#required#
-| Temporary work directory
-
-| `installation_dir`
-
-[.fuchsia]#string#
-a| Installation folder for the OCP files, where `ocp_install_dir` is a role variable.
-Default: `work_directory/`
-
 |===
 
-
 [NOTE]
 ======
 The full set of predefined variables can be found in the link:defaults/main.yml[default file].
@@ -58,9 +142,6 @@
 include::defaults/main.yml[]
 
 ======
 
-The version of the cluster to be installed can be changed using the parameter `k8s_version`.
-The following versions are currently link:vars/main.yml[supported].
-
 == Example Playbook
 
 .Sample playbook for deploying OCP on RHOS
diff --git a/ansible/roles/ocp_cluster/defaults/main.yml b/ansible/roles/ocp_cluster/defaults/main.yml
index 8e3766b7..111bd1f3 100644
--- a/ansible/roles/ocp_cluster/defaults/main.yml
+++ b/ansible/roles/ocp_cluster/defaults/main.yml
@@ -1,8 +1,14 @@
 ocp_mirror: https://mirror.openshift.com/pub/openshift-v4/clients/ocp
-ocp_version: 4.12.8
+ocp_version: 4.13.9
 ocp_cluster_name: ocp
 ocp_user: snowdrop
-ocp_install_dir: openshift-data
+ocp_root_directory: "{{ ansible_env.HOME }}/ocp"
+ocp_master_nodes: 3
+ocp_worker_nodes: 3
+ocp_cluster_user_admin_name: admin
+ocp_cluster_user_admin_pw: admin
+ocp_cluster_user_dev_name: snowdrop
+ocp_cluster_user_dev_pw: snowdrop
 
 # tag::rhos_default_flavors[]
 # OpenStack flavors
@@ -31,3 +37,5 @@
 remove_kubeadmin: false
 use_dns: false
 # Values: godaddy, local
 dns_provider: godaddy
+
+rhos_log_path: "{{ ansible_env.HOME }}/ocp/log"
diff --git a/ansible/roles/ocp_cluster/meta/argument_specs.yml b/ansible/roles/ocp_cluster/meta/argument_specs.yml
new file mode 100644
index 00000000..ba52a158
--- /dev/null
+++ b/ansible/roles/ocp_cluster/meta/argument_specs.yml
@@ -0,0 +1,73 @@
+argument_specs:
+  main:
+    short_description: Options for the ocp_cluster role.
+    options:
+      installation_dir:
+        type: "str"
+        required: false
+        default: "{{ ocp_root_directory }}/{{ ocp_cluster_name }}"
+        description: "Installation folder for the OCP files"
+      ocp_cluster_name:
+        type: "str"
+        required: false
+        default: "ocp"
+        description: "Name to be applied to the OCP cluster. It will be used as a prefix in the VM names."
+      ocp_cluster_user_admin_name:
+        type: "str"
+        required: false
+        default: "admin"
+        description: "Admin user to be created in the OCP cluster."
+      ocp_cluster_user_admin_pw:
+        type: "str"
+        required: false
+        default: "admin"
+        description: "Password for the admin user."
+      ocp_cluster_user_dev_name:
+        type: "str"
+        required: false
+        default: "snowdrop"
+        description: "Developer user to be created in the OCP cluster."
+      ocp_cluster_user_dev_pw:
+        type: "str"
+        required: false
+        default: "snowdrop"
+        description: "Password for the developer user."
+      ocp_master_nodes:
+        type: "int"
+        required: false
+        default: 3
+        description: "Number of master nodes in the OCP cluster."
+      ocp_root_directory:
+        type: "str"
+        required: false
+        default: "/opt/ocp"
+        description: "Root folder for the installation."
+      ocp_version:
+        type: "str"
+        required: false
+        default: "4.13.9"
+        description: "OCP version to install"
+      ocp_worker_nodes:
+        type: "int"
+        required: false
+        default: 3
+        description: "Number of worker nodes in the OCP cluster."
+      openstack_flavor_compute:
+        type: "str"
+        required: false
+        default: "ocp4.compute"
+        description: "Flavor to be used on the compute nodes."
+      openstack_flavor_control_plane:
+        type: "str"
+        required: false
+        default: "ocp4.control"
+        description: "Flavor to be used on the control plane nodes."
+      openstack_network_provider:
+        type: "str"
+        required: false
+        default: "provider_net_cci_13"
+        description: "Network provider"
+      state:
+        type: "str"
+        required: true
+        description: "State of the cluster."
diff --git a/ansible/roles/ocp_cluster/meta/main.yml b/ansible/roles/ocp_cluster/meta/main.yml
new file mode 100644
index 00000000..cd9d89cd
--- /dev/null
+++ b/ansible/roles/ocp_cluster/meta/main.yml
@@ -0,0 +1,24 @@
+---
+galaxy_info:
+  role_name: ocp_cluster
+  namespace: snowdrop
+  author: RedHat Snowdrop Team
+  description: Create and remove OCP cluster on RHOSP
+  company: Red Hat, Inc.
+
+  license: Apache License 2.0
+
+  min_ansible_version: "2.9"
+
+  # platforms:
+  #   - name: EL
+  #     versions:
+  #       - 8
+
+  galaxy_tags:
+    - openshift
+    - redhat
+
+# dependencies:
+  # - { role: openshift_env }
+...
diff --git a/ansible/roles/ocp_cluster/tasks/build_output.yml b/ansible/roles/ocp_cluster/tasks/build_output.yml
new file mode 100644
index 00000000..ae8299d6
--- /dev/null
+++ b/ansible/roles/ocp_cluster/tasks/build_output.yml
@@ -0,0 +1,34 @@
+---
+- name: Get OCP cluster metadata
+  ansible.builtin.include_tasks: get_metadata.yml
+
+- name: "Read masters from installation folder"
+  ansible.builtin.slurp:
+    src: "{{ installation_dir + '/masters.tfvars.json' }}"
+  register: ocp_cluster_slurp_masters
+
+- name: "Read kubeadmin password from installation folder"
+  ansible.builtin.slurp:
+    src: "{{ installation_dir + '/auth/kubeadmin-password' }}"
+  register: ocp_cluster_slurp_kubeadminpw
+
+- name: "Read kubeconfig from installation folder"
+  ansible.builtin.slurp:
+    src: "{{ installation_dir + '/auth/kubeconfig' }}"
+  register: ocp_cluster_slurp_kubeconfig
+
+- name: "Set output variables"
+  ansible.builtin.set_fact:
+    ocp_cluster:
+      cluster_details: "{{ ocp_cluster_details }}"
+      floating_ip_api_address: "{{ rhos_floating_ip_api_res.stdout }}"
+      floating_ip_ingress_address: "{{ rhos_floating_ip_ingress_res.stdout }}"
+      installation_dir: "{{ installation_dir }}"
+      kubeadmin_password: "{{ ocp_cluster_slurp_kubeadminpw.content | b64decode }}"
+      kubeconfig: "{{ ocp_cluster_slurp_kubeconfig.content | b64decode | from_yaml }}"
+      masters: "{{ ocp_cluster_slurp_masters.content | b64decode | from_json }}"
+      metadata: "{{ ocp_cluster_metadata }}"
+      ocp_bin_directory: "{{ ocp_cluster_bin_directory }}"
+      ocp_root_directory: "{{ ocp_root_directory }}"
+      tmp_directory: "{{ tmp_directory }}"
+...
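`build_output.yml` above assembles its result map entirely from `slurp` reads. Since `slurp` returns file content base64-encoded, each read has to be decoded and, when the file is JSON or YAML, parsed before use. A minimal sketch of that pipeline; the file path and the `infraID` field are illustrative only:

[source,yaml]
----
---
- name: "Read and parse a JSON file with slurp"
  hosts: localhost
  gather_facts: false
  tasks:
    - name: "Slurp the file (content comes back base64-encoded)"
      ansible.builtin.slurp:
        src: /tmp/example-metadata.json
      register: metadata_slurp

    - name: "Decode and parse the content"
      ansible.builtin.set_fact:
        example_metadata: "{{ metadata_slurp.content | b64decode | from_json }}"

    - name: "Use a field from the parsed document"
      ansible.builtin.debug:
        msg: "infraID: {{ example_metadata.infraID | default('n/a') }}"
----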
diff --git a/ansible/roles/ocp_cluster/tasks/download_installation_files.yml b/ansible/roles/ocp_cluster/tasks/download_installation_files.yml
new file mode 100644
index 00000000..4a854d57
--- /dev/null
+++ b/ansible/roles/ocp_cluster/tasks/download_installation_files.yml
@@ -0,0 +1,51 @@
+---
+- name: "Set required facts"
+  ansible.builtin.set_fact:
+    ocp_cluster_bin_directory: "{{ ocp_root_directory }}/bin"
+  when: ocp_cluster_bin_directory is not defined
+
+- name: "Check if oc CLI exists"
+  ansible.builtin.stat:
+    path: "{{ ocp_cluster_bin_directory }}/oc"
+  register: oc_stat
+
+- name: "Download OpenShift client"
+  ansible.builtin.get_url:
+    url: "{{ ocp_mirror }}/{{ ocp_version }}/openshift-client-linux-{{ ocp_version }}.tar.gz"
+    dest: "/tmp/openshift-client-{{ ocp_version }}.tar.gz"
+    mode: '0644'
+    tmp_dest: "/tmp"
+  when: not oc_stat.stat.exists
+
+- name: "Extract oc CLI files"
+  ansible.builtin.unarchive:
+    src: "/tmp/openshift-client-{{ ocp_version }}.tar.gz"
+    dest: "{{ ocp_cluster_bin_directory }}"
+    remote_src: "{{ inventory_hostname != 'localhost' }}"
+  environment:
+    ANSIBLE_REMOTE_TMP: "/tmp"
+  when: not oc_stat.stat.exists
+
+- name: "Check if the OpenShift installer exists"
+  ansible.builtin.stat:
+    path: "{{ ocp_cluster_bin_directory }}/openshift-install"
+  register: openshift_install_stat
+
+- name: "Download OpenShift installer"
+  ansible.builtin.get_url:
+    url: "{{ ocp_mirror }}/{{ ocp_version }}/openshift-install-linux-{{ ocp_version }}.tar.gz"
+    dest: "/tmp/openshift-install-{{ ocp_version }}.tar.gz"
+    mode: '0644'
+    tmp_dest: "/tmp"
+  when: not openshift_install_stat.stat.exists
+
+- name: "Extract OpenShift installer files"
+  ansible.builtin.unarchive:
+    src: "/tmp/openshift-install-{{ ocp_version }}.tar.gz"
+    dest: "{{ ocp_cluster_bin_directory }}"
+    remote_src: "{{ inventory_hostname != 'localhost' }}"
+  environment:
+    ANSIBLE_REMOTE_TMP: "/tmp"
+  when: not openshift_install_stat.stat.exists
+
+...
diff --git a/ansible/roles/ocp_cluster/tasks/get_metadata.yml b/ansible/roles/ocp_cluster/tasks/get_metadata.yml
new file mode 100644
index 00000000..22e15ac0
--- /dev/null
+++ b/ansible/roles/ocp_cluster/tasks/get_metadata.yml
@@ -0,0 +1,15 @@
+---
+- name: "Read metadata from installation folder"
+  ansible.builtin.slurp:
+    src: "{{ installation_dir + '/metadata.json' }}"
+  register: ocp_cluster_slurp_metadata
+
+- name: "Print metadata slurp"
+  ansible.builtin.debug:
+    msg: "ocp_cluster_slurp_metadata: {{ ocp_cluster_slurp_metadata }}"
+    verbosity: 2
+
+- name: "Transform metadata"
+  ansible.builtin.set_fact:
+    ocp_cluster_metadata: "{{ ocp_cluster_slurp_metadata.content | b64decode | from_json }}"
+...
diff --git a/ansible/roles/ocp_cluster/tasks/install.yml b/ansible/roles/ocp_cluster/tasks/install.yml index dc7a803e..c96497d0 100644 --- a/ansible/roles/ocp_cluster/tasks/install.yml +++ b/ansible/roles/ocp_cluster/tasks/install.yml @@ -1,72 +1,45 @@ --- -- name: "Set Facts" +- name: "Slurp SSH public key" + ansible.builtin.slurp: + src: "{{ ansible_env.HOME }}/.ssh/id_rsa_snowdrop_openstack.pub" + register: ocp_cluster_shared_ssh_public_key_slurp + +- name: "Transform SSH key slurp" ansible.builtin.set_fact: - shared_ssh_public_key: "{{lookup('ansible.builtin.file', ansible_env.HOME + '/.ssh/id_rsa_snowdrop_openstack.pub') }}" + ocp_cluster_shared_ssh_public_key: "{{ ocp_cluster_shared_ssh_public_key_slurp.content | b64decode }}" - name: "Create installation directory" ansible.builtin.file: path: "{{ installation_dir }}" state: directory + owner: "{{ ansible_user_id }}" + group: "{{ ansible_user_id }}" + mode: '0755' register: create_install_dir_res - -# - name: "Print Create installation directory result" -# debug: -# msg: "{{ create_install_dir_res }}" -# verbosity: 0 + # become: true # stage('Download key to our DNS server') { # // TODO --no-check-certificate shouldn't be necessary on proper slaves # sh 'wget -q --no-check-certificate -O xpaasqe.dnskey https://gitlab.cee.redhat.com/quarkus-qe/raw/main/roles/dnsservers/files/named.xpaasqe.dnskey' # } -# - name: "Template OpenStack auth" -# ansible.builtin.template: -# src: "templates/clouds.yaml.j2" -# dest: "{{ work_directory }}/clouds.yaml" -# mode: '0600' - -# auth: -# project_name: "{{ openstack_auth.openstack_project_name }}" -# username: "{{ openstack_auth.openstack_console_user }}" -# password: "{{ openstack_auth.openstack_console_password }}" -# user_domain_name: "{{ openstack_auth.openstack_user_domain }}" -# project_domain_name: "{{ openstack_auth.openstack_project_domain }}" -# auth_url: "{{ openstack_auth.openstack_os_auth_url }}" - -- name: "Download OpenShift client" - ansible.builtin.get_url: - url: "{{ ocp_mirror }}/{{ ocp_version }}/openshift-client-linux-{{ ocp_version }}.tar.gz" - dest: "{{ work_directory }}/openshift-client-{{ ocp_version }}.tar.gz" - -- name: "Download OpenShift installer" - ansible.builtin.get_url: - url: "{{ ocp_mirror }}/{{ ocp_version }}/openshift-install-linux-{{ ocp_version }}.tar.gz" - dest: "{{ work_directory }}/openshift-install-{{ ocp_version }}.tar.gz" - -- name: "Extract installation files." 
- ansible.builtin.unarchive: - src: "{{ work_directory }}/{{ extract_file_item }}" - dest: "{{ work_directory }}" - loop: - - openshift-client-{{ ocp_version }}.tar.gz - - openshift-install-{{ ocp_version }}.tar.gz - loop_control: - loop_var: extract_file_item +# - name: Download OCP binary files +# ansible.builtin.include_tasks: download_installation_files.yml - name: "Create Floating IP for OpenShift API" - ansible.builtin.shell: + ansible.builtin.shell: cmd: | - openstack --os-cloud openstack floating ip create --description "OCP API {{ ocp_cluster_name }}.{{ snowdrop_domain }}" -f value -c floating_ip_address {{ openstack_network_provider }} + openstack --os-cloud openstack floating ip create --description "OCP API {{ ocp_cluster_name }}.{{ snowdrop_domain }}" -f value -c floating_ip_address {{ openstack_network_provider }} --log-file {{ rhos_log_path }}/rhos_generic.log args: - chdir: "{{ work_directory }}" + chdir: "{{ tmp_directory }}" register: rhos_floating_ip_api_res - name: "Create Floating IP for OpenShift Ingress" - ansible.builtin.shell: + ansible.builtin.shell: cmd: | - openstack --os-cloud openstack floating ip create --description "OCP Ingress {{ ocp_cluster_name }}.{{ snowdrop_domain }}" -f value -c floating_ip_address {{ openstack_network_provider }} + openstack --os-cloud openstack floating ip create --description "OCP Ingress {{ ocp_cluster_name }}.{{ snowdrop_domain }}" -f value -c floating_ip_address {{ openstack_network_provider }} --log-file {{ rhos_log_path }}/rhos_generic.log args: - chdir: "{{ work_directory }}" + chdir: "{{ tmp_directory }}" register: rhos_floating_ip_ingress_res - name: "Set facts for Floating IPs" @@ -75,7 +48,7 @@ rhos_floating_ip_ingress_address: "{{ rhos_floating_ip_ingress_res.stdout }}" - name: "Print Floating IPs" - debug: + ansible.builtin.debug: msg: "{{ floating_ip_addresses }}" verbosity: 0 loop: @@ -98,11 +71,11 @@ # cmd: | # openstack floating ip create --description "API {{ ocp_cluster_name }}.{{ snowdrop_domain }}" -f value -c floating_ip_address {{ openstack_network_provider }} # args: -# chdir: "{{ work_directory }}" +# chdir: "{{ ocp_cluster_bin_directory }}" # register: openstack_floating_ip_res # - name: "Print OpenStack Floating IP details" -# debug: +# ansible.builtin.debug: # msg: "openstack_floating_ip_res: {{ openstack_floating_ip_res }}" # verbosity: 0 @@ -120,6 +93,8 @@ src: "templates/install-config.yaml.j2" dest: "{{ installation_dir }}/install-config.yaml" mode: '0644' + owner: "{{ ansible_user_id }}" + group: "{{ ansible_user_id }}" # https://github.com/openshift/installer/blob/master/docs/user/customization.md#install-time-customization-for-machine-configuration # NOTE: Uses the clouds.yaml file to connect ot the OpenStack instance @@ -127,11 +102,15 @@ ansible.builtin.shell: cmd: "./openshift-install create manifests --dir={{ installation_dir }} --log-level=debug" args: - chdir: "{{ work_directory }}" + chdir: "{{ ocp_cluster_bin_directory }}" + environment: + OS_CLIENT_CONFIG_FILE: "{{ tmp_directory }}/clouds.yaml" + when: not ocp_cluster_already_installed - name: "Pause for 10 seconds..." 
ansible.builtin.pause: seconds: 10 + when: not ocp_cluster_already_installed # https://docs.openshift.com/container-platform/4.3/installing/install_config/installing-customizing.html#installation-special-config-crony_installing-customizing - name: "Template chrony" @@ -163,7 +142,10 @@ ansible.builtin.shell: cmd: "./openshift-install create cluster --dir={{ installation_dir }} --log-level=debug" args: - chdir: "{{ work_directory }}" + chdir: "{{ ocp_cluster_bin_directory }}" + environment: + OS_CLIENT_CONFIG_FILE: "{{ tmp_directory }}/clouds.yaml" + when: not ocp_cluster_already_installed - name: "Get OpenShift installation state" include_tasks: openshift_install_state.yml @@ -173,68 +155,78 @@ # cmd: | # openstack --os-cloud openstack floating ip show {{ rhos_floating_ip_ingress_address }} -f json # args: -# chdir: "{{ work_directory }}" +# chdir: "{{ ocp_cluster_bin_directory }}" # register: rhos_floating_ip_ingress_show_res - name: "Get Ingress Floating IP information" openstack.cloud.floating_ip_info: - auth: - project_name: "{{ openstack_auth.openstack_project_name }}" - username: "{{ openstack_auth.openstack_console_user }}" - password: "{{ openstack_auth.openstack_console_password }}" - user_domain_name: "{{ openstack_auth.openstack_user_domain }}" - project_domain_name: "{{ openstack_auth.openstack_project_domain }}" - auth_url: "{{ openstack_auth.openstack_os_auth_url }}" + auth: "{{ rhos_auth }}" + auth_type: "{{ rhos_auth_type }}" floating_ip_address: "{{ rhos_floating_ip_ingress_address }}" register: rhos_floating_ip_ingress_info_res - name: "Get Ingress port information" openstack.cloud.port_info: - auth: - project_name: "{{ openstack_auth.openstack_project_name }}" - username: "{{ openstack_auth.openstack_console_user }}" - password: "{{ openstack_auth.openstack_console_password }}" - user_domain_name: "{{ openstack_auth.openstack_user_domain }}" - project_domain_name: "{{ openstack_auth.openstack_project_domain }}" - auth_url: "{{ openstack_auth.openstack_os_auth_url }}" + auth: "{{ rhos_auth }}" + auth_type: "{{ rhos_auth_type }}" filters: name: "{{ ocp_cluster_id }}-ingress-port" register: rhos_ocp_cluster_ingress_port -- name: "Set facts for Floating IP Set" - ansible.builtin.set_fact: - ingress_floating_ip_id: "{{ rhos_floating_ip_ingress_info_res.floating_ips[0].id }}" - ingress_port_ip: "{{ rhos_ocp_cluster_ingress_port.openstack_ports[0].fixed_ips[0].ip_address }}" - ingress_port_id: "{{ rhos_ocp_cluster_ingress_port.openstack_ports[0].id }}" +- name: "Print Ingress result" + ansible.builtin.debug: + msg: + - "rhos_floating_ip_ingress_info_res: {{ rhos_floating_ip_ingress_info_res }}" + - "rhos_ocp_cluster_ingress_port: {{ rhos_ocp_cluster_ingress_port }}" + verbosity: 0 -#- name: "Set fact from ingress Floating IP" -# ansible.builtin.set_fact: -# rhos_floating_ip_ingress: "{{ rhos_floating_ip_ingress_show_res.stdout | from_json }}" +# at this point, the OpenShift cluster is running in stock configuration +- name: "Pause to wait for the cluster to configure" + ansible.builtin.pause: + seconds: 60 +- name: "Set facts for Floating IP" + ansible.builtin.set_fact: + ocp_cluster_ingress_floating_ip_id: "{{ rhos_floating_ip_ingress_info_res.floating_ips[0].id }}" + ocp_cluster_ingress_port_ip: "{{ rhos_ocp_cluster_ingress_port.ports[0].fixed_ips[0].ip_address }}" + ocp_cluster_ingress_port_id: "{{ rhos_ocp_cluster_ingress_port.ports[0].id }}" + +- name: "Print Ingress information" + ansible.builtin.debug: + msg: + - "ocp_cluster_ingress_floating_ip_id: {{ 
ocp_cluster_ingress_floating_ip_id }}" + - "ocp_cluster_ingress_port_ip: {{ ocp_cluster_ingress_port_ip }}" + - "Associate the Floating IP with the ingress server: " + - " $ cd {{ tmp_directory }}" + - " $ openstack --os-cloud openstack floating ip set --fixed-ip-address {{ ocp_cluster_ingress_port_ip }} --port {{ ocp_cluster_ingress_port_id }} {{ ocp_cluster_ingress_floating_ip_id }}" + +# floating ip set: Set floating IP Properties - name: "Associate the Floating IP with the ingress server" - ansible.builtin.shell: + ansible.builtin.shell: cmd: | - openstack --os-cloud openstack floating ip set --fixed-ip-address {{ ingress_port_ip }} --port {{ ingress_port_id }} {{ ingress_floating_ip_id }} + openstack --os-cloud openstack floating ip set --fixed-ip-address {{ ocp_cluster_ingress_port_ip }} --port {{ ocp_cluster_ingress_port_id }} {{ ocp_cluster_ingress_floating_ip_id }} --log-file {{ rhos_log_path }}/rhos_associate_floating_ip.log args: - chdir: "{{ work_directory }}" + chdir: "{{ tmp_directory }}" + register: ocp_cluster_assoc_float_ip_ingress_server_res + failed_when: ocp_cluster_assoc_float_ip_ingress_server_res.rc != 0 and 'as that fixed IP already has a floating IP on external network' not in ocp_cluster_assoc_float_ip_ingress_server_res.stderr # and create a DNS record for it # https://docs.openshift.com/container-platform/4.3/installing/installing_openstack/installing-openstack-installer-custom.html#installation-osp-configuring-api-floating-ip_installing-openstack-installer-custom - name: "Template nsupdate ingress" ansible.builtin.template: src: "templates/nsupdate-api.txt.j2" - dest: "{{ work_directory }}/nsupdate-api.txt" + dest: "{{ tmp_directory }}/nsupdate-api.txt" mode: '0644' when: use_dns - name: "Run nsupdate" - ansible.builtin.shell: + ansible.builtin.shell: cmd: | export OPENSTACK_PORT_INGRESS=$(openstack port list -f value -c Name | grep -x "{{ ocp_cluster_name }}-.....-ingress-port") export OPENSTACK_FLOATING_IP_INGRESS=$(openstack floating ip create --description "Ingress {{ ocp_cluster_name }}.{{ snowdrop_domain }}" --port {{ rhos_floating_ip_ingress_address }} -f value -c floating_ip_address {{ openstack_network_provider }}) nsupdate -v -k xpaasqe.dnskey nsupdate-ingress.txt args: - chdir: "{{ work_directory }}" + chdir: "{{ tmp_directory }}" when: use_dns # at this point, the OpenShift cluster is running in stock configuration @@ -252,28 +244,28 @@ - name: "Configure htpasswd auth provider" ansible.builtin.shell: cmd: | - htpasswd -c -B -b users.htpasswd admin admin - htpasswd -b users.htpasswd {{ ocp_user }} {{ ocp_user }} + htpasswd -c -B -b users.htpasswd {{ ocp_cluster_user_admin_name }} {{ ocp_cluster_user_admin_pw }} + htpasswd -b users.htpasswd {{ ocp_cluster_user_dev_name }} {{ ocp_cluster_user_dev_pw }} ./oc create secret generic htpass-secret --from-file=htpasswd=users.htpasswd -n openshift-config args: - chdir: "{{ work_directory }}" + chdir: "{{ ocp_cluster_bin_directory }}" environment: KUBECONFIG: "{{ KUBECONFIG }}" - name: "Template htpasswd-provider" ansible.builtin.copy: src: "htpasswd-provider.yaml" - dest: "{{ work_directory }}/htpasswd-provider.yaml" + dest: "{{ tmp_directory }}/htpasswd-provider.yaml" mode: '0644' -- name: "Create user accounts 'admin' and {{ ocp_user }}" +- name: "Create user accounts for admin and developer" ansible.builtin.shell: cmd: | ./oc apply -f htpasswd-provider.yaml - ./oc adm policy add-cluster-role-to-user cluster-admin admin - ./oc adm policy add-cluster-role-to-user basic-user {{ ocp_user }} + ./oc adm policy 
add-cluster-role-to-user cluster-admin {{ ocp_cluster_user_admin_name }} + ./oc adm policy add-cluster-role-to-user basic-user {{ ocp_cluster_user_dev_name }} args: - chdir: "{{ work_directory }}" + chdir: "{{ ocp_cluster_bin_directory }}" environment: KUBECONFIG: "{{ KUBECONFIG }}" @@ -298,7 +290,7 @@ --from-file=registry.stage.redhat.io=RH-IT-Root-CA.crt ./oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-ca"}}}' --type=merge args: - chdir: "{{ work_directory }}" + chdir: "{{ ocp_cluster_bin_directory }}" environment: KUBECONFIG: "{{ KUBECONFIG }}" @@ -308,7 +300,7 @@ cmd: | ./oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge args: - chdir: "{{ work_directory }}" + chdir: "{{ ocp_cluster_bin_directory }}" environment: KUBECONFIG: "{{ KUBECONFIG }}" @@ -320,7 +312,7 @@ wget -q -O quarkus-logo.png https://design.jboss.org/quarkus/logo/final/PNG/quarkus_logo_horizontal_rgb_450px_reverse.png ./oc create configmap console-custom-logo --from-file quarkus-logo.png -n openshift-config args: - chdir: "{{ work_directory }}" + chdir: "{{ ocp_cluster_bin_directory }}" environment: KUBECONFIG: "{{ KUBECONFIG }}" when: use_logo @@ -328,16 +320,16 @@ - name: "Template custom-logo.yaml" ansible.builtin.copy: src: "custom-logo.yaml" - dest: "{{ work_directory }}/custom-logo.yaml" + dest: "{{ tmp_directory }}/custom-logo.yaml" mode: '0644' when: use_logo - name: "Apply logo to OpenShift console" - ansible.builtin.shell: + ansible.builtin.shell: cmd: | ./oc apply -f custom-logo.yaml args: - chdir: "{{ work_directory }}" + chdir: "{{ ocp_cluster_bin_directory }}" environment: KUBECONFIG: "{{ KUBECONFIG }}" when: use_logo @@ -348,30 +340,29 @@ # this should be the last "./oc" command - name: "Remove kubeadmin, the admin user is a cluster admin" - ansible.builtin.shell: + ansible.builtin.shell: cmd: | ./oc delete secrets kubeadmin -n kube-system args: - chdir: "{{ work_directory }}" + chdir: "{{ ocp_cluster_bin_directory }}" when: remove_kubeadmin -- name: "Archive the installation data directory" - ansible.builtin.shell: - cmd: | - tar -czf {{ ocp_cluster_name }}-data.tar.gz {{ installation_dir }} - args: - chdir: "{{ work_directory }}" - - name: "Display cluster details once more" ansible.builtin.shell: cmd: | ./openshift-install --dir "{{ installation_dir }}" wait-for install-complete args: - chdir: "{{ work_directory }}" + chdir: "{{ ocp_cluster_bin_directory }}" + environment: + OS_CLIENT_CONFIG_FILE: "{{ tmp_directory }}/clouds.yaml" register: ocp_cluster_details - name: "Print OCP cluster details" - debug: + ansible.builtin.debug: msg: "{{ ocp_cluster_details }}" verbosity: 0 + +- name: Download OCP binary files + ansible.builtin.include_tasks: build_output.yml + ... 
diff --git a/ansible/roles/ocp_cluster/tasks/install_operators.yml b/ansible/roles/ocp_cluster/tasks/install_operators.yml index 6429276f..3fd27037 100644 --- a/ansible/roles/ocp_cluster/tasks/install_operators.yml +++ b/ansible/roles/ocp_cluster/tasks/install_operators.yml @@ -10,7 +10,7 @@ ansible.builtin.shell: cmd: | ./oc apply -f install-operators-role.yaml - ./oc adm policy add-cluster-role-to-user install-operators-role {{ ocp_user }} + ./oc adm policy add-cluster-role-to-user install-operators-role {{ ocp_cluster_user_dev_name }} args: chdir: "{{ work_directory }}" environment: @@ -96,7 +96,7 @@ done echo Adding rights to qe user for using Datagrid cluster namespace - ./oc policy add-role-to-user admin {{ ocp_user }} --rolebinding-name=admin -n $WATCH_NAMESPACE + ./oc policy add-role-to-user admin {{ ocp_cluster_user_dev_name }} --rolebinding-name=admin -n $WATCH_NAMESPACE args: chdir: "{{ work_directory }}" environment: @@ -119,7 +119,7 @@ verbs: ["get", "list", "watch", "create", "update", "patch", "delete"] EOF ./oc apply -f monitor-crd-edit-role.yaml - ./oc adm policy add-cluster-role-to-user monitor-crd-edit {{ ocp_user }} + ./oc adm policy add-cluster-role-to-user monitor-crd-edit {{ ocp_cluster_user_dev_name }} echo "Configure cluster monitoring" cat < ./cluster-monitoring-config.yaml @@ -142,7 +142,7 @@ echo "Wait for the $PROMETHEUS_NAMESPACE namespace" timeout 240s bash -c "until ./oc get namespace/$PROMETHEUS_NAMESPACE; do echo 'Waiting for namespace '$PROMETHEUS_NAMESPACE; sleep 2; done" - ./oc adm policy add-role-to-user edit {{ ocp_user }} -n openshift-user-workload-monitoring + ./oc adm policy add-role-to-user edit {{ ocp_cluster_user_dev_name }} -n openshift-user-workload-monitoring args: chdir: "{{ work_directory }}" environment: diff --git a/ansible/roles/ocp_cluster/tasks/install_prepare.yml b/ansible/roles/ocp_cluster/tasks/install_prepare.yml new file mode 100644 index 00000000..12fd017f --- /dev/null +++ b/ansible/roles/ocp_cluster/tasks/install_prepare.yml @@ -0,0 +1,50 @@ +--- +- name: "Create local OCP directories" + ansible.builtin.file: + path: "{{ ocp_folder_to_create }}" + state: directory + owner: "{{ ansible_user_id }}" + group: "{{ ansible_user_id }}" + mode: '0755' + # become: true + loop: + - "{{ ocp_root_directory }}" + - "{{ ocp_root_directory }}/bin" + - "{{ ocp_root_directory }}/_tmp" + - "{{ rhos_log_path }}/" + loop_control: + loop_var: ocp_folder_to_create + # register: create_root_dir_res + +- name: "Create bin directory" + ansible.builtin.file: + path: "{{ ocp_cluster_bin_directory }}" + state: directory + owner: "{{ ansible_user_id }}" + group: "{{ ansible_user_id }}" + mode: '0755' + # become: true + register: create_bin_dir_res + when: ocp_cluster_bin_directory is defined + +- name: "Check if OCP is installed by checking the presence of the kubeconfig file" + + ansible.builtin.stat: + path: "{{ installation_dir }}/auth/kubeconfig" + register: kubeconfig_file_stat + when: state == 'present' + +- name: "Set installation complete field" + ansible.builtin.set_fact: + ocp_cluster_already_installed: "{{ kubeconfig_file_stat.stat.exists }}" + when: state == 'present' + +- name: "Print Installation result" + ansible.builtin.debug: + msg: + - "ocp_cluster_already_installed: {{ ocp_cluster_already_installed }}" + - "when validation: {{ not ocp_cluster_already_installed }}" + verbosity: 0 + when: state == 'present' + +... 
diff --git a/ansible/roles/ocp_cluster/tasks/main.yml b/ansible/roles/ocp_cluster/tasks/main.yml index 4d55152f..cd01dfbd 100644 --- a/ansible/roles/ocp_cluster/tasks/main.yml +++ b/ansible/roles/ocp_cluster/tasks/main.yml @@ -1,24 +1,48 @@ --- -- name: "Create work directory" - ansible.builtin.file: - path: "{{ work_directory }}" +- name: "Set required facts" + ansible.builtin.set_fact: + ocp_cluster_bin_directory: "{{ ocp_root_directory }}/bin" + installation_dir: "{{ ocp_root_directory }}/{{ ocp_cluster_name }}" + +- include_tasks: install_prepare.yml + # when: state == 'present' + +# - name: "Create local tmp directories" +# ansible.builtin.file: +# path: "{{ ocp_root_directory }}/_tmp" +# state: directory +# owner: "{{ ansible_user_id }}" +# group: "{{ ansible_user_id }}" +# mode: '0755' +# become: true +# when: state == 'absent' + +- name: Download OCP binary files + ansible.builtin.include_tasks: download_installation_files.yml + +- name: Create temporary work directory + ansible.builtin.tempfile: + path: "{{ ocp_root_directory }}/_tmp" state: directory - register: create_work_dir_res + suffix: build + register: tmp_directory_res + +- name: "Set temporary folder" + ansible.builtin.set_fact: + tmp_directory: "{{ tmp_directory_res.path }}" - name: "Template OpenStack auth" ansible.builtin.template: src: "templates/clouds.yaml.j2" - dest: "{{ work_directory }}/clouds.yaml" - mode: '0600' + dest: "{{ tmp_directory }}/clouds.yaml" + mode: '0640' -- name: "Set default installation_dir, if not defined" - ansible.builtin.set_fact: - installation_dir: "{{ work_directory }}/{{ ocp_install_dir }}" - when: installation_dir is undefined +# - name: "Set default installation_dir, if not defined" +# ansible.builtin.set_fact: +# installation_dir: "{{ work_directory }}/{{ ocp_cluster_name }}" +# when: installation_dir is undefined - include_tasks: install.yml - # vars: - # installation_dir: "{{ installation_dir }}" when: state == 'present' - include_tasks: remove.yml @@ -28,7 +52,9 @@ - name: "Delete OpenStack auth file" ansible.builtin.file: - path: "templates/clouds.yaml.j2" + path: "{{ tmp_directory }}/clouds.yaml" state: absent ... 
-# ansible-playbook -i inventory/ playbook/ocp/ocp_openstack_info.yml -e work_directory=/opt/ocp -e installation_dir=/opt/ocp/openshift-data/ +# ansible-playbook -i inventory/ ansible/playbook/ocp/ocp_openstack_install.yml -e ocp_root_directory=/opt/ocp -e ocp_cluster_name=ocp-sdev -e openshift_pull_secret=${OCP_PULL_SECRET} -K +# ansible-playbook -i inventory/ ansible/playbook/ocp/ocp_openstack_info.yml -e ocp_cluster_bin_directory=/opt/ocp/bin -e installation_dir=/opt/ocp// -e ocp_cluster_name= -e openshift_pull_secret=${OCP_PULL_SECRET} -K +# ansible-playbook -i inventory/ ansible/playbook/ocp/ocp_openstack_install.yml -e ocp_cluster_bin_directory=/opt/ocp/bin -e installation_dir=/opt/ocp/ocp-sdev -e ocp_cluster_name=ocp-sdev -e openshift_pull_secret=${OCP_PULL_SECRET} -K diff --git a/ansible/roles/ocp_cluster/tasks/openshift_install_state.yml b/ansible/roles/ocp_cluster/tasks/openshift_install_state.yml index 6540a0d8..c0510408 100644 --- a/ansible/roles/ocp_cluster/tasks/openshift_install_state.yml +++ b/ansible/roles/ocp_cluster/tasks/openshift_install_state.yml @@ -1,19 +1,29 @@ --- - name: "Read server state file" + ansible.builtin.slurp: + src: "{{ installation_dir + '/.openshift_install_state.json' }}" + register: ocp_cluster_install_state_slurp + +- name: "Print install state slurp" + ansible.builtin.debug: + msg: "ocp_cluster_install_state_slurp: {{ ocp_cluster_install_state_slurp }}" + verbosity: 2 + +- name: "Transform metadata" ansible.builtin.set_fact: - openshift_install_state: "{{ lookup('file', installation_dir + '/.openshift_install_state.json') | from_json }}" + openshift_install_state: "{{ ocp_cluster_install_state_slurp.content | b64decode | from_json }}" -- name: "Print server details" - debug: +- name: "Print install details" + ansible.builtin.debug: msg: "openshift_install_state: {{ openshift_install_state }}" - verbosity: 0 + verbosity: 2 - name: "Set clusterid fact" ansible.builtin.set_fact: ocp_cluster_id: "{{ openshift_install_state['*installconfig.ClusterID'].InfraID }}" -- name: "Print server details" - debug: +- name: "Print cluster details" + ansible.builtin.debug: msg: "ocp_cluster_id: {{ ocp_cluster_id }}" - verbosity: 0 + verbosity: 2 ... diff --git a/ansible/roles/ocp_cluster/tasks/remove.yml b/ansible/roles/ocp_cluster/tasks/remove.yml index a83c8d6e..7ffd5026 100644 --- a/ansible/roles/ocp_cluster/tasks/remove.yml +++ b/ansible/roles/ocp_cluster/tasks/remove.yml @@ -4,5 +4,16 @@ cmd: | ./openshift-install destroy cluster --dir={{ installation_dir }} --log-level=info args: - chdir: "{{ work_directory }}" + chdir: "{{ ocp_cluster_bin_directory }}" + environment: + OS_CLIENT_CONFIG_FILE: "{{ tmp_directory }}/clouds.yaml" + +# TODO: When removing the + +- name: "Delete installation directory" + ansible.builtin.file: + path: "{{ installation_dir }}" + state: absent + become: true + ... 
diff --git a/ansible/roles/ocp_cluster/templates/clouds.yaml.j2 b/ansible/roles/ocp_cluster/templates/clouds.yaml.j2 index 937b7c4c..1b61ec5c 100644 --- a/ansible/roles/ocp_cluster/templates/clouds.yaml.j2 +++ b/ansible/roles/ocp_cluster/templates/clouds.yaml.j2 @@ -1,12 +1,13 @@ clouds: openstack: - auth: - project_name: "{{ openstack_auth.openstack_project_name }}" - username: "{{ openstack_auth.openstack_console_user }}" - password: "{{ openstack_auth.openstack_console_password }}" - user_domain_name: "{{ openstack_auth.openstack_user_domain }}" - project_domain_name: "{{ openstack_auth.openstack_project_domain }}" - auth_url: "{{ openstack_auth.openstack_os_auth_url }}" + auth_type: "{{ rhos_auth_type }}" + auth: + auth_url: "{{ rhos_auth.auth_url }}" + password: "{{ rhos_auth.password }}" + project_domain_name: "{{ rhos_auth.project_domain_name }}" + project_name: "{{ rhos_auth.project_name }}" + username: "{{ rhos_auth.username }}" + user_domain_name: "{{ rhos_auth.user_domain_name }}" region_name: "regionOne" interface: "public" identity_api_version: 3 diff --git a/ansible/roles/ocp_cluster/templates/install-config.yaml copy.j2 b/ansible/roles/ocp_cluster/templates/install-config.yaml copy.j2 deleted file mode 100644 index 67514a1a..00000000 --- a/ansible/roles/ocp_cluster/templates/install-config.yaml copy.j2 +++ /dev/null @@ -1,73 +0,0 @@ -apiVersion: v1 -baseDomain: {{ snowdrop_domain }} -metadata: - name: {{ ocp_cluster_name }} -controlPlane: -# name: master -# platform: {} -# replicas: 3 - architecture: amd64 - hyperthreading: Enabled - name: master - platform: {} - replicas: 3 -compute: -#- name: worker -# platform: -# openstack: -# type: {{ openstack_flavor_compute }} -# replicas: 3 -- architecture: amd64 - hyperthreading: Enabled - name: worker - platform: {} -# platform: -# openstack: -# type: {{ openstack_flavor_compute }} - replicas: 3 -networking: -# clusterNetwork: -# - cidr: 10.128.0.0/14 - hostPrefix: 23 - machineCIDR: 172.208.0.0/16 - networkType: OpenShiftSDN -# serviceNetwork: -# - 172.30.0.0/16 - clusterNetwork: - - cidr: 172.20.0.0/14 -# hostPrefix: 23 -# machineCIDR: 172.208.0.0/16 -# machineNetwork: -# - cidr: 172.208.0.0/16 -# networkType: OpenShiftSDN -# serviceNetwork: -# - 172.30.0.0/16 -platform: - openstack: -# cloud: openstack -# computeFlavor: {{ openstack_flavor_control_plane }} -# externalDNS: null -# externalNetwork: {{ openstack_network_provider }} -# lbFloatingIP: {{ openstack_floating_ip }} -# octaviaSupport: "0" -# region: "regionOne" -# trunkSupport: "1" - apiFloatingIP: 10.0.213.201 - apiVIPs: - - 172.208.0.5 -# - 172.31.0.5 - cloud: openstack - computeFlavor: {{ openstack_flavor_control_plane }} - defaultMachinePlatform: - type: ci.m1.xlarge - externalDNS: null - externalNetwork: {{ openstack_network_provider }} - ingressVIPs: - - 172.208.0.7 -# lbFloatingIP: {{ openstack_floating_ip }} - octaviaSupport: "0" - region: regionOne - trunkSupport: "1" -publish: External -pullSecret: '{{ openshift_pull_secret }}' -sshKey: '{{ shared_ssh_public_key }}' diff --git a/ansible/roles/ocp_cluster/templates/install-config.yaml.j2 b/ansible/roles/ocp_cluster/templates/install-config.yaml.j2 index 367dae97..7e4e6bb2 100644 --- a/ansible/roles/ocp_cluster/templates/install-config.yaml.j2 +++ b/ansible/roles/ocp_cluster/templates/install-config.yaml.j2 @@ -8,13 +8,13 @@ compute: platform: openstack: type: {{ openstack_flavor_compute }} - replicas: 3 + replicas: {{ ocp_worker_nodes }} controlPlane: architecture: amd64 hyperthreading: Enabled name: master 
platform: {} - replicas: 3 + replicas: {{ ocp_master_nodes }} metadata: creationTimestamp: null name: {{ ocp_cluster_name }} @@ -31,6 +31,8 @@ networking: platform: openstack: apiFloatingIP: {{ rhos_floating_ip_api_address }} +# ingressFloatingIP: {{ rhos_floating_ip_ingress_address }} +{% if rhos_cluster_os_image is defined %} clusterOSimage: {{ rhos_cluster_os_image }}{% endif %} # apiVIPs: # - 10.0.0.5 cloud: openstack @@ -43,4 +45,4 @@ platform: # - 10.0.0.7 publish: External pullSecret: '{{ openshift_pull_secret }}' -sshKey: '{{ shared_ssh_public_key }}' +sshKey: '{{ ocp_cluster_shared_ssh_public_key }}' diff --git a/collections/requirements.yml b/collections/requirements.yml index 06448bdb..515300f1 100644 --- a/collections/requirements.yml +++ b/collections/requirements.yml @@ -1,7 +1,7 @@ --- collections: - name: snowdrop.cloud_infra - version: 2.0.0 + version: 2.1.0 - name: openstack.cloud version: 2.1.0 diff --git a/molecule/requirements.txt b/molecule/requirements.txt new file mode 100644 index 00000000..ea05ed72 --- /dev/null +++ b/molecule/requirements.txt @@ -0,0 +1,6 @@ +#molecule[docker,lint] +molecule-plugins[docker,lint] +yq +ansible-lint +molecule-docker +lint diff --git a/openshift/ocp_on_openstack.adoc b/openshift/ocp_on_openstack.adoc index e4139aba..96869960 100644 --- a/openshift/ocp_on_openstack.adoc +++ b/openshift/ocp_on_openstack.adoc @@ -41,38 +41,61 @@ The early stages of the deployment process create an `install-config.yaml` file The installation process generates an installation folder on the host machine (by default is the same machine executing the Ansible playbooks) that must be kept for maintenance purposes, e.g. removing the OCP cluster. - -The ==== -== Prerequisites +To collect information required to the process, and also store the results of the installation, passwordstore is used (check more information link:../passwordstore/README.adoc[here]). -The prerequisites for using RHOS Ansible Playbooks are the following. +== Preparation -._Click to open the details_ -[%collapsible] -==== +The requirements for using the OCP on RHOS Ansible Playbooks are described on the link:../requirements.txt[`requirements.txt`] and link:../collections/requirements.yml[`collections/requirements.yml`] files. -[] -====== -include::../openstack/README.adoc[tag=rhos_prerequisites] -====== +More information on our link:../openstack/README.adoc[OpenStack README] for this project. -==== +Python virtual environment can be used to isolate all the python requirements from the host OS. For more information check the link:../ansible/README.adoc#python-venv[Python Virtual Env] section on our Ansible README. + +=== Passwordstore + +Passwordstore is used to both provide information to the playbooks as well + as store the result of the installation process. + +More information on our Passwordstore implementation link:../passwordstore/README.adoc[here]. + +=== Sizing -More information on the link:../openstack/README.adoc[OpenStack README] for this project. +The sizing of the OCP cluster is done by selecting the RHOS flavor of the + master and worker nodes as well as the number of replicas for each. -== Deploy OCP cluster on RHOS +For more information on obtaining Flavor information from RHOS using the + CLI check our link:../openstack/openstack-cli.adoc[RHOS CLI] document. 
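+As an illustration only (a sketch based on the `ocp_cluster` role options;
+ the exact invocation is described in the
+ link:../ansible/playbook/ocp/README.adoc[OCP README]), the sizing can be
+ overridden with extra vars when launching the installation playbook:
+
+[source,bash]
+----
+# Hypothetical sizing: 3 control plane nodes and 2 workers on the ocp4.* flavors
+ansible-playbook -i inventory/ ansible/playbook/ocp/ocp_openstack_install.yml \
+  -e ocp_cluster_name=ocp-sdev -e openshift_pull_secret=${OCP_PULL_SECRET} \
+  -e ocp_master_nodes=3 -e ocp_worker_nodes=2 \
+  -e openstack_flavor_control_plane=ocp4.control \
+  -e openstack_flavor_compute=ocp4.compute -K
+----
+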
-include::../ansible/playbook/ocp/README.adoc[tag=deploy_ocp_on_rhos]
+More information on the following links:
+
+* link:https://docs.openshift.com/container-platform/4.12/installing/installing_bare_metal_ipi/ipi-install-installation-workflow.html[Setting up the environment for an OpenShift installation]
+* link:https://docs.openshift.com/container-platform/4.12/installing/installing_openstack/installing-openstack-user.html[Installing a cluster on OpenStack on your own infrastructure]
+
+== Installation process
+
+=== Deploy OCP cluster on RHOS
+
+Using the `openshift-install` tool, deploy the OCP cluster on the RHOS infrastructure.
+
+Check the link:../ansible/playbook/ocp/README.adoc[OCP README] document in the link:../ansible/playbook/ocp[OCP PlayBooks] folder.
 
 === Backup the installation directory
 
+After the cluster deployment has finished, back up the installation directory.
+
+[CAUTION]
+====
+As mentioned earlier in this document, the installation folder must be kept for
+ correct server maintenance. The procedure to make this backup is the following.
+====
+
 Generate the base64 codification of the `.tar.gz` file containing the backup of the `openshift-install` directory.
 
 [source,bash]
 ----
-base64 ocp-sscpc-data.tar.gz > ocp-sscpc-data.tar.gz.base64 
+base64 ocp-sscpc-data.tar.gz > ocp-sscpc-data.tar.gz.base64
 ----
 
 Copy the contents of the file to the clipboard.
@@ -240,7 +263,7 @@
 Copy to the _jump server_ the ssh key used in the deployment of the OCP cluster.
 
 [source,bash]
 ----
-scp -i ${HOME}/.ssh/id_rsa_snowdrop_openstack id_rsa_snowdrop_openstack snowdrop@$(pass show ${VM_PROVIDER}/${VM_NAME}/floating_ip | awk 'NR==1{print $1}'):/home/snowdrop/.ssh/
+scp -i ${HOME}/.ssh/id_rsa_snowdrop_openstack ${HOME}/.ssh/id_rsa_snowdrop_openstack snowdrop@$(pass show ${VM_PROVIDER}/${VM_NAME}/floating_ip | awk 'NR==1{print $1}'):/home/snowdrop/.ssh/
 ----
 
 Connect to the _jump server_.
@@ -278,6 +301,20 @@
 make configuration changes via `machineconfig` objects:
 
 ----
 
+=== Add domain to certificate manager
+
+The new domain associated with the cluster has to be added to the certificate manager.
+
+[source,bash]
+----
+kc -n snowdrop-site edit certificate snowdrop-dev
+----
+
+[source,bash]
+----
+kc -n snowdrop-site edit issuer letsencrypt-prod-snowdrop-dev
+----
+
 == Remove existing OCP cluster on RHOS
 
 include::../ansible/playbook/ocp/README.adoc[tag=undeploy_ocp_on_rhos]
diff --git a/openstack/README.adoc b/openstack/README.adoc
index 39e3f1ce..0e9bb2ba 100644
--- a/openstack/README.adoc
+++ b/openstack/README.adoc
@@ -1,9 +1,11 @@
-= OpenStack
-Snowdrop Team (Antonio Costa)
+= Red Hat OpenStack
+Snowdrop Team
 :icons: font
 :revdate: {docdate}
 :toc: left
-:description: This document describes the requirements and the process to execute the provisioning of a Cloud VM on Openstack.
+:toclevels: 3
+:description: Red Hat OpenStack tools
+:sectnums:
 ifdef::env-github[]
 :tip-caption: :bulb:
 :note-caption: :information_source:
 :important-caption: :heavy_exclamation_mark:
 :caution-caption: :fire:
 :warning-caption: :warning:
 endif::[]
 
-== Prerequisites
-// tag::rhos_prerequisites[]
+== Introduction
 
-The following python packages are needed:
+[.lead]
+This document describes the tools that help provision and maintain
+ infrastructure on a Red Hat OpenStack Platform.
-- https://github.com/micheles/decorator/blob/master/docs/documentation.md#usefulness-of-decorators[decorator]
-- https://pypi.org/project/openstacksdk/[openstacksdk] (Version: 1.2.0)
+The tools implemented are the following:
 
-and can be installed using pip
+* Provision VMs on RHOSP
+* Provision OCP cluster on RHOSP
 
-[source,bash]
-----
-$ [sudo] pip3 install decorator
-$ [sudo] pip3 install openstacksdk
-----
+[glossary]
+== Terminology
 
-like also the `openstack.cloud` Ansible collection.
+Glossary of terms used.
 
-[source,bash]
+[glossary]
+FS:: Filesystem
+Host:: Target OpenStack instance or VM
+OCP:: OpenShift Container Platform
+RHOSP:: Red Hat OpenStack Platform
+
+== Requirements
+
+The following requirements must be met to fully use this project.
+
+=== Passwordstore
+
+Passwordstore is used during the execution of the Ansible playbooks when RHOSP
+ instances or OCP clusters are created/removed. On the one hand it
+ provides information for the deployment process, such as RHOSP authentication;
+ on the other hand it stores the results of the process, which are then used
+ as the Ansible inventory.
+
+[NOTE]
+====
+All RHOSP information will be stored under the `/openstack` passwordstore folder.
+====
+
+=== VPN
+
+This document assumes that you have access to the RHOSP infrastructure. In
+ the case of RHOS-PSI it is only reachable when connected to the Red Hat VPN.
+
+[#rhosp-authentication]
+== RHOSP authentication
+
+As this project connects to a RHOSP infrastructure, authenticating
+ against that platform is needed. Before using this project, collect the
+ authentication information that fits your needs.
+
+The default authentication plugin for this project is `v3password`. In order
+ to use this plugin the following information is required.
+
+.RHOSP authentication information
+[%header,cols="20%m,80%"]
+|===
+| Variable | Meaning
+
+| auth_type
+
+a| Authentication plugin that will be used to handle the authentication process. In this scenario the value will be `v3password`.
+
+[TIP]
+====
+Other values can be selected such as `v3token`.
+
+Check the `openstack` CLI man page or the
+ link:https://docs.openstack.org/python-openstackclient/latest/cli/man/openstack.html[OpenStack CLI web man] for a list of possible plugins.
+====
+
+| auth_url
+
+a| Authentication URL
+
+| password
+
+a| Authentication password
+
+| project_domain_name
+
+a| Domain name or ID containing the project
+
+| project_name
+
+a| Project-level authentication scope (name or ID)
+
+| username
+
+a| Authentication username
+
+| user_domain_name
+
+a| Domain name or ID containing the user
+
+|===
+
+[TIP]
+====
+For more detailed information on RHOSP authentication check the
+ link:https://docs.openstack.org/python-openstackclient/latest/cli/authentication.html[OpenStack CLI Authentication]
+ document.
+====
+
+=== Ansible playbooks
+
+Two Ansible playbooks are already available to collect the required
+ authentication information from passwordstore.
+
+* *link:../ansible/playbook/openstack/openstack_auth_passstore_v3password.yml[openstack_auth_passstore_v3password.yml] <= Default*
+* link:../ansible/playbook/openstack/openstack_auth_passstore_v3applicationcredential.yml[openstack_auth_passstore_v3applicationcredential.yml]
+
+These playbooks collect the required information from the
+ `passwordstore` and fill the `rhos_auth` map and `rhos_auth_type` Ansible host variables that are later used by playbooks and roles.
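+
+As a quick sanity check (a sketch only; the endpoint and credential values
+ below are placeholders, not real project settings), the same `v3password`
+ fields can be exported as standard `OS_*` environment variables and tested
+ directly with the `openstack` CLI before running any playbook:
+
+[source,bash]
+----
+# Hypothetical values; in this project they live in passwordstore
+export OS_AUTH_TYPE=v3password
+export OS_AUTH_URL=https://rhosp.example.com:13000/v3
+export OS_USERNAME=my-user
+export OS_PASSWORD=my-password
+export OS_PROJECT_NAME=my-project
+export OS_PROJECT_DOMAIN_NAME=Default
+export OS_USER_DOMAIN_NAME=Default
+
+# Any authenticated call confirms the credentials, e.g. requesting a token
+openstack token issue
+----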
+
+==== openstack_auth_passstore_v3password
+
+The default authentication playbook is `openstack_auth_passstore_v3password.yml`
+ and uses the *v3password* authentication plugin. It collects the
+ required information from the passwordstore.
+
+.Source for `v3password` authentication
+[%header,cols="5%,15%,80%"]
+|===
+2+| Variable | Passwordstore source
+
+2+| `rhos_auth` |
+
+| {nbsp}
+| `auth_url`
+| `openstack/host/os_auth_url`
+
+| {nbsp}
+| `password`
+| `openstack/host/console_pw`
+
+| {nbsp}
+| `project_domain_name`
+| `openstack/host/os_domain`
+
+| {nbsp}
+| `project_name`
+| `openstack/host/project_name`
+
+| {nbsp}
+| `username`
+| `openstack/host/console_user`
+
+| {nbsp}
+| `user_domain_name`
+| `openstack/host/console_domain`
+
+2+| `rhos_auth_type`
+a| Authentication plugin used, which is `v3password` in this case.
+
+|===
+
+.Click to show sample Ansible Playbook for setting the RHOSP authentication facts
+[%collapsible]
+======
+[source,yaml]
+----
+include::../ansible/playbook/openstack/openstack_auth_passstore_v3password.yml[]
+----
+======
+
+== Ansible Inventory
+
+The inventory of all RHOSP hosts is managed by Ansible.
+
+[NOTE]
+====
+Please refer to our link:../ansible/README.adoc[Ansible Document] for more information on the project Ansible Inventory.
+====
+
+The host information will be stored under the `openstack` folder where a
+ sub-folder exists for each host. It also stores the SSH public and secret
+ keys locally on the user's `~/.ssh` folder.
+
+== VM Provisioning
+
+The main goal of the RHOSP tools is to provision hosts and
+ store their information on the team passwordstore inventory for
+ later use by the team members.
+
+Prior to deploying a host, aspects such as the OS image and
+ the sizing of the host must be addressed.
+
+=== Preparing the Provisioning
+
+[WARNING]
+====
+The examples provided here are snapshots of a specific point in time and
+ RHOSP implementation. To get updated lists of your available images
+ and flavors, check your RHOSP cluster using either
+ the RHOSP console or the link:openstack-cli.adoc[RHOSP CLI].
+====
+
+==== OS Image
+
+[quote,RedHat RHOSP Documentation,The Image service (glance)]
+A virtual machine image is a file that contains a virtual disk with a bootable operating system installed.
+
+To identify available images and choose which one to use, check our docs on the link:openstack-cli.adoc[RHOSP CLI tool], which describe some of the most used commands.
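+
+For instance, a list similar to the sample below can be produced with a
+ single CLI call (assuming the `openstack` client is installed and a cloud
+ named `openstack` is configured in your `clouds.yaml`):
+
+[source,bash]
+----
+# List the images available on the target RHOSP cloud
+openstack --os-cloud openstack image list
+----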
+ +.Sample OpenStack Cloud image list +[%header,cols="45%,45%,10%"] +|=== +| ID | Name | Status +| 0b7d28c6-56ec-4d72- | Fedora-Cloud-Base-30 | active +| 59ed78ec-c632-4a9d- | Fedora-Cloud-Base-32 | active +| 6e6327eb-522a-4d7e- | Fedora-Cloud-Base-33 | active +| e5b85cf9-6b7c-44a0- | Fedora-Cloud-Base-34 | active +| 8b8ab2a1-e349-4313- | Fedora-Cloud-Base-35 | active +| ca58d538-674d-40c8- | Fedora-Cloud-Base-36 | active +| cbea8fed- | Fedora-Cloud-Base-37 | active |=== -=== Flavors +[#flavors] +==== Flavors + +[quote,RedHat RHOSP Documentation,Flavors] +In OpenStack, flavors define the compute, memory, and storage capacity of nova computing instances. To put it simply, a flavor is an available hardware configuration for a server. It defines the size of a virtual server that can be launched. -.OpenStack Flavor information +Reduced list of flavors obtained from our RHOSP cluster. + +.Sample OpenStack Flavor information [%header,cols="2m,1,1,1,1,1"] |=== -| Flavor | VCPUS | RAM | Total Disks | Root Disk | Ephmeral Disk - -| m1.medium | 2 | 4 GB | 40 GB | 40 GB | 0 GB -| ci.m1.medium | 2 | 4 GB | 40 GB | 40 GB | 0 GB -| ci.m1.medium.large | 4| 4 GB | 16 GB | 16 GB | 0 GB -| ci.m4.xlarge | 4| 16 GB | 40 GB | 40 GB | 0 GB -| g.standard.xxl | 12 | 24GB | 120GB | 120GB | 0GB -| ci.m5.large | 16 | 32GB | 40GB | 40GB | 0GB +| Flavor | VCPUS | RAM | Total Disks | Root Disk | Ephemeral Disk + +| m1.medium | 2 | 4 GB | 40 GB | 40 GB | 0 GB +| ci.m1.medium | 2 | 4 GB | 40 GB | 40 GB | 0 GB +| ci.m1.medium.large | 4 | 4 GB | 16 GB | 16 GB | 0 GB +| ci.m4.xlarge | 4 | 16 GB | 40 GB | 40 GB | 0 GB +| ci.m5.large | 16 | 32GB | 40GB | 40GB | 0GB +| g.standard.xxl | 12 | 24GB | 120GB | 120GB | 0GB +| ocp4.single-node | 24 | 48GB | 200GB | 200GB | 0GB +| ocp4.control | 4 | 16GB | 100GB | 100GB | 0GB +| ocp4.compute | 2 | 8GB | 100GB | 100GB | 0GB +| ocp4.bootstrap | 4 | 16GB | 100GB | 100GB | 0GB |=== -== Available Ansible Playbooks +==== Networks + +[quote,RedHat RHOSP Documentation,Network] +A network is an isolated Layer 2 networking segment. + +`provider_net_shared` is the default network to be used. + +.Sample network list +[source] +---- ++-------------------------+-------------------------+--------------------------+ +| ID | Name | Subnets | ++-------------------------+-------------------------+--------------------------+ +| 0e212597-e475-4c4a- | provider_net_cci_13 | d3b1c702-bb71-4547-8cf0- | +| a4fa-db71f84ec04c | | 2ff5f9802595 | +| 5058fef2-f89f-4e70- | provider_net_cci_7 | eb8db9f4-a76f-4fe2-a0bd- | +| 9e01-66af2847ddc4 | | f932bc20dfa1 | +| 68a8220a-20f4-4940- | provider_net_cci_4 | 10a8b6b3-7ff5-4933-9e31- | +| 99b4-45b6f98bce6b | | 9be0f25d745e | +| 6a32627e-d98d-40d8- | provider_net_shared | b7e7d2b5-efc1-462a-96ec- | +| 9324-5da7cf1452fc | | eda940820520 | ++-------------------------+-------------------------+--------------------------+ +---- + +More information on RHOSP networks on link:openstack-cli.adoc#network[our OpenStack CLI document] + and in the link:https://docs.openstack.org/python-openstackclient/latest/cli/command-objects/network.html[OpenStack CLI official documentation]. + +=== Provision the VM + +[.lead] +Once the system configuration is identified it's time to create a new Host + which can be done using our pre-prepared Ansible Playbooks. + +[WARNING] +==== +Detailed documentation on Host provisioning can be found at our + link:../ansible/playbook/openstack/README.adoc[OpenStack Ansible Playbooks] document. 
+
+In this document you'll find:
+
+* list of the available playbooks;
+* execution instructions to perform the VM provision operations, including
+ all the parameters available on the playbooks
+* information on the outputs of each playbook
+====
+
+To quickly create a Host you can use the following command, taking care to
+ check the `network`, `image`, `flavor` and `vm_name` variables
+ that should be filled according to your implementation.
+
+.Create OpenStack Host command
+[source,bash]
+----
+ansible-playbook ansible/playbook/openstack/openstack_vm_create_passwordstore.yml -e '{"openstack": {"vm": {"network": "provider_net_shared","image": "Fedora-Cloud-Base-37", "flavor": "m1.medium"}}}' -e vm_name=snowdrop_sample_vm
+----
+
+To delete the newly created host, execute the following command.
+
+.Remove OpenStack Host command
+[source,bash]
+----
+ansible-playbook ansible/playbook/openstack/openstack_vm_remove_passwordstore.yml -e vm_name=snowdrop_sample_vm
+----
+
+== Connect to a RHOSP instance
+
+To improve usability the link:../tools/passstore-vm-ssh.sh[`tools/passstore-vm-ssh.sh`]
+ bash script has been created. It makes it easier to connect to any
+ host under the project's Ansible Inventory.
+
+More documentation on the bash script can be found link:../tools/README.md[here].
+
+To SSH connect to a VM, use the bash script.
+
+.Sample connection execution
+[source,bash]
+----
+./tools/passstore-vm-ssh.sh openstack snowdrop_sample_vm
+----
+
+This should connect to the newly created VM.
+
+====
+----------------------------
+Last login: Thu Jan 1 00:00:00 1970 from x.x.x.x
+------------------
+
+This machine is property of RedHat.
+Access is forbidden to all unauthorized person.
+All activity is being monitored.
+
+Welcome to snowdrop_sample_vm.
+----------------------------
+====
+
+== Deploy OCP on RHOSP
+
+[.lead]
+Set of Ansible Playbooks and Roles that deploy an OCP cluster on RHOSP.
+
+This set of Ansible playbooks will provision an OCP cluster, tailored
+ to the selected size. It will also provision a jump server that will
+ allow ssh connections to the cluster nodes.
+
+To perform this installation check the link:../ansible/playbook/ocp/README.adoc[OCP on RHOSP Ansible Playbooks] document.
+
+== OpenStack CLI
 
-More information on the available OpenStack Ansible Playbooks on the
-link:../ansible/playbook/openstack/README.adoc[Playbook README].
+The OpenStack CLI tool comes in very handy to perform several checks and
+ information collection. We've created the link:openstack-cli.adoc[RHOSP CLI]
+ document to describe some of the most used commands.
diff --git a/openstack/openstack-cli.adoc b/openstack/openstack-cli.adoc
index fdbc55c9..61b97cf8 100644
--- a/openstack/openstack-cli.adoc
+++ b/openstack/openstack-cli.adoc
@@ -1,7 +1,17 @@
 = OpenStack CLI
+Snowdrop Team
 :icons: font
+:revdate: {docdate}
 :toc: left
-:description: This document describes Openstack CLI commands.
+:toclevels: 3
+:description: RHOS CLI
+ifdef::env-github[]
+:tip-caption: :bulb:
+:note-caption: :information_source:
+:important-caption: :heavy_exclamation_mark:
+:caution-caption: :fire:
+:warning-caption: :warning:
+endif::[]
 
 == References
 
@@ -23,7 +33,7 @@
 pip3 install python-openstackclient
 ----
 
 To access the OpenStack platform using the client, different `Environment Variables` must be set as described https://docs.openstack.org/newton/user-guide/common/cli-set-environment-variables-using-openstack-rc.html[here].
So, connect to your RHOS instance (e.g https://rhos-d.infra.prod.upshift.rdu2.redhat.com/) and select within the menu: `Project > API Access`. -From there, download the `OpenStack RC file` bash file by clicking on the button and store somwhere (e.g ./rhos-openrc.sh). +From there, download the `OpenStack RC file` bash file by clicking on the button and store somewhere (e.g ./rhos-openrc.sh). Copy the file to a path directory, for instance `~/bin` (or `~/.local/bin` or `/usr/local/bin`) and give it execution permission. Source the bash script `source /usr/local/bin/rhos-openrc.sh` and enter your password as requested. @@ -35,23 +45,31 @@ Please enter your OpenStack Password for project xxxxxxxxxx as user xxxxxxxxxxxx That's it, you're now logged in. -=== Flavor +[#flavors] +=== Flavors -.Get the list of the flavors +List the flavors available. [source,bash] ---- -$ openstack flavor list -+--------------------------------------+------------------------------------+--------+------+-----------+-------+-----------+ -| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public | -+--------------------------------------+------------------------------------+--------+------+-----------+-------+-----------+ -| 0f84ff1f-2cc8-4c99-baa2-d663577bd538 | ci.memory.xxl | 98304 | 60 | 0 | 12 | True | -| 3410bf5a-31c6-43f9-a859-a0f848552bc2 | ci.m5.xlarge | 49152 | 40 | 0 | 24 | True | -| 393bcaeb-a8bf-4a18-a893-799f01e338b8 | ci.memory.xxxl | 131072 | 80 | 0 | 16 | True | -| 52e23b17-4039-488e-9c63-543795a31bb8 | g.memory.xxxl | 131072 | 80 | 0 | 16 | True | -| 8cc169d5-184a-4392-bc27-6eac97735d62 | ocp-master-xxl | 49152 | 45 | 0 | 32 | True | -... -+--------------------------------------+------------------------------------+--------+------+-----------+-------+-----------+ +openstack flavor list | grep ocp +---- + +The result should be similar to: + +[source] +---- ++---------------------+-------------------+--------+------+-----+-------+------+ +| ID | Name | RAM | Disk | Eph | VCPUs | Publ | ++---------------------+-------------------+--------+------+-----+-------+------+ +| 15c48f0d-dec4-4290- | ocp-compute | 16384 | 45 | 0 | 16 | True | +| 62714b74-29ed-49a7- | ocp4.control | 16384 | 100 | 0 | 4 | True | +| 66502c57-663a-4e51- | ocp4.compute | 8192 | 100 | 0 | 2 | True | +| 9f5da47b-3d98-4644- | ocp-infra | 8192 | 40 | 0 | 4 | True | +| bc8850d7-db24-4bef- | ocp4.single-node | 49152 | 200 | 0 | 24 | True | +| e658777a-4953-4b11- | ocp4.bootstrap | 16384 | 100 | 0 | 4 | True | +| fbcd841e-6559-49a4- | ocp-master | 16384 | 45 | 0 | 4 | True | ++---------------------+-------------------+--------+------+-----+-------+------+ ---- .Filter list by RAM @@ -60,6 +78,13 @@ $ openstack flavor list openstack flavor list --min-ram 33000 ---- +[TIP] +==== +More information on RHOS flavors at the link:https://docs.openstack.org/nova/pike/admin/flavors.html[RHOS docs]. +==== + +==== Create flavor + Create a new flavor. .Create flavor @@ -68,26 +93,198 @@ Create a new flavor. $ openstack flavor create g.standard.xxxl.ram --ram 40960 --disk 160 --vcpus 16 ---- -=== Server +[#network] +=== Networks -List servers. +List networks. 
[source,bash] ---- -$ openstack server list -+--------------------------------------+----------------------------+---------+---------------------------------+----------------------------------------------+-----------------+ -| ID | Name | Status | Networks | Image | Flavor | -+--------------------------------------+----------------------------+---------+---------------------------------+----------------------------------------------+-----------------+ -| xxxxxx-xxxx-xxxx-xxxx-xxxxxx | k123-fedora35-01 | SHUTOFF | provider_net_shared=x.x.x.x | Fedora-Cloud-Base-35 | g.standard.xxxl | -| xxxxxx-xxxx-xxxx-xxxx-xxxxxx | 20220425-k121-centos8-test | ACTIVE | provider_net_shared=x.x.x.x | CentOS-8-x86_64-GenericCloud-released-latest | ci.m5.large | -| xxxxxx-xxxx-xxxx-xxxx-xxxxxx | n119-test | ACTIVE | provider_net_shared=x.x.x.x | CentOS-7-x86_64-GenericCloud-released-latest | ci.m5.large | -+--------------------------------------+----------------------------+---------+---------------------------------+----------------------------------------------+-----------------+ +openstack --os-cloud openstack network list --max-width 80 ---- -==== Resize server +[source] +---- ++-------------------------+-------------------------+--------------------------+ +| ID | Name | Subnets | ++-------------------------+-------------------------+--------------------------+ +| 0e212597-e475-4c4a- | provider_net_cci_13 | d3b1c702-bb71-4547-8cf0- | +| a4fa-db71f84ec04c | | 2ff5f9802595 | +| 10e45d6d-5924-48ee- | provider_net_ipv6_only | 95214fb1-550b-4274-92a2- | +| 9f5a-9713f5facc36 | | fae39a144a70 | +| 14c15d33-175c-424e- | provider_net_shared_2 | 17eca5aa-75c5-411c-a1cd- | +| 88ba-361a875e0c5c | | ae1d2cc8cf3d | +| 1cf0a81b-6786-4052- | provider_net_ocp_osbs | 14264d65-9a4e-46bf-950f- | +| a1bc-904e05ae410d | | 1f3f277ff64d | +| 25ec4907-36fc-4035- | provider_net_cci_5 | 6159e87c-06a1-4f56-aa5a- | +| b8d5-b797246330f2 | | aabad1298be5 | +| 271db5de-8bf4-4f99- | ocp- | 7cc7d0ee-f36b-46db- | +| a32d-002e7aea388a | sdev-9gv8d-openshift | ab57-ad4a8a653854 | +| 27671b90-c2bc-483f- | manila_net | 6ccbef33-7962-4214-a992- | +| b783-cc856f20ee5d | | 02f9e83a235d | +| 316eeb47-1498-46b4- | provider_net_shared_3 | 1447a1b3-c28f-4026-9edb- | +| b39e-00ddf73bd2a5 | | 98af355c29c9 | +| 333dadbb-3a26-4b66- | provider_net_lab | 289297e5-0fd1-45f8-b2a5- | +| ad1b-547196d92e88 | | 248f5c2abc18 | +| 36e46f70-99ff-48f5- | provider_net_cci_12 | e9fa371f-2b3e-4a4f-a33b- | +| aa9d-7bbd22b6218a | | 9e39f869aea3 | +| 3fdeb18a-2ad4-4536- | provider_net_dualstack_ | 489315d5-1d84-4b7f-9349- | +| bc23-c3c488a382ad | 1 | f5a14faff452, e4dba1d2- | +| | | f211-4d19-96f9- | +| | | 367eb281a41d | +| 3feeb1e1-132d-41a0- | provider_net_istio | df9fdb4f-1fd7-4581-ac7c- | +| 8fb9-55f69d11f7c6 | | 27146c4c1df0 | +| 49a185f9-83b0-4b2d- | provider_net_ocp_prodse | 3ea8f7b7-281f-4899-8c8d- | +| 811b-4f3cfbc3d30c | c_psi | edd0d78623dd | +| 4bc90704-dd9e-412f- | provider_net_ocp_stage | 17d2ed9d-790d-45d9-8f0b- | +| b89a-07267113fbfd | | e00fbfa45226 | +| 5058fef2-f89f-4e70- | provider_net_cci_7 | eb8db9f4-a76f-4fe2-a0bd- | +| 9e01-66af2847ddc4 | | f932bc20dfa1 | +| 52f90b15-4773-4b00- | provider_net_ocp4_prod_ | ebe0285b-4cca-4fba-85b9- | +| 84c4-ba27916c118a | psi | ea73f178d39e | +| 5cd089f9-8ed2-46bc- | provider_net_quicklab | 1199e331-bc4e-42de-b681- | +| 8ea7-4e1cdb5262ba | | c60f87319cd7 | +| 5f00bb1a-0e38-43f9- | provider_net_ocp_prod_p | e2206e0c-da84-4175-82c5- | +| b48d-fc424bfd6cab | si | d3d38e00cca1 | +| 60cacaff-86a6-4f88- | 
provider_net_cci_8 | 456329df-36f5-452a-bae2- | +| 82a4-ed3023724df1 | | 404003910f09 | +| 68a8220a-20f4-4940- | provider_net_cci_4 | 10a8b6b3-7ff5-4933-9e31- | +| 99b4-45b6f98bce6b | | 9be0f25d745e | +| 6a32627e-d98d-40d8- | provider_net_shared | b7e7d2b5-efc1-462a-96ec- | +| 9324-5da7cf1452fc | | eda940820520 | +| 6c256a91-7b1b-427d- | provider_net_ocp_stage_ | 42d62dda-5ce7-4ccf-9998- | +| bcb8-2495a7401f6a | psi | a14799fbf962 | +| 74e8faa7-87ba-41b2- | provider_net_cci_2 | 11b95215-522d-4730-97d5- | +| a000-438013194814 | | a76bdc66d6fa, 63b2d4a6- | +| | | 6df2-417c-8ee8- | +| | | d0e01bc523c8 | +| 90341629-df19-4196- | ocp-xyz-rhzhf-openshift | ea8e54be-523d-44ac-92eb- | +| 9002-d4a8d9fbf5b9 | | ab870cbe669c | +| 9b37aaba-874c-4ef4- | provider_net_ocp4_sdbx_ | 32b67ebf-6aa1-4964-83c2- | +| b45a-1efd6d21b928 | psi | c526d33359a3 | +| a0578760-3460-4f0d- | ocp-sdev-p75fs- | 63fa9393-3d64-43fa-b39f- | +| 827b-75edc1609cec | openshift | f36d2fde9c87 | +| b71d614c-b0b0-4f2d- | provider_net_cci_11 | af342799-3d03-4b51-b252- | +| b141-e78129212b98 | | f56bed4e0997 | +| b8426041-7cf9-4f36- | provider_net_ocp_dev | 58a82433-493b-41cb-966a- | +| 9732-e5d582469d3f | | 00d9b6e61772 | +| cd8cbb14-ec50-4417- | provider_net_ocp_osbs_p | 261d4685-edb6-4779-8ac8- | +| a5e6-34c3f2ccec3b | si | 495ab4882c0c | +| d284bcff-d1ed-452d- | provider_net_cci_1 | 1a14746d-8e7d-4dbe-a361- | +| b7e3-af979b9582a3 | | dfcc01b0bc5c, 3efe14a9- | +| | | 3d70-47a1-a7f8- | +| | | 5d373539c399 | +| d655dcd0-b593-439c- | provider_net_cci_9 | 46c0f9b7-0028-4780-97c9- | +| 997b-aa5bc8c03a3a | | 25b2e93f05d7 | +| de061265-0353-4b38- | ocp-sscpc-openshift | f050f0d2-3daa-4a63-9053- | +| a78e-5d0627797ea1 | | a07228068855 | +| eb3e8289-ce41-4825- | provider_net_cci_3 | 02a8825d-e5f7-4e91-b502- | +| a48a-8f8e11feaec7 | | fc8361051e44, 62a381e5- | +| | | 9313-43fa-a515- | +| | | cd0d7560907b | +| eceac180-5a4d-4b1d- | provider_net_cci_14 | b360d82a-1375-4549-a665- | +| b916-1d4e8f19b873 | | 1f505aae2663 | +| ee7dcdfe-2b6e-4b7e- | provider_net_cci_6 | 3abbd7bc-6027-49de- | +| bbe9-3dabc0972bb5 | | ba44-96e4a6268d45 | +| f27262a7-1304-4e45- | assisted-lab-net | 11eb1393-6040-4635-99af- | +| a7cf-6b8e0ba0c103 | | 7f3ae340523d | +| ff415208-8322-43c4- | provider_net_sysops | b40cec0a-1e14-43d5-9451- | +| af20-b764740aa3f4 | | 89eb8b48e323 | ++-------------------------+-------------------------+--------------------------+ +---- + +=== Images + +Different OS images are available on Openstack and can be discovered using the command `openstack image list`. +Filter them according to the target OS that you are interested in: + +[source,bash] +---- +openstack image list | grep -ni "Fedora-Cloud-Base.*" +openstack image list | grep -ni "RHEL-9.*" +---- + +To get the detail about an image you will use the command `openstack image show` [source,bash] +---- +openstack image show Fedora-Cloud-Base-37 --fit-width +---- + +Should present information for that image. 
+
+[#servers]
+=== Servers
+
+List existing servers.
+
+[source,bash]
+----
+openstack server list --max-width 80
+----
+
+The resulting list:
+
+[source]
+----
++-------------+-------------+--------+-------------+-------------+-------------+
+| ID          | Name        | Status | Networks    | Image       | Flavor      |
++-------------+-------------+--------+-------------+-------------+-------------+
+| a0e54723-   | snowdrop-   | ACTIVE | provider_ne | Fedora-     | ci.m4.xlarg |
+| 7374-430b-  | k8s         |        | t_shared=x  | Cloud-      | e           |
+| bcb7-       |             |        | .x.x.x      | Base-37     |             |
+| c144c583651 |             |        |             |             |             |
+| b           |             |        |             |             |             |
+| a0923a85-   | tap15       | ACTIVE | provider_ne | Fedora-     | g.standard. |
+| e5b1-4d03-  |             |        | t_shared=x  | Cloud-      | xxl         |
+| 943d-       |             |        | .x.x.x      | Base-35     |             |
+| c7760a16563 |             |        |             |             |             |
+| 9           |             |        |             |             |             |
++-------------+-------------+--------+-------------+-------------+-------------+
+----
+
+==== Resize server
+
+[source,bash]
+----
 $ nova help resize
 usage: nova resize [--poll]
@@ -100,7 +297,7 @@ Positional arguments:
 Options:
   --poll  Report the server resize progress until it completes.
-====
+----
 
 [source,bash]
 ----
@@ -128,65 +325,28 @@ Positional arguments:
 $ nova resize-confirm k123-fedora35-01
 ----
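+
+[NOTE]
+====
+The `nova` client shown above is the legacy CLI. A minimal sketch of the same flow with the unified `openstack` client (assuming a recent python-openstackclient; older releases use `openstack server resize --confirm` instead of the `resize confirm` subcommand):
+
+[source,bash]
+----
+# Resize the server to the target flavor, then confirm the resize.
+openstack server resize --flavor ci.m5.large k123-fedora35-01
+openstack server resize confirm k123-fedora35-01
+----
+====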
 
-Typical errors.
+==== Typical errors
 
-[source,bash]
-----
-$ nova resize --poll k123-fedora35-01 PnTAE.CPU_20_Memory_65536_Disk_200
-ERROR (Forbidden): Quota exceeded for ram: Requested 32768, but already used 98304 of 122880 ram (HTTP 403) (Request-ID: xxxxxxxxxxxxxx)
-----
+If we try an operation that would exceed the quota, an error is returned.
 
 [source,bash]
 ----
-$ nova resize --poll k123-fedora35-01 ci.m5.xlarge
-ERROR (ClientException): Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
- (HTTP 500) (Request-ID: req-774039f4-3619-4bb8-8727-31e5f99edda2)
+nova resize --poll k123-fedora35-01 PnTAE.CPU_20_Memory_65536_Disk_200
----
 
-== Current implementation
-
-=== Images
-
-Different OS images are available on Openstack and can be discovered using the command `openstack image list`.
-Filter them according to the target OS that you are interested in:
-
-```
-openstack image list | grep -ni "Fedora-Cloud-Base.*"
-openstack image list | grep -ni "RHEL-9.*"
-```
-
-To get the detail about an image you will use the command `openstack image show`
-```
-openstack image show 8b8ab2a1-e349-4313-9a38-a800f42ffe99 -f shell
-```
-
-.OpenStack Image information
-[%header,cols="2m,1,1,1"]
-|===
-| Name | OS | Version | FS
-
-| Fedora-Cloud-Base-35 | Fedora | 35 | BTRFS
-| CentOS-8-x86_64-GenericCloud-released-latest | CentOS | 8 | ????
-| CentOS-7-x86_64-GenericCloud-released-latest | CentOS | 7 | ????
-
-|===
-
-=== Flavors
-
-.OpenStack Flavor information
-[%header,cols="2m,1,1,1,1,1"]
-|===
-| Flavor | VCPUS | RAM | Total Disks | Root Disk | Ephemeral Disk
-
-| m1.medium | 2 | 4 GB | 40 GB | 40 GB | 0 GB
-| ci.m1.medium | 2 | 4 GB | 40 GB | 40 GB | 0 GB
-| ci.m1.medium.large | 4| 4 GB | 16 GB | 16 GB | 0 GB
-| ci.m5.large | 16 | 32GB | 40GB | 40GB | 0GB
-|===
+The following error message is returned:
 
-:leveloffset: +1
+====
+ERROR (Forbidden): Quota exceeded for ram: Requested 32768, but already used 98304 of 122880 ram (HTTP 403) (Request-ID: xxxxxxxxxxxxxx)
+====
 
-include::../ansible/playbook/openstack/README.adoc[]
+[source,bash]
+----
+nova resize --poll k123-fedora35-01 ci.m5.xlarge
+----
 
-:leveloffset: -1
+====
+ERROR (ClientException): Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
+ (HTTP 500) (Request-ID: req-774039f4-3619-4bb8-8727-31e5f99edda2)
+====
diff --git a/passwordstore/README.adoc b/passwordstore/README.adoc
index fbcf54ae..999d1bc7 100644
--- a/passwordstore/README.adoc
+++ b/passwordstore/README.adoc
@@ -81,3 +81,67 @@ script located at link:../ansible/inventory/pass_inventory.py[../ansible/invento
 == Ansible Playbooks
 
 Information on the available playbooks is available link:../ansible/playbook/passstore/README.adoc[here].
+
+
+== Connect to a host instance
+
+All the information related to the hosts is stored in the passwordstore Ansible inventory. The current implementation also stores the SSH public and private keys locally in each user's `~/.ssh` folder. To make this connection easier, the link:../../../tools/passstore-vm-ssh.sh[passstore-vm-ssh.sh] bash script has been created. More documentation on the bash script can be found link:../../../tools/README.adoc[here].
+
+To connect to a VM over SSH, use the `tools/passstore-vm-ssh.sh` bash script.
+
+The arguments to pass to the script are the following.
+
+.Script options
+[%header,cols="2,4"]
+|===
+| Parameter | Description
+
+| 1: `VM_PROVIDER`
+
+[.fuchsia]#string# / [.red]#required#
+a| Cloud provider
+
+Choices:
+
+* `hetzner`
+* `openstack`
+
+| 2: `VM_NAME`
+
+[.fuchsia]#string# / [.red]#required#
+a| Name of the VM to connect to.
+
+This is the inventory name of the VM.
+
+| 3: `PASSWORD_STORE_DIR`
+
+[.fuchsia]#string#
+a| Folder where the passwordstore database is located.
+
+*Default*: `PASSWORD_STORE_DIR` environment variable, if set.
+If this parameter is not provided and no `PASSWORD_STORE_DIR` env
+variable is set, the script will fail as it doesn't know the location
+of the passwordstore project.
+
+|===
+
+
+.Connect to a passwordstore VM
+[source,bash]
+----
+./tools/passstore-vm-ssh.sh openstack ${VM_NAME}
+----
+
+This should connect to the newly created VM.
+
+[source,bash]
+======
+Last login: Thu Jan  1 00:00:00 1970 from x.x.x.x
+------------------
+
+This machine is property of RedHat.
+Access is forbidden to all unauthorized person.
+All activity is being monitored.
+
+Welcome to vm20210221-t01..
+======
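+
+[NOTE]
+====
+If the `PASSWORD_STORE_DIR` environment variable is exported, the third argument can be omitted. A minimal sketch, assuming the team's pass database was cloned to `~/git/passdatabase` (adapt the path to your checkout):
+
+[source,bash]
+----
+# Point the script (and pass itself) at the team database once per shell session.
+export PASSWORD_STORE_DIR=~/git/passdatabase
+./tools/passstore-vm-ssh.sh openstack ${VM_NAME}
+----
+====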
diff --git a/requirements.txt b/requirements.txt
index f3279237..c36f1dd7 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,6 +1,8 @@
-molecule[docker,lint]
-openstacksdk >= 1.2.0
-#python-openstackclient >= 6.2.0
-yq
-ansible >= 2.9.10
+# molecule[docker,lint] ~= 5.0.1
+openstacksdk ~= 2.0.0
+#python-openstackclient ~= 5.8.0
+python-octaviaclient
+yq ~= 3.2.2
+#ansible >=7.0.0,<8.0.0
+ansible ~= 8.0.0
 ansible-lint
diff --git a/tools/README.adoc b/tools/README.adoc
new file mode 100644
index 00000000..233496cd
--- /dev/null
+++ b/tools/README.adoc
@@ -0,0 +1,145 @@
+= Tools
+Snowdrop Team
+:icons: font
+:revdate: {docdate}
+:toc: left
+:toclevels: 3
+:description: Auxiliary tools.
+ifdef::env-github[]
+:tip-caption: :bulb:
+:note-caption: :information_source:
+:important-caption: :heavy_exclamation_mark:
+:caution-caption: :fire:
+:warning-caption: :warning:
+endif::[]
+
+== Introduction
+
+[.lead]
+Auxiliary tools for the `k8s_infra` project.
+
+== passstore-vm-ssh.sh
+
+Shell script that allows connecting to a host that’s added to the
+passwordstore database.
+
+=== Requirements
+
+[arabic]
+. pass installed on the computer (https://www.passwordstore.org/)
+* Fedora:
++
+[source,bash]
+----
+$ dnf install pass
+----
+* RHEL
++
+[source,bash]
+----
+$ yum install pass
+----
+. team’s pass database updated on the computer
+* check the project documentation
+
+=== Usage
+
+Call the script passing at least the first 2 of its 4 arguments.
+
+.Connect to a passwordstore VM
+[source,bash]
+----
+./tools/passstore-vm-ssh.sh <1> <2> <3> <4>
+----
+<1> Cloud provider.
+<2> Inventory host name.
+<3> Passwordstore database folder.
+<4> SSH command to be executed on the remote host.
+
+.Script parameters
+[%header,cols="2,4"]
+|===
+| Parameter | Description
+
+| 1: `VM_PROVIDER`
+
+[.fuchsia]#string# / [.red]#required#
+a| Cloud provider
+
+Choices:
+
+* `hetzner`
+* `openstack`
+
+| 2: `VM_NAME`
+
+[.fuchsia]#string# / [.red]#required#
+a| Name of the VM to connect to.
+
+This is the inventory name of the VM.
+
+| 3: `PASSWORD_STORE_DIR`
+
+[.fuchsia]#string#
+a| Location of the passwordstore database.
+
+This parameter is optional if the `PASSWORD_STORE_DIR` environment
+ variable is set. If neither this parameter is defined nor the
+ `PASSWORD_STORE_DIR` env var is set, the script will fail.
+
+
+| 4: `SSH_COMMAND`
+
+[.fuchsia]#string#
+
+a| Optional command to be executed on the remote host.
+
+If none, the ssh connection is returned to the user.
+
+|===
+
+Connect to a remote host.
+
+[source,bash]
+----
+./tools/passstore-vm-ssh.sh openstack vm20210221-t01 ~/git/passdatabase/
+----
+
+As output, the script prints the `ssh` command to be executed and
+also launches it. For instance, the output of the previous command would
+be something like the following.
+
+[source,bash]
+----
+### SSH COMMAND: ssh -i /home/johndoe/.ssh/vm20210221-t01 loginuser@xxx.xxx.xxx.xxx -p 22
+[loginuser@vm20210221-t01 ~]
+----
+
+Execute a command on the remote host.
+
+[source,bash]
+----
+./tools/passstore-vm-ssh.sh hetzner h01-116 ~/github/snowdrop/pass/ ls
+----
+
+As output, the script prints the `ssh` command to be executed and
+also launches it. For instance, the output of the previous command would
+be something like the following.
+
+[source,bash]
+----
+### SSH COMMAND: ssh -i /home/johndoe/.ssh/h01-116 loginuser@xxx.xxx.xxx.xxx -p 22 ls
+Documents
+----
+
+=== The passwordstore database
+
+The script gathers the following information from the passwordstore
+database to set up the connection.
+
+* rsa secret key (id_rsa)
+* host IP (ansible_ssh_host)
+* ssh port (ansible_ssh_port)
+* os user (os_user)
+
+The RSA secret key contents are used to generate the ssh identity file
+at `~/.ssh/`, if that file doesn’t already exist.
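+
+For reference, the manual equivalent of what the script automates could look like the sketch below. The entry paths are hypothetical and depend on how the team’s database is laid out; `pass show` is the standard passwordstore lookup command.
+
+[source,bash]
+----
+# Hypothetical entry layout: <provider>/<vm_name>/<key>.
+pass show openstack/vm20210221-t01/ansible_ssh_host   # host IP
+pass show openstack/vm20210221-t01/ansible_ssh_port   # ssh port
+pass show openstack/vm20210221-t01/os_user            # login user
+
+# Recreate the identity file the script would generate, if missing.
+pass show openstack/vm20210221-t01/id_rsa > ~/.ssh/vm20210221-t01
+chmod 600 ~/.ssh/vm20210221-t01
+----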
diff --git a/tools/README.md b/tools/README.md
deleted file mode 100644
index 8215a5db..00000000
--- a/tools/README.md
+++ /dev/null
@@ -1,78 +0,0 @@
-Table of Contents
-=================
-
-* [Introduction](#introduction)
-* [ssh-vm](#ssh-vm)
-  * [Requirements](#requirements)
-  * [Usage](#usage)
-  * [The passwordstore database](#the-passwordstore-database)
-
-
-# Introduction
-
-Auxiliary tools for the `k8s_infra` project.
-
-# ssh-vm
-
-Shell script that allows connecting to a host that's added to the passwordstore database.
-
-## Requirements
-
-1. pass installed on the computer (https://www.passwordstore.org/)
-   * Fedora:
-     ```bash
-     $ dnf install pass
-     ```
-   * RHEL
-     ```bash
-     $ yum install pass
-     ```
-2. team's pass database updated on the computer
-   * check the project documentation
-
-## Usage
-
-Parameters:
-1. VM_PROVIDER: Provider where the VM is deployed [hetzner,openstack]
-2. VM_NAME: Name of the VM
-3. PASSWORD_STORE_DIR: Folder location of the pass database
-4. SSH COMMAND (optional): command to be executed on remote host. If none, the ssh connection is returned to the user.
-
-Connect to a remote host.
-
-```bash
-k8s_infra] $ ./tooling/passstore-vm-ssh.sh hetzner h01-116 ~/github/snowdrop/pass/
-```
-
-As output the script will print the `ssh` command to be executed and also launch it. For instance,
-the output of the previous command would be something like the following.
-
-```bash
-### SSH COMMAND: ssh -i /home/johndoe/.ssh/id_rsa_snowdrop_hetzner_h01-116 loginuser@xxx.xxx.xxx.xxx -p 22
-[loginuser@h01-116 ~]
-```
-
-Execute a command on the remote host.
-
-```bash
-k8s_infra] $ ./tooling/passstore-vm-ssh.sh hetzner h01-116 ~/github/snowdrop/pass/ ls
-```
-
-As output the script will print the `ssh` command to be executed and also launch it. For instance,
-the output of the previous command would be something like the following.
-
-```bash
-### SSH COMMAND: ssh -i /home/johndoe/.ssh/id_rsa_snowdrop_hetzner_h01-116 loginuser@xxx.xxx.xxx.xxx -p 22 ls
-Documents
-```
-
-## The passwordstore database
-
-The script gathers from the passwordstore database the following information for using on the connection.
-
-* rsa secret key (id_rsa)
-* host IP (ansible_ssh_host)
-* ssh port (ansible_ssh_port)
-* os user (os_user)
-
-The RSA Secret Key contents are used to generate the ssh identity file at `~/.ssh/`, if that file doesn't already exist.