
Commit

Merge pull request #9 from leifmadsen/documentation/updates_and_examples
Add proper documentation for using this playbook
dougbtv authored Aug 24, 2017
2 parents cd7ea21 + 586ec1e commit 438b684
Showing 9 changed files with 256 additions and 14 deletions.
3 changes: 2 additions & 1 deletion .gitignore
@@ -1,3 +1,4 @@
 *.retry
 inventory/*
-!inventory/example
+!inventory/example_virtual
+!inventory/example_baremetal
244 changes: 231 additions & 13 deletions README.md
@@ -1,30 +1,248 @@
## openshift-ansible-bootstrap

Virtual and baremetal system bootstrap prior to deployment of OpenShift via the
openshift-ansible playbooks.

## Usage

### Step 0: Create new inventory directory from examples

The `inventory/` directory contains an `example_virtual` and an
`example_baremetal` directory, providing example configurations for virtual
and baremetal deployments respectively.

A virtual deployment will instantiate a new virtual environment on your virtual
host and set up the bridge interface.

For a baremetal deployment, significantly less pre-deployment work is needed,
as it is assumed your baremetal nodes already have their operating system
installed and their disks partitioned, and are ready on the network for
bootstrapping.

## Pre-deployment configuration (virtual and baremetal)

Copy the contents of the `inventory/example_virtual/` or
`inventory/example_baremetal/` directory into a new environment directory:

```
cp -r inventory/example_virtual/ inventory/testing/
```

If performing a virtual deployment, modify
`./inventory/testing/openshift-ansible.inventory` to set the virtual host IP
address. If you have local DNS set up, you can also use the virthost's hostname.

For a virtual deployment, we'll set the OpenShift master and minion node IP
addresses after our initial deployment (we haven't built the nodes yet, so we
don't know their IP addresses). You may be able to use DNS hostnames if those
will resolve correctly for you.

For a baremetal deployment, update the OpenShift master and minion node IP
addresses now (or set their locally resolving hostnames).

Next, we need to review the variables in the `group_vars/` directory. There
are three files you may need to modify:

* `all.yml`
* `openshiftnodes.yml`
* `virthosts.yml`

> **NOTE**
>
> For a baremetal deployment, you'll likely only need to modify the
> `openshiftnodes.yml` file.

### `all.yml`

You'll likely not need to do anything to the `all.yml` file, but if you'd like
to pass a different virtual disk device name or change the SSH common args that
Ansible will use, you can do it here.
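
As a rough sketch, an `all.yml` might look like the following. The
`ansible_ssh_common_args` value comes from the shipped baremetal example; the
disk device variable name below is purely illustrative, so check the example
files for the real name:

```
---
# From the shipped example: don't prompt on first SSH connection to new nodes
ansible_ssh_common_args: '-o StrictHostKeyChecking=no'

# Illustrative name only -- the real variable for the virtual disk device may
# be named differently in inventory/example_virtual/group_vars/all.yml
spare_disk_dev: vdb
```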

### `virthosts.yml`

The `virthosts.yml` file describes the virtual machines we're going to create,
the bridge network that will be created on the virtual host, and the virtual
machine parameters. It also contains the source of the virtual machine qcow2
image and the paths on the virthost where the base image and the instantiated
virtual machine images will be stored.

Pay particular attention to the `bridge_network_cidr` (should match your LAN,
to have VMs on the LAN [as opposed to NAT'ed]).
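
A minimal sketch of `virthosts.yml` is below. The `bridge_network_cidr` and
`bridge_networking` names are referenced elsewhere in this README (whether
`bridge_networking` lives in this file is an assumption); everything else
should be copied from `inventory/example_virtual/group_vars/virthosts.yml`
rather than from this sketch:

```
---
# Should match your LAN so the VMs sit on the LAN rather than being NAT'ed
bridge_network_cidr: 192.168.1.0/24
# Set to false to skip the bridge configuration entirely (see the PRO TIP below)
bridge_networking: true

# The VM list and sizing, the source qcow2 image URL, and the virthost paths
# for the base image and per-VM disks also live here -- use the variable names
# from the shipped example file.
```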

### `openshiftnodes.yml`

Primarily, you only need to update the `ansible_ssh_private_key_file` variable,
which contains the path to your private key for accessing the nodes. If you're
not running this from the virthost directly, this will be the key created
during the playbook run, and you'll need to copy it to your Ansible control
host for further connections.
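
For reference, the baremetal example shipped in this repository
(`inventory/example_baremetal/group_vars/openshiftnodes.yml`, also shown later
in this diff) looks like this; for a virtual deployment you'd point
`ansible_ssh_private_key_file` at the key created during the playbook run:

```
ansible_ssh_user: admin
ansible_ssh_private_key_file: /home/username/.ssh/id_rsa
domain_name: management.local
thinpool_volume_group_name: vg_docker
thinpool_logical_volume_name: thinpool
setup_thinpool: false
```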

> **PRO TIP**
>
> If you're running the deployment from a remote control machine that isn't the
> virtual host, then you'll want to add an `ansible_ssh_common_args` line that
> provides a method of creating an SSH tunnel to the nodes via the virtual
> host. You can do this by adding a line like the following:
> ```
> ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p [email protected]"'
> ```

> **NOTE**
>
> If you're performing a baremetal deployment, skip down to the **Baremetal
> Deployment** section.

### Executing the virtual deployment

There are three playbooks you'll need to run to configure the entire setup:

* `virt-host-setup.yml`
* `vm-provision.yml`
* `bootstrap.yml`

You should only need to run the `virt-host-setup.yml` playbook once in order
to get the virtual host set up and ready to deploy our virtual machines.

The `vm-provision.yml` playbook will instantiate the virtual machines, create
storage disks, and attach them to the virtual machines via KVM. If you need to
remove the virtual machines and their storage (say, in the case you want to
destroy and re-instantiate a clean environment), you can run the
`vm-deprovision.yml` playbook, as shown below.
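
For example, to tear down the environment built from the `inventory/testing/`
directory created earlier (substitute whatever environment directory you used):

```
ansible-playbook -i inventory/testing/ vm-deprovision.yml
```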

After all your virtual machines are instantiated, you can then run the
`bootstrap.yml` playbook against the new OpenShift virtual machine nodes.

So now it's time to run the virtual host deployment.

> **PRO TIP**
>
> If your virtual host already has the networking configuration set up the way
> that you want, you can skip the bridge network configuration by setting the
> `bridge_networking` value to `false`.

```
ansible-playbook -i inventory/testing/ virt-host-setup.yml
```

After we've successfully deployed the configuration for the virtual host, we
can instantiate our virtual machines for the OpenShift master and minions.

```
ansible-playbook -i inventory/testing/ vm-provision.yml
```

If all of that goes well, we should be able to bootstrap the nodes and get them
ready for an OpenShift deployment. The bootstrap process will set up Docker and
get the thinpool ready for persistent storage via the `direct-lvm`
configuration instead of the default `loopback` storage.
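
The bootstrap run uses the same command shown in the baremetal section below,
pointed at your environment directory:

```
ansible-playbook -i inventory/testing/ bootstrap.yml
```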

You can now jump down to the end and read the **Ready to go!** section.

## Baremetal Deployment

A baremetal deployment is significantly simpler, since it's assumed you've done
some of the hard work ahead of time. There are a few assumptions to be aware of
prior to running this for a baremetal deployment:

* You've deployed a CentOS 7 operating system to your baremetal nodes
* Your LVM thinpool has been created ahead of time
* You've added a correct `docker-thinpool` file to the LVM configuration
  directory

There are two main sections you'll need in your Kickstart file (or you can
follow along with the Kickstart file and configure your disks the same way
through the graphical interface):

1. The partitioning configuration for the disk
2. The thinpool configuration for LVM

### Partitioning layout

First, we need to create our partitioning layout. OpenShift has some pretty
specific requirements for partition and mount sizes, and they are slightly
different for the OpenShift master and minion.

* Master
  * `/var/` must be 40G
  * `/tmp/` must be 1G
* Minion
  * `/var/` must be 15G
  * `/tmp/` must be 1G

#### Master partitioning layout

Below is a Kickstart file snippet for the partitioning layout. We've done the
following:

* Use disk `sda` and clear the partitioning info and master boot record
* Create `/boot/` at 500MB
* Create swap with the recommended size (usually matches the RAM value)
* Create 2 physical volumes with LVM
  * `pv.01` at a size of 40+8+2 GB (for our `/var/`, `/`, and `/tmp` logical
    volumes), plus 1GB extra to grow `/var/` into
  * `pv.02` at a size of 10GB, growing to finish filling the disk
* Create 2 volume groups against the physical volumes
  * `vg_system` to hold the system logical volumes
  * `vg_docker` to hold our thinpool
* Create logical volumes on `vg_system`
  * `/` has a size of 8GB
  * `/var/` has a size of 40960MB, growing into the extra 1GB to avoid
    boundary issues
  * `/tmp` has a size of 2GB to have plenty of extra space
* Create the thinpool logical volume, unmounted, on `vg_docker`

```
# System bootloader configuration
zerombr
clearpart --drives=sda --all --initlabel
part /boot --fstype ext4 --size=500
part swap --recommended

# create physical volumes
part pv.01 --size=52224 --ondisk=sda
part pv.02 --fstype="lvmpv" --size=10240 --grow --ondisk=sda

# create volume groups
volgroup vg_system pv.01
volgroup vg_docker pv.02

# create logical volumes
logvol / --vgname=vg_system --fstype=ext4 --size=8192 --name=lv_root
logvol /var --vgname=vg_system --fstype=ext4 --size=40960 --grow --name=lv_var
logvol /tmp --vgname=vg_system --fstype=ext4 --size=2048 --name=lv_tmp

logvol none --fstype=none --vgname=vg_docker --thinpool --percent=80 --grow --name=thinpool --metadatasize=1000 --chunksize=512 --profile=docker-thinpool

bootloader --append=" crashkernel=auto" --location=mbr --boot-drive=sda
```

The other snippet we need to add goes in the `%post` section of our Kickstart
file. It creates the `docker-thinpool` LVM profile referenced by the
`--profile` option above, which controls how the thinpool automatically
extends as its data and metadata stores fill up.

```
# Docker LVM thinpool profile
cat << EOF > /etc/lvm/profile/docker-thinpool.profile
activation {
thin_pool_autoextend_threshold=80
thin_pool_autoextend_percent=20
}
EOF
```

### Executing the baremetal deployment

With our baremetal nodes configured and the partitioning all dealt with, we can
simply execute the `bootstrap.yml` playbook against our nodes.

```
ansible-playbook -i inventory/testing/ bootstrap.yml
```

## Ready to go!

At this point all your nodes and storage configuration should be ready to go.
You can then move on to executing an OpenShift deployment against your freshly
bootstrapped environment.

More information about deploying OpenShift can be found on Doug's blog at
http://dougbtv.com//nfvpe/2017/07/18/openshift-ansible-lab-byo/
2 changes: 2 additions & 0 deletions inventory/example_baremetal/group_vars/all.yml
@@ -0,0 +1,2 @@
---
ansible_ssh_common_args: '-o StrictHostKeyChecking=no'
6 changes: 6 additions & 0 deletions inventory/example_baremetal/group_vars/openshiftnodes.yml
@@ -0,0 +1,6 @@
ansible_ssh_user: admin
ansible_ssh_private_key_file: /home/username/.ssh/id_rsa
domain_name: management.local
thinpool_volume_group_name: vg_docker
thinpool_logical_volume_name: thinpool
setup_thinpool: false
15 changes: 15 additions & 0 deletions inventory/example_baremetal/openshift-ansible.inventory
@@ -0,0 +1,15 @@
# Setup this host first, and put the IP here.
virt_host ansible_host=virthost.management.local ansible_ssh_user=root

# After running the virt-host-setup, then change these to match.
gryffindor ansible_host=gryffindor.management.local
slytherin ansible_host=slytherin.management.local
ravenclaw ansible_host=ravenclaw.management.local

[virthosts]
virt_host

[openshiftnodes]
gryffindor
slytherin
ravenclaw
File renamed without changes.
File renamed without changes.
