---
title: Common Operations of DM-Ansible
summary: This document introduces the common operations when you administer a DM cluster using DM-Ansible.
category: tools
---

# Common Operations of DM-Ansible

This document introduces the common operations when you administer a DM cluster using DM-Ansible.
## Start a cluster

Run the following command to start all the components (including DM-master, DM-worker, and the monitoring component) of the whole DM cluster:

```bash
$ ansible-playbook start.yml
```

## Stop a cluster

Run the following command to stop all the components (including DM-master, DM-worker, and the monitoring component) of the whole DM cluster:

```bash
$ ansible-playbook stop.yml
```
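Both playbooks accept the same `--tags` and `-l` (limit) options that the later sections of this document use, so you can also start or stop a single component or instance. For example, assuming a DM-worker instance aliased `dm_worker1` exists in your `inventory.ini`:

```bash
# Start only the DM-worker components across the cluster:
$ ansible-playbook start.yml --tags=dm-worker

# Stop only the dm_worker1 instance:
$ ansible-playbook stop.yml --tags=dm-worker -l dm_worker1
```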
## Upgrade the component version

1. Download the DM binary file.

    1. Delete the existing file in the `downloads` directory.

        ```bash
        $ cd /home/tidb/dm-ansible
        $ rm -rf downloads
        ```

    2. Use the playbook to download the latest DM binary file and automatically replace the existing binary in the `/home/tidb/dm-ansible/resource/bin/` directory with it.

        ```bash
        $ ansible-playbook local_prepare.yml
        ```

2. Use Ansible to perform the rolling update.

    - Perform a rolling update on the DM-worker instance:

        ```bash
        $ ansible-playbook rolling_update.yml --tags=dm-worker
        ```

    - Perform a rolling update on the DM-master instance:

        ```bash
        $ ansible-playbook rolling_update.yml --tags=dm-master
        ```

    - Upgrade dmctl:

        ```bash
        $ ansible-playbook rolling_update.yml --tags=dmctl
        ```

    - Perform a rolling update on DM-worker, DM-master, and dmctl all at once:

        ```bash
        $ ansible-playbook rolling_update.yml
        ```
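After the rolling update finishes, you may want to confirm which version is now deployed. A minimal check on the Control Machine, assuming the DM binaries follow the usual TiDB convention of printing their version with the `-V` flag (verify this flag against your DM release):

```bash
# Print the version of the freshly downloaded binaries:
$ /home/tidb/dm-ansible/resource/bin/dm-master -V
$ /home/tidb/dm-ansible/resource/bin/dm-worker -V
```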
## Add a DM-worker instance

Assuming that you want to add a DM-worker instance on the `172.16.10.74` machine and the alias of the instance is `dm_worker3`, perform the following steps:

1. Configure the SSH mutual trust and sudo rules on the Control Machine.

    1. Refer to Configure the SSH mutual trust and sudo rules on the Control Machine, log in to the Control Machine using the `tidb` user account, and add `172.16.10.74` to the `[servers]` section of the `hosts.ini` file.

        ```bash
        $ cd /home/tidb/dm-ansible
        $ vi hosts.ini
        [servers]
        172.16.10.74

        [all:vars]
        username = tidb
        ```

    2. Run the following command and enter the `root` user password for deploying `172.16.10.74` according to the prompt.

        ```bash
        $ ansible-playbook -i hosts.ini create_users.yml -u root -k
        ```

        This step creates a `tidb` user on the `172.16.10.74` machine, and configures the sudo rules and the SSH mutual trust between the Control Machine and the `172.16.10.74` machine.

2. Edit the `inventory.ini` file and add the new DM-worker instance `dm_worker3`.

    ```ini
    [dm_worker_servers]
    dm_worker1 ansible_host=172.16.10.72 server_id=101 mysql_host=172.16.10.81 mysql_user=root mysql_password='VjX8cEeTX+qcvZ3bPaO4h0C80pe/1aU=' mysql_port=3306
    dm_worker2 ansible_host=172.16.10.73 server_id=102 mysql_host=172.16.10.82 mysql_user=root mysql_password='VjX8cEeTX+qcvZ3bPaO4h0C80pe/1aU=' mysql_port=3306
    dm_worker3 ansible_host=172.16.10.74 server_id=103 mysql_host=172.16.10.83 mysql_user=root mysql_password='VjX8cEeTX+qcvZ3bPaO4h0C80pe/1aU=' mysql_port=3306
    ```

3. Deploy the new DM-worker instance.

    ```bash
    $ ansible-playbook deploy.yml --tags=dm-worker -l dm_worker3
    ```

4. Start the new DM-worker instance.

    ```bash
    $ ansible-playbook start.yml --tags=dm-worker -l dm_worker3
    ```

5. Configure and restart the DM-master service.

    ```bash
    $ ansible-playbook rolling_update.yml --tags=dm-master
    ```

6. Configure and restart the Prometheus service.

    ```bash
    $ ansible-playbook rolling_update_monitor.yml --tags=prometheus
    ```
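To confirm that `dm_worker3` has joined the cluster, you can list the workers through dmctl. A sketch, assuming the DM-master listens on its default port `8261` at `172.16.10.71` and that the `dmctl` binary is available on the Control Machine (the address here is illustrative):

```bash
# Connect dmctl to the DM-master:
$ dmctl --master-addr 172.16.10.71:8261

# In the interactive prompt, query the status of all DM-worker instances:
» query-status
```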
## Remove a DM-worker instance

Assuming that you want to remove the `dm_worker3` instance, perform the following steps:

1. Stop the DM-worker instance that you need to remove.

    ```bash
    $ ansible-playbook stop.yml --tags=dm-worker -l dm_worker3
    ```

2. Edit the `inventory.ini` file and comment out or delete the line where the `dm_worker3` instance exists.

    ```ini
    [dm_worker_servers]
    dm_worker1 ansible_host=172.16.10.72 server_id=101 mysql_host=172.16.10.81 mysql_user=root mysql_password='VjX8cEeTX+qcvZ3bPaO4h0C80pe/1aU=' mysql_port=3306
    dm_worker2 ansible_host=172.16.10.73 server_id=102 mysql_host=172.16.10.82 mysql_user=root mysql_password='VjX8cEeTX+qcvZ3bPaO4h0C80pe/1aU=' mysql_port=3306
    # dm_worker3 ansible_host=172.16.10.74 server_id=103 mysql_host=172.16.10.83 mysql_user=root mysql_password='VjX8cEeTX+qcvZ3bPaO4h0C80pe/1aU=' mysql_port=3306 # Comment out or delete this line
    ```

3. Configure and restart the DM-master service.

    ```bash
    $ ansible-playbook rolling_update.yml --tags=dm-master
    ```

4. Configure and restart the Prometheus service.

    ```bash
    $ ansible-playbook rolling_update_monitor.yml --tags=prometheus
    ```
## Replace or migrate a DM-master instance

Assuming that the `172.16.10.71` machine needs to be maintained or breaks down, and you need to migrate the DM-master instance from `172.16.10.71` to `172.16.10.80`, perform the following steps:

1. Configure the SSH mutual trust and sudo rules on the Control Machine.

    1. Refer to Configure the SSH mutual trust and sudo rules on the Control Machine, log in to the Control Machine using the `tidb` user account, and add `172.16.10.80` to the `[servers]` section of the `hosts.ini` file.

        ```bash
        $ cd /home/tidb/dm-ansible
        $ vi hosts.ini
        [servers]
        172.16.10.80

        [all:vars]
        username = tidb
        ```

    2. Run the following command and enter the `root` user password for deploying `172.16.10.80` according to the prompt.

        ```bash
        $ ansible-playbook -i hosts.ini create_users.yml -u root -k
        ```

        This step creates the `tidb` user account on `172.16.10.80`, and configures the sudo rules and the SSH mutual trust between the Control Machine and the `172.16.10.80` machine.

2. Stop the DM-master instance that you need to replace.

    > **Note:** If the `172.16.10.71` machine breaks down and you cannot log in via SSH, ignore this step.

    ```bash
    $ ansible-playbook stop.yml --tags=dm-master
    ```

3. Edit the `inventory.ini` file, comment out or delete the line where the DM-master instance that you want to replace exists, and add the information of the new DM-master instance.

    ```ini
    [dm_master_servers]
    # dm_master ansible_host=172.16.10.71
    dm_master ansible_host=172.16.10.80
    ```

4. Deploy the new DM-master instance.

    ```bash
    $ ansible-playbook deploy.yml --tags=dm-master
    ```

5. Start the new DM-master instance.

    ```bash
    $ ansible-playbook start.yml --tags=dm-master
    ```

6. Update the dmctl configuration file.

    ```bash
    $ ansible-playbook rolling_update.yml --tags=dmctl
    ```
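Because the DM-master address has changed, it is worth confirming that dmctl can reach the new instance. A sketch, assuming the default DM-master port `8261`:

```bash
# Connect to the migrated DM-master on the new machine:
$ dmctl --master-addr 172.16.10.80:8261
» query-status
```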
## Replace or migrate a DM-worker instance

Assuming that the `172.16.10.72` machine needs to be maintained or breaks down, and you need to migrate `dm_worker1` from `172.16.10.72` to `172.16.10.75`, perform the following steps:

1. Configure the SSH mutual trust and sudo rules on the Control Machine.

    1. Refer to Configure the SSH mutual trust and sudo rules on the Control Machine, log in to the Control Machine using the `tidb` user account, and add `172.16.10.75` to the `[servers]` section of the `hosts.ini` file.

        ```bash
        $ cd /home/tidb/dm-ansible
        $ vi hosts.ini
        [servers]
        172.16.10.75

        [all:vars]
        username = tidb
        ```

    2. Run the following command and enter the `root` user password for deploying `172.16.10.75` according to the prompt.

        ```bash
        $ ansible-playbook -i hosts.ini create_users.yml -u root -k
        ```

        This step creates the `tidb` user account on `172.16.10.75`, and configures the sudo rules and the SSH mutual trust between the Control Machine and the `172.16.10.75` machine.

2. Stop the DM-worker instance that you need to replace.

    > **Note:** If the `172.16.10.72` machine breaks down and you cannot log in via SSH, ignore this step.

    ```bash
    $ ansible-playbook stop.yml --tags=dm-worker -l dm_worker1
    ```

3. Edit the `inventory.ini` file, comment out or delete the line of the `dm_worker1` instance on `172.16.10.72` that you want to replace, and add the information of the new `dm_worker1` instance on `172.16.10.75`.

    ```ini
    [dm_worker_servers]
    dm_worker1 ansible_host=172.16.10.75 server_id=101 mysql_host=172.16.10.81 mysql_user=root mysql_password='VjX8cEeTX+qcvZ3bPaO4h0C80pe/1aU=' mysql_port=3306
    # dm_worker1 ansible_host=172.16.10.72 server_id=101 mysql_host=172.16.10.81 mysql_user=root mysql_password='VjX8cEeTX+qcvZ3bPaO4h0C80pe/1aU=' mysql_port=3306
    dm_worker2 ansible_host=172.16.10.73 server_id=102 mysql_host=172.16.10.82 mysql_user=root mysql_password='VjX8cEeTX+qcvZ3bPaO4h0C80pe/1aU=' mysql_port=3306
    ```

4. Deploy the new DM-worker instance.

    ```bash
    $ ansible-playbook deploy.yml --tags=dm-worker -l dm_worker1
    ```

5. Start the new DM-worker instance.

    ```bash
    $ ansible-playbook start.yml --tags=dm-worker -l dm_worker1
    ```

6. Configure and restart the DM-master service.

    ```bash
    $ ansible-playbook rolling_update.yml --tags=dm-master
    ```

7. Configure and restart the Prometheus service.

    ```bash
    $ ansible-playbook rolling_update_monitor.yml --tags=prometheus
    ```
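Finally, you can verify that the replacement worker is serving from the new machine. A sketch, assuming your DM-master is at `172.16.10.71:8261` and the migrated worker uses the default DM-worker port `8262` (both addresses are illustrative):

```bash
$ dmctl --master-addr 172.16.10.71:8261

# The migrated worker should now report itself from the new address,
# so look for 172.16.10.75:8262 in the output:
» query-status
```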