Use Ansible to manage a cluster


Ansible

Installation

  1. Install Ansible
sudo apt update
sudo apt install software-properties-common
sudo add-apt-repository --yes --update ppa:ansible/ansible
sudo apt install ansible python3-pip -y
  2. Install extra Ansible collections
ansible-galaxy collection install community.general
ansible-galaxy collection install ansible.posix
  3. Install paramiko as the SSH connection plugin (for SSH password authentication with hosts)
pip3 install paramiko
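
To verify the installation, we can check the Ansible version and list the installed collections (paths and version numbers will vary from system to system):

ansible --version
ansible-galaxy collection list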

Building the Ansible inventory

We can edit the /etc/ansible/hosts file to add and group the hosts we want Ansible to manage. In this example, we have used MAAS to register our machines with their hostnames, so the Ansible client can resolve each host’s IP address through MAAS’s DNS, and every node in the cluster can do the same.

# append 4 hosts (by hostname) to the Ansible inventory and group them under [cluster]
sudo tee -a /etc/ansible/hosts > /dev/null << EOT
[cluster]
red
yellow
black
green
[other_host_example]
192.168.1.50 ansible_user=pi
EOT
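
Before going further, we can check that the Ansible client resolves and reaches every node in the cluster group. No public key is installed on the hosts yet, so this sketch uses paramiko with password authentication and assumes the pi user on the hosts, as in the playbooks below:

ansible cluster -u pi -c paramiko -k -m ping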

Use an Ansible playbook to add SSH authorized keys to hosts

  1. Create a playbook file
# playbooks/add_ssh_key_to_cluster.yml
---
- name: add SSH authorized key
  hosts: cluster # all hosts in the cluster group will be affected
  remote_user: pi # the SSH user name on our hosts
  tasks:
  - name: Set authorized key taken from file
    ansible.posix.authorized_key: # the authorized_key module from the ansible.posix collection
      user: pi
      state: present
      key: "{{ lookup('file', '/home/ubuntu/.ssh/id_rsa.pub') }}"

# To import the SSH keys of a specific GitHub user instead:
#      key: https://github.com/movsun.keys
  2. Run the playbook with paramiko as the SSH connection, and add the ‘-k’ option to use SSH password authentication instead of a public key.
ansible-playbook -c paramiko -k playbooks/add_ssh_key_to_cluster.yml

Once the Ansible client machine’s public key has been added to the hosts, subsequent ansible-playbook runs no longer need the paramiko connection or the -k option to type the SSH password.
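
For example, an ad-hoc ping over the default SSH connection should now succeed without prompting for a password (again assuming the pi user on the hosts):

ansible cluster -u pi -m ping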

Use an Ansible playbook to power off the cluster

  1. Write the playbook file
# playbooks/power_off_the_cluster.yml
---
- name: Power off cluster
  hosts: cluster # all hosts in the cluster group will be affected
  remote_user: pi # the SSH user name on our hosts
  become: true # switch to the root user to execute the task
  tasks:
  - name: Delay shutting down the remote node
    community.general.shutdown:
      delay: 5
  2. Run the playbook
ansible-playbook playbooks/power_off_the_cluster.yml

If the remote user has to enter a password to run the sudo command, we can add ‘-K’ (--ask-become-pass) to be prompted for that password.

ansible-playbook playbooks/power_off_the_cluster.yml -K
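
The same shutdown can also be triggered ad hoc, without a playbook; a sketch using the same community.general.shutdown module:

ansible cluster -u pi -b -K -m community.general.shutdown -a "delay=5"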

Kubernetes

Remove k3s from master and worker nodes

# clean up k3s
---
- name: clean up k3s installation
  hosts: red # the master node
  remote_user: pi
  tasks:
  - name: stop service
    command: /usr/local/bin/k3s-killall.sh
    ignore_unreachable: yes
  - name: uninstall master node
    command: /usr/local/bin/k3s-uninstall.sh
    ignore_unreachable: yes
- name: clean up k3s installation
  hosts: yellow,black,green # the worker nodes
  remote_user: pi
  tasks:
  - name: stop service
    command: /usr/local/bin/k3s-killall.sh
    ignore_unreachable: yes
  - name: Uninstall worker node
    command: /usr/local/bin/k3s-agent-uninstall.sh
    ignore_unreachable: yes
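
Assuming the playbook above is saved as playbooks/clean_up_k3s.yml (the file name is our choice), we run it like the other playbooks:

ansible-playbook playbooks/clean_up_k3s.yml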