Commit 8a4a0afa authored by kaiyou

Rename hosto to hepto

parent ec9838b9
-This playbook is meant to deploy hosto pods, i.e. podman pods that embed wesher and k3s.
+This playbook is meant to deploy hepto pods, i.e. podman pods that embed wesher and k3s.
It can be used either to deploy an entire cluster, or to add nodes to a cluster that you do not own entirely (especially if you do not own the master).
-# Hosto pods?
+# Hepto pods?
-A hosto *pod* is the base unit of a Hosto cluster. It is based on podman and systemd and made of two containers: a wesher instance that joins an encrypted cluster network, and k3s, the container orchestrator.
+A hepto *pod* is the base unit of a hepto cluster. It is based on podman and systemd and made of two containers: a wesher instance that joins an encrypted cluster network, and k3s, the container orchestrator.
-A Hosto cluster is made of multiple hosto *pods*, one of which is the master. Any physical host (or virtual machine for that matter) can host any number of hosto *pods*, that may belong to various hosto clusters.
+A hepto cluster is made of multiple hepto *pods*, one of which is the master. Any physical host (or virtual machine for that matter) can host any number of hepto *pods*, that may belong to various hepto clusters.
# Requirements
-Each hosto *pod* must have its own IPv4 or IPv6, independent of the physical host's own addresses. If your physical host is hosted by a provider that only grants you a single IPv4 and IPv6, unfortunately you cannot run hosto at this point.
+Each hepto *pod* must have its own IPv4 or IPv6, independent of the physical host's own addresses. If your physical host is hosted by a provider that only grants you a single IPv4 and IPv6, unfortunately you cannot run hepto at this point.
On each host, python must be installed for ansible to run properly:
@@ -20,7 +20,7 @@ sudo apt install python
# Create an inventory file
-Create a `hosts.ini` file that contains details about your hosto *pods*. For each hosto *pod*, you must specify:
+Create a `hosts.ini` file that contains details about your hepto *pods*. For each hepto *pod*, you must specify:
- a name, which must be unique across all clusters that run on your physical hosts;
- the physical host it is meant to run on;
- the network interface it will use to contact other pods;
@@ -28,7 +28,7 @@ Create a `hosts.ini` file that contains details about your hosto *pods*. For eac
You must also specify general variables, including:
- the pod name of the master;
-- the public address of the master for other hosto *pods* to join;
+- the public address of the master for other hepto *pods* to join;
- a 32-byte overlay key, used to secure network communications inside the cluster;
- a 32-byte cluster key, used to restrict access to the master.
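As a purely hypothetical sketch of what such an inventory might look like (every variable name below is an illustrative assumption, not necessarily what this playbook actually reads):

```ini
; Hypothetical hosts.ini sketch -- variable names are illustrative
; assumptions, not necessarily the ones the playbook expects
[pods]
pod1 ansible_host=host-a.example.com pod_iface=eth0
pod2 ansible_host=host-b.example.com pod_iface=eth0

[pods:vars]
; pod name of the master and its public address
master=pod1
master_address=203.0.113.10
; 32-byte keys, generated randomly
overlay_key=CHANGE_ME
cluster_key=CHANGE_ME
```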
@@ -52,7 +52,7 @@ The key should be generated randomly. You should be able to change it afterwards
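For illustration, one way to generate a random 32-byte key is with openssl; the base64 encoding here is an assumption, so verify what format wesher and k3s actually expect:

```shell
# Generate 32 random bytes, base64-encoded (44 characters).
# Assumed suitable for the overlay/cluster keys -- check the wesher docs.
openssl rand -base64 32
```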
# Apply the playbook
-In order to configure hosto *pods* on your physical hosts, simply run the playbook:
+In order to configure hepto *pods* on your physical hosts, simply run the playbook:
```
ansible-playbook -i hosts.ini site.yaml
```
@@ -3,12 +3,12 @@ pod_slug: "{{ inventory_hostname }}"
# standard paths
systemd_dir: /etc/systemd/system
-storage_base: /hosto
+storage_base: /hepto
storage_dir: "{{ storage_base }}/{{ pod_slug }}"
# object names
-pod_name: "hosto-{{ pod_slug }}"
-cni_name: "hosto-{{ pod_slug }}"
+pod_name: "hepto-{{ pod_slug }}"
+cni_name: "hepto-{{ pod_slug }}"
# networking
overlay_net: 10.20.0.0/16
@@ -15,7 +15,7 @@
- "{{ storage_dir }}/var"
- "{{ storage_dir }}/data"
-- name: Install the hosto init file
+- name: Install the hepto init file
template:
src: init.j2
dest: "{{ storage_dir }}/init"
@@ -23,17 +23,17 @@
serole: object_r
setype: bin_t
-- name: Install the hosto cni config
+- name: Install the hepto cni config
template:
src: cni.j2
-dest: "{{ storage_dir }}/etc/hosto.conflist"
+dest: "{{ storage_dir }}/etc/hepto.conflist"
-- name: Install the hosto service file
+- name: Install the hepto service file
template:
src: service.j2
dest: "{{ systemd_dir }}/{{ pod_name }}.service"
-- name: Enable the hosto service
+- name: Enable the hepto service
systemd:
name: "{{ pod_name }}"
daemon_reload: yes
@@ -5,11 +5,6 @@
modprobe wireguard
modprobe br-netfilter
-# First delete and recreate a fresh pod
-# This means that restarting the service will delete and clean any running node process
-{{ podman }} pod rm -f {{ pod_name }}
-{{ podman }} pod create --network {{ cni_name }} --name {{ pod_name }}
# Set up networking; this is required because the libcontainer cni config syntax does not
# allow gateways declared outside the node network, which is however often the case on some
# providers' networks
@@ -38,7 +33,7 @@ modprobe br-netfilter
{{ podman }} container create --pod {{ pod_name }} --name {{ pod_name }}-k3s --privileged --restart always \
-e K3S_CLUSTER_SECRET={{ cluster_key }} \
{% if not master == pod_slug %}
--e K3S_URL=https://hosto-{{ master }}:6443 \
+-e K3S_URL=https://hepto-{{ master }}:6443 \
{% endif %}
-v {{ storage_dir }}/etc:/etc/rancher \
-v {{ storage_dir }}/log:/var/log \
[Unit]
-Description=Hosto {{ pod_slug }} pod
+Description=Hepto {{ pod_slug }} pod
Wants=syslog.service
[Service]
{% set podman = "/usr/bin/podman --cni-config-dir " + storage_dir + "/etc" %}
# for now the service needs to be forking; we could instead rely on multiple services
-Type=forking
+Type=exec
# This is required so that k3s inside the pod can itself mount pod volumes as shared
MountFlags=shared
Restart=always
+ExecStartPre=-{{ podman }} pod create --network {{ cni_name }} --name {{ pod_name }}
ExecStartPre=-{{ storage_dir }}/init
-ExecStart={{ podman }} pod start {{ pod_name }}
-ExecStop={{ podman }} pod rm -f {{ pod_name }}
+ExecStartPre={{ podman }} pod start {{ pod_name }}
+ExecStart={{ podman }} wait {{ pod_name }}-k3s
+ExecStop={{ podman }} pod stop {{ pod_name }}
+ExecStopPost=-{{ podman }} pod rm {{ pod_name }}
[Install]
WantedBy=multi-user.target
@@ -5,5 +5,5 @@
roles:
- { role: wireguard }
- { role: podman }
-- { role: hosto }
+- { role: hepto }