# The Components and Terminology of Simplenetes
This section provides an architectural view of Simplenetes.
The goal for Simplenetes is to be composed of as few moving parts as possible, while having a robust and easy to understand design. You will not find any usage of `iptables` nor any `etcd` cluster.
The parts are:

- Pods. Including:
    - Ingress Pod
    - Proxy Pod
    - Let's Encrypt Pod
- The Pod compiler (`podc`)
- Hosts
- Clusters and Cluster Projects (as Git repositories)
- The Proxy and clusterPorts
- The Daemon (`simplenetesd`)
- The `sns` management tool
## Pods
A Simplenetes Pod is the same as a Kubernetes Pod in the sense that it is defined as a set of one or more containers which are managed together and share the same network.
A Simplenetes Pod is described in a single, simple YAML file named pod.yaml. It is compiled into a standalone shell script which uses `podman` as its container runtime.
The standalone shell script, named `pod`, can be run as is, either locally or managed by the Simplenetes Daemon (`simplenetesd`) on a host.
The `pod` shell script uses `podman` to run containers. Podman is made for running containers as an unprivileged (rootless) user and is compatible with Docker.
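For instance, once a pod.yaml has been compiled, the resulting script can be managed directly from the shell. The exact sub-commands are defined by the Pod API (see PODSPEC.md); the ones below are illustrative:

```sh
# Run the pod (creates and starts its containers).
./pod run

# Query the pod's current status.
./pod status

# Tail logs from the pod's containers.
./pod logs

# Stop and remove the pod's containers.
./pod stop
./pod rm
```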
### Special Pods
There are three special Pods in Simplenetes. Most often they are used in Clusters, but they do not have to be. The Pods are:
- Ingress Pod: routes and TLS-terminates inbound traffic coming from the Internet (using HAProxy);
- Proxy Pod: an internal traffic router allowing Pods to talk to other Pods, on other hosts or on the same host, within the internal network;
- Let's Encrypt Pod: uses Let's Encrypt to renew TLS certificates for all domains and makes them available to the Ingress Pod.
Actually, Simplenetes Pods do not have to be containers at all. A Simplenetes Pod is an executable named `pod` which conforms to the Simplenetes Pod API.
For more technical details, check out the Pod API section in https://github.com/simplenetes-io/podc/blob/master/PODSPEC.md.
The Proxy Pod mentioned above does not run any containers. It runs directly on the Host as a native application, but it is managed just like any other Pod. However, you don't need to worry about that.
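To make that concrete, here is a minimal sketch of a non-container `pod` executable. It is not the actual Proxy implementation, and the sub-commands and state values are assumptions based on this section; consult PODSPEC.md for the authoritative API:

```sh
#!/usr/bin/env sh
# Hypothetical minimal pod executable managing a single native process
# instead of containers, sketching the Pod API described above.

PIDFILE=./myapp.pid

case "$1" in
    run)
        # Start the native application in the background.
        ./myapp & echo $! > "$PIDFILE"
        ;;
    stop)
        # Stop the application if it is running.
        [ -f "$PIDFILE" ] && kill "$(cat "$PIDFILE")" && rm -f "$PIDFILE"
        ;;
    status)
        if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
            echo "running"
        else
            echo "stopped"
        fi
        ;;
    *)
        echo "Usage: $0 run|stop|status" >&2
        exit 1
        ;;
esac
```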
## The Pod compiler (`podc`)
The Pod compiler is a separate project which compiles pod.yaml files into standalone `pod` executables that use `podman` as the container runtime.
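As a rough illustration, a pod.yaml could look something like the sketch below. The field names here are assumptions based on this section's terminology; refer to PODSPEC.md for the real schema:

```yaml
# Hypothetical pod.yaml sketch; see PODSPEC.md for the authoritative format.
api: 1.0.0
podVersion: 0.0.1
runtime: podman
containers:
    - name: webserver
      image: nginx:1.21-alpine
      expose:
          - targetPort: 80
            hostPort: 8080
```

Running `podc` in the directory containing pod.yaml would then produce the standalone `pod` shell script next to it.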
## Other Pod types
For example, the Simplenetes Proxy is treated as a regular Pod, but it is not containerized because it accesses .conf files placed by the daemon in the host's root directory. The Simplenetes Daemon, however, does not know the difference.
As long as the Proxy is provided in the form of a `pod` executable which conforms to the Pod API, the Daemon can manage the Pod's lifecycle.
## Hosts
A Host is a Virtual Machine, a bare metal machine, or your laptop. A Host is part of a Cluster and runs Pods.
Hosts are, in Simplenetes terminology, divided into load balancers and workers. Load balancers are exposed to the public Internet, while workers are not. Workers receive traffic from the load balancers via the internal proxy.
A Host must have `podman` configured if it is to run container Pods (`sns` will set this up for you).
If a Host is meant to receive public Internet traffic directly, then it is likely going to be running an Ingress Pod.
Any Pod is allowed to bind to the Host network or map ports to the Host interface (the one expected to be publicly exposed), as long as the firewall rules allow that traffic.
Worker Hosts are usually not exposed directly to the Internet; they receive traffic from the internal proxy, which forwards it to Pods running on the Host according to the Pod ingress rules.
When Simplenetes connects to a Host, it reads the host.env file and uses that information to establish a secure connection with the Host (using SSH).
A Host can declare a JUMPHOST in its host.env file, telling the SSH client that a connection to that particular host must be established first before connecting to the actual Host. This is the recommended way of doing it, so that worker Hosts are not exposed to incoming traffic from the public Internet at all.
If the host.env file has HOST=local set, then Simplenetes does not connect via SSH; it "connects" directly to local disk. Using local disk as the host target is great for local development. In that case, a host representation is created on disk using the `sns host register` command.
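As an illustration, a host.env for a worker behind the backdoor could look like the sketch below. Only HOST and JUMPHOST are described in this section; real files may carry further settings (SSH user, key file, port):

```sh
# Illustrative host.env for a worker Host reached via a jump host.
HOST=10.0.0.12          # internal address of the Host; use HOST=local for local dev
JUMPHOST=203.0.113.10   # "backdoor" Host to jump through when connecting via SSH
```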
## Clusters and Cluster Projects
A Cluster is one or many Hosts on the same VLAN. A Cluster can be as simple as your laptop running `sns`.
Typically, a Cluster is one or two load balancer Hosts exposed to the Internet on ports 80 and 443, together with a couple of worker Hosts where Pods are running.
A Cluster is mirrored as a Git repository on the operator's local disk (or in a CI/CD system). In the context of Simplenetes, that repo is referred to as a Cluster Project.
Mirroring the Cluster as a directory structure is a design choice. We believe this lowers the mental burden of grasping the system as a whole. It also gives understandable, traceable GitOps procedures, since you can inspect the full cluster layout right from the Git repo.
A Cluster Project is a Git repository which mirrors the full Cluster with all its Hosts. Hosts are organized as subdirectories in the repo. Each Host is identified by having a host.env file inside of it.
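For illustration, a Cluster Project could be laid out roughly as below. Only the host.env file is named in this section; the other file and directory names are assumptions:

```
cluster-project/
├── loadbalancer1/
│   ├── host.env
│   └── pods/
│       └── ingress/
├── worker1/
│   ├── host.env
│   └── pods/
│       └── mypod/
└── worker2/
    ├── host.env
    └── pods/
        └── mypod/
```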
A Cluster is managed by the `sns` tool.
When syncing a Cluster Project to a Cluster, Simplenetes connects to each Host (in parallel) and updates the state of files on the Host by copying, modifying, and deleting files as necessary, so that at the end of the process the Host mirrors the contents of the Cluster Project.
Following GitOps procedures, the sync will not be allowed if the Cluster repo branch which we are syncing from is behind the Cluster itself, unless forced, such as when major rollbacks are deemed necessary.
The Daemon running on each Host will pick up the changes and manage the state changes of the Pods.
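A typical GitOps flow could then look like the following; `sns cluster sync` is illustrative here, so check the `sns` help output for the exact command:

```sh
# Commit the changes to the Cluster Project...
git add .
git commit -m "Scale up mypod on worker1"

# ...then sync the Cluster Project out to all Hosts in parallel.
sns cluster sync
```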
### Typical Cluster setup
A common example setup is composed of a VPC with two load balancer Hosts (exposed to the Internet and open on ports 80 and 443) combined with two worker Hosts, which only accept traffic coming from within the VLAN. Finally, a fifth Host, which we call the "backdoor", is exposed to the Internet on port 22 for handling SSH connections. All SSH connections made to any load balancer or worker Host are always jumped via the backdoor Host. This reduces the attack surface, since none of the known IP addresses accept SSH connections from the Internet.
Setting up the Cluster with its Virtual Machines is described in more detail in the Provisioning a production cluster section.
## Proxy and clusterPorts
For Pods to be able to communicate with each other within the Cluster and across Hosts, there is the concept of clusterPorts and the Simplenetes Proxy.
A Pod which is open for traffic via the Proxy declares a clusterPort in the expose section of its pod.yaml file. A clusterPort is then targeted at a specific port inside the Pod. Other Pods can open connections to that clusterPort from anywhere in the Cluster and be proxied to any Pod exposing that clusterPort.
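In pod.yaml terms, such an expose section could look like the sketch below; the port numbers are arbitrary examples, and the schema should be verified against PODSPEC.md:

```yaml
expose:
      # Connections to clusterPort 2020 from anywhere in the Cluster
      # are proxied to port 8080 inside this Pod.
    - targetPort: 8080
      clusterPort: 2020
```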
Note 1: there can be multiple replicas of a specific Pod version listening on the same clusterPort, which means traffic will be shared among them.

Note 2: when softly rolling out a new Pod version, the new version can share the same clusterPort to guarantee that no downtime occurs. That works because traffic is shared between old and new Pod versions until the old version(s) are removed.

Note 3: when setting clusterPorts manually, one can force totally different Pods to share the incoming traffic by having them use the same cluster port setting. Normally, cluster ports are automatically provided by `sns`.

Note 4: Pods which receive traffic from the Ingress by matching domain names and URLs can operate with automatically assigned cluster ports.

Note 5: Pods which are to serve internal traffic are required to set fixed cluster ports, so that other Pods know how to connect to them.
When a process inside a container wants to connect internally to another Pod, it does so by opening a TCP socket to proxy:clusterPort. `proxy` is expected to be a static host name automatically put into each container's hosts file (/etc/hosts), pointing to the host's internal IP address.
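For example, a Pod exposing clusterPort 2020 (the hypothetical port from the sketch above) could be reached from inside any container in the Cluster like this:

```sh
# Inside a container: "proxy" resolves via /etc/hosts to the host's
# internal IP, where the Proxy listens on the clusterPort.
curl http://proxy:2020/
```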
The native Proxy Pod listens on a set of clusterPorts, and its job is to proxy connections to another Proxy on a Host in the Cluster, which can then forward them to a Pod running on that Host.
The Proxy is very robust and simple in its cleverness, and it requires very little configuration to work. It needs an updated list of Host addresses in the Cluster, as well as a proxy.conf file generated by the Daemon, telling it which clusterPorts are bound on the Host. When proxying a connection, the Proxy will then try all hosts for answering connections and remember the results for a while. This gives a robust and easy to manage system which is free from `iptables` hacks and from constantly having to update global routing tables as Pods come and go in the cluster.
A Pod can also bind directly to a specific hostPort on the host it is running on. This is particularly useful for the Ingress Pod.
Note 6: each clusterPort on a Pod is automatically mapped to a hostPort. It is this hostPort the Proxy connects to.

Note 7: clusterPorts are a set of TCP ports listened on by all proxies across all hosts. Connecting to a clusterPort (on a Proxy) results in a connection from that Proxy to another Proxy and then to a mapped hostPort, which in turn connects to the targetPort inside the Pod.
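Putting Note 7 together, the hop chain for an internal connection looks roughly like this:

```
process in Pod A (Host 1)
    └─► proxy:clusterPort        (Proxy on Host 1)
            └─► Proxy on Host 2
                    └─► hostPort on Host 2
                            └─► targetPort inside Pod B
```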
### Ports range
Cluster ports are often automatically assigned, but they can be manually assigned in the range 1024-65535. Ports 30000-32767 are reserved for Host ports and the Proxy itself (which claims port 32767).

Auto-assigned cluster ports are set in the range 61000-63999.

Auto-assigned Host ports are set in the range 30000-31999 (the full range of dedicated Host ports is 30000-32766).
## Daemon
The Simplenetes Daemon manages the lifecycle of all the Pods on the Hosts, regardless of their runtime type (be it `podman` or native executables).
It reads .state files alongside the `pod` file and executes the `pod` file with arguments reflecting the desired state of the Pod.
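In other words, the Daemon's core loop amounts to something like the sketch below. The file name and state values are assumptions used purely for illustration:

```sh
# Hypothetical sketch of the Daemon's reconciliation step for one Pod.
desired=$(cat pod.state)    # desired state, e.g. "running" (assumed value)
current=$(./pod status)     # ask the pod executable for its current state

# Invoke the pod executable to move it toward the desired state.
if [ "$desired" = "running" ] && [ "$current" != "running" ]; then
    ./pod run
elif [ "$desired" = "stopped" ] && [ "$current" = "running" ]; then
    ./pod stop
fi
```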
The Simplenetes Daemon is installed and runs on each Host in a Cluster as a systemd service.
The Simplenetes Daemon is preferably installed with root privileges so that it can create ramdisks for the Pods which require that, but it drops privileges when it interacts with any Pod script or executable.
The Daemon can be run as a foreground process in user mode instead of as root, which is generally useful when running in development mode, for a single user, straight on the laptop.
## The sns management tool
The simplenetes repo provides the `sns` tool, used to create and manage clusters.
The next section covers the installation of the Simplenetes `sns` program, as well as auxiliary programs.