NanoCluster-Basic-Package Application Development

K3s Deployment

1. Introduction

K3s is a lightweight Kubernetes distribution. It is easy to install and needs roughly half the memory of a standard Kubernetes setup, making it well suited to resource-constrained environments such as edge computing and IoT.

2. Deployment Guide

2.1. Prepare the Environment

First, ensure that the network of the cluster is working correctly and that it can access the internet. You can SSH into each machine in the cluster to execute the subsequent installation commands. Make sure that the IP addresses of the master node and worker nodes are fixed, and that they can access each other over the network.
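As a quick sanity check before installing, a small shell loop like the one below can confirm that each node answers ping. The IP addresses here are placeholders; substitute the actual addresses of your master and worker nodes:

```shell
# Placeholder node list -- replace with the real IPs of your cluster nodes.
NODES="192.168.0.240 192.168.0.243 192.168.0.245"
for ip in $NODES; do
  if ping -c 1 -W 1 "$ip" >/dev/null 2>&1; then
    echo "$ip: reachable"
  else
    echo "$ip: UNREACHABLE"
  fi
done
```

Any node reported as unreachable should be fixed (cabling, IP configuration, firewall) before continuing.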

2.2. Install K3s (Master Node)

The installation of K3s is very simple. Just run the following command on the master node:

curl -sfL https://get.k3s.io | sh -

If the download is slow, you can speed up the installation by using the following command:

curl -sfL https://rancher-mirror.rancher.cn/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn sh -

This command will automatically download and install K3s. After the installation is complete, check if the K3s service is running with the following command:

sudo systemctl status k3s

If it shows active (running), K3s has started successfully.

sudo systemctl status k3s
● k3s.service - Lightweight Kubernetes
     Loaded: loaded (/etc/systemd/system/k3s.service; enabled; preset: enabled)
     Active: active (running) since Mon 2025-02-17 12:07:15 CST; 3h 38min ago
       Docs: https://k3s.io
    Process: 8803 ExecStartPre=/bin/sh -xc ! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service 2>/dev/null (code=exited, status=0/SUCCESS)
    Process: 8805 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
    Process: 8808 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
   Main PID: 8810 (k3s-server)
      Tasks: 32
     Memory: 583.2M
        CPU: 29min 49.755s
     CGroup: /system.slice/k3s.service
             ├─8810 "/usr/local/bin/k3s server"
             └─8895 "containerd "

2.3. Install K3s (Worker Nodes)

When installing K3s on the worker nodes, you need to connect them to the master node. Run the following command on the worker node to install K3s:

curl -sfL https://get.k3s.io | K3S_URL=https://<MasterNodeIP>:6443 K3S_TOKEN=<MasterNodeToken> sh -

In the command above, replace <MasterNodeIP> with the master node's IP address, and <MasterNodeToken> with the token obtained from the master node. You can retrieve the token by running the following command on the master node:

sudo cat /var/lib/rancher/k3s/server/node-token
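To avoid copy-paste mistakes, the join command can be assembled from variables first and reviewed before running. The values below are placeholders, not real credentials:

```shell
# Placeholders -- substitute your master node's IP and the token printed by
# `sudo cat /var/lib/rancher/k3s/server/node-token` on the master.
MASTER_IP=192.168.0.240
TOKEN="K10abc...::server:xyz"
JOIN_CMD="curl -sfL https://get.k3s.io | K3S_URL=https://${MASTER_IP}:6443 K3S_TOKEN=${TOKEN} sh -"
echo "$JOIN_CMD"   # review the command, then run it on the worker node
```
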

After installation is complete, verify that the worker node has successfully joined the cluster by running:

sudo kubectl get nodes

If the worker node appears in the list with a status of Ready, it means the worker node has successfully joined the cluster.

sipeed@lpi3h-a2d1:~$ sudo kubectl get nodes
NAME         STATUS   ROLES                  AGE     VERSION
lpi3h-1967   Ready    <none>                 20h   v1.31.5+k3s1
lpi3h-231e   Ready    <none>                 20h   v1.31.5+k3s1
lpi3h-4782   Ready    <none>                 56m   v1.31.5+k3s1
lpi3h-a2d1   Ready    control-plane,master   23h   v1.31.5+k3s1
lpi3h-ba13   Ready    <none>                 19h   v1.31.5+k3s1
lpi3h-c06b   Ready    <none>                 21h   v1.31.5+k3s1

2.4. Deploy an Application

We will create a Deployment manifest and use it to run a sample application on K3s.

nano hello-kubernetes.yaml

The file content is as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.10.1
        env:
        - name: MESSAGE
          value: "Hello Kubernetes"

Then, use this configuration file to start a container:

sudo kubectl apply -f hello-kubernetes.yaml

Check the status of the pods:

sudo kubectl get pods -o wide
NAME                                READY   STATUS    RESTARTS   AGE     IP            NODE   NOMINATED NODE   READINESS GATES
hello-kubernetes-7fbb7f4899-zqs5x   1/1     Running   0          2m39s   10.42.0.114   arch   <none>           <none>

Finally, access the application at http://10.42.0.114:8080 (use the pod IP from your own output). Note that this pod IP is only reachable from machines inside the cluster, such as the nodes themselves.
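To reach the application from another machine on the LAN, one common option is a NodePort Service. The following is an untested sketch; the Service name and the nodePort value are arbitrary choices, and the ports assume the hello-kubernetes container's default of 8080:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
spec:
  type: NodePort
  selector:
    app: hello-kubernetes
  ports:
  - port: 8080        # port the Service exposes inside the cluster
    targetPort: 8080  # port the container listens on
    nodePort: 30080   # port opened on every node (must be in 30000-32767)
```

After applying this file with sudo kubectl apply -f, the app should be reachable at http://<AnyNodeIP>:30080.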


distcc Deployment

1. Introduction

distcc is a distributed C/C++ compilation system that speeds up the compilation process by distributing compilation tasks across multiple machines. It allows you to leverage the computing power of multiple computers to compile code faster, making it especially useful for large codebases or resource-constrained environments.

2. Deployment Guide

2.1. Server

For Debian-based systems, you can install distcc directly using the package manager:

sudo apt install distcc

Start the distcc service:

distccd --daemon --allow 192.168.0.0/24  # Allow specific IP range to access
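To confirm the daemon is accepting connections, you can check whether anything is listening on distcc's default TCP port, 3632. This sketch assumes the iproute2 ss tool is installed:

```shell
# distccd listens on TCP port 3632 by default.
if ss -ltn 2>/dev/null | grep -q ':3632'; then
  status="distccd is listening on port 3632"
else
  status="nothing is listening on port 3632 -- is distccd running?"
fi
echo "$status"
```
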

2.2. Client

sudo apt install distcc distcc-pump

Set the DISTCC_POTENTIAL_HOSTS environment variable, which pump mode uses to discover the available worker nodes. You can add the following to your .bashrc:

export DISTCC_POTENTIAL_HOSTS='localhost 192.168.0.240 192.168.0.243 192.168.0.245 192.168.0.246'
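After editing .bashrc, reload it and sanity-check that the variable contains the hosts you expect. A minimal sketch (a plain word count, assuming the simple space-separated host syntax used above):

```shell
# Reload your shell (or `source ~/.bashrc`), then count the configured hosts.
export DISTCC_POTENTIAL_HOSTS='localhost 192.168.0.240 192.168.0.243 192.168.0.245 192.168.0.246'
nhosts=$(echo "$DISTCC_POTENTIAL_HOSTS" | wc -w)
echo "configured hosts: $nhosts"
```
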

Then, you can try compiling a simple program to check if distcc is correctly distributing the compilation tasks:

distcc-pump distcc -o test test.c
sipeed@lpi3h-a2d1:~/distcc$ distcc-pump distcc -o test test.c
__________Using distcc-pump from /usr/bin
__________Found 4 available distcc servers
__________Shutting down distcc-pump include server

3. Compilation Testing

To test whether distcc effectively accelerates the compilation process, we used OpenSSL for the compilation test. OpenSSL is a widely-used C library with a large codebase, making it a good candidate to test the effectiveness of distributed compilation.

git clone https://github.com/openssl/openssl.git
cd openssl
./config
distcc-pump make -j20 CC=distcc

You can use distccmon-text to check the current distribution of compilation tasks:

sipeed@lpi3h-2193:~$ distccmon-text 
 67535  Compile     cmp_ctx.c                                 192.168.0.240[0]
 67528  Compile     cmp_asn.c                                 192.168.0.240[1]
 67635  Compile     cms_dh.c                                  192.168.0.240[2]
 67569  Compile     cmp_http.c                                192.168.0.243[0]
 67696  Compile     cms_io.c                                  192.168.0.245[0]
 67583  Compile     cmp_server.c                              192.168.0.245[1]
 67561  Compile     cmp_hdr.c                                 192.168.0.245[2]
 67606  Compile     cmp_vfy.c                                 192.168.0.245[3]
 67657  Compile     cms_enc.c                                 192.168.0.246[1]
 67672  Compile     cms_env.c                                 192.168.0.246[2]

3.1. Compilation Performance Comparison

In the testing process, we compiled the OpenSSL project using both single-machine compilation and distributed compilation (5 machines). Below are the results for each method:

Single-Machine Compilation (without distcc)
real    18m11.760s
user    64m37.024s
sys     5m56.326s
Distributed Compilation (using distcc)
real    6m32.262s
user    18m39.468s
sys     4m30.008s

As shown, distributed compilation with distcc cuts the wall-clock build time from about 18 minutes to about 6.5 minutes, a roughly 2.8x speedup, while also spreading the load across machines instead of saturating a single one.
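The speedup can be computed directly from the two real (wall-clock) times above; this rough calculation drops the fractional seconds:

```shell
# `real` times from the test: 18m11s single-machine, 6m32s distributed.
single=$((18 * 60 + 11))    # 1091 seconds
distrib=$((6 * 60 + 32))    # 392 seconds
speedup=$(awk "BEGIN { printf \"%.2f\", $single / $distrib }")
echo "speedup: ${speedup}x"
```
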

Nomad Playbook

1. Introduction

nomad-playbook is an automated deployment script written with Ansible, designed to quickly set up a cluster environment based on HashiCorp Nomad and Consul. This project supports one-click deployment of a single-server Nomad/Consul cluster, using Podman (or optionally Docker) as the container runtime. It is ideal for rapid deployment and testing of portable HomeLab or small edge computing clusters.

