
How to configure a cluster in SKUDONET Community Edition v7


The SKUDONET Cluster Service can be configured as an independent piece of software outside of the SKUDONET CE core package. This new cluster service has been developed to be easily managed and modified by sysadmins, so it can be adapted to the needs of any network architecture.
The following procedure describes how to install and configure the SKUDONET Cluster when high availability is required for your Load Balancer.

Configure our official APT repository as follows:

https://www.skudonet.com/knowledge-base/howtos/configure-apt-repository-skudonet-community-edition/
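
The linked article is the authoritative reference for this step. As an orientation only, the resulting APT source entry should look similar to the following; the exact line is inferred from the repository URL shown in the apt-get output later in this guide:

```
# /etc/apt/sources.list.d/skudonet-ce.list (illustrative; follow the linked article)
deb http://repo.skudonet.com/ce/v7 bookworm main
```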

Install SKUDONET CE cluster package

Once the local package database is updated, search for the cluster package skudonet-ce-cluster as follows:

root@lb1 > apt-cache search skudonet-ce-cluster
skudonet-ce-cluster - Skudonet Load Balancer Community Edition Cluster Service

root@lb1 > apt-cache show skudonet-ce-cluster
Package: skudonet-ce-cluster
Version: 2.0
Maintainer: Skudonet SL <skudonet-ce-users@skudonet.com>
Architecture: amd64
Depends: skudonet (>=7.0.0), liblinux-inotify2-perl, rsync, libpcap0.8-dev
Priority: optional
Section: admin
Filename: pool/main/s/skudonet-ce-cluster/skudonet-ce-cluster_2.0_amd64.deb
Size: 49176
SHA256: 19f12efad613fdcbe789e3ff35cf5bda2e948137f11587bd213e515e5b962c52
SHA1: 97a74c9e9ccfaa5185edd457a431890b7d913356
MD5sum: 64b4b87cf6d8c6288ddb1ad6bd46243f
Description: Skudonet Load Balancer Community Edition Cluster Service
 Cluster service for Skudonet CE, based in ucarp for vrrp implementation and skdinotify for configuration replication. VRRP through UDP is supported in this version.
Description-md5: a23a19cc5a6e72e8d31eaa01f977a454

root@lb1 > apt-get install skudonet-ce-cluster
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
  libdbus-1-dev liblinux-inotify2-perl libpcap0.8-dev libpkgconf3
  pkg-config pkgconf pkgconf-bin rsync sgml-base xml-core
Suggested packages:
  iwatch python3-braceexpand sgml-base-doc debhelper
The following NEW packages will be installed:
  libdbus-1-dev liblinux-inotify2-perl libpcap0.8-dev libpkgconf3
  pkg-config pkgconf pkgconf-bin rsync sgml-base skudonet-ce-cluster
  xml-core
0 upgraded, 11 newly installed, 0 to remove and 46 not upgraded.
Need to get 1151 kB of archives.
After this operation, 3201 kB of additional disk space will be used.
Do you want to continue? [Y/n] 
Get:1 https://ftp.debian.org/debian bookworm/main amd64 rsync amd64 3.2.7-1 [417 kB]
Get:2 https://ftp.debian.org/debian bookworm/main amd64 sgml-base all 1.31 [15.4 kB]
Get:3 https://ftp.debian.org/debian bookworm/main amd64 libpkgconf3 amd64 1.8.1-1 [36.1 kB]
Get:4 https://ftp.debian.org/debian bookworm/main amd64 pkgconf-bin amd64 1.8.1-1 [29.5 kB]
Get:5 https://ftp.debian.org/debian bookworm/main amd64 pkgconf amd64 1.8.1-1 [25.9 kB]
Get:6 https://ftp.debian.org/debian bookworm/main amd64 pkg-config amd64 1.8.1-1 [13.7 kB]
Get:7 https://ftp.debian.org/debian bookworm/main amd64 xml-core all 0.18+nmu1 [23.8 kB]
Get:8 https://ftp.debian.org/debian bookworm/main amd64 libdbus-1-dev amd64 1.14.8-2~deb12u1 [240 kB]
Get:9 https://ftp.debian.org/debian bookworm/main amd64 liblinux-inotify2-perl amd64 1:2.3-2 [19.4 kB]
Get:10 https://ftp.debian.org/debian bookworm/main amd64 libpcap0.8-dev amd64 1.10.3-1 [281 kB]
Get:11 http://repo.skudonet.com/ce/v7 bookworm/main amd64 skudonet-ce-cluster amd64 2.0 [49.2 kB]
Fetched 1151 kB in 1s (1549 kB/s)               
Selecting previously unselected package rsync.
(Reading database ... 56264 files and directories currently installed.)
Preparing to unpack .../00-rsync_3.2.7-1_amd64.deb ...
Unpacking rsync (3.2.7-1) ...
Selecting previously unselected package sgml-base.
Preparing to unpack .../01-sgml-base_1.31_all.deb ...
Unpacking sgml-base (1.31) ...
Selecting previously unselected package libpkgconf3:amd64.
Preparing to unpack .../02-libpkgconf3_1.8.1-1_amd64.deb ...
Unpacking libpkgconf3:amd64 (1.8.1-1) ...
Selecting previously unselected package pkgconf-bin.
Preparing to unpack .../03-pkgconf-bin_1.8.1-1_amd64.deb ...
Unpacking pkgconf-bin (1.8.1-1) ...
Selecting previously unselected package pkgconf:amd64.
Preparing to unpack .../04-pkgconf_1.8.1-1_amd64.deb ...
Unpacking pkgconf:amd64 (1.8.1-1) ...
Selecting previously unselected package pkg-config:amd64.
Preparing to unpack .../05-pkg-config_1.8.1-1_amd64.deb ...
Unpacking pkg-config:amd64 (1.8.1-1) ...
Selecting previously unselected package xml-core.
Preparing to unpack .../06-xml-core_0.18+nmu1_all.deb ...
Unpacking xml-core (0.18+nmu1) ...
Selecting previously unselected package libdbus-1-dev:amd64.
Preparing to unpack .../07-libdbus-1-dev_1.14.8-2~deb12u1_amd64.deb ...
Unpacking libdbus-1-dev:amd64 (1.14.8-2~deb12u1) ...
Selecting previously unselected package liblinux-inotify2-perl.
Preparing to unpack .../08-liblinux-inotify2-perl_1%3a2.3-2_amd64.deb ...
Unpacking liblinux-inotify2-perl (1:2.3-2) ...
Selecting previously unselected package libpcap0.8-dev:amd64.
Preparing to unpack .../09-libpcap0.8-dev_1.10.3-1_amd64.deb ...
Unpacking libpcap0.8-dev:amd64 (1.10.3-1) ...
Selecting previously unselected package skudonet-ce-cluster.
Preparing to unpack .../10-skudonet-ce-cluster_2.0_amd64.deb ...
Unpacking skudonet-ce-cluster (2.0) ...
Setting up liblinux-inotify2-perl (1:2.3-2) ...
Setting up libpkgconf3:amd64 (1.8.1-1) ...
Setting up pkgconf-bin (1.8.1-1) ...
Setting up sgml-base (1.31) ...
Setting up rsync (3.2.7-1) ...
rsync.service is a disabled or a static unit, not starting it.
Setting up pkgconf:amd64 (1.8.1-1) ...
Setting up pkg-config:amd64 (1.8.1-1) ...
Setting up xml-core (0.18+nmu1) ...
Processing triggers for libc-bin (2.36-9+deb12u1) ...
Processing triggers for sgml-base (1.31) ...
Setting up libdbus-1-dev:amd64 (1.14.8-2~deb12u1) ...
Setting up libpcap0.8-dev:amd64 (1.10.3-1) ...
Setting up skudonet-ce-cluster (2.0) ...
Completing the Skudonet CE Cluster installation...

Notice that the SKUDONET CE Cluster uses VRRP, and time synchronization is mandatory for this protocol, so ensure your NTP service is properly configured and the NTP servers are reachable from the Load Balancer (destination port 123 UDP). The NTP service can be configured from the web GUI, lateral menu section “SYSTEM > Services”, where you will find the NTP configuration box.

Configure SKUDONET CE cluster package

Once the installation is concluded, please configure the cluster service as follows:

Open the configuration file located at /usr/local/zevenet/app/ucarp/etc/skudonet-cluster.conf

The most important parameters are described below:

# interface used for the cluster, where local_ip and remote_ip are configured
$interface="eth0";

# local IP to be monitored, e.g. 192.168.1.250
$local_ip="192.168.1.250";

# remote IP to be monitored, e.g. 192.168.1.251
$remote_ip="192.168.1.251";

# password used for VRRP protocol communication
$password="secret";

# unique value for the VRRP cluster in the network
$cluster_id="1";

# virtual IP used in the cluster; this IP will always run on the master node
$cluster_ip="192.168.1.252";

# if the NIC used for the cluster is different from eth0, change the excluded
# configuration file in the following line
$exclude="--exclude if_eth0_conf";

Apply the same configuration on the other node, taking into account that $local_ip will point to 192.168.1.251 and $remote_ip will point to 192.168.1.250.
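
Putting that together, node2's configuration file would carry the mirrored values of the example above (same password and cluster ID, swapped local/remote IPs):

```
# node2: /usr/local/zevenet/app/ucarp/etc/skudonet-cluster.conf
$interface="eth0";
$local_ip="192.168.1.251";
$remote_ip="192.168.1.250";
$password="secret";
$cluster_id="1";
$cluster_ip="192.168.1.252";
$exclude="--exclude if_eth0_conf";
```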

Notice that only virtual interfaces are replicated, so if you are running more than one NIC or VLAN, they have to be excluded in the cluster configuration file. For example, if eth0 is used for cluster purposes and vlan100 (eth0.100) for load balancing purposes, then:

$exclude="--exclude if_eth0_conf --exclude if_eth0.100_conf";
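
As an illustration, the $exclude string can be assembled from the list of interfaces that must not be replicated. This is only a sketch: build_exclude is a hypothetical helper (not part of the cluster package), and the if_<name>_conf file-name pattern is taken from the examples above.

```shell
#!/bin/sh
# Sketch: assemble the rsync exclude list for the NICs/VLANs whose
# configuration must NOT be replicated (only virtual interfaces are).
# build_exclude is a hypothetical helper, not part of the cluster package.
build_exclude() {
    exclude=""
    for nic in "$@"; do
        # each physical NIC/VLAN config file follows the if_<name>_conf pattern
        exclude="$exclude --exclude if_${nic}_conf"
    done
    printf '%s' "${exclude# }"   # trim the leading space
}

build_exclude eth0 eth0.100
# prints: --exclude if_eth0_conf --exclude if_eth0.100_conf
```

The resulting string can be pasted as the value of $exclude in skudonet-cluster.conf.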

Notice that the SKUDONET cluster is managed by the root user and it replicates the configuration from the master node to the backup through rsync (over SSH), so passwordless SSH between the nodes needs to be configured.

Open an SSH connection to node1 and run the following command:

root@lb1 > ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:cpFPrW67uSfX3IajToNKeoBOZx0/bCzTjsT4f7F2pK8 root@sku7ce1
The key's randomart image is:
+---[RSA 3072]----+
|                 |
|         . .     |
|        o . .    |
|        .+ .     |
|     ..+S*o      |
|    o =o*.B.. .  |
|   o o +.*+.oB o |
|    .  o+oo+B.* o|
|      ....*XE=.o |
+----[SHA256]-----+

Do the same in node2 of the cluster.

This command will generate a few files under /root/.ssh/. We need to copy the content of the file /root/.ssh/id_rsa.pub into a new file named /root/.ssh/authorized_keys on the other node, as follows:

Copy the content of file /root/.ssh/id_rsa.pub of node1 in a new file created in node2 with name /root/.ssh/authorized_keys
Copy the content of file /root/.ssh/id_rsa.pub of node2 in a new file created in node1 with name /root/.ssh/authorized_keys
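
The cross-copy above can be sketched with two temporary directories standing in for each node's /root/.ssh directory (the key strings below are placeholders, not real keys). In practice, running ssh-copy-id root@<other-node> on each node achieves the same result.

```shell
#!/bin/sh
# Illustration only: two temp dirs stand in for node1's and node2's
# /root/.ssh, and the key contents are placeholder strings.
node1_ssh=$(mktemp -d)
node2_ssh=$(mktemp -d)
echo "ssh-rsa PLACEHOLDERKEY1 root@lb1" > "$node1_ssh/id_rsa.pub"
echo "ssh-rsa PLACEHOLDERKEY2 root@lb2" > "$node2_ssh/id_rsa.pub"

# node1's public key goes into node2's authorized_keys, and vice versa
cat "$node1_ssh/id_rsa.pub" >> "$node2_ssh/authorized_keys"
cat "$node2_ssh/id_rsa.pub" >> "$node1_ssh/authorized_keys"

grep -q "root@lb1" "$node2_ssh/authorized_keys" && echo "node2 authorizes node1"
grep -q "root@lb2" "$node1_ssh/authorized_keys" && echo "node1 authorizes node2"
```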

Please ensure that no password is required when you run ssh between both nodes of the cluster.

Notice that the defined $cluster_ip has to be configured and UP on one SKUDONET virtual load balancer, the future master; as soon as the service is started on this node, the configuration file for $cluster_ip will be replicated to the backup server automatically.

Now enable the cluster service with the following two steps:

First, open the file /etc/init.d/skudonet-ce-cluster and change the following variable:

$enable_cluster="true";

Second, the skudonet-ce-cluster service is disabled by default at boot, so please execute the following command to enable it after reboot:

[] root@lb1 > systemctl enable skudonet-ce-cluster

Take into account that any change in the configuration file /usr/local/zevenet/app/ucarp/etc/skudonet-cluster.conf requires a restart of the cluster service, so once the configuration is done, please restart the cluster on both nodes as follows:

[] root@lb1 > /etc/init.d/skudonet-ce-cluster stop
[] root@lb1 > /etc/init.d/skudonet-ce-cluster start

Notice that as soon as the cluster service is running, the prompt on each load balancer is modified to show the node's cluster status:
Master:

[master] root@lb1>

Backup:

[backup] root@lb2>

Logs and troubleshooting

  1. SSH without a password is required between both cluster nodes
  2. NTP must be configured on both cluster nodes
  3. The skdinotify service will only run on the master node; please confirm skdinotify is running with the following command:
    [master] root@lb1> ps -ef | grep skdinotify
    root 16912 1 0 03:20 ? 00:00:00 /usr/bin/perl /usr/local/zevenet/app/skdinotify/skdinotify.pl

    You should see nothing related to skdinotify on the backup node:

    [backup] root@lb2> ps -ef | grep skdinotify
    [backup] root@lb2>
  4. Logs for the ucarp service are sent to syslog (/var/log/syslog)
  5. Logs for the skdinotify replication service are sent to /var/log/skdinotify.log
  6. The skdinotify replication service is responsible for sending any change in the configuration directory /usr/local/zevenet/config from the master to the backup
  7. The cluster status is shown in the prompt and is updated after any command execution. Additionally, the cluster status is saved in the file /etc/skudonet-ce-cluster.status; if this file doesn't exist, the cluster service is stopped.
  8. When the cluster node promotes to MASTER, the following script is executed: /usr/local/zevenet/app/ucarp/sbin/skudonet-ce-cluster-start
  9. When the cluster node promotes to BACKUP, the following script is executed: /usr/local/zevenet/app/ucarp/sbin/skudonet-ce-cluster-stop
  10. When the cluster node needs to send advertisements, the following script is executed: /usr/local/zevenet/app/ucarp/sbin/skudonet-ce-cluster-advertisement
  11. In case you need to change any parameter of the ucarp execution, you can modify the run_cluster() subroutine in the script /etc/init.d/skudonet-ce-cluster
  12. The cluster service uses a VRRP implementation, so multicast packets need to be allowed in the switches
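
The status-file convention described in item 7 can be turned into a small check. This is only a sketch: cluster_status is a hypothetical helper name, and the exact content written to the status file ("master"/"backup") is an assumption based on the prompt labels shown earlier.

```shell
#!/bin/sh
# Sketch based on item 7 above: derive the cluster state from the status
# file. cluster_status is a hypothetical helper; the real service only
# keeps /etc/skudonet-ce-cluster.status while it is running.
cluster_status() {
    # $1: path to the status file (normally /etc/skudonet-ce-cluster.status)
    if [ -f "$1" ]; then
        cat "$1"          # assumed content, e.g. "master" or "backup"
    else
        echo "stopped"    # a missing file means the cluster service is stopped
    fi
}
```

For example, cluster_status /etc/skudonet-ce-cluster.status could be called from a monitoring script on each node.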