NFS Server Inside FreeBSD VNET Jail

FreeBSD Jails is a great piece of container technology pioneered several years before Solaris Zones – not to mention 15 years before Docker was born. Today they still work great and offer some new features like an entire network stack for each Jail, called VNET. Unfortunately they also have downsides. For example anything related to NFS is broken inside FreeBSD Jails (especially when they are VNET based Jails) and the relevant 251347 Bug Report remains unsolved.

There is however a way to run an NFS server inside a VNET based FreeBSD Jail – we will use a userspace NFS server implementation instead of the FreeBSD base system kernel space NFS server. It's available as the net/unfs3 package and this is exactly what we are going to use in this guide.

unfs3


Same in plain text below.

/ % cd /usr/ports/net/unfs3

/usr/ports/net/unfs3 % cat pkg-descr
UNFS3 is a user-space implementation of the NFSv3 server specification. It
provides a daemon for the MOUNT and NFS protocols, which are used by NFS
clients for accessing files on the server.
Since it runs in user-space, you can use it in a jail.

WWW: https://unfs3.github.io/

/usr/ports/net/unfs3 % pkg info -l unfs3           
unfs3-0.9.22_2:
        /usr/local/man/man7/tags.7.gz
        /usr/local/man/man8/unfsd.8.gz
        /usr/local/sbin/unfsd
        /usr/local/share/licenses/unfs3-0.9.22_2/BSD3CLAUSE
        /usr/local/share/licenses/unfs3-0.9.22_2/LICENSE
        /usr/local/share/licenses/unfs3-0.9.22_2/catalog.mk

It's also a pity that the VNET feature for FreeBSD Jails is not well documented. Search the FreeBSD Handbook or FreeBSD FAQ for the VNET or VIMAGE keywords. Not a single match. There are only man pages and some stuff left in the /usr/share/examples/jails dir. There is also the FreeBSD Mastery: Jails book by Michael W. Lucas but it's 3 years old already.

Setup

Below you will find the list of systems we will use in this guide.

10.0.10.250  host
10.0.10.251  nfs_server

The host is a common FreeBSD server installed on a physical or virtual machine. We will also use it as our NFS client and mount the NFS share there. The nfs_server is a FreeBSD Jail with a separate VNET network stack enabled. We will run the NFS server from this nfs_server system. Both of them run the latest FreeBSD 13.1-RELEASE but I suspect that it should also work the same on older versions.

FreeBSD Host and NFS Client (host)

First we will set up the host machine. It's a typical default ZFS FreeBSD install – nothing special about that. To use VNET enabled Jails we will use the jib tool from the /usr/share/examples/jails directory as we will need it to automate epair(4) interface management.

root@host:/ # install -o root -g wheel -m 0555 /usr/share/examples/jails/jib /usr/sbin/jib

Our next step is to fetch and set up the nfs_server FreeBSD Jail. We will not waste time on compilation – we will fetch the base.txz directly from the FreeBSD page.

root@host:/ # mkdir -p /jail/BASE
root@host:/jail/BASE # cd /jail/BASE
root@host:/jail/BASE # fetch http://ftp.freebsd.org/pub/FreeBSD/releases/amd64/13.1-RELEASE/base.txz
root@host:/jail/BASE # mv base.txz 13.1-base.txz

Now the nfs_server FreeBSD Jail.

root@host:/ # mkdir -p /jail/nfs_server
root@host:/jail/nfs_server # cd /jail/nfs_server
root@host:/jail/nfs_server # tar -xzf /jail/BASE/13.1-base.txz --unlink

The main FreeBSD /etc/rc.conf configuration file does not hold any special settings – pretty usual stuff.

root@host:/ # cat /etc/rc.conf
# NETWORK
  hostname="host"
  ifconfig_em0="inet 10.0.10.250/24 up"
  defaultrouter="10.0.10.1"
  gateway_enable="YES"

# DAEMONS
  dumpdev="AUTO"
  sshd_enable="YES"
  zfs_enable="YES"
  sendmail_enable="NO"
  sendmail_submit_enable="NO"
  sendmail_outbound_enable="NO"
  sendmail_msp_queue_enable="NO"
  update_motd="NO"

# JAILS
  jail_enable="YES"
  jail_parallel_start="YES"
  jail_list="nfs_server"

The nfs_server FreeBSD Jail is configured in the /etc/jail.conf file as follows.

root@host:/ # cat /etc/jail.conf
nfs_server {
  path = "/jail/${name}";
  host.hostname = "${name}";
  allow.raw_sockets = 1;
  allow.set_hostname = 1;
  allow.sysvipc = 1;
  mount.devfs;
  exec.clean;
  vnet;
  vnet.interface = "e0b_${name}";
  exec.prestart += "/usr/sbin/jib addm -b _bridge0 ${name} em0";
  exec.poststop += "/usr/sbin/jib destroy ${name}";
  exec.start += "/bin/sh /etc/rc";
  exec.stop = "/bin/sh /etc/rc.shutdown";
  exec.consolelog = "/var/log/jail_${name}_console.log";
}

… and last but not least let's make sure the following DNS haiku will not bother us 🙂

dns

Set up the /etc/hosts file on the host system.

root@host:/ # tail -3 /etc/hosts
10.0.10.250 host
10.0.10.251 nfs_server

FreeBSD NFS Server VNET Jail (nfs_server)

As our FreeBSD Jail is installed we will now start and configure it.

root@host:/ # service jail onestart nfs_server

root@host:/ # jls
   JID  IP Address      Hostname                      Path
     1                  nfs_server                    /jail/nfs_server
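Note that the IP Address column is empty – with VNET the address lives inside the Jail's own network stack rather than in the Jail metadata. To quickly confirm that the jib script created the epair(4) interfaces and attached the host side to the bridge, you can list the epair interface group on the host – it should show e0a_nfs_server:

root@host:/ # ifconfig -g epair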

root@host:/ # jexec 1

root@nfs_server:/ # 

First we will install the latest net/unfs3 package – this userspace NFS server is very minimal and does not have any dependencies.

root@nfs_server:/ # echo nameserver 1.1.1.1 > /etc/resolv.conf

root@nfs_server:/ # sed -i '' s/quarterly/latest/g /etc/pkg/FreeBSD.conf

root@nfs_server:/ # pkg install unfs3

root@nfs_server:/ # pkg info -qoa
ports-mgmt/pkg
net/unfs3

Now we will configure our NFS share under the /share dir and start the unfsd(8) userspace NFS server. Note that unfsd(8) parses its exports file with Linux-style options – hence no_root_squash and no_all_squash below – rather than the FreeBSD exports(5) syntax.

root@nfs_server:/ # mkdir /share

root@nfs_server:/ # cat /etc/exports
/share  10.0.10.250(rw,no_root_squash,no_all_squash)

… last but not least – DNS 🙂

root@nfs_server:/ # tail -3 /etc/hosts
10.0.10.250 host
10.0.10.251 nfs_server

As we are using the VNET network stack in a FreeBSD Jail we will have to address the network interface in the Jail's /etc/rc.conf file. The unfsd(8) daemon does not start without the rpcbind service so we will also enable it.

root@nfs_server:/ # cat /etc/rc.conf
# NETWORK
  hostname="nfs_server"
  ifconfig_e0b_nfs_server="10.0.10.251/24 up"
  defaultrouter="10.0.10.1"

# DAEMONS
  sshd_enable="YES"
  rpcbind_enable="YES"
  sendmail_enable="NO"
  sendmail_submit_enable="NO"
  sendmail_outbound_enable="NO"
  sendmail_msp_queue_enable="NO"

We will make unfsd(8) start automatically at Jail start with the plain old /etc/rc.local file.

root@nfs_server:/ # cat /etc/rc.local 
/usr/local/sbin/unfsd &
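unfsd(8) reads /etc/exports by default. If you prefer to be explicit about the exports file and keep a PID file around for rc scripting, flags along these lines should work – treat this as a sketch and check unfsd(8) for the exact option set in your version:

/usr/local/sbin/unfsd -e /etc/exports -i /var/run/unfsd.pid &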

We will now restart our FreeBSD Jail to make these changes take effect.

root@host:/ # service jail onerestart nfs_server

root@host:/ # jls
   JID  IP Address      Hostname                      Path
     2                  nfs_server                    /jail/nfs_server

root@host:/ # jexec 2

root@nfs_server:/ # 

After startup we can see that unfsd(8) is listening on the NFS 2049 port.

root@nfs_server:/ # sockstat -l4
USER     COMMAND    PID   FD PROTO  LOCAL ADDRESS         FOREIGN ADDRESS      
root     sshd       1261  4  tcp4   *:22                  *:*
root     sendmail   1241  5  tcp4   127.0.0.1:25          *:*
root     unfsd      1223  3  udp4   *:2049                *:*
root     unfsd      1223  4  tcp4   *:2049                *:*
root     rpcbind    1196  9  udp4   *:111                 *:*
root     rpcbind    1196  10 udp4   *:842                 *:*
root     rpcbind    1196  11 tcp4   *:111                 *:*
root     syslogd    1188  6  udp4   *:514                 *:*
  

We should have our epair(4) interface called e0b_nfs_server addressed properly.

root@nfs_server:/ # ifconfig 
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
        options=680003<RXCSUM,TXCSUM,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
        inet6 ::1 prefixlen 128
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1
        inet 127.0.0.1 netmask 0xff000000
        groups: lo
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
e0b_nfs_server: flags=8863<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=8<VLAN_MTU>
        ether 0e:27:dd:b3:81:88
        hwaddr 02:30:0d:9f:57:0b
        inet 10.0.10.251 netmask 0xffffff00 broadcast 10.0.10.255
        groups: epair
        media: Ethernet 10Gbase-T (10Gbase-T <full-duplex>)
        status: active
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>

Mount /share on NFS Client

I added the NFS entry below to the /etc/fstab file on the host machine.

root@host:~ # cat /etc/fstab 
#DEV         #MNT       #TYPE    #OPT  #DUMP/PASS
/dev/ada0p1  /boot/efi  msdosfs  rw    2 2
/dev/ada0p3  none       swap     sw    0 0

#DEV                #MNT  #TYPE  #OPT       #DUMP/PASS
10.0.10.251:/share  /mnt  nfs    rw,noauto  0 0

We will now attempt to mount the /share NFS export on the host machine.

root@host:/ # mount /mnt

root@host:/ # mount | grep share
10.0.10.251:/share on /mnt (nfs)

root@host:/ # cd /mnt

root@host:/mnt # :> FILE

root@host:/mnt # ls -l FILE
-rw-r--r-- 1 root root 0 2022-05-21 22:53 FILE

root@host:/mnt # rm FILE

Seems to work properly.
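Since unfsd(8) only implements NFSv3 it may also be worth pinning the protocol version and transport explicitly when mounting by hand – a hedged equivalent of the fstab entry above:

root@host:/ # mount -t nfs -o nfsv3,tcp 10.0.10.251:/share /mnt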

Here are also network interfaces on the host machine.

root@host:/ # ifconfig 
em0: flags=8963<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=4810099<RXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,VLAN_HWFILTER,NOMAP>
        ether 08:00:27:b3:81:88
        inet 10.0.10.250 netmask 0xffffff00 broadcast 10.0.10.255
        media: Ethernet autoselect (1000baseT <full-duplex>)
        status: active
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
        options=680003<RXCSUM,TXCSUM,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
        inet6 ::1 prefixlen 128
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
        inet 127.0.0.1 netmask 0xff000000
        groups: lo
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
em0_bridge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        ether 58:9c:fc:10:ff:dd
        id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
        maxage 20 holdcnt 6 proto rstp maxaddr 2000 timeout 1200
        root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
        member: e0a_nfs_server flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 4 priority 128 path cost 2000
        member: em0 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 1 priority 128 path cost 20000
        groups: bridge
        nd6 options=9<PERFORMNUD,IFDISABLED>
e0a_nfs_server: flags=8963<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=8<VLAN_MTU>
        ether 02:27:dd:b3:81:88
        hwaddr 02:30:0d:9f:57:0a
        groups: epair
        media: Ethernet 10Gbase-T (10Gbase-T <full-duplex>)
        status: active
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>


Future of NFS Server in Jails

This setup – while allowing us to run an NFS server inside a FreeBSD Jail, even with VNET enabled – unfortunately has its drawbacks. First, it runs in userspace instead of kernel space – which means it's slower. Second, unfsd(8) only implements NFS version 3 – so version 4 is not possible.

freebsd-foundation-logo

Where can we go from here? Like with the WiFi stuff, IMHO the FreeBSD Foundation could step in to sponsor the missing bits of NFS server and VNET to make these native tools work as they should. It's up to you to put pressure on the FreeBSD Foundation when they ask what you are missing from the FreeBSD UNIX system that could be improved with one of their projects. You may also join the discussion at the 251347 Bug Report of course.

I think it's a big loss that a native kernel space NFS server is not currently possible with VNET FreeBSD Jails.

EOF

Secure Containerized Browser

Chromium on OpenBSD (not so) recently got OpenBSD's unveil(2) support. That means that if you run Chromium with the --enable-unveil flag then it will be prevented from accessing anything other than the ~/Downloads directory. No such thing exists on FreeBSD. Firefox or Chromium have access to all files your user can read – even to your system sshd(8) keys or, even worse, to your private keys lying in the ~/.ssh dir. On FreeBSD, thanks to its Jails technology, we can create a secure containerized browser with access only to a specified directory. On my system it's the ~/download dir.

You may want to check other desktop related articles in the FreeBSD Desktop series on the FreeBSD Desktop page.

Configuration

We will start with the /etc/jail.conf file configuration. For the record – we will be using /jail as our FreeBSD Jails main dir. I will also use the /jail dir as a convenient place for the 'base' FreeBSD version tarballs. As I use the 10.0.0.0/24 address space I will use 10.0.0.200 for our containerized browser. Feel free to pick another IP from which you will be able to reach the Internet. The /etc/jail.conf is shown below. One thing to note here: as I am using the WiFi wlan0 interface I have put that into the Jail configuration. If you use a LAN interface (for example em0) then put that into this Jail config instead. As you can see from the example below we will be using the Firefox browser.

root@host # cat /etc/jail.conf

# GLOBAL
  exec.start = "/bin/sh /etc/rc";
  exec.stop = "/bin/sh /etc/rc.shutdown";
  exec.clean;
  exec.consolelog = "/var/log/jail_${name}_console.log";
  mount.devfs;
  host.hostname = ${name};
  path = /jail/${name};

# JAILS
  firefox {
    devfs_ruleset = 30;
    ip4.addr = 10.0.0.200;
    interface = wlan0;
    allow.raw_sockets;
    allow.sysvipc;
    mount.fstab = "/jail/firefox/etc/fstab";
  }

As you can see we will also be using devfs(8) rules in the /etc/devfs.rules file – shown below. This configuration is needed to have access to sound(4) in our FreeBSD Jail. If you do not need sound then you can delete devfs_ruleset = 30; from the /etc/jail.conf file and you do not need to add anything to the /etc/devfs.rules file.

root@host # cat /etc/devfs.rules
[sound=30]
add path 'mixer*' unhide
add path 'dsp*'   unhide
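Note that rulesets from the /etc/devfs.rules file are loaded by the devfs rc script – if you added the ruleset on an already running system, reloading it before starting the Jail should be enough:

root@host # service devfs restart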

As we want to share the ~/download dir with our containerized browser, we need to somehow add that information to our FreeBSD Jail. We will use FreeBSD's mount_nullfs(8) mechanism to mount our currently existing ~/download dir into the FreeBSD Jail. We will use the following /jail/firefox/etc/fstab for that purpose.

root@host # cat /jail/firefox/etc/fstab
#SOURCE         #MNT                                      #TYPE   #OPTS       #DUMP/PASS
/data/download  /jail/firefox/usr/home/vermaden/download  nullfs  rw,noatime  0 0

Of course you do not have to share any directory with your containerized browser.

You may as well want to make this Jail start every time you boot your system. To do that add the lines below to the /etc/rc.conf file.

jail_enable=YES
jail_parallel_start=YES
jail_list="firefox"
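If you prefer not to edit /etc/rc.conf by hand, the same can be set with sysrc(8) – note the += operator which appends to an existing jail_list:

root@host # sysrc jail_enable=YES
root@host # sysrc jail_parallel_start=YES
root@host # sysrc jail_list+="firefox"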

Create the Jail

As I use FreeBSD 13.0-RELEASE I will also be using a FreeBSD 13.0-RELEASE Jail for that purpose. If you are running for example FreeBSD 12.3-RELEASE then make sure that you use a FreeBSD 12.3-RELEASE Jail. The Jail version must not be newer than the host system version. We will now fetch the needed FreeBSD 'base' file and unpack it within the /jail/firefox dir where our container will live. We will also configure several other basic files such as the /etc/resolv.conf or /etc/hosts files.

root@host # mkdir -p /jail/BASE /jail/firefox /jail/firefox/usr/home/vermaden/download

root@host # fetch -o /jail/BASE/13.0-RELEASE-base.txz \
    http://ftp.freebsd.org/pub/FreeBSD/releases/amd64/13.0-RELEASE/base.txz

root@host # tar -xvpf /jail/BASE/13.0-RELEASE-base.txz -C /jail/firefox

root@host # echo nameserver 1.1.1.1 > /jail/firefox/etc/resolv.conf

root@host # echo 10.0.0.200 firefox >> /jail/firefox/etc/hosts

root@host # cat << EOF > /jail/firefox/etc/fstab
#SOURCE         #MNT                                      #TYPE   #OPTS       #DUMP/PASS
/data/download  /jail/firefox/usr/home/vermaden/download  nullfs  rw,noatime  0 0
EOF

We will now start our fresh FreeBSD Jail.

root@host # service jail onestart firefox

We can now also see two new mounts in the mount(8) output.

root@host # mount | tail -2
/data/download on /jail/firefox/usr/home/vermaden/download (nullfs, local, noatime)
devfs on /jail/firefox/dev (devfs)

root@host # mount -p | tail -2 | column -t
/data/download /jail/firefox/usr/home/vermaden/download nullfs rw,noatime 0 0
devfs /jail/firefox/dev devfs rw 0 0

You may want to update the FreeBSD version to the most up to date one with freebsd-update(8) commands.

root@host # freebsd-update -b /jail/firefox fetch
root@host # freebsd-update -b /jail/firefox install

Install Needed Packages

Before installing anything we will first switch to the latest branch of pkg(8) packages to have the most up to date software. We will then proceed to installing the Firefox package. We will also need the x11/xauth package for the X11 Forwarding process.

root@host # sed -i '' s.quarterly.latest.g /jail/firefox/etc/pkg/FreeBSD.conf

root@host # grep latest /jail/firefox/etc/pkg/FreeBSD.conf
  url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest",

root@host # jls
   JID  IP Address      Hostname                      Path
     1  10.0.0.200      firefox                       /jail/firefox

root@host # jexec 1

(root@jail) # pkg install -y firefox xauth

Create Matching User and Configure sshd(8) Daemon

We will now enter our FreeBSD Jail again for several other tasks needed for our containerized browser to work. The first is creating inside the Jail a user similar to the one you currently use on the host – especially with the same UID/GID, to have files with proper permissions in your real ~/download directory instead of files with another UID/GID that you would have to chown(8) as the root user. As my vermaden user uses UID/GID 1000 I will also use that inside. I will also set a simple password that you will only use once – to copy your public SSH key there.

root@host # jexec 1

(root@jail) # echo your-username-password-goes-here | pw user add -u 1000 -n vermaden -m -s /bin/sh -h 0

Now we need to run /usr/local/bin/dbus-uuidgen --ensure once to make sure DBUS is initialized properly. Firefox and many other apps would not start if we omitted that step.

(root@jail) # /usr/local/bin/dbus-uuidgen --ensure

Now the sshd(8) daemon. The only thing we need to do is to add it to the system startup and also add the X11UseLocalhost no option to its config file.

(root@jail) # sysrc sshd_enable=YES
sshd_enable: NO -> YES

(root@jail) # echo X11UseLocalhost no >> /etc/ssh/sshd_config

(root@jail) # service sshd start
Generating RSA host key.
2048 SHA256:VnrvItf0tl738C5Oc2St6T63/6o8zaDlfUskB+NrElo root@firefox (RSA)
Generating ECDSA host key.
256 SHA256:ZAjcAGqlrVwvY+J9MuVzErx9QUOqIOJE3nJX/Oqwtpk root@firefox (ECDSA)
Generating ED25519 host key.
256 SHA256:JdzUql2D2+X8iBn3c1jWDHQRNQMKqWGOcL4J16fIX0E root@firefox (ED25519)
Performing sanity check on sshd configuration.
Starting sshd.

Copy Public SSH Key and Start

Copying your public SSH key is optional but if you omit this step then you will have to type your FreeBSD Jail user password every time you want to start your secure Firefox instance.

vermaden@host % ssh-copy-id -i ~/.ssh/id_rsa vermaden@10.0.0.200
Password:

Now you can start your containerized browser. I have added some useful ssh(1) client flags like compression with -C and the fastest supported encryption with the -c aes128-ctr option. The -X is the X11 Forwarding option. I also added GDK_SYNCHRONIZE=1 to make Firefox yell less 🙂

vermaden@host % ssh -C -c aes128-ctr -X vermaden@10.0.0.200 env GDK_SYNCHRONIZE=1 firefox --new-instance

Now, without a password, you should see a fresh Firefox instance.

firefox-fresh
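For day to day use you may want to wrap that long ssh(1) invocation in an alias – a hypothetical example for your shell rc file:

alias firefox-jail='ssh -C -c aes128-ctr -X vermaden@10.0.0.200 env GDK_SYNCHRONIZE=1 firefox --new-instance'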

I will now try to play some random video. I can not show you that in an image but the sound also works 🙂

firefox-youtube

A similar setup can of course be created for another browser if Firefox is not your browser of choice. If you are curious how much space it uses, it's about this:

root@host # du -smx /jail/BASE/13.0-RELEASE-base.txz /jail/firefox 
181 /jail/BASE/13.0-RELEASE-base.txz
1603 /jail/firefox

root@host # du -smx -A /jail/BASE/13.0-RELEASE-base.txz /jail/firefox
181 /jail/BASE/13.0-RELEASE-base.txz
2601 /jail/firefox

I also added the -A flag in the second du(1) command to show you how much more space would be used without ZFS LZ4 compression.

UPDATE 1 – Use XPRA Instead of X11 Forwarding

Some people commented that this is quite a good setup but they were not happy with X11 Forwarding as the connection method. I decided to add an additional XPRA method to connect to our secure containerized browser. The first thing you need to do is to install the x11/xpra package on both the host system and inside the Jail container.

root@host # pkg install -y xpra
(root@jail) # pkg install -y xpra

Now – after logging into your user in the Jail container – vermaden in my case – we will use the xpra commands to create a new session with the Firefox browser.

Let's see if any xpra sessions currently exist.

(vermaden@jail) % xpra list
Warning: XDG_RUNTIME_DIR is not defined
 and '/run/user/1000' does not exist
 using '/tmp'
No xpra sessions found

Seems not. We can now start our Firefox session.

(vermaden@jail) % xpra start --bind-tcp=:14500 --start='firefox --new-instance'
Warning: XDG_RUNTIME_DIR is not defined
 and '/run/user/1000' does not exist
 using '/tmp'
Entering daemon mode; any further errors will be reported to:
  /tmp/xpra/S19958.log
Actual display used: :0
Actual log file name is now: /tmp/xpra/:0.log

We can see in the xpra list command output that a new session appeared.

(vermaden@jail) % xpra list
Warning: XDG_RUNTIME_DIR is not defined
 and '/run/user/1000' does not exist
 using '/tmp'
Found the following xpra sessions:
/home/vermaden/.xpra:
        LIVE session at :0

We can also see that xpra is now listening on the 14500 port.

(vermaden@jail) % sockstat -l4
USER     COMMAND    PID   FD PROTO  LOCAL ADDRESS         FOREIGN ADDRESS
vermaden python3.8  20781 3  tcp4   10.0.0.200:14500      *:*
root     sshd       58454 3  tcp4   10.0.0.200:22         *:*
root     syslogd    48568 5  udp4   10.0.0.200:514        *:*

We will now move to our host and start the graphical xpra client to connect to our FreeBSD Jail with the Firefox process.

update1-xpra-main

After clicking the large Connect button we can now enter our Jail address.

update1-xpra-connect

After again clicking the Connect button – on the bottom this time – we can now see our Firefox browser from our secure environment.

update1-xpra-firefox
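If you prefer the terminal over the GUI launcher, attaching should also be possible directly from the command line – the exact URL syntax may differ between xpra versions:

vermaden@host % xpra attach tcp:10.0.0.200:14500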

After we are done with our job in the more secure Firefox we can end our xpra session on the Jail system.

(vermaden@jail) % xpra stop
Warning: XDG_RUNTIME_DIR is not defined
 and '/run/user/1000' does not exist
 using '/tmp'
xpra initialization error:
 cannot find any live servers to connect to

(vermaden@jail) % xpra list
Warning: XDG_RUNTIME_DIR is not defined
 and '/run/user/1000' does not exist
 using '/tmp'
No xpra sessions found

As XPRA provides OpenGL acceleration you may verify that fact from your host system using the command below.

vermaden@host % xpra opengl
Warning: XDG_RUNTIME_DIR is not defined
 and '/run/user/1000' does not exist
 using '/tmp'
Warning: cannot handle window transparency
 screen is not composited
Warning: vendor 'Intel Open Source Technology Center' is greylisted,
 you may want to turn off OpenGL if you encounter bugs
Warning: window 0xffffffff changed its transparency attribute
 from False to True, behaviour is undefined
GLU.version=1.3
GLX=1.4
accelerate=3.1.5
accum-alpha-size=0
accum-blue-size=0
accum-green-size=0
accum-red-size=0
alpha-size=0
aux-buffers=0
blue-size=8
buffer-size=24
depth=24
depth-size=0
direct=True
display_mode=ALPHA, DOUBLE
double-buffered=True
green-size=8
level=0
max-viewport-dims=16384, 16384
opengl=3.0
pyopengl=3.1.5
red-size=8
renderer=Mesa DRI Intel(R) HD Graphics 3000 (SNB GT2)
rgba=True
safe=True
shading-language-version=1.30
stencil-size=0
stereo=False
success=True
texture-size-limit=8192
transparency=True
vendor=Intel Open Source Technology Center
zerocopy=True

You can also use VNC or other methods of course.

Hope that helps 🙂

EOF

RabbitMQ Cluster on FreeBSD Containers

I really like small and simple dedicated solutions that do one thing and do it really well – maybe it's because I like UNIX that much. A good example of such an approach is the Minio object storage which implements the S3 protocol with distributed clustering, erasure code and a builtin web interface along with many other features about which I wrote in the Distributed Object Storage with Minio on FreeBSD article.

RabbitMQ is another such example – currently probably the most popular implementation of the AMQP protocol – and it also comes with a small and sleek web interface. The difference is power. Minio comes with a very basic user oriented web interface while most administrative and configuration tasks need to be done from the CLI. The Minio web interface allows one to create/delete buckets and also to download/upload files. RabbitMQ has such a sophisticated web interface that after you enable it you do not need the command line anymore. Everything can be accomplished using just the web interface.

rabbitmq-logo.png

Compared to other messaging solutions like ActiveMQ or Apache Kafka it is very popular when checked in the Google Trends query.

rabbitmq-trends.jpg

Today I would like to show you RabbitMQ messaging with a quite redundant clustered setup with mirrored queues.

You will find Table of Contents below.

  • Jails Setup
  • RabbitMQ Installation
  • RabbitMQ Setup
  • RabbitMQ Plugins
  • RabbitMQ Administrative User
  • RabbitMQ Cluster Setup
  • RabbitMQ Highly Available Policy
  • Feed the Queue
  • Go Language Installation
  • Simple Benchmark
  • High Availability
  • UPDATE 1 – This Month in RabbitMQ
  • UPDATE 2 – Make RabbitMQ Use Less CPU

From all the virtualization possibilities available on FreeBSD (VirtualBox/Bhyve/QEMU/Jails/Docker) I have chosen the lightweight FreeBSD Containers – Jails 🙂

The legend is the same as usual.

Command run on the host system as root user.

host # command

Command run on the host system as regular user.

host % command

Command run on the rabbitX Jail.

rabbitX # command

Jails Setup

First we will create the base Jails for our setup. Both the host system and the Jail containers use the FreeBSD 11.2-RELEASE system.

host # mkdir -p /jail/BASE
host # fetch -o /jail/BASE/11.2-RELEASE.base.txz http://ftp.freebsd.org/pub/FreeBSD/releases/amd64/11.2-RELEASE/base.txz
host # for I in 1 2; do echo ${I}; mkdir -p /jail/rabbit${I}; tar --unlink -xpJf /jail/BASE/11.2-RELEASE.base.txz -C /jail/rabbit${I}; done
1
2
host #

We now have 2 empty clean Jails.

We will now add Jails configuration to the /etc/jail.conf file.

I have used my laptop as the Jail host thus the Jails will be configured to use the wireless wlan0 interface and 192.168.43.10X addresses. I also added 10.0.0.10X network addresses as this makes it more convenient for me for the purposes of writing this article.

host # for I in 1 2
do
  cat >> /etc/jail.conf << __EOF
rabbit${I} {
  host.hostname = rabbit${I}.local;
  ip4.addr += 192.168.43.10${I};
  ip4.addr += 10.0.0.10${I};
  interface = wlan0;
  path = /jail/rabbit${I};
  exec.start = "/bin/sh /etc/rc";
  exec.stop = "/bin/sh /etc/rc.shutdown";
  exec.clean;
  mount.devfs;
  allow.raw_sockets;
}

__EOF
done
host #

This is how the /etc/jail.conf file looks after it is configured.

host # cat /etc/jail.conf
rabbit1 {
  host.hostname = rabbit1.local;
  ip4.addr += 192.168.43.101;
  ip4.addr += 10.0.0.101;
  interface = wlan0;
  path = /jail/rabbit1;
  exec.start = "/bin/sh /etc/rc";
  exec.stop = "/bin/sh /etc/rc.shutdown";
  exec.clean;
  mount.devfs;
  allow.raw_sockets;
}

rabbit2 {
  host.hostname = rabbit2.local;
  ip4.addr += 192.168.43.102;
  ip4.addr += 10.0.0.102;
  interface = wlan0;
  path = /jail/rabbit2;
  exec.start = "/bin/sh /etc/rc";
  exec.stop = "/bin/sh /etc/rc.shutdown";
  exec.clean;
  mount.devfs;
  allow.raw_sockets;
}

Now we can start our Jails.

host # for I in 1 2; do service jail onestart rabbit${I}; done
Starting jails: rabbit1.
Starting jails: rabbit2.

Jails are running properly.

# jls
   JID  IP Address      Hostname                      Path
     1  192.168.43.101  rabbit1.local                 /jail/rabbit1
     2  192.168.43.102  rabbit2.local                 /jail/rabbit2

Time to add a DNS server to the Jails so they will have Internet connectivity.

host # for I in 1 2; do echo nameserver 1.1.1.1 > /jail/rabbit${I}/etc/resolv.conf; done
host # for I in 1 2; do cat /jail/rabbit${I}/etc/resolv.conf; done
nameserver 1.1.1.1
nameserver 1.1.1.1

Now we will switch from 'quarterly' to 'latest' packages.

host # for I in 1 2; do sed -i '' s/quarterly/latest/g /jail/rabbit${I}/etc/pkg/FreeBSD.conf; done

host # for I in 1 2; do grep latest /jail/rabbit${I}/etc/pkg/FreeBSD.conf; done
  url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest",
  url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest",

RabbitMQ Installation

We can now install RabbitMQ package.

host # for I in 1 2; do jexec rabbit${I} env ASSUME_ALWAYS_YES=yes pkg install -y rabbitmq; echo; done
Bootstrapping pkg from pkg+http://pkg.FreeBSD.org/FreeBSD:11:amd64/latest, please wait...
Verifying signature with trusted certificate pkg.freebsd.org.2013102301... done
[rabbit1.local] Installing pkg-1.10.5_5...
[rabbit1.local] Extracting pkg-1.10.5_5: 100%
Updating FreeBSD repository catalogue...
pkg: Repository FreeBSD load error: access repo file(/var/db/pkg/repo-FreeBSD.sqlite) failed: No such file or directory
[rabbit1.local] Fetching meta.txz: 100%    944 B   0.9kB/s    00:01    
[rabbit1.local] Fetching packagesite.txz: 100%    6 MiB 745.4kB/s    00:09    
Processing entries: 100%
FreeBSD repository update completed. 32114 packages processed.
All repositories are up to date.
Updating database digests format: 100%
The following 2 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
        rabbitmq: 3.7.15
        erlang-runtime19: 21.3.8.2

Number of packages to be installed: 2

The process will require 104 MiB more space.
41 MiB to be downloaded.
[rabbit1.local] [1/2] Fetching rabbitmq-3.7.15.txz: 100%    9 MiB 762.2kB/s    00:12    
[rabbit1.local] [2/2] Fetching erlang-runtime19-21.3.8.2.txz: 100%   33 MiB 978.8kB/s    00:35    
Checking integrity... done (0 conflicting)
[rabbit1.local] [1/2] Installing erlang-runtime19-21.3.8.2...
[rabbit1.local] [1/2] Extracting erlang-runtime19-21.3.8.2: 100%
[rabbit1.local] [2/2] Installing rabbitmq-3.7.15...
===> Creating groups.
Creating group 'rabbitmq' with gid '135'.
===> Creating users
Creating user 'rabbitmq' with uid '135'.
[rabbit1.local] [2/2] Extracting rabbitmq-3.7.15: 100%
Message from erlang-runtime19-21.3.8.2:

===========================================================================

To use this runtime port for development or testing, just prepend
its binary path ("/usr/local/lib/erlang19/bin") to your PATH variable.

===========================================================================

(...)

// SAME MESSAGES FOR THE OTHER rabbit2 JAIL //

Let's verify that the RabbitMQ package installed successfully.

host # for I in 1 2; do jexec rabbit${I} which rabbitmqctl; done
/usr/local/sbin/rabbitmqctl
/usr/local/sbin/rabbitmqctl

RabbitMQ Setup

We will now configure /etc/hosts files on our Jails.

host # for I in 1 2; do cat >> /jail/rabbit${I}/etc/hosts << __EOF
192.168.43.101 rabbit1
192.168.43.102 rabbit2

__EOF
done

… and a fast verification.

host # cat /jail/rabbit?/etc/hosts | grep 192.168.43 | sort -n | uniq -c
2 192.168.43.101 rabbit1
2 192.168.43.102 rabbit2

As we have the RabbitMQ package installed we need to enable and start it.

host # jexec rabbit1 /usr/local/etc/rc.d/rabbitmq rcvar
# rabbitmq
#
rabbitmq_enable="NO"
#   (default: "")

As we can see we need to set the rabbitmq_enable=YES value in the /etc/rc.conf file within each of our Jails.

host # for I in 1 2; do jexec rabbit${I} sysrc rabbitmq_enable=YES; done
rabbitmq_enable:  -> YES
rabbitmq_enable:  -> YES

Now we can start the RabbitMQ in the Jails.

host # for I in 1 2; do jexec rabbit${I} service rabbitmq start; done
Starting rabbitmq.
Starting rabbitmq.

Now we have two RabbitMQ instances up and running.

RabbitMQ Plugins

This is the list of plugins enabled by default – none.

rabbit1 # rabbitmq-plugins list
 Configured: E = explicitly enabled; e = implicitly enabled
 | Status: * = running on rabbit@rabbit1
 |/
[  ] rabbitmq_amqp1_0                  3.7.15
[  ] rabbitmq_auth_backend_cache       3.7.15
[  ] rabbitmq_auth_backend_http        3.7.15
[  ] rabbitmq_auth_backend_ldap        3.7.15
[  ] rabbitmq_auth_mechanism_ssl       3.7.15
[  ] rabbitmq_consistent_hash_exchange 3.7.15
[  ] rabbitmq_event_exchange           3.7.15
[  ] rabbitmq_federation               3.7.15
[  ] rabbitmq_federation_management    3.7.15
[  ] rabbitmq_jms_topic_exchange       3.7.15
[  ] rabbitmq_management               3.7.15
[  ] rabbitmq_management_agent         3.7.15
[  ] rabbitmq_mqtt                     3.7.15
[  ] rabbitmq_peer_discovery_aws       3.7.15
[  ] rabbitmq_peer_discovery_common    3.7.15
[  ] rabbitmq_peer_discovery_consul    3.7.15
[  ] rabbitmq_peer_discovery_etcd      3.7.15
[  ] rabbitmq_peer_discovery_k8s       3.7.15
[  ] rabbitmq_random_exchange          3.7.15
[  ] rabbitmq_recent_history_exchange  3.7.15
[  ] rabbitmq_sharding                 3.7.15
[  ] rabbitmq_shovel                   3.7.15
[  ] rabbitmq_shovel_management        3.7.15
[  ] rabbitmq_stomp                    3.7.15
[  ] rabbitmq_top                      3.7.15
[  ] rabbitmq_tracing                  3.7.15
[  ] rabbitmq_trust_store              3.7.15
[  ] rabbitmq_web_dispatch             3.7.15
[  ] rabbitmq_web_mqtt                 3.7.15
[  ] rabbitmq_web_mqtt_examples        3.7.15
[  ] rabbitmq_web_stomp                3.7.15
[  ] rabbitmq_web_stomp_examples       3.7.15

Time to enable the web interface plugin.

host # for I in 1 2; do jexec rabbit${I} rabbitmq-plugins enable rabbitmq_management; done
The following plugins have been configured:
  rabbitmq_management
  rabbitmq_management_agent
  rabbitmq_web_dispatch
Applying plugin configuration to rabbit@rabbit1...
The following plugins have been enabled:
  rabbitmq_management
  rabbitmq_management_agent
  rabbitmq_web_dispatch

started 3 plugins.

(...)

// SAME MESSAGES FOR THE OTHER rabbit2 JAIL //

Now we have the web interface plugin enabled in each RabbitMQ FreeBSD Jail.

A big 'E' letter means that this is a plugin that we enabled explicitly and a small 'e' letter means that the plugin is only enabled as a 'dependency' of some other plugin we requested to be enabled.

rabbit1 # rabbitmq-plugins list
 Configured: E = explicitly enabled; e = implicitly enabled
 | Status: * = running on rabbit@rabbit1
 |/
[  ] rabbitmq_amqp1_0                  3.7.15
[  ] rabbitmq_auth_backend_cache       3.7.15
[  ] rabbitmq_auth_backend_http        3.7.15
[  ] rabbitmq_auth_backend_ldap        3.7.15
[  ] rabbitmq_auth_mechanism_ssl       3.7.15
[  ] rabbitmq_consistent_hash_exchange 3.7.15
[  ] rabbitmq_event_exchange           3.7.15
[  ] rabbitmq_federation               3.7.15
[  ] rabbitmq_federation_management    3.7.15
[  ] rabbitmq_jms_topic_exchange       3.7.15
[E*] rabbitmq_management               3.7.15
[e*] rabbitmq_management_agent         3.7.15
[  ] rabbitmq_mqtt                     3.7.15
[  ] rabbitmq_peer_discovery_aws       3.7.15
[  ] rabbitmq_peer_discovery_common    3.7.15
[  ] rabbitmq_peer_discovery_consul    3.7.15
[  ] rabbitmq_peer_discovery_etcd      3.7.15
[  ] rabbitmq_peer_discovery_k8s       3.7.15
[  ] rabbitmq_random_exchange          3.7.15
[  ] rabbitmq_recent_history_exchange  3.7.15
[  ] rabbitmq_sharding                 3.7.15
[  ] rabbitmq_shovel                   3.7.15
[  ] rabbitmq_shovel_management        3.7.15
[  ] rabbitmq_stomp                    3.7.15
[  ] rabbitmq_top                      3.7.15
[  ] rabbitmq_tracing                  3.7.15
[  ] rabbitmq_trust_store              3.7.15
[e*] rabbitmq_web_dispatch             3.7.15
[  ] rabbitmq_web_mqtt                 3.7.15
[  ] rabbitmq_web_mqtt_examples        3.7.15
[  ] rabbitmq_web_stomp                3.7.15
[  ] rabbitmq_web_stomp_examples       3.7.15

Now – in order to create a cluster – we need these RabbitMQ instances to share the same ERLANG cookie. The ERLANG cookie can be found at /var/db/rabbitmq/.erlang.cookie on a FreeBSD system.

rabbit1 # cat /var/db/rabbitmq/.erlang.cookie; echo
NOEVQNXJDNLAJOSVWNIW
rabbit1 #

We will need to stop RabbitMQ to change the ERLANG cookie.

host # for I in 1 2; do jexec rabbit${I} service rabbitmq stop; done
Stopping rabbitmq.
Waiting for PIDS: 88684.
Stopping rabbitmq.
Waiting for PIDS: 20976.

Let's set the same ERLANG cookie on each FreeBSD Jail then.

host # for I in 1 2; do cat > /jail/rabbit${I}/var/db/rabbitmq/.erlang.cookie << __EOF
RABBITMQFREEBSDJAILS
__EOF
done
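One hedged caveat: the Erlang runtime refuses to use a cookie file with too open permissions, so after recreating it from the host it is worth making sure it is still owned by the rabbitmq user (uid/gid 135, as created by the package above) and readable only by that user:

host # for I in 1 2; do chown 135:135 /jail/rabbit${I}/var/db/rabbitmq/.erlang.cookie; done
host # for I in 1 2; do chmod 0400 /jail/rabbit${I}/var/db/rabbitmq/.erlang.cookie; done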

… and now we need to start them again.

host # for I in 1 2; do jexec rabbit${I} service rabbitmq start; done
Starting rabbitmq.
Starting rabbitmq.

Fast verification.

host # for I in 1 2; do jexec rabbit${I} cat /var/db/rabbitmq/.erlang.cookie; done
RABBITMQFREEBSDJAILS
RABBITMQFREEBSDJAILS

RabbitMQ Administrative User

Now we will create an administrative user called admin for the RabbitMQ instances.

host # for I in 1 2; do jexec rabbit${I} rabbitmqctl add_user admin ADMINPASSWORD; done
Adding user "admin" ...
Adding user "admin" ...

host # for I in 1 2; do jexec rabbit${I} rabbitmqctl set_user_tags admin administrator; done
Setting tags for user "admin" to [administrator] ...
Setting tags for user "admin" to [administrator] ...

host # for I in 1 2; do jexec rabbit${I} rabbitmqctl set_permissions -p / admin ".*" ".*" ".*" ; done
Setting permissions for user "admin" in vhost "/" ...
Setting permissions for user "admin" in vhost "/" ...
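Before logging in to the web interface it may be worth double-checking the account – rabbitmqctl can list users together with their tags:

host # jexec rabbit1 rabbitmqctl list_users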

We should now be able to log in to the http://192.168.43.101:15672/ (or http://10.0.0.101:15672/ also) RabbitMQ management page.

01-rabbitmq-login.png

After login a useful RabbitMQ dashboard will welcome you.

02-rabbitmq-dashboard.png

RabbitMQ Cluster Setup

We will now create RabbitMQ cluster.

rabbit1 # rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit1 ...
[{nodes,[{disc,[rabbit@rabbit1]}]},
 {running_nodes,[rabbit@rabbit1]},
 {cluster_name,<<"rabbit@rabbit1.local">>},
 {partitions,[]},
 {alarms,[{rabbit@rabbit1,[]}]}]

rabbit2 # hostname
rabbit2.local

rabbit2 # rabbitmqctl join_cluster rabbit@rabbit1
Error: this command requires the 'rabbit' app to be stopped on the target node. Stop it with 'rabbitmqctl stop_app'.
Arguments given:
        join_cluster rabbit@rabbit1

Usage

rabbitmqctl [--node <node>] [--longnames] [--quiet] join_cluster [--disc|--ram] <seed-node>

We first need to stop the RabbitMQ 'application' to join the cluster.

rabbit2 # rabbitmqctl stop_app
Stopping rabbit application on node rabbit@rabbit2 ...

rabbit2 # rabbitmqctl join_cluster rabbit@rabbit1
Clustering node rabbit@rabbit2 with rabbit@rabbit1

rabbit2 # rabbitmqctl start_app
Starting node rabbit@rabbit2 ...
 completed with 5 plugins.

rabbit2 # rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit2 ...
[{nodes,[{disc,[rabbit@rabbit1,rabbit@rabbit2]}]},
 {running_nodes,[rabbit@rabbit1,rabbit@rabbit2]},
 {cluster_name,<<"rabbit@rabbit1.local">>},
 {partitions,[]},
 {alarms,[{rabbit@rabbit1,[]},{rabbit@rabbit2,[]}]}]

rabbit1 # rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit1 ...
[{nodes,[{disc,[rabbit@rabbit1,rabbit@rabbit2]}]},
 {running_nodes,[rabbit@rabbit2,rabbit@rabbit1]},
 {cluster_name,<<"rabbit@rabbit1.local">>},
 {partitions,[]},
 {alarms,[{rabbit@rabbit2,[]},{rabbit@rabbit1,[]}]}]

Now we have formed a two node RabbitMQ cluster. We will rename it to rabbit@cluster then.

rabbit1 # rabbitmqctl set_cluster_name rabbit@cluster
Setting cluster name to rabbit@cluster ...

rabbit1 # rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit1 ...
[{nodes,[{disc,[rabbit@rabbit1,rabbit@rabbit2]}]},
 {running_nodes,[rabbit@rabbit2,rabbit@rabbit1]},
 {cluster_name,<<"rabbit@cluster">>},
 {partitions,[]},
 {alarms,[{rabbit@rabbit2,[]},{rabbit@rabbit1,[]}]}]

Here is how our cluster looks in the web interface.

08-rabbitmq-cluster.png

RabbitMQ Highly Available Policy

To have Highly Available (Mirrored) Queues in RabbitMQ you need to create a Policy. We will declare a Policy named ha which will match queues whose names begin with the 'ha-' prefix so they will be mirrored to both nodes in the cluster.

This is the command you need to execute to create such a Policy.

rabbit1 # rabbitmqctl set_policy ha "^ha-" '{"ha-mode":"all","ha-sync-mode":"automatic"}'
Setting policy "ha" for pattern "^ha-" to "{"ha-mode":"all","ha-sync-mode":"automatic"}" with priority "0" for vhost "/" ...

… or alternatively you can use the web interface to create it.

No matter which method you have chosen you will end up with the needed ha Policy as shown below.

03-rabbitmq-policy.png

Feed the Queue

We now have a two node RabbitMQ cluster with HA for queues whose names start with the ha- prefix. We will now test our RabbitMQ setup and will create and feed the queue with the send.go script – as you probably guessed – written in Go. We will need to add the Go language to our host system.

Go Language Installation

host # pkg install go
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
The following 1 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
        go: 1.12.5,1

Number of packages to be installed: 1

The process will require 262 MiB more space.
75 MiB to be downloaded.

Proceed with this action? [y/N]: y
(...)

host % go version
go version go1.12.5 freebsd/amd64

This is the send.go script – we will use it to send 10 messages to the ha-default queue. It's based on the RabbitMQ Hello World tutorial.

host % cat send.go
package main

import (
  "log"
  "amqp"
)

func FAIL_ON_ERROR(err error, msg string) {
  if err != nil {
    log.Fatalf("%s: %s", msg, err)
  }
}

func main() {
  conn, err := amqp.Dial("amqp://admin:ADMINPASSWORD@10.0.0.101:5672/")
  FAIL_ON_ERROR(err, "ER: failed to connect to RabbitMQ")
  defer conn.Close()

  ch, err := conn.Channel()
  FAIL_ON_ERROR(err, "ER: failed to open channel")
  defer ch.Close()

  q, err := ch.QueueDeclare(
    "ha-default", // name
    false,        // durable
    false,        // delete when unused
    false,        // exclusive
    false,        // no-wait
    nil,          // arguments
  )
  FAIL_ON_ERROR(err, "ER: failed to declare queue")

  body := "Hello World!"

  for i := 1; i <= 10; i++ {
    err = ch.Publish(
      "",     // exchange
      q.Name, // routing key
      false,  // mandatory
      false,  // immediate
      amqp.Publishing{
        ContentType: "text/plain",
        Body:        []byte(body),
      })
    log.Printf("IN: sent message '%s' (%d)", body, i)
    FAIL_ON_ERROR(err, "ER: failed to publish message")
  }

}


We will now run it.

host % go run send.go
send.go:5:3: cannot find package "amqp" in any of:
        /usr/local/go/src/amqp (from $GOROOT)
        /home/vermaden/.gopkg/src/amqp (from $GOPATH)

We lack the amqp package for the Go language.

We will need to download it from its https://github.com/streadway/amqp page. We will get it by downloading everything in a ZIP package.

host % mkdir -p ~/.gopkg/src
host % cd !$
host % pwd
/home/vermaden/.gopkg/src
host % fetch https://github.com/streadway/amqp/archive/master.zip
host % unzip master.zip 
Archive:  /home/vermaden/.gopkg/src/master.zip
   creating: amqp-master/
 extracting: amqp-master/.gitignore
 extracting: amqp-master/.travis.yml
 (...)
 extracting: amqp-master/uri.go
 extracting: amqp-master/uri_test.go
 extracting: amqp-master/write.go
host % rm master.zip
host % mv amqp-master amqp
host % cd amqp
host % pwd
/home/vermaden/.gopkg/src/amqp
host % exa
_examples          confirms.go         delivery_test.go        LICENSE            spec091.go
spec               confirms_test.go    doc.go                  pre-commit         tls_test.go
allocator.go       connection.go       example_client_test.go  read.go            types.go
allocator_test.go  connection_test.go  examples_test.go        read_test.go       uri.go
auth.go            consumers.go        fuzz.go                 README.md          uri_test.go
certs.sh           consumers_test.go   gen.sh                  reconnect_test.go  write.go
channel.go         CONTRIBUTING.md     go.mod                  return.go          
client_test.go     delivery.go         integration_test.go     shared_test.go     

We also need to make sure that PATH and GOPATH are properly configured. To do so you need to put these in your interactive shell config.

# GO SHELL SETUP
mkdir -p ~/.gopkg
export GOPATH=~/.gopkg
export PATH="${PATH}:~/.gopkg"
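With GOPATH configured, an alternative to the manual ZIP download should be a plain go get (the import path in the scripts would then be github.com/streadway/amqp instead of the bare amqp):

host % go get github.com/streadway/amqp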

Now we can get back to feeding our queue.

host % go run send.go
2019/06/05 13:53:59 IN: sent message 'Hello World!' (1)
2019/06/05 13:53:59 IN: sent message 'Hello World!' (2)
2019/06/05 13:53:59 IN: sent message 'Hello World!' (3)
2019/06/05 13:53:59 IN: sent message 'Hello World!' (4)
2019/06/05 13:53:59 IN: sent message 'Hello World!' (5)
2019/06/05 13:53:59 IN: sent message 'Hello World!' (6)
2019/06/05 13:53:59 IN: sent message 'Hello World!' (7)
2019/06/05 13:53:59 IN: sent message 'Hello World!' (8)
2019/06/05 13:53:59 IN: sent message 'Hello World!' (9)
2019/06/05 13:53:59 IN: sent message 'Hello World!' (10)
% 

The ha-default queue has been created and fed with 10 messages.

04-rabbitmq-queue

Now we need to 'receive' these messages from the queue. This is where the receive.go script comes in to help. It is also based on the RabbitMQ Hello World tutorial.

host % cat receive.go
package main

import (
  "log"
  "amqp"
)

func FAIL_ON_ERROR(err error, msg string) {
  if err != nil {
    log.Fatalf("%s: %s", msg, err)
  }
}

func main() {
  conn, err := amqp.Dial("amqp://admin:ADMINPASSWORD@10.0.0.102:5672/")
  FAIL_ON_ERROR(err, "ER: failed to connect to RabbitMQ")
  defer conn.Close()

  ch, err := conn.Channel()
  FAIL_ON_ERROR(err, "ER: failed to open channel")
  defer ch.Close()

  q, err := ch.QueueDeclare(
    "ha-default", // name
    false,        // durable
    false,        // delete when unused
    false,        // exclusive
    false,        // no-wait
    nil,          // arguments
  )
  FAIL_ON_ERROR(err, "ER: failed to declare queue")

  msgs, err := ch.Consume(
    q.Name, // queue
    "",     // consumer
    true,   // auto-ack
    false,  // exclusive
    false,  // no-local
    false,  // no-wait
    nil,    // args
  )
  FAIL_ON_ERROR(err, "ER: failed to register consumer")

  forever := make(chan bool)

  go func() {
    for d := range msgs {
      log.Printf("IN: received message: %s", d.Body)
    }
  }()

  log.Printf("IN: waiting for messages")
  log.Printf("IN: to exit press CTRL+C")
  <-forever
}

Here is its output after running. It will not stop running until you end it with the CTRL-C sequence.

host % go run receive.go
2019/06/05 13:54:34 IN: waiting for messages
2019/06/05 13:54:34 IN: to exit press CTRL+C
2019/06/05 13:54:34 IN: received message: Hello World!
2019/06/05 13:54:34 IN: received message: Hello World!
2019/06/05 13:54:34 IN: received message: Hello World!
2019/06/05 13:54:34 IN: received message: Hello World!
2019/06/05 13:54:34 IN: received message: Hello World!
2019/06/05 13:54:34 IN: received message: Hello World!
2019/06/05 13:54:34 IN: received message: Hello World!
2019/06/05 13:54:34 IN: received message: Hello World!
2019/06/05 13:54:34 IN: received message: Hello World!
2019/06/05 13:54:34 IN: received message: Hello World!
^C
%

If you checked the source code carefully then you probably noticed that I 'sent' messages to the rabbit1 node (10.0.0.101) while I 'received' the messages at the rabbit2 node (10.0.0.102).

Simple Benchmark

We will now run a simple benchmark with the receive.go script left running and the send.go script modified so that its for loop sends 100000 messages.
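For clarity – the only line that changes in send.go for this benchmark is the loop header:

  for i := 1; i <= 100000; i++ {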

host % go run receive.go
2019/06/05 13:52:34 IN: waiting for messages
2019/06/05 13:52:34 IN: to exit press CTRL+C

… and now the messages.

host % go run send.go
2019/06/05 13:53:59 IN: sent message 'Hello World!' (1)
2019/06/05 13:53:59 IN: sent message 'Hello World!' (2)
2019/06/05 13:53:59 IN: sent message 'Hello World!' (3)
(...)
2019/06/05 13:56:26 IN: sent message 'Hello World!' (99998)
2019/06/05 13:56:26 IN: sent message 'Hello World!' (99999)
2019/06/05 13:56:26 IN: sent message 'Hello World!' (100000)
% 

The results of this simple benchmark are below.

05-rabbitmq-benchmark.png

About 4000-5000 messages per second are handled by this RabbitMQ clustered instance within two FreeBSD Jails.

High Availability

Now we will test the high availability of our RabbitMQ cluster.

Currently the ha-default queue is at the rabbit1 node. We will now kill the rabbit1 Jail and see how the RabbitMQ web interface reacts.

host # jls
   JID  IP Address      Hostname                      Path
     1  192.168.43.101  rabbit1.local                 /jail/rabbit1
     2  192.168.43.102  rabbit2.local                 /jail/rabbit2

host # killall -9 -j 1

host # umount /jail/rabbit1/dev

Our ha-default queue in a matter of seconds switched to the rabbit2 node – HA works as desired.

06-rabbitmq-ha-node-fail.png

Let's start the rabbit1 Jail to get redundancy back.

host # service jail onestart rabbit1
Starting jails: rabbit1.
host # 

07-rabbitmq-ha-node-back.png

The ha-default queue got redundancy back with a +1 mark but it remained on the rabbit2 node.

… and last but not least – a little anniversary at the end – this is the 50th article (not counting the Valuable News series) on my blog 🙂

UPDATE 1 – This Month in RabbitMQ

The RabbitMQ Cluster on FreeBSD Containers article was featured in the This Month in RabbitMQ – July 2019 episode.

Thanks for mentioning!

UPDATE 2 – Make RabbitMQ Use Less CPU

As reported by Felix Ehlers on Twitter – RabbitMQ CPU usage will be reduced by setting the RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+sbwt none" variable.
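To make the setting persistent inside each Jail, one hedged option is RabbitMQ's own environment file – the FreeBSD package should read it from /usr/local/etc/rabbitmq/rabbitmq-env.conf:

rabbit1 # echo 'RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+sbwt none"' >> /usr/local/etc/rabbitmq/rabbitmq-env.conf
rabbit1 # service rabbitmq restart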

EOF