Tag Archives: linux

Quare FreeBSD?

I really wanted to make this article short … but I failed miserably. At least I tried to organize it well so one may get back to it after ‘some’ reading because it’s not a short lecture. I wanted to title it Why FreeBSD? but when you type that into your favorite duck.com search engine there are so many similar articles. I wanted it to have a distinguished and unique name so I used the Latin word for ‘why’ which is ‘quare’.

logo-freebsd

What can FreeBSD offer you that other operating systems do not? Of all the operating systems I have used I find FreeBSD to suck the least. This post is not here to convince you to use or try FreeBSD – that you will have to do by yourself. This article will show you why FreeBSD is a valuable or even better alternative to other operating systems and is definitely not dying.

This is the Table of Contents for this article.

  • Base System
  • ZFS Boot Environments
  • Rescue
  • Audio
  • Jails
  • FreeBSD Ports Infrastructure
  • Updating/Building from Source
  • Storage
  • Init System
  • Linux Binary Compatibility
  • Simplicity
  • Evolution Instead Rewriting
  • Documentation
  • Community
  • Closing Thoughts

Base System

When you install a Linux system it’s just a bunch of RPM or DEB packages. For example if you install the CentOS 7.8 Minimal variant you end up with several hundred RPM packages installed. After a week or month many of these packages will get updates, sometimes making this CentOS system unusable or even unbootable (the recent GRUB BootHole problem for example). On the contrary FreeBSD comes with a Base System concept. This means that when you install FreeBSD you install a minimal system as a whole. No packages or subsystems to be separately updated. Just the whole Base System. That means that the /boot /bin /sbin /usr /etc /lib /libexec /rescue directories are untouchable by any packages. When you decide to install packages (or build them using FreeBSD Ports) they will all fall into the /usr/local prefix. That means /usr/local/etc for configuration, the /usr/local/bin and /usr/local/sbin directories for binaries, the /usr/local/lib and /usr/local/libexec directories for libraries and so on. The FreeBSD Base System kernel modules are kept in the same dir as the kernel – the /boot/kernel directory. To keep things tidy all kernel modules that are provided by packages go into the /boot/modules dir. Everything has its place and is separated.

That is the separation between Base System binaries (in the /bin /sbin /usr/bin /usr/sbin dirs) and Third Party Packages maintained by pkg(8) which are located in the /usr/local/bin and /usr/local/sbin dirs. We all know the difference between bin (user) and sbin (root) binaries but in FreeBSD there is also another, more UFS related, separation. When there was only the UFS filesystem in the FreeBSD world the /bin and /sbin binaries were available at boot after the root (/) filesystem was mounted and yet before the /usr filesystem was mounted – this is a historical (and still useful in UFS setups) distinction dating back to old UNIX days. In ZFS setups it does not matter as all files are on the ZFS pool anyway.

The FreeBSD Base System separation also helps with another thing – when some package gets the ‘great’ idea to install a new compiler named cc and override the default system compiler … or to add libraries/includes in such a way that makes it super hard to get back into a working system. If some random FreeBSD package would add libc.so to the /usr/local/lib dir then you are covered and not prevented from running programs as usual because FreeBSD system binaries are linked against the libraries in the /usr/lib dir. This is why there is the PATH variable on UNIX systems (and FreeBSD as well) to set which directories should be searched for binaries first. On FreeBSD by default it’s set to search the Base System binaries dirs first and the Third Party Packages dirs later.

You can update (or not) the Base System separately from the installed packages – with the freebsd-update(8) command when using RELEASE or by recompiling with the make buildworld and make installworld commands when using STABLE/CURRENT systems. When it comes to packages you can update them using the pkg(8) tool or with portmaster when building from the FreeBSD Ports tree under the /usr/ports dir. That means that any package updates will not touch your FreeBSD Base System at all. For example when you mess up the compiled ports and packages (and I have done that at the beginning of my FreeBSD journey) and you want to start over, the only thing you have to do is remove the /usr/local and /boot/modules and /var/db/pkg directories. That’s it. You are just reverted to your Base System and can start over. This is just not possible on a Linux system. Even Gentoo, whose many concepts are based on FreeBSD ideas, does not have the Base System feature. The Base System also has an additional advantage. Because it’s versioned separately from the packages no one stops you from running oldschool FreeBSD 9.0 from 2012 and installing the latest Firefox 80 or LibreOffice 7.0 there. You can not install the latest Firefox on an Ubuntu from 2012 …
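
A minimal sketch of that ‘start over’ procedure – assuming you really are fine with losing all third party software and its configuration under /usr/local/etc:

# rm -rf /usr/local /boot/modules /var/db/pkg
# pkg bootstrap -f

The first command wipes everything that came from ports/packages, the second one bootstraps a fresh pkg(8) so you can start installing packages again on top of the untouched Base System.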

One may be ‘afraid’ that such a Base System independent from the installed packages would take more space but nothing could be further from the truth. A freshly installed FreeBSD 12.1 system uses less than 1 GB of disk space and takes less than 75 MB of RAM with sshd(8) running. For comparison a fresh CentOS 7.8 install with the ‘Minimal’ set chosen takes 1.1 GB of disk space and uses more than 100 MB of RAM with sshd(8) running. Such a CentOS system is really naked and needs more packages to be usable while FreeBSD with its Base System is far more capable and powerful and comes with the latest LLVM/CLANG compiler suite built in for example.

More on the Base System topic:

ZFS Boot Environments

I have talked about this many times and probably one time too few because the Linux world still ignores this blessing. Having ZFS Boot Environments is such a game changer that once you realize how powerful it is you will never want to use a system that does not support it. The idea is that you can snapshot a running system at any moment in time and then reboot into that moment (or snapshot) if something goes wrong. It’s a perfect solution for upgrades or changes to the system. FreeBSD systems are already well ‘protected’ from problems arising after updating the packages but ZFS Boot Environments take this to a whole new level.

groundhog

Like in the movie Groundhog Day (1993), with ZFS Boot Environments you will have limitless chances to get your shit together. Even Base System updates and changes are protected by it. You can even transport that Boot Environment to another system by using the zfs send and zfs recv commands … or propagate it onto many systems. You can create Jails containers from it … or install a new version of FreeBSD in a new Boot Environment and reboot into it while still having your older ‘production’ system untouched.
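
A minimal sketch of such a workflow with the bectl(8) tool from the Base System – the Boot Environment name below is just an example:

# bectl create before-upgrade
# pkg upgrade
  ... the upgrade broke something? ...
# bectl activate before-upgrade
# reboot

After the reboot you are back on the untouched ‘before-upgrade’ Boot Environment as if nothing ever happened.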

More on the ZFS Boot Environments topic:

Rescue

When you really mess up to the point that even the Base System concept or the ZFS Boot Environments feature did not stop you from killing your FreeBSD installation, then there is one more level of rescue … the Rescue subsystem.

rescue

You have about 150 statically linked binaries available at your disposal for the rescue mission of that FreeBSD installation. You probably think now that if there are so many binaries then it probably takes a lot of space … nothing could be further from the truth. It’s actually one static binary with hardlinks … and it takes a whopping 11 MB of disk space.

# ls -lh /rescue | head -5
total 1118446
-r-xr-xr-x  146 root  wheel    11M 2020.02.19 21:10 [
-r-xr-xr-x  146 root  wheel    11M 2020.02.19 21:10 bectl
-r-xr-xr-x  146 root  wheel    11M 2020.02.19 21:10 bsdlabel
-r-xr-xr-x  146 root  wheel    11M 2020.02.19 21:10 bunzip2

The Rescue subsystem even contains such binaries as bectl(8) for ZFS Boot Environments management or the zfs(8) and zpool(8) commands for the ZFS filesystem. Here is the complete list of these binaries.

# ls /rescue
[           dd               fsck_ffs      init       mdmfs          ping      rtsol        unlink
bectl       devfs            fsck_msdosfs  ipf        mkdir          ping6     savecore     unlzma
bsdlabel    df               fsck_ufs      iscsictl   mknod          pkill     sed          unxz
bunzip2     dhclient         fsdb          iscsid     more           poweroff  setfacl      unzstd
bzcat       dhclient-script  fsirand       kenv       mount          ps        sh           vi
bzip2       disklabel        gbde          kill       mount_cd9660   pwd       shutdown     whoami
camcontrol  dmesg            geom          kldconfig  mount_msdosfs  rcorder   sleep        xz
cat         dump             getfacl       kldload    mount_nfs      rdump     spppcontrol  xzcat
ccdconfig   dumpfs           glabel        kldstat    mount_nullfs   realpath  stty         zcat
chflags     dumpon           gpart         kldunload  mount_udf      reboot    swapon       zdb
chgrp       echo             groups        ldconfig   mount_unionfs  red       sync         zfs
chio        ed               gunzip        less       mt             rescue    sysctl       zpool
chmod       ex               gzcat         link       mv             restore   tail         zstd
chown       expr             gzip          ln         nc             rm        tar          zstdcat
chroot      fastboot         halt          ls         newfs          rmdir     tcsh         zstdmt
clri        fasthalt         head          lzcat      newfs_msdos    route     tee          
cp          fdisk            hostname      lzma       nextboot       routed    test         
csh         fsck             id            md5        nos-tun        rrestore  tunefs       
date        fsck_4.2bsd      ifconfig      mdconfig   pgrep          rtquery   umount   
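
For example when a botched update leaves you with a broken dynamic linker or missing shared libraries you can still get a working shell and tools from /rescue – a rough sketch, what exactly you run obviously depends on what you broke:

# /rescue/sh
# /rescue/zfs mount -a
# /rescue/vi /etc/rc.conf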

More on the Rescue topic:

Audio

Not many people expect FreeBSD to shine in that department but it shines a lot here, and not since yesterday but for decades. Remember when Linux got rid of the old OSS subsystem with one channel and came up with the ‘great’ idea to write ALSA? I remember because I used Linux back then. Disaster is a very polite word to describe the Linux audio stack back then … and then PulseAudio came and the whole Linux audio system got much worse. Back then having one OSS channel and many ALSA channels meant that ONLY ONE application with the OSS backend could make sound (for example WINE). But if another application wanted to ‘make’ sound using OSS and you already had WINE started then it would stay soundless because that one and only OSS channel was already taken. And remember that ALSA was so bad back then that KDE and GNOME made their own sound daemons mixing audio in userspace that were incompatible with each other. That means if you used KDE and GNOME apps back then you could have sound from GNOME apps but not from KDE apps or vice versa. One big fucking audio hell on Linux.

audio

Let’s get back to FreeBSD audio then. What did FreeBSD offer? A whopping 256 OSS channels mixed live in the kernel for low latency. Everything audio related just worked out of the box – and still works today. You could have WINE or the KDE/GNOME sound backends attached to their OSS channels and also ALSA apps getting their sound device without a problem. Even when you plugged a 5.1 surround system into FreeBSD it worked out of the box without any configuration and applications were able to use it immediately. That FreeBSD audio supremacy remains today as PulseAudio sound mixing in userspace – while generally working – incurs large latency on Linux compared to the in kernel FreeBSD mixing with low latency.

Comrade meka suggested that FreeBSD is also the only OS which has virtual_oss which allows mixing/resampling/compressing in user space and allows one to have Bluetooth headphones and a USB microphone represented as a single sound card.
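
If you want to poke around the FreeBSD audio stack yourself, a quick sketch of the Base System tools involved – device numbers and volume levels below are just examples for your hardware:

# cat /dev/sndstat
# sysctl hw.snd.default_unit=1
# mixer vol 85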

More on the Audio topic:

Jails

The FreeBSD Jails are one of the oldest OS Level Virtualization implementations dating back to 1999. Even the Solaris Zones/Containers came five years later in 2004.

containers

After Docker was introduced on Linux the term OS Level Virtualization gave way to the Containers term and now FreeBSD Jails along with Solaris Zones/Containers are called 1st generation containers. But that change of naming nomenclature does not make FreeBSD Jails less powerful. They are also really brain dead simple to use. You just need a directory – for example /jail/nextcloud – where you will extract the FreeBSD Base System for the desired release version – for example base.txz from 12.1-RELEASE – and create the Jail config in the /etc/jail.conf file as shown below.

# mkdir -p /jail/nextcloud
# fetch -o - http://ftp.freebsd.org/pub/FreeBSD/releases/amd64/12.1-RELEASE/base.txz | tar --unlink -xpJf - -C /jail/nextcloud
# cat /etc/jail.conf
nextcloud {
  host.hostname = nextcloud.local;
  ip4.addr = 10.0.0.100;
  path = /jail/nextcloud;
}

Now you can start your Jail right away.

# service jail onestart nextcloud
Starting jails: nextcloud.

Voila! Your FreeBSD Jail is already running.

# jls
   JID  IP Address      Hostname                      Path
     1  10.0.0.100      nextcloud.local               /jail/nextcloud

You can of course have a trimmed down version of the FreeBSD Base System in the Jail if that is needed. The ZFS filesystem also helps greatly here because with zfs clone only your ‘base’ Jail takes space – plus the changes you make in the Jails cloned from it. Thanks to another FreeBSD subsystem – the Linux Binary Compatibility – you can also create a Linux Jail – for example a Jail running Devuan.
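
A rough sketch of that zfs clone approach – the dataset names are just examples and assume your Jails live on a zroot/jail dataset:

# zfs snapshot zroot/jail/base-12.1@clean
# zfs clone zroot/jail/base-12.1@clean zroot/jail/nextcloud
# zfs clone zroot/jail/base-12.1@clean zroot/jail/matrix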

The FreeBSD Jails are also very lightweight. You can boot and use about 1000 FreeBSD Jails on a single FreeBSD system with 4 GB RAM.

They are also very easy to debug and troubleshoot compared even to plain Docker – not to mention Kubernetes which requires a whole team of highly skilled people to maintain.

The FreeBSD Jails may be configured/managed using only the Base System utilities such as jls(8)/jexec(8) but you can also select from many third party Jail management frameworks. From all the available ones I would choose BastilleBSD because of its modern approach and many ready to use templates for all needed use cases.
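
Day to day management with just the Base System tools looks more or less like this – the www/nextcloud package inside the Jail is used here only as an example:

# jls
# jexec nextcloud /bin/sh
root@nextcloud:/ # pkg install www/nextcloud
root@nextcloud:/ # exit
# service jail onerestart nextcloud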

More on the Jails topic:

FreeBSD Ports Infrastructure

This is another example of why FreeBSD rocks so much. When you install Ubuntu or CentOS in some version there is a chance that you will end up not with the latest versions of packages but with versions that were quite up-to-date when that distribution version was released. It’s especially visible in the CentOS world (and its upstream enterprise source system from Red Hat) where packages are quite up-to-date when the .0 (dot zero) release is published but are VERY outdated when the .8 or .9 incarnation of that release is available. Not to mention that Firefox for example is released every month …

packages

As I said before when describing the FreeBSD Base System, the FreeBSD Ports (and the packages built from them, available through pkg(8)) are independent from it. That means that third party software from FreeBSD Ports is almost always up-to-date (or very close to it). You can even check it on the repology.org site for the details. Below you will find a ‘snapshot’ of the repology.org stats from the time of writing this article. The ‘online’ table is very long so I copy/pasted just the systems relevant to the article.

repology

One of the other advantages of FreeBSD Ports is that it offers a really MASSIVE amount of software – 40354 ports at the time of writing this article and still rising. The amount of ready to install packages is a little smaller, with more than 32000 available.

I once migrated for a while to OpenSolaris in 2009 on my Dell Latitude D630 laptop because I really liked all the Solaris features (including ZFS and ZFS Boot Environments that were not available on FreeBSD back then) and the OpenSolaris GNOME based desktop was pretty nice back then, even with the Time Slider feature for ZFS snapshots in the Nautilus file manager. I got a working WiFi connection, sound was working, generally everything on my laptop was supported and working with OpenSolaris … but there was no software. Of course ‘large’ projects like GIMP or OpenOffice were available even in the default pkg(8) repository but not much else. There were fewer than 4000 packages back then on OpenSolaris while about 25000 packages on FreeBSD if I recall correctly.

You can also easily browse the available FreeBSD Ports (and their options) on the web by using the https://freshports.org/ page.

ports

The count of FreeBSD Ports is one thing, the features are another. No matter which Linux distribution you are using, you will find software that was compiled and shipped without that one flag that you desperately need. If you find such software on FreeBSD it ‘hurts’ only for a moment because you can VERY EASILY recompile that software with the needed options and replace the ‘default’ package with yours. For example the FreeBSD project is afraid to provide packages of Lame because of existing MP3 patents, so the multimedia/ffmpeg package is built without MP3 support (with the --disable-libmp3lame flag). That is why I have my own audio/lame and multimedia/ffmpeg packages built with my configure options and that is very easy to achieve. You need to go to the /usr/ports/multimedia/ffmpeg dir, type make config and select [x] LAME in the ncurses dialog. Your chosen options will be saved as a plain /var/db/ports/multimedia_ffmpeg/options file. If you remove that file (or type make rmconfig) then these custom options will be reset to the defaults. Then you type make build deinstall install clean and your port with the new options is ready and installed as a package. Nothing more is needed. You can even lock that package against pkg(8) upgrades with the pkg lock -y ffmpeg command so it will not be modified later, but it’s better to rebuild such packages every time you do a pkg upgrade because of library version bumps and changes. While it’s very easy and fast to create a script with these commands to make it more automated, you can also use other parts of the FreeBSD Ports infrastructure – enter Poudriere (or Synth) – more on that in the next part.
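
Here is that whole procedure gathered in one place – a sketch assuming the default FreeBSD Ports tree under /usr/ports:

# cd /usr/ports/multimedia/ffmpeg
# make config
# make build deinstall install clean
# pkg lock -y ffmpeg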

You also do not have to configure each port that way (which could be a PITA for a large amount of ports) – you may specify your needed (OPTIONS_SET) or unwanted (OPTIONS_UNSET) options only once, globally, in the /etc/make.conf file. You can also specify which default versions of software you want to use, for example Apache 2.2 instead of 2.4 and PHP 7.0 instead of 7.2. You can find all the default versions in the /usr/ports/Mk/bsd.default-versions.mk file. Once you set up these options you can build/rebuild or update your packages from FreeBSD Ports with the portmaster(8) tool. Like on Gentoo Linux with USE flags. But this is the original – Gentoo took all/most of its ideas from the FreeBSD system and its Ports infrastructure.
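
A short /etc/make.conf sketch with such global settings – the options and versions below are only examples, not a recommendation:

OPTIONS_SET=       LAME
OPTIONS_UNSET=     PULSEAUDIO
DEFAULT_VERSIONS+= apache=2.2 php=7.0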

Poudriere is a build framework that uses FreeBSD Ports and FreeBSD Jails to build the requested packages in a clean, reproducible way. You can create a whole new binary package repository for the pkg(8) command to use with it. I mentioned Synth because while Poudriere is often used to produce a whole package repository, Synth is usually used just to rebuild the several packages that do not fit your needs.

There is one important thing about FreeBSD Ports that is often misunderstood by newcomers. What is the difference between the Ports and the packages that are fetched and installed by the pkg(8) tool? It’s quite simple. A package is just a built and installed port. Nothing more, nothing less. When you use the binary packages via the pkg(8) command you are using packages that someone (the FreeBSD project in that case) built for you from the FreeBSD Ports at some point in time. While FreeBSD strives to maintain as up-to-date built packages as possible, it’s the nature of FreeBSD Ports that they are always more up-to-date than the built packages. That is why you may build and install a new version of a needed package by yourself using FreeBSD Ports. One may think of such usage when it comes to security holes. When some locally executed command (like file(1) for example) has a security hole then it’s not critical for you to update it as fast as possible because that security hole can be harmless for you, but when a new version of Firefox fixes a very important security hole then it’s better to update to the FreeBSD Ports version faster because waiting 2 days for the package to be built (along with other packages) can be too long.

More on the FreeBSD Ports topic:

Updating/Building from Source

While the FreeBSD Ports infrastructure is for third party software, the FreeBSD Base System (or its parts) can also be easily and conveniently built from source. The FreeBSD kernel config is also very small and simple. While a Linux kernel config contains thousands of options – 4432 for example in the default CentOS 8.2 install – the FreeBSD GENERIC config has about 20 times fewer options – only 260. But that does not exhaust the topic. You can start with the MINIMAL FreeBSD kernel config which has only 75 options specified.

Linux # grep -c '^CONFIG' /boot/config-$( uname -r )
4432

FreeBSD # grep -c -E '^(device|options)' /usr/src/sys/amd64/conf/GENERIC
260

FreeBSD # grep -c -E '^(device|options)' /usr/src/sys/amd64/conf/MINIMAL
75

… and it’s not only about the smaller amount of options. Can you tell me how many steps (and which ones) are required to rebuild CentOS or Ubuntu for example without Bluetooth support?

code

On the contrary it’s very simple (and fast) on the FreeBSD side. While the /etc/make.conf file is used to enable/disable Ports options, the /etc/src.conf file is used to enable/disable FreeBSD Base System options when building it from source. To build FreeBSD without Bluetooth support just add WITHOUT_BLUETOOTH=yes to the /etc/src.conf file and type these commands to build it:

# beadm create safe
# cd /usr/src
# make buildworld kernel
# reboot
# cd /usr/src
# make installworld
# mergemaster -iU
# reboot

Voila! You now have FreeBSD without Bluetooth support … and if any of the steps failed, or because of your lack of experience/expertise your FreeBSD system does not boot or is broken, you can use tools from /rescue to try to fix it (or at least figure out what is broken) and when you do not want to cope with this just select the safe ZFS Boot Environment in the FreeBSD loader(8) to boot into the system from before you started building the modified version of FreeBSD. Yes, you are bulletproof here. With 294 WITHOUT_* options and 125 WITH_* options you can really tune the FreeBSD Base System to your needs.

# zgrep -c WITHOUT_ /usr/share/man/man5/src.conf.5.gz
294

# zgrep -c WITH_ /usr/share/man/man5/src.conf.5.gz
125
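
An example /etc/src.conf could look like this – a sketch only, pick the WITHOUT_/WITH_ knobs that match your use case (they are all described in src.conf(5)):

WITHOUT_BLUETOOTH=yes
WITHOUT_SENDMAIL=yes
WITHOUT_PROFILE=yes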

The big downside of updating FreeBSD from source is that you can not use the freebsd-update(8) tool to do it … but nothing stops you from creating your own FreeBSD Update Server so you will be able to use freebsd-update(8) with a CURRENT or STABLE system instead of RELEASE. That process is described in the Build Your Own FreeBSD Update Server article of the official FreeBSD documentation.

More on the FreeBSD Source Updates/Builds topic:

Storage

Storage is one of the areas where FreeBSD really shines. Lots of people adore FreeBSD for its well integrated ZFS filesystem and it’s really true – ZFS on FreeBSD has always been a first class citizen. Lately OpenZFS 2.0 has also been integrated from the upstream joint FreeBSD and Linux repository. More and more FreeBSD features and solutions are using ZFS features.

openzfs

Most of the people that like the integrated ZFS in FreeBSD do not know about the FreeBSD GEOM modular disk transformation framework which provides various storage related features and utilities like software RAID0/RAID1/RAID10/RAID3/RAID5 configurations or transparent encryption of underlying devices with GELI/GBDE (like LUKS on Linux). It also allows transparent filesystem journaling for ANY filesystem with GJOURNAL (yes, also for FAT32 or exFAT) and allows one to export block devices over the network with GEOM GATE devices (like NFS for block devices).
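
To give you a taste of GEOM, here is a sketch of creating a two disk gmirror(8) RAID1 and a GELI encrypted provider – the disk names are examples only and these commands wipe the data on them:

# gmirror load
# gmirror label -v gm0 /dev/ada1 /dev/ada2
# newfs -U /dev/mirror/gm0

# geli init -s 4096 /dev/ada3
# geli attach /dev/ada3
# newfs -U /dev/ada3.eli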

storage

FreeBSD also has its own FUSE implementation which allows all these FUSE based filesystems to work natively on FreeBSD. While lots of Linux folks know DRBD, probably very few of them know that FreeBSD comes with its own DRBD like solution called HAST – which does exactly the same thing. While ZFS has a lot of features and possibilities, FreeBSD still maintains and develops the fast and small memory footprint UFS filesystem, which today is used either with Soft Updates (SU) or Journaled Soft Updates (SUJ) depending on the use case. For example 10 TB of data on a UFS filesystem with Journaled Soft Updates (SUJ) takes about 1 minute under fsck(8). These storage solutions are available from the FreeBSD Base System alone. The FreeBSD Ports offer much more, with distributed filesystem solutions such as CEPH, LeoFS, LizardFS or Minio for Amazon S3 compatible storage.

More on the Storage topic:

Init System

FreeBSD offers a really simple yet very powerful init system. It has a system wide config in the /etc/rc.conf file where you can enable/disable the needed services with service_enable=YES and service_enable=NO stanzas. You do not even need to launch vi(1) to add them – just type sysrc service_enable=YES and they are added to the /etc/rc.conf file. There are also default values and services that are enabled out of the box and you will find them – along with many comments – in the /etc/defaults/rc.conf file. Each FreeBSD service file has PROVIDE/REQUIRE stanzas which are then used to automatically order the services to start. Services that can be run in parallel are started in parallel to save time. For example it’s pointless to start the sshd(8) daemon without network. To start or stop a service you type the service sshd start or service sshd stop command. But when a service is not enabled in the /etc/rc.conf file then you need to use onestart and onestop instead. The Base System separation remains here as FreeBSD Base System services are located in the /etc/rc.d directory and third party applications from ports/packages are kept under the /usr/local prefix which means the /usr/local/etc/rc.d dir.
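
In practice the whole service management dance looks like this – nginx is used below only as an example of a third party package:

# sysrc sshd_enable=YES
# service sshd start
# service nginx onestart
# sysrc nginx_enable=YES
# service nginx start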

When using systemd(1) you never know how the services are going to start because it will be different each time. Zero determinism. On FreeBSD you know exactly which services will start and when because they are always ordered in the same way according to the PROVIDE/REQUIRE stanzas. FreeBSD also offers a tool that will tell you the exact order – rcorder(8) – which can be used for all services, Base System services or third party services separately. There is also the service -r command that will show you what the order was at boot time.

# rcorder /etc/rc.d/* | head
/etc/rc.d/growfs
/etc/rc.d/sysctl
/etc/rc.d/hostid
/etc/rc.d/zvol
/etc/rc.d/dumpon
/etc/rc.d/ddb
/etc/rc.d/geli
/etc/rc.d/gbde
/etc/rc.d/ccd
/etc/rc.d/swap

# rcorder /usr/local/etc/rc.d/* | tail
/usr/local/etc/rc.d/hald
/usr/local/etc/rc.d/git_daemon
/usr/local/etc/rc.d/fscd
/usr/local/etc/rc.d/cupsd
/usr/local/etc/rc.d/cups_browsed
/usr/local/etc/rc.d/clamav-clamd
/usr/local/etc/rc.d/clamav-milter
/usr/local/etc/rc.d/clamav-freshclam
/usr/local/etc/rc.d/avahi-dnsconfd
/usr/local/etc/rc.d/aria2

# rcorder /etc/rc.d/* /usr/local/etc/rc.d/* 2> /dev/null | grep -C 3 sshd
/etc/rc.d/ubthidhci
/etc/rc.d/syscons
/etc/rc.d/swaplate
/etc/rc.d/sshd
/etc/rc.d/cron
/etc/rc.d/jail
/etc/rc.d/localpkg

Adding a new service to FreeBSD is also very easy as the template for a new service is very small and simple.

#!/bin/sh

. /etc/rc.subr

name=dummy
rcvar=dummy_enable

start_cmd="${name}_start"
stop_cmd=":"

load_rc_config $name
: ${dummy_enable:=no}
: ${dummy_msg="Nothing started."}

dummy_start()
{
	echo "$dummy_msg"
}

run_rc_command "$1"

If that is not simple enough for you there is a dedicated FreeBSD article about writing them – Practical rc.d Scripting in BSD – available here.

More on the Init System topic:

Linux Binary Compatibility

While Linux can not be FreeBSD – FreeBSD can be Linux – and it’s not some slow emulation – it’s an implementation of the Linux system calls. There was a time when enterprises used to run Linux only applications (not available on FreeBSD back then) using the Linux Binary Compatibility on FreeBSD because it was faster than running them natively on Linux – FreeBSD Used to Generate Spectacular Special Effects – an official FreeBSD Press Release about FreeBSD being used to generate special effects for one of the best movies of all time – The Matrix (1999).

matrix

Today the LINUX_COMPAT is also natively fast and allows one to run Linux applications – even Linux games in X11 with hardware acceleration for graphics. Think of it as WINE but for Linux applications. It lives under the /compat/linux directory. It even implements the Linux /proc virtual filesystem which can be mounted at the /compat/linux/proc dir but it’s not mandatory. For any software that does not come with source code and works on Linux, the Linux Binary Compatibility saves the day. For example the f.lux project. Before I got to know Redshift I used the f.lux Linux binary via LINUX_COMPAT to suppress the blue spectrum light from my FreeBSD screen. The Linux Binary Compatibility subsystem can also be used to run Linux based FreeBSD Jails – with Devuan for example.
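
Enabling it is a matter of a few commands – a sketch assuming the CentOS 7 based userland from packages, with the optional Linux /proc mount mentioned above:

# pkg install linux_base-c7
# sysrc linux_enable=YES
# service linux start
# mount -t linprocfs linprocfs /compat/linux/proc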

More on the Linux Binary Compatibility topic:

Simplicity

FreeBSD is simple but not coarse or ornery. For example, like Linux, the FreeBSD system also supports the /proc virtual filesystem but on FreeBSD it’s optional and not used by default while Linux could not live without it. But while Linux has a mandatory /proc it also has another virtual filesystem residing under /sys … so why do Linux people need two different virtual filesystems with similar purposes? Why could they not put everything under /proc as it already existed? That is a big enigma for my sanity.

But /sys is not the end of that madness. It’s just the beginning.

What about these?

  • securityfs
  • devpts
  • cgroup
  • pstore
  • bpf
  • configfs
  • selinuxfs
  • systemd-1
  • mqueue
  • debugfs
  • hugetlbfs

Take a look at the FreeBSD mount(8) output after the default install on ZFS.

FreeBSD # mount
zroot/ROOT/12.1 on / (zfs, local, noatime, nfsv4acls)
devfs on /dev (devfs, local, multilabel)
zroot/tmp on /tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot/var/mail on /var/mail (zfs, local, nfsv4acls)
zroot/usr/home on /usr/home (zfs, local, noatime, nfsv4acls)
zroot/var/crash on /var/crash (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/log on /var/log (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/audit on /var/audit (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/tmp on /var/tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot/usr/src on /usr/src (zfs, local, noatime, nfsv4acls)
zroot/usr/ports on /usr/ports (zfs, local, noatime, nosuid, nfsv4acls)

Several ZFS datasets and one virtual devfs filesystem for the /dev directory. With an install on UFS it would be similar, with several UFS partitions mounted instead of ZFS datasets.

Take a look at the CentOS 8.2 installation with just one physical root (/) XFS filesystem.

[root@centos8 ~]# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=919388k,nr_inodes=229847,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,seclabel,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,seclabel,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime,seclabel)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpuset)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,memory)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,blkio)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,hugetlb)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,net_cls,net_prio)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpu,cpuacct)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,freezer)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,perf_event)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,rdma)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,pids)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,devices)
configfs on /sys/kernel/config type configfs (rw,relatime)
/dev/sda1 on / type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
selinuxfs on /sys/fs/selinux type selinuxfs (rw,relatime)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=34,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=17309)
mqueue on /dev/mqueue type mqueue (rw,relatime,seclabel)
debugfs on /sys/kernel/debug type debugfs (rw,relatime,seclabel)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,seclabel,pagesize=2M)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=187088k,mode=700)

Fuck me. It’s really hard to even find any REAL filesystem there … fortunately we can ask for only the XFS filesystems to be displayed.

[root@centos8 ~]# mount -t xfs
/dev/sda1 on / type xfs (rw,relatime,seclabel,attr2,inode64,noquota)

Let’s get to networking now. Let’s assume that you want to make a standard enterprise networking setup on a physical server with two interfaces aggregated together into a highly available bond0 interface (lagg0 on FreeBSD), then put a VLAN tag on it and an IP address on that VLAN. The CentOS 7.x/8.x installer (Anaconda) will welcome you with this mess.

[root@centos7 ~]# ls -1 /etc/sysconfig/network-scripts/ifcfg-*
ifcfg-Bond_connection_1
ifcfg-eno49
ifcfg-eno49-1
ifcfg-eno50
ifcfg-eno50-1
ifcfg-VLAN_connection_1

[root@centos7 ~]# cat /etc/sysconfig/network-scripts/ifcfg-Bond_connection_1
DEVICE=bond0
BONDING_OPTS="miimon=1 updelay=0 downdelay=0 mode=active-backup"
TYPE=Bond
BONDING_MASTER=yes
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=dhcp
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_PRIVACY=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME="Bond connection 1"
UUID=ca85417f-8852-43bf-96ee-5bd3f0f83648
ONBOOT=yes

[root@centos7 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eno49
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=dhcp
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=eno49
UUID=2f60f50b-38ad-492a-b90a-ba736acf6792
DEVICE=eno49
ONBOOT=no

[root@centos7 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eno49-1
HWADDR=xx:xx:xx:xx:xx:xx
TYPE=Ethernet
NAME=eno49
UUID=342b8494-126d-4f3a-b749-694c8c922aa1
DEVICE=eno49
ONBOOT=yes
MASTER=bond0
SLAVE=yes

[root@centos7 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eno50
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=dhcp
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=eno50
UUID=4fd36e24-1c6d-4a65-a316-7a14e9a92965
DEVICE=eno50
ONBOOT=no

[root@centos7 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eno50-1
HWADDR=xx:xx:xx:xx:xx:xx
TYPE=Ethernet
NAME=eno50
UUID=a429b697-73c2-404d-9379-472cb3c35e06
DEVICE=eno50
ONBOOT=yes
MASTER=bond0
SLAVE=yes

[root@centos7 ~]# cat /etc/sysconfig/network-scripts/ifcfg-VLAN_connection_1
VLAN=yes
TYPE=Vlan
PHYSDEV=ca85417f-8852-43bf-96ee-5bd3f0f83648
VLAN_ID=601
REORDER_HDR=yes
GVRP=no
MVRP=no
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=10.20.30.40
PREFIX=24
GATEWAY=10.20.30.1
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_PRIVACY=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME="VLAN connection 1"
UUID=90f7a9bb-1443-4adf-a3eb-86a03b23ecfb
ONBOOT=yes

For the record – I have chosen a ‘STATIC’ IPv4 address but the installer made these interfaces use DHCP along with that STATIC address. That could be a bug but let’s get to the point.

After manual fixing with vi(1) (and an hour later) this is how it is supposed to look.

[root@centos7 ~]# cat /etc/sysconfig/network
GATEWAY=10.20.30.1
NOZEROCONF=yes

[root@centos7 ~]# ls -1 /etc/sysconfig/network-scripts/ifcfg-*
ifcfg-bond0
ifcfg-bond0.601
ifcfg-eno49
ifcfg-eno50

[root@centos7 ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BONDING_OPTS="miimon=1 updelay=0 downdelay=0 mode=active-backup"
TYPE=Bond
BONDING_MASTER=yes
BOOTPROTO=none
IPV4_FAILURE_FATAL=no
IPV6INIT=no
ONBOOT=yes

[root@centos7 ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond0.601
VLAN=yes
TYPE=Vlan
VLAN_ID=601
DEVICE=bond0.601
REORDER_HDR=yes
GVRP=no
MVRP=no
BOOTPROTO=none
IPADDR=10.20.30.40
PREFIX=24
IPV4_FAILURE_FATAL=no
IPV6INIT=no
ONBOOT=yes

[root@centos7 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eno49
BOOTPROTO=none
IPV4_FAILURE_FATAL=no
IPV6INIT=no
TYPE=Ethernet
NAME=eno49
DEVICE=eno49
ONBOOT=yes
MASTER=bond0
SLAVE=yes

[root@centos7 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eno50
BOOTPROTO=none
IPV4_FAILURE_FATAL=no
IPV6INIT=no
TYPE=Ethernet
NAME=eno50
DEVICE=eno50
ONBOOT=yes
MASTER=bond0
SLAVE=yes

Better … but it still takes A LOT OF SPACE and several files to cover quite a simple setup. Not to mention its level of complication, which makes it very error prone. The same configuration on FreeBSD takes just 7 lines in a single /etc/rc.conf file as shown below.

ifconfig_fxp0="up"
ifconfig_fxp1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto failover laggport fxp0 laggport fxp1"
vlans_lagg0="601"
ifconfig_lagg0_601="inet 10.20.30.40/24"
defaultrouter="10.20.30.1"
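
To apply it without a reboot you can just restart the Base System network scripts – a sketch, and obviously do not do this over the very interface you are reconfiguring from a remote session:

# service netif restart
# service routing restart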

What about the boot process? FreeBSD boots from a root on ZFS partition with just a small 512 KB non-mountable boot partition. No separate /boot filesystem is needed. On the other hand Linux always needs that separate /boot partition filled with GRUB modules, no matter if it’s ZFS or LVM. That is why the implementation of ZFS Boot Environments is quite complicated on Linux, because even if you have root on ZFS on a Linux system there is still an unprotected /boot filesystem that can not be snapshotted with ZFS and has to be protected the old classic way, which kills the idea of ZFS Boot Environments on Linux.

FreeBSD is a really simple and well thought out operating system. But also a very underestimated one.

Evolution Instead Rewriting

How many Linux tools or subsystems have been abandoned or superseded by new ones? Why was the ifconfig(8) command not updated with new options and instead the new ip(8) command was introduced? Same with netstat(8) being replaced by ss(8). Same with arp(8)/iwconfig/route(8) and many more. What about the whole init system? The Linux world has been taken over by systemd(1) whether you like it or not. Even distributions that had grown their own mature init systems, like Ubuntu with its Upstart, have moved to systemd(1) altogether. The distributions that do not use it are very few and considered a niche today.

evolution

In FreeBSD land, on the contrary, such things happen only if there is no other way to implement new things. It’s the last thing wanted in FreeBSD. FreeBSD evolves and is developed with stability and backward compatibility in mind. Userland tools are grown and updated with new options instead of being rewritten over and over again. Not to mention how many new bugs are introduced by replacing one tool with another.

More on the Evolution Instead Rewriting topic:

Documentation

Having a system that can do almost anything but not knowing how to do it makes that system pretty useless (or at least a real PITA to use). FreeBSD offers second to none documentation that is actively maintained and updated. Along with its legendary FreeBSD Handbook and FreeBSD FAQ the FreeBSD project also offers official FreeBSD Articles about various FreeBSD topics. The man pages are also very detailed and contain many examples. There is also the FreeBSD Wiki page for work in progress documentation and ideas related to FreeBSD development, and if you have any problems or questions related to FreeBSD there are official FreeBSD Forums and oldschool Mailing Lists available.

documentation

These are only the official project knowledge sources but there are also lots of FreeBSD books. Here are the best and most up-to-date ones.

  • Absolute FreeBSD – Complete Guide to FreeBSD – 3rd Edition (2019)
  • Beginning Modern Unix (2018)
  • Book of PF – 3rd Edition (2015)
  • Design and Implementation of FreeBSD 11 Operating System – 2nd Edition (2015)
  • FreeBSD Device Drivers (2012)
  • FreeBSD Mastery – ZFS (2015)
  • FreeBSD Mastery – Advanced ZFS (2016)
  • FreeBSD Mastery – Storage Essentials (2014)
  • FreeBSD Mastery – Specialty Filesystems (2015)
  • FreeBSD Mastery – Jails (2019)

There are also two magazines that are dedicated to BSD and FreeBSD systems. Both are free and cover lots of interesting topics regarding FreeBSD.

With all this knowledge and support it’s really hard not to achieve what you need/want with the FreeBSD system.

Community

Last but not least – and I would say it’s even more important than good documentation (which FreeBSD has awesome) – the community. People that use FreeBSD do so consciously and are often experienced not only in FreeBSD land but also in topics related to other UNIX systems. Often they took the long road of first using Linux systems before finally settling in FreeBSD land, or they still do Linux administration for a living while using the far more reasonable and sensible FreeBSD solutions otherwise. I always find the FreeBSD Community helpful and friendly. Always willing to help – especially towards newcomers. Even when you try to ‘force’ FreeBSD people to ‘fight’ in an unjust/doubtful discussion they will reply with dignity and technical arguments instead of yelling at you.

The FreeBSD project has even made several articles and Handbook chapters especially for Linux newcomers (sometimes called systemd(1) refugees).

Closing Thoughts

I tried really hard not to make this a Linux rant but some may feel it that way – if so, please remember that this was not my intention. FreeBSD, like Linux and like any other operating system, has its ups and downs. I hope that I have shown you the most interesting FreeBSD parts. I may add new sections here without warning in the future 🙂

EOF


FreeBSD Cluster with Pacemaker and Corosync

I always missed ‘proper’ cluster software for FreeBSD systems. Recently I got to run several Pacemaker/Corosync based clusters on Linux systems. I thought about how to make a similar high availability solution on FreeBSD and I was really shocked when I figured out that both the Pacemaker and Corosync tools are available in the FreeBSD Ports and packages as net/pacemaker2 and net/corosync2 respectively.

In this article I will check how well a Pacemaker and Corosync cluster works on FreeBSD.

pacemaker

There are many definitions of a cluster. The one that I like the most is that a cluster is a system that is still redundant after losing one of its nodes (it is still a cluster). This means that 3 nodes is the minimum for a cluster by that definition. Two node clusters are quite problematic because of their bigger exposure to the split brain problem. That is why in two node clusters additional devices or systems are often added to make sure that split brain does not happen. For example one can add a third node without any resources or services, just in a ‘witness’ role. Another way is to add a shared disk resource that will serve the same purpose – often it’s a raw volume with the SCSI-3 Persistent Reservation mechanism used.
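
For the record – Corosync’s votequorum also has dedicated knobs for the two node case; a sketch of such a quorum section is shown below (see votequorum(5) for the details – we will not use it in this three node lab):

quorum {
  provider: corosync_votequorum
  two_node: 1
  wait_for_all: 1
}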

Lab Setup

As usual it will be entirely VirtualBox based and it will consist of 3 hosts. To avoid creating 3 identical FreeBSD installations I used the 12.1-RELEASE virtual machine image available from the FreeBSD Project directly:

There are several formats available – qcow2/raw/vhd/vmdk – but as I will be using VirtualBox I used the VMDK one.

Here is the list of the machines for the Pacemaker/Corosync cluster:

  • 10.0.10.111 node1
  • 10.0.10.112 node2
  • 10.0.10.113 node3

Each VirtualBox virtual machine for FreeBSD is the default one (as suggested in the VirtualBox wizard) with 512 MB RAM and NAT Network, as shown in the image below.

machine

Here is the configuration of the NAT Network on VirtualBox.

nat-network-01

nat-network-02

Before we try to connect to our FreeBSD machines we need to make a minimal network configuration inside each VM. Each FreeBSD machine will have a minimal /etc/rc.conf file as shown in the example for the node1 host.

root@node1:~ # cat /etc/rc.conf
hostname=node1
ifconfig_em0="inet 10.0.10.111/24 up"
defaultrouter=10.0.10.1
sshd_enable=YES

For the setup purposes we will need to allow root login on these FreeBSD machines with the PermitRootLogin yes option in the /etc/ssh/sshd_config file. You will also need to restart the sshd(8) service after the change.

root@node1:~ # grep PermitRootLogin /etc/ssh/sshd_config
PermitRootLogin yes

root@node1:~ # service sshd restart

By using NAT Network with Port Forwarding the FreeBSD machines will be accessible on localhost ports. For example the node1 machine will be available on port 2211, the node2 machine on port 2212 and so on. This is shown in the sockstat utility output below.

nat-network-03-sockstat

nat-network-04-ssh

To connect to such a machine from the VirtualBox host system you will need this command:

vboxhost % ssh -l root localhost -p 2211

Packages

As we now have ssh(1) connectivity we need to add the needed packages. To make our VMs resolve DNS queries we first point them at a resolver. We will also switch to the ‘latest’ branch of the pkg(8) packages.

root@node1:~ # echo 'nameserver 1.1.1.1' > /etc/resolv.conf
root@node1:~ # sed -i '' s/quarterly/latest/g /etc/pkg/FreeBSD.conf

Remember to repeat these two commands above on the node2 and node3 systems.

Now we will add Pacemaker and Corosync packages.

root@node1:~ # pkg install pacemaker2 corosync2 crmsh

root@node2:~ # pkg install pacemaker2 corosync2 crmsh

root@node3:~ # pkg install pacemaker2 corosync2 crmsh

These are the messages from both pacemaker2 and corosync2 that we need to address.

Message from pacemaker2-2.0.4:

--
For correct operation, maximum socket buffer size must be tuned
by performing the following command as root :

# sysctl kern.ipc.maxsockbuf=18874368

To preserve this setting across reboots, append the following
to /etc/sysctl.conf :

kern.ipc.maxsockbuf=18874368

======================================================================

Message from corosync2-2.4.5_1:

--
For correct operation, maximum socket buffer size must be tuned
by performing the following command as root :

# sysctl kern.ipc.maxsockbuf=18874368

To preserve this setting across reboots, append the following
to /etc/sysctl.conf :

kern.ipc.maxsockbuf=18874368

We need to change the kern.ipc.maxsockbuf parameter. Let’s do it then.

root@node1:~ # echo 'kern.ipc.maxsockbuf=18874368' >> /etc/sysctl.conf
root@node1:~ # service sysctl restart

root@node2:~ # echo 'kern.ipc.maxsockbuf=18874368' >> /etc/sysctl.conf
root@node2:~ # service sysctl restart

root@node3:~ # echo 'kern.ipc.maxsockbuf=18874368' >> /etc/sysctl.conf
root@node3:~ # service sysctl restart

Let’s check what binaries come with these packages.

root@node1:~ # pkg info -l pacemaker2 | grep bin
        /usr/local/sbin/attrd_updater
        /usr/local/sbin/cibadmin
        /usr/local/sbin/crm_attribute
        /usr/local/sbin/crm_diff
        /usr/local/sbin/crm_error
        /usr/local/sbin/crm_failcount
        /usr/local/sbin/crm_master
        /usr/local/sbin/crm_mon
        /usr/local/sbin/crm_node
        /usr/local/sbin/crm_report
        /usr/local/sbin/crm_resource
        /usr/local/sbin/crm_rule
        /usr/local/sbin/crm_shadow
        /usr/local/sbin/crm_simulate
        /usr/local/sbin/crm_standby
        /usr/local/sbin/crm_ticket
        /usr/local/sbin/crm_verify
        /usr/local/sbin/crmadmin
        /usr/local/sbin/fence_legacy
        /usr/local/sbin/iso8601
        /usr/local/sbin/pacemaker-remoted
        /usr/local/sbin/pacemaker_remoted
        /usr/local/sbin/pacemakerd
        /usr/local/sbin/stonith_admin

root@node1:~ # pkg info -l corosync2 | grep bin
        /usr/local/bin/corosync-blackbox
        /usr/local/sbin/corosync
        /usr/local/sbin/corosync-cfgtool
        /usr/local/sbin/corosync-cmapctl
        /usr/local/sbin/corosync-cpgtool
        /usr/local/sbin/corosync-keygen
        /usr/local/sbin/corosync-notifyd
        /usr/local/sbin/corosync-quorumtool

root@node1:~ # pkg info -l crmsh | grep bin
        /usr/local/bin/crm

Cluster Initialization

Now we will initialize our FreeBSD cluster.

First we need to make sure that the names of the nodes are resolvable.

root@node1:~ # tail -3 /etc/hosts

10.0.10.111 node1
10.0.10.112 node2
10.0.10.113 node3

root@node2:~ # tail -3 /etc/hosts

10.0.10.111 node1
10.0.10.112 node2
10.0.10.113 node3

root@node3:~ # tail -3 /etc/hosts

10.0.10.111 node1
10.0.10.112 node2
10.0.10.113 node3


Now we will generate the Corosync key.

root@node1:~ # corosync-keygen
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/random.
Press keys on your keyboard to generate entropy.
Writing corosync key to /usr/local/etc/corosync/authkey.

root@node1:~ # echo $?
0

root@node1:~ # ls -l /usr/local/etc/corosync/authkey
-r--------  1 root  wheel  128 Sep  2 20:37 /usr/local/etc/corosync/authkey

Now for the Corosync configuration file. Fortunately some examples are provided by the package maintainer.

root@node1:~ # pkg info -l corosync2 | grep example
        /usr/local/etc/corosync/corosync.conf.example
        /usr/local/etc/corosync/corosync.conf.example.udpu

We will take the second one as a base for our config.

root@node1:~ # cp /usr/local/etc/corosync/corosync.conf.example.udpu /usr/local/etc/corosync/corosync.conf

root@node1:~ # vi /usr/local/etc/corosync/corosync.conf
               /* LOTS OF EDITS HERE */

root@node1:~ # cat /usr/local/etc/corosync/corosync.conf

totem {
  version: 2
  crypto_cipher: aes256
  crypto_hash: sha256
  transport: udpu

  interface {
    ringnumber: 0
    bindnetaddr: 10.0.10.0
    mcastport: 5405
    ttl: 1
  }
}

logging {
  fileline: off
  to_logfile: yes
  to_syslog: no
  logfile: /var/log/cluster/corosync.log
  debug: off
  timestamp: on

  logger_subsys {
    subsys: QUORUM
    debug: off
  }
}

nodelist {

  node {
    ring0_addr: 10.0.10.111
    nodeid: 1
  }

  node {
    ring0_addr: 10.0.10.112
    nodeid: 2
  }

  node {
    ring0_addr: 10.0.10.113
    nodeid: 3
  }

}

quorum {
  provider: corosync_votequorum
  expected_votes: 2
}

Now we need to propagate both the Corosync key and the config across the nodes in the cluster.

We could use a tool created exactly for that – the net/csync2 cluster synchronization tool for example – but plain old net/rsync will serve just as well.

root@node1:~ # pkg install -y rsync

root@node1:~ # rsync -av /usr/local/etc/corosync/ node2:/usr/local/etc/corosync/
The authenticity of host 'node2 (10.0.10.112)' can't be established.
ECDSA key fingerprint is SHA256:/ZDmln7GKi6n0kbad73TIrajPjGfQqJJX+ReSf3NMvc.
No matching host key fingerprint found in DNS.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2' (ECDSA) to the list of known hosts.
Password for root@node2:
sending incremental file list
./
authkey
corosync.conf
service.d/
uidgid.d/

sent 1,100 bytes  received 69 bytes  259.78 bytes/sec
total size is 4,398  speedup is 3.76

root@node1:~ # rsync -av /usr/local/etc/corosync/ node3:/usr/local/etc/corosync/
The authenticity of host 'node2 (10.0.10.112)' can't be established.
ECDSA key fingerprint is SHA256:/ZDmln7GKi6n0kbad73TIrajPjGfQqJJX+ReSf3NMvc.
No matching host key fingerprint found in DNS.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node3' (ECDSA) to the list of known hosts.
Password for root@node3:
sending incremental file list
./
authkey
corosync.conf
service.d/
uidgid.d/

sent 1,100 bytes  received 69 bytes  259.78 bytes/sec
total size is 4,398  speedup is 3.76

Now let’s check that they are the same.

root@node1:~ # cksum /usr/local/etc/corosync/{authkey,corosync.conf}
2277171666 128 /usr/local/etc/corosync/authkey
1728717329 622 /usr/local/etc/corosync/corosync.conf

root@node2:~ # cksum /usr/local/etc/corosync/{authkey,corosync.conf}
2277171666 128 /usr/local/etc/corosync/authkey
1728717329 622 /usr/local/etc/corosync/corosync.conf

root@node3:~ # cksum /usr/local/etc/corosync/{authkey,corosync.conf}
2277171666 128 /usr/local/etc/corosync/authkey
1728717329 622 /usr/local/etc/corosync/corosync.conf

Same.

We can now add corosync_enable=YES and pacemaker_enable=YES to the /etc/rc.conf file.

root@node1:~ # sysrc corosync_enable=YES
corosync_enable:  -> YES

root@node1:~ # sysrc pacemaker_enable=YES
pacemaker_enable:  -> YES

root@node2:~ # sysrc corosync_enable=YES
corosync_enable:  -> YES

root@node2:~ # sysrc pacemaker_enable=YES
pacemaker_enable:  -> YES

root@node3:~ # sysrc corosync_enable=YES
corosync_enable:  -> YES

root@node3:~ # sysrc pacemaker_enable=YES
pacemaker_enable:  -> YES

Let’s start these services then.

root@node1:~ # service corosync start
Starting corosync.
Sep 02 20:55:35 notice  [MAIN  ] Corosync Cluster Engine ('2.4.5'): started and ready to provide service.
Sep 02 20:55:35 info    [MAIN  ] Corosync built-in features:
Sep 02 20:55:35 warning [MAIN  ] interface section bindnetaddr is used together with nodelist. Nodelist one is going to be used.
Sep 02 20:55:35 warning [MAIN  ] Please migrate config file to nodelist.

root@node1:~ # ps aux | grep corosync
root  1695   0.0  7.9 38340 38516  -  S    20:55    0:00.40 /usr/local/sbin/corosync
root  1699   0.0  0.1   524   336  0  R+   20:57    0:00.00 grep corosync

Do the same on the node2 and node3 systems.

Pacemaker is not running yet so the crm status command below will fail.

root@node1:~ # crm status
Could not connect to the CIB: Socket is not connected
crm_mon: Error: cluster is not available on this node
ERROR: status: crm_mon (rc=102): 

We will start it now.

root@node1:~ # service pacemaker start
Starting pacemaker.

root@node2:~ # service pacemaker start
Starting pacemaker.

root@node3:~ # service pacemaker start
Starting pacemaker.

You need to give it a little time to start because if you execute the crm status command right away you will get a ‘0 nodes configured’ message as shown below.

root@node1:~ # crm status
Cluster Summary:
  * Stack: unknown
  * Current DC: NONE
  * Last updated: Wed Sep  2 20:58:51 2020
  * Last change:  
  * 0 nodes configured
  * 0 resource instances configured


Full List of Resources:
  * No resources

… but after a while everything is detected and works as desired.

root@node1:~ # crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node2 (version 2.0.4-2deceaa3ae) - partition with quorum
  * Last updated: Wed Sep  2 21:02:49 2020
  * Last change:  Wed Sep  2 20:59:00 2020 by hacluster via crmd on node2
  * 3 nodes configured
  * 0 resource instances configured

Node List:
  * Online: [ node1 node2 node3 ]

Full List of Resources:
  * No resources

The Pacemaker runs properly.

root@node1:~ # ps aux | grep pacemaker
root      1716   0.0  0.5 10844   2396  -  Is   20:58     0:00.00 daemon: /usr/local/sbin/pacemakerd[1717] (daemon)
root      1717   0.0  5.2 49264  25284  -  S    20:58     0:00.27 /usr/local/sbin/pacemakerd
hacluster 1718   0.0  6.1 48736  29708  -  Ss   20:58     0:00.75 /usr/local/libexec/pacemaker/pacemaker-based
root      1719   0.0  4.5 40628  21984  -  Ss   20:58     0:00.28 /usr/local/libexec/pacemaker/pacemaker-fenced
root      1720   0.0  2.8 25204  13688  -  Ss   20:58     0:00.20 /usr/local/libexec/pacemaker/pacemaker-execd
hacluster 1721   0.0  3.9 38148  19100  -  Ss   20:58     0:00.25 /usr/local/libexec/pacemaker/pacemaker-attrd
hacluster 1722   0.0  2.9 25460  13864  -  Ss   20:58     0:00.17 /usr/local/libexec/pacemaker/pacemaker-schedulerd
hacluster 1723   0.0  5.4 49304  26300  -  Ss   20:58     0:00.41 /usr/local/libexec/pacemaker/pacemaker-controld
root      1889   0.0  0.6 11348   2728  0  S+   21:56     0:00.00 grep pacemaker

We can check how Corosync sees its members.

root@node1:~ # corosync-cmapctl | grep members
runtime.totem.pg.mrp.srp.members.1.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.1.ip (str) = r(0) ip(10.0.10.111) 
runtime.totem.pg.mrp.srp.members.1.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.1.status (str) = joined
runtime.totem.pg.mrp.srp.members.2.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.2.ip (str) = r(0) ip(10.0.10.112) 
runtime.totem.pg.mrp.srp.members.2.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.2.status (str) = joined
runtime.totem.pg.mrp.srp.members.3.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.3.ip (str) = r(0) ip(10.0.10.113) 
runtime.totem.pg.mrp.srp.members.3.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.3.status (str) = joined

… or the quorum information.

root@node1:~ # corosync-quorumtool
Quorum information
------------------
Date:             Wed Sep  2 21:00:38 2020
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          1
Ring ID:          1/12
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2  
Flags:            Quorate 

Membership information
----------------------
    Nodeid      Votes Name
         1          1 10.0.10.111 (local)
         2          1 10.0.10.112
         3          1 10.0.10.113

The Corosync log file is filled with the following information.

root@node1:~ # cat /var/log/cluster/corosync.log
Sep 02 20:55:35 [1694] node1 corosync notice  [MAIN  ] Corosync Cluster Engine ('2.4.5'): started and ready to provide service.
Sep 02 20:55:35 [1694] node1 corosync info    [MAIN  ] Corosync built-in features:
Sep 02 20:55:35 [1694] node1 corosync warning [MAIN  ] interface section bindnetaddr is used together with nodelist. Nodelist one is going to be used.
Sep 02 20:55:35 [1694] node1 corosync warning [MAIN  ] Please migrate config file to nodelist.
Sep 02 20:55:35 [1694] node1 corosync notice  [TOTEM ] Initializing transport (UDP/IP Unicast).
Sep 02 20:55:35 [1694] node1 corosync notice  [TOTEM ] Initializing transmit/receive security (NSS) crypto: aes256 hash: sha256
Sep 02 20:55:35 [1694] node1 corosync notice  [TOTEM ] The network interface [10.0.10.111] is now up.
Sep 02 20:55:35 [1694] node1 corosync notice  [SERV  ] Service engine loaded: corosync configuration map access [0]
Sep 02 20:55:35 [1694] node1 corosync info    [QB    ] server name: cmap
Sep 02 20:55:35 [1694] node1 corosync notice  [SERV  ] Service engine loaded: corosync configuration service [1]
Sep 02 20:55:35 [1694] node1 corosync info    [QB    ] server name: cfg
Sep 02 20:55:35 [1694] node1 corosync notice  [SERV  ] Service engine loaded: corosync cluster closed process group service v1.01 [2]
Sep 02 20:55:35 [1694] node1 corosync info    [QB    ] server name: cpg
Sep 02 20:55:35 [1694] node1 corosync notice  [SERV  ] Service engine loaded: corosync profile loading service [4]
Sep 02 20:55:35 [1694] node1 corosync notice  [QUORUM] Using quorum provider corosync_votequorum
Sep 02 20:55:35 [1694] node1 corosync notice  [SERV  ] Service engine loaded: corosync vote quorum service v1.0 [5]
Sep 02 20:55:35 [1694] node1 corosync info    [QB    ] server name: votequorum
Sep 02 20:55:35 [1694] node1 corosync notice  [SERV  ] Service engine loaded: corosync cluster quorum service v0.1 [3]
Sep 02 20:55:35 [1694] node1 corosync info    [QB    ] server name: quorum
Sep 02 20:55:35 [1694] node1 corosync notice  [TOTEM ] adding new UDPU member {10.0.10.111}
Sep 02 20:55:35 [1694] node1 corosync notice  [TOTEM ] adding new UDPU member {10.0.10.112}
Sep 02 20:55:35 [1694] node1 corosync notice  [TOTEM ] adding new UDPU member {10.0.10.113}
Sep 02 20:55:35 [1694] node1 corosync notice  [TOTEM ] A new membership (10.0.10.111:4) was formed. Members joined: 1
Sep 02 20:55:35 [1694] node1 corosync warning [CPG   ] downlist left_list: 0 received
Sep 02 20:55:35 [1694] node1 corosync notice  [QUORUM] Members[1]: 1
Sep 02 20:55:35 [1694] node1 corosync notice  [MAIN  ] Completed service synchronization, ready to provide service.
Sep 02 20:58:14 [1694] node1 corosync notice  [TOTEM ] A new membership (10.0.10.111:8) was formed. Members joined: 2
Sep 02 20:58:14 [1694] node1 corosync warning [CPG   ] downlist left_list: 0 received
Sep 02 20:58:14 [1694] node1 corosync warning [CPG   ] downlist left_list: 0 received
Sep 02 20:58:14 [1694] node1 corosync notice  [QUORUM] This node is within the primary component and will provide service.
Sep 02 20:58:14 [1694] node1 corosync notice  [QUORUM] Members[2]: 1 2
Sep 02 20:58:14 [1694] node1 corosync notice  [MAIN  ] Completed service synchronization, ready to provide service.
Sep 02 20:58:19 [1694] node1 corosync notice  [TOTEM ] A new membership (10.0.10.111:12) was formed. Members joined: 3
Sep 02 20:58:19 [1694] node1 corosync warning [CPG   ] downlist left_list: 0 received
Sep 02 20:58:19 [1694] node1 corosync warning [CPG   ] downlist left_list: 0 received
Sep 02 20:58:19 [1694] node1 corosync warning [CPG   ] downlist left_list: 0 received
Sep 02 20:58:19 [1694] node1 corosync notice  [QUORUM] Members[3]: 1 2 3
Sep 02 20:58:19 [1694] node1 corosync notice  [MAIN  ] Completed service synchronization, ready to provide service.

Here is the configuration.

root@node1:~ # crm configure show
node 1: node1
node 2: node2
node 3: node3
property cib-bootstrap-options: \
        have-watchdog=false \
        dc-version=2.0.4-2deceaa3ae \
        cluster-infrastructure=corosync

As we will not be configuring the STONITH mechanism we will disable it.

root@node1:~ # crm configure property stonith-enabled=false

New configuration with STONITH disabled.

root@node1:~ # crm configure show
node 1: node1
node 2: node2
node 3: node3
property cib-bootstrap-options: \
        have-watchdog=false \
        dc-version=2.0.4-2deceaa3ae \
        cluster-infrastructure=corosync \
        stonith-enabled=false

The STONITH configuration is out of the scope of this article but properly configured STONITH looks like this.

stonith
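Just to sketch its shape – a purely hypothetical example, assuming the external/ipmi fencing agent (from cluster-glue) is available alongside Pacemaker and that each node has an IPMI/BMC interface – the names, addresses and credentials below are made up:

# crm configure primitive fence-node1 stonith:external/ipmi \
    params hostname=node1 ipaddr=10.0.10.121 userid=ADMIN passwd=SECRET interface=lanplus \
    op monitor interval=60s
# crm configure location l-fence-node1 fence-node1 -inf: node1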

First Service

We will now configure our first highly available service – a classic – a floating IP address :)

root@node1:~ # crm configure primitive IP ocf:heartbeat:IPaddr2 params ip=10.0.10.200 cidr_netmask="24" op monitor interval="30s"

Let's check how it behaves.

root@node1:~ # crm configure show
node 1: node1
node 2: node2
node 3: node3
primitive IP IPaddr2 \
        params ip=10.0.10.200 cidr_netmask=24 \
        op monitor interval=30s
property cib-bootstrap-options: \
        have-watchdog=false \
        dc-version=2.0.4-2deceaa3ae \
        cluster-infrastructure=corosync \
        stonith-enabled=false

Looks good – let's check the cluster status.

root@node1:~ # crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node2 (version 2.0.4-2deceaa3ae) - partition with quorum
  * Last updated: Wed Sep  2 22:03:35 2020
  * Last change:  Wed Sep  2 22:02:53 2020 by root via cibadmin on node1
  * 3 nodes configured
  * 1 resource instance configured

Node List:
  * Online: [ node1 node2 node3 ]

Full List of Resources:
  * IP  (ocf::heartbeat:IPaddr2):        Stopped

Failed Resource Actions:
  * IP_monitor_0 on node3 'not installed' (5): call=5, status='complete', exitreason='Setup problem: couldn't find command: ip', last-rc-change='2020-09-02 22:02:53Z', queued=0ms, exec=132ms
  * IP_monitor_0 on node2 'not installed' (5): call=5, status='complete', exitreason='Setup problem: couldn't find command: ip', last-rc-change='2020-09-02 22:02:54Z', queued=0ms, exec=120ms
  * IP_monitor_0 on node1 'not installed' (5): call=5, status='complete', exitreason='Setup problem: couldn't find command: ip', last-rc-change='2020-09-02 22:02:53Z', queued=0ms, exec=110ms

Crap. A Linuxism. The ip(8) command is expected to be present in the system. This is FreeBSD and, like any UNIX system, it comes with the ifconfig(8) command instead.
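For reference – on FreeBSD a floating address is added and removed as an interface alias with ifconfig(8), which is the kind of operation a custom resource agent will have to perform (shown here with the address used in this setup):

# ifconfig em0 alias 10.0.10.200 netmask 255.255.255.0
# ifconfig em0 -alias 10.0.10.200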

We will have to figure out something else. For now we will delete our useless IP service.

root@node1:~ # crm configure delete IP

Status after deletion.

root@node1:~ # crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node2 (version 2.0.4-2deceaa3ae) - partition with quorum
  * Last updated: Wed Sep  2 22:04:34 2020
  * Last change:  Wed Sep  2 22:04:31 2020 by root via cibadmin on node1
  * 3 nodes configured
  * 0 resource instances configured

Node List:
  * Online: [ node1 node2 node3 ]

Full List of Resources:
  * No resources

Custom Resource

Let's check what resources are available in the stock Pacemaker installation.

root@node1:~ # ls -l /usr/local/lib/ocf/resource.d/pacemaker
total 144
-r-xr-xr-x  1 root  wheel   7484 Aug 29 01:22 ClusterMon
-r-xr-xr-x  1 root  wheel   9432 Aug 29 01:22 Dummy
-r-xr-xr-x  1 root  wheel   5256 Aug 29 01:22 HealthCPU
-r-xr-xr-x  1 root  wheel   5342 Aug 29 01:22 HealthIOWait
-r-xr-xr-x  1 root  wheel   9450 Aug 29 01:22 HealthSMART
-r-xr-xr-x  1 root  wheel   6186 Aug 29 01:22 Stateful
-r-xr-xr-x  1 root  wheel  11370 Aug 29 01:22 SysInfo
-r-xr-xr-x  1 root  wheel   5856 Aug 29 01:22 SystemHealth
-r-xr-xr-x  1 root  wheel   7382 Aug 29 01:22 attribute
-r-xr-xr-x  1 root  wheel   7854 Aug 29 01:22 controld
-r-xr-xr-x  1 root  wheel  16134 Aug 29 01:22 ifspeed
-r-xr-xr-x  1 root  wheel  11040 Aug 29 01:22 o2cb
-r-xr-xr-x  1 root  wheel  11696 Aug 29 01:22 ping
-r-xr-xr-x  1 root  wheel   6356 Aug 29 01:22 pingd
-r-xr-xr-x  1 root  wheel   3702 Aug 29 01:22 remote

Not many … we will try to modify the Dummy service into an IP changer on FreeBSD.

root@node1:~ # cp /usr/local/lib/ocf/resource.d/pacemaker/Dummy /usr/local/lib/ocf/resource.d/pacemaker/ifconfig

root@node1:~ # vi /usr/local/lib/ocf/resource.d/pacemaker/ifconfig
               /* LOTS OF TYPING */

Because of the WordPress blogging system limitations I am forced to post this ifconfig resource as an image … but fear not – the text version is also available here – ifconfig.odt – for download.

Also the first version did not go that well …

root@node1:~ # setenv OCF_ROOT /usr/local/lib/ocf
root@node1:~ # ocf-tester -n resourcename /usr/local/lib/ocf/resource.d/pacemaker/ifconfig
Beginning tests for /usr/local/lib/ocf/resource.d/pacemaker/ifconfig...
* rc=3: Your agent has too restrictive permissions: should be 755
-:1: parser error : Start tag expected, '<' not found
usage: /usr/local/lib/ocf/resource.d/pacemaker/ifconfig {start|stop|monitor}
^
* rc=1: Your agent produces meta-data which does not conform to ra-api-1.dtd
* rc=3: Your agent does not support the meta-data action
* rc=3: Your agent does not support the validate-all action
* rc=0: Monitoring a stopped resource should return 7
* rc=0: The initial probe for a stopped resource should return 7 or 5 even if all binaries are missing
* Your agent does not support the notify action (optional)
* Your agent does not support the demote action (optional)
* Your agent does not support the promote action (optional)
* Your agent does not support master/slave (optional)
* rc=0: Monitoring a stopped resource should return 7
* rc=0: Monitoring a stopped resource should return 7
* rc=0: Monitoring a stopped resource should return 7
* Your agent does not support the reload action (optional)
Tests failed: /usr/local/lib/ocf/resource.d/pacemaker/ifconfig failed 9 tests

But after adding the 755 mode to it and making several (hundred) changes it became usable.

root@node1:~ # vi /usr/local/lib/ocf/resource.d/pacemaker/ifconfig
             /* LOTS OF NERVOUS TYPING */
root@node1:~ # chmod 755 /usr/local/lib/ocf/resource.d/pacemaker/ifconfig
root@node1:~ # setenv OCF_ROOT /usr/local/lib/ocf
root@node1:~ # ocf-tester -n resourcename /usr/local/lib/ocf/resource.d/pacemaker/ifconfig
Beginning tests for /usr/local/lib/ocf/resource.d/pacemaker/ifconfig...
* Your agent does not support the notify action (optional)
* Your agent does not support the demote action (optional)
* Your agent does not support the promote action (optional)
* Your agent does not support master/slave (optional)
* Your agent does not support the reload action (optional)
/usr/local/lib/ocf/resource.d/pacemaker/ifconfig passed all tests

Looks usable.

The ifconfig resource. It's pretty limited and has a hardcoded IP address for now.

ifconfig
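For readers who prefer text over the image – below is a rough minimal sketch of what such an OCF agent can look like. This is NOT the actual ifconfig resource from the ifconfig.odt file – the interface, address and netmask are hardcoded and hypothetical, there is no real error handling, and the meta-data is stripped to the bare minimum.

#!/bin/sh
# minimal sketch of an OCF resource agent managing a floating IP with ifconfig(8)

IF="em0"             # hypothetical interface
IP="10.0.10.200"     # hypothetical floating address
MASK="255.255.255.0"

meta_data() {
  cat << EOF
<?xml version="1.0"?>
<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
<resource-agent name="ifconfig" version="0.1">
  <version>0.1</version>
  <longdesc lang="en">Floating IP address managed with ifconfig(8)</longdesc>
  <shortdesc lang="en">Floating IP</shortdesc>
  <parameters/>
  <actions>
    <action name="start"        timeout="20s" />
    <action name="stop"         timeout="20s" />
    <action name="monitor"      timeout="20s" interval="30s" />
    <action name="validate-all" timeout="20s" />
    <action name="meta-data"    timeout="5s" />
  </actions>
</resource-agent>
EOF
}

ip_present() {
  ifconfig ${IF} | grep -q "inet ${IP} "
}

case "${1}" in
  start)
    # add the alias if it is not already configured
    ip_present || ifconfig ${IF} alias ${IP} netmask ${MASK} || exit 1
    exit 0   # OCF_SUCCESS
    ;;
  stop)
    # remove the alias if it is present
    ip_present && ifconfig ${IF} -alias ${IP}
    exit 0   # OCF_SUCCESS
    ;;
  monitor)
    # OCF_SUCCESS (0) when the alias is configured / OCF_NOT_RUNNING (7) otherwise
    if ip_present; then exit 0; else exit 7; fi
    ;;
  validate-all)
    exit 0
    ;;
  meta-data)
    meta_data
    exit 0
    ;;
  *)
    echo "usage: ${0} {start|stop|monitor|validate-all|meta-data}"
    exit 3   # OCF_ERR_UNIMPLEMENTED
    ;;
esac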

Let's try to add the new IP resource to our FreeBSD cluster.

Tests

root@node1:~ # crm configure primitive IP ocf:pacemaker:ifconfig op monitor interval="30"

Added.

Let's see what the status command shows now.

root@node1:~ # crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node2 (version 2.0.4-2deceaa3ae) - partition with quorum
  * Last updated: Wed Sep  2 22:44:52 2020
  * Last change:  Wed Sep  2 22:44:44 2020 by root via cibadmin on node1
  * 3 nodes configured
  * 1 resource instance configured

Node List:
  * Online: [ node1 node2 node3 ]

Full List of Resources:
  * IP  (ocf::pacemaker:ifconfig):       Started node1

Failed Resource Actions:
  * IP_monitor_0 on node3 'not installed' (5): call=24, status='Not installed', exitreason='', last-rc-change='2020-09-02 22:42:52Z', queued=0ms, exec=5ms
  * IP_monitor_0 on node2 'not installed' (5): call=24, status='Not installed', exitreason='', last-rc-change='2020-09-02 22:42:53Z', queued=0ms, exec=2ms

Crap. I forgot to copy this new ifconfig resource to the other nodes. Let's fix that now.

root@node1:~ # rsync -av /usr/local/lib/ocf/resource.d/pacemaker/ node2:/usr/local/lib/ocf/resource.d/pacemaker/
Password for root@node2:
sending incremental file list
./
ifconfig

sent 3,798 bytes  received 38 bytes  1,534.40 bytes/sec
total size is 128,003  speedup is 33.37

root@node1:~ # rsync -av /usr/local/lib/ocf/resource.d/pacemaker/ node3:/usr/local/lib/ocf/resource.d/pacemaker/
Password for root@node3:
sending incremental file list
./
ifconfig

sent 3,798 bytes  received 38 bytes  1,534.40 bytes/sec
total size is 128,003  speedup is 33.37

Let's stop, delete, and re-add our precious resource now.

root@node1:~ # crm resource stop IP
root@node1:~ # crm configure delete IP
root@node1:~ # crm configure primitive IP ocf:pacemaker:ifconfig op monitor interval="30"

Fingers crossed.

root@node1:~ # crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node2 (version 2.0.4-2deceaa3ae) - partition with quorum
  * Last updated: Wed Sep  2 22:45:46 2020
  * Last change:  Wed Sep  2 22:45:43 2020 by root via cibadmin on node1
  * 3 nodes configured
  * 1 resource instance configured

Node List:
  * Online: [ node1 node2 node3 ]

Full List of Resources:
  * IP  (ocf::pacemaker:ifconfig):       Started node1

Looks like it is running properly.

Let's verify that it's really up where it should be.

root@node1:~ # ifconfig em0
em0: flags=8843 metric 0 mtu 1500
        options=81009b
        ether 08:00:27:2a:78:60
        inet 10.0.10.111 netmask 0xffffff00 broadcast 10.0.10.255
        inet 10.0.10.200 netmask 0xffffff00 broadcast 10.0.10.255
        media: Ethernet autoselect (1000baseT )
        status: active
        nd6 options=29

root@node2:~ # ifconfig em0
em0: flags=8843 metric 0 mtu 1500
        options=81009b
        ether 08:00:27:80:50:05
        inet 10.0.10.112 netmask 0xffffff00 broadcast 10.0.10.255
        media: Ethernet autoselect (1000baseT )
        status: active
        nd6 options=29

root@node3:~ # ifconfig em0
em0: flags=8843 metric 0 mtu 1500
        options=81009b
        ether 08:00:27:74:5e:b9
        inet 10.0.10.113 netmask 0xffffff00 broadcast 10.0.10.255
        media: Ethernet autoselect (1000baseT )
        status: active
        nd6 options=29

Seems to be working.

Now let's try to move it to another node in the cluster.

root@node1:~ # crm resource move IP node3
INFO: Move constraint created for IP to node3

root@node1:~ # crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node2 (version 2.0.4-2deceaa3ae) - partition with quorum
  * Last updated: Wed Sep  2 22:47:31 2020
  * Last change:  Wed Sep  2 22:47:28 2020 by root via crm_resource on node1
  * 3 nodes configured
  * 1 resource instance configured

Node List:
  * Online: [ node1 node2 node3 ]

Full List of Resources:
  * IP  (ocf::pacemaker:ifconfig):       Started node3

It switched properly to the node3 system.

root@node3:~ # ifconfig em0
em0: flags=8843 metric 0 mtu 1500
        options=81009b
        ether 08:00:27:74:5e:b9
        inet 10.0.10.113 netmask 0xffffff00 broadcast 10.0.10.255
        inet 10.0.10.200 netmask 0xffffff00 broadcast 10.0.10.255
        media: Ethernet autoselect (1000baseT )
        status: active
        nd6 options=29

root@node1:~ # ifconfig em0
em0: flags=8843 metric 0 mtu 1500
        options=81009b
        ether 08:00:27:2a:78:60
        inet 10.0.10.111 netmask 0xffffff00 broadcast 10.0.10.255
        media: Ethernet autoselect (1000baseT )
        status: active
        nd6 options=29

Now we will power off the node3 system to check if that IP is really highly available.

root@node2:~ # crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node2 (version 2.0.4-2deceaa3ae) - partition with quorum
  * Last updated: Wed Sep  2 22:49:57 2020
  * Last change:  Wed Sep  2 22:47:29 2020 by root via crm_resource on node1
  * 3 nodes configured
  * 1 resource instance configured

Node List:
  * Online: [ node1 node2 node3 ]

Full List of Resources:
  * IP  (ocf::pacemaker:ifconfig):       Started node3

root@node3:~ # poweroff

root@node2:~ # crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node2 (version 2.0.4-2deceaa3ae) - partition with quorum
  * Last updated: Wed Sep  2 22:50:16 2020
  * Last change:  Wed Sep  2 22:47:29 2020 by root via crm_resource on node1
  * 3 nodes configured
  * 1 resource instance configured

Node List:
  * Online: [ node1 node2 ]
  * OFFLINE: [ node3 ]

Full List of Resources:
  * IP  (ocf::pacemaker:ifconfig):       Started node1

Seems that failover went well.

The crm command also colors various sections of its output.

failover

Good to know that a Pacemaker and Corosync cluster runs well on FreeBSD.

Some work is needed to write the required resource files, but with some time and determination one can surely turn FreeBSD into a very capable highly available cluster.

EOF

FreeBSD Enterprise Storage at PBUG

Yesterday I was honored to give a talk about FreeBSD Enterprise Storage at the Polish BSD User Group meeting.

You are invited to download the PDF Slides – https://is.gd/bsdstg – available here.

bsdstg

The PBUG (Polish BSD User Group) meetings are very special. In "The Matrix" movie (which by the way was rendered on FreeBSD systems – FreeBSD Used to Generate Spectacular Special Effects – details available here) it is not possible to describe what the Matrix really is – one has to feel it. Enter it. I can tell you the same about the PBUG meetings. It's kinda like with the "Hangover" movie. What happens in Vegas at a PBUG meeting stays in Vegas at a PBUG meeting :)

If you have the possibility and time then join the next Polish BSD User Group meeting. You will not regret it :>

UPDATE 1 – Shorter Unified Version

The original – https://is.gd/bsdstg – presentation is 187 pages long and is suited for a live presentation but not the best for later 'offline' viewing.

I have created a unified version – https://is.gd/bsdstguni – with only 42 pages.

EOF

Manage Contacts the UNIX Way

About two years ago my neighbor asked me a question – "How do you manage contacts on your devices?" – and that was my 'a-ha' moment on that topic – I do not. I do not at all. He had the problem of having an iPhone with iTunes and an Android phone and wanted to manage contacts between them in one single sensible place. Finally he settled on some closed source freeware which runs on Windows. But that was not the answer – that was just the beginning – how to manage contacts the UNIX open source way?

I have tried to search for some open source software that is capable of doing that efficiently and without too much effort and PITA … and I failed miserably.

So as usual I came up with my own set of scripts that do the job, and after several years of using this 'system' I am quite satisfied with the results, with PITA reduced to a minimum.

Export from Phone

The VCF file (also called VCARD) exported from a mobile Android based phone looks like the one below.

% cat export.vcf
BEGIN:VCARD
VERSION:2.1
N:pierre;herbert;hugues;;
FN:herbert pierre hugues
TEL;PREF:555123456
NOTE:executive
EMAIL:pierre@gmail.com
END:VCARD
BEGIN:VCARD
VERSION:2.1
N:(local);butcher;;
FN:butcher (local)
TEL;PREF:555123457
TEL;HOME:225553457
TEL;HOME:451232421
NOTE:cheap
END:VCARD
BEGIN:VCARD
VERSION:2.1
N:(f1);martin;brundle;;
FN:martin brundle (f1)
TEL;PREF:555987654
TEL;HOME:451232421
X-QQ;HOME:gg:32847916
NOTE:fast
END:VCARD


I have used colors to distinguish different contacts.

The most annoying field seems to be 'N' which tries to be smarter than needed – trying really hard to first put the surname, then the name, and then other names. The 'FN' field is a lot more useful here. The remaining fields such as 'TEL' or 'EMAIL' do not try to outsmart us and work as desired. The VCARD of course starts with 'BEGIN:VCARD' and ends with 'END:VCARD', that is obvious. In 2015 when I initially wrote those scripts I still used Instant Messaging. Now, several years fast forward, I use it very rarely, but it's still in use. I keep this Instant Messaging number/account information in the VCARD 'X-QQ' field in which I use protocol:number notation and use it for all different Instant Messaging solutions. The 'gg:' prefix is for example for the Polish solution called Gadu-Gadu.

I do not find this VCARD format readable, nor grepable/searchable, thus I convert it into a plain text file which looks as follows and is grep(1) and awk(1) friendly (columns separated by spaces).

NAME  PHONE  IM  MAIL  NOTES
====  =====  ==  ====  =====

Here is how the above VCARD information looks after converting it with my script to the plain text columns.

% contacts-convert-vcf-from.sh -t export.vcf | column -t > contacts
% cat contacts
NAME                                    PHONE                                                IM                MAIL                                                    NOTES
======================================  ===================================================  ================  ======================================================  =====
butcher-(local)                         555123457;225553457;451232421                        -                 -                                                       cheap
herbert-pierre-hugues                   555123456                                            -                 pierre@gmail.com                                        executive
martin-brundle-(f1)                     555987654;451232421                                  gg:32847916       -                                                       fast

The length of the ‘=====’ underscores is defined/hardcoded in the scripts themselves. Why hardcode this? For comparison purposes – more on that later. The entries are also sorted by name. I could rework the script to also contain the column -t command but I did not see the need to – but it's of course possible.
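For illustration only – a heavily simplified sketch of the conversion idea (this is not the actual contacts-convert-vcf-from.sh script – it skips the header, the fixed-width underscores and the 'N' field handling) could look like this:

awk -F ':' '
  /^BEGIN:VCARD/ { name=""; tel=""; im=""; mail=""; note="" }
  /^FN:/         { name=$2; gsub(/ /, "-", name) }
  /^TEL/         { tel = (tel == "" ? $2 : tel ";" $2) }
  /^X-QQ/        { im = $2 ":" $3 }
  /^EMAIL/       { mail=$2 }
  /^NOTE:/       { note=$2 }
  /^END:VCARD/   { printf "%s %s %s %s %s\n", name,
                          (tel  == "" ? "-" : tel),
                          (im   == "" ? "-" : im),
                          (mail == "" ? "-" : mail),
                          (note == "" ? "-" : note) }
' export.vcf | sort | column -t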

Now – let's suppose you want to generate a new VCARD with some of your contacts – then you could use grep(1) to filter out the unneeded entries, like this.

% grep -v butcher contacts > contacts.NOBUTCHER
% contacts-convert-vcf-to.sh contacts.NOBUTCHER > import.vcf
% cat import.vcf

BEGIN:VCARD
VERSION:2.1
FN:herbert pierre hugues
TEL;PREF:555123456
EMAIL:pierre@gmail.com
NOTE:executive
END:VCARD

BEGIN:VCARD
VERSION:2.1
FN:martin brundle (f1)
TEL;PREF:555987654
TEL:451232421
X-QQ:gg:32847916
NOTE:fast
END:VCARD

It's obvious, but the generated VCARD does not contain the 'butcher (local)' contact. You can now send this import.vcf file to your phone using email and then import these contacts as you would from any other VCARD shared with you.

Scripts

I use three scripts to convert/export/import/check that data in VCARD form.

The contacts-convert-vcf-from.sh script – as the name suggests – converts VCARD data (a VCF file) into the plain text information. I also implemented the CSV method which may be useful for some people – to put that data into a spreadsheet.

% contacts-convert-vcf-from.sh
usage: contacts-convert-vcf-from.sh TYPE FILE
  TYPE: -c | --csv
        -p | --plain
        -t | --text

Here is example CSV output from the script.

% contacts-convert-vcf-from.sh -c export.vcf
NAME,PHONE,IM,MAIL,NOTES
butcher-(local),555123457;225553457;451232421,-,-,cheap
herbert-pierre-hugues,555123456,-,pierre@gmail.com,executive
martin-brundle-(f1),555987654;451232421,gg:32847916,-,fast

The contacts-convert-vcf-to.sh script converts the plain text data into the VCARD format.

% contacts-convert-vcf-to.sh
usage: contacts-convert-vcf-to.sh FILE

The last contacts-check.sh script is used to find duplicated phone information within the plain text file. Many times I have found duplicated contacts with different names but with the same phone number.

% contacts-check.sh contacts | column -t
butcher-(local)      555123457;225553457;451232421  -            -  cheap
martin-brundle-(f1)  555987654;451232421            gg:32847916  -  fast
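The idea behind it can be sketched like this (again – a simplified illustration, not the actual contacts-check.sh) – print every entry whose phone number also appears in some other entry:

awk 'NR > 2 && $2 != "-" {
       n = split($2, tel, ";")
       for (i = 1; i <= n; i++) {
         count[tel[i]]++
         line[tel[i], count[tel[i]]] = $0
       }
     }
     END {
       for (t in count)
         if (count[t] > 1)
           for (i = 1; i <= count[t]; i++) print line[t, i]
     }' contacts | sort -u | column -t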

All three of them are available on my GitHub scripts page – https://github.com/vermaden/scripts/ – available here.

You can of course download them using the command line like this.

% wget https://raw.githubusercontent.com/vermaden/scripts/master/contacts-check.sh
% wget https://raw.githubusercontent.com/vermaden/scripts/master/contacts-convert-vcf-from.sh
% wget https://raw.githubusercontent.com/vermaden/scripts/master/contacts-convert-vcf-to.sh
% chmod +x contacts-*

Updating Contacts

It's easy to maintain several contacts – no matter in which format – but when you grow to have about 1000 contacts (and I do) then you need to deal with them intelligently.

Not to mention that you can add a new contact on your phone (more often) but you can also update your local plain text contacts file.

This is where UNIX comes in handy. You may use diff(1) to compare these 'updates' with the following command.

% diff -u contacts contacts.NEW | egrep '^\-|^\+'
--- contacts            2019-12-13 15:29:23.541256000 +0100
+++ contacts.NEW        2019-12-13 15:29:36.087084000 +0100
-john-doe-the-third                      -                                                    -                 jogh.doe@gmail.com                                      -
+jan-kowalski                            555192384                                            gg:11844916       -                                                       slow


This way you know that there are two new contacts – the '-' one from the local contacts file and the '+' one from the plain text version generated from the phone exported VCF file, called contacts.NEW here.

You can also use vim(1) with its diff mode enabled by starting it with the -d flag as shown below.

% vim -d contacts contacts.NEW

Here is how it looks.

vim.png

… and now we get back to the amount of '====' used in the columns in the plain text file. If you keep the same amount of these each time, then diff is possible. If I did not put them there, the column -t command would generate a larger NAME column – for example because of a longer contact name – and because of the additional spaces in the remaining contacts both the diff(1) and vim(1) tools would show that all contacts are new.

This is how I manage contacts the UNIX way. If you have a more fun way of dealing with contacts then please let me know :)

EOF

List Block Devices on FreeBSD lsblk(8) Style

When I have to work on Linux systems I usually miss many nice FreeBSD tools such as these, to name a few:

  • sockstat
  • gstat
  • top -b -o res
  • top -m io -o total
  • usbconfig
  • rcorder
  • beadm/bectl
  • idprio/rtprio

… but sometimes – which rarely happens – Linux has some very useful tool that is not available on FreeBSD. An example of such a tool is lsblk(8), which does one thing and does it quite well – it lists block devices and their contents. It has some problems, like listing a disk that is entirely used by a ZFS pool, for which lsblk(8) displays two partitions instead of information about ZFS just being there – but we all know how much, in some circles, the CDDL licensed ZFS is unloved in that GPL world.

Example lsblk(8) output from a Linux system:

$ lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE   MOUNTPOINT
sr0                           11:0    1  1024M  0 rom
sda                            8:0    0 931.5G  0 disk
|-sda1                         8:1    0   500M  0 part   /boot
`-sda2                         8:2    0   931G  0 part
  |-vg_local-lv_root (dm-0)  253:0    0    50G  0 lvm    /
  |-vg_local-lv_swap (dm-1)  253:1    0  17.7G  0 lvm    [SWAP]
  `-vg_local-lv_home (dm-2)  253:2    0   1.8T  0 lvm    /home
sdc                            8:32   0 232.9G  0 disk
`-sdc1                         8:33   0 232.9G  0 part
  `-md1                        9:1    0 232.9G  0 raid10 /data
sdd                            8:48   0 232.9G  0 disk
`-sdd1                         8:49   0 232.9G  0 part
  `-md1                        9:1    0 232.9G  0 raid10 /data

What does FreeBSD offer in this department? The camcontrol(8) and geom(8) commands are available. You can also use the gpart(8) command to list partitions. Below you will find the output of these commands from my single disk laptop. Please note that because of WordPress limitations I need to change all > < characters into ] [ ones in the command outputs.

# camcontrol devlist
[Samsung SSD 860 EVO mSATA 1TB RVT41B6Q]  at scbus1 target 0 lun 0 (ada0,pass0)

% geom disk list
Geom name: ada0
Providers:
1. Name: ada0
   Mediasize: 1000204886016 (932G)
   Sectorsize: 512
   Mode: r1w1e2
   descr: Samsung SSD 860 EVO mSATA 1TB
   lunid: 5002538e402b4ddd
   ident: S41PNB0K303632D
   rotationrate: 0
   fwsectors: 63
   fwheads: 1

# gpart show
=>        40  1953525088  ada0  GPT  (932G)
          40      409600     1  efi  (200M)
      409640        1024     2  freebsd-boot  (512K)
      410664         984        - free -  (492K)
      411648  1953112064     3  freebsd-zfs  (931G)
  1953523712        1416        - free -  (708K)

They provide the needed information in an acceptable manner but only on systems with a small number of disks. What if you would like to display a summary of the contents of all system drives? This is where lsblk.sh comes in handy. While lsblk(8) has many interesting features like the --perms/--scsi/--inverse modes I focused on providing only the basic feature – listing the system block devices and their contents. As I have long and pleasing experience with writing shell scripts such as sysutils/beadm or sysutils/automount I thought that writing lsblk.sh may be a good idea. I actually ‘open-sourced’ or should I say shared that project/idea in 2016 in the lsblk(8) Command for FreeBSD thread on the FreeBSD Forums but lack of time really slowed that ‘side project’ development pace. I finally got back to it to finish it.

The lsblk.sh is generally a small and simple shell script which takes less than 400 SLOC.

lsblk

Here is example output of the lsblk.sh command from my single disk laptop.

% lsblk.sh
DEVICE         MAJ:MIN  SIZE TYPE                      LABEL MOUNT
ada0             0:5b  932G GPT                           - -
  ada0p1         0:64  200M efi                    efiboot0 [UNMOUNTED]
  ada0p2         0:65  512K freebsd-boot           gptboot0 -
  [FREE]         -:-   492K -                             - -
  ada0p3         0:66  931G freebsd-zfs                zfs0 [ZFS]
  [FREE]         -:-   708K -                             - -


The same output in a graphical window.

lolcat

Below you will find an example lsblk.sh output from a server with two system SSD drives (da0/da1) and two HDD data drives (da2/da3).

# lsblk.sh
DEVICE         MAJ:MIN SIZE TYPE                      LABEL MOUNT
da0              0:be  224G GPT                           - -
  da0p1          0:15a 200M efi                    efiboot0 [UNMOUNTED]
  da0p2          0:15b 512K freebsd-boot           gptboot0 -
  [FREE]         -:-   492K -                             - -
  da0p3          0:15c 2.0G freebsd-swap              swap0 [UNMOUNTED]
  da0p4          0:15d 221G freebsd-zfs                zfs0 [ZFS]
  [FREE]         -:-   580K -                             - -
da1              0:bf  224G GPT                           - -
  da1p1          0:16a 200M efi                    efiboot1 [UNMOUNTED]
  da1p2          0:16b 512K freebsd-boot           gptboot1 -
  [FREE]         -:-   492K -                             - -
  da1p3          0:16c 2.0G freebsd-swap              swap1 [UNMOUNTED]
  da1p4          0:16d 221G freebsd-zfs                zfs1 [ZFS]
  [FREE]         -:-   580K -                             - -
da2              0:c0   11T GPT                           - -
  da2p1          0:16e  11T freebsd-zfs                   - [ZFS]
  [FREE]         -:-   1.0G -                             - -
da3              0:c1   11T GPT                           - -
  da3p1          0:16f  11T freebsd-zfs                   - [ZFS]
  [FREE]         -:-   1.0G -                             - -

Below you will find other examples from other systems I have tested lsblk.sh on.

lsblk.examples

While lsblk.sh is not the fastest script on Earth (because of all the needed parsing) it does its job quite well. If you would like to install it on your system just type the commands below:

# fetch -o /usr/local/bin/lsblk https://raw.githubusercontent.com/vermaden/scripts/master/lsblk.sh
# chmod +x /usr/local/bin/lsblk
# hash -r || rehash
# lsblk

If I get the time – which other original Linux lsblk(8) subcommand/option/argument is worth adding to the lsblk.sh script? :)

Regards.

UPDATE 1 – Added USAGE/HELP Information

Just added some usage information that can be displayed by specifying one of these as an argument:

  • h
  • -h
  • --h
  • help
  • -help
  • --help

IMHO writing a man page for such a simple utility is needless. I think I will create a dedicated man page when the lsblk.sh tool grows in size and options to be comparable with the Linux lsblk(8) equivalent. Here is how it looks.

# lsblk.sh --help
usage:

  BASIC USAGE INFORMATION
  =======================
  # lsblk.sh [DISK]

example(s):

  LIST ALL BLOCK DEVICES IN SYSTEM
  --------------------------------
  # lsblk.sh
  DEVICE         MAJ:MIN SIZE TYPE                      LABEL MOUNT
  ada0             0:5b  932G GPT                           - -
    ada0p1         0:64  200M efi                    efiboot0 [UNMOUNTED]
    ada0p2         0:65  512K freebsd-boot           gptboot0 -
    [FREE]         -:-   492K -                             - -
    ada0p3         0:66  931G freebsd-zfs                zfs0 [ZFS]

  LIST ONLY da1 BLOCK DEVICE
  --------------------------
  # lsblk.sh da1
  DEVICE         MAJ:MIN SIZE TYPE                      LABEL MOUNT
  da1              0:80  2.0G MBR                           - -
    da1s1          0:80  2.0G freebsd                       - -
      da1s1a       0:81  1.0G freebsd-ufs                root /
      da1s1b       0:82  1.0G freebsd-swap               swap SWAP

hint(s):

  DISPLAY ALL DISKS IN SYSTEM
  ---------------------------
  # sysctl kern.disks
  kern.disks: ada0 da0 da1

Regards.

UPDATE 2 – Code Reorganization and 75% Rewrite

… at least this is what git(1) tries to tell me after the commit.

% git commit (...)
[master 12fd4aa] Rework entire flow. Split code into functions. Add many useful comments. In other words its 2.0 version.
 1 file changed, 494 insertions(+), 505 deletions(-)
 rewrite lsblk.sh (75%)

After several productive hours a new incarnation of lsblk.sh is now available.

It has a similar SLOC count but it's now smaller by a quarter … while doing more and with better accuracy. A great example of why "less is more."

% wc scripts/lsblk.sh.OLD
     491    2201   19721 scripts/lsblk.sh.OLD

% wc scripts/lsblk.sh
     494    1871   15472 scripts/lsblk.sh

Things that do not have a simple solution are described below.

One of them is the ‘double’ label for FAT filesystems. We have both the /dev/gpt/efiboot0 label and the FAT label named EFISYS. We have to choose something here. As not all FAT filesystems have a label I have chosen the GPT label.

% glabel status | grep ada0p1
  gpt/efiboot0     N/A  ada0p1
msdosfs/EFISYS     N/A  ada0p1

I was also not able to cover FUSE mounts. When you mount – for example – the /dev/da0 device as NTFS (with ntfs-3g) or exFAT (with mount.exfat) there is no visible difference in mount(8) output.

% mount -t fusefs
/dev/fuse on /mnt/ntfs (fusefs)
/dev/fuse on /mnt/exfat (fusefs)

When I mount such a filesystem with my daemon (like sysutils/automount) I keep track of which device has been mounted to which directory in the /var/run/automount.state file. Then when I get the detach event for the /dev/da0 device I know what to u(n)mount … but when I only have the /dev/fuse device it's just not possible.

… or maybe YOU know a way of extracting from /dev/fuse (or generally from FUSE) the information about which device is mounted where?

Now a little presentation after the update.

Here are various non-ZFS filesystems mounted.

% mount -t nozfs
devfs on /dev (devfs, local, multilabel)
linprocfs on /compat/linux/proc (linprocfs, local)
tmpfs on /compat/linux/dev/shm (tmpfs, local)
/dev/label/ASD on /mnt/tmp (msdosfs, local)
/dev/fuse on /mnt/ntfs (fusefs)
/dev/md0s1f on /mnt/ufs.other (ufs, local)
/dev/gpt/OTHER on /mnt/fat.other (msdosfs, local)
/dev/md0s1a on /mnt/ufs (ufs, local)

… and here is how lsblk.sh now displays them.

% lsblk.sh
DEVICE         MAJ:MIN SIZE TYPE                      LABEL MOUNT
ada0             0:56  932G GPT                           - -
  ada0p1         0:64  200M efi                gpt/efiboot0 -
  ada0p2         0:65  512K freebsd-boot       gpt/gptboot0 -
  [FREE]         -:-   492K -                             - -
  ada0p3         0:66  931G freebsd-zfs                   - [ZFS]
  [FREE]         -:-   708K -                             - -
md0              0:28f 1.0G MBR                           - -
  md0s1          0:294 512M freebsd                       - -
    md0s1a       0:29a 100M freebsd-ufs                root /mnt/ufs
    md0s1b       0:29b  32M freebsd-swap         label/swap SWAP
    md0s1e       0:29c  64M freebsd-ufs                   - -
    md0s1f       0:29d 316M freebsd-ufs                   - /mnt/ufs.other
  md0s2          0:296 256M ntfs                          - -
  md0s3          0:297 256M fat32               msdosfs/ONE -
md1              0:2a4 1.0G msdosfs                   LARGE 
md2              0:298 2.0G GPT                           - -
  md2p1          0:29f 2.0G ms-basic-data         gpt/OTHER /mnt/fat.other

I used some file based memory disks for this. Now by default lsblk.sh also displays the contents of memory disks.

% mdconfig.sh -l
md0     vnode    1024M  /home/vermaden/FILE     
md2     vnode    2048M  /home/vermaden/FILE.GPT 
md1     vnode    1024M  /home/vermaden/FILER    

Here is how it looks in the xterm(1) terminal.

lsblk.2.0

Regards.

UPDATE 3 – Added geli(8) Support

I thought that adding geli(8) support may be useful. The latest lsblk.sh now avoids code duplication for the MOUNT and LABEL detection (moved into a single unified function). I also added more comments for code readability and some minor fixes … and it's again smaller :)

% wc lsblk.sh.1.0
     491    2201   19721 lsblk.sh.1.0

% wc lsblk.sh.2.0
     493    1861   15415 lsblk.sh.2.0

% wc lsblk.sh
     488    1820   15332 lsblk.sh

About 40% (according to the git commit) was changed this time (191 insertions and 196 deletions).

# git commit (...)
[master ec9985a] Add geli(8) support. Avoid code duplication and move MOUNT/LABEL detection into function. More comments. Minor fixes.
 1 file changed, 191 insertions(+), 196 deletions(-)

I also forgot to mention that – thanks to smart optimizations (like not doing things twice and aggregating grep(1) | awk(1) pipes into single awk(1) queries) – lsblk.sh now runs 3 times faster than the initial version :)
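As a trivial illustration of that kind of aggregation – using the glabel(8) output shown earlier – a two-process pipe like the first command below can be collapsed into a single awk(1) invocation:

% glabel status | grep ada0p1 | awk '{print $1}'
% glabel status | awk '/ada0p1/ {print $1}'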

New output with geli(8) support below.

lsblk.2.1.geli.png

Regards.

UPDATE 4 – Added fuse(8) Support

As I wrote in UPDATE 2, keeping track of what is mounted where under fuse(8) is very hard as all mounted devices magically become /dev/fuse after the mount is done.

After a little research I found that this information (what really is mounted where using the fuse(8) interface under FreeBSD) is available after mounting the procfs filesystem under /proc. You just need to cat the cmdline entry for all PIDs of ntfs-3g. It's not perfect but at least the information is available.

# mount -t procfs proc /proc

# ps ax | grep ntfs-3g
45995  -  Is      0:00.00 ntfs-3g /dev/md1s2 /mnt/ntfs
59607  -  Is      0:00.00 ntfs-3g /dev/md3 /mnt/ntfs.another
83323  -  Is      0:00.00 ntfs-3g /dev/md3 /mnt/ntfs.another

# pgrep ntfs-3g
59607
83323
45995

% pgrep ntfs-3g | while read I; do cat /proc/$I/cmdline; echo; done
ntfs-3g/dev/md3/mnt/ntfs.another
ntfs-3g/dev/md3/mnt/ntfs.another
ntfs-3g/dev/md1s2/mnt/ntfs

This was the code prototype that worked for fuse(8) mountpoint detection.

    if [ -e /proc/0/status ]
    then
      FUSE_MOUNTS=$(
        while read PID
        do
          cat /proc/${PID}/cmdline
          echo
        done << ________EOF
          $( pgrep ntfs-3g )
________EOF
)
      FUSE_MOUNTS=$( echo "${FUSE_MOUNTS}" | sort -u )
      FUSE_MOUNTS=$( echo "${FUSE_MOUNTS}" | sed 's|ntfs-3g||g' )
      FUSE_CHECKS=$( echo "${FUSE_MOUNTS}" | grep /dev/${TARGET}/ )
      if [ "${FUSE_CHECKS}" != "" ]
      then
        MOUNT=$( echo "${FUSE_CHECKS}" | sed "s|/dev/${TARGET}||g" )
      fi
    fi
  fi

… and I have just realized that I found a new (better) way of getting that information without mounting the /proc filesystem – all you need to do is to display the ntfs-3g processes with their command line arguments, for example like this:

% ps -p $( pgrep ntfs-3g | tr '\n' ',' | sed '$s/.$//' ) -o command | sed 1d
ntfs-3g /dev/md1s2 /mnt/ntfs
ntfs-3g /dev/md3 /mnt/ntfs.another
ntfs-3g /dev/md3 /mnt/ntfs.another

Then – as that covered only NTFS (the ntfs-3g(8) process) – I also added exFAT support by searching for mount.exfat PIDs as well. The fuse(8) mount point detection now works for both NTFS and exFAT filesystems … and the code to support it is even shorter.

  # TRY fuse(8) MOUNTS FROM PROCESSES
  if [ "${MOUNT_FOUND}" != "1" ]
  then
    FUSE_PIDS=$( pgrep mount.exfat ntfs-3g | tr '\n' ',' | sed '$s/.$//' )
    FUSE_MOUNTS=$( ps -p "${FUSE_PIDS}" -o command | sed 1d | sort -u )
    MOUNT=$( echo "${FUSE_MOUNTS}" |  grep "/dev/${TARGET} " | awk '{print $3}' )
  fi

I also changed how the MAJOR and MINOR numbers are displayed – from HEX to DEC – as it is on Linux. FreeBSD’s ls(1) from the Base System displays these as HEX – for example you will get a 0x2af value:

% ls -l /dev/md4
crw-rw----  1 root  operator  0x2af 2019.09.29 05:18 /dev/md4

But do the same with the GNU equivalent by using gls(1) from FreeBSD Ports (from the sysutils/coreutils package) and it shows MAJOR and MINOR as DEC values. The gls(1) is just ls(1) from the Linux world but as the ls(1) name is already ‘taken’ by FreeBSD’s Base System tool the FreeBSD developers/maintainers added the ‘g’ letter (for GNU) to distinguish them.

% gls -l /dev/md4
crw-rw---- 1 root 2, 175 2019-09-29 05:18 /dev/md4

… and they are also easier/faster to get with the stat(1) tool.

  MAJ=$( stat -f "%Hr" /dev/${DEV} )
  MIN=$( stat -f "%Lr" /dev/${DEV} )

The latest lsblk.sh looks like this now.

lsblk.2.3.fuse.NTFS.exFAT

… and that is why I have not (yet) added lsblk.sh to the FreeBSD Ports. Several new versions with important features spanned just two days :)

Regards.

UPDATE 5 – Another 69% Rewrite

After messing with gpart(8) more I found its -p flag, which is a game changer. The difference is that with the -p flag it displays provider names along with the partitions – it is no longer needed to find the PREFIX and ‘create’ the partition names.

Default gpart(8) output.

# gpart show md0
=>     63  2097089  md0  MBR  (1.0G)
       63  1048576    1  freebsd  (512M)
  1048639   524288    2  ntfs  (256M)
  1572927   524225    3  fat32  (256M)

Output of gpart(8) with the -p flag.

# gpart show -p md0
=>     63  2097089    md0  MBR  (1.0G)
       63  1048576  md0s1  freebsd  (512M)
  1048639   524288  md0s2  ntfs  (256M)
  1572927   524225  md0s3  fat32  (256M)

That discovery resulted in quite a large rewrite of lsblk.sh. The git commit estimates this as a 69% code rewrite.

# git commit (...)
(...)
 1 file changed, 487 insertions(+), 501 deletions(-)
 rewrite lsblk.sh (69%)

The latest lsblk.sh now has these features:

  • Previous bugs fixed.
  • Detects exFAT labels.
  • Is now 20% faster.
  • Has 10% less SLOC.
  • Has 15% less code.
  • Handles bsdlabel(8) on an entire device properly.
  • Handles exFAT on an entire device properly.

The difference in code is shown below.

# wc lsblk.sh
     487    1791   13705 lsblk.sh

# wc lsblk.sh.OLD
     544    1931   16170 lsblk.sh.OLD

The latest lsblk.sh looks as usual but I now use ‘-’ instead of the ‘[UNMOUNTED]’ marker.

lsblk.2.5.gpart.exfat

EOF

FreeBSD Enterprise 1 PB Storage

Today the FreeBSD operating system turns 26 years old. The 19th of June is the International FreeBSD Day. This is why I got something special today :). How about using FreeBSD as an Enterprise Storage solution on real hardware? This is where FreeBSD shines with all its storage features, ZFS included.

Today I will show you how I have built a so called Enterprise Storage system based on FreeBSD with more than 1 PB (Petabyte) of raw capacity.

I have built various storage related systems based on FreeBSD:

This project is different. How much storage space can you squeeze from a single 4U system? It turns out a lot! Definitely more than 1 PB (1024 TB) of raw storage space.

Here is the (non clickable) Table of Contents.

  • Hardware
  • Management Interface
  • BIOS/UEFI
  • FreeBSD System
    • Disks Preparation
    • ZFS Pool Configuration
    • ZFS Settings
    • Network Configuration
    • FreeBSD Configuration
  • Purpose
  • Performance
    • Network Performance
    • Disk Subsystem Performance
  • FreeNAS
  • UPDATE 1 – BSD Now 305
  • UPDATE 2 – Real Life Pictures in Data Center

Hardware

There are 4U servers with 90-100 3.5″ drive slots which will allow you to pack 1260-1400 Terabytes of data (with 14 TB drives). Examples of such systems are:

I will be using the first one – the TYAN FA100 for short.

logo-tyan.png

While both the GlusterFS and Minio clusters were done on virtual hardware (or even FreeBSD Jails containers) this one uses real physical hardware.

The build has the following specifications.

 2 x 10-Core Intel Xeon Silver 4114 CPU @ 2.20GHz
 4 x 32 GB RAM DDR4 (128 GB Total)
 2 x Intel SSD DC S3500 240 GB (System)
90 x Toshiba HDD MN07ACA12TE 12 TB (Data)
 2 x Broadcom SAS3008 Controller
 2 x Intel X710 DA-2 10GE Card
 2 x Power Supply

The price of the whole system is about $65 000 – drives included. Here is how it looks.

tyan-fa100-small.jpg

One thing that you will need is a rack cabinet that is 1200 mm long to fit that monster :)

Management Interface

The so called Lights Out management interface is really nice. It's not bloated, is well organized, and works quite fast. You can create several separate user accounts or connect to external user services like LDAP/AD/RADIUS for example.

n01.png

After logging in a simple Dashboard welcomes us.

n02.png

We have access to various Sensor information with the temperatures of the system components.

n03

We have System Inventory information with installed hardware.

n04.png

There is a separate Settings menu for various setup options.

n05.png

I know it's 2019 but an HTML5-only Remote Control (remote console) without the need for any third party plugins like Java/Silverlight/Flash/… is very welcome. It works very well too.

n06.png

n07.png

One is of course allowed to power on/off/cycle the box remotely.

n08.png

The Maintenance menu for BIOS updates.

n09.png

BIOS/UEFI

After booting into the BIOS/UEFI setup it is possible to select which drives to boot from. The screenshots show the two SSD drives prepared for the system.

nas01.png

The BIOS/UEFI interface shows two Enclosures but these are the two Broadcom SAS3008 controllers. Some drives are attached via the first Broadcom SAS3008 controller, the rest are attached via the second one, and they call them Enclosures instead of controllers for some reason.

nas05.png

FreeBSD System

I have chosen the latest FreeBSD 12.0-RELEASE for the purpose of this installation. It's generally a very ‘default’ installation with a ZFS mirror on the two SSD disks. Nothing special.

logo-freebsd.jpg

The installation of course supports the ZFS Boot Environments bulletproof upgrades/changes feature.

# zpool list zroot
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zroot   220G  3.75G   216G        -         -     0%     1%  1.00x  ONLINE  -

# zpool status zroot
  pool: zroot
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            da91p4  ONLINE       0     0     0
            da11p4  ONLINE       0     0     0

errors: No known data errors

# df -g
Filesystem              1G-blocks Used  Avail Capacity  Mounted on
zroot/ROOT/default            211    2    209     1%    /
devfs                           0    0      0   100%    /dev
zroot/tmp                     209    0    209     0%    /tmp
zroot/usr/home                209    0    209     0%    /usr/home
zroot/usr/ports               210    0    209     0%    /usr/ports
zroot/usr/src                 210    0    209     0%    /usr/src
zroot/var/audit               209    0    209     0%    /var/audit
zroot/var/crash               209    0    209     0%    /var/crash
zroot/var/log                 209    0    209     0%    /var/log
zroot/var/mail                209    0    209     0%    /var/mail
zroot/var/tmp                 209    0    209     0%    /var/tmp

# beadm list
BE      Active Mountpoint  Space Created
default NR     /            2.4G 2019-05-24 13:24

Disks Preparation

From all the possible setups with 90 disks of 12 TB capacity I have chosen to go the RAID60 way – its ZFS equivalent of course. With 12 disks in each RAID6 (raidz2) group – and there will be 7 such groups – we will have 84 disks used for the ZFS pool with 6 drives left as SPARE disks – that plays well for me. The disk distribution will look more or less like this.

DISKS  CONTENT
   12  raidz2-0
   12  raidz2-1
   12  raidz2-2
   12  raidz2-3
   12  raidz2-4
   12  raidz2-5
   12  raidz2-6
    6  spares
   90  TOTAL

Here is how the FreeBSD system sees these drives via the camcontrol(8) command – sorted by the attached SAS controller – scbus(4).

# camcontrol devlist | sort -k 6
(AHCI SGPIO Enclosure 1.00 0001)   at scbus2 target 0 lun 0 (pass0,ses0)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 50 lun 0 (pass1,da0)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 52 lun 0 (pass2,da1)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 54 lun 0 (pass3,da2)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 56 lun 0 (pass5,da4)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 57 lun 0 (pass6,da5)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 59 lun 0 (pass7,da6)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 60 lun 0 (pass8,da7)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 66 lun 0 (pass9,da8)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 67 lun 0 (pass10,da9)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 74 lun 0 (pass11,da10)
(ATA INTEL SSDSC2KB24 0100)        at scbus3 target 75 lun 0 (pass12,da11)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 76 lun 0 (pass13,da12)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 82 lun 0 (pass14,da13)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 83 lun 0 (pass15,da14)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 85 lun 0 (pass16,da15)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 87 lun 0 (pass17,da16)
(Tyan B7118 0500)                  at scbus3 target 88 lun 0 (pass18,ses1)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 89 lun 0 (pass19,da17)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 90 lun 0 (pass20,da18)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 91 lun 0 (pass21,da19)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 92 lun 0 (pass22,da20)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 93 lun 0 (pass23,da21)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 94 lun 0 (pass24,da22)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 95 lun 0 (pass25,da23)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 96 lun 0 (pass26,da24)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 97 lun 0 (pass27,da25)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 98 lun 0 (pass28,da26)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 99 lun 0 (pass29,da27)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 100 lun 0 (pass30,da28)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 101 lun 0 (pass31,da29)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 102 lun 0 (pass32,da30)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 103 lun 0 (pass33,da31)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 104 lun 0 (pass34,da32)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 105 lun 0 (pass35,da33)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 106 lun 0 (pass36,da34)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 107 lun 0 (pass37,da35)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 108 lun 0 (pass38,da36)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 109 lun 0 (pass39,da37)
(ATA TOSHIBA MG07ACA1 0101)        at scbus3 target 110 lun 0 (pass40,da38)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 48 lun 0 (pass41,da39)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 49 lun 0 (pass42,da40)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 51 lun 0 (pass43,da41)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 53 lun 0 (pass44,da42)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 55 lun 0 (da43,pass45)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 59 lun 0 (pass46,da44)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 64 lun 0 (pass47,da45)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 67 lun 0 (pass48,da46)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 68 lun 0 (pass49,da47)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 69 lun 0 (pass50,da48)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 73 lun 0 (pass51,da49)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 76 lun 0 (pass52,da50)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 77 lun 0 (pass53,da51)
(Tyan B7118 0500)                  at scbus4 target 80 lun 0 (pass54,ses2)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 81 lun 0 (pass55,da52)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 82 lun 0 (pass56,da53)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 83 lun 0 (pass57,da54)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 84 lun 0 (pass58,da55)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 85 lun 0 (pass59,da56)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 86 lun 0 (pass60,da57)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 87 lun 0 (pass61,da58)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 88 lun 0 (pass62,da59)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 89 lun 0 (da63,pass66)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 90 lun 0 (pass64,da61)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 91 lun 0 (pass65,da62)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 92 lun 0 (da60,pass63)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 94 lun 0 (pass67,da64)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 97 lun 0 (pass68,da65)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 98 lun 0 (pass69,da66)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 99 lun 0 (pass70,da67)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 100 lun 0 (pass71,da68)
(Tyan B7118 0500)                  at scbus4 target 101 lun 0 (pass72,ses3)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 102 lun 0 (pass73,da69)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 103 lun 0 (pass74,da70)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 104 lun 0 (pass75,da71)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 105 lun 0 (pass76,da72)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 106 lun 0 (pass77,da73)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 107 lun 0 (pass78,da74)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 108 lun 0 (pass79,da75)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 109 lun 0 (pass80,da76)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 110 lun 0 (pass81,da77)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 111 lun 0 (pass82,da78)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 112 lun 0 (pass83,da79)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 113 lun 0 (pass84,da80)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 114 lun 0 (pass85,da81)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 115 lun 0 (pass86,da82)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 116 lun 0 (pass87,da83)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 117 lun 0 (pass88,da84)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 118 lun 0 (pass89,da85)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 119 lun 0 (pass90,da86)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 120 lun 0 (pass91,da87)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 121 lun 0 (pass92,da88)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 122 lun 0 (pass93,da89)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 123 lun 0 (pass94,da90)
(ATA INTEL SSDSC2KB24 0100)        at scbus4 target 124 lun 0 (pass95,da91)
(ATA TOSHIBA MG07ACA1 0101)        at scbus4 target 125 lun 0 (da3,pass4)

One may ask how to identify which disk is which when a FAILURE comes … this is where FreeBSD’s sesutil(8) command comes in handy.

# sesutil locate all off
# sesutil locate da64 on

The first sesutil(8) command disables all location lights in the enclosure. The second one turns on the identification for disk da64.
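
If you want the whole picture at once – which physical enclosure slot holds which daXX device – the sesutil(8) map and status subcommands can be used as well. A quick example (output omitted here, the exact element descriptions depend on the enclosure):

# sesutil map | less
# sesutil status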

I will also make sure to NOT use the whole space of each drive. Such an idea may seem pointless but imagine the following situation. Five 12 TB disks fail after 3 years. You can not get the same model drives so you get other 12 TB drives, maybe even from another manufacturer.

# grep da64 /var/run/dmesg.boot
da64 at mpr1 bus 0 scbus4 target 93 lun 0
da64: <ATA TOSHIBA MG07ACA1 0101> Fixed Direct Access SPC-4 SCSI device
da64: Serial Number 98G0A1EQF95G
da64: 1200.000MB/s transfers
da64: Command Queueing enabled
da64: 11444224MB (23437770752 512 byte sectors)

A single 12 TB drive has 23437770752 512-byte sectors which equals 12000138625024 bytes of raw capacity.

# expr 23437770752 \* 512
12000138625024

Now imagine that these other 12 TB drives from another manufacturer come with a size that is 4 bytes smaller … ZFS will not allow their use because the new drives are smaller.

This is why I will use exactly 11175 GB of each drive, which is more or less 1 GB short of its total 11176 GB size.

Below is the command that will do that for me for all 90 disks.

# camcontrol devlist \
    | grep TOSHIBA \
    | awk '{print $NF}' \
    | awk -F ',' '{print $2}' \
    | tr -d ')' \
    | while read DISK
      do
        gpart destroy -F                   ${DISK} 1> /dev/null 2> /dev/null
        gpart create -s GPT                ${DISK}
        gpart add -t freebsd-zfs -s 11175G ${DISK}
      done

# gpart show da64
=>         40  23437770672  da64  GPT  (11T)
           40  23435673600     1  freebsd-zfs  (11T)
  23435673640      2097072        - free -  (1.0G)


ZFS Pool Configuration

Next, we will have to create our ZFS pool. It's probably the longest zpool command I have ever executed :)

As the Toshiba 12 TB disks have 4k sectors we will need to set vfs.zfs.min_auto_ashift to 12 to force ashift=12 (4k-aligned allocations) on the new pool.

# sysctl vfs.zfs.min_auto_ashift=12
vfs.zfs.min_auto_ashift: 12 -> 12

# zpool create nas02 \
    raidz2  da0p1  da1p1  da2p1  da3p1  da4p1  da5p1  da6p1  da7p1  da8p1  da9p1 da10p1 da12p1 \
    raidz2 da13p1 da14p1 da15p1 da16p1 da17p1 da18p1 da19p1 da20p1 da21p1 da22p1 da23p1 da24p1 \
    raidz2 da25p1 da26p1 da27p1 da28p1 da29p1 da30p1 da31p1 da32p1 da33p1 da34p1 da35p1 da36p1 \
    raidz2 da37p1 da38p1 da39p1 da40p1 da41p1 da42p1 da43p1 da44p1 da45p1 da46p1 da47p1 da48p1 \
    raidz2 da49p1 da50p1 da51p1 da52p1 da53p1 da54p1 da55p1 da56p1 da57p1 da58p1 da59p1 da60p1 \
    raidz2 da61p1 da62p1 da63p1 da64p1 da65p1 da66p1 da67p1 da68p1 da69p1 da70p1 da71p1 da72p1 \
    raidz2 da73p1 da74p1 da75p1 da76p1 da77p1 da78p1 da79p1 da80p1 da81p1 da82p1 da83p1 da84p1 \
    spare  da85p1 da86p1 da87p1 da88p1 da89p1 da90p1
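
Before filling the pool it may be worth double-checking that every vdev really got created with ashift 12. One way to do that – a hedged example – is to grep the cached pool configuration with zdb(8); every vdev should report ashift: 12.

# zdb -C nas02 | grep ashift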

# zpool status
  pool: nas02
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:05 with 0 errors on Fri May 31 10:26:29 2019
config:

        NAME        STATE     READ WRITE CKSUM
        nas02       ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            da0p1   ONLINE       0     0     0
            da1p1   ONLINE       0     0     0
            da2p1   ONLINE       0     0     0
            da3p1   ONLINE       0     0     0
            da4p1   ONLINE       0     0     0
            da5p1   ONLINE       0     0     0
            da6p1   ONLINE       0     0     0
            da7p1   ONLINE       0     0     0
            da8p1   ONLINE       0     0     0
            da9p1   ONLINE       0     0     0
            da10p1  ONLINE       0     0     0
            da12p1  ONLINE       0     0     0
          raidz2-1  ONLINE       0     0     0
            da13p1  ONLINE       0     0     0
            da14p1  ONLINE       0     0     0
            da15p1  ONLINE       0     0     0
            da16p1  ONLINE       0     0     0
            da17p1  ONLINE       0     0     0
            da18p1  ONLINE       0     0     0
            da19p1  ONLINE       0     0     0
            da20p1  ONLINE       0     0     0
            da21p1  ONLINE       0     0     0
            da22p1  ONLINE       0     0     0
            da23p1  ONLINE       0     0     0
            da24p1  ONLINE       0     0     0
          raidz2-2  ONLINE       0     0     0
            da25p1  ONLINE       0     0     0
            da26p1  ONLINE       0     0     0
            da27p1  ONLINE       0     0     0
            da28p1  ONLINE       0     0     0
            da29p1  ONLINE       0     0     0
            da30p1  ONLINE       0     0     0
            da31p1  ONLINE       0     0     0
            da32p1  ONLINE       0     0     0
            da33p1  ONLINE       0     0     0
            da34p1  ONLINE       0     0     0
            da35p1  ONLINE       0     0     0
            da36p1  ONLINE       0     0     0
          raidz2-3  ONLINE       0     0     0
            da37p1  ONLINE       0     0     0
            da38p1  ONLINE       0     0     0
            da39p1  ONLINE       0     0     0
            da40p1  ONLINE       0     0     0
            da41p1  ONLINE       0     0     0
            da42p1  ONLINE       0     0     0
            da43p1  ONLINE       0     0     0
            da44p1  ONLINE       0     0     0
            da45p1  ONLINE       0     0     0
            da46p1  ONLINE       0     0     0
            da47p1  ONLINE       0     0     0
            da48p1  ONLINE       0     0     0
          raidz2-4  ONLINE       0     0     0
            da49p1  ONLINE       0     0     0
            da50p1  ONLINE       0     0     0
            da51p1  ONLINE       0     0     0
            da52p1  ONLINE       0     0     0
            da53p1  ONLINE       0     0     0
            da54p1  ONLINE       0     0     0
            da55p1  ONLINE       0     0     0
            da56p1  ONLINE       0     0     0
            da57p1  ONLINE       0     0     0
            da58p1  ONLINE       0     0     0
            da59p1  ONLINE       0     0     0
            da60p1  ONLINE       0     0     0
          raidz2-5  ONLINE       0     0     0
            da61p1  ONLINE       0     0     0
            da62p1  ONLINE       0     0     0
            da63p1  ONLINE       0     0     0
            da64p1  ONLINE       0     0     0
            da65p1  ONLINE       0     0     0
            da66p1  ONLINE       0     0     0
            da67p1  ONLINE       0     0     0
            da68p1  ONLINE       0     0     0
            da69p1  ONLINE       0     0     0
            da70p1  ONLINE       0     0     0
            da71p1  ONLINE       0     0     0
            da72p1  ONLINE       0     0     0
          raidz2-6  ONLINE       0     0     0
            da73p1  ONLINE       0     0     0
            da74p1  ONLINE       0     0     0
            da75p1  ONLINE       0     0     0
            da76p1  ONLINE       0     0     0
            da77p1  ONLINE       0     0     0
            da78p1  ONLINE       0     0     0
            da79p1  ONLINE       0     0     0
            da80p1  ONLINE       0     0     0
            da81p1  ONLINE       0     0     0
            da82p1  ONLINE       0     0     0
            da83p1  ONLINE       0     0     0
            da84p1  ONLINE       0     0     0
        spares
          da85p1    AVAIL
          da86p1    AVAIL
          da87p1    AVAIL
          da88p1    AVAIL
          da89p1    AVAIL
          da90p1    AVAIL

errors: No known data errors

# zpool list nas02
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
nas02   915T  1.42M   915T        -         -     0%     0%  1.00x  ONLINE  -

# zfs list nas02
NAME    USED  AVAIL  REFER  MOUNTPOINT
nas02    88K   675T   201K  none

ZFS Settings

As the primary role of this storage will be keeping files I will use one of the largest values for recordsize – 1 MB – this helps to get a better compression ratio.

… but it will also serve as an iSCSI Target, in which we will try to fit the native 4k blocks – thus the 4096-byte recordsize setting for the iSCSI dataset.

# zfs set compression=lz4         nas02
# zfs set atime=off               nas02
# zfs set mountpoint=none         nas02
# zfs set recordsize=1m           nas02
# zfs set redundant_metadata=most nas02
# zfs create                      nas02/nfs
# zfs create                      nas02/smb
# zfs create                      nas02/iscsi
# zfs set recordsize=4k           nas02/iscsi
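
To make sure all these properties landed where they should, a quick check with zfs get (same property names as above):

# zfs get -r compression,atime,recordsize,redundant_metadata nas02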

Also one word on redundant_metadata as it's not an obvious parameter. To quote the zfs(8) man page.

# man zfs
(...)
redundant_metadata=all | most
  Controls what types of metadata are stored redundantly.  ZFS stores
  an extra copy of metadata, so that if a single block is corrupted,
  the amount of user data lost is limited.  This extra copy is in
  addition to any redundancy provided at the pool level (e.g. by
  mirroring or RAID-Z), and is in addition to an extra copy specified
  by the copies property (up to a total of 3 copies).  For example if
  the pool is mirrored, copies=2, and redundant_metadata=most, then ZFS
  stores 6 copies of most metadata, and 4 copies of data and some
  metadata.

  When set to all, ZFS stores an extra copy of all metadata.  If a
  single on-disk block is corrupt, at worst a single block of user data
  (which is recordsize bytes long) can be lost.

  When set to most, ZFS stores an extra copy of most types of metadata.
  This can improve performance of random writes, because less metadata
  must be written.  In practice, at worst about 100 blocks (of
  recordsize bytes each) of user data can be lost if a single on-disk
  block is corrupt.  The exact behavior of which metadata blocks are
  stored redundantly may change in future releases.

  The default value is all.
(...)

From the description above we can see that it's mostly useful on single-device pools, because when we have redundancy based on RAIDZ2 (RAID6 equivalent) we do not need to keep additional redundant copies of metadata. This helps to increase write performance.

For the record – iSCSI ZFS zvols are created with a command like the one below – as sparse volumes – also called Thin Provisioning mode.

# zfs create -s -V 16T nas02/iscsi/test
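
Such a zvol is then exported over iSCSI by the ctld(8) daemon (hence ctld_enable=YES later in /etc/rc.conf). Below is only a minimal /etc/ctl.conf sketch – the target name and the no-authentication setup are placeholders, not the actual production configuration.

portal-group pg0 {
  discovery-auth-group no-authentication
  listen 0.0.0.0
}

target iqn.2019-06.local.nas02:test {
  auth-group no-authentication
  portal-group pg0
  lun 0 {
    path /dev/zvol/nas02/iscsi/test
  }
}

After editing /etc/ctl.conf a service ctld reload is enough to pick up the changes.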

As we have SPARE disks we will also need to enable the zfsd(8) daemon by adding zfsd_enable=YES to the /etc/rc.conf file.
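
If you prefer not to edit the file by hand, sysrc(8) will add the knob for you and the daemon can be started right away:

# sysrc zfsd_enable=YES
# service zfsd start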

We also need to enable the autoreplace property for our pool because by default it's set to off.

# zpool get autoreplace nas02
NAME   PROPERTY     VALUE    SOURCE
nas02  autoreplace  off      default

# zpool set autoreplace=on nas02

# zpool get autoreplace nas02
NAME   PROPERTY     VALUE    SOURCE
nas02  autoreplace  on       local

Other ZFS settings are in the /boot/loader.conf file. As this system has 128 GB RAM we will let ZFS use 50 to 75% of that amount for ARC.

# grep vfs.zfs /boot/loader.conf
  vfs.zfs.prefetch_disable=1
  vfs.zfs.cache_flush_disable=1
  vfs.zfs.vdev.cache.size=16M
  vfs.zfs.arc_min=64G
  vfs.zfs.arc_max=96G
  vfs.zfs.deadman_enabled=0
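
Whether ZFS actually honors these limits can be checked at runtime through the ARC kstat sysctls – a small example (all values are in bytes):

# sysctl kstat.zfs.misc.arcstats.c_min kstat.zfs.misc.arcstats.c_max kstat.zfs.misc.arcstats.size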

Network Configuration

This is what I really like about FreeBSD. To set up LACP link aggregation you just need 5 lines in the /etc/rc.conf file. On Red Hat Enterprise Linux you would need several files with many lines each.

# head -5 /etc/rc.conf
  defaultrouter="10.20.30.254"
  ifconfig_ixl0="up"
  ifconfig_ixl1="up"
  cloned_interfaces="lagg0"
  ifconfig_lagg0="laggproto lacp laggport ixl0 laggport ixl1 10.20.30.2/24 up"

# ifconfig lagg0
lagg0: flags=8843 metric 0 mtu 1500
        options=e507bb
        ether a0:42:3f:a0:42:3f
        inet 10.20.30.2 netmask 0xffffff00 broadcast 10.20.30.255
        laggproto lacp lagghash l2,l3,l4
        laggport: ixl0 flags=1c
        laggport: ixl1 flags=1c
        groups: lagg
        media: Ethernet autoselect
        status: active
        nd6 options=29

The Intel X710 DA-2 10GE network adapter is fully supported under FreeBSD by the ixl(4) driver.

intel-x710-da-2.jpg

Cisco Nexus Configuration

This is the Cisco Nexus configuration needed to enable LACP aggregation.

First the ports.

NEXUS-1  Eth1/32  NAS02_IXL0  connected 3  full  a-10G  SFP-H10GB-A
NEXUS-2  Eth1/32  NAS02_IXL1  connected 3  full  a-10G  SFP-H10GB-A

… and now aggregation.

interface Ethernet1/32
  description NAS02_IXL1
  switchport
  switchport access vlan 3
  mtu 9216
  channel-group 128 mode active
  no shutdown
!
interface port-channel128
  description NAS02
  switchport
  switchport access vlan 3
  mtu 9216
  vpc 128

… and the same/similar on the second Cisco Nexus NEXUS-2 switch.

FreeBSD Configuration

These are the three most important configuration files on any FreeBSD system.

I will now post all settings I use on this storage system.

The /etc/rc.conf file.

# cat /etc/rc.conf
# NETWORK
  hostname="nas02.local"
  defaultrouter="10.20.30.254"
  ifconfig_ixl0="up"
  ifconfig_ixl1="up"
  cloned_interfaces="lagg0"
  ifconfig_lagg0="laggproto lacp laggport ixl0 laggport ixl1 10.20.30.2/24 up"

# KERNEL MODULES
  kld_list="${kld_list} aesni"

# DAEMON | YES
  zfs_enable=YES
  zfsd_enable=YES
  sshd_enable=YES
  ctld_enable=YES
  powerd_enable=YES

# DAEMON | NFS SERVER
  nfs_server_enable=YES
  nfs_client_enable=YES
  rpc_lockd_enable=YES
  rpc_statd_enable=YES
  rpcbind_enable=YES
  mountd_enable=YES
  mountd_flags="-r"

# OTHER
  dumpdev=NO

The /boot/loader.conf file.

# cat /boot/loader.conf
# BOOT OPTIONS
  autoboot_delay=3
  kern.geom.label.disk_ident.enable=0
  kern.geom.label.gptid.enable=0

# DISABLE INTEL HT
  machdep.hyperthreading_allowed=0

# UPDATE INTEL CPU MICROCODE AT BOOT BEFORE KERNEL IS LOADED
  cpu_microcode_load=YES
  cpu_microcode_name=/boot/firmware/intel-ucode.bin

# MODULES
  zfs_load=YES
  aio_load=YES

# RACCT/RCTL RESOURCE LIMITS
  kern.racct.enable=1

# DISABLE MEMORY TEST @ BOOT
  hw.memtest.tests=0

# PIPE KVA LIMIT | 320 MB
  kern.ipc.maxpipekva=335544320

# IPC
  kern.ipc.shmseg=1024
  kern.ipc.shmmni=1024
  kern.ipc.shmseg=1024
  kern.ipc.semmns=512
  kern.ipc.semmnu=256
  kern.ipc.semume=256
  kern.ipc.semopm=256
  kern.ipc.semmsl=512

# LARGE PAGE MAPPINGS
  vm.pmap.pg_ps_enabled=1

# ZFS TUNING
  vfs.zfs.prefetch_disable=1
  vfs.zfs.cache_flush_disable=1
  vfs.zfs.vdev.cache.size=16M
  vfs.zfs.arc_min=64G
  vfs.zfs.arc_max=96G

# ZFS DISABLE PANIC ON STALE I/O
  vfs.zfs.deadman_enabled=0

# NEWCONS SUSPEND
  kern.vt.suspendswitch=0

The /etc/sysctl.conf file.

# cat /etc/sysctl.conf
# ZFS ASHIFT
  vfs.zfs.min_auto_ashift=12

# SECURITY
  security.bsd.stack_guard_page=1

# SECURITY INTEL MDS (MICROARCHITECTURAL DATA SAMPLING) MITIGATION
  hw.mds_disable=3

# DISABLE ANNOYING THINGS
  kern.coredump=0
  hw.syscons.bell=0

# IPC
  kern.ipc.shmmax=4294967296
  kern.ipc.shmall=2097152
  kern.ipc.somaxconn=4096
  kern.ipc.maxsockbuf=5242880
  kern.ipc.shm_allow_removed=1

# NETWORK
  kern.ipc.maxsockbuf=16777216
  kern.ipc.soacceptqueue=1024
  net.inet.tcp.recvbuf_max=8388608
  net.inet.tcp.sendbuf_max=8388608
  net.inet.tcp.mssdflt=1460
  net.inet.tcp.minmss=1300
  net.inet.tcp.syncache.rexmtlimit=0
  net.inet.tcp.syncookies=0
  net.inet.tcp.tso=0
  net.inet.ip.process_options=0
  net.inet.ip.random_id=1
  net.inet.ip.redirect=0
  net.inet.icmp.drop_redirect=1
  net.inet.tcp.always_keepalive=0
  net.inet.tcp.drop_synfin=1
  net.inet.tcp.fast_finwait2_recycle=1
  net.inet.tcp.icmp_may_rst=0
  net.inet.tcp.msl=8192
  net.inet.tcp.path_mtu_discovery=0
  net.inet.udp.blackhole=1
  net.inet.tcp.blackhole=2
  net.inet.tcp.hostcache.expire=7200
  net.inet.tcp.delacktime=20

Purpose

Why would one build such an appliance? Because it's a lot cheaper than getting a ‘branded’ one. Think about Dell EMC Data Domain for example – and not just ‘any’ Data Domain but almost the highest-end one – the Data Domain DD9300 at least. It would cost at least about ten times more … with smaller capacity and taking not 4U but closer to 14U of rack space with three DS60 expanders.

But you can actually make this FreeBSD Enterprise Storage behave like Dell EMC Data Domain … or like their Dell EMC Elastic Cloud Storage for example.

The Dell EMC CloudBoost can be deployed somewhere on your VMware stack to provide the DDBoost deduplication. Then you would need OpenStack Swift as it's one of the supported backend devices.

emc-cloudboost-swift-cover.png

emc-cloudboost-swift-support.png

The OpenStack Swift package in FreeBSD is about 4-5 years behind reality (2.2.2) so you will have to use Bhyve here.

# pkg search swift
(...)
py27-swift-2.2.2_1             Highly available, distributed, eventually consistent object/blob store
(...)

Create a Bhyve virtual machine on this FreeBSD Enterprise Storage with a CentOS 7.6 system for example, then set up Swift there – it will work. With 20 physical cores to spare and 128 GB RAM you would not even notice it's there.
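
A hedged sketch of one possible way to do that with the sysutils/vm-bhyve frontend is shown below – the dataset, template and VM names are only placeholders/assumptions, the installation ISO has to be fetched first (for example with the vm iso subcommand), and any other Bhyve workflow will do just as well.

# pkg install vm-bhyve grub2-bhyve
# zfs create -o mountpoint=/vm nas02/vm
# sysrc vm_enable=YES
# sysrc vm_dir="zfs:nas02/vm"
# vm init
# cp /usr/local/share/examples/vm-bhyve/* /vm/.templates/
# vm create -t centos7 -s 50G swift01
# vm install swift01 CentOS-7-x86_64-Minimal-1810.iso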

This way you can use Dell EMC Networker with more than ten times cheaper storage.

In the past I also wrote about IBM Spectrum Protect (TSM) which would also greatly benefit from FreeBSD Enterprise Storage. I actually also use this FreeBSD based storage as space for IBM Spectrum Protect (TSM) container pool directories. Exported via iSCSI it works like a charm.

You can also compare that FreeBSD Enterprise Storage to other storage appliances like iXsystems TrueNAS or EXAGRID.

Performance

You surely want to know how fast this FreeBSD Enterprise Storage performs :)

I will gladly share all the performance data that I gathered.

Network Performance

First the network performance.

I used iperf3 as the benchmark.

I started the server on the FreeBSD side.

# iperf3 -s

… and then I started client on the Windows Server 2016 machine.

C:\iperf-3.1.3-win64>iperf3.exe -c nas02 -P 8
(...)
[SUM]   0.00-10.00  sec  10.8 GBytes  9.26 Gbits/sec                  receiver
(..)

This is with MTU 1500 – no Jumbo Frames unfortunately :(

Unfortunately this system has only one physical 10GE interface, but I also did other tests – using two such boxes, each with a single 10GE interface. That saturated the dual 10GE LACP on the FreeBSD side nicely.

I also exported NFS and iSCSI to a Red Hat Enterprise Linux system. The network performance was about 500-600 MB/s on a single 10GE interface. That would be 1000-1200 MB/s on the LACP aggregation.

Disk Subsystem Performance

Now the disk subsystem.

First some naive tests using diskinfo(8) – FreeBSD's built-in tool.

# diskinfo -ctv /dev/da12
/dev/da12
        512             # sectorsize
        12000138625024  # mediasize in bytes (11T)
        23437770752     # mediasize in sectors
        4096            # stripesize
        0               # stripeoffset
        1458933         # Cylinders according to firmware.
        255             # Heads according to firmware.
        63              # Sectors according to firmware.
        ATA TOSHIBA MG07ACA1    # Disk descr.
        98H0A11KF95G    # Disk ident.
        id1,enc@n500e081010445dbd/type@0/slot@c/elmdesc@ArrayDevice11   # Physical path
        No              # TRIM/UNMAP support
        7200            # Rotation rate in RPM
        Not_Zoned       # Zone Mode

I/O command overhead:
        time to read 10MB block      0.067031 sec       =    0.003 msec/sector
        time to read 20480 sectors   2.619989 sec       =    0.128 msec/sector
        calculated command overhead                     =    0.125 msec/sector

Seek times:
        Full stroke:      250 iter in   5.665880 sec =   22.664 msec
        Half stroke:      250 iter in   4.263047 sec =   17.052 msec
        Quarter stroke:   500 iter in   6.867914 sec =   13.736 msec
        Short forward:    400 iter in   3.057913 sec =    7.645 msec
        Short backward:   400 iter in   1.979287 sec =    4.948 msec
        Seq outer:       2048 iter in   0.169472 sec =    0.083 msec
        Seq inner:       2048 iter in   0.469630 sec =    0.229 msec

Transfer rates:
        outside:       102400 kbytes in   0.478251 sec =   214114 kbytes/sec
        middle:        102400 kbytes in   0.605701 sec =   169060 kbytes/sec
        inside:        102400 kbytes in   1.303909 sec =    78533 kbytes/sec

So now we know how fast a single disk is.

Let's repeat the same test on the ZFS zvol device.

# diskinfo -ctv /dev/zvol/nas02/iscsi/test
/dev/zvol/nas02/iscsi/test
        512             # sectorsize
        17592186044416  # mediasize in bytes (16T)
        34359738368     # mediasize in sectors
        65536           # stripesize
        0               # stripeoffset
        Yes             # TRIM/UNMAP support
        Unknown         # Rotation rate in RPM

I/O command overhead:
        time to read 10MB block      0.004512 sec       =    0.000 msec/sector
        time to read 20480 sectors   0.196824 sec       =    0.010 msec/sector
        calculated command overhead                     =    0.009 msec/sector

Seek times:
        Full stroke:      250 iter in   0.006151 sec =    0.025 msec
        Half stroke:      250 iter in   0.008228 sec =    0.033 msec
        Quarter stroke:   500 iter in   0.014062 sec =    0.028 msec
        Short forward:    400 iter in   0.010564 sec =    0.026 msec
        Short backward:   400 iter in   0.011725 sec =    0.029 msec
        Seq outer:       2048 iter in   0.028198 sec =    0.014 msec
        Seq inner:       2048 iter in   0.028416 sec =    0.014 msec

Transfer rates:
        outside:       102400 kbytes in   0.036938 sec =  2772213 kbytes/sec
        middle:        102400 kbytes in   0.043076 sec =  2377194 kbytes/sec
        inside:        102400 kbytes in   0.034260 sec =  2988908 kbytes/sec

Almost 3 GB/s – not bad.

Time for an even more old-school test – the immortal dd(1) command.

This is with compression=off setting.

One process.

# dd if=/dev/zero of=FILE bs=128m status=progress
26172456960 bytes (26 GB, 24 GiB) transferred 16.074s, 1628 MB/s
202+0 records in
201+0 records out
26977763328 bytes transferred in 16.660884 secs (1619227644 bytes/sec)

Four concurrent processes.

# dd if=/dev/zero of=FILE${X} bs=128m status=progress
80933289984 bytes (81 GB, 75 GiB) transferred 98.081s, 825 MB/s
608+0 records in
608+0 records out
81604378624 bytes transferred in 98.990579 secs (824365101 bytes/sec)

Eight concurrent processes.

# dd if=/dev/zero of=FILE${X} bs=128m status=progress
174214610944 bytes (174 GB, 162 GiB) transferred 385.042s, 452 MB/s
1302+0 records in
1301+0 records out
174617264128 bytes transferred in 385.379296 secs (453104943 bytes/sec)
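
For the record – the concurrent runs above are nothing more than several dd(1) processes writing to separate files. A minimal sketch of one way to launch such streams (they could just as well be started in separate terminals; count=200 bounds each stream to about 25 GB):

# for X in 1 2 3 4 5 6 7 8; do
    dd if=/dev/zero of=FILE${X} bs=128m count=200 status=progress &
  done
# wait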

Let's summarize that data.

1 STREAM(s) ~ 1600 MB/s ~ 1.5 GB/s
4 STREAM(s) ~ 3300 MB/s ~ 3.2 GB/s
8 STREAM(s) ~ 3600 MB/s ~ 3.5 GB/s

So the disk subsystem is able to squeeze out 3.5 GB/s of sustained sequential write speed. That means that if we wanted to saturate it we would need to add two additional 10GE interfaces.

The disks were stressed only to about 55%, which you can see in another useful FreeBSD tool – the gstat(8) command.

n10.png

Time for more ‘intelligent’ tests. The blogbench test.

First with compression disabled.

# time blogbench -d .
Frequency = 10 secs
Scratch dir = [.]
Spawning 3 writers...
Spawning 1 rewriters...
Spawning 5 commenters...
Spawning 100 readers...
Benchmarking for 30 iterations.
The test will run during 5 minutes.
(...)
Final score for writes:          6476
Final score for reads :        660436

blogbench -d .  280.58s user 4974.41s system 1748% cpu 5:00.54 total

Second with compression set to LZ4.

# time blogbench -d .
Frequency = 10 secs
Scratch dir = [.]
Spawning 3 writers...
Spawning 1 rewriters...
Spawning 5 commenters...
Spawning 100 readers...
Benchmarking for 30 iterations.
The test will run during 5 minutes.
(...)
Final score for writes:          7087
Final score for reads :        733932

blogbench -d .  299.08s user 5415.04s system 1900% cpu 5:00.68 total

Compression did not help much, but it helped.

To have some comparison we will run the same test on the system ZFS pool – two Intel SSD DC S3500 240 GB drives in a mirror – which have the following features.

The Intel SSD DC S3500 240 GB drives:

  • Sequential Read (up to) 500 MB/s
  • Sequential Write (up to) 260 MB/s
  • Random Read (100% Span) 75000 IOPS
  • Random Write (100% Span) 7500 IOPS

# time blogbench -d .
Frequency = 10 secs
Scratch dir = [.]
Spawning 3 writers...
Spawning 1 rewriters...
Spawning 5 commenters...
Spawning 100 readers...
Benchmarking for 30 iterations.
The test will run during 5 minutes.
(...)
Final score for writes:          6109
Final score for reads :        654099

blogbench -d .  278.73s user 5058.75s system 1777% cpu 5:00.30 total

Now the randomio test. It's a multithreaded disk I/O microbenchmark.

The usage is as follows.

usage: randomio filename nr_threads write_fraction_of_io fsync_fraction_of_writes io_size nr_seconds_between_samples

filename                    Filename or device to read/write.
write_fraction_of_io        What fraction of I/O should be writes - for example 0.25 for 25% write.
fsync_fraction_of_writes    What fraction of writes should be fsync'd.
io_size                     How many bytes to read/write (multiple of 512 bytes).
nr_seconds_between_samples  How many seconds to average samples over.

The randomio with 4k block.

# zfs create -s -V 1T nas02/iscsi/test
# randomio /dev/zvol/nas02/iscsi/test 8 0.25 1 4096 10
  total |  read:         latency (ms)       |  write:        latency (ms)
   iops |   iops   min    avg    max   sdev |   iops   min    avg    max   sdev
--------+-----------------------------------+----------------------------------
54137.7 |40648.4   0.0    0.1  575.8    2.2 |13489.4   0.0    0.3  405.8    2.6
66248.4 |49641.5   0.0    0.1   19.6    0.3 |16606.9   0.0    0.2   26.4    0.7
66411.0 |49817.2   0.0    0.1   19.7    0.3 |16593.8   0.0    0.2   20.3    0.7
64158.9 |48142.8   0.0    0.1  254.7    0.7 |16016.1   0.0    0.2  130.4    1.0
48454.1 |36390.8   0.0    0.1  542.8    2.7 |12063.3   0.0    0.3  507.5    3.2
66796.1 |50067.4   0.0    0.1   24.1    0.3 |16728.7   0.0    0.2   23.4    0.7
58512.2 |43851.7   0.0    0.1  576.5    1.7 |14660.5   0.0    0.2  307.2    1.7
63195.8 |47341.8   0.0    0.1  261.6    0.9 |15854.1   0.0    0.2  361.1    1.9
67086.0 |50335.6   0.0    0.1   20.4    0.3 |16750.4   0.0    0.2   25.1    0.8
67429.8 |50549.6   0.0    0.1   21.8    0.3 |16880.3   0.0    0.2   20.6    0.7
^C

… and with a 512-byte sector.

# zfs create -s -V 1T nas02/iscsi/test
# randomio /dev/zvol/nas02/iscsi/TEST 8 0.25 1 512 10
  total |  read:         latency (ms)       |  write:        latency (ms)
   iops |   iops   min    avg    max   sdev |   iops   min    avg    max   sdev
--------+-----------------------------------+----------------------------------
58218.9 |43712.0   0.0    0.1  501.5    2.1 |14506.9   0.0    0.2  272.5    1.6
66325.3 |49703.8   0.0    0.1  352.0    0.9 |16621.4   0.0    0.2  352.0    1.5
68130.5 |51100.8   0.0    0.1   24.6    0.3 |17029.7   0.0    0.2   24.4    0.7
68465.3 |51352.3   0.0    0.1   19.9    0.3 |17112.9   0.0    0.2   23.8    0.7
54903.5 |41249.1   0.0    0.1  399.3    1.9 |13654.4   0.0    0.3  335.8    2.2
61259.8 |45898.7   0.0    0.1  574.6    1.7 |15361.0   0.0    0.2  371.5    1.7
68483.3 |51313.1   0.0    0.1   22.9    0.3 |17170.3   0.0    0.2   26.1    0.7
56713.7 |42524.7   0.0    0.1  373.5    1.8 |14189.1   0.0    0.2  438.5    2.7
68861.4 |51657.0   0.0    0.1   21.0    0.3 |17204.3   0.0    0.2   21.7    0.7
68602.0 |51438.4   0.0    0.1   19.5    0.3 |17163.7   0.0    0.2   23.7    0.7
^C

Both randomio tests were run with compression set to LZ4.

Next is the bonnie++ benchmark. It was run with compression set to LZ4.

# bonnie++ -d . -u root
Using uid:0, gid:0.
Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version  1.97       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
nas02.local 261368M   139  99 775132  99 589190  99   383  99 1638929  99 12930 2046
Latency             60266us    7030us    7059us   21553us    3844us    5710us
Version  1.97       ------Sequential Create------ --------Random Create--------
nas02.local         -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ 12680  44 +++++ +++ +++++ +++ 30049  99
Latency              2619us      43us     714ms    2748us      28us      58us

… and last but not least the fio benchmark. Also with LZ4 compression enabled.

# fio --randrepeat=1 --direct=1 --gtod_reduce=1 --name=test --filename=random_read_write.fio --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=64
fio-3.13
Starting 1 process
Jobs: 1 (f=1): [m(1)][98.0%][r=38.0MiB/s,w=12.2MiB/s][r=9735,w=3128 IOPS][eta 00m:05s]
test: (groupid=0, jobs=1): err= 0: pid=35368: Tue Jun 18 15:14:44 2019
  read: IOPS=3157, BW=12.3MiB/s (12.9MB/s)(3070MiB/248872msec)
   bw (  KiB/s): min= 9404, max=57732, per=98.72%, avg=12469.84, stdev=3082.99, samples=497
   iops        : min= 2351, max=14433, avg=3117.15, stdev=770.74, samples=497
  write: IOPS=1055, BW=4222KiB/s (4323kB/s)(1026MiB/248872msec)
   bw (  KiB/s): min= 3179, max=18914, per=98.71%, avg=4166.60, stdev=999.23, samples=497
   iops        : min=  794, max= 4728, avg=1041.25, stdev=249.76, samples=497
  cpu          : usr=1.11%, sys=88.64%, ctx=677981, majf=0, minf=0
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=785920,262656,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=12.3MiB/s (12.9MB/s), 12.3MiB/s-12.3MiB/s (12.9MB/s-12.9MB/s), io=3070MiB (3219MB), run=248872-248872msec
  WRITE: bw=4222KiB/s (4323kB/s), 4222KiB/s-4222KiB/s (4323kB/s-4323kB/s), io=1026MiB (1076MB), run=248872-248872msec

I don't know about you, but I am satisfied with this performance :)

FreeNAS

Originally I really wanted to use FreeNAS on these boxes and I even installed FreeNAS on them. It ran nicely but … the security part of FreeNAS was not the best.

This is the output of the pkg audit command. Quite scary.

root@freenas[~]# pkg audit -F
Fetching vuln.xml.bz2: 100%  785 KiB 804.3kB/s    00:01
python27-2.7.15 is vulnerable:
Python -- NULL pointer dereference vulnerability
CVE: CVE-2019-5010
WWW: https://vuxml.FreeBSD.org/freebsd/d74371d2-4fee-11e9-a5cd-1df8a848de3d.html

curl-7.62.0 is vulnerable:
curl -- multiple vulnerabilities
CVE: CVE-2019-3823
CVE: CVE-2019-3822
CVE: CVE-2018-16890
WWW: https://vuxml.FreeBSD.org/freebsd/714b033a-2b09-11e9-8bc3-610fd6e6cd05.html

libgcrypt-1.8.2 is vulnerable:
libgcrypt -- side-channel attack vulnerability
CVE: CVE-2018-0495
WWW: https://vuxml.FreeBSD.org/freebsd/9b5162de-6f39-11e8-818e-e8e0b747a45a.html

python36-3.6.5_1 is vulnerable:
Python -- NULL pointer dereference vulnerability
CVE: CVE-2019-5010
WWW: https://vuxml.FreeBSD.org/freebsd/d74371d2-4fee-11e9-a5cd-1df8a848de3d.html

pango-1.42.0 is vulnerable:
pango -- remote DoS vulnerability
CVE: CVE-2018-15120
WWW: https://vuxml.FreeBSD.org/freebsd/5a757a31-f98e-4bd4-8a85-f1c0f3409769.html

py36-requests-2.18.4 is vulnerable:
www/py-requests -- Information disclosure vulnerability
WWW: https://vuxml.FreeBSD.org/freebsd/50ad9a9a-1e28-11e9-98d7-0050562a4d7b.html

libnghttp2-1.31.0 is vulnerable:
nghttp2 -- Denial of service due to NULL pointer dereference
CVE: CVE-2018-1000168
WWW: https://vuxml.FreeBSD.org/freebsd/1fccb25e-8451-438c-a2b9-6a021e4d7a31.html

gnupg-2.2.6 is vulnerable:
gnupg -- unsanitized output (CVE-2018-12020)
CVE: CVE-2017-7526
CVE: CVE-2018-12020
WWW: https://vuxml.FreeBSD.org/freebsd/7da0417f-6b24-11e8-84cc-002590acae31.html

py36-cryptography-2.1.4 is vulnerable:
py-cryptography -- tag forgery vulnerability
CVE: CVE-2018-10903
WWW: https://vuxml.FreeBSD.org/freebsd/9e2d0dcf-9926-11e8-a92d-0050562a4d7b.html

perl5-5.26.1 is vulnerable:
perl -- multiple vulnerabilities
CVE: CVE-2018-6913
CVE: CVE-2018-6798
CVE: CVE-2018-6797
WWW: https://vuxml.FreeBSD.org/freebsd/41c96ffd-29a6-4dcc-9a88-65f5038fa6eb.html

libssh2-1.8.0,3 is vulnerable:
libssh2 -- multiple issues
CVE: CVE-2019-3862
CVE: CVE-2019-3861
CVE: CVE-2019-3860
CVE: CVE-2019-3858
WWW: https://vuxml.FreeBSD.org/freebsd/6e58e1e9-2636-413e-9f84-4c0e21143628.html

git-lite-2.17.0 is vulnerable:
Git -- Fix memory out-of-bounds and remote code execution vulnerabilities (CVE-2018-11233 and CVE-2018-11235)
CVE: CVE-2018-11235
CVE: CVE-2018-11233
WWW: https://vuxml.FreeBSD.org/freebsd/c7a135f4-66a4-11e8-9e63-3085a9a47796.html

gnutls-3.5.18 is vulnerable:
GnuTLS -- double free, invalid pointer access
CVE: CVE-2019-3836
CVE: CVE-2019-3829
WWW: https://vuxml.FreeBSD.org/freebsd/fb30db8f-62af-11e9-b0de-001cc0382b2f.html

13 problem(s) in the installed packages found.

root@freenas[~]# uname -a
FreeBSD freenas.local 11.2-STABLE FreeBSD 11.2-STABLE #0 r325575+95cc58ca2a0(HEAD): Mon May  6 19:08:58 EDT 2019     root@mp20.tn.ixsystems.com:/freenas-releng/freenas/_BE/objs/freenas-releng/freenas/_BE/os/sys/FreeNAS.amd64  amd64

root@freenas[~]# freebsd-version -uk
11.2-STABLE
11.2-STABLE

root@freenas[~]# sockstat -l4
USER     COMMAND    PID   FD PROTO  LOCAL ADDRESS         FOREIGN ADDRESS
root     uwsgi-3.6  4006  3  tcp4   127.0.0.1:9042        *:*
root     uwsgi-3.6  3188  3  tcp4   127.0.0.1:9042        *:*
nobody   mdnsd      3144  4  udp4   *:31417               *:*
nobody   mdnsd      3144  6  udp4   *:5353                *:*
www      nginx      3132  6  tcp4   *:443                 *:*
www      nginx      3132  8  tcp4   *:80                  *:*
root     nginx      3131  6  tcp4   *:443                 *:*
root     nginx      3131  8  tcp4   *:80                  *:*
root     ntpd       2823  21 udp4   *:123                 *:*
root     ntpd       2823  22 udp4   10.49.13.99:123       *:*
root     ntpd       2823  25 udp4   127.0.0.1:123         *:*
root     sshd       2743  5  tcp4   *:22                  *:*
root     syslog-ng  2341  19 udp4   *:1031                *:*
nobody   mdnsd      2134  3  udp4   *:39020               *:*
nobody   mdnsd      2134  5  udp4   *:5353                *:*
root     python3.6  236   22 tcp4   *:6000                *:*


I even tried to get an explanation of why FreeNAS has such outdated and insecure packages in their latest version – FreeNAS 11.2-U3 Vulnerabilities – a thread I started on their forums.

Unfortunately it's their policy, which you can summarize as ‘do not touch/change versions if it's working’ – at least that is the impression I got.

Because of these security holes I can not recommend the use of FreeNAS and I moved to the original – the FreeBSD system.

One other interesting note. After I installed FreeBSD I wanted to import the ZFS pool created by FreeNAS. This is what I got after executing the zpool import command.

# zpool import
   pool: nas02_gr06
     id: 1275660523517109367
  state: ONLINE
 status: The pool was last accessed by another system.
 action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
   see: http://illumos.org/msg/ZFS-8000-EY
 config:

        nas02_gr06  ONLINE
          raidz2-0  ONLINE
            da58p2  ONLINE
            da59p2  ONLINE
            da60p2  ONLINE
            da61p2  ONLINE
            da62p2  ONLINE
            da63p2  ONLINE
            da64p2  ONLINE
            da26p2  ONLINE
            da65p2  ONLINE
            da23p2  ONLINE
            da29p2  ONLINE
            da66p2  ONLINE
            da67p2  ONLINE
            da68p2  ONLINE
        spares
          da69p2

   pool: nas02_gr05
     id: 5642709896812665361
  state: ONLINE
 status: The pool was last accessed by another system.
 action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
   see: http://illumos.org/msg/ZFS-8000-EY
 config:

        nas02_gr05  ONLINE
          raidz2-0  ONLINE
            da20p2  ONLINE
            da30p2  ONLINE
            da34p2  ONLINE
            da50p2  ONLINE
            da28p2  ONLINE
            da38p2  ONLINE
            da51p2  ONLINE
            da52p2  ONLINE
            da27p2  ONLINE
            da32p2  ONLINE
            da53p2  ONLINE
            da54p2  ONLINE
            da55p2  ONLINE
            da56p2  ONLINE
        spares
          da57p2

   pool: nas02_gr04
     id: 2460983830075205166
  state: ONLINE
 status: The pool was last accessed by another system.
 action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
   see: http://illumos.org/msg/ZFS-8000-EY
 config:

        nas02_gr04  ONLINE
          raidz2-0  ONLINE
            da44p2  ONLINE
            da37p2  ONLINE
            da18p2  ONLINE
            da36p2  ONLINE
            da45p2  ONLINE
            da19p2  ONLINE
            da22p2  ONLINE
            da33p2  ONLINE
            da35p2  ONLINE
            da21p2  ONLINE
            da31p2  ONLINE
            da47p2  ONLINE
            da48p2  ONLINE
            da49p2  ONLINE
        spares
          da46p2

   pool: nas02_gr03
     id: 4878868173820164207
  state: ONLINE
 status: The pool was last accessed by another system.
 action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
   see: http://illumos.org/msg/ZFS-8000-EY
 config:

        nas02_gr03  ONLINE
          raidz2-0  ONLINE
            da81p2  ONLINE
            da71p2  ONLINE
            da14p2  ONLINE
            da15p2  ONLINE
            da80p2  ONLINE
            da16p2  ONLINE
            da88p2  ONLINE
            da17p2  ONLINE
            da40p2  ONLINE
            da41p2  ONLINE
            da25p2  ONLINE
            da42p2  ONLINE
            da24p2  ONLINE
            da43p2  ONLINE
        spares
          da39p2

   pool: nas02_gr02
     id: 3299037437134217744
  state: ONLINE
 status: The pool was last accessed by another system.
 action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
   see: http://illumos.org/msg/ZFS-8000-EY
 config:

        nas02_gr02  ONLINE
          raidz2-0  ONLINE
            da84p2  ONLINE
            da76p2  ONLINE
            da85p2  ONLINE
            da8p2   ONLINE
            da9p2   ONLINE
            da78p2  ONLINE
            da73p2  ONLINE
            da74p2  ONLINE
            da70p2  ONLINE
            da77p2  ONLINE
            da11p2  ONLINE
            da13p2  ONLINE
            da79p2  ONLINE
            da89p2  ONLINE
        spares
          da90p2

   pool: nas02_gr01
     id: 1132383125952900182
  state: ONLINE
 status: The pool was last accessed by another system.
 action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
   see: http://illumos.org/msg/ZFS-8000-EY
 config:

        nas02_gr01  ONLINE
          raidz2-0  ONLINE
            da91p2  ONLINE
            da75p2  ONLINE
            da0p2   ONLINE
            da82p2  ONLINE
            da1p2   ONLINE
            da83p2  ONLINE
            da2p2   ONLINE
            da3p2   ONLINE
            da4p2   ONLINE
            da5p2   ONLINE
            da86p2  ONLINE
            da6p2   ONLINE
            da7p2   ONLINE
            da72p2  ONLINE
        spares
          da87p2



It seems that FreeNAS does ZFS a little differently – they create a separate pool for every RAIDZ2 target, each with a dedicated spare. Interesting …

UPDATE 1 – BSD Now 305

The FreeBSD Enterprise 1 PB Storage article was featured in the BSD Now 305 – Changing Face of Unix episode.

Thanks for mentioning!

UPDATE 2 – Real Life Pictures in Data Center

Some of you asked for real life pictures of this monster. Below you will find several pics taken at the data center.

Front case with cabling.

tyan-real-01.jpg

Alternate front view.

tyan-real-09.jpg

Back of the case with cabling.

tyan-real-02.jpg

Top view with disks.

tyan-real-03

Alternate top view.

tyan-real-07.jpg

Disks slots zoom.

tyan-real-08.jpg

SSD and HDD disks.

tyan-real-06.jpg

EOF

Manage Photography the UNIX Way

After using UNIX for so many years you start to think the UNIX way. This article aims to automate and accelerate the flow of importing photos from a camera and storing them for future use.

When I had a lot of time I shot both RAW and JPEG images at the same time (a RAW and a JPEG file were written for every picture). Then I used one of the DxO Optics Pro/RawTherapee/Darktable applications to make these RAW files shine even more with mass conversion. Then I compared these to the out-of-camera JPEG files and kept only the one that suited me best. It was probably the best way of having ‘the best version’ of each photo but it also took a whole lot of time. Now as I do not have that much time I needed to find a way to make this process fast and almost seamless.

Hardware

I use SONY cameras because they are superior to other brands when it comes to price/performance ratio and also have some important features that are absent in other brands. For example SONY A-mount based cameras – the SONY a68 camera offers just so much more for a very small amount of money than any comparable Nikon or Canon competitor. If you want to get a grip on these differences take a look at my SONY a68 review at the DPReview site – https://www.dpreview.com/forums/thread/4152155 – available here.

a68-lcd.jpg

Besides the price/performance ratio SONY cameras are just too fun/too comfortable to use something different – while providing similar or better results than the Nikon/Canon competition. Take the viewfinder for example. Nikon/Canon cameras are ‘by default’ using the optical viewfinder and to switch to the LCD panel you need to manually push a button and switch into the PAINFULLY SLOW (autofocus is actually unusable) mode called Live View … but if you want to use the viewfinder again then you again need to switch that mode off with a button. How is it implemented by SONY? A SONY camera just automatically switches to the EVF when you put your eye to the viewfinder and switches back to the LCD automatically when you take your eye off of it … and autofocus is just as fast on both the viewfinder and the LCD. This is just one of the examples of course. For example Nikon cameras can not record movies when you are using the viewfinder – you can only do it with the LCD.

a68-flash.jpg

There is also the SONY E-mount system which utilizes newer/different ideas – it's generally much more expensive than the older A-mount system but has even more features than Canon/Nikon cameras. One of the selling points of SONY E-mount cameras is also their small size – which is the feature for which I recently switched from the SONY a68 (A-mount) to the SONY a5100 (E-mount) camera.

Approach

I basically use two SONY cameras.

The small and ultra portable SONY RX100 III is probably the best pocket/compact camera in the world when it comes to price/performance ratio. As it has a quite large 1 inch sensor (2.7 crop factor) it allows the use of high ISO values without that much noise, which allows shooting indoors in low light without much loss of quality. It also has a tiltable flash which you can point at the ceiling to get extra bounced light in low light situations indoors. This small gem generally has all the features that the SONY APS-C/Full Frame cameras have. Same menu interface with the same features. It's not some small crippled compact like a lot of cameras in this class. And it's fast too. It even features an EVF! It also features the XAVC S 50 Mbit video codec which helps greatly in low light situations. Of course in good light conditions this camera shines even more. As it has a 24-70mm f/1.8-2.8 light/fast lens it is very universal. Its Full Frame depth of field equivalent is even better than most APS-C cameras – its f/4.9-7.6 Full Frame depth of field equivalent is better – for example – than the SONY a6400 with its f/3.5-5.6 kit lens – which only has f/5.3-8.4 (because of the 1.5 crop ratio for APS-C).

rx100-evf-lcd-on.jpg

You can read more about depth of field equivalence here – https://www.dpreview.com/articles/2666934640/what-is-equivalence-and-why-should-i-care – a good article on DPReview explaining this.

The other SONY camera I used was the SONY a68 with the following lenses:

  • TAMRON 18-270mm f/3.5-5.6 – all-rounder
  • SONY 35mm f/1.8 – small bokeh low light friend
  • SIGMA 50-150mm f/2.8 – large bokeh friend
  • SAMYANG 85mm f/1.4 – manual focus bokeh master

… but as I checked my ‘habits’ it went this way most of the time:
– use/take the small/portable SONY RX100 III because it's convenient
– grab the SONY a68 with the 35mm f/1.8 at home for some bokeh pictures

If you are not sure what ‘bokeh’ means then please check Wikipedia article about it – https://en.wikipedia.org/wiki/Bokeh – available here.

I very rarely used the other lenses, which made me think about how to ‘optimize’ the SONY a68 A-mount setup. Also, because the SONY a68 built-in flash is not able to point up (to get extra light bounced off the ceiling indoors), I also needed a dedicated external SONY HVL-F20M flash on the ISO hot shoe which made this large camera even bigger.

I checked the SONY portfolio and got the older SONY a5100 E-mount camera instead. It has the nice and fast autofocus from the SONY a6000 camera along with the XAVC S video codec and a useful tilting LCD screen. It even has a touch screen which allows you to take a photo at the place where you touch the screen! It works similarly for movies – just touch where you want it to focus. It's probably the smallest SONY APS-C body – very close in size to the SONY RX100 III … and I got the SONY E-mount 35mm f/1.8 lens for it. I also missed the 85mm f/1.4 lens so I took a different route this time. As the E-mount system allows one to adapt older lenses with Lens Turbo adapters (about 0.7 ratio) I got an old used Minolta MD 56mm f/1.4 lens and an E-mount to MD Lens Turbo adapter from ALIEXPRESS. This way I got a small ultimate bokeh machine – with only one downside – manual focus – but the SONY a5100 provides a very nice implementation of Focus Peaking so it's still a pleasure to use.

a5100-lcd.jpg

Of course the SONY a5100 has its limitations – no viewfinder for example – but I VERY rarely used one anyway – of course intense outdoor light can sometimes be problematic without an EVF – but if someone wants to have an EVF then one should get one of the SONY a6000/a6300/a6400/a6500 cameras – they are not much larger and provide both an EVF and a hot shoe.

a5100-flash.jpg

Generally the SONY RX100 III, when powered on, is comparable in size to the SONY a5100 with the SONY 35mm f/1.8 lens. It's the powered-off state and the lens range (24-70mm on the SONY RX100 III) that make a difference – the SONY RX100 III even fits in a pocket – the SONY a5100 does not – maybe with the SONY 20mm f/2.8 lens.

If you have a bit more budget to spend I also recommend the SONY RX100 V/VA which also incorporates very fast phase-detection autofocus and 4k video. The SONY RX100 IV only offers 4k video but still has the slower contrast autofocus – thus it's IMHO pointless to get it. For the record – the SONY RX100 III also uses the slower contrast-based autofocus and has video up to FullHD (1080p).

top-a5100-a68.jpg

These cameras also share a nice feature – they can be charged directly by attaching a micro USB cable to them – very convenient – no need to carry dedicated external chargers for the batteries. I really liked the SONY a68 grip and lots of direct controls but I really like the size/compactness of the SONY a5100. While the SONY a5100 body weighs 283 grams the SONY a68 is 690 grams – for the body alone. Add a flash and a larger lens to it and you get the idea.

top-rx100-a5100-with-lens-size.jpg

Comparing the other side – the SONY RX100 III weighs 290 grams while the SONY a5100 weighs 437 grams with the SONY 35mm f/1.8 lens attached – not bad.

Gear Summary

I have settled on these two cameras for now.

  • SONY RX100 III – gives 24-70mm f/4.9-7.6 depth of field Full Frame equivalent
  • SONY a5100 with these lenses:
    • Sony 35mm f/1.8 OSS – gives 53mm f/2.7 depth of field Full Frame equivalent
    • Minolta MD 56mm f/1.4 with Lens Turbo 0.7x adapter – gives 59mm f/1.5 depth of field Full Frame equivalent

Scripts

I switched off shooting RAW+JPEG images and now I only shoot EXTRA FINE JPEG images with the Vivid profile and -0.7 EV (to not end up with overexposed images).

The 1st part is copying the images to a new directory. That means pictures from the DCIM directory and movies from the PRIVATE directory.

Now the first two scripts come into play – to rename the images to something useful. Each picture and video will get a YYYY.MM.DD.HHMM(x) name.

These are made by these two scripts:

  • photo-rename-images.sh
  • photo-rename-movies.sh

Links to the scripts will be posted later in the article.

The photo-rename-images.sh script uses jhead as a dependency.
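
The heart of such a rename is basically a single jhead invocation – a hedged sketch below (the real script does more bookkeeping); jhead reads the EXIF date and appends a, b, c … on its own when several pictures share the same minute:

% jhead -n%Y.%m.%d.%H%M *.JPG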

Now that we have everything named as it should be, the size needs to be addressed. The videos will be converted using ffmpeg and the images will be compressed to 92% JPEG quality with the convert utility from the ImageMagick suite. This is done by the following two scripts (a rough sketch of the underlying commands is shown after the list).

  • photo-requality.sh
  • photo-movie-audio-ac3.sh
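
Conceptually both steps boil down to something like the hedged sketch below – the real scripts add error handling and keep the file timestamps; the ffmpeg options mirror what can be seen in the conversion log later in this article:

% for F in *.jpg; do
    convert "${F}" -quality 92 "${F}"
  done

% for M in *.MP4; do
    ffmpeg -i "${M}" -c:v libx264 -crf 23 -c:a ac3 -b:a 160k "${M}.mkv"
  done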

One may ask why convert a JPEG from 99% to 92% and lose even more quality? Well, you should check the differences – one has to try really hard with very large zoom to find any. For most purposes these differences are negligible. You can also use a larger value to get somewhat better quality and smaller storage savings – take photo-requality.sh 95 for example as a compromise.

This is the comparison between the original out-of-camera JPEG file and the same file compressed to 92% quality using the convert utility. I was not able to spot any differences – maybe you will.

diff-crop.jpg

One may also be worried about quality loss in the videos as the size savings are that big. I also tried to find these differences and since it's really hard to find them, the storage savings are justified – at least for me.

I also recently added photo-flow.sh which takes two arguments. The first is the device under which the camera SD card is mounted – it's mmcsd0s1 on FreeBSD most of the time. The second is the directory – ~/photo.NEW – into which the pictures and videos will be dumped, renamed and (re)compressed.
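
Conceptually photo-flow.sh is just a copy step followed by the four helper scripts – below is only a rough sketch of that idea, the real script lives in the repository linked just below:

#!/bin/sh
# rough sketch only - usage: photo-flow.sh /media/mmcsd0s1 ~/photo.NEW
CARD=${1}
DEST=${2}/$( date +%Y.%m.%d ).DUMP
mkdir -p "${DEST}"
cp -v "${CARD}"/DCIM/*/*.JPG          "${DEST}"
cp -v "${CARD}"/PRIVATE/M4ROOT/CLIP/* "${DEST}"
cd "${DEST}" || exit 1
photo-rename-images.sh      # YYYY.MM.DD.HHMM(x).jpg names
photo-rename-movies.sh      # same naming scheme for movies
photo-requality.sh 92       # recompress JPEG files to 92% quality
photo-movie-audio-ac3.sh    # re-encode movies with ffmpeg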

I have put these scripts on my external (from WordPress) GitHub account – https://github.com/vermaden/scripts – you will find them all there.

Flow

When I attached the SD card from one of my cameras to my laptop it was automounted by my automount solution – described in the Automount Removable Media article – as the /media/mmcsd0s1 directory – that will be the first argument for the import scripts. I import new pictures into the ~/photo.NEW directory – that will be the second argument for the import scripts.

Below you will find example output of such an import/conversion process. It took about half an hour on a 2011 dual-core laptop (ThinkPad T420s). I omitted/cut large parts of repetitive output with (…) marks.

% photo-flow.sh /media/mmcsd0s1 ~/photo.NEW
/media/mmcsd0s1/DCIM/100MSDCF/DSC00390.JPG -> /home/vermaden/photo.NEW/2019.06.10.DUMP/DSC00390.JPG
/media/mmcsd0s1/DCIM/100MSDCF/DSC00391.JPG -> /home/vermaden/photo.NEW/2019.06.10.DUMP/DSC00391.JPG
/media/mmcsd0s1/DCIM/100MSDCF/DSC00393.JPG -> /home/vermaden/photo.NEW/2019.06.10.DUMP/DSC00393.JPG
(...)
/media/mmcsd0s1/DCIM/100MSDCF/DSC00462.JPG -> /home/vermaden/photo.NEW/2019.06.10.DUMP/DSC00462.JPG
/media/mmcsd0s1/DCIM/100MSDCF/DSC00463.JPG -> /home/vermaden/photo.NEW/2019.06.10.DUMP/DSC00463.JPG
/media/mmcsd0s1/DCIM/100MSDCF/DSC00464.JPG -> /home/vermaden/photo.NEW/2019.06.10.DUMP/DSC00464.JPG
/media/mmcsd0s1/PRIVATE/M4ROOT/CLIP/C0015.MP4 -> /home/vermaden/photo.NEW/2019.06.10.DUMP/C0015.MP4
/media/mmcsd0s1/PRIVATE/M4ROOT/CLIP/C0015M01.XML -> /home/vermaden/photo.NEW/2019.06.10.DUMP/C0015M01.XML

DSC00390.JPG --> 2019.05.08.0732.jpg
DSC00391.JPG --> 2019.05.08.0732a.jpg
DSC00393.JPG --> 2019.05.08.0732b.jpg
(...)
DSC00462.JPG --> 2019.06.07.2110c.jpg
DSC00463.JPG --> 2019.06.07.2110d.jpg
DSC00464.JPG --> 2019.06.07.2110e.jpg
C0015.MP4 -> 2019.06.01.2140.MP4
C0015M01.XML -> 2019.06.01.2140.XML
File './2019.05.22.0543.jpg' converted to '92' quality.
File './2019.06.07.0508a.jpg' converted to '92' quality.
File './2019.06.01.2141.jpg' converted to '92' quality.
(...)
File './2019.05.23.0124c.jpg' converted to '92' quality.
File './2019.06.01.2140e.jpg' converted to '92' quality.
File './2019.05.22.0548a.jpg' converted to '92' quality.
ffmpeg version 4.1.3 Copyright (c) 2000-2019 the FFmpeg developers
(...)
Guessed Channel Layout for Input Stream #0.1 : stereo
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '2019.06.01.2140.MP4':
  Metadata:
    major_brand     : XAVC
    minor_version   : 16785407
    compatible_brands: XAVCmp42iso2
    creation_time   : 2019-06-01T19:40:52.000000Z
  Duration: 00:00:21.60, start: 0.000000, bitrate: 52049 kb/s
    Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709/bt709/iec61966-2-4), 1920x1080 [SAR 1:1 DAR 16:9], 50101 kb/s, 50 fps, 50 tbr, 50k tbn, 100 tbc (default)
    Metadata:
      creation_time   : 2019-06-01T19:40:52.000000Z
      handler_name    : Video Media Handler
      encoder         : AVC Coding
    Stream #0:1(und): Audio: pcm_s16be (twos / 0x736F7774), 48000 Hz, stereo, s16, 1536 kb/s (default)
    Metadata:
      creation_time   : 2019-06-01T19:40:52.000000Z
      handler_name    : Sound Media Handler
    Stream #0:2(und): Data: none (rtmd / 0x646D7472), 409 kb/s (default)
    Metadata:
      creation_time   : 2019-06-01T19:40:52.000000Z
      handler_name    : Timed Metadata Media Handler
      timecode        : 83:01:01;02
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
  Stream #0:1 -> #0:1 (pcm_s16be (native) -> ac3 (native))
Press [q] to stop, [?] for help
[libx264 @ 0x80ddfb400] using SAR=1/1
[libx264 @ 0x80ddfb400] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
[libx264 @ 0x80ddfb400] profile High, level 4.2, 4:2:0, 8-bit
[libx264 @ 0x80ddfb400] 264 - core 157 - H.264/MPEG-4 AVC codec - Copyleft 2003-2018 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=1 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 vbv_maxrate=25000 vbv_bufsize=25000 crf_max=0.0 nal_hrd=none filler=0 ip_ratio=1.40 aq=1:1.00
Output #0, matroska, to '2019.06.01.2140.MP4.mkv':
  Metadata:
    major_brand     : XAVC
    minor_version   : 16785407
    compatible_brands: XAVCmp42iso2
    encoder         : Lavf58.20.100
    Stream #0:0(und): Video: h264 (libx264) (H264 / 0x34363248), yuv420p(progressive), 1920x1080 [SAR 1:1 DAR 16:9], q=-1--1, 50 fps, 1k tbn, 50 tbc (default)
    Metadata:
      creation_time   : 2019-06-01T19:40:52.000000Z
      handler_name    : Video Media Handler
      encoder         : Lavc58.35.100 libx264
    Side data:
      cpb: bitrate max/min/avg: 25000000/0/0 buffer size: 25000000 vbv_delay: -1
    Stream #0:1(und): Audio: ac3 ([0] [0][0] / 0x2000), 48000 Hz, stereo, fltp, 160 kb/s (default)
    Metadata:
      creation_time   : 2019-06-01T19:40:52.000000Z
      handler_name    : Sound Media Handler
      encoder         : Lavc58.35.100 ac3
frame= 1080 fps=4.1 q=31.0 Lsize=   30522kB time=00:00:21.59 bitrate=11578.4kbits/s speed=0.0815x    
video:30086kB audio:422kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.046764%
(...)

This is how the pictures look imported and converted after running the import flow. We still have the original 2019.06.01.2140.MP4 movie but we can of course delete it.

% exa ~/photo.NEW/2019.06.10.DUMP
2019.05.08.0732.jpg   2019.05.22.0548.jpg   2019.05.25.2111.jpg   2019.06.01.0914.jpg   2019.06.01.2140.jpg      2019.06.07.0509.jpg
2019.05.08.0732a.jpg  2019.05.22.0548a.jpg  2019.05.25.2111a.jpg  2019.06.01.0915.jpg   2019.06.01.2140.MP4      2019.06.07.0509a.jpg
2019.05.08.0732b.jpg  2019.05.22.0548b.jpg  2019.05.25.2111b.jpg  2019.06.01.2043.jpg   2019.06.01.2140.MP4.mkv  2019.06.07.0509b.jpg
2019.05.08.0733.jpg   2019.05.22.0549.jpg   2019.05.25.2111c.jpg  2019.06.01.2043a.jpg  2019.06.01.2140.XML      2019.06.07.2110.jpg
2019.05.22.0541.jpg   2019.05.22.0550.jpg   2019.05.27.0712.jpg   2019.06.01.2043b.jpg  2019.06.01.2140a.jpg     2019.06.07.2110a.jpg
2019.05.22.0541a.jpg  2019.05.22.0551.jpg   2019.05.27.0712a.jpg  2019.06.01.2043c.jpg  2019.06.01.2140b.jpg     2019.06.07.2110b.jpg
2019.05.22.0542.jpg   2019.05.23.0124.jpg   2019.05.27.0712b.jpg  2019.06.01.2043d.jpg  2019.06.01.2140c.jpg     2019.06.07.2110c.jpg
2019.05.22.0542a.jpg  2019.05.23.0124a.jpg  2019.05.27.0712c.jpg  2019.06.01.2043e.jpg  2019.06.01.2140d.jpg     2019.06.07.2110d.jpg
2019.05.22.0542b.jpg  2019.05.23.0124b.jpg  2019.05.27.0712d.jpg  2019.06.01.2043f.jpg  2019.06.01.2140e.jpg     2019.06.07.2110e.jpg
2019.05.22.0542c.jpg  2019.05.23.0124c.jpg  2019.05.27.0712e.jpg  2019.06.01.2043g.jpg  2019.06.01.2141.jpg
2019.05.22.0543.jpg   2019.05.23.1831.jpg   2019.05.27.0712f.jpg  2019.06.01.2043h.jpg  2019.06.01.2141a.jpg
2019.05.22.0543a.jpg  2019.05.25.2110.jpg   2019.05.27.0713.jpg   2019.06.01.2043i.jpg  2019.06.07.0508.jpg
2019.05.22.0543b.jpg  2019.05.25.2110a.jpg  2019.05.27.0713a.jpg  2019.06.01.2044.jpg   2019.06.07.0508a.jpg

Here are the size differences before and after conversion – for both an example picture and the video.

% ls -lh ~/photo.NEW/2019.06.10.DUMP/2019.06.01.2140.MP4*
-rw-r--r--  1 vermaden  vermaden   134M 2019.06.01 21:41 /home/vermaden/photo.NEW/2019.06.10.DUMP/2019.06.01.2140.MP4
-rw-r--r--  1 vermaden  vermaden    30M 2019.06.10 22:57 /home/vermaden/photo.NEW/2019.06.10.DUMP/2019.06.01.2140.MP4.mkv

% ls -lh /media/mmcsd0s1/DCIM/100MSDCF/DSC00430.JPG ~/photo.NEW/2019.06.10.DUMP/2019.05.27.0712f.jpg
-rw-r--r--  1 vermaden  vermaden   4.4M 2019.06.10 22:53 /home/vermaden/photo.NEW/2019.06.10.DUMP/2019.05.27.0712f.jpg
-rw-r--r--  1 vermaden  vermaden   6.4M 2019.05.27 07:12 /media/mmcsd0s1/DCIM/100MSDCF/DSC00430.JPG

The best savings are in the video – a more than 4 times smaller file. The pictures are about 30% smaller.

Totals of the size differences for the whole import are below. First the original dump from camera SD card.

% du -scm /media/mmcsd0s1/DCIM /media/mmcsd0s1/PRIVATE/M4ROOT/CLIP
400     /media/mmcsd0s1/DCIM
135     /media/mmcsd0s1/PRIVATE/M4ROOT/CLIP
534     total

… and converted/imported size.

% rm ~/photo.NEW/2019.06.10.DUMP/2019.06.01.2140.MP4

% du -scm /home/vermaden/photo.NEW/2019.06.10.DUMP/*jpg | tail -1
265     total

% du -scm /home/vermaden/photo.NEW/2019.06.10.DUMP/*mkv | tail -1
30      total

% du -scm ~/photo.NEW/2019.06.10.DUMP
295     /home/vermaden/photo.NEW/2019.06.10.DUMP
295     total

So after import and conversion the pictures went from 400 to 265 MB and the movies (actually one movie) went from 135 to 30 MB. The most important thing is that I can import and convert this content without any interactive and lengthy process.

These scripts (definitely the video renamer one) may be SONY related but nothing stops you from adapting them to the files produced by your camera manufacturer.

Feel free to share your photography flow 🙂

EOF

Fix Broken Dependency on FreeBSD

Dunno about you but I update my packages often … and I have lots of them, more than 1000 actually.

% pkg info | wc -l
    1051

… but it's not that much, they are mostly dependencies of the software that I use.

For example I need Openbox and X11 but to use them I need 300+ dependencies in libraries and protocols, and that's OK, that's how it works … but sometimes after an upgrade one or two applications refuse to start because of a missing dependency. I would say that it happens once in twenty to thirty updates (1/20 – 1/30) which is very rare and even when it happens it is very easy to solve. It also happened to me on Linux systems many times so it is not related to FreeBSD only, it is just how the open source desktop/laptop market works 🙂

Today’s victim will be Chromium. I generally use Firefox but sometimes when a page behaves strangely in Firefox I verify this behavior in Chromium. I also use Chromium as a file opener (or file browser should I say) for the *.htm/*.html/*.chm local files. But this time it refused to start, so I went to the command line to check what went wrong.

% chrome
Shared object "libx264.so.155" not found, required by "libavcodec.so.58"

… a missing dependency in the form of libx264.so.155 library.

Reckless Symlink

This method is considered a dangerous or quick and dirty way of fixing such problems – it can also introduce other problems by itself – but still – in many cases it temporarily solves the problem.

… and it is exactly that – a quick fix till the ffmpeg package finishes its rebuild – it takes longer than the pkg upgrade command but when I need Chromium now it is NOW, not later when the ffmpeg package gets rebuilt. This problem is caused by the FreeBSD project's lack of guts to provide the lame package. The OpenBSD guys do not have a problem with that but the FreeBSD guys do, so to have MP3 support in ffmpeg you need to first manually build the lame package, then select it as an option in ffmpeg and build that as a package again … and do that every time you run the pkg upgrade command … which is a PITA to say the least.

This is why I use the pkg-recompile.sh script for that purpose – to not do that 'by hand' every time I update packages (which is about two times a week). This is the 'workflow' if I can call it that:

# pkg upgrade
# pkg-recompile.sh build
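
For reference, the manual rebuild that such a script automates would look roughly like the commands below. This is only a hypothetical sketch – it is not the actual pkg-recompile.sh script and the port origins may differ on your system – the LAME/MP3 support gets selected in the ffmpeg port options dialog.

# make -C /usr/ports/audio/lame install clean
# make -C /usr/ports/multimedia/ffmpeg config
# make -C /usr/ports/multimedia/ffmpeg deinstall install clean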

Let's verify then whether anything else is missing for Chromium.

% which chrome
/usr/local/bin/chrome

% ldd /usr/local/bin/chrome
ldd: /usr/local/bin/chrome: not a dynamic executable

So /usr/local/bin/chrome is just a wrapper, let’s see what it contains.

% cat /usr/local/bin/chrome
#!/bin/sh

SYSCTL=kern.ipc.shm_allow_removed
if [ "`/sbin/sysctl -n $SYSCTL`" = 0 ] ; then
        cat << EOMSG
For correct operation, shared memory support has to be enabled
in Chromium by performing the following command as root :

sysctl $SYSCTL=1

To preserve this setting across reboots, append the following
to /etc/sysctl.conf :

$SYSCTL=1
EOMSG
        exit 1
fi
ulimit -c 0
exec /usr/local/share/chromium/chrome ${1+"$@"}

So our binary actually is the /usr/local/share/chromium/chrome file, let's check it with ldd(8) then.

% ldd /usr/local/share/chromium/chrome
/usr/local/share/chromium/chrome:
        libthr.so.3 => /lib/libthr.so.3 (0x809b78000)
        libX11.so.6 => /usr/local/lib/libX11.so.6 (0x809da0000)
        libX11-xcb.so.1 => /usr/local/lib/libX11-xcb.so.1 (0x80a0df000)
        libxcb.so.1 => /usr/local/lib/libxcb.so.1 (0x80a2e0000)
        libXcomposite.so.1 => /usr/local/lib/libXcomposite.so.1 (0x80a506000)
        libXcursor.so.1 => /usr/local/lib/libXcursor.so.1 (0x80a708000)
        libXdamage.so.1 => /usr/local/lib/libXdamage.so.1 (0x80a913000)
        libXext.so.6 => /usr/local/lib/libXext.so.6 (0x80ab15000)
        libXfixes.so.3 => /usr/local/lib/libXfixes.so.3 (0x80ad26000)
        libXi.so.6 => /usr/local/lib/libXi.so.6 (0x80af2b000)
        libXrender.so.1 => /usr/local/lib/libXrender.so.1 (0x80b139000)
        libXtst.so.6 => /usr/local/lib/libXtst.so.6 (0x80b342000)
        libgmodule-2.0.so.0 => /usr/local/lib/libgmodule-2.0.so.0 (0x80b547000)
        libglib-2.0.so.0 => /usr/local/lib/libglib-2.0.so.0 (0x80b74a000)
        libgobject-2.0.so.0 => /usr/local/lib/libgobject-2.0.so.0 (0x80ba61000)
        libgthread-2.0.so.0 => /usr/local/lib/libgthread-2.0.so.0 (0x80bcab000)
        libintl.so.8 => /usr/local/lib/libintl.so.8 (0x80beac000)
        libnss3.so => /usr/local/lib/nss/libnss3.so (0x80c0b7000)
        libsmime3.so => /usr/local/lib/nss/libsmime3.so (0x80c3e3000)
        libnssutil3.so => /usr/local/lib/nss/libnssutil3.so (0x80c60d000)
        libplds4.so => /usr/local/lib/libplds4.so (0x80c83d000)
        libplc4.so => /usr/local/lib/libplc4.so (0x80ca40000)
        libnspr4.so => /usr/local/lib/libnspr4.so (0x80cc44000)
        libdl.so.1 => /usr/lib/libdl.so.1 (0x80ce83000)
        libcups.so.2 => /usr/local/lib/libcups.so.2 (0x80d084000)
        libxml2.so.2 => /usr/local/lib/libxml2.so.2 (0x80d315000)
        libfontconfig.so.1 => /usr/local/lib/libfontconfig.so.1 (0x80d6a8000)
        libdbus-1.so.3 => /usr/local/lib/libdbus-1.so.3 (0x80d8ef000)
        libexecinfo.so.1 => /usr/lib/libexecinfo.so.1 (0x80db40000)
        libkvm.so.7 => /lib/libkvm.so.7 (0x80dd43000)
        libutil.so.9 => /lib/libutil.so.9 (0x80df51000)
        libXss.so.1 => /usr/local/lib/libXss.so.1 (0x80e165000)
        libwebpdemux.so.2 => /usr/local/lib/libwebpdemux.so.2 (0x80e367000)
        libwebpmux.so.3 => /usr/local/lib/libwebpmux.so.3 (0x80e56b000)
        libwebp.so.7 => /usr/local/lib/libwebp.so.7 (0x80e775000)
        libfreetype.so.6 => /usr/local/lib/libfreetype.so.6 (0x80ea05000)
        libjpeg.so.8 => /usr/local/lib/libjpeg.so.8 (0x80ecbb000)
        libexpat.so.1 => /usr/local/lib/libexpat.so.1 (0x80ef4e000)
        libharfbuzz.so.0 => /usr/local/lib/libharfbuzz.so.0 (0x80f179000)
        libdrm.so.2 => /usr/local/lib/libdrm.so.2 (0x80f458000)
        libXrandr.so.2 => /usr/local/lib/libXrandr.so.2 (0x80f66b000)
        libgio-2.0.so.0 => /usr/local/lib/libgio-2.0.so.0 (0x80f875000)
        libavcodec.so.58 => /usr/local/lib/libavcodec.so.58 (0x80fe00000)
        libavformat.so.58 => /usr/local/lib/libavformat.so.58 (0x811800000)
        libavutil.so.56 => /usr/local/lib/libavutil.so.56 (0x811c52000)
        libopenh264.so.4 => /usr/local/lib/libopenh264.so.4 (0x811eca000)
        libasound.so.2 => /usr/local/lib/libasound.so.2 (0x8121da000)
        libsnappy.so.1 => /usr/local/lib/libsnappy.so.1 (0x8124de000)
        libopus.so.0 => /usr/local/lib/libopus.so.0 (0x8126e6000)
        libpangocairo-1.0.so.0 => /usr/local/lib/libpangocairo-1.0.so.0 (0x812956000)
        libpango-1.0.so.0 => /usr/local/lib/libpango-1.0.so.0 (0x812b63000)
        libcairo.so.2 => /usr/local/lib/libcairo.so.2 (0x812db1000)
        libGL.so.1 => /usr/local/lib/libGL.so.1 (0x8130d8000)
        libpci.so.3 => /usr/local/lib/libpci.so.3 (0x813366000)
        libatk-1.0.so.0 => /usr/local/lib/libatk-1.0.so.0 (0x813571000)
        libatk-bridge-2.0.so.0 => /usr/local/lib/libatk-bridge-2.0.so.0 (0x81379c000)
        libatspi.so.0 => /usr/local/lib/libatspi.so.0 (0x8139cc000)
        libFLAC.so.8 => /usr/local/lib/libFLAC.so.8 (0x813bfd000)
        libgtk-3.so.0 => /usr/local/lib/libgtk-3.so.0 (0x814000000)
        libgdk-3.so.0 => /usr/local/lib/libgdk-3.so.0 (0x8148b9000)
        libcairo-gobject.so.2 => /usr/local/lib/libcairo-gobject.so.2 (0x814bb0000)
        libgdk_pixbuf-2.0.so.0 => /usr/local/lib/libgdk_pixbuf-2.0.so.0 (0x814db8000)
        libxslt.so.1 => /usr/local/lib/libxslt.so.1 (0x814fdb000)
        libz.so.6 => /lib/libz.so.6 (0x815218000)
        liblzma.so.5 => /usr/lib/liblzma.so.5 (0x815430000)
        libm.so.5 => /lib/libm.so.5 (0x815659000)
        librt.so.1 => /usr/lib/librt.so.1 (0x815886000)
        libc++.so.1 => /usr/lib/libc++.so.1 (0x815a8c000)
        libcxxrt.so.1 => /lib/libcxxrt.so.1 (0x815d5a000)
        libc.so.7 => /lib/libc.so.7 (0x800823000)
        libXau.so.6 => /usr/local/lib/libXau.so.6 (0x815f79000)
        libXdmcp.so.6 => /usr/local/lib/libXdmcp.so.6 (0x81617c000)
        libiconv.so.2 => /usr/local/lib/libiconv.so.2 (0x816381000)
        libpcre.so.1 => /usr/local/lib/libpcre.so.1 (0x81667c000)
        libffi.so.6 => /usr/local/lib/libffi.so.6 (0x81691a000)
        libgnutls.so.30 => /usr/local/lib/libgnutls.so.30 (0x816b21000)
        libavahi-common.so.3 => /usr/local/lib/libavahi-common.so.3 (0x816ed4000)
        libavahi-client.so.3 => /usr/local/lib/libavahi-client.so.3 (0x8170e0000)
        libcrypt.so.5 => /lib/libcrypt.so.5 (0x8172ef000)
        libelf.so.2 => /lib/libelf.so.2 (0x81750e000)
        libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x817725000)
        libbz2.so.4 => /usr/lib/libbz2.so.4 (0x817934000)
        libgraphite2.so.3 => /usr/local/lib/libgraphite2.so.3 (0x817b48000)
        libswresample.so.3 => /usr/local/lib/libswresample.so.3 (0x817d71000)
        libvpx.so.6 => /usr/local/lib/libvpx.so.6 (0x818000000)
        libdav1d.so.1 => /usr/local/lib/libdav1d.so.1 (0x818411000)
        libmp3lame.so.0 => /usr/local/lib/libmp3lame.so.0 (0x818732000)
        libtheoraenc.so.1 => /usr/local/lib/libtheoraenc.so.1 (0x8189b3000)
        libtheoradec.so.1 => /usr/local/lib/libtheoradec.so.1 (0x818be2000)
        libvorbis.so.0 => /usr/local/lib/libvorbis.so.0 (0x818df3000)
        libvorbisenc.so.2 => /usr/local/lib/libvorbisenc.so.2 (0x819024000)
        libx264.so.155 => not found (0)
        libx265.so.170 => /usr/local/lib/libx265.so.170 (0x819400000)
        libxvidcore.so.4 => /usr/local/lib/libxvidcore.so.4 (0x819b4b000)
        libva.so.2 => /usr/local/lib/libva.so.2 (0x819e70000)
        libgmp.so.10 => /usr/local/lib/libgmp.so.10 (0x81a096000)
        libva-drm.so.2 => /usr/local/lib/libva-drm.so.2 (0x81a316000)
        libva-x11.so.2 => /usr/local/lib/libva-x11.so.2 (0x81a518000)
        libvdpau.so.1 => /usr/local/lib/libvdpau.so.1 (0x81a71d000)
        libpangoft2-1.0.so.0 => /usr/local/lib/libpangoft2-1.0.so.0 (0x81a920000)
        libfribidi.so.0 => /usr/local/lib/libfribidi.so.0 (0x81ab36000)
        libpixman-1.so.0 => /usr/local/lib/libpixman-1.so.0 (0x81ad4c000)
        libEGL.so.1 => /usr/local/lib/libEGL.so.1 (0x81b016000)
        libpng16.so.16 => /usr/local/lib/libpng16.so.16 (0x81b24e000)
        libxcb-shm.so.0 => /usr/local/lib/libxcb-shm.so.0 (0x81b489000)
        libxcb-render.so.0 => /usr/local/lib/libxcb-render.so.0 (0x81b68b000)
        libxcb-dri3.so.0 => /usr/local/lib/libxcb-dri3.so.0 (0x81b898000)
        libxcb-xfixes.so.0 => /usr/local/lib/libxcb-xfixes.so.0 (0x81ba9b000)
        libxcb-present.so.0 => /usr/local/lib/libxcb-present.so.0 (0x81bca2000)
        libxcb-sync.so.1 => /usr/local/lib/libxcb-sync.so.1 (0x81bea4000)
        libxshmfence.so.1 => /usr/local/lib/libxshmfence.so.1 (0x81c0aa000)
        libglapi.so.0 => /usr/local/lib/libglapi.so.0 (0x81c2ab000)
        libxcb-glx.so.0 => /usr/local/lib/libxcb-glx.so.0 (0x81c505000)
        libxcb-dri2.so.0 => /usr/local/lib/libxcb-dri2.so.0 (0x81c71e000)
        libXxf86vm.so.1 => /usr/local/lib/libXxf86vm.so.1 (0x81c922000)
        libogg.so.0 => /usr/local/lib/libogg.so.0 (0x81cb26000)
        libXinerama.so.1 => /usr/local/lib/libXinerama.so.1 (0x81cd2c000)
        libxkbcommon.so.0 => /usr/local/lib/libxkbcommon.so.0 (0x81cf2e000)
        libwayland-cursor.so.0 => /usr/local/lib/libwayland-cursor.so.0 (0x81d16b000)
        libwayland-egl.so.1 => /usr/local/lib/libwayland-egl.so.1 (0x81d372000)
        libwayland-client.so.0 => /usr/local/lib/libwayland-client.so.0 (0x81d573000)
        libepoxy.so.0 => /usr/local/lib/libepoxy.so.0 (0x81d782000)
        libp11-kit.so.0 => /usr/local/lib/libp11-kit.so.0 (0x81da91000)
        libtasn1.so.6 => /usr/local/lib/libtasn1.so.6 (0x81ddb2000)
        libnettle.so.6 => /usr/local/lib/libnettle.so.6 (0x81dfc7000)
        libhogweed.so.4 => /usr/local/lib/libhogweed.so.4 (0x81e1ff000)
        libidn2.so.0 => /usr/local/lib/libidn2.so.0 (0x81e435000)
        libunistring.so.2 => /usr/local/lib/libunistring.so.2 (0x81e653000)
        libgbm.so.1 => /usr/local/lib/libgbm.so.1 (0x81ea07000)
        libwayland-server.so.0 => /usr/local/lib/libwayland-server.so.0 (0x81ec15000)
        libepoll-shim.so.0 => /usr/local/lib/libepoll-shim.so.0 (0x81ee28000)

Lots of deps here, let's cut to the point with grep(1) as shown below.

% ldd /usr/local/share/chromium/chrome | grep found
        libx264.so.155 => not found (0)

Only one – libx264.so.155 – dependency is missing. Let’s fix it then.

% cd /usr/local/lib
% ls -l libx264.so*
lrwxr-xr-x  1 root  wheel       14 2019.03.19 02:11 libx264.so -> libx264.so.157
-rwxr-xr-x  1 root  wheel  2090944 2019.03.19 02:11 libx264.so.157

There is a slightly newer version available – libx264.so.157 – so we will link to it under our 'missing' libx264.so.155 name.

# pwd
/usr/local/lib
# ln -s libx264.so libx264.so.155
# ls -l libx264.so*
lrwxr-xr-x  1 root  wheel       14 2019.03.19 02:11 libx264.so -> libx264.so.157
lrwxr-xr-x  1 root  wheel       10 2019.03.21 15:26 libx264.so.155 -> libx264.so
-rwxr-xr-x  1 root  wheel  2090944 2019.03.19 02:11 libx264.so.157

Chromium should be happy now.

% ldd /usr/local/share/chromium/chrome | grep found
% 

Zero not found results.

Let’s start Chromium then with chrome command.

% chrome

Starts as usual and everything works 🙂

This whole process can be visualized with the simple screenshot below.

vermaden_2019-03-21_15-47-40.png

Using /etc/libmap.conf File

Instead of making a symlink – which will work globally – you can create a proper libmap.conf file with configuration only for the /usr/local/share/chromium/chrome binary.

Here is the fix only for Chromium browser.

# cat /etc/libmap.conf

[/usr/local/share/chromium/chrome]
libx264.so.155 libx264.so

… and equivalent solution that works globally as symlink would be as follows.

# cat /etc/libmap.conf

libx264.so.155 libx264.so

It's also easier to migrate or mass-populate such changes instead of copying a symlink.

Fixing Broken Dependency in pkg(8) Database

I already wrote about it in the Less Known pkg(8) Features article but it's worth mentioning here for the completeness of options.

There was a time when one missing dependency on the vulnerable www/libxul19 package tortured me for quite some time.

I was even desperate enough to consider compiling everything with portmaster.

I started with the portmaster --check-depends command, but answered no ('n') when asked for the fix as it would downgrade a lot of packages needlessly.

# portmaster --check-depends
(...)
Checking dependencies: evince
graphics/evince has a missing dependency: www/libxul19
(...)

>>> Missing package dependencies were detected.
>>> Found 1 issue(s) in total with your package database.

The following packages will be installed:

        Downgrading perl: 5.14.2_3 -> 5.14.2_2
        Downgrading glib: 2.34.3 -> 2.28.8_5
        Downgrading gio-fam-backend: 2.34.3 -> 2.28.8_1
        Downgrading libffi: 3.0.12 -> 3.0.11
        Downgrading gobject-introspection: 1.34.2 -> 0.10.8_3
        Downgrading atk: 2.6.0 -> 2.0.1
        Downgrading gdk-pixbuf2: 2.26.5 -> 2.23.5_3
        Downgrading pango: 1.30.1 -> 1.28.4_1
        Downgrading gtk-update-icon-cache: 2.24.17 -> 2.24.6_1
        Downgrading dbus: 1.6.8 -> 1.4.14_4
        Downgrading gtk: 2.24.17 -> 2.24.6_2
        Downgrading dbus-glib: 0.100.1 -> 0.94
        Installing libxul: 1.9.2.28_1

The installation will require 66 MB more space

38 MB to be downloaded

>>> Try to fix the missing dependencies [y/N]: n
>>> Summary of actions performed:

www/libxul19 dependency failed to be fixed

>>> There are still missing dependencies.
>>> You are advised to try fixing them manually.

>>> Also make sure to check 'pkg updating' for known issues.

Let's see what pkg(8) shows we have installed.

# pkg info | grep libxul
libxul-10.0.12                 Mozilla runtime package that can be used to bootstrap XUL+XPCOM apps

# pkg info -qoa | grep libxul
www/libxul

So the problem is that we have installed www/libxul instead of www/libxul19 and that is why portmaster (and not only) complains about it.

Before pkg(8) was introduced it was easy to just grep -r the entire /var/db/pkg directory with its 'file database' but now it's a bit more complicated as the package database is kept in an SQLite database. Using the pkg shell command you can connect to that database. Let's check what we can find there.

# pkg shell
SQLite version 3.7.13 2012-06-11 02:05:22
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> .databases
seq  name             file
---  ---------------  ----------------------------------------------------------
0    main             /var/db/pkg/local.sqlite
sqlite> .tables
categories       licenses         pkg_directories  scripts
deps             mtree            pkg_groups       shlibs
directories      options          pkg_licenses     users
files            packages         pkg_shlibs
groups           pkg_categories   pkg_users
sqlite> .header on
sqlite> .mode column
sqlite> pragma table_info(deps);
cid         name        type        notnull     dflt_value  pk
----------  ----------  ----------  ----------  ----------  ----------
0           origin      TEXT        1                       1
1           name        TEXT        1                       0
2           version     TEXT        1                       0
3           package_id  INTEGER     0                       1
sqlite> .quit

So now we know that the 'deps' table is probably what we are looking for ;).

As pkg shell is quite limited for SQLite 'browsing' I will use the sqlite3 command itself. By limited I mean that you cannot type a pkg shell "select * from deps;" query – you first need to start pkg shell and only then can you type your query.

# sqlite3 -column /var/db/pkg/local.sqlite "select * from deps;" | grep libxul
www/libxul19   libxul      1.9.2.28_1  104

The second column is name so let's try to use it.

sqlite3 -header -column /var/db/pkg/local.sqlite "select * from deps where name='libxul';"
origin        name        version     package_id
------------  ----------  ----------  ----------
www/libxul19  libxul      1.9.2.28_1  104

So now we have the 'problematic' dependency entry nailed, let's modify it a little to match the actually installed package.

# sqlite3 /var/db/pkg/local.sqlite "update deps set origin='www/libxul' where name='libxul';"
# sqlite3 /var/db/pkg/local.sqlite "update deps set version='10.0.12' where name='libxul';"

You can of course use the ‘official’ way by using the pkg shell command.

# pkg shell
SQLite version 3.7.13 2012-06-11 02:05:22
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> update deps set origin='www/libxul' where name='libxul';
sqlite> update deps set version='10.0.12' where name='libxul';
sqlite> .header on
sqlite> .mode column
sqlite> select * from deps where name='libxul';
origin      name        version     package_id
----------  ----------  ----------  ----------
www/libxul  libxul      10.0.12     104
sqlite> .quit

Now portmaster is happy and does not complain about any missing dependencies.

# portmaster --check-depends
(...)
Checking dependencies: zenity
Checking dependencies: zip
Checking dependencies: zsh
# 

Voila! Problem solved 😉

… but pkg(8) has a tool for that already 🙂

It's called pkg set and the two most useful options from man pkg-set are:

  -n oldname:newname, --change-name oldname:newname
       Change the package name of a given dependency from oldname to newname.

(...)

  -o oldorigin:neworigin, --change-origin oldorigin:neworigin
       Change the port origin of a given dependency from oldorigin to neworigin.
       This corresponds to the port directory that the package originated from.
       Typically, this is only needed for upgrading a library or package that
       has MOVED or when the default version of a major port dependency changes.
       (DEPRECATED) Usually this will be explained in /usr/ports/UPDATING.
       Also see pkg-updating(8) and EXAMPLES.

In our case we would use pkg set -o www/libxul19:www/libxul command.

I am not sure if it would have solved the problem in exactly the same way, as I also updated the version in the database.
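
For completeness, a combined sketch of the same fix (reusing the exact libxul names from this example – adjust them to your own case) could look like the two commands below – pkg set handles the origin while the version column is still updated directly in the SQLite database as shown earlier.

# pkg set -o www/libxul19:www/libxul
# sqlite3 /var/db/pkg/local.sqlite "update deps set version='10.0.12' where name='libxul';"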

Use pkg_libchk from bsdadminscripts2 Package

There is also another way to fix/check for such problems – the pkg_libchk tool from the bsdadminscripts2 package. Keep in mind that there are TWO conflicting (!) packages with bsdadminscripts in their name.

# pkg search bsdadmin
bsdadminscripts-6.1.1_8        Collection of administration scripts
bsdadminscripts2-0.2.1         BSD Administration Scripts 2

… and once you install bsdadminscripts2 you will not be able to install bsdadminscripts because they conflict. I already had bsdadminscripts2 installed and wanted to add bsdadminscripts to my system.

# pkg install bsdadminscripts
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
Checking integrity... done (1 conflicting)
  - bsdadminscripts-6.1.1_8 conflicts with bsdadminscripts2-0.2.1 on /usr/local/sbin/distviper
Checking integrity... done (0 conflicting)
The following 2 package(s) will be affected (of 0 checked):

Installed packages to be REMOVED:
        bsdadminscripts2-0.2.1

New packages to be INSTALLED:
        bsdadminscripts: 6.1.1_8

Number of packages to be removed: 1
Number of packages to be installed: 1

Proceed with this action? [y/N]: n

Here is the description of the /usr/ports/ports-mgmt/bsdadminscripts2 port/package.

# cat /usr/ports/ports-mgmt/bsdadminscripts2/pkg-descr
This is a collection of scripts around the use of ports and packages.

It allows you to: 
- check library dependencies without producing false positives (pkg_libchk)
- lets you manage the autoremove flag for leaf packages (pkg_trim)
- remove obsolete or damaged distfiles (distviper)
- manage build flags (buildflags.conf)
- auto-create pkg-plist files taking port options into account (makeplist)

WWW: https://github.com/lonkamikaze/bsda2

There are exactly 4 tools in this package.

% pkg info -l bsdadminscripts2 | grep bin
        /usr/local/sbin/distviper
        /usr/local/sbin/makeplist
        /usr/local/sbin/pkg_libchk
        /usr/local/sbin/pkg_trim

Invoked without any arguments it will check all packages installed in the system.

# pkg_libchk
Jobs done:   35 of 1057
bhyve-firmware-1.0_1
bash-5.0.3
beadm-1.2.9_1

… so in order to run the checks only for Chromium you will need to specify the chromium package with the pkg_libchk chromium command.

The pkg_libchk tool allows you to fetch missing dependencies based on which package provides which files, or to create a list of the packages that need to be rebuilt – see the sketch below.
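
A hypothetical one-liner for such a 'rebuild list' usage is shown below – this is not an official pkg_libchk workflow and it assumes that the package name is the first field of each reported line – it takes the packages reported for chromium and force-reinstalls them from the repository with pkg(8).

# pkg_libchk chromium | awk '{print $1}' | sort -u | xargs pkg install -fy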

Use Provides Database

You can also use the 'provides' database from the pkg(8) command.

% pkg provides lib/libx264.so
Name    : libx264-0.157.2945
Desc    : H.264/MPEG-4 AVC Video Encoding (Library)
Repo    : FreeBSD
Filename: /usr/local/lib/libx264.so.155
          /usr/local/lib/libx264.so

To learn how to set up the 'provides' database for the pkg(8) command please check the Less Known pkg(8) Features article.

UPDATE 1 – Rework Entire Article

The Roman philosopher Seneca once said – “While we teach, we learn.” – and it is very true – especially for this article. After I posted it in various places people reminded me that just creating a symlink is not the best way to fix such a problem. I stand corrected and added additional sections and methods of fixing a broken dependency on a FreeBSD (or Linux/Illumos) system.

EOF

Ghost in the Shell – Part 4

Long time no see. It's been a while since the last post in the Ghost in the Shell series. It's also exactly one full year since I started this blog – with the first Ghost in the Shell series article – Part 1 – published on 2018/03/15.

Today I would like to show you a new pack of useful tricks and features for productive terminal/shell use. Let's start with something simple yet useful.

You may want to check the other articles in the Ghost in the Shell series on the Ghost in the Shell – Global Page where you will find links to all episodes of the series along with a table of contents for each episode.

Named Pipes

We all (or at least most of us :>) know and love pipes in UNIX. For the record – ls | grep match | awk '{print $3}' | sed 's/.jpg//g' – command 'chains' like that one 🙂

What is a named pipe then? A manually defined pipe for special purposes. For example some applications – especially the so-called Enterprise ones – often do not support the UNIX pipe mechanism – they can only dump something to a file. A great example of such Enterprise software is the Oracle database whose dump command can only dump to a file. With a tool that supports UNIX pipes you would probably want to pipe that data to gzip(1)/xz(1) to compress it on the fly or even pipe it directly to ssh(1) to the Backup server for example, but not with Oracle.

This is where the named pipes feature helps. We will create a named pipe called /tmp/PIPE so Oracle's dump command will be able to use it, and on the other side of this pipe we will attach gzip -9 to compress that data on the fly.

The example below is from a Linux system so the mknod(1) command will be used. On FreeBSD you would use the mkfifo(1) command to create a named pipe. A complete example of such a named pipe is presented below.

root # cd /tmp
root # mknod /tmp/PIPE p
root # chown oracle:oinstall /tmp/PIPE
root # dd if=/tmp/PIPE bs=1M | gzip -9 > /mnt/oracle/oracle-database-backup.dmp.gz &

Now the /tmp/PIPE named pipe is ready to be used. When any process will start to write something to the /tmp/PIPE named pipe it will be automatically grabbed by dd(8) command and piped to the gzip(1) command that will compress that input and write it into the /mnt/oracle/oracle-database-backup.dmp.gz file.

Now we can start the Oracle dumping process with dump command.

root # su - oracle
oracle % dump file=/tmp/PIPE

When the dump command finishes its work you will find all your dumped data compressed in the /mnt/oracle/oracle-database-backup.dmp.gz file.
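
The same setup on FreeBSD would differ only in how the named pipe is created – a minimal sketch (with the same made-up Oracle paths as above) using mkfifo(1) is shown below.

root # mkfifo /tmp/PIPE
root # chown oracle:oinstall /tmp/PIPE
root # dd if=/tmp/PIPE bs=1m | gzip -9 > /mnt/oracle/oracle-database-backup.dmp.gz &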

Another example of named pipes usage is my desktop dzen2 setup with an unusual update schedule – described in detail in the FreeBSD Desktop – Part 13 – Configuration – Dzen2 article.

Modify Command Environment on the Fly

Most of the time we use the export(1) builtin to export the environment values that our command needs. You can then check what the exported environment values are with the env(1) command of course … but you can use the same env(1) command to run any command with a modified environment without exporting variables using export(1).

Here is a brief example of this feature.

For the record – the gls(1) command is the GNU/Linux ls(1) command from the sysutils/coreutils package/port but to make it work without name conflicts on FreeBSD, where BSD ls(1) is also present, it had to be renamed to gls(1).

% gls -l | head -1
total 8609K

% env LC_ALL=pl_PL.UTF-8 gls -l | head -1
razem 8609K

In the example above we run the gls(1) command with the default environment – I use the en_US.UTF-8 locale daily. The second invocation with the LC_ALL=pl_PL.UTF-8 modified environment made the gls(1) command display its output in the Polish (pl_PL.UTF-8) language. The word 'razem' means 'total' in Polish.

Another useful example may be using make(1) to build a FreeBSD port with known vulnerabilities. By default FreeBSD's ports(7) build system will not allow us to build such a port (and that is a good default) but if we know what we are doing we will use the following spell.

# env DISABLE_VULNERABILITIES=yes make -C /usr/ports/security/bdes/ build install clean

It's also useful with commands that do not play well with UTF-8 input, like tr(1) for example. When LC_ALL is set to en_US.UTF-8 it will throw an error upon us.

% tr -cd '0-9' < /dev/random | head -c 16
tr: Illegal byte sequence
%

We just wanted to generate 16 random digits.

To make it work we will modify the LC_ALL environment for this invocation.

% env LC_ALL=C tr -cd '0-9' < /dev/random | head -c 16
9571949869123855
%

Much better 🙂

Another example is with timezones, using the date(1) command and the TZ variable as shown below.

% date
Fri Mar 15 14:03:38 CET 2019

% env TZ=Australia/Darwin date 
Fri Mar 15 22:35:26 ACST 2019

The Real Path

Symlinks made with ln(1) are useful in many ways – to organize stuff, for quick fixes, for versioning … you will find tons of other use cases.

There is just one problem: if you make too many levels of symlinks or they are just too nested you do not know where you are anymore … this is where realpath(1) comes in handy. No matter how many levels of links you have made, it will tell you the truth – what the current real path is. The pwd(1) command will not help you here though.

Here is a short example how it works.

% pwd
/home/vermaden
% ln -s /home/vermaden ASD
% cd ASD
% pwd
/home/vermaden/ASD
% realpath
/home/vermaden

Browsing the PATH

Many times I wanted to ‘browse’ through the PATH to search for something. As you possibly know the PATH variable stores paths that are colon (:) separated.

You can redefine the IFS variable, which by default contains whitespace, so that it will work as the field delimiter for the for loop.

Here is the example.

% export IFS=":"

% for I in $( echo ${PATH} ); do echo ${I}; done
/sbin
/bin
/usr/sbin
/usr/bin
/usr/local/sbin
/usr/local/bin 

% for I in $( echo ${PATH} ); do find ${I} -name ifconfig; done
/sbin/ifconfig
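
One caveat – export IFS=":" changes the field splitting for the rest of your shell session, so it may be safer to do the whole thing in a subshell where the modified IFS dies with it – a small sketch of the same loop below.

% ( IFS=":"; for I in $( echo "${PATH}" ); do echo "${I}"; done )
/sbin
/bin
/usr/sbin
/usr/bin
/usr/local/sbin
/usr/local/bin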

The other way to do this is to use the plain old tr(1) tool to translate colons (:) into newlines (\n) so we will be able to use a while loop here.

Here is the tr(1) example.

% echo ${PATH} | tr ':' '\n' | while read I; do echo ${I}; done
/sbin
/bin
/usr/sbin
/usr/bin
/usr/local/sbin
/usr/local/bin

% echo ${PATH} | tr ':' '\n' | while read I; do find ${I} -name dd; done
/bin/dd

You can also achieve the same thing using Parameter Expansion, in which we change the colons (:) into newlines (\n) as shown in the example below.

% echo "${PATH//:/\n}"
/sbin
/bin
/usr/sbin
/usr/bin
/usr/local/sbin
/usr/local/bin

# echo "${PATH//:/\n}" | while read I; do find ${I} -name camcontrol; done
/sbin/camcontrol

Parameter Expansion

I will not show all possible Parameter Expansion methods – just the most useful ones.

The typical use is to get the extension of a file or to 'emulate' the basename(1) or dirname(1) commands – it will be faster to use Parameter Expansion than to invoke these commands each time. Below are two tables showing what you will get from which Parameter Expansion method.

PARAMETER    RESULT                       DESC 
-----------  ---------------------------  --------------
${name}      kubica.polish.racing.legend  content
${name#*.}          polish.racing.legend  -
${name##*.}                       legend  extension
${name%%.*}  kubica                       -
${name%.*}   kubica.polish.racing         -

… and with slash (/) character.

PARAMETER    RESULT                       DESC 
-----------  ---------------------------  --------------
${name}      kubica/polish/racing/legend  content
${name#*/}          polish/racing/legend  -
${name##*/}                       legend  basename(1)
${name%%/*}  kubica                       root directory
${name%/*}   kubica/polish/racing         dirname(1)
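
A quick interactive demonstration of the basename(1)/dirname(1) 'emulation' from the table above (the path itself is of course made up).

% name="kubica/polish/racing/legend"

% echo "${name##*/}"
legend

% echo "${name%/*}"
kubica/polish/racing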

You can also use Parameter Expansion methods to grab the protocol from a URL as shown below.

% URL="https://vermaden.wordpress.com"

% echo "${URL%%/*}"
https:

Sort Human Readable Values

It's simple and easy to sort plain numerical values – we use sort -n for that – but values sometimes come in human readable form like 4G, 350M and 120K. To sort these properly you will have to use the sort -h flag as shown in the example below.

% du -sh /usr/*
102M    /usr/bin
228G    /usr/home
9.0M    /usr/include
 53M    /usr/lib
 43M    /usr/lib32
116K    /usr/libdata
1.9M    /usr/libexec
365M    /usr/local
512B    /usr/obj
9.5M    /usr/sbin
 39M    /usr/share
251K    /usr/tests

% du -sh /usr/* | sort -h
512B    /usr/obj
116K    /usr/libdata
251K    /usr/tests
1.9M    /usr/libexec
9.0M    /usr/include
9.5M    /usr/sbin
 39M    /usr/share
 43M    /usr/lib32
 53M    /usr/lib
102M    /usr/bin
365M    /usr/local
228G    /usr/home

If the values are in the first column then it's simple, but what to do when the values are not in the first column? You will use the -k parameter of sort(1) which takes the column to sort by as an argument. The example below is sorted by human readable values on the second USED column.

% zfs list | sort -h -k 2
NAME                         USED  AVAIL  REFER  MOUNTPOINT
local/usr/obj                 88K   130G    88K  /usr/obj
local/var/cache/pkg          128K   130G   128K  /var/cache/pkg
local/var/cache              216K   130G    88K  none
local/var                    304K   130G    88K  none
sys/ROOT/11.1-RELEASE        482M  2.39G  6.04G  /
local/usr/ports              729M   130G   729M  /usr/ports
local/jail/nextcloud         927M   130G   897M  /jail/nextcloud
local/jail                  1.00G   130G   100M  /jail
local/usr/src               1.28G   130G  1.28G  /usr/src
local/usr                   1.99G   130G    88K  none
sys/ROOT/11.2-RELEASE       8.69G  2.39G  7.10G  /
sys/ROOT                    9.16G  2.39G    88K  none
sys                         9.17G  2.39G    88K  none
local/home                   281G   130G   281G  /home
local                        288G   130G    88K  none

Write a File from vi(1) with Different Rights

How many times have you opened a system configuration file like /etc/sysctl.conf or /etc/fstab in your favorite vi(1) editor, made some changes and then when you wanted to save it – no luck – you are trying to write to a file owned by root as a regular user … the Read-only file, not written; use ! to override. message will be displayed. Of course you can save that file somewhere else like your home directory and then move it with doas(1)/sudo(8)/su(8) help to the original location and fix its rights … or you may do that in one step instead.

After opening a file with vi(1) and making some changes, to write the file with doas(1)/sudo(8) rights you just need to type this.

:w !doas tee %

Then exit the vi(1) editor with force.

:q!

Here is how it looks in the editor.

:w !doas tee %

+=+=+=+=+=+=+=+
File contents are displayed here.

Press any key to continue [: to enter more ex commands]: [ENTER]

Here is the ‘legend’ for that spell.

:      vi(1) prompt
w      write a file
!doas  invoke doas(1) command
tee    command that will be started using doas(1) command
%      tells vi(1) to use current filename

In this process the current vi(1) buffer contents will be redirected using tee(1) with doas(1) rights to the currently opened filename.

Of course it also works in vim(1) or neovim(1) and if sudo(8) is your poison then just use sudo instead of doas(1) there.

Search Contents of PDF Files

We all love plain text files as they can be searched using grep(1) for data that is interesting to us … but grep(1) does not work with PDF files … or should I say it's pointless/useless to use grep(1) to search PDF files. Fortunately the pdfgrep(1) command exists and works beautifully with PDF files – including colored output.

Recently the FreeBSD Journal has been made free – if you would like to search for bhyve articles in the FreeBSD Journal issues then this is the command for you.

% cd books/unix-bsd-journal
% exa
FreeBSD Journal - 2014-01-02.pdf FreeBSD Journal - 2016-09-10.pdf
FreeBSD Journal - 2014-03-04.pdf FreeBSD Journal - 2016-11-12.pdf
FreeBSD Journal - 2014-05-06.pdf FreeBSD Journal - 2017-01-02.pdf
FreeBSD Journal - 2014-07-08.pdf FreeBSD Journal - 2017-03-04.pdf
FreeBSD Journal - 2014-09-10.pdf FreeBSD Journal - 2017-05-06.pdf
FreeBSD Journal - 2014-11-12.pdf FreeBSD Journal - 2017-07-08.pdf
FreeBSD Journal - 2015-01-02.pdf FreeBSD Journal - 2017-09-10.pdf
FreeBSD Journal - 2015-03-04.pdf FreeBSD Journal - 2017-11-12.pdf
FreeBSD Journal - 2015-05-06.pdf FreeBSD Journal - 2018-01-02.pdf
FreeBSD Journal - 2015-07-08.pdf FreeBSD Journal - 2018-03-04.pdf
FreeBSD Journal - 2015-09-10.pdf FreeBSD Journal - 2018-05-06.pdf
FreeBSD Journal - 2015-11-12.pdf FreeBSD Journal - 2018-07-08.pdf
FreeBSD Journal - 2016-01-02.pdf FreeBSD Journal - 2018-09-10.pdf
FreeBSD Journal - 2016-03-04.pdf FreeBSD Journal - 2018-11-12.pdf
FreeBSD Journal - 2016-05-06.pdf FreeBSD Journal - 2019-01-02.pdf
FreeBSD Journal - 2016-07-08.pdf

% pdfgrep -i -n bhyve *.pdf
FreeBSD Journal - 2014-01-02 - Old Release.pdf:6: machine hypervisors, such as BHy
FreeBSD Journal - 2014-01-02 - Old Release.pdf:6: BHyVe
FreeBSD Journal - 2014-01-02 - Old Release.pdf:6: BHyVe IS THE BSD Hypervisor, de
FreeBSD Journal - 2014-01-02 - Old Release.pdf:6: Grehan and Neel Natu. The desig
FreeBSD Journal - 2014-01-02 - Old Release.pdf:6: BHyVe requires Intel CPUs w
FreeBSD Journal - 2014-01-02 - Old Release.pdf:6: BHyVe appeared in FreeBSD 1
FreeBSD Journal - 2014-01-02.pdf:42: machine hypervisors, such as BHyVe, Virtual
FreeBSD Journal - 2014-01-02.pdf:42: BHyVe e d
FreeBSD Journal - 2014-01-02.pdf:42: BHyVe IS THE BSD Hypervisor, developed by P
FreeBSD Journal - 2014-01-02.pdf:42: Grehan and Neel Natu. The design goal of BH
FreeBSD Journal - 2014-01-02.pdf:42: BHyVe requires Intel CPUs with VT-x and
FreeBSD Journal - 2014-01-02.pdf:42: BHyVe appeared in FreeBSD 10-CURRENT in
(...)

Here is how it looks in the xterm(1) terminal.

xterm-pdfgrep.png

Hope that today’s pack of spells will end up useful for you.

EOF

GlusterFS Cluster on FreeBSD with Ansible and GNU Parallel

Today I would like to present an article about setting up GlusterFS cluster on a FreeBSD system with Ansible and GNU Parallel tools.

gluster-logo.png

To cite Wikipedia “GlusterFS is a scale-out network-attached storage file system. It has found applications including cloud computing, streaming media services, and content delivery networks.” The GlusterFS page describes it similarly “Gluster is a scalable, distributed file system that aggregates disk storage resources from multiple servers into a single global namespace.”

Here are its advantages:

  • Scales to several petabytes.
  • Handles thousands of clients.
  • POSIX compatible.
  • Uses commodity hardware.
  • Can use any ondisk filesystem that supports extended attributes.
  • Accessible using industry standard protocols like NFS and SMB.
  • Provides replication/quotas/geo-replication/snapshots/bitrot detection.
  • Allows optimization for different workloads.
  • Open Source.

Lab Setup

It will be entirely VirtualBox based and it will consist of 6 hosts. To avoid creating 6 identical FreeBSD installations I used the 12.0-RELEASE virtual machine image available from the FreeBSD Project directly:

There are several formats available – qcow2/raw/vhd/vmdk – but as I will be using VirtualBox I used the VMDK one.

I will use different prompts depending on where the command is executed to make the article more readable. Also, when there is '%' at the prompt a regular user is needed and when there is '#' at the prompt a superuser is needed.

gluster1 #    // command run on the gluster1 node
gluster* #    // command run on all gluster nodes
client #      // command run on gluster client
vbhost %      // command run on the VirtualBox host

Here is the list of the machines for the GlusterFS cluster:

10.0.10.11 gluster1
10.0.10.12 gluster2
10.0.10.13 gluster3
10.0.10.14 gluster4
10.0.10.15 gluster5
10.0.10.16 gluster6

Each VirtualBox virtual machine for FreeBSD is the default one (as suggested in the VirtualBox wizard) with 512 MB RAM and NAT Network as shown in the image below.

virtualbox-freebsd-gluster-host.jpg

Here is the configuration of the NAT Network on VirtualBox.

virtualbox-nat-network.jpg

The cloned/copied FreeBSD-12.0-RELEASE-amd64.vmdk images will need to have different UUIDs so we will use the VBoxManage internalcommands sethduuid command to achieve this.

vbhost % for I in $( seq 6 ); do cp FreeBSD-12.0-RELEASE-amd64.vmdk    vbox_GlusterFS_${I}.vmdk; done
vbhost % for I in $( seq 6 ); do VBoxManage internalcommands sethduuid vbox_GlusterFS_${I}.vmdk; done

To start the whole GlusterFS environment on VirtualBox use these commands.

vbhost % VBoxManage list vms | grep GlusterFS
"FreeBSD GlusterFS 1" {162a3b6f-4ec9-4709-bff8-162b0c8c9c41}
"FreeBSD GlusterFS 2" {2e30326c-ac5d-41d2-9b28-483375df38f6}
"FreeBSD GlusterFS 3" {6b2747ab-3ec6-4b1a-a28e-5d871d7891b3}
"FreeBSD GlusterFS 4" {12379cf8-31d9-4ff1-9945-465fc3ed15f0}
"FreeBSD GlusterFS 5" {a4b0d515-5924-4517-9052-df238c366f2b}
"FreeBSD GlusterFS 6" {66621755-1b97-4486-aa15-a7bec9edb343}

Check which GlusterFS machines are running.

vbhost % VBoxManage list runningvms | grep GlusterFS
vbhost %

Starting of the machines in VirtualBox Headless mode in parallel.

vbhost % VBoxManage list vms \
           | grep GlusterFS \
           | awk -F \" '{print $2}' \
           | while read I; do VBoxManage startvm "${I}" --type headless & done
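
Since GNU Parallel will be used later in this article anyway, the same machines could also be started with it – a small sketch assuming the same VM names.

vbhost % VBoxManage list vms \
           | grep GlusterFS \
           | awk -F \" '{print $2}' \
           | parallel VBoxManage startvm {} --type headless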

After that command you should see these machines running.

vbhost % VBoxManage list runningvms
"FreeBSD GlusterFS 1" {162a3b6f-4ec9-4709-bff8-162b0c8c9c41}
"FreeBSD GlusterFS 2" {2e30326c-ac5d-41d2-9b28-483375df38f6}
"FreeBSD GlusterFS 3" {6b2747ab-3ec6-4b1a-a28e-5d871d7891b3}
"FreeBSD GlusterFS 4" {12379cf8-31d9-4ff1-9945-465fc3ed15f0}
"FreeBSD GlusterFS 5" {a4b0d515-5924-4517-9052-df238c366f2b}
"FreeBSD GlusterFS 6" {66621755-1b97-4486-aa15-a7bec9edb343}

Before we try to connect to our FreeBSD machines we need to make a minimal network configuration. Each FreeBSD machine will have a minimal /etc/rc.conf file as shown in the example for the gluster1 host below.

gluster1 # cat /etc/rc.conf
hostname=gluster1
ifconfig_DEFAULT="inet 10.0.10.11/24 up"
defaultrouter=10.0.10.1
sshd_enable=YES

For the setup purposes we will need to allow root login on these FreeBSD GlusterFS machines with PermitRootLogin yes option in the /etc/ssh/sshd_config file. You will also need to restart the sshd(8) service after the changes.

gluster1 # grep '^PermitRootLogin' /etc/ssh/sshd_config
PermitRootLogin yes
# service sshd restart

By using NAT Network with Port Forwarding the FreeBSD machines will be accessible on the localhost ports. For example the gluster1 machine will be available on port 2211, the gluster2 machine will be available on port 2212 and so on. This is shown in the sockstat utility output below.

vbhost % sockstat -l4
USER     COMMAND    PID   FD PROTO  LOCAL ADDRESS         FOREIGN ADDRESS
vermaden VBoxNetNAT 57622 17 udp4   *:*                   *:*
vermaden VBoxNetNAT 57622 19 tcp4   *:2211                *:*
vermaden VBoxNetNAT 57622 20 tcp4   *:2212                *:*
vermaden VBoxNetNAT 57622 21 tcp4   *:2213                *:*
vermaden VBoxNetNAT 57622 22 tcp4   *:2214                *:*
vermaden VBoxNetNAT 57622 23 tcp4   *:2215                *:*
vermaden VBoxNetNAT 57622 24 tcp4   *:2216                *:*
vermaden VBoxNetNAT 57622 28 tcp4   *:2240                *:*
vermaden VBoxNetNAT 57622 29 tcp4   *:9140                *:*
vermaden VBoxNetNAT 57622 30 tcp4   *:2220                *:*
root     sshd       96791 4  tcp4   *:22                  *:*

I think the correlation between the IP address and the port on the host is obvious 🙂

Here is the list of the machines with ports on localhost:

10.0.10.11 gluster1 2211
10.0.10.12 gluster2 2212
10.0.10.13 gluster3 2213
10.0.10.14 gluster4 2214
10.0.10.15 gluster5 2215
10.0.10.16 gluster6 2216
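
For reference, port-forwarding rules like these could also be created from the command line – a minimal sketch, assuming the NAT network is named NatNetwork (adjust it to the name used in your VirtualBox setup).

vbhost % for I in $( seq 6 ); do \
           VBoxManage natnetwork modify --netname NatNetwork \
             --port-forward-4 "gluster${I}:tcp:[]:221${I}:[10.0.10.1${I}]:22"; \
         done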

To connect to such a machine from the VirtualBox host system you will need this command:

vbhost % ssh -l root localhost -p 2211

To avoid typing that every time you need to login to gluster1 let's make some changes to the ~/.ssh/config file for convenience. This way it will be possible to login in a much shorter way.

vbhost % ssh gluster1

Here is the modified ~/.ssh/config file.

vbhost % cat ~/.ssh/config
# GENERAL
  StrictHostKeyChecking no
  LogLevel              quiet
  KeepAlive             yes
  ServerAliveInterval   30
  VerifyHostKeyDNS      no

# ALL HOSTS SETTINGS
Host *
  StrictHostKeyChecking no
  Compression           yes

# GLUSTER
Host gluster1
  User root
  Hostname 127.0.0.1
  Port 2211

Host gluster2
  User root
  Hostname 127.0.0.1
  Port 2212

Host gluster3
  User root
  Hostname 127.0.0.1
  Port 2213

Host gluster4
  User root
  Hostname 127.0.0.1
  Port 2214

Host gluster5
  User root
  Hostname 127.0.0.1
  Port 2215

Host gluster6
  User root
  Hostname 127.0.0.1
  Port 2216

I assume that you already have some SSH keys generated (with ~/.ssh/id_rsa as the private key) so let's remove the need to type a password on each SSH login.

vbhost % ssh-copy-id -i ~/.ssh/id_rsa gluster1
Password for root@gluster1:

vbhost % ssh-copy-id -i ~/.ssh/id_rsa gluster2
Password for root@gluster2:

vbhost % ssh-copy-id -i ~/.ssh/id_rsa gluster3
Password for root@gluster3:

vbhost % ssh-copy-id -i ~/.ssh/id_rsa gluster4
Password for root@gluster4:

vbhost % ssh-copy-id -i ~/.ssh/id_rsa gluster5
Password for root@gluster5:

vbhost % ssh-copy-id -i ~/.ssh/id_rsa gluster6
Password for root@gluster6:
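
The same could be done in one short loop instead of typing the command six times – a small sketch using the same key and host names.

vbhost % for I in $( jot 6 ); do ssh-copy-id -i ~/.ssh/id_rsa gluster${I}; done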

Ansible Setup

As we already have SSH integration we will now configure Ansible to connect to our 'localhost' ports for the FreeBSD machines.

Here is the Ansible hosts file.

vbhost % cat hosts
[gluster]
gluster1 ansible_port=2211 ansible_host=127.0.0.1 ansible_user=root
gluster2 ansible_port=2212 ansible_host=127.0.0.1 ansible_user=root
gluster3 ansible_port=2213 ansible_host=127.0.0.1 ansible_user=root
gluster4 ansible_port=2214 ansible_host=127.0.0.1 ansible_user=root
gluster5 ansible_port=2215 ansible_host=127.0.0.1 ansible_user=root
gluster6 ansible_port=2216 ansible_host=127.0.0.1 ansible_user=root

[gluster:vars]
ansible_python_interpreter=/usr/local/bin/python2.7

Here is the listing of these machines using the ansible command.

vbhost % ansible -i hosts --list-hosts gluster
  hosts (6):
    gluster1
    gluster2
    gluster3
    gluster4
    gluster5
    gluster6

Let's verify that our Ansible setup works correctly.

vbhost % ansible -i hosts -m raw -a 'echo' gluster
gluster1 | CHANGED | rc=0 >>



gluster3 | CHANGED | rc=0 >>



gluster2 | CHANGED | rc=0 >>



gluster5 | CHANGED | rc=0 >>



gluster4 | CHANGED | rc=0 >>



gluster6 | CHANGED | rc=0 >>

It works as desired.

We are not able to use Ansible modules other than Raw because by default Python is not installed on FreeBSD as shown below.

vbhost % ansible -i hosts -m ping gluster
gluster1 | FAILED! => {
    "changed": false,
    "module_stderr": "",
    "module_stdout": "/bin/sh: /usr/local/bin/python2.7: not found\r\n",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 127
}
gluster2 | FAILED! => {
    "changed": false,
    "module_stderr": "",
    "module_stdout": "/bin/sh: /usr/local/bin/python2.7: not found\r\n",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 127
}
gluster4 | FAILED! => {
    "changed": false,
    "module_stderr": "",
    "module_stdout": "/bin/sh: /usr/local/bin/python2.7: not found\r\n",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 127
}
gluster5 | FAILED! => {
    "changed": false,
    "module_stderr": "",
    "module_stdout": "/bin/sh: /usr/local/bin/python2.7: not found\r\n",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 127
}
gluster3 | FAILED! => {
    "changed": false,
    "module_stderr": "",
    "module_stdout": "/bin/sh: /usr/local/bin/python2.7: not found\r\n",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 127
}
gluster6 | FAILED! => {
    "changed": false,
    "module_stderr": "",
    "module_stdout": "/bin/sh: /usr/local/bin/python2.7: not found\r\n",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 127
}

We need to get Python installed on FreeBSD.

We will partially use Ansible for this and partially GNU Parallel.

vbhost % ansible -i hosts --list-hosts gluster \
           | sed 1d \
           | while read I; do ssh ${I} env ASSUME_ALWAYS_YES=yes pkg install python; done
pkg: Error fetching http://pkg.FreeBSD.org/FreeBSD:12:amd64/quarterly/Latest/pkg.txz: No address record
A pre-built version of pkg could not be found for your system.
Consider changing PACKAGESITE or installing it from ports: 'ports-mgmt/pkg'.
Bootstrapping pkg from pkg+http://pkg.FreeBSD.org/FreeBSD:12:amd64/quarterly, please wait...

… we forgot about setting up DNS in the FreeBSD machines, let’s fix that.

It is as easy as executing echo nameserver 1.1.1.1 > /etc/resolv.conf command on each FreeBSD machine.

Let's verify what input will be sent to GNU Parallel before executing it.

vbhost % ansible -i hosts --list-hosts gluster \
           | sed 1d \
           | while read I; do echo "ssh ${I} 'echo nameserver 1.1.1.1 > /etc/resolv.conf'"; done
ssh gluster1 'echo nameserver 1.1.1.1 > /etc/resolv.conf'
ssh gluster2 'echo nameserver 1.1.1.1 > /etc/resolv.conf'
ssh gluster3 'echo nameserver 1.1.1.1 > /etc/resolv.conf'
ssh gluster4 'echo nameserver 1.1.1.1 > /etc/resolv.conf'
ssh gluster5 'echo nameserver 1.1.1.1 > /etc/resolv.conf'
ssh gluster6 'echo nameserver 1.1.1.1 > /etc/resolv.conf'

Looks reasonable, let's engage GNU Parallel then.

vbhost % ansible -i hosts --list-hosts gluster \
           | sed 1d \
           | while read I; do echo "ssh ${I} 'echo nameserver 1.1.1.1 > /etc/resolv.conf'"; done | parallel

Computers / CPU cores / Max jobs to run
1:local / 2 / 2

Computer:jobs running/jobs completed/%of started jobs/Average seconds to complete
local:0/6/100%/1.0s

We will now verify that the DNS is configured properly on the FreeBSD machines.

vbhost % for I in $( jot 6 ); do echo -n "gluster${I} "; ssh gluster${I} 'cat /etc/resolv.conf'; done
gluster1 nameserver 1.1.1.1
gluster2 nameserver 1.1.1.1
gluster3 nameserver 1.1.1.1
gluster4 nameserver 1.1.1.1
gluster5 nameserver 1.1.1.1
gluster6 nameserver 1.1.1.1

Verification of the DNS resolution by using host(1) to resolve the freebsd.org domain.

vbhost % for I in $( jot 6 ); do echo; echo "gluster${I}"; ssh gluster${I} host freebsd.org; done

gluster1
freebsd.org has address 96.47.72.84
freebsd.org has IPv6 address 2610:1c1:1:606c::50:15
freebsd.org mail is handled by 10 mx1.freebsd.org.
freebsd.org mail is handled by 30 mx66.freebsd.org.

gluster2
freebsd.org has address 96.47.72.84
freebsd.org has IPv6 address 2610:1c1:1:606c::50:15
freebsd.org mail is handled by 30 mx66.freebsd.org.
freebsd.org mail is handled by 10 mx1.freebsd.org.

gluster3
freebsd.org has address 96.47.72.84
freebsd.org has IPv6 address 2610:1c1:1:606c::50:15
freebsd.org mail is handled by 30 mx66.freebsd.org.
freebsd.org mail is handled by 10 mx1.freebsd.org.

gluster4
freebsd.org has address 96.47.72.84
freebsd.org has IPv6 address 2610:1c1:1:606c::50:15
freebsd.org mail is handled by 30 mx66.freebsd.org.
freebsd.org mail is handled by 10 mx1.freebsd.org.

gluster5
freebsd.org has address 96.47.72.84
freebsd.org has IPv6 address 2610:1c1:1:606c::50:15
freebsd.org mail is handled by 10 mx1.freebsd.org.
freebsd.org mail is handled by 30 mx66.freebsd.org.

gluster6
freebsd.org has address 96.47.72.84
freebsd.org has IPv6 address 2610:1c1:1:606c::50:15
freebsd.org mail is handled by 10 mx1.freebsd.org.
freebsd.org mail is handled by 30 mx66.freebsd.org.

The DNS resolution works properly. Now we will switch from the default quarterly pkg(8) repository to the latest one, which has more frequent updates as the name suggests. We will need to use the sed -i '' s/quarterly/latest/g /etc/pkg/FreeBSD.conf command on each FreeBSD machine.

Verification of what will be sent to GNU Parallel.

vbhost % ansible -i hosts --list-hosts gluster \
           | sed 1d \
           | while read I; do echo "ssh ${I} 'sed -i \"\" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'"; done
ssh gluster1 'sed -i "" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'
ssh gluster2 'sed -i "" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'
ssh gluster3 'sed -i "" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'
ssh gluster4 'sed -i "" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'
ssh gluster5 'sed -i "" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'
ssh gluster6 'sed -i "" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'

Let’s send the command to FreeBSD machines then.

vbhost % ansible -i hosts --list-hosts gluster \
           | sed 1d \
           | while read I; do echo "ssh $I 'sed -i \"\" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'"; done | parallel

Computers / CPU cores / Max jobs to run
1:local / 2 / 2

Computer:jobs running/jobs completed/%of started jobs/Average seconds to complete
local:0/6/100%/1.0s

As shown below the latest repository is now configured in the /etc/pkg/FreeBSD.conf file – verified here on gluster3, with a loop to check all nodes right after the listing.

vbhost % ssh gluster3 tail -7 /etc/pkg/FreeBSD.conf
FreeBSD: {
  url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest",
  mirror_type: "srv",
  signature_type: "fingerprints",
  fingerprints: "/usr/share/keys/pkg",
  enabled: yes
}
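
Checking the remaining nodes one by one would be tedious, so the same jot(1) loop pattern used earlier can confirm the change on all six machines at once – just a sketch, output omitted.

vbhost % for I in $( jot 6 ); do echo -n "gluster${I} "; ssh gluster${I} grep latest /etc/pkg/FreeBSD.conf; done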

We may now get back to Python.

vbhost % ansible -i hosts --list-hosts gluster \
           | sed 1d \
           | while read I; do echo ssh ${I} env ASSUME_ALWAYS_YES=yes pkg install python; done
ssh gluster1 env ASSUME_ALWAYS_YES=yes pkg install python
ssh gluster2 env ASSUME_ALWAYS_YES=yes pkg install python
ssh gluster3 env ASSUME_ALWAYS_YES=yes pkg install python
ssh gluster4 env ASSUME_ALWAYS_YES=yes pkg install python
ssh gluster5 env ASSUME_ALWAYS_YES=yes pkg install python
ssh gluster6 env ASSUME_ALWAYS_YES=yes pkg install python

… and execution on the FreeBSD machines with GNU Parallel.

vbhost % ansible -i hosts --list-hosts gluster \
           | sed 1d \
           | while read I; do echo ssh ${I} env ASSUME_ALWAYS_YES=yes pkg install python; done | parallel

Computers / CPU cores / Max jobs to run
1:local / 2 / 2

Computer:jobs running/jobs completed/%of started jobs/Average seconds to complete
local:0/6/100%/156.0s

The Python package and its dependencies are now installed.

vbhost % ssh gluster3 pkg info
gettext-runtime-0.19.8.1_2     GNU gettext runtime libraries and programs
indexinfo-0.3.1                Utility to regenerate the GNU info page index
libffi-3.2.1_3                 Foreign Function Interface
pkg-1.10.5_5                   Package manager
python-2.7_3,2                 "meta-port" for the default version of Python interpreter
python2-2_3                    The "meta-port" for version 2 of the Python interpreter
python27-2.7.15                Interpreted object-oriented programming language
readline-7.0.5                 Library for editing command lines as they are typed

Now the Ansible ping module works as desired.

% ansible -i hosts -m ping gluster
gluster1 | SUCCESS => {
"changed": false,
"ping": "pong"
}
gluster4 | SUCCESS => {
"changed": false,
"ping": "pong"
}
gluster5 | SUCCESS => {
"changed": false,
"ping": "pong"
}
gluster3 | SUCCESS => {
"changed": false,
"ping": "pong"
}
gluster2 | SUCCESS => {
"changed": false,
"ping": "pong"
}
gluster6 | SUCCESS => {
"changed": false,
"ping": "pong"
}

GlusterFS Volume Options

GlusterFS has a lot of options for setting up a volume. They are described in the GlusterFS Administration Guide in the Setting up GlusterFS Volumes part. Here they are:

Distributed – Distributed volumes distribute files across the bricks in the volume. You can use distributed volumes where the requirement is to scale storage and the redundancy is either not important or is provided by other hardware/software layers.

Replicated – Replicated volumes replicate files across bricks in the volume. You can use replicated volumes in environments where high-availability and high-reliability are critical.

Distributed Replicated – Distributed replicated volumes distribute files across replicated bricks in the volume. You can use distributed replicated volumes in environments where the requirement is to scale storage and high-reliability is critical. Distributed replicated volumes also offer improved read performance in most environments.

Dispersed – Dispersed volumes are based on erasure codes, providing space-efficient protection against disk or server failures. It stores an encoded fragment of the original file to each brick in a way that only a subset of the fragments is needed to recover the original file. The number of bricks that can be missing without losing access to data is configured by the administrator on volume creation time.

Distributed Dispersed – Distributed dispersed volumes distribute files across dispersed subvolumes. This has the same advantages of distribute replicate volumes, but using disperse to store the data into the bricks.

Striped [Deprecated] – Striped volumes stripes data across bricks in the volume. For best results, you should use striped volumes only in high concurrency environments accessing very large files.

Distributed Striped [Deprecated] – Distributed striped volumes stripe data across two or more nodes in the cluster. You should use distributed striped volumes where the requirement is to scale storage and in high concurrency environments accessing very large files is critical.

Distributed Striped Replicated [Deprecated] – Distributed striped replicated volumes distributes striped data across replicated bricks in the cluster. For best results, you should use distributed striped replicated volumes in highly concurrent environments where parallel access of very large files and performance is critical. In this release, configuration of this volume type is supported only for Map Reduce workloads.

Striped Replicated [Deprecated] – Striped replicated volumes stripes data across replicated bricks in the cluster. For best results, you should use striped replicated volumes in highly concurrent environments where there is parallel access of very large files and performance is critical. In this release, configuration of this volume type is supported only for Map Reduce workloads.

Of all the still supported types above, the Dispersed volume seems to be the best choice. Like Minio, dispersed volumes are based on erasure codes.

As we have 6 servers we will use the 4 + 2 setup, which is a logical RAID6 across these 6 servers. This means that we will be able to lose 2 of them without a service outage. It also means that if we upload a 100 MB file to our volume we will use 150 MB of space across these 6 servers, with 25 MB on each node – see the small calculation after the diagram below.

We can visualize this with the following ASCII diagram.

+-----------+ +-----------+ +-----------+ +-----------+ +-----------+ +-----------+
|  gluster1 | |  gluster2 | |  gluster3 | |  gluster4 | |  gluster5 | |  gluster6 |
|           | |           | |           | |           | |           | |           |
|    brick1 | |    brick2 | |    brick3 | |    brick4 | |    brick5 | |    brick6 |
+-----+-----+ +-----+-----+ +-----+-----+ +-----+-----+ +-----+-----+ +-----+-----+
      |             |             |             |             |             |
    25|MB         25|MB         25|MB         25|MB         25|MB         25|MB
      |             |             |             |             |             |
      +-------------+-------------+------+------+-------------+-------------+
                                         |
                                      100|MB
                                         |
                                     +---+---+
                                     | file0 |
                                     +-------+
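
For the record, here is the back-of-the-envelope arithmetic behind these numbers – plain shell arithmetic, not a GlusterFS command.

vbhost % FILE_MB=100 DATA=4 REDUNDANCY=2
vbhost % echo "$(( FILE_MB / DATA )) MB per brick / $(( ( FILE_MB / DATA ) * ( DATA + REDUNDANCY ) )) MB total"
25 MB per brick / 150 MB total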

Deploy GlusterFS Cluster

We will use gluster-setup.yml as our Ansible playbook.

Let's create something simple for a start – for example a task that always installs the latest Python package.

vbhost % cat gluster-setup.yml
---
- name: Install and Setup GlusterFS on FreeBSD
  hosts: gluster
  user: root
  tasks:

  - name: Install Latest Python Package
    pkgng:
      name: python
      state: latest

We will now execute it.

vbhost % ansible-playbook -i hosts gluster-setup.yml

PLAY [Install and Setup GlusterFS on FreeBSD] **********************************

TASK [Gathering Facts] *********************************************************
ok: [gluster3]
ok: [gluster5]
ok: [gluster1]
ok: [gluster4]
ok: [gluster2]
ok: [gluster6]

TASK [Install Latest Python Package] *******************************************
ok: [gluster4]
ok: [gluster2]
ok: [gluster5]
ok: [gluster3]
ok: [gluster1]
ok: [gluster6]

PLAY RECAP *********************************************************************
gluster1                   : ok=2    changed=0    unreachable=0    failed=0
gluster2                   : ok=2    changed=0    unreachable=0    failed=0
gluster3                   : ok=2    changed=0    unreachable=0    failed=0
gluster4                   : ok=2    changed=0    unreachable=0    failed=0
gluster5                   : ok=2    changed=0    unreachable=0    failed=0
gluster6                   : ok=2    changed=0    unreachable=0    failed=0

We had just installed Python on these machines, so no update was needed.

As we will be creating a cluster we need time synchronization between its nodes. We will use the most obvious solution – the ntpd(8) daemon that is in the FreeBSD base system. These lines are added to our gluster-setup.yml playbook to achieve this goal.

  - name: Enable NTPD Service
    raw: sysrc ntpd_enable=YES

  - name: Start NTPD Service
    service:
      name: ntpd
      state: started

After executing the playbook again with the ansible-playbook -i hosts gluster-setup.yml command we will see additional output like the one shown below.

TASK [Enable NTPD Service] ************************************************
changed: [gluster2]
changed: [gluster1]
changed: [gluster4]
changed: [gluster5]
changed: [gluster3]
changed: [gluster6]

TASK [Start NTPD Service] ******************************************************
changed: [gluster5]
changed: [gluster4]
changed: [gluster2]
changed: [gluster1]
changed: [gluster3]
changed: [gluster6]

Random verification of the NTP service.

vbhost % ssh gluster1 ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 0.freebsd.pool. .POOL.          16 p    -   64    0    0.000    0.000   0.000
 ntp.ifj.edu.pl  10.0.2.4         3 u    1   64    1  119.956  -345759  32.552
 news-archive.ic 229.30.220.210   2 u    -   64    1   60.533  -345760  21.104

Now we need to install GlusterFS on FreeBSD machines – the glusterfs package.

We will add appropriate section to the playbook.

  - name: Install Latest GlusterFS Package
    pkgng:
      state: latest
      name:
      - glusterfs
      - ncdu

You can add more than one package to the pkgng Ansible module – for example I have also added the ncdu package.

You can read more about the pkgng Ansible module by typing the ansible-doc pkgng command, or at least its short version with the -s argument.

vbhost % ansible-doc -s pkgng
- name: Package manager for FreeBSD >= 9.0
  pkgng:
      annotation:            # A comma-separated list of keyvalue-pairs of the form `[=]'. A `+' denotes adding
                               an annotation, a `-' denotes removing an annotation, and `:' denotes
                               modifying an annotation. If setting or modifying annotations, a value
                               must be provided.
      autoremove:            # Remove automatically installed packages which are no longer needed.
      cached:                # Use local package base instead of fetching an updated one.
      chroot:                # Pkg will chroot in the specified environment. Can not be used together with `rootdir' or `jail'
                               options.
      jail:                  # Pkg will execute in the given jail name or id. Can not be used together with `chroot' or `rootdir'
                               options.
      name:                  # (required) Name or list of names of packages to install/remove.
      pkgsite:               # For pkgng versions before 1.1.4, specify packagesite to use for downloading packages. If not
                               specified, use settings from `/usr/local/etc/pkg.conf'. For newer
                               pkgng versions, specify a the name of a repository configured in
                               `/usr/local/etc/pkg/repos'.
      rootdir:               # For pkgng versions 1.5 and later, pkg will install all packages within the specified root directory.
                               Can not be used together with `chroot' or `jail' options.
      state:                 # State of the package. Note: "latest" added in 2.7

You can read more about this particular module on the following – https://docs.ansible.com/ansible/latest/modules/pkgng_module.html – Ansible page.

We will now add the GlusterFS nodes to the /etc/hosts file and add the autoboot_delay=1 parameter to the /boot/loader.conf file so our systems will boot 9 seconds faster, as 10 is the default delay setting.

Here is our gluster-setup.yml Ansible playbook so far.

vbhost % cat gluster-setup.yml
---
- name: Install and Setup GlusterFS on FreeBSD
  hosts: gluster
  user: root
  tasks:

  - name: Install Latest Python Package
    pkgng:
      name: python
      state: latest

  - name: Enable NTPD Service
    raw: sysrc ntpd_enable=YES

  - name: Start NTPD Service
    service:
      name: ntpd
      state: started

  - name: Install Latest GlusterFS Package
    pkgng:
      state: latest
      name:
      - glusterfs
      - ncdu

  - name: Add Nodes to /etc/hosts File
    blockinfile:
      path: /etc/hosts
      block: |
        10.0.10.11 gluster1
        10.0.10.12 gluster2
        10.0.10.13 gluster3
        10.0.10.14 gluster4
        10.0.10.15 gluster5
        10.0.10.16 gluster6

  - name: Add autoboot_delay to /boot/loader.conf File
    lineinfile:
      path: /boot/loader.conf
      line: autoboot_delay=1
      create: yes

Here is the result of the execution of this playbook.

vbhost % ansible-playbook -i hosts gluster-setup.yml

PLAY [Install and Setup GlusterFS on FreeBSD] **********************************

TASK [Gathering Facts] *********************************************************
ok: [gluster3]
ok: [gluster5]
ok: [gluster1]
ok: [gluster4]
ok: [gluster2]
ok: [gluster6]

TASK [Install Latest Python Package] *******************************************
ok: [gluster4]
ok: [gluster2]
ok: [gluster5]
ok: [gluster3]
ok: [gluster1]
ok: [gluster6]

TASK [Install Latest GlusterFS Package] ****************************************
ok: [gluster2]
ok: [gluster1]
ok: [gluster3]
ok: [gluster5]
ok: [gluster4]
ok: [gluster6]

TASK [Add Nodes to /etc/hosts File] ********************************************
changed: [gluster5]
changed: [gluster4]
changed: [gluster2]
changed: [gluster3]
changed: [gluster1]
changed: [gluster6]

TASK [Enable GlusterFS Service] ************************************************
changed: [gluster1]
changed: [gluster4]
changed: [gluster2]
changed: [gluster3]
changed: [gluster5]
changed: [gluster6]

TASK [Add autoboot_delay to /boot/loader.conf File] ****************************
changed: [gluster3]
changed: [gluster2]
changed: [gluster5]
changed: [gluster1]
changed: [gluster4]
changed: [gluster6]

PLAY RECAP *********************************************************************
gluster1                   : ok=6    changed=3    unreachable=0    failed=0
gluster2                   : ok=6    changed=3    unreachable=0    failed=0
gluster3                   : ok=6    changed=3    unreachable=0    failed=0
gluster4                   : ok=6    changed=3    unreachable=0    failed=0
gluster5                   : ok=6    changed=3    unreachable=0    failed=0
gluster6                   : ok=6    changed=3    unreachable=0    failed=0

Let’s check that the FreeBSD machines can now ping each other by name.

vbhost % ssh gluster6 cat /etc/hosts
# LOOPBACK
127.0.0.1      localhost localhost.my.domain
::1            localhost localhost.my.domain

# BEGIN ANSIBLE MANAGED BLOCK
10.0.10.11 gluster1
10.0.10.12 gluster2
10.0.10.13 gluster3
10.0.10.14 gluster4
10.0.10.15 gluster5
10.0.10.16 gluster6
# END ANSIBLE MANAGED BLOCK

vbhost % ssh gluster1 ping -c 1 gluster3
PING gluster3 (10.0.10.13): 56 data bytes
64 bytes from 10.0.10.13: icmp_seq=0 ttl=64 time=1.924 ms

--- gluster3 ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 1.924/1.924/1.924/0.000 ms

… and our /boot/loader.conf file.

vbhost % ssh gluster4 cat /boot/loader.conf
autoboot_delay=1

Now we need to create directories for the GlusterFS data. For lack of a better idea we will use the /data directory, with /data/volume1 as the directory for volume1, and the bricks will be put in /data/volume1/brick1 style dirs. In this setup I will use just one brick per server, but in a production environment you would probably use one brick per physical disk.

Here is the playbook command we will use to create these directories on FreeBSD machines.

  - name: Create brick* Directories for volume1
    raw: mkdir -p /data/volume1/brick` hostname | grep -o -E '[0-9]+' `

After executing it with the ansible-playbook -i hosts gluster-setup.yml command the directories have been created.

vbhost % ssh gluster2 find /data -ls | column -t
2247168  8  drwxr-xr-x  3  root  wheel  512  Dec  28  17:48  /data
2247169  8  drwxr-xr-x  3  root  wheel  512  Dec  28  17:48  /data/volume1
2247170  8  drwxr-xr-x  2  root  wheel  512  Dec  28  17:48  /data/volume1/brick2

We now need to add glusterd_enable=YES to the /etc/rc.conf file on the GlusterFS nodes and then start the GlusterFS service.

This is the snippet we will add to our playbook.

  - name: Enable GlusterFS Service
    raw: sysrc glusterd_enable=YES

  - name: Start GlusterFS Service
    service:
      name: glusterd
      state: started

Let’s make a quick random verification.

vbhost % ssh gluster4 service glusterd status
glusterd is running as pid 2684.

Now we need to proceed to the last part of the GlusterFS setup – create the volume.

We will do this from gluster1 – the 1st node of the GlusterFS cluster.

First we need to peer probe other nodes.

gluster1 # gluster peer probe gluster1
peer probe: success. Probe on localhost not needed
gluster1 # gluster peer probe gluster2
peer probe: success.
gluster1 # gluster peer probe gluster3
peer probe: success.
gluster1 # gluster peer probe gluster4
peer probe: success.
gluster1 # gluster peer probe gluster5
peer probe: success.
gluster1 # gluster peer probe gluster6
peer probe: success.

Then we can create the volume. We will need to use the force option because in our example setup the bricks live on the root partition.

gluster1 # gluster volume create volume1 \
             disperse-data 4 \
             redundancy 2 \
             transport tcp \
             gluster1:/data/volume1/brick1 \
             gluster2:/data/volume1/brick2 \
             gluster3:/data/volume1/brick3 \
             gluster4:/data/volume1/brick4 \
             gluster5:/data/volume1/brick5 \
             gluster6:/data/volume1/brick6 \
             force
volume create: volume1: success: please start the volume to access data

We can now start the volume1 GlusterFS volume.

gluster1 # gluster volume start volume1
volume start: volume1: success

gluster1 # gluster volume status volume1
Status of volume: volume1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster1:/data/volume1/brick1         N/A       N/A        N       N/A
Brick gluster2:/data/volume1/brick2         N/A       N/A        N       N/A
Brick gluster3:/data/volume1/brick3         N/A       N/A        N       N/A
Brick gluster4:/data/volume1/brick4         N/A       N/A        N       N/A
Brick gluster5:/data/volume1/brick5         N/A       N/A        N       N/A
Brick gluster6:/data/volume1/brick6         N/A       N/A        N       N/A
Self-heal Daemon on localhost               N/A       N/A        N       644
Self-heal Daemon on gluster6                N/A       N/A        N       643
Self-heal Daemon on gluster5                N/A       N/A        N       647
Self-heal Daemon on gluster2                N/A       N/A        N       645
Self-heal Daemon on gluster3                N/A       N/A        N       645
Self-heal Daemon on gluster4                N/A       N/A        N       645

Task Status of Volume volume1
------------------------------------------------------------------------------
There are no active volume tasks

gluster1 # gluster volume info volume1

Volume Name: volume1
Type: Disperse
Volume ID: 68cf9607-16bc-4550-9b6b-16a5c7656f51
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: gluster1:/data/volume1/brick1
Brick2: gluster2:/data/volume1/brick2
Brick3: gluster3:/data/volume1/brick3
Brick4: gluster4:/data/volume1/brick4
Brick5: gluster5:/data/volume1/brick5
Brick6: gluster6:/data/volume1/brick6
Options Reconfigured:
nfs.disable: on
transport.address-family: inet

Here are the contents of a currently unused/empty brick.

gluster1 # find /data/volume1/brick1
/data/volume1/brick1
/data/volume1/brick1/.glusterfs
/data/volume1/brick1/.glusterfs/indices
/data/volume1/brick1/.glusterfs/indices/xattrop
/data/volume1/brick1/.glusterfs/indices/entry-changes
/data/volume1/brick1/.glusterfs/quarantine
/data/volume1/brick1/.glusterfs/quarantine/stub-00000000-0000-0000-0000-000000000008
/data/volume1/brick1/.glusterfs/changelogs
/data/volume1/brick1/.glusterfs/changelogs/htime
/data/volume1/brick1/.glusterfs/changelogs/csnap
/data/volume1/brick1/.glusterfs/brick1.db
/data/volume1/brick1/.glusterfs/brick1.db-wal
/data/volume1/brick1/.glusterfs/brick1.db-shm
/data/volume1/brick1/.glusterfs/00
/data/volume1/brick1/.glusterfs/00/00
/data/volume1/brick1/.glusterfs/00/00/00000000-0000-0000-0000-000000000001
/data/volume1/brick1/.glusterfs/landfill
/data/volume1/brick1/.glusterfs/unlink
/data/volume1/brick1/.glusterfs/health_check

The 6-node GlusterFS cluster is now complete and volume1 is available for use.
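
Before moving on, the peer list can also be double-checked from any node with the usual gluster peer status and gluster pool list commands (output omitted here for brevity).

gluster1 # gluster peer status
gluster1 # gluster pool list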

Alternative

The GlusterFS documentation’s Quick Start Guide also suggests using Ansible to deploy and manage GlusterFS via the gluster-ansible repository or gluster-ansible-cluster, but they have the requirements listed below.

  • Ansible version 2.5 or above.
  • GlusterFS version 3.2 or above.

As GlusterFS on FreeBSD is at version 3.11.1, I did not use them.

FreeBSD Client

We will now use another VirtualBox machine – also based on the same FreeBSD 12.0-RELEASE image – to create a FreeBSD Client machine that will mount our volume1 volume.

We will need to install the glusterfs package with the pkg(8) command. Then we will use the mount_glusterfs command to mount the volume. Keep in mind that in order to mount a GlusterFS volume the FUSE (fuse.ko) kernel module is needed.

client # pkg install glusterfs

client # kldload fuse

client # mount_glusterfs 10.0.10.11:volume1 /mnt

client # echo $?
0

client # mount
/dev/gpt/rootfs on / (ufs, local, soft-updates)
devfs on /dev (devfs, local, multilabel)
/dev/fuse on /mnt (fusefs, local, synchronous)

client # ls /mnt
ls: /mnt: Socket is not connected

It is mounted but does not work. The solution to this problem is to add appropriate /etc/hosts entries for the GlusterFS nodes on the client.

client # cat /etc/hosts
::1                     localhost localhost.my.domain
127.0.0.1               localhost localhost.my.domain

10.0.10.11 gluster1
10.0.10.12 gluster2
10.0.10.13 gluster3
10.0.10.14 gluster4
10.0.10.15 gluster5
10.0.10.16 gluster6
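
For the record, these entries can be appended in one go with a plain heredoc – nothing fancy.

client # cat >> /etc/hosts << EOF
10.0.10.11 gluster1
10.0.10.12 gluster2
10.0.10.13 gluster3
10.0.10.14 gluster4
10.0.10.15 gluster5
10.0.10.16 gluster6
EOF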

Let's mount it again, now with the needed /etc/hosts entries.

client # umount /mnt

client # mount_glusterfs gluster1:volume1 /mnt

client # ls /mnt
client #

We now have our GlusterFS volume properly mounted and working on the FreeBSD Client machine.
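
If the client is supposed to survive a reboot, the fuse module can also be loaded automatically at boot – a small sketch using sysrc(8) is shown below; adding fuse_load="YES" to /boot/loader.conf would work as well.

client # sysrc kld_list+="fuse"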

Let's write a file there with dd(8) to see how it works.

client # dd < /dev/zero > FILE bs=1m count=100 status=progress
  73400320 bytes (73 MB, 70 MiB) transferred 1.016s, 72 MB/s
100+0 records in
100+0 records out
104857600 bytes transferred in 1.565618 secs (66975227 bytes/sec)

Let’s see how it looks in the brick directory.

gluster1 # ls -lh /data/volume1/brick1
total 25640
drw-------  10 root  wheel   512B Jan  3 18:31 .glusterfs
-rw-r--r--   2 root  wheel    25M Jan  3 18:31 FILE

gluster1 # find /data
/data/
/data/volume1
/data/volume1/brick1
/data/volume1/brick1/.glusterfs
/data/volume1/brick1/.glusterfs/indices
/data/volume1/brick1/.glusterfs/indices/xattrop
/data/volume1/brick1/.glusterfs/indices/xattrop/xattrop-aed814f1-0eb0-46a1-b569-aeddf5048e06
/data/volume1/brick1/.glusterfs/indices/entry-changes
/data/volume1/brick1/.glusterfs/quarantine
/data/volume1/brick1/.glusterfs/quarantine/stub-00000000-0000-0000-0000-000000000008
/data/volume1/brick1/.glusterfs/changelogs
/data/volume1/brick1/.glusterfs/changelogs/htime
/data/volume1/brick1/.glusterfs/changelogs/csnap
/data/volume1/brick1/.glusterfs/brick1.db
/data/volume1/brick1/.glusterfs/brick1.db-wal
/data/volume1/brick1/.glusterfs/brick1.db-shm
/data/volume1/brick1/.glusterfs/00
/data/volume1/brick1/.glusterfs/00/00
/data/volume1/brick1/.glusterfs/00/00/00000000-0000-0000-0000-000000000001
/data/volume1/brick1/.glusterfs/landfill
/data/volume1/brick1/.glusterfs/unlink
/data/volume1/brick1/.glusterfs/health_check
/data/volume1/brick1/.glusterfs/ac
/data/volume1/brick1/.glusterfs/ac/b4
/data/volume1/brick1/.glusterfs/11
/data/volume1/brick1/.glusterfs/11/50
/data/volume1/brick1/.glusterfs/11/50/115043ca-420f-48b5-af05-c9552db2e585
/data/volume1/brick1/FILE
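
Each of the six bricks should now hold a roughly 25 MB encoded fragment of FILE. One way to confirm that from the management machine is the same jot(1) loop as before – only a sketch, output omitted.

vbhost % for I in $( jot 6 ); do echo -n "gluster${I} "; ssh gluster${I} ls -lh /data/volume1/brick${I}/FILE; done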

Linux Client

I will also show how to mount a GlusterFS volume on the Red Hat clone – CentOS – in its latest 7.6 incarnation. It requires installation of the glusterfs-fuse package.

[root@localhost ~]# yum install glusterfs-fuse

[root@localhost ~]# rpm -q --filesbypkg glusterfs-fuse | grep /sbin/mount.glusterfs
glusterfs-fuse            /sbin/mount.glusterfs

[root@localhost ~]# mount.glusterfs 10.0.10.11:volume1 /mnt
Mount failed. Please check the log file for more details.

Similarly to the FreeBSD Client, the /etc/hosts entries are needed.

[root@localhost ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

10.0.10.11 gluster1
10.0.10.12 gluster2
10.0.10.13 gluster3
10.0.10.14 gluster4
10.0.10.15 gluster5
10.0.10.16 gluster6

[root@localhost ~]# mount.glusterfs 10.0.10.11:volume1 /mnt

[root@localhost ~]# ls /mnt
FILE

[root@localhost ~]# mount
10.0.10.11:volume1 on /mnt type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

With appropriate /etc/hosts entries it works as desired. We can see the FILE file generated from the FreeBSD Client machine.

GlusterFS Cluster Redundancy

After messing with the volume and creating and deleting various files I also tested its redundancy. In theory this RAID6-equivalent protection should protect us from the loss of two of the six servers. After shutting down two VirtualBox machines the volume was still available and ready to use.
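
A sketch of how such a test can be driven from the host machine – assuming the VirtualBox VMs carry the same names as the GlusterFS nodes, which is my assumption here.

vbhost % VBoxManage controlvm gluster5 poweroff
vbhost % VBoxManage controlvm gluster6 poweroff
client # ls /mnt

With only four of the six bricks alive, the last command should still list the volume contents.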

Closing Thoughts

It is a pity that FreeBSD does not provide a more modern GlusterFS package, as currently only version 3.11.1 is available.

EOF