Tag Archives: ansible

Valuable News – 2019/01/25

The Valuable News weekly series is dedicated to providing a summary of news, articles and other interesting stuff, mostly but not always related to UNIX or BSD systems. Whenever I stumble upon something worth mentioning on the Internet I just put it here so someone else can benefit from it as well.

Today the amount of information that we get from various information streams is a massive overload. Thus one needs to focus only on what is important, without the need to grep(1) the Internet every day. Hence the idea of providing such an information ‘bulk’, as I already do that grep(1).

UNIX

FreeBSD fixed pfsync change that broke CARP.
https://svnweb.freebsd.org/base?view=revision&revision=343130

Restoration of Early UNIX Artifacts.
https://www.usenix.org/legacy/events/usenix09/tech/full_papers/toomey/toomey.pdf

Amiga 500 Papercraft Pattern.
http://rockybergen.com/whatsnew/2019/1/15/amiga-500-papercraft-design
https://www.docdroid.net/file/download/PE7D4vg/amiga-500-ver1.pdf

Ansible BSD Working Group.
https://github.com/ansible/community/wiki/BSD

Ansible Solaris Working Group.
https://github.com/ansible/community/wiki/Solaris

In Other BSDs for 2019/01/19.
https://www.dragonflydigest.com/2019/01/19/22394.html

Debian systemd maintainer will not maintain systemd anymore.
“What’s going on is just too stupid/crazy.”
https://lists.freedesktop.org/archives/systemd-devel/2019-January/041971.html
https://www.reddit.com/r/linux/comments/agna5n/debian_systemd_maintainer_steps_down_over/

AMIGA gets LZ4 for 68k CPUs.
https://github.com/arnaud-carre/lz4-68k

[POLISH] ZFS – Rewolucja w Systemach Plikow (Mariusz Zaborski).
https://oshogbo.vexillium.org/pdf/programista_zfs_2015.pdf

LiteCLI is user friendly command line client for SQLite database.
https://www.pgcli.com/launching-litecli.html

OpenBSD can now join any open WiFi network.
https://undeadly.org/cgi?action=article;sid=20190120142708

OpenBSD deletes vmm(4) on i386.
https://undeadly.org/cgi?action=article;sid=20190120142529

NomadBSD 1.2-RC1 Released.
nomadbsd.org/index.html#rel1.2-rc1

Open source Spotify UNIX client available on FreeBSD as audio/spotifyd port.
https://svnweb.freebsd.org/ports?view=revision&revision=490765

OpenBSD added support for 2TB of memory on amd64.
https://undeadly.org/cgi?action=article;sid=20190121081245

HardenedBSD 1200058.2 Available.
https://hardenedbsd.org/article/op/2019-01-20/stable-release-hardenedbsd-stable-12-stable-v12000582

SoloBSD 19.01-STABLE based on HardenedBSD 1200058.2 Available.
https://www.solobsd.org/index.php/2019/01/23/solobsd-19-01-stable/

OmniOS Community Edition r151028l/r151026al/r151022cj Available.
https://omniosce.org/article/028l-026al-022cj

Wayland Support on BSDs Continuing to Improve.
https://www.phoronix.com/scan.php?page=news_item&px=Wayland-BSD-Improving-2019

Lumina from Project Trident will replace Fluxbox with their own WM after it is finished.
https://twitter.com/q5sys/status/1087422097499062274

For the Love of Pipes.
https://blog.jessfraz.com/post/for-the-love-of-pipes/

FreeBSD adds trim(8) command for devices with wear leveling algorithms.
https://svnweb.freebsd.org/base?view=revision&revision=343118

ClarityOS is Safe/Secure/Modern/Reliable HardenedBSD fork.
Goal is to make BSD as open and customizable as possible, while keeping it easy, simple and accessible to as many people as possible!
https://clarityos.net/

MidnightBSD Developer Journal.
https://www.justjournal.com/users/mbsd/entry/33772

Out-of-the-Box 10GE Network Benchmarks on Linux Distributions and FreeBSD 12.
https://www.phoronix.com/scan.php?page=article&item=10gbe-linux-freebsd12&num=3

FreeBSD top(1) will now not show the swap line if there are no swap devices.
https://reviews.freebsd.org/D18928

Wine 4.0 Released.
https://www.winehq.org/news/2019012201

AMD Open Source Driver for Vulkan.
https://github.com/GPUOpen-Drivers/AMDVLK

Kerberized NFS4 in Mixed Environment.
FreeBSD 12.0-RELEASE as KDC/Kerberos/NFS4 Server with Windows/Ubuntu as Clients.
https://amoradi.org/public/kerberized_nfsv4.txt

XigmaNAS 11.2.0.4.6400 Released.
https://sourceforge.net/projects/xigmanas/files/XigmaNAS-11.2.0.4/11.2.0.4.6400/

FreeBSD utility nsysctl to show MIB tree in XML.
https://twitter.com/alfsiciliano/status/1087561685693427712

HOWTO – VNET Jails on FreeBSD 12 with ZFS.
https://www.reddit.com/r/freebsd/comments/ahdbbq/howto_jails_freebsd_12_vnet_zfs/

Building FreeBSD based home router.
https://kamila.is/learning/building-my-home-router/

Project Trident 18.12 Overview.
https://www.youtube.com/watch?v=4YizY1YblzU

What’s Wrong with tar?
https://www.cyphar.com/blog/post/20190121-ociv2-images-i-tar

BSD Now 282 – Open the Rsync.
https://bsdnow.fireside.fm/282

ClonOS 19.01-RELEASE Available – First Public Release.
https://clonos.tekroutine.com/download.html

Hardware

Rock Pi 4 Review – Is this the Raspberry Pi challenger you’ve been looking for?
https://www.techrepublic.com/article/rock-pi-4-review-is-this-the-raspberry-pi-challenger-youve-been-looking-for/

Freedom and Privacy in the Datacenter.
https://media.ccc.de/v/froscon2018-2170-freedom_and_privacy_in_the_datacenter

Introducing Ultra EPYC AMD Powered Sun Ultra 24 Workstation.
https://www.servethehome.com/introducing-the-ultra-epyc-amd-powered-sun-ultra-24-workstation/

Thin-ITX AMD AM4 Motherboard.
https://www.anandtech.com/show/13885/the-true-shortest-am4-motherboard-thinitx-comes-to-amd

Backblaze Hard Drive Stats for 2018.
https://www.backblaze.com/blog/hard-drive-stats-for-2018/

Double Height DDR4 – 32GB Modules Reviewed.
https://www.anandtech.com/show/13694/double-height-ddr4-gskill-zadak-2x32gb-ddr4-3200

Life

Men’s and Women’s Brains.
https://www.youtube.com/watch?v=AQ9L9YBJkk8

Philip Zimbardo’s Response to Recent Criticisms of the Stanford Prison Experiment.
https://www.prisonexp.org/response/

50 Years Ago Sugar Industry Quietly Paid Scientists to Point Blame at Fat.
https://www.npr.org/sections/thetwo-way/2016/09/13/493739074/50-years-ago-sugar-industry-quietly-paid-scientists-to-point-blame-at-fat

Students Learn From People They Love.
https://www.nytimes.com/2019/01/17/opinion/learning-emotion-education.html

The Process of Mastering a Skill.
https://azeria-labs.com/the-process-of-mastering-a-skill/

Real Hardware for the FreeBSD BeaST Storage System.
https://mezzantrop.wordpress.com/2019/01/18/the-real-hardware-for-the-beast-storage-system/

Remote Code Execution in Linux apt/apt-get Commands.
https://justi.cz/security/2019/01/22/apt-rce.html

The Alarming Decline of Quality Youth Playtime.
https://houseoflawandorder.com/the-alarming-decline-of-quality-youth-playtime/

Other

Quake 1.5 Refreshed FPS Classic with New Maps/Monsters/Mayhem.
https://www.rockpapershotgun.com/2019/01/18/quake-1-5-is-a-massive-mod-overhaul-for-the-fps-classic/amp/

EOF

Valuable News – 2019/01/18

The Valuable News weekly series is dedicated to providing a summary of news, articles and other interesting stuff, mostly but not always related to UNIX or BSD systems. Whenever I stumble upon something worth mentioning on the Internet I just put it here so someone else can benefit from it as well.

Today the amount of information that we get from various information streams is a massive overload. Thus one needs to focus only on what is important, without the need to grep(1) the Internet every day. Hence the idea of providing such an information ‘bulk’, as I already do that grep(1).

UNIX

Installing and Updating FreeBSD 11.0 on Raspberry Pi.
https://solence.de/2017/03/15/installing-and-updating-freebsd-11-0-release-on-a-raspberry-pi/

ZFS on Linux Runs into Snag with Linux 5.0.
https://www.phoronix.com/scan.php?page=news_item&px=ZFS-On-Linux-5.0-Problem

Create your own video streaming server with Linux or FreeBSD.
https://opensource.com/article/19/1/basic-live-video-streaming-server

hexyl – command-line hex viewer for terminal.
Colors different categories of bytes (NULL/printable ASCII/ASCII whitespace/other ASCII/non-ASCII).
https://github.com/sharkdp/hexyl

How Vim Shaped My Writing.
https://gilesbowkett.blogspot.com/2016/10/how-vim-shaped-my-writing.html

Debugging Rust with VSCode on FreeBSD.
https://venshare.com/debugging-rust-with-vscode-on-freebsd/

In Other BSDs for 2019/01/12.
https://www.dragonflydigest.com/2019/01/12/22379.html

How OpenBSD is secure compared to other operating systems?
https://www.reddit.com/r/openbsd/comments/af1s00/how_openbsd_is_secure_compared_to_other_operating/

FreeBSD adds bluetooth-config script to simplify setting up Bluetooth connections.
https://svnweb.freebsd.org/changeset/base/342945

The good and the bad of Linux’s NetworkManager.
https://utcc.utoronto.ca/~cks/space/blog/linux/NetworkManagerGoodBad

Linux network-scripts being deprecated.
https://utcc.utoronto.ca/~cks/space/blog/linux/NetworkScriptsAndPPPoE

The CADT Model – mostly used in Linux environments.
https://www.jwz.org/doc/cadt.html

We are now closer to the Y2038 bug than the Y2K bug.
https://www.jwz.org/blog/2019/01/we-are-now-closer-to-the-y2038-bug-than-the-y2k-bug/

FreeBSD adds support for Clang Coverage Sanitizer in kernel (KCOV).
https://svnweb.freebsd.org/base?view=revision&revision=342962

Powersaving with DragonFly BSD Laptop.
https://www.dragonflybsd.org/docs/user/Powersave/?updated

Difference Between ZFS Scrub and Resilver.
https://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSResilversVsScrubs

Risk that comes from ZFS on Linux not being GPL Compatible.
Actually it is GPL that is not compatible with CDDL (the ZFS license). Not the other way around.
https://utcc.utoronto.ca/~cks/space/blog/linux/ZFSNonGPLRisk

FreeBSD takes vmm(4) towards multicore Bhyve AMD support.
https://svnweb.freebsd.org/base?view=revision&revision=343075

Ansible plugin for remotely provisioning FreeBSD Jails separately from their Jail host.
https://github.com/austinhyde/ansible-sshjail

Update Intel Microcode on FreeBSD.
https://www.thomas-krenn.com/en/wiki/Update_Intel_Microcode_on_FreeBSD

Building spotifyd on NetBSD.
https://atomicules.co.uk/2019/01/17/Building-Spotifyd-on-NetBSD.html

Building my FreeBSD-based home router.
https://kamila.is/learning/building-my-home-router/

HardenedBSD 1100056.12 Available.
https://hardenedbsd.org/article/op/2019-01-18/stable-release-hardenedbsd-stable-11-stable-v110005612

BSD Now 281 – EPYC Server Battle.
https://www.jupiterbroadcasting.com/128846/epyc-server-battle-bsd-now-281/

OpenRSYNC – clean-room implementation of rsync with a BSD (ISC) license.
https://github.com/kristapsdz/openrsync

FreeBSD and UEFI Boot.
https://blog.tyk.nu/blog/freebsd-and-uefi-boot/

Synaptics Touchpad on FreeBSD.
https://wiki.freebsd.org/SynapticsTouchpad

OpenBSD Content Filtering Proxy HOWTO in One Tweet 🙂
https://twitter.com/ogmaconnect1/status/983813448504094721

Aphelia – minimalist window manager.
https://github.com/vardy/aphelia

The tinywm.c is a window manager in 50 lines of C code.
https://github.com/mackstann/tinywm/blob/master/tinywm.c

OpenDoas: Portable Version of OpenBSD doas Command.
https://git.duncano.de/opendoas/about/

OpenBSD Packages Statistics.
https://pkgstat-openbsd.perso.pw/

The Art of Unix Programming by Eric Steven Raymond reformatted by Martin Tournoij.
https://arp242.net/the-art-of-unix-programming/

Reproducible NetBSD!
https://tests.reproducible-builds.org/netbsd/netbsd.html

FreeBSD PR (Problem Reports) Stats.
https://people.freebsd.org/~miwi/gnats/

Using cloud-init with SmartOS.
https://shaner.life/using-cloud-init-with-smartos/

Shawn Webb (co-creator of HardenedBSD) started new project to rewrite FreeBSD bhyve in Rust.
https://twitter.com/lattera/status/1085711849469952000

Hardware

Flashing my Lenovo x230 with Coreboot.
https://www.chucknemeth.com/flash-lenovo-x230-coreboot/

POWER9 Scales Up To 1.2 TB/s of I/O.
https://fuse.wikichip.org/news/1653/power9-scales-up-to-1-2-tb-s-of-i-o-targets-nvlink-3-opencapi-memory-for-2019/

V-Raptor is 24-Core ARM Server Based on SocioNext SC2A11 SoC.
https://www.cnx-software.com/2019/01/08/v-raptor-24-core-arm-server-socionext-sc2a11/

AMD Opteron X3421 Benchmarks and Review Low Cost Atom Competitor.
https://www.servethehome.com/amd-opteron-x3421-benchmarks-and-review-a-low-cost-atom-competitor/

Computer for CyberSecurity BSD-OS (SecBSD).
https://www.gofundme.com/computer-for-cybersecurity-bsdos

Life

Australia Becomes First Western Nation to Ban Secure Encryption.
https://www.extremetech.com/internet/281991-australia-becomes-first-western-nation-to-ban-secure-encryption

Facebook’s 10 Year Challenge is Just a Harmless Meme Right?
https://www.wired.com/story/facebook-10-year-meme-challenge/

Other

Łódź Revitalised – Poland’s Third Largest City is Under Transformation.
https://uml.lodz.pl/files/public/dla_biznesu/investlodz/fdi-lodz-revitalised.pdf

Why Microsoft Word must Die.
http://www.antipope.org/charlie/blog-static/2013/10/why-microsoft-word-must-die.html

EOF

 

GlusterFS Cluster on FreeBSD with Ansible and GNU Parallel

Today I would like to present an article about setting up a GlusterFS cluster on FreeBSD with the Ansible and GNU Parallel tools.

gluster-logo.png

To cite Wikipedia “GlusterFS is a scale-out network-attached storage file system. It has found applications including cloud computing, streaming media services, and content delivery networks.” The GlusterFS page describes it similarly “Gluster is a scalable, distributed file system that aggregates disk storage resources from multiple servers into a single global namespace.”

Here are its advantages:

  • Scales to several petabytes.
  • Handles thousands of clients.
  • POSIX compatible.
  • Uses commodity hardware.
  • Can use any ondisk filesystem that supports extended attributes.
  • Accessible using industry standard protocols like NFS and SMB.
  • Provides replication/quotas/geo-replication/snapshots/bitrot detection.
  • Allows optimization for different workloads.
  • Open Source.

Lab Setup

The lab will be entirely VirtualBox based and will consist of 6 hosts. To avoid creating 6 identical FreeBSD installations I used the 12.0-RELEASE virtual machine image available directly from the FreeBSD Project:

There are several formats available – qcow2/raw/vhd/vmdk – but as I will be using VirtualBox I used the VMDK one.

I will use different prompts depending on where the command is executed to make the article more readable. If there is ‘%‘ at the prompt then a regular user is needed and if there is ‘#‘ at the prompt then the superuser is needed.

gluster1 #    // command run on the gluster1 node
gluster* #    // command run on all gluster nodes
client #      // command run on gluster client
vbhost %      // command run on the VirtualBox host

Here is the list of the machines for the GlusterFS cluster:

10.0.10.11 gluster1
10.0.10.12 gluster2
10.0.10.13 gluster3
10.0.10.14 gluster4
10.0.10.15 gluster5
10.0.10.16 gluster6

Each VirtualBox virtual machine for FreeBSD uses the defaults (as suggested by the VirtualBox wizard) with 512 MB RAM and NAT Network, as shown in the image below.

virtualbox-freebsd-gluster-host.jpg

Here is the configuration of the NAT Network on VirtualBox.

virtualbox-nat-network.jpg

The cloned/copied FreeBSD-12.0-RELEASE-amd64.vmdk images will need to have different UUIDs, so we will use the VBoxManage internalcommands sethduuid command to achieve this.

vbhost % for I in $( seq 6 ); do cp FreeBSD-12.0-RELEASE-amd64.vmdk    vbox_GlusterFS_${I}.vmdk; done
vbhost % for I in $( seq 6 ); do VBoxManage internalcommands sethduuid vbox_GlusterFS_${I}.vmdk; done

To start the whole GlusterFS environment on VirtualBox use these commands.

vbhost % VBoxManage list vms | grep GlusterFS
"FreeBSD GlusterFS 1" {162a3b6f-4ec9-4709-bff8-162b0c8c9c41}
"FreeBSD GlusterFS 2" {2e30326c-ac5d-41d2-9b28-483375df38f6}
"FreeBSD GlusterFS 3" {6b2747ab-3ec6-4b1a-a28e-5d871d7891b3}
"FreeBSD GlusterFS 4" {12379cf8-31d9-4ff1-9945-465fc3ed15f0}
"FreeBSD GlusterFS 5" {a4b0d515-5924-4517-9052-df238c366f2b}
"FreeBSD GlusterFS 6" {66621755-1b97-4486-aa15-a7bec9edb343}

Check which GlusterFS machines are running.

vbhost % VBoxManage list runningvms | grep GlusterFS
vbhost %

Now we start the machines in VirtualBox headless mode in parallel.

vbhost % VBoxManage list vms \
           | grep GlusterFS \
           | awk -F \" '{print $2}' \
           | while read I; do VBoxManage startvm "${I}" --type headless & done

After that command you should see these machines running.

vbhost % VBoxManage list runningvms
"FreeBSD GlusterFS 1" {162a3b6f-4ec9-4709-bff8-162b0c8c9c41}
"FreeBSD GlusterFS 2" {2e30326c-ac5d-41d2-9b28-483375df38f6}
"FreeBSD GlusterFS 3" {6b2747ab-3ec6-4b1a-a28e-5d871d7891b3}
"FreeBSD GlusterFS 4" {12379cf8-31d9-4ff1-9945-465fc3ed15f0}
"FreeBSD GlusterFS 5" {a4b0d515-5924-4517-9052-df238c366f2b}
"FreeBSD GlusterFS 6" {66621755-1b97-4486-aa15-a7bec9edb343}

Before we try to connect to our FreeBSD machines we need to do minimal network configuration. Each FreeBSD machine will have a minimal /etc/rc.conf file, as shown in the example for the gluster1 host.

gluster1 # cat /etc/rc.conf
hostname=gluster1
ifconfig_DEFAULT="inet 10.0.10.11/24 up"
defaultrouter=10.0.10.1
sshd_enable=YES

For setup purposes we will need to allow root login on these FreeBSD GlusterFS machines with the PermitRootLogin yes option in the /etc/ssh/sshd_config file. You will also need to restart the sshd(8) service after the change.

gluster1 # grep '^PermitRootLogin' /etc/ssh/sshd_config
PermitRootLogin yes
gluster1 # service sshd restart

By using NAT Network with Port Forwarding the FreeBSD machines will be accessible on localhost ports. For example the gluster1 machine will be available on port 2211, the gluster2 machine on port 2212, and so on. This is shown in the sockstat utility output below.

vbhost % sockstat -l4
USER     COMMAND    PID   FD PROTO  LOCAL ADDRESS         FOREIGN ADDRESS
vermaden VBoxNetNAT 57622 17 udp4   *:*                   *:*
vermaden VBoxNetNAT 57622 19 tcp4   *:2211                *:*
vermaden VBoxNetNAT 57622 20 tcp4   *:2212                *:*
vermaden VBoxNetNAT 57622 21 tcp4   *:2213                *:*
vermaden VBoxNetNAT 57622 22 tcp4   *:2214                *:*
vermaden VBoxNetNAT 57622 23 tcp4   *:2215                *:*
vermaden VBoxNetNAT 57622 24 tcp4   *:2216                *:*
vermaden VBoxNetNAT 57622 28 tcp4   *:2240                *:*
vermaden VBoxNetNAT 57622 29 tcp4   *:9140                *:*
vermaden VBoxNetNAT 57622 30 tcp4   *:2220                *:*
root     sshd       96791 4  tcp4   *:22                  *:*

I think the correlation between the IP address and the port on the host is obvious 🙂

Here is the list of the machines with ports on localhost:

10.0.10.11 gluster1 2211
10.0.10.12 gluster2 2212
10.0.10.13 gluster3 2213
10.0.10.14 gluster4 2214
10.0.10.15 gluster5 2215
10.0.10.16 gluster6 2216
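The forwarding rules above were created in the NAT Network dialog shown earlier, but they can also be scripted. Here is a sketch that prints the matching VBoxManage commands (the network name NatNetwork and the ssh-gluster* rule names are my assumptions, adjust them to your configuration); pipe the output to sh to apply the rules.

```shell
# Print (not run) the VBoxManage port-forwarding rules for the six machines.
# Rule format: name:protocol:[host ip]:hostport:[guest ip]:guestport
for I in $( seq 6 ); do
  echo "VBoxManage natnetwork modify --netname NatNetwork --port-forward-4 'ssh-gluster${I}:tcp:[]:221${I}:[10.0.10.1${I}]:22'"
done
```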

To connect to such a machine from the VirtualBox host system you would need this command:

vbhost % ssh -l root localhost -p 2211

To avoid typing that every time you need to login to gluster1, let’s make some changes to the ~/.ssh/config file for convenience. This way it will be possible to login in a much shorter way.

vbhost % ssh gluster1

Here is the modified ~/.ssh/config file.

vbhost % cat ~/.ssh/config
# GENERAL
  StrictHostKeyChecking no
  LogLevel              quiet
  KeepAlive             yes
  ServerAliveInterval   30
  VerifyHostKeyDNS      no

# ALL HOSTS SETTINGS
Host *
  StrictHostKeyChecking no
  Compression           yes

# GLUSTER
Host gluster1
  User root
  Hostname 127.0.0.1
  Port 2211

Host gluster2
  User root
  Hostname 127.0.0.1
  Port 2212

Host gluster3
  User root
  Hostname 127.0.0.1
  Port 2213

Host gluster4
  User root
  Hostname 127.0.0.1
  Port 2214

Host gluster5
  User root
  Hostname 127.0.0.1
  Port 2215

Host gluster6
  User root
  Hostname 127.0.0.1
  Port 2216
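The six Host blocks above are near identical, so as a convenience sketch they can also be generated with a short loop and the output appended to ~/.ssh/config by hand.

```shell
# Generate the six repetitive Host blocks instead of typing them manually.
for I in $( seq 6 ); do
  printf 'Host gluster%s\n  User root\n  Hostname 127.0.0.1\n  Port 221%s\n\n' "${I}" "${I}"
done
```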

I assume that you already have some SSH keys generated (with ~/.ssh/id_rsa as the private key) so let’s remove the need to type a password on each SSH login.

vbhost % ssh-copy-id -i ~/.ssh/id_rsa gluster1
Password for root@gluster1:

vbhost % ssh-copy-id -i ~/.ssh/id_rsa gluster2
Password for root@gluster2:

vbhost % ssh-copy-id -i ~/.ssh/id_rsa gluster3
Password for root@gluster3:

vbhost % ssh-copy-id -i ~/.ssh/id_rsa gluster4
Password for root@gluster4:

vbhost % ssh-copy-id -i ~/.ssh/id_rsa gluster5
Password for root@gluster5:

vbhost % ssh-copy-id -i ~/.ssh/id_rsa gluster6
Password for root@gluster6:
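The six ssh-copy-id invocations follow the same pattern, so in the echo-then-execute style used throughout this article they can be printed by a loop and piped to sh; you will still be asked for each root password interactively.

```shell
# Print (not run) the six ssh-copy-id commands; pipe the output to sh
# to actually execute them.
for I in $( seq 6 ); do
  echo "ssh-copy-id -i ~/.ssh/id_rsa gluster${I}"
done
```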

Ansible Setup

As we already have the SSH integration in place, we will now configure Ansible to connect to our ‘localhost’ ports for the FreeBSD machines.

Here is the Ansible’s hosts file.

vbhost % cat hosts
[gluster]
gluster1 ansible_port=2211 ansible_host=127.0.0.1 ansible_user=root
gluster2 ansible_port=2212 ansible_host=127.0.0.1 ansible_user=root
gluster3 ansible_port=2213 ansible_host=127.0.0.1 ansible_user=root
gluster4 ansible_port=2214 ansible_host=127.0.0.1 ansible_user=root
gluster5 ansible_port=2215 ansible_host=127.0.0.1 ansible_user=root
gluster6 ansible_port=2216 ansible_host=127.0.0.1 ansible_user=root

[gluster:vars]
ansible_python_interpreter=/usr/local/bin/python2.7

Here is the listing of these machines using ansible command.

vbhost % ansible -i hosts --list-hosts gluster
  hosts (6):
    gluster1
    gluster2
    gluster3
    gluster4
    gluster5
    gluster6

Let’s verify that our Ansible setup works correctly.

vbhost % ansible -i hosts -m raw -a 'echo' gluster
gluster1 | CHANGED | rc=0 >>



gluster3 | CHANGED | rc=0 >>



gluster2 | CHANGED | rc=0 >>



gluster5 | CHANGED | rc=0 >>



gluster4 | CHANGED | rc=0 >>



gluster6 | CHANGED | rc=0 >>

It works as desired.

We are not able to use Ansible modules other than raw because by default Python is not installed on FreeBSD, as shown below.

vbhost % ansible -i hosts -m ping gluster
gluster1 | FAILED! => {
    "changed": false,
    "module_stderr": "",
    "module_stdout": "/bin/sh: /usr/local/bin/python2.7: not found\r\n",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 127
}
gluster2 | FAILED! => {
    "changed": false,
    "module_stderr": "",
    "module_stdout": "/bin/sh: /usr/local/bin/python2.7: not found\r\n",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 127
}
gluster4 | FAILED! => {
    "changed": false,
    "module_stderr": "",
    "module_stdout": "/bin/sh: /usr/local/bin/python2.7: not found\r\n",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 127
}
gluster5 | FAILED! => {
    "changed": false,
    "module_stderr": "",
    "module_stdout": "/bin/sh: /usr/local/bin/python2.7: not found\r\n",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 127
}
gluster3 | FAILED! => {
    "changed": false,
    "module_stderr": "",
    "module_stdout": "/bin/sh: /usr/local/bin/python2.7: not found\r\n",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 127
}
gluster6 | FAILED! => {
    "changed": false,
    "module_stderr": "",
    "module_stdout": "/bin/sh: /usr/local/bin/python2.7: not found\r\n",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 127
}

We need to get Python installed on FreeBSD.

We will partially use Ansible for this and partially GNU Parallel.

vbhost % ansible -i hosts --list-hosts gluster \
           | sed 1d \
           | while read I; do ssh ${I} env ASSUME_ALWAYS_YES=yes pkg install python; done
pkg: Error fetching http://pkg.FreeBSD.org/FreeBSD:12:amd64/quarterly/Latest/pkg.txz: No address record
A pre-built version of pkg could not be found for your system.
Consider changing PACKAGESITE or installing it from ports: 'ports-mgmt/pkg'.
Bootstrapping pkg from pkg+http://pkg.FreeBSD.org/FreeBSD:12:amd64/quarterly, please wait...

… we forgot about setting up DNS in the FreeBSD machines, let’s fix that.

It is as easy as executing echo nameserver 1.1.1.1 > /etc/resolv.conf command on each FreeBSD machine.

Let’s verify what input will be sent to GNU Parallel before executing it.

vbhost % ansible -i hosts --list-hosts gluster \
           | sed 1d \
           | while read I; do echo "ssh ${I} 'echo nameserver 1.1.1.1 > /etc/resolv.conf'"; done
ssh gluster1 'echo nameserver 1.1.1.1 > /etc/resolv.conf'
ssh gluster2 'echo nameserver 1.1.1.1 > /etc/resolv.conf'
ssh gluster3 'echo nameserver 1.1.1.1 > /etc/resolv.conf'
ssh gluster4 'echo nameserver 1.1.1.1 > /etc/resolv.conf'
ssh gluster5 'echo nameserver 1.1.1.1 > /etc/resolv.conf'
ssh gluster6 'echo nameserver 1.1.1.1 > /etc/resolv.conf'

Looks reasonable, so let’s engage GNU Parallel then.

vbhost % ansible -i hosts --list-hosts gluster \
           | sed 1d \
           | while read I; do echo "ssh ${I} 'echo nameserver 1.1.1.1 > /etc/resolv.conf'"; done | parallel

Computers / CPU cores / Max jobs to run
1:local / 2 / 2

Computer:jobs running/jobs completed/%of started jobs/Average seconds to complete
local:0/6/100%/1.0s

We will now verify that the DNS is configured properly on the FreeBSD machines.

vbhost % for I in $( jot 6 ); do echo -n "gluster${I} "; ssh gluster${I} 'cat /etc/resolv.conf'; done
gluster1 nameserver 1.1.1.1
gluster2 nameserver 1.1.1.1
gluster3 nameserver 1.1.1.1
gluster4 nameserver 1.1.1.1
gluster5 nameserver 1.1.1.1
gluster6 nameserver 1.1.1.1

Verification of the DNS by using the host(1) command to test name resolution.

vbhost % for I in $( jot 6 ); do echo; echo "gluster${I}"; ssh gluster${I} host freebsd.org; done

gluster1
freebsd.org has address 96.47.72.84
freebsd.org has IPv6 address 2610:1c1:1:606c::50:15
freebsd.org mail is handled by 10 mx1.freebsd.org.
freebsd.org mail is handled by 30 mx66.freebsd.org.

gluster2
freebsd.org has address 96.47.72.84
freebsd.org has IPv6 address 2610:1c1:1:606c::50:15
freebsd.org mail is handled by 30 mx66.freebsd.org.
freebsd.org mail is handled by 10 mx1.freebsd.org.

gluster3
freebsd.org has address 96.47.72.84
freebsd.org has IPv6 address 2610:1c1:1:606c::50:15
freebsd.org mail is handled by 30 mx66.freebsd.org.
freebsd.org mail is handled by 10 mx1.freebsd.org.

gluster4
freebsd.org has address 96.47.72.84
freebsd.org has IPv6 address 2610:1c1:1:606c::50:15
freebsd.org mail is handled by 30 mx66.freebsd.org.
freebsd.org mail is handled by 10 mx1.freebsd.org.

gluster5
freebsd.org has address 96.47.72.84
freebsd.org has IPv6 address 2610:1c1:1:606c::50:15
freebsd.org mail is handled by 10 mx1.freebsd.org.
freebsd.org mail is handled by 30 mx66.freebsd.org.

gluster6
freebsd.org has address 96.47.72.84
freebsd.org has IPv6 address 2610:1c1:1:606c::50:15
freebsd.org mail is handled by 10 mx1.freebsd.org.
freebsd.org mail is handled by 30 mx66.freebsd.org.

The DNS resolution works properly. Now we will switch from the default quarterly pkg(8) repository to the latest one, which as the name suggests has more frequent updates. We will need to run the sed -i '' s/quarterly/latest/g /etc/pkg/FreeBSD.conf command on each FreeBSD machine.

Verification of what will be sent to GNU Parallel.

vbhost % ansible -i hosts --list-hosts gluster \
           | sed 1d \
           | while read I; do echo "ssh ${I} 'sed -i \"\" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'"; done
ssh gluster1 'sed -i "" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'
ssh gluster2 'sed -i "" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'
ssh gluster3 'sed -i "" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'
ssh gluster4 'sed -i "" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'
ssh gluster5 'sed -i "" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'
ssh gluster6 'sed -i "" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'

Let’s send the command to FreeBSD machines then.

vbhost % ansible -i hosts --list-hosts gluster \
           | sed 1d \
           | while read I; do echo "ssh $I 'sed -i \"\" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'"; done | parallel

Computers / CPU cores / Max jobs to run
1:local / 2 / 2

Computer:jobs running/jobs completed/%of started jobs/Average seconds to complete
local:0/6/100%/1.0s

As shown below the latest repository is configured in the /etc/pkg/FreeBSD.conf file on each FreeBSD machine.

vbhost % ssh gluster3 tail -7 /etc/pkg/FreeBSD.conf
FreeBSD: {
  url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest",
  mirror_type: "srv",
  signature_type: "fingerprints",
  fingerprints: "/usr/share/keys/pkg",
  enabled: yes
}

We may now get back to Python.

vbhost % ansible -i hosts --list-hosts gluster \
           | sed 1d \
           | while read I; do echo ssh ${I} env ASSUME_ALWAYS_YES=yes pkg install python; done
ssh gluster1 env ASSUME_ALWAYS_YES=yes pkg install python
ssh gluster2 env ASSUME_ALWAYS_YES=yes pkg install python
ssh gluster3 env ASSUME_ALWAYS_YES=yes pkg install python
ssh gluster4 env ASSUME_ALWAYS_YES=yes pkg install python
ssh gluster5 env ASSUME_ALWAYS_YES=yes pkg install python
ssh gluster6 env ASSUME_ALWAYS_YES=yes pkg install python

… and execution on the FreeBSD machines with GNU Parallel.

vbhost % ansible -i hosts --list-hosts gluster \ 
           | sed 1d \
           | while read I; do echo ssh ${I} env ASSUME_ALWAYS_YES=yes pkg install python; done | parallel

Computers / CPU cores / Max jobs to run
1:local / 2 / 2

Computer:jobs running/jobs completed/%of started jobs/Average seconds to complete
local:0/6/100%/156.0s

The Python package and its dependencies are installed.

vbhost % ssh gluster3 pkg info
gettext-runtime-0.19.8.1_2     GNU gettext runtime libraries and programs
indexinfo-0.3.1                Utility to regenerate the GNU info page index
libffi-3.2.1_3                 Foreign Function Interface
pkg-1.10.5_5                   Package manager
python-2.7_3,2                 "meta-port" for the default version of Python interpreter
python2-2_3                    The "meta-port" for version 2 of the Python interpreter
python27-2.7.15                Interpreted object-oriented programming language
readline-7.0.5                 Library for editing command lines as they are typed

Now the Ansible ping module works as desired.

vbhost % ansible -i hosts -m ping gluster
gluster1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
gluster4 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
gluster5 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
gluster3 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
gluster2 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
gluster6 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

GlusterFS Volume Options

GlusterFS has a lot of options for setting up volumes. They are described in the GlusterFS Administration Guide in the Setting up GlusterFS Volumes part. Here they are:

Distributed – Distributed volumes distribute files across the bricks in the volume. You can use distributed volumes where the requirement is to scale storage and the redundancy is either not important or is provided by other hardware/software layers.

Replicated – Replicated volumes replicate files across bricks in the volume. You can use replicated volumes in environments where high-availability and high-reliability are critical.

Distributed Replicated – Distributed replicated volumes distribute files across replicated bricks in the volume. You can use distributed replicated volumes in environments where the requirement is to scale storage and high-reliability is critical. Distributed replicated volumes also offer improved read performance in most environments.

Dispersed – Dispersed volumes are based on erasure codes, providing space-efficient protection against disk or server failures. It stores an encoded fragment of the original file to each brick in a way that only a subset of the fragments is needed to recover the original file. The number of bricks that can be missing without losing access to data is configured by the administrator on volume creation time.

Distributed Dispersed – Distributed dispersed volumes distribute files across dispersed subvolumes. This has the same advantages as distributed replicated volumes, but uses dispersion to store the data in the bricks.

Striped [Deprecated] – Striped volumes stripe data across bricks in the volume. For best results, you should use striped volumes only in high concurrency environments accessing very large files.

Distributed Striped [Deprecated] – Distributed striped volumes stripe data across two or more nodes in the cluster. You should use distributed striped volumes where the requirement is to scale storage and in high concurrency environments accessing very large files is critical.

Distributed Striped Replicated [Deprecated] – Distributed striped replicated volumes distribute striped data across replicated bricks in the cluster. For best results, you should use distributed striped replicated volumes in highly concurrent environments where parallel access of very large files and performance is critical. In this release, configuration of this volume type is supported only for Map Reduce workloads.

Striped Replicated [Deprecated] – Striped replicated volumes stripe data across replicated bricks in the cluster. For best results, you should use striped replicated volumes in highly concurrent environments where there is parallel access of very large files and performance is critical. In this release, configuration of this volume type is supported only for Map Reduce workloads.

Of all the still supported types above, the Dispersed volume seems to be the best choice. Like Minio, dispersed volumes are based on erasure codes.

As we have 6 servers we will use a 4 + 2 setup, which is a logical RAID6 across these 6 servers. This means that we will be able to lose 2 of them without a service outage. It also means that if we upload a 100 MB file to our volume, we will use 150 MB of space across these 6 servers, with 25 MB on each node.

We can visualize this with the following ASCII diagram.

+-----------+ +-----------+ +-----------+ +-----------+ +-----------+ +-----------+
|  gluster1 | |  gluster2 | |  gluster3 | |  gluster4 | |  gluster5 | |  gluster6 |
|           | |           | |           | |           | |           | |           |
|    brick1 | |    brick2 | |    brick3 | |    brick4 | |    brick5 | |    brick6 |
+-----+-----+ +-----+-----+ +-----+-----+ +-----+-----+ +-----+-----+ +-----+-----+
      |             |             |             |             |             |
    25|MB         25|MB         25|MB         25|MB         25|MB         25|MB
      |             |             |             |             |             |
      +-------------+-------------+------+------+-------------+-------------+
                                         |
                                      100|MB
                                         |
                                     +---+---+
                                     | file0 |
                                     +-------+
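
The arithmetic in the diagram can be checked with a short shell sketch (the numbers here assume the 4 + 2 dispersed layout used in this article):

```shell
# Space accounting for a dispersed volume with the assumed 4+2 layout.
DATA=4        # disperse-data bricks (hold the file fragments)
REDUNDANCY=2  # redundancy bricks (hold the erasure-code fragments)
FILE_MB=100   # size of the uploaded file in MB

PER_BRICK=$(( FILE_MB / DATA ))               # each brick stores 1/4 of the file
TOTAL=$(( PER_BRICK * (DATA + REDUNDANCY) ))  # total space used across all 6 nodes
echo "${PER_BRICK} MB per brick / ${TOTAL} MB total"
```

This prints 25 MB per brick and 150 MB total, matching the diagram.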

Deploy GlusterFS Cluster

We will use gluster-setup.yml as our Ansible playbook.

Let's create something simple for a start, for example a task that always installs the latest Python package.

vbhost % cat gluster-setup.yml
---
- name: Install and Setup GlusterFS on FreeBSD
  hosts: gluster
  user: root
  tasks:

  - name: Install Latest Python Package
    pkgng:
      name: python
      state: latest

We will now execute it.

vbhost % ansible-playbook -i hosts gluster-setup.yml

PLAY [Install and Setup GlusterFS on FreeBSD] **********************************

TASK [Gathering Facts] *********************************************************
ok: [gluster3]
ok: [gluster5]
ok: [gluster1]
ok: [gluster4]
ok: [gluster2]
ok: [gluster6]

TASK [Install Latest Python Package] *******************************************
ok: [gluster4]
ok: [gluster2]
ok: [gluster5]
ok: [gluster3]
ok: [gluster1]
ok: [gluster6]

PLAY RECAP *********************************************************************
gluster1                   : ok=2    changed=0    unreachable=0    failed=0
gluster2                   : ok=2    changed=0    unreachable=0    failed=0
gluster3                   : ok=2    changed=0    unreachable=0    failed=0
gluster4                   : ok=2    changed=0    unreachable=0    failed=0
gluster5                   : ok=2    changed=0    unreachable=0    failed=0
gluster6                   : ok=2    changed=0    unreachable=0    failed=0

Python was already installed in its latest version on these machines, so no update was needed.

As we will be creating a cluster we need time synchronization between the nodes of the cluster. We will use the most obvious solution – the ntpd(8) daemon that is in the FreeBSD base system. These lines are added to our gluster-setup.yml playbook to achieve this goal.

  - name: Enable NTPD Service
    raw: sysrc ntpd_enable=YES

  - name: Start NTPD Service
    service:
      name: ntpd
      state: started
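
The raw task above works, but it runs on every play and is always reported as changed. If your Ansible version ships the sysrc module (an assumption worth verifying with the ansible-doc sysrc command), the same setting can be applied idempotently:

```
  - name: Enable NTPD Service
    sysrc:
      name: ntpd_enable
      value: "YES"
```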

After executing the playbook again with the ansible-playbook -i hosts gluster-setup.yml command we will see additional output like that shown below.

TASK [Enable NTPD Service] ************************************************
changed: [gluster2]
changed: [gluster1]
changed: [gluster4]
changed: [gluster5]
changed: [gluster3]
changed: [gluster6]

TASK [Start NTPD Service] ******************************************************
changed: [gluster5]
changed: [gluster4]
changed: [gluster2]
changed: [gluster1]
changed: [gluster3]
changed: [gluster6]

Random verification of the NTP service.

vbhost % ssh gluster1 ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 0.freebsd.pool. .POOL.          16 p    -   64    0    0.000    0.000   0.000
 ntp.ifj.edu.pl  10.0.2.4         3 u    1   64    1  119.956  -345759  32.552
 news-archive.ic 229.30.220.210   2 u    -   64    1   60.533  -345760  21.104

Now we need to install GlusterFS on FreeBSD machines – the glusterfs package.

We will add appropriate section to the playbook.

  - name: Install Latest GlusterFS Package
    pkgng:
      state: latest
      name:
      - glusterfs
      - ncdu

You can add more than one package to the pkgng Ansible module – for example I have also added the ncdu package.

You can read more about the pkgng Ansible module by typing the ansible-doc pkgng command, or get its short version with the -s argument.

vbhost % ansible-doc -s pkgng
- name: Package manager for FreeBSD >= 9.0
  pkgng:
      annotation:            # A comma-separated list of keyvalue-pairs of the form `[=]'. A `+' denotes adding
                               an annotation, a `-' denotes removing an annotation, and `:' denotes
                               modifying an annotation. If setting or modifying annotations, a value
                               must be provided.
      autoremove:            # Remove automatically installed packages which are no longer needed.
      cached:                # Use local package base instead of fetching an updated one.
      chroot:                # Pkg will chroot in the specified environment. Can not be used together with `rootdir' or `jail'
                               options.
      jail:                  # Pkg will execute in the given jail name or id. Can not be used together with `chroot' or `rootdir'
                               options.
      name:                  # (required) Name or list of names of packages to install/remove.
      pkgsite:               # For pkgng versions before 1.1.4, specify packagesite to use for downloading packages. If not
                               specified, use settings from `/usr/local/etc/pkg.conf'. For newer
                               pkgng versions, specify a the name of a repository configured in
                               `/usr/local/etc/pkg/repos'.
      rootdir:               # For pkgng versions 1.5 and later, pkg will install all packages within the specified root directory.
                               Can not be used together with `chroot' or `jail' options.
      state:                 # State of the package. Note: "latest" added in 2.7

You can read more about this particular module on the following – https://docs.ansible.com/ansible/latest/modules/pkgng_module.html – Ansible page.

We will now add the GlusterFS nodes to the /etc/hosts file and add the autoboot_delay=1 parameter to the /boot/loader.conf file so our systems will boot 9 seconds faster, as 10 seconds is the default delay setting.

Here is our gluster-setup.yml Ansible playbook so far.

vbhost % cat gluster-setup.yml
---
- name: Install and Setup GlusterFS on FreeBSD
  hosts: gluster
  user: root
  tasks:

  - name: Install Latest Python Package
    pkgng:
      name: python
      state: latest

  - name: Enable NTPD Service
    raw: sysrc ntpd_enable=YES

  - name: Start NTPD Service
    service:
      name: ntpd
      state: started

  - name: Install Latest GlusterFS Package
    pkgng:
      state: latest
      name:
      - glusterfs
      - ncdu

  - name: Add Nodes to /etc/hosts File
    blockinfile:
      path: /etc/hosts
      block: |
        10.0.10.11 gluster1
        10.0.10.12 gluster2
        10.0.10.13 gluster3
        10.0.10.14 gluster4
        10.0.10.15 gluster5
        10.0.10.16 gluster6

  - name: Add autoboot_delay to /boot/loader.conf File
    lineinfile:
      path: /boot/loader.conf
      line: autoboot_delay=1
      create: yes

Here is the result of the execution of this playbook.

vbhost % ansible-playbook -i hosts gluster-setup.yml

PLAY [Install and Setup GlusterFS on FreeBSD] **********************************

TASK [Gathering Facts] *********************************************************
ok: [gluster3]
ok: [gluster5]
ok: [gluster1]
ok: [gluster4]
ok: [gluster2]
ok: [gluster6]

TASK [Install Latest Python Package] *******************************************
ok: [gluster4]
ok: [gluster2]
ok: [gluster5]
ok: [gluster3]
ok: [gluster1]
ok: [gluster6]

TASK [Install Latest GlusterFS Package] ****************************************
ok: [gluster2]
ok: [gluster1]
ok: [gluster3]
ok: [gluster5]
ok: [gluster4]
ok: [gluster6]

TASK [Add Nodes to /etc/hosts File] ********************************************
changed: [gluster5]
changed: [gluster4]
changed: [gluster2]
changed: [gluster3]
changed: [gluster1]
changed: [gluster6]

TASK [Enable GlusterFS Service] ************************************************
changed: [gluster1]
changed: [gluster4]
changed: [gluster2]
changed: [gluster3]
changed: [gluster5]
changed: [gluster6]

TASK [Add autoboot_delay to /boot/loader.conf File] ****************************
changed: [gluster3]
changed: [gluster2]
changed: [gluster5]
changed: [gluster1]
changed: [gluster4]
changed: [gluster6]

PLAY RECAP *********************************************************************
gluster1                   : ok=6    changed=3    unreachable=0    failed=0
gluster2                   : ok=6    changed=3    unreachable=0    failed=0
gluster3                   : ok=6    changed=3    unreachable=0    failed=0
gluster4                   : ok=6    changed=3    unreachable=0    failed=0
gluster5                   : ok=6    changed=3    unreachable=0    failed=0
gluster6                   : ok=6    changed=3    unreachable=0    failed=0

Let’s check that the FreeBSD machines can now ping each other by name.

vbhost % ssh gluster6 cat /etc/hosts
# LOOPBACK
127.0.0.1      localhost localhost.my.domain
::1            localhost localhost.my.domain

# BEGIN ANSIBLE MANAGED BLOCK
10.0.10.11 gluster1
10.0.10.12 gluster2
10.0.10.13 gluster3
10.0.10.14 gluster4
10.0.10.15 gluster5
10.0.10.16 gluster6
# END ANSIBLE MANAGED BLOCK

vbhost % ssh gluster1 ping -c 1 gluster3
PING gluster3 (10.0.10.13): 56 data bytes
64 bytes from 10.0.10.13: icmp_seq=0 ttl=64 time=1.924 ms

--- gluster3 ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 1.924/1.924/1.924/0.000 ms

… and our /boot/loader.conf file.

vbhost % ssh gluster4 cat /boot/loader.conf
autoboot_delay=1

Now we need to create directories for the GlusterFS data. For lack of a better idea we will use the /data directory with /data/volume1 as the directory for volume1, and the bricks will be put in /data/volume1/brick1 style directories. In this setup I will use just one brick per server, but in a production environment you would probably use one brick per physical disk.

Here is the playbook task we will use to create these directories on the FreeBSD machines.

  - name: Create brick* Directories for volume1
    raw: mkdir -p /data/volume1/brick$( hostname | grep -o -E '[0-9]+' )
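
The command substitution simply extracts the node number from the hostname, so every node creates its own brick directory. A quick sketch of what it expands to (using gluster2 as a stand-in for the hostname output):

```shell
# Extract the digits from the hostname to derive the brick number;
# HOST stands in for what $(hostname) would return on the node.
HOST=gluster2
NUM=$( echo "$HOST" | grep -o -E '[0-9]+' )
echo "mkdir -p /data/volume1/brick${NUM}"
```

On gluster2 this expands to mkdir -p /data/volume1/brick2, and so on for the other nodes.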

After executing it with the ansible-playbook -i hosts gluster-setup.yml command the directories have been created.

vbhost % ssh gluster2 find /data -ls | column -t
2247168  8  drwxr-xr-x  3  root  wheel  512  Dec  28  17:48  /data
2247169  8  drwxr-xr-x  3  root  wheel  512  Dec  28  17:48  /data/volume1
2247170  8  drwxr-xr-x  2  root  wheel  512  Dec  28  17:48  /data/volume1/brick2

We now need to add glusterd_enable=YES to the /etc/rc.conf file on the GlusterFS nodes and then start the GlusterFS service.

This is the snippet we will add to our playbook.

  - name: Enable GlusterFS Service
    raw: sysrc glusterd_enable=YES

  - name: Start GlusterFS Service
    service:
      name: glusterd
      state: started

Let’s make a quick random verification.

vbhost % ssh gluster4 service glusterd status
glusterd is running as pid 2684.

Now we need to proceed to the last part of the GlusterFS setup – create the volume.

We will do this from the gluster1 – the 1st node of the GlusterFS cluster.

First we need to peer probe other nodes.

gluster1 # gluster peer probe gluster1
peer probe: success. Probe on localhost not needed
gluster1 # gluster peer probe gluster2
peer probe: success.
gluster1 # gluster peer probe gluster3
peer probe: success.
gluster1 # gluster peer probe gluster4
peer probe: success.
gluster1 # gluster peer probe gluster5
peer probe: success.
gluster1 # gluster peer probe gluster6
peer probe: success.

Then we can create the volume. We need to use the force option because for our example setup we use directories on the root partition.

gluster1 # gluster volume create volume1 \
             disperse-data 4 \
             redundancy 2 \
             transport tcp \
             gluster1:/data/volume1/brick1 \
             gluster2:/data/volume1/brick2 \
             gluster3:/data/volume1/brick3 \
             gluster4:/data/volume1/brick4 \
             gluster5:/data/volume1/brick5 \
             gluster6:/data/volume1/brick6 \
             force
volume create: volume1: success: please start the volume to access data

We can now start the volume1 GlusterFS volume.

gluster1 # gluster volume start volume1
volume start: volume1: success

gluster1 # gluster volume status volume1
Status of volume: volume1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster1:/data/volume1/brick1         N/A       N/A        N       N/A
Brick gluster2:/data/volume1/brick2         N/A       N/A        N       N/A
Brick gluster3:/data/volume1/brick3         N/A       N/A        N       N/A
Brick gluster4:/data/volume1/brick4         N/A       N/A        N       N/A
Brick gluster5:/data/volume1/brick5         N/A       N/A        N       N/A
Brick gluster6:/data/volume1/brick6         N/A       N/A        N       N/A
Self-heal Daemon on localhost               N/A       N/A        N       644
Self-heal Daemon on gluster6                N/A       N/A        N       643
Self-heal Daemon on gluster5                N/A       N/A        N       647
Self-heal Daemon on gluster2                N/A       N/A        N       645
Self-heal Daemon on gluster3                N/A       N/A        N       645
Self-heal Daemon on gluster4                N/A       N/A        N       645

Task Status of Volume volume1
------------------------------------------------------------------------------
There are no active volume tasks

gluster1 # gluster volume info volume1

Volume Name: volume1
Type: Disperse
Volume ID: 68cf9607-16bc-4550-9b6b-16a5c7656f51
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: gluster1:/data/volume1/brick1
Brick2: gluster2:/data/volume1/brick2
Brick3: gluster3:/data/volume1/brick3
Brick4: gluster4:/data/volume1/brick4
Brick5: gluster5:/data/volume1/brick5
Brick6: gluster6:/data/volume1/brick6
Options Reconfigured:
nfs.disable: on
transport.address-family: inet

Here are the contents of a currently unused/empty brick.

gluster1 # find /data/volume1/brick1
/data/volume1/brick1
/data/volume1/brick1/.glusterfs
/data/volume1/brick1/.glusterfs/indices
/data/volume1/brick1/.glusterfs/indices/xattrop
/data/volume1/brick1/.glusterfs/indices/entry-changes
/data/volume1/brick1/.glusterfs/quarantine
/data/volume1/brick1/.glusterfs/quarantine/stub-00000000-0000-0000-0000-000000000008
/data/volume1/brick1/.glusterfs/changelogs
/data/volume1/brick1/.glusterfs/changelogs/htime
/data/volume1/brick1/.glusterfs/changelogs/csnap
/data/volume1/brick1/.glusterfs/brick1.db
/data/volume1/brick1/.glusterfs/brick1.db-wal
/data/volume1/brick1/.glusterfs/brick1.db-shm
/data/volume1/brick1/.glusterfs/00
/data/volume1/brick1/.glusterfs/00/00
/data/volume1/brick1/.glusterfs/00/00/00000000-0000-0000-0000-000000000001
/data/volume1/brick1/.glusterfs/landfill
/data/volume1/brick1/.glusterfs/unlink
/data/volume1/brick1/.glusterfs/health_check

The 6-node GlusterFS cluster is now complete and volume1 available to use.

Alternative

The GlusterFS documentation Quick Start Guide also suggests using Ansible to deploy and manage GlusterFS with the gluster-ansible repository or gluster-ansible-cluster, but they have the requirements below.

  • Ansible version 2.5 or above.
  • GlusterFS version 3.2 or above.

As GlusterFS on FreeBSD is at version 3.11.1 I did not use them.

FreeBSD Client

We will now use another VirtualBox machine – also based on the same FreeBSD 12.0-RELEASE image – to create FreeBSD Client machine that will mount our volume1 volume.

We will need to install the glusterfs package with the pkg(8) command. Then we will use the mount_glusterfs command to mount the volume. Keep in mind that in order to mount a GlusterFS volume the FUSE (fuse.ko) kernel module is needed.
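
Note that a manual kldload fuse loads the module only until the next reboot. To have the volume mountable after a reboot, the module can be loaded at boot time; a minimal sketch for /boot/loader.conf (on FreeBSD 12.0 the module is named fuse – newer releases renamed it, so verify the module name on your release):

```
fuse_load="YES"
```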

client # pkg install glusterfs

client # kldload fuse

client # mount_glusterfs 10.0.10.11:volume1 /mnt

client # echo $?
0

client # mount
/dev/gpt/rootfs on / (ufs, local, soft-updates)
devfs on /dev (devfs, local, multilabel)
/dev/fuse on /mnt (fusefs, local, synchronous)

client # ls /mnt
ls: /mnt: Socket is not connected

It is mounted but does not work. The solution to this problem is to add appropriate /etc/hosts entries for the GlusterFS nodes.

client # cat /etc/hosts
::1                     localhost localhost.my.domain
127.0.0.1               localhost localhost.my.domain

10.0.10.11 gluster1
10.0.10.12 gluster2
10.0.10.13 gluster3
10.0.10.14 gluster4
10.0.10.15 gluster5
10.0.10.16 gluster6
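
If the client machine were also managed with Ansible, the same blockinfile task from our gluster-setup.yml playbook could be reused for it, for example in a hypothetical client play:

```
- name: Setup GlusterFS Client on FreeBSD
  hosts: client
  user: root
  tasks:

  - name: Add Nodes to /etc/hosts File
    blockinfile:
      path: /etc/hosts
      block: |
        10.0.10.11 gluster1
        10.0.10.12 gluster2
        10.0.10.13 gluster3
        10.0.10.14 gluster4
        10.0.10.15 gluster5
        10.0.10.16 gluster6
```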

Let's mount it again, now with the needed /etc/hosts entries in place.

client # umount /mnt

client # mount_glusterfs gluster1:volume1 /mnt

client # ls /mnt
client #

We now have our GlusterFS volume properly mounted and working on the FreeBSD Client machine.

Let's write some file there with dd(8) to see how it works.

client # dd < /dev/zero > FILE bs=1m count=100 status=progress
  73400320 bytes (73 MB, 70 MiB) transferred 1.016s, 72 MB/s
100+0 records in
100+0 records out
104857600 bytes transferred in 1.565618 secs (66975227 bytes/sec)

Let’s see how it looks in the brick directory.

gluster1 # ls -lh /data/volume1/brick1
total 25640
drw-------  10 root  wheel   512B Jan  3 18:31 .glusterfs
-rw-r--r--   2 root  wheel    25M Jan  3 18:31 FILE

gluster1 # find /data
/data/
/data/volume1
/data/volume1/brick1
/data/volume1/brick1/.glusterfs
/data/volume1/brick1/.glusterfs/indices
/data/volume1/brick1/.glusterfs/indices/xattrop
/data/volume1/brick1/.glusterfs/indices/xattrop/xattrop-aed814f1-0eb0-46a1-b569-aeddf5048e06
/data/volume1/brick1/.glusterfs/indices/entry-changes
/data/volume1/brick1/.glusterfs/quarantine
/data/volume1/brick1/.glusterfs/quarantine/stub-00000000-0000-0000-0000-000000000008
/data/volume1/brick1/.glusterfs/changelogs
/data/volume1/brick1/.glusterfs/changelogs/htime
/data/volume1/brick1/.glusterfs/changelogs/csnap
/data/volume1/brick1/.glusterfs/brick1.db
/data/volume1/brick1/.glusterfs/brick1.db-wal
/data/volume1/brick1/.glusterfs/brick1.db-shm
/data/volume1/brick1/.glusterfs/00
/data/volume1/brick1/.glusterfs/00/00
/data/volume1/brick1/.glusterfs/00/00/00000000-0000-0000-0000-000000000001
/data/volume1/brick1/.glusterfs/landfill
/data/volume1/brick1/.glusterfs/unlink
/data/volume1/brick1/.glusterfs/health_check
/data/volume1/brick1/.glusterfs/ac
/data/volume1/brick1/.glusterfs/ac/b4
/data/volume1/brick1/.glusterfs/11
/data/volume1/brick1/.glusterfs/11/50
/data/volume1/brick1/.glusterfs/11/50/115043ca-420f-48b5-af05-c9552db2e585
/data/volume1/brick1/FILE

Linux Client

I will also show how to mount GlusterFS volume on the Red Hat clone CentOS in its latest 7.6 incarnation. It will require glusterfs-fuse package installation.

[root@localhost ~]# yum install glusterfs-fuse


[root@localhost ~]# rpm -q --filesbypkg glusterfs-fuse | grep /sbin/mount.glusterfs
glusterfs-fuse            /sbin/mount.glusterfs

[root@localhost ~]# mount.glusterfs 10.0.10.11:volume1 /mnt
Mount failed. Please check the log file for more details.

Similarly like with FreeBSD Client the /etc/hosts entries are needed.

[root@localhost ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

10.0.10.11 gluster1
10.0.10.12 gluster2
10.0.10.13 gluster3
10.0.10.14 gluster4
10.0.10.15 gluster5
10.0.10.16 gluster6

[root@localhost ~]# mount.glusterfs 10.0.10.11:volume1 /mnt

[root@localhost ~]# ls /mnt
FILE

[root@localhost ~]# mount
10.0.10.11:volume1 on /mnt type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

With appropriate /etc/hosts entries it works as desired. We see the FILE file generated from the FreeBSD Client machine.

GlusterFS Cluster Redundancy

After messing with the volume, creating and deleting various files, I also tested its redundancy. In theory this RAID6-equivalent protection should protect us from the loss of two of the six servers. After a shutdown of two VirtualBox machines the volume was still available and ready to use.

Closing Thoughts

It is a pity that FreeBSD does not provide a more modern GlusterFS package, as currently only version 3.11.1 is available.

EOF

Valuable News – 2018/09/15

The Valuable News weekly series is dedicated to providing a summary of news, articles and other interesting stuff, mostly but not always related to UNIX or BSD systems. Whenever I stumble upon something worth mentioning on the Internet I just put it here so someone else can benefit from it.

Today the amount of information that we get through various information streams is at a massive overload. Thus one needs to focus only on what is important without the need to grep(1) the Internet every day. Hence the idea of providing such an information ‘bulk’, as I already do that grep(1).

UNIX

In Other BSDs for 2018/09/08.
https://www.dragonflydigest.com/2018/09/08/21755.html

Including optimized out kernel symbols in dtrace on FreeBSD.
https://farhan.codes/2018/08/16/including-optimized-out-kernel-symbols-in-dtrace-on-freebsd/

Lazy Reading for 2018/09/09.
https://www.dragonflydigest.com/2018/09/09/21750.html

NIH Package manager for pkgsrc.
http://pkgsrc.se/pkgtools/nih

OpenBSD on the Desktop: some thoughts.
https://blog.gsora.xyz/openbsd-on-the-desktop-some-thoughts/

Ansible modules for OpenBSD vmm(4).
https://github.com/jasperla/ansible-vmm

OpenBSD vmm(4) got qcow2 image support.
https://marc.info/?l=openbsd-cvs&m=153646618113084&w=2

FreeBSD bootonly ISO failures PXE booting under UEFI.

Dosbox-X is a fork of DOSBox with the goal of accurate emulation for Windows 95/98/ME systems.
http://dosbox-x.com/

vm-bhyve with NAT on FreeBSD.
https://www.foxk.it/blog/2018-09-08_vm-bhyve/

FreeNAS 11.2-BETA3 Available.
https://www.ixsystems.com/blog/library/freenas-11-2-beta3/

Nextcloud in FreeBSD Ports updated to 14.0.
https://svnweb.freebsd.org/ports?view=revision&revision=479446

New FreeNAS 11.2 Beta 3 Interface.
https://twitter.com/TomLawrenceTech/status/1039623935170682880

FreeBSD PKGng plugin for querying which package provides a particular file.
https://github.com/rosorio/pkg-provides

FreeBSD SA 18:12.elf.
https://www.freebsd.org/security/advisories/FreeBSD-SA-18:12.elf.asc

FreeBSD EN 18:08.lazyfpu.
https://www.freebsd.org/security/advisories/FreeBSD-EN-18:08.lazyfpu.asc

FreeBSD 12.x suspend/resume works on ThinkPad X1 Carbon 6th Generation after BIOS 1.30 update and sleep state set to ‘Linux’.
https://twitter.com/lwhsu/status/1039711710913945601

FreeBSD 12.x amd64 has NUMA enabled for GENERIC/MINIMAL kernels.
https://lists.freebsd.org/pipermail/freebsd-current/2018-September/071168.html
https://svnweb.freebsd.org/base?view=revision&revision=338602

My FreeBSD packages are out of date, how do I fix that?
https://vincerants.com/my+freebsd+packages+are+out+of+date%2c+how+do+i+fix+that%3f

FreeBSD 12.0-ALPHA4 on Pinebook.
https://twitter.com/manuvadot/status/1039943151383977985

BSD Now 263 – Encrypt That Pool.
https://www.jupiterbroadcasting.com/127081/encrypt-that-pool-bsd-now-263/

RFC8446 compliant OpenSSL 1.1.1 landed in FreeBSD 12.0.
https://twitter.com/mzar_y/status/1040502530072096768

Today in History. Brought to You by UNIX.
https://akr.am/blog/posts/today-in-history-brought-to-you-by-unix

Remote Code Execution in Alpine Linux.
https://justi.cz/security/2018/09/13/alpine-apk-rce.html

There are 90% Linux and 9% BSD Systems in the TOR RELAYS.
https://www.torbsd.org/oostats/relays-bw-by-os.html

HardenedBSD 1100056.5 Available.
https://hardenedbsd.org/article/op/2018-09-14/stable-release-hardenedbsd-stable-11-stable-v11000565

Wine Development Release 3.16 Available.
https://www.winehq.org/announce/3.16

FreeBSD 12.0-ALPHA6 Available.
http://ftp.freebsd.org/pub/FreeBSD/snapshots/ISO-IMAGES/12.0/

Hardware

Another Intel ME Vulnerability.
https://twitter.com/h0t_max/status/1039597102693851136
https://www.intel.com/content/www/us/en/security-center/advisory/intel-sa-00125.html

Intel Power Management Controller (PMC) Security Advisory.
https://www.intel.com/content/www/us/en/security-center/advisory/intel-sa-00131.html

Seagate BarraCuda Pro 14TB HDD Review – Massive Storage for Desktops.
https://www.anandtech.com/show/13340/seagate-barracuda-pro-14tb-hdd-review/

The rv8 is a RISC-V (privileged specification 1.9.1) emulator and x86-64 binary translator.
https://rv8.io/
https://github.com/rv8-io/rv8

Custom 51nb-made ThinkPad X210 with Kaby Lake quad-core i5-8250, 1920×1200 IPS screen and 16 GB RAM.
https://twitter.com/jcs/status/1039624550168842241
https://www.51nb.com/mb/

AMD EPYC 32 Core 7551P Benchmarks and Review.
https://twitter.com/ServeTheHome/status/1039656584496873473
https://www.servethehome.com/amd-epyc-7551p-benchmarks-and-review-a-32-core-value-monster/

How Intel Xeon Platinum is Marketing Headache versus AMD EPYC Prices.
https://www.servethehome.com/how-the-intel-xeon-platinum-is-a-marketing-headache/

Lenovo’s latest Yoga laptop with a new ARM CPU and 25-hour battery life.
https://www.theverge.com/circuitbreaker/2018/8/30/17799382/lenovo-yoga-c630-wos-features-specs-price-release-date-battery-life-ifa-2018

34C3: Hacking into a CPU Microcode.

Analysis of Data Corruption in the Storage Stack.
This is why checksumming and self-healing of data in ZFS is so important.
https://www.usenix.org/legacy/events/fast08/tech/full_papers/bairavasundaram/bairavasundaram_html/index.html

The x86 finds its way into your iPhone.
https://lcq2.github.io/x86_iphone/

Life

So We Got Tracked Anyway.
https://youbroketheinternet.org/trackedanyway

Real Cost of the 2008 Financial Crisis.
https://www.newyorker.com/magazine/2018/09/17/the-real-cost-of-the-2008-financial-crisisz

Other

Firefox 62.0 Released.
https://www.mozilla.org/en-US/firefox/62.0/releasenotes/

What is First-Party Isolation in Firefox and what breaks if you enable it.
https://www.ctrl.blog/entry/firefox-fpi

Firefox is now built with clang LTO on all platforms.
https://glandium.org/blog/?p=3888

Good books for deep hacks.
https://begriffs.com/posts/2017-04-13-longterm-computing-reading.html

EOF

New ZFS Boot Environments Tool

About a month ago I was honored to give a talk about ZFS Boot Environments at PBUG. At the end of the presentation I mentioned the history of tools to manage ZFS Boot Environments on FreeBSD.

zfs-boot-environments-history.png

Pawel Jakub Dawidek – who was also at this PBUG #3 meeting – suggested that I should try to get beadm added to the FreeBSD base system. I also heard that idea from many beadm users who repeatedly asked why beadm is not in the FreeBSD base system. So after PBUG #3 finished that is exactly what I did. I created a new PR – Bug 230323 – Idea/Feature Request – include beadm in the base – and to my (positive) surprise they included a new bectl tool in the FreeBSD base! We now have a new member of the ZFS Boot Environment tools family – the bectl tool.

I will of course maintain and update the beadm tool and it will still be available in the FreeBSD Ports under the sysutils/beadm category, as having such a tool written in POSIX /bin/sh allows fast debugging and easy changes. In short (TL;DR) the bectl tool is beadm implemented in the C language, and it has just been imported into the FreeBSD base, which means that it will be part of FreeBSD 12.0-RELEASE. Currently bectl is already available in the 12.0-ALPHA2 image.

Comparison

The new bectl tool is at a very early stage and does not (yet) offer a full replacement for the beadm tool. Here is a quick comparison of the usage information between the bectl and beadm tools.

root@fbsd12:~ # beadm
usage:
  beadm activate <beName>
  beadm create [-e nonActiveBe | -e beName@snapshot] <beName>
  beadm create <beName@snapshot>
  beadm destroy [-F] <beName | beName@snapshot>
  beadm list [-a] [-s] [-D] [-H]
  beadm rename <origBeName> <newBeName>
  beadm mount <beName> [mountpoint]
  beadm { umount | unmount } [-f] <beName>
  beadm version

… and new bectl tool.

root@fbsd12:~ # bectl
missing command
usage:  bectl ( -h | -? | subcommand [args...] )
        bectl activate [-t] beName
        bectl create [-e nonActiveBe | -e beName@snapshot] beName
        bectl create beName@snapshot
        bectl destroy [-F] ⟨beName | beName@snapshot⟩
        bectl export sourceBe
        bectl import targetBe
        bectl jail [ -o key=value | -u key ]... bootenv
        bectl list [-a] [-D] [-H] [-s]
        bectl mount beName [mountpoint]
        bectl rename origBeName newBeName
        bectl { ujail | unjail } ⟨jailID | jailName | bootenv⟩
        bectl { umount | unmount } [-f] beName

For example bectl is not able to rename the currently used/mounted boot environment while beadm can.

root@fbsd12:~ # bectl rename safe new
boot environment is already mounted
failed to rename bootenv safe to new

It is possible to rename such a ZFS dataset mounted as / with the zfs rename -u ... command (this is exactly what beadm does under the hood) as a workaround for the bectl tool.

root@fbsd12:~ # bectl list
BE      Active Mountpoint Space Created
safe    NR     /          188K  2018-08-18 02:32
default -      -          427M  2018-08-18 02:26

root@fbsd12:~ # zfs list | grep safe
zroot/ROOT/safe      108K  6.85G   427M  /

root@fbsd12:~ # zfs rename -u zroot/ROOT/safe zroot/ROOT/new

It is then listed as usual under the new name in bectl as shown below:

root@fbsd12:~ # bectl list
BE      Active Mountpoint Space Created
new     NR     /          188K  2018-08-18 02:32
default -      -          427M  2018-08-18 02:26

One nice addition that bectl has and beadm lacks is dynamic FreeBSD Jail creation in a specified boot environment.

Here is bectl FreeBSD Jail creation in action.

root@fbsd12:~ # bectl list
BE      Active Mountpoint Space Created
new     NR     /          188K  2018-08-18 02:32
default -      -          427M  2018-08-18 02:26

root@fbsd12:~ # bectl jail default
# pwd
/
# ls /
.cshrc          bin             entropy         libexec         net             root            usr
.profile        boot            etc             media           proc            sbin            var
COPYRIGHT       dev             lib             mnt             rescue          tmp             zroot
# exit
root@fbsd12:~ # jls
   JID  IP Address      Hostname                      Path
     1                  default                       /tmp/be_mount.OnRc

root@fbsd12:~ # mount | grep default
zroot/ROOT/default on /tmp/be_mount.OnRc (zfs, local, noatime, nfsv4acls)

root@fbsd12:~ # bectl unjail default

root@fbsd12:~ # jls
   JID  IP Address      Hostname                      Path

If you move/migrate from beadm to bectl you will also have to be more careful, as bectl does not ask questions 🙂

For example, the beadm tool asks if you are sure that you want to destroy the specified boot environment. The bectl tool will just remove it without even writing anything on the screen.

root@fbsd12:~ # bectl list
BE      Active Mountpoint Space Created
new     NR     /          188K  2018-08-18 02:32
default -      -          427M  2018-08-18 02:26

root@fbsd12:~ # beadm destroy safe
Are you sure you want to destroy 'safe'?
This action cannot be undone (y/[n]): n

root@fbsd12:~ # bectl destroy safe

root@fbsd12:~ # bectl list
BE      Active Mountpoint Space Created
new     NR     /          188K  2018-08-18 02:32
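
If you miss the confirmation prompt, it is easy to wrap bectl in a small POSIX sh function; a minimal sketch, assuming you call it instead of bectl directly (the bedestroy name is my own, not part of bectl):

```shell
# bedestroy: hypothetical wrapper restoring a beadm-style confirmation
# prompt before handing off to 'bectl destroy'.
bedestroy() {
  be="$1"
  if [ -z "${be}" ]; then
    echo "usage: bedestroy beName" >&2
    return 1
  fi
  printf "Are you sure you want to destroy '%s'?\n" "${be}"
  printf "This action cannot be undone (y/[n]): "
  read -r answer
  case "${answer}" in
    y|Y) bectl destroy "${be}" ;;
    *)   echo "Not destroying '${be}'." ;;
  esac
}
```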

One of the things that bectl also lacks is an Ansible plugin. The beadm tool is supported by an Ansible plugin, so if you prefer that configuration management tool, then bectl will ‘backport’ you to the raw Ansible command module 🙂
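
Until a dedicated bectl module shows up, the generic Ansible command module can stand in; a minimal sketch, with a boot environment name of my own choosing, and the 'already exists' match is an assumption about bectl's error text that you should verify against your version:

```yaml
# Illustrative task only: create a boot environment with bectl through
# the generic command module (there is no dedicated bectl module).
- name: Create pre-upgrade boot environment
  command: bectl create pre-upgrade
  register: bectl_create
  changed_when: bectl_create.rc == 0
  failed_when:
    - bectl_create.rc != 0
    - "'already exists' not in bectl_create.stderr"
```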

The good news is that beadm and bectl can work together on the same host, so you do not have to choose. You may still use the beadm tool for daily tasks (or for the Ansible module) and bectl for the jail/unjail options, for example.

But I think that in time bectl will gain the needed features, and having such a tool in the FreeBSD base system is a welcome addition.

UPDATE 1

The New ZFS Boot Environments Tool article was featured in the BSD Now 262 – OpenBSD Surfacing episode.

Thanks for mentioning!

UPDATE 2

Finally I had time to check the new bectl command again in the newer FreeBSD-12.0-ALPHA6 release for possible improvements.

Now bectl does not display 'missing command' when invoked without arguments.

It is now possible to rename the currently used Boot Environment with the bectl command.

The last thing I noticed is that the bectl jail command no longer leaves an enabled/running Jail behind after you exit from it; cosmetic but important.

… and last but not least, the easiest path of migration is to create simple alias.

# alias beadm=bectl

… or for (T)CSH shell.

# alias beadm bectl
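
If a bare alias feels too blunt (early bectl handled a few things differently than beadm), a small dispatch function can warn first; a sketch under the assumption that it shadows beadm in your shell, with wording of my own:

```shell
# beadm: hypothetical compatibility shim that forwards everything to
# bectl but warns about subcommands early bectl handled differently.
beadm() {
  case "$1" in
    destroy)
      echo "note: bectl destroy does not ask for confirmation" >&2 ;;
    rename)
      echo "note: early bectl cannot rename the active boot environment" >&2 ;;
  esac
  bectl "$@"
}
```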
EOF

Valuable News – 2018/08/18

UNIX

The docker-machine driver plugin for vmm OpenBSD hypervisor.
https://github.com/WIZARDISHUNGRY/docker-machine-driver-openbsd

MacOS 6 in the browser.
https://archive.org/details/mac_MacOS_6.0.8

Serviio DLNA Media Server on OpenBSD.
https://www.geoghegan.ca/serviio.html

NomadBSD 1.1 featured on DistroWatch Weekly.
https://distrowatch.com/weekly.php?issue=20180813#nomadbsd

NSD 4.1.24 Released.
https://www.nlnetlabs.nl/news/2018/Aug/13/nsd-4.1.24-released/

FreeBSD 12.0-RELEASE is currently blocked by 91 open bugs.
https://bugs.freebsd.org/bugzilla/showdependencytree.cgi?id=228911&hide_resolved=1

FreeBSD bug affecting Chrome stability has been fixed.
https://twitter.com/ed_maste/status/1029362715838636032

FreeBSD fix for Foreshadow/L1TF CPU vulnerability (CVE-2018-3620).
https://svnweb.freebsd.org/base?view=revision&revision=337774

Improve hashing of IPv4 fragments on FreeBSD.
https://svnweb.freebsd.org/base?view=revision&revision=337775

Automatic OpenBSD solution for email/web/DNS/XMPP/ZNC.
https://github.com/cullum/dank-selfhosted

FreeNAS iSCSI MPIO Configuration.
https://www.virtualizationhowto.com/2018/08/freenas-iscsi-configuration-for-mpio/

Hyper-V Cluster Installation and Configuration of MPIO iSCSI from FreeNAS.
https://www.virtualizationhowto.com/2018/08/hyper-v-cluster-mpio-iscsi-installation-and-configuration/

FreeBSD implemented kernel support for early loading of Intel microcode updates.
https://svnweb.freebsd.org/base?view=revision&revision=337715

OPNsense 18.7.1 Released.
https://forum.opnsense.org/index.php?topic=9424.0

Your own VPN with OpenIKED and OpenBSD.
https://medium.com/@cmacrae/your-own-vpn-with-openiked-openbsd-13d7abd3d1d4

Theo de Raadt on the latest Intel security issues.
https://www.undeadly.org/cgi?action=article;sid=20180815070400

FreeBSD lockless algorithm – seq.
http://oshogbo.vexillium.org/blog/50

The libbe library and bectl tool (beadm rewritten in C) landed in FreeBSD base.
https://svnweb.freebsd.org/base/head/sbin/bectl/bectl.c
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=230323

Final cover of the long awaited 3rd Edition of the Absolute FreeBSD book.
https://mwl.io/nonfiction/os#af3e

Amazon Kindle edition of the MidnightBSD: An Introduction book is available.
https://twitter.com/nixcraft/status/1030518022367264768
https://www.amazon.com/MidnightBSD-Introduction-Lucas-Holt-ebook/dp/B07G6V2B6M

Ansible dynamic inventory provider for FreeBSD Jails.
https://github.com/pageflt/ansible-jailventory

FreeBSD 12.0-ALPHA2 is ready to download. And yes it does come in black with bectl.

http://ftp.freebsd.org/pub/FreeBSD/snapshots/ISO-IMAGES/12.0/

Hardware

AMD Threadripper 2990WX 32-Core and 2950X 16-Core Review.
https://www.anandtech.com/show/13124/the-amd-threadripper-2990wx-and-2950x-review/

AMD Threadripper 2990WX Linux Benchmarks – 32-Core Beast.
https://www.phoronix.com/scan.php?page=article&item=amd-linux-2990wx

Foreshadow L1 Terminal Fault (L1TF) is an attack on Intel SGX which allows an attacker to steal information.
AMD on Tuesday said it was not impacted by the new flaws disclosed by Intel.
https://foreshadowattack.eu/
https://www.intel.com/content/www/us/en/architecture-and-technology/l1tf.html
https://www.intel.com/content/www/us/en/security-center/advisory/intel-sa-00161.html
https://www.theregister.co.uk/2018/08/14/intel_l1_terminal_fault_bugs/

Single Board Computer Database.
https://www.board-db.org/

Replacing Intel or x86 chips for security reasons. Options. (2013)
https://pastebin.com/raw/0mQHNbRK

AMD Threadripper 2990WX Much Faster on Linux than on Windows 10.
https://www.forbes.com/sites/jasonevangelho/2018/08/15/want-faster-performance-with-amd-threadripper-2-use-linux/#4e1bf34539c9

Gigabyte Launches New Cavium ThunderX2 ARM64 Servers.
https://www.servethehome.com/gigabyte-launches-new-cavium-thunderx2-servers/
https://www.anandtech.com/show/13234/gigabyte-starts-sales-of-cavium-thunderx2-to-general-customers

ARM Announces Client CPU Roadmap – Accelerating Mobile and Laptop Performance.
https://www.arm.com/company/news/2018/08/accelerating-mobile-and-laptop-performance
https://www.anandtech.com/show/13226/arm-unveils-client-cpu-performance-roadmap
https://liliputing.com/2018/08/arm-takes-aim-at-laptops-with-upcoming-deimos-hercules-processors.html

IBM POWER9 Talos II vs. X86 (Intel/AMD) Linux Benchmarks.
… along with great correction/analysis of results on POWER9 CPU …
Improving performance of Phoronix benchmarks on POWER9.
https://www.phoronix.com/scan.php?page=article&item=power9-talos-2
https://sthbrx.github.io/blog/2018/08/15/improving-performance-of-phoronix-benchmarks-on-power9/

Solo is first open source security key supporting FIDO2.
https://solokeys.com/

Life

Why Small Teams Win and Bigger Ones Fail.
https://uxplanet.org/why-small-teams-win-and-bigger-ones-fail-304cb9e43a42

How the shared family computer protected us from our worst selves.
https://www.theverge.com/2018/8/9/17661466/shared-family-computer

People Start Hating Their Jobs at Age 35.
https://www.bloomberg.com/news/articles/2017-08-21/people-start-hating-their-jobs-at-age-35?m

Books that explain (parts of) how the world really works.
https://apenwarr.ca/log/?m=201807#24

Other

Make your Firefox browser a privacy superpower with these extensions.
https://blog.mozilla.org/firefox/make-your-firefox-browser-a-privacy-superpower-with-these-extensions/

MATE Desktop GTK2 vs GTK3 memory consumption.
https://ubuntu-mate.org/blog/mate-desktop-gtk2-vs-gtk3-memory-consumption/

EOF
