Automated Kickstart Install of RHEL/Clones

There are two approaches to automating the installation of multiple operating system instances. You can either maintain up-to-date templates for them or you can have automated/scripted installations. In this article I will share how to generate an ISO image and a Kickstart configuration to install Red Hat Enterprise Linux (and its clones such as Alma/Rocky/CentOS/Scientific/…) in an easy and fast way. For the process I will use RHEL version 8.5.

Here is the Table of Contents for this article.

  • Logo
  • Possibilities
  • Environment
    • FreeBSD – www
    • RHEL Client – kickme
  • Validate
  • Generation
    • kickstart.config
    • kickstart.skel
    • kickstart.sh
    • ISO
  • Result
  • Alternatives
  • Summary

Logo

Shortly after the IBM acquisition Red Hat started to use a kinda boring ‘just a red hat’ logo – but its earlier logo – shown below – was more interesting.

rhel-logo

If you stare long enough you will see two dinosaurs – a Tyrannosaurus (red) punching a Triceratops (white) in the head. Once you see it you will not be able to unsee it πŸ™‚

Possibilities

There are many ways to do an automated Kickstart installation. You can use NFS/HTTP/FTP/HTTPS or your own generated DVD media … or use the ‘stock’ DVD with a Kickstart config available somewhere on the network.

I will use the following method which I currently find best suited to my needs:

  • FreeBSD host with NGINX serving the RHEL 8.5 DVD contents (rhel-8.5-x86_64-dvd.iso) over HTTP.
  • Generate a small (less than 1 MB in size) ISO with the Kickstart config on it.
  • Boot from the small rhel-8.5-x86_64-boot.iso ISO together with the generated Kickstart ISO.

Environment

I will use VirtualBox for this demo with a NAT Network configuration for the virtual machines' network adapters. The nat0 VirtualBox network is defined as 10.0.10.0/24 and I use Port Forwarding to access these machines from the FreeBSD host system.
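
For reference – such a NAT network with Port Forwarding rules can also be created from the command line with VBoxManage instead of clicking through the GUI. A sketch – the host ports in the rules below are examples, pick whatever suits you:

% VBoxManage natnetwork add --netname nat0 --network "10.0.10.0/24" --enable
% VBoxManage natnetwork modify --netname nat0 \
    --port-forward-4 "www-ssh:tcp:[]:2210:[10.0.10.210]:22"    # example host port
% VBoxManage natnetwork modify --netname nat0 \
    --port-forward-4 "kickme-ssh:tcp:[]:2199:[10.0.10.199]:22" # example host port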

Machines:

  • 10.0.10.210 - www – FreeBSD system with NGINX to serve the RHEL 8.5 DVD contents
  • 10.0.10.199 - kickme – RHEL machine that will be installed with the automated Kickstart install

FreeBSD – www

Below you will find the FreeBSD machine configuration as seen on VirtualBox.

vm-www

It is a default FreeBSD ZFS install on a single disk. Nothing fancy here to be honest. Below you will find its configuration from the /etc/rc.conf file. I also installed the nginx package but the only thing I did with NGINX was to enable it to start automatically. I used the default stock config that serves files from the /usr/local/www/nginx directory. I later copied the RHEL 8.5 DVD contents to the /usr/local/www/nginx/rhel-8.5 directory. It takes about 10 GB.

www # cat /etc/rc.conf
hostname=www
ifconfig_em0="inet 10.0.10.210 netmask 255.255.255.0"
defaultrouter=10.0.10.1
sshd_enable=YES
nginx_enable=YES
zfs_enable=YES
dumpdev=AUTO
sendmail_enable=NO
sendmail_submit_enable=NO
sendmail_outbound_enable=NO
sendmail_msp_queue_enable=NO
update_motd=NO

Here is the unmodified NGINX config but with comments and empty lines stripped.

www # grep -v '#' /usr/local/etc/nginx/nginx.conf | grep '^[^#]'
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;
    server {
        listen       80;
        server_name  localhost;
        location / {
            root   /usr/local/www/nginx;
            index  index.html index.htm;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   /usr/local/www/nginx-dist;
        }
    }
}
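
By the way – to get the DVD contents onto the FreeBSD host you can attach the ISO as a memory disk with mdconfig(8) and copy the files over. A short sketch – the ISO path below is an example:

www # mdconfig -a -t vnode -f /path/to/rhel-8.5-x86_64-dvd.iso
md0
www # mount_cd9660 /dev/md0 /mnt
www # mkdir -p /usr/local/www/nginx/rhel-8.5
www # cp -a /mnt/ /usr/local/www/nginx/rhel-8.5/
www # umount /mnt
www # mdconfig -d -u md0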

Here are the contents of the /usr/local/www/nginx/rhel-8.5 directory after copying the RHEL 8.5 DVD contents there.

www # ls -l /usr/local/www/nginx/rhel-8.5/
total 71
-r--r--r--  1 root  wheel     60 Apr  6 21:34 .discinfo
-r--r--r--  1 root  wheel   1560 Apr  6 21:34 .treeinfo
dr-xr-xr-x  4 root  wheel      4 Apr  6 21:50 AppStream
dr-xr-xr-x  4 root  wheel      4 Apr  6 21:53 BaseOS
dr-xr-xr-x  3 root  wheel      3 Apr  6 21:53 EFI
-r--r--r--  1 root  wheel   8154 Apr  6 21:53 EULA
-r--r--r--  1 root  wheel  18092 Apr  6 21:53 GPL
-r--r--r--  1 root  wheel   1669 Apr  6 21:53 RPM-GPG-KEY-redhat-beta
-r--r--r--  1 root  wheel   5135 Apr  6 21:53 RPM-GPG-KEY-redhat-release
-r--r--r--  1 root  wheel   1796 Apr  6 21:53 TRANS.TBL
-r--r--r--  1 root  wheel   1455 Apr  6 21:53 extra_files.json
dr-xr-xr-x  3 root  wheel      6 Apr  6 21:54 images
dr-xr-xr-x  2 root  wheel     16 Apr  6 21:54 isolinux
-r--r--r--  1 root  wheel    103 Apr  6 21:54 media.repo

RHEL Client – kickme

Below you will find the RHEL machine that will be used for the automated Kickstart installation – also as seen on VirtualBox.

vm-kickme

To make this example more interesting (and more corporate) I added a second NIC for the backup network – so we will also have to generate an additional route for it in the Kickstart config.

The kickme RHEL machine needs to have two (2) CD-ROM drives. In the PRIMARY one (the one to boot from) we will load the rhel-8.5-x86_64-boot.iso ISO file. In the SECONDARY one we will load our generated kickme.oemdrv.iso ISO file containing the Kickstart config file.
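
Attaching both ISO files can also be done with VBoxManage – a sketch, assuming the kickme VM uses a storage controller named IDE:

% VBoxManage storageattach kickme --storagectl IDE \
    --port 0 --device 0 --type dvddrive --medium rhel-8.5-x86_64-boot.iso
% VBoxManage storageattach kickme --storagectl IDE \
    --port 1 --device 0 --type dvddrive --medium iso/kickme.oemdrv.iso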

Validate

There also exists the pykickstart package which offers the ksvalidator tool. Theoretically it allows you to make sure that your Kickstart config has proper syntax and that it would work – but only in theory. Here is what the RHEL documentation states about its accuracy.

rhel-ksvalidator

It means that you will not know if your Kickstart config will work until you really try it – thus I did not care that much about this tool; it is not available on FreeBSD anyway.
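
For completeness – on a RHEL (or clone) box the check itself is trivial. Just keep the accuracy warning above in mind:

# dnf install -y pykickstart
# ksvalidator files/kickme.cfg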

Generation

Now to the most important part – Kickstart generation and ISO generation. Probably the easiest way to create a new Kickstart config is to install a new RHEL system 'by hand' in a virtual machine and then take the /root/anaconda-ks.cfg file generated by Anaconda as a starting point. This is what I did as well.

When you would like to install the next host you would have to edit the hostname and IP addresses in that Kickstart file – which is not very convenient to say the least. To make that less of a PITA I created a kickstart.sh script that reads values from the kickstart.config file and substitutes them into the kickstart.skel template that serves as the base for our future installations. The kickstart.sh script also keeps a copy of the generated Kickstart file and of the used Kickstart config under that hostname – as a backup, for future reference, or for example for documentation purposes.

kickstart.config

Here is what such a kickstart.config file looks like.

# cat kickstart.config
  SYSTEM_NAME=kickme
  REPO_SERVER_IP=10.0.10.210
  INTERFACE1=enp0s3
  IP_ADDRESS1=10.0.10.199
  NETMASK1=255.255.255.0
  INTERFACE2=enp0s8
  IP_ADDRESS2=10.0.90.199
  NETMASK2=255.255.255.0
  GATEWAY=10.0.10.1
  NAMESERVER1=1.1.1.1
  NAMESERVER2=9.9.9.9
  NTP1=132.163.97.6
  NTP2=216.239.35.0
  ROUTE_NET=10.0.40.10/24
  ROUTE_VIA=10.0.20.1

kickstart.skel

The kickstart.skel file is a little longer – this is our skeleton for the Kickstart configs.

# cat kickstart.skel
#version=RHEL8

# USE sda DISK
ignoredisk --only-use=sda

# CLEAR DISK PARTITIONS BEFORE INSTALL
clearpart --all --initlabel

# USE text INSTALL
text

# USE ONLINE INSTALLATION MEDIA
url --url=http://REPO_SERVER_IP/rhel-8.5/BaseOS --noverifyssl

# KEYBOARD LAYOUTS
keyboard --vckeymap=us --xlayouts='us','pl'

# LANGUAGE
lang en_US.UTF-8

# NETWORK INFORMATION
network --bootproto=static --device=INTERFACE1 --ip=IP_ADDRESS1 --netmask=NETMASK1 --gateway=GATEWAY --nameserver=NAMESERVER1,NAMESERVER2 --noipv6 --activate
network --bootproto=static --device=INTERFACE2 --ip=IP_ADDRESS2 --netmask=NETMASK2 --noipv6 --activate --onboot=on
network --hostname=SYSTEM_NAME

# REPOS
repo --name="AppStream" --baseurl=http://REPO_SERVER_IP/rhel-8.5/AppStream

# ROOT PASSWORD
rootpw --plaintext asd

# DISABLE Setup Agent ON FIRST BOOT
firstboot --disable

# DISABLE SELinux AND FIREWALL
selinux --disabled
firewall --disabled

# OMIT X11
skipx

# REBOOT AND EJECT BOOT MEDIUM
reboot --eject

# TIMEZONE
timezone Europe/Warsaw --isUtc --nontp

# PARTITIONS
part   /boot/efi --fstype="efi"   --size=600  --ondisk=sda --label=EFI  --fsoptions="umask=0077,shortname=winnt"
part   /boot     --fstype="xfs"   --size=1024 --ondisk=sda --label=BOOT --fsoptions="rw,noatime,nodiratime"
part   pv.475    --fstype="lvmpv" --size=1    --ondisk=sda --grow

# LVM
volgroup rootvg --pesize=4096 pv.475
logvol /         --fstype="xfs"   --size=1024 --name=root --label="ROOT" --vgname=rootvg --fsoptions="rw,noatime,nodiratime"
logvol /usr      --fstype="xfs"   --size=5120 --name=usr  --label="USR"  --vgname=rootvg --fsoptions="rw,noatime,nodiratime"
logvol /var      --fstype="xfs"   --size=3072 --name=var  --label="VAR"  --vgname=rootvg --fsoptions="rw,noatime,nodiratime"
logvol /tmp      --fstype="xfs"   --size=1024 --name=tmp  --label="TMP"  --vgname=rootvg --fsoptions="rw,noatime,nodiratime"
logvol /opt      --fstype="xfs"   --size=1024 --name=opt  --label="OPT"  --vgname=rootvg --fsoptions="rw,noatime,nodiratime"
logvol /home     --fstype="xfs"   --size=1024 --name=home --label="HOME" --vgname=rootvg --fsoptions="rw,noatime,nodiratime"
logvol swap      --fstype="swap"  --size=4096 --name=swap --label="SWAP" --vgname=rootvg

# RPM PACKAGES
%packages
@^minimal-environment
kexec-tools
nfs-utils
nfs4-acl-tools
perl
chrony
%end

# KDUMP
%addon com_redhat_kdump --enable --reserve-mb='auto'
%end

# POST INSTALL COMMANDS TO EXECUTE
%post --log=/root/ks-post.log --interpreter=/usr/bin/bash

  # POST: route
  echo ROUTE_NET via ROUTE_VIA > /etc/sysconfig/network-scripts/route-INTERFACE2

  # POST: chrony CONFIG
  cat << TIME > /etc/chrony.conf
server NTP1 iburst
server NTP2 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony
keyfile /etc/chrony.keys
leapsectz right/UTC
TIME

  # POST: chrony SERVICE
  systemctl enable chronyd

%end

# PASSWORD REQUIREMENTS
%anaconda
pwpolicy root --minlen=6 --minquality=1 --notstrict --nochanges --notempty
pwpolicy user --minlen=6 --minquality=1 --notstrict --nochanges --emptyok
pwpolicy luks --minlen=6 --minquality=1 --notstrict --nochanges --notempty
%end

kickstart.sh

… and last but not least – the kickstart.sh script. It does not take any arguments – it just loads the kickstart.config variables and then replaces all the placeholders in kickstart.skel with sed(1) to generate a new Kickstart file as files/${SYSTEM_NAME}.cfg. It also copies the config into files/${SYSTEM_NAME}.config for convenience.

# cat kickstart.sh
#! /bin/sh

if [ ! -f kickstart.config ]
then
  echo "ERROR: file 'kickstart.config' not available"
  exit 1
fi

if [ ! -f kickstart.skel ]
then
  echo "ERROR: file 'kickstart.skel' not available"
  exit 1
fi

. "$( pwd )/kickstart.config"

mkdir -p files ksfloppy iso

cp kickstart.config files/${SYSTEM_NAME}.config

if [ ${?} -eq 0 ]
then
  echo "INFO: kickstart config copied to 'files/${SYSTEM_NAME}.config' location"
else
  echo "ERROR: could not copy config to 'files/${SYSTEM_NAME}.config' location"
  exit 1
fi

sed                                           \
  -e s@SYSTEM_NAME@${SYSTEM_NAME}@g           \
  -e s@SALT_MINION_NAME@${SALT_MINION_NAME}@g \
  -e s@SALT_MASTER_IP@${SALT_MASTER_IP}@g     \
  -e s@REPO_SERVER_IP@${REPO_SERVER_IP}@g     \
  -e s@RHEL_MAJOR@${RHEL_MAJOR}@g             \
  -e s@RHEL_VERSION@${RHEL_VERSION}@g         \
  -e s@RHEL_ARCH@${RHEL_ARCH}@g               \
  -e s@INTERFACE1@${INTERFACE1}@g             \
  -e s@IP_ADDRESS1@${IP_ADDRESS1}@g           \
  -e s@NETMASK1@${NETMASK1}@g                 \
  -e s@INTERFACE2@${INTERFACE2}@g             \
  -e s@IP_ADDRESS2@${IP_ADDRESS2}@g           \
  -e s@NETMASK2@${NETMASK2}@g                 \
  -e s@GATEWAY@${GATEWAY}@g                   \
  -e s@NAMESERVER1@${NAMESERVER1}@g           \
  -e s@NAMESERVER2@${NAMESERVER2}@g           \
  -e s@NTP1@${NTP1}@g                         \
  -e s@NTP2@${NTP2}@g                         \
  -e s@ROUTE_NET@${ROUTE_NET}@g               \
  -e s@ROUTE_VIA@${ROUTE_VIA}@g               \
  kickstart.skel > files/${SYSTEM_NAME}.cfg

if [ ${?} -eq 0 ]
then
  echo "INFO: kickstart file 'files/${SYSTEM_NAME}.cfg' generated"
else
  echo "ERROR: failed to generate 'files/${SYSTEM_NAME}.cfg' kickstart file"
  exit 1
fi

echo "INFO: mkisofs(8) output BEGIN"
echo "-----------------------------"

mkisofs -J -R -l -graft-points -V "OEMDRV" \
        -input-charset utf-8 \
        -o iso/${SYSTEM_NAME}.oemdrv.iso \
        ks.cfg=files/${SYSTEM_NAME}.cfg ksfloppy

# KEEP mkisofs(8) EXIT CODE - THE echo(1) CALLS BELOW WOULD OVERWRITE ${?}
MKISOFS_EXIT=${?}

echo "-----------------------------"
echo "INFO: mkisofs(8) output ENDED"

if [ ${MKISOFS_EXIT} -eq 0 ]
then
  echo "INFO: ISO image 'iso/${SYSTEM_NAME}.oemdrv.iso' generated"
else
  echo "ERROR: failed to generate 'iso/${SYSTEM_NAME}.oemdrv.iso' ISO image"
  exit 1
fi

ISO

It finishes its work in less than a second. Here is its output.

kickstart.sh

… and the generated ISO file.

# ls -lh iso/kickme.oemdrv.iso
-rw-r--r--  1 root  wheel   366K Apr 10 21:57 iso/kickme.oemdrv.iso
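
You can double check the image with isoinfo(1) which comes in the same cdrtools package that provides mkisofs(8):

# isoinfo -d -i iso/kickme.oemdrv.iso | grep -i 'volume id'
Volume id: OEMDRV

# isoinfo -J -l -i iso/kickme.oemdrv.iso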

Result

Now – when you boot the kickme VirtualBox virtual machine with both CD-ROM devices loaded you will end up with a RHEL system installed according to your generated Kickstart config. Anaconda automatically searches for a ks.cfg file on any device labelled OEMDRV – that is why our generated ISO uses that volume label and no additional boot parameters are needed. Here are some files from the installed kickme RHEL system.

Filesystems with LABELs such as BOOT or VAR defined.

# lsblk -i -f
NAME            FSTYPE      LABEL UUID                                   MOUNTPOINT
sda
|-sda1          xfs         BOOT  b5c66ea5-b38a-4072-b1a8-0d5882ace179   /boot
|-sda2          vfat        EFI   BB7A-4BFD                              /boot/efi
`-sda3          LVM2_member       e9BwIq-4I2W-zX6y-As42-f9N2-WTTR-WfHKdC
  |-rootvg-root xfs         ROOT  dbf8dd30-51cc-408a-9d05-b1ae67c0637c   /
  |-rootvg-swap swap        SWAP  c8a016b5-f43d-4510-9703-e9c68f02ae64   [SWAP]
  |-rootvg-usr  xfs         USR   c6694796-a5bd-4833-9a6b-a740e8bf83bf   /usr
  |-rootvg-home xfs         HOME  5264afe6-5d9c-4dc3-9d8d-e19078864aea   /home
  |-rootvg-opt  xfs         OPT   039e9575-3af8-4a8b-95c0-54fb8a515f70   /opt
  |-rootvg-tmp  xfs         TMP   11709a48-a64f-4b93-86a3-e943ff9ecf01   /tmp
  `-rootvg-var  xfs         VAR   ed20343c-b915-4234-b13e-0c6c94e03edc   /var
sr0
sr1

The /etc/fstab file with rw,noatime,nodiratime mount options.

# grep '^[^#]' /etc/fstab
/dev/mapper/rootvg-root /                       xfs     rw,noatime,nodiratime 0 0
UUID=b5c66ea5-b38a-4072-b1a8-0d5882ace179 /boot xfs     rw,noatime,nodiratime 0 0
UUID=BB7A-4BFD          /boot/efi               vfat    defaults,uid=0,gid=0,umask=077,shortname=winnt 0 2
/dev/mapper/rootvg-home /home                   xfs     rw,noatime,nodiratime 0 0
/dev/mapper/rootvg-opt  /opt                    xfs     rw,noatime,nodiratime 0 0
/dev/mapper/rootvg-tmp  /tmp                    xfs     rw,noatime,nodiratime 0 0
/dev/mapper/rootvg-usr  /usr                    xfs     rw,noatime,nodiratime 0 0
/dev/mapper/rootvg-var  /var                    xfs     rw,noatime,nodiratime 0 0
/dev/mapper/rootvg-swap none                    swap    defaults        0 0

Networking on the two network interfaces and the additional route that was generated.

# cat /etc/sysconfig/network-scripts/ifcfg-enp0s3
# Generated by parse-kickstart
TYPE=Ethernet
DEVICE=enp0s3
UUID=e1fddd41-f398-4c1d-8bef-e28ef705d568
ONBOOT=yes
IPADDR=10.0.10.199
NETMASK=255.255.255.0
GATEWAY=10.0.10.1
IPV6INIT=no
DNS1=1.1.1.1
DNS2=9.9.9.9
PROXY_METHOD=none
BROWSER_ONLY=no
PREFIX=24
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
NAME="System enp0s3"

# cat /etc/sysconfig/network-scripts/ifcfg-enp0s8
# Generated by parse-kickstart
TYPE=Ethernet
DEVICE=enp0s8
UUID=5454b587-8c29-41a7-93f8-532c814865de
ONBOOT=yes
IPADDR=10.0.90.199
NETMASK=255.255.255.0
IPV6INIT=no
PROXY_METHOD=none
BROWSER_ONLY=no
PREFIX=24
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
NAME="System enp0s8"

# cat /etc/sysconfig/network-scripts/route-enp0s8
10.0.40.10/24 via 10.0.20.1

# cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 1.1.1.1
nameserver 9.9.9.9

Alternatives

It is also possible to use livemedia-creator from the lorax package … but I would skip it. To be honest I tried it – and it did not work at all. I started the following process … and it ran for more than a DAY and produced NOTHING.

# livemedia-creator \
    --make-iso \
    --ram 4096 \
    --vcpus 4 \
    --iso=/mnt/rhel-8.5-x86_64-boot.iso \
    --ks=/mnt/mykick.cfg

The log file for the operation was also EMPTY. At least that was the case for the run using the virt-install option which creates everything inside a virtual machine. This also intrigues me a lot. Why use virtual machines just to create installation media? It is just a bunch of files. There are better options available such as chroot(8) for example … or even such glorified containers as Docker or Podman. Why use a fully fledged virtual machine just to create an ISO image? This is a big mystery to me.

It seems that livemedia-creator also has a --no-virt option available … but as the documentation states – it can “render the entire system unusable” – not a very production ready solution for my taste. Below is a screenshot from the official RHEL documentation.

rhel-render

Pity that livemedia-creator did not work for me – but I already have a working process anyway.

Some people also suggested these as valuable alternatives:

Maybe some day I will find time to check them out.

Summary

I am not the best at summaries so I will just write here that the article has ended successfully πŸ™‚

Regards.

EOF

GlusterFS 8 on FreeBSD 13

About two years ago I made a guide for the really old GlusterFS 3.11 version that was available back then on FreeBSD 12.0. Recently I noticed that the GlusterFS version in FreeBSD Ports (and packages) is now finally up-to-date with upstream GlusterFS versions.

gluster-logo

This guide will show you how to create a GlusterFS 8 distributed filesystem on the latest FreeBSD 13. At the moment of writing this article FreeBSD 13 is at the RC1 stage but it will be released within a month.

In the earlier guide I created a dispersed volume with redundancy comparable to RAID6 – but across 6 nodes instead of disks. This means that 2 of the 6 nodes can crash and GlusterFS will still work without a problem. Today I will show you a more minimalistic approach with a 3 node setup and a volume that takes space only on nodes node0 and node1, while node2 will be used as an arbiter only and does not hold any data. The arbiter greatly reduces split brain problems because instead of a vulnerable two node cluster we have three nodes in the cluster, so even if one of them fails we still have 2 of 3 votes.

I will not repeat all the ‘initial’ steps needed to prepare these three FreeBSD hosts as they were already described here – GlusterFS Cluster on FreeBSD with Ansible and GNU Parallel – in the older article about this topic. I will focus on the GlusterFS commands that need to be executed to achieve our goal.

We will use several prompts in this guide to show which commands will be executed on which nodes.

  [ALL] # command that will be executed on all node0/node1/node2 nodes
[node0] # command that will be executed on node0 only

GlusterFS

We have three nodes in our lab.

  • node0 - 10.0.10.200 - DATA NODE 'A'
  • node1 - 10.0.10.201 - DATA NODE 'B'
  • node2 - 10.0.10.202 - ARBITER NODE

Install and then enable and start the GlusterFS.

[ALL] # pkg install glusterfs

[ALL] # sysrc glusterd_enable=YES
glusterd_enable:  -> YES

[ALL] # service glusterd start
Starting glusterd.

Enable and mount the /proc filesystem and create needed directories for GlusterFS bricks.

[ALL] # grep procfs /etc/fstab
proc  /proc  procfs  rw  0 0

[ALL] # mount /proc

[ALL] # mkdir -p /bricks/data/{01,02,03,04}

Now connect all these nodes into one cluster and create GlusterFS volume.

[node0] # gluster peer status
Number of Peers: 0

[node0] # gluster peer probe node1
peer probe: success

[node0] # gluster peer probe node2
peer probe: success

[node0] # gluster peer status
Number of Peers: 2

Hostname: node1
Uuid: b5bc1602-a7bb-4f62-8149-98ca97be1784
State: Peer in Cluster (Connected)

Hostname: node2
Uuid: 2bfa0c71-04b4-4660-8a5c-373efc5da15c
State: Peer in Cluster (Connected)

[node0] # gluster volume create data \
  replica 2 \
  arbiter 1 \
  node0:/bricks/data/01 \
  node1:/bricks/data/01 \
  node2:/bricks/data/01 \
  node0:/bricks/data/02 \
  node1:/bricks/data/02 \
  node2:/bricks/data/02 \
  node0:/bricks/data/03 \
  node1:/bricks/data/03 \
  node2:/bricks/data/03 \
  node0:/bricks/data/04 \
  node1:/bricks/data/04 \
  node2:/bricks/data/04 \
  force
volume create: data: success: please start the volume to access data

[node0] # gluster volume start data
volume start: data: success

[node0] # gluster volume info
 
Volume Name: data
Type: Distributed-Replicate
Volume ID: f73d57ea-6f10-4840-86e7-f8178540e948
Status: Started
Snapshot Count: 0
Number of Bricks: 4 x (2 + 1) = 12
Transport-type: tcp
Bricks:
Brick1: node0:/bricks/data/01
Brick2: node1:/bricks/data/01
Brick3: node2:/bricks/data/01 (arbiter)
Brick4: node0:/bricks/data/02
Brick5: node1:/bricks/data/02
Brick6: node2:/bricks/data/02 (arbiter)
Brick7: node0:/bricks/data/03
Brick8: node1:/bricks/data/03
Brick9: node2:/bricks/data/03 (arbiter)
Brick10: node0:/bricks/data/04
Brick11: node1:/bricks/data/04
Brick12: node2:/bricks/data/04 (arbiter)
Options Reconfigured:
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

[node0] # gluster volume status
Status of volume: data
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node0:/bricks/data/01                 49152     0          Y       4595 
Brick node1:/bricks/data/01                 49152     0          Y       1022 
Brick node2:/bricks/data/01                 49152     0          Y       3356 
Brick node0:/bricks/data/02                 49153     0          Y       4597 
Brick node1:/bricks/data/02                 49153     0          Y       1024 
Brick node2:/bricks/data/02                 49153     0          Y       3358 
Brick node0:/bricks/data/03                 49154     0          Y       4599 
Brick node1:/bricks/data/03                 49154     0          Y       1026 
Brick node2:/bricks/data/03                 49154     0          Y       3360 
Brick node0:/bricks/data/04                 49155     0          Y       4601 
Brick node1:/bricks/data/04                 49155     0          Y       1028 
Brick node2:/bricks/data/04                 49155     0          Y       3362 
Self-heal Daemon on localhost               N/A       N/A        Y       4604 
Self-heal Daemon on node1                   N/A       N/A        Y       1031 
Self-heal Daemon on node2                   N/A       N/A        Y       3365 
 
Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks
 
[node0] # ps aux | grep -e gluster -e RSS | cut -d ' ' -f 1-27
USER   PID %CPU %MEM   VSZ   RSS TT  STAT STARTED       TIME COMMAND
root  4604  4.0  0.7 64364 22520  -  Rs   21:15     53:50.30 /usr/local/sbin/glusterfs -s localhost --volfile-id shd/data -p
root  4585  3.0  0.7 48264 21296  -  Rs   21:14     56:13.25 /usr/local/sbin/glusterd --pid-file=/var/run/glusterd.pid (glusterfsd)
root  4597  3.0  0.7 66472 22484  -  Rs   21:15     48:54.63 /usr/local/sbin/glusterfsd -s node0 --volfile-id data.node0.bricks-data-02 -p
root  4599  3.0  0.7 62376 22464  -  Rs   21:15     48:23.41 /usr/local/sbin/glusterfsd -s node0 --volfile-id data.node0.bricks-data-03 -p
root  4595  2.0  0.8 66864 23724  -  Rs   21:15     49:03.23 /usr/local/sbin/glusterfsd -s node0 --volfile-id data.node0.bricks-data-01 -p
root  4601  2.0  0.7 62376 22444  -  Rs   21:15     49:17.01 /usr/local/sbin/glusterfsd -s node0 --volfile-id data.node0.bricks-data-04 -p
root  6748  0.0  0.1 12868  2560  2  S+   19:59      0:00.00 grep -e gluster -e

The GlusterFS data volume is now created and started. You can mount it and use it the way you like.

[node2] # mkdir /data

[node2] # kldload fusefs

[node2] # mount_glusterfs node0:/data /data

[node2] # echo $?
0

[node2] # df -h /data
Filesystem    Size    Used   Avail Capacity  Mounted on
/dev/fuse     123G    2.5G    121G     2%    /data

Voila! Mounted and ready to serve.
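
A quick sanity check – write a file through the mount and look for it on the bricks. The file should show up with full contents in one brick on node0 and one on node1 while the matching brick on node2 – the arbiter – should hold a zero-length copy with metadata only:

[node2] # echo 'hello gluster' > /data/test.txt

[node0] # ls -l /bricks/data/*/test.txt
[node1] # ls -l /bricks/data/*/test.txt
[node2] # ls -l /bricks/data/*/test.txt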

Tuning

GlusterFS comes without any tuning applied so here is something I suggest as a starting point.

[node0] # gluster volume set data client.event-threads 8
[node0] # gluster volume set data cluster.lookup-optimize on
[node0] # gluster volume set data cluster.readdir-optimize on
[node0] # gluster volume set data features.cache-invalidation on
[node0] # gluster volume set data group metadata-cache
[node0] # gluster volume set data network.inode-lru-limit 200000
[node0] # gluster volume set data performance.cache-invalidation on
[node0] # gluster volume set data performance.cache-refresh-timeout 10
[node0] # gluster volume set data performance.cache-size 1GB
[node0] # gluster volume set data performance.io-thread-count 16
[node0] # gluster volume set data performance.parallel-readdir on
[node0] # gluster volume set data performance.stat-prefetch on
[node0] # gluster volume set data performance.write-behind-trickling-writes on
[node0] # gluster volume set data performance.write-behind-window-size 100MB
[node0] # gluster volume set data server.event-threads 8
[node0] # gluster volume set data server.outstanding-rpc-limit 256
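
You can read these options back with the volume get command to confirm that they were applied – for example:

[node0] # gluster volume get data all | grep -e cache-size -e event-threads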

That is all in this rather short guide.

Treat it as an addendum to the original GlusterFS article linked earlier.

EOF

Oldschool Gaming on FreeBSD

When was the last time you played a computer game? I really like one of Benjamin Franklin's quotes – “We do not stop playing because we grow old, we grow old because we stop playing.” – he lived in times when computer games did not exist yet but the quote remains current. I do not play games a lot, but when I do I make sure that they are the right and best ones. They are often games from the past and some of these games just do not age … they are timeless actually. Today I will show you some oldschool gaming on the FreeBSD system.

Here is the Table of Contents for the article.

  • Native Games
    • Native Console/Terminal Games
      • Interactive
      • Passive
    • Native X11 Games
  • AMIGA Games
  • DOS Games
    • Fourteen Years Later
  • Windows Games
  • Flash/SWF Games
  • Web Browser Games
  • Last Resort
  • Closing Thoughts

Here is my Openbox ‘games’ menu.

openbox-games-menu-update

Native Games

First we will start with ‘native’ games on FreeBSD – as of today there are more than a thousand games available in the FreeBSD Ports collection.

% ls /usr/ports/games | wc -l
    1130

You can get a nice description of each of these games (from the pkg-descr file) by using the command below. I assume that your FreeBSD Ports tree is under the /usr/ports directory.

% for I in /usr/ports/games/*/pkg-descr
> do
>   echo ${I}
>   echo
>   cat ${I}
>   echo
>   echo
>   echo
> done \
>   | grep \
>       --color=always \
>       -A 100 \
>       -E "^/usr/ports/games/.*/pkg-descr" \
>   | less -R

Here is the one-liner that you can actually copy and paste into your terminal.

% for I in /usr/ports/games/*/pkg-descr; do echo ${I}; echo; cat ${I}; echo; echo; echo; done | grep --color=always -A 100 -E "^/usr/ports/games/.*/pkg-descr" | less -R

Here is how it looks.

native-ports-list

This way you can browse (and search within the less(1) command) for interesting titles.

Native Console/Terminal Games

Interactive

Let's start with the simplest games – text games played in the terminal. I only play two of these – the 2048 and ctris games.

The 2048 game is generally a single C file – 2048.c – available here – https://github.com/mevdschee/2048.c/blob/master/2048.c – you need to compile it with the cc(1) command – like this.

% cc -o 2048 2048.c
% ./2048
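
If you prefer to stay in the terminal then fetch(1) can grab that file directly – the raw URL below is derived from the repository link above:

% fetch https://raw.githubusercontent.com/mevdschee/2048.c/master/2048.c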

game-2048

The other one – ctris – is available in the FreeBSD Ports or you can add it as a package with the pkg(8) command.

# pkg install -y ctris

game-ctris

There are also several other Tetris-like terminal games in the FreeBSD Ports – bsdtris or vitetris for example.

Passive

There are also ‘non-interactive’ terminal games (or maybe I should call them terminal screensavers instead).

My favorite two are cmatrix and pipes. The first one is available from FreeBSD Ports.

IMHO it looks best when launched this way.

% cmatrix -a -b -u 6 -C blue

game-cmatrix

Some time ago I ‘ported’ – or should I say modified – the pipes script so it works properly on FreeBSD and it is available from – https://github.com/vermaden/pipes/blob/master/pipes.sh – here.

game-pipes

Native X11 Games

Time to move to some more graphically appealing games – the X11 games.

One of the better open source games is the Battle for Wesnoth which is also available in the FreeBSD Ports so adding the package is easy.

# pkg install -y wesnoth

game-wesnoth

AMIGA Games

Most AMIGA games have been ported to DOS and it is generally more convenient and a lot faster to play the DOS ‘ports’ using dosbox(1) instead of playing their original AMIGA versions under the fs-uae(1) emulator. Some games like Sensible World of Soccer are better in the original AMIGA version (a slightly larger field view for example – although that only makes the DOS game a little harder as you see less), but still the difference is not big enough to justify waiting roughly 60 seconds for each game start with fs-uae(1) and manually switching virtual floppies.

swos-amiga-dos-xbla

As you can see on the far right, the Sensible World of Soccer game has even been ported to the Microsoft XBOX console – available there as SWOS πŸ™‚

There is however (at least) one AMIGA game that has not been ported to DOS and it was made by the legendary TEAM17 studio. It is the All Terrain Racing game. If you check the reviews from when it was released it did not get scores as high as Sensible World of Soccer for example, but it is one of the better looking and more fun racing games made for AMIGA. Then again, Sensible World of Soccer was named one of The 10 Most Important Video Games of All Time in 2007 so it is really hard to beat that. Even Sensible Gold got much worse reviews.

game-atr

Originally it came in a two floppies version so every time you launch this game in fs-uae(1) you need to change the virtual floppy … which is a real PITA I must say … not to mention 60 seconds of waiting for it to start. But there is another possibility. The All Terrain Racing game was also released for the AMIGA CD32 variant which used CD-ROM discs instead of floppies. That way, by loading a single ISO file, you do not need to switch floppies each time the game starts anymore. Yay!

Fortunately the fs-uae(1) config for the All Terrain Racing game is not long or complicated either.

fs-uae

The fs-uae(1) is also easily installable on FreeBSD by using packages.

# pkg install -y fs-uae

As the All Terrain Racing game is started/loaded from an ISO file, saving/loading the game state is done not ‘natively’ in the game but one level up – in fs-uae(1) itself with the SAVE STATE and LOAD STATE options as shown below.

game-atr-save-load

Not all AMIGA games are available in a CD32 version but one may also use the virtual Hard Disk option of the fs-uae(1) emulator to avoid switching floppies.

DOS Games

DOS games can be very conveniently played using DOSBox which is available on FreeBSD as the dosbox package (or port).

# pkg install -y dosbox

Games in DOSBox start very quickly which is very nice. They also run very smoothly.

dosbox

As you can see I prefer to keep my games outside of the ~/.dosbox directory and keep only configuration files there. But that is just an ‘organizational’ choice. Make your own choices about how and where to keep the games – whatever suits you best.

It is also very convenient to redefine keyboard shortcuts with the DOSBox builtin keyboard remapper. For example instead of the default [CTRL] for the ‘FIRE’ button in Sensible World of Soccer I prefer to use the [Z] key – and that is my only mapping currently.

dosbox-keys

Keep in mind that as the DOSBox main config file is kept as ~/.dosbox/dosbox-${VERSION}.conf (it is ~/.dosbox/dosbox-0.74-3.conf as of the time of writing this article), the remapped keyboard shortcuts are kept in the ~/.dosbox/mapper-${VERSION}.map file (~/.dosbox/mapper-0.74-3.map respectively). Also keep in mind that if you start dosbox in the ~ (home) dir and not in the ~/.dosbox dir then dosbox will create the ~/mapper-0.74-3.map file (in your home dir) instead of the proper ~/.dosbox/mapper-0.74-3.map location.

I also made script wrappers for each game so I can launch them quickly both from the command line and by using dmenu.

scripts-games

You will find them all as games-* scripts in my GitHub repository – https://github.com/vermaden/scripts – available here. The DOSBox configuration files are in the dosbox dir in the same repo – https://github.com/vermaden/scripts/tree/master/dosbox – here.
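
For reference – such a wrapper can be as simple as the sketch below. The script and config names here are illustrative – the real ones live in the repository above.

#! /bin/sh
# games-swos - launch Sensible World of Soccer with its dedicated DOSBox config
exec dosbox -conf ~/.dosbox/dosbox-swos.conf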

My favorite DOS (originally AMIGA) game is Sensible World of Soccer. I also like to play the first Settlers game and Theme Hospital occasionally.

The DOSBox also allows you to easily record both audio (into WAV files) and video (into AVI files) with keyboard shortcuts.

For example I recorded a replay of my Sensible World of Soccer goals this way (and then converted it to a GIF using ffmpeg(1)).

SWOS Goals.

This is the ffmpeg(1) spell that I used to convert the DOSBox made AVI file to GIF file.

% ffmpeg -i ~/.dosbox/capture/sws_eng_001.avi -vf "fps=30" -loop 0 swos.goals.gif

Keep in mind that some games – and Sensible World of Soccer is one of them – have more than one graphical mode. When you start the game without any switches it starts in low graphics mode which is easy to spot by looking at the pixelated/dotted ‘S‘ logo in the top right corner. The lines on the field are also not antialiased.

game-swos-not-full

When you add the /f flag to the Sensible World of Soccer binary it starts in full graphics mode – the ‘S‘ letter now has a solid grey color in the back and the lines on the field are antialiased.

game-swos-full

Here is how it looks in the DOSBox config file.

[autoexec]
@echo off
mount C: ~/.dosbox
C:
cd swos-SFX
sws_eng.exe /f

The Sensible World of Soccer has a special place in my private games ‘Hall of Fame’. It is the only game that I was able to play for 26 hours straight with breaks only for meals and pee … but that was in the old AMIGA times in the 90s.

Fourteen Years Later

One of the very old but also very nice logic games I played two decades ago was the Swing game. I was not able to start this game in ‘normal’ mode as it started in ‘network’ mode each time. While searching for a possible solution I found … my own bug report on DOSBox created 14 years ago – https://www.dosbox.com/comp_list.php?showID=2499 – here. I was not able to force the Swing game to start in ‘normal’ mode back then so I ‘marked’ it as ‘non working’ and moved on.

Now when I checked the bug report I see useful solutions to the problem. Pity I am not able to login and say ‘thanks’ as I do not remember my password and the DOSBox page does not offer a password reset service.

It seems that Swing needs to have its game directory additionally mounted as a CD-ROM device. That way Swing starts in ‘normal’ mode and local Single and Multi Player games are possible.

game-swing

The most important part of DOSBox config is here:

[autoexec]
@echo off
mount C ~/.dosbox
mount D ~/.dosbox/swing -t cdrom -usecd 0
C:
cd swing
swing.bat

Windows Games

Good old WINE. On FreeBSD there are two WINE versions – a 64bit version as the emulators/wine package and a 32bit version named emulators/i386-wine. You want to use the latter because most games are 32bit and the 64bit version of WINE is not able to run 32bit games. The installation on FreeBSD is typical as shown below.

# pkg install i386-wine

Old/classic Windows games usually keep your saved games directly in their installation folders under dirs named ‘SAVE’ or ‘SAVEDGAMES’, but at some point between 2005 and now the game developers started to think that it is a ‘great’ idea to store them in your ‘My Documents’ directory … I do not have to tell you how I feel about that ‘decision’, but on FreeBSD it means that you will have saved games directories created directly in your ~ home directory (/home/vermaden in my case). What a mess.

winecfg

That is probably the only thing I configure in WINE on FreeBSD with winecfg – I set the ‘My Documents’ location to the ~/games.EXTRACT/profile directory instead.

DOSBox is also better for gaming than WINE because it allows the convenient [ALT]+[ENTER] shortcut to switch between fullscreen and windowed modes. With WINE I need to keep two game ‘startup’ scripts – separate ones for windowed mode and for fullscreen mode.

wine-window-fullscreen
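
Such a pair of wrappers can look like the sketch below – the windowed variant uses the WINE virtual desktop feature. The game path, binary name and resolution are examples:

#! /bin/sh
# game-window - run the game inside a WINE virtual desktop (windowed mode)
exec wine explorer /desktop=game,1024x768 ~/games/some-game/game.exe

#! /bin/sh
# game-full - run the game directly (fullscreen mode)
exec wine ~/games/some-game/game.exe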

Below is an example of Colin McRae Rally 2.0 game under WINE on FreeBSD.

game-colin

My best time for Stage 1 in Italy was ‘only’ 2:09.84 so I was not fast enough to beat the all time best of 2:05.75 immortalized here – https://youtu.be/iLLMIJzpoVk – on YouTube.

Another classic – the original Baldur’s Gate game below. It was possible to dual class into a specialist mage – not possible now in the Enhanced Edition.

game-baldurs-bg1

More up to date Baldur’s Gate: Enhanced Edition also works well.

game-baldurs-play

Less popular titles like Lionheart: Legacy of the Crusader also work well under WINE on FreeBSD. A very unusual game as it used the S.P.E.C.I.A.L system from Fallout instead of a ‘typical’ choice like Advanced Dungeons and Dragons used in other Black Isle games.

game-lionheart-play

If for some reason your game does not work under WINE on FreeBSD then you should try the Project Homura solution. It is also available as the games/homura package (or port) on FreeBSD.

Flash/SWF Games

While I really hate the Adobe Flash technology when browsing web pages, I quite like the compact SWF files as simple flash games – played using WINE and the Flash Player Projector from Adobe. I use WINE to start the Windows version of that Flash Player Projector program. It is available here – https://www.adobe.com/support/flashplayer/debug_downloads.html – in the debug downloads.

You can pick either of these two but I use the first one.
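
Starting a game with the projector under WINE can then look like this – both file names below are examples:

% wine ~/flashplayer_32_sa_debug.exe ~/games/swf/poker2.swf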

An example of Governor of Poker 2 game running in the Flash Player Projector under WINE.

game-poker

All of these games can be found on various Flash games sites by using View Page Source in your browser and looking for the link to the SWF file. I can not post these games here for download but if you have a problem finding them then let me know πŸ™‚

Web Browser Games

A class of games that are played directly in the web browser. Examples of such games include Krunker …

game-krunker

… or Spelunky for example.

game-spelunky

If you are VERY bored then you can also try the Chrome Dinosaur Game built into the Chromium browser. To access it try to open a page that does not exist – like http://non-existing-site.com for example.

game-chromium

The Chromium browser will then display the No Internet error message. Press the [UP] arrow now and start playing.

game-chromium-end

If you liked the 2048 game and you DO have an Internet connection you may also play 2048 directly on the DuckDuckGo page.

game-duck-2048

Last Resort

Sometimes WINE does not work and the game is available only for Windows or Linux. The solution here is to use VirtualBox. Remember to select/enable the 3D acceleration and install the VirtualBox Guest Additions for good performance.

virtualbox

Closing Thoughts

All of these games were played smoothly on the oldschool Intel HD Graphics 3000 card of the 2011 Sandy Bridge i7-2820QM CPU, as this is what my ThinkPad W520 came with.

If I forgot to post something or something is not obvious then feel free to let me know. This post as usual grew more than it should πŸ™‚ Also if you think that I missed some important dosbox(1)/wine(1)/fs-uae(1) options then please let me know.

EOF

GlusterFS Cluster on FreeBSD with Ansible and GNU Parallel

Today I would like to present an article about setting up a GlusterFS cluster on the FreeBSD system with the Ansible and GNU Parallel tools.

gluster-logo.png

To cite Wikipedia “GlusterFS is a scale-out network-attached storage file system. It has found applications including cloud computing, streaming media services, and content delivery networks.” The GlusterFS page describes it similarly “Gluster is a scalable, distributed file system that aggregates disk storage resources from multiple servers into a single global namespace.”

Here are its advantages:

  • Scales to several petabytes.
  • Handles thousands of clients.
  • POSIX compatible.
  • Uses commodity hardware.
  • Can use any ondisk filesystem that supports extended attributes.
  • Accessible using industry standard protocols like NFS and SMB.
  • Provides replication/quotas/geo-replication/snapshots/bitrot detection.
  • Allows optimization for different workloads.
  • Open Source.

Lab Setup

It will be entirely VirtualBox based and it will consist of 6 hosts. To avoid creating 6 identical FreeBSD installations by hand I used the 12.0-RELEASE virtual machine image available from the FreeBSD Project directly.

There are several formats available – qcow2/raw/vhd/vmdk – but as I will be using VirtualBox I used the VMDK one.

I will use different prompts depending on where the command is executed to make the article more readable. Also, when there is a ‘%‘ at the prompt then a regular user is needed and when there is a ‘#‘ at the prompt then a superuser is needed.

gluster1 #    // command run on the gluster1 node
gluster* #    // command run on all gluster nodes
client #      // command run on gluster client
vbhost %      // command run on the VirtualBox host

Here is the list of the machines for the GlusterFS cluster:

10.0.10.11 gluster1
10.0.10.12 gluster2
10.0.10.13 gluster3
10.0.10.14 gluster4
10.0.10.15 gluster5
10.0.10.16 gluster6

Each VirtualBox virtual machine for FreeBSD is the default one (as suggested in the VirtualBox wizard) with 512 MB RAM and NAT Network as shown on the image below.

virtualbox-freebsd-gluster-host.jpg

Here is the configuration of the NAT Network on VirtualBox.

virtualbox-nat-network.jpg

The cloned/copied FreeBSD-12.0-RELEASE-amd64.vmdk images will need to have different UUIDs so we will use the VBoxManage internalcommands sethduuid command to achieve this.

vbhost % for I in $( seq 6 ); do cp FreeBSD-12.0-RELEASE-amd64.vmdk    vbox_GlusterFS_${I}.vmdk; done
vbhost % for I in $( seq 6 ); do VBoxManage internalcommands sethduuid vbox_GlusterFS_${I}.vmdk; done

To start the whole GlusterFS environment on VirtualBox use these commands.

vbhost % VBoxManage list vms | grep GlusterFS
"FreeBSD GlusterFS 1" {162a3b6f-4ec9-4709-bff8-162b0c8c9c41}
"FreeBSD GlusterFS 2" {2e30326c-ac5d-41d2-9b28-483375df38f6}
"FreeBSD GlusterFS 3" {6b2747ab-3ec6-4b1a-a28e-5d871d7891b3}
"FreeBSD GlusterFS 4" {12379cf8-31d9-4ff1-9945-465fc3ed15f0}
"FreeBSD GlusterFS 5" {a4b0d515-5924-4517-9052-df238c366f2b}
"FreeBSD GlusterFS 6" {66621755-1b97-4486-aa15-a7bec9edb343}

Check which GlusterFS machines are running.

vbhost % VBoxManage list runningvms | grep GlusterFS
vbhost %

Start the machines in VirtualBox Headless mode in parallel.

vbhost % VBoxManage list vms \
           | grep GlusterFS \
           | awk -F \" '{print $2}' \
           | while read I; do VBoxManage startvm "${I}" --type headless & done

After that command you should see these machines running.

vbhost % VBoxManage list runningvms
"FreeBSD GlusterFS 1" {162a3b6f-4ec9-4709-bff8-162b0c8c9c41}
"FreeBSD GlusterFS 2" {2e30326c-ac5d-41d2-9b28-483375df38f6}
"FreeBSD GlusterFS 3" {6b2747ab-3ec6-4b1a-a28e-5d871d7891b3}
"FreeBSD GlusterFS 4" {12379cf8-31d9-4ff1-9945-465fc3ed15f0}
"FreeBSD GlusterFS 5" {a4b0d515-5924-4517-9052-df238c366f2b}
"FreeBSD GlusterFS 6" {66621755-1b97-4486-aa15-a7bec9edb343}

Before we try to connect to our FreeBSD machines we need to do a minimal network configuration. Each FreeBSD machine will have a minimal /etc/rc.conf file as shown in the example below for the gluster1 host.

gluster1 # cat /etc/rc.conf
hostname=gluster1
ifconfig_DEFAULT="inet 10.0.10.11/24 up"
defaultrouter=10.0.10.1
sshd_enable=YES

For the setup purposes we will need to allow root login on these FreeBSD GlusterFS machines with PermitRootLogin yes option in the /etc/ssh/sshd_config file. You will also need to restart the sshd(8) service after the changes.

gluster1 # grep '^PermitRootLogin' /etc/ssh/sshd_config
PermitRootLogin yes
gluster1 # service sshd restart

By using NAT Network with Port Forwarding the FreeBSD machines will be accessible on localhost ports. For example the gluster1 machine will be available on port 2211, the gluster2 machine on port 2212 and so on. This is shown in the sockstat(1) utility output below.

vbhost % sockstat -l4
USER     COMMAND    PID   FD PROTO  LOCAL ADDRESS         FOREIGN ADDRESS
vermaden VBoxNetNAT 57622 17 udp4   *:*                   *:*
vermaden VBoxNetNAT 57622 19 tcp4   *:2211                *:*
vermaden VBoxNetNAT 57622 20 tcp4   *:2212                *:*
vermaden VBoxNetNAT 57622 21 tcp4   *:2213                *:*
vermaden VBoxNetNAT 57622 22 tcp4   *:2214                *:*
vermaden VBoxNetNAT 57622 23 tcp4   *:2215                *:*
vermaden VBoxNetNAT 57622 24 tcp4   *:2216                *:*
vermaden VBoxNetNAT 57622 28 tcp4   *:2240                *:*
vermaden VBoxNetNAT 57622 29 tcp4   *:9140                *:*
vermaden VBoxNetNAT 57622 30 tcp4   *:2220                *:*
root     sshd       96791 4  tcp4   *:22                  *:*

I think the correlation between the IP address and the port on the host is obvious πŸ™‚

Here is the list of the machines with ports on localhost:

10.0.10.11 gluster1 2211
10.0.10.12 gluster2 2212
10.0.10.13 gluster3 2213
10.0.10.14 gluster4 2214
10.0.10.15 gluster5 2215
10.0.10.16 gluster6 2216
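
For the record – such Port Forwarding rules can also be added from the command line instead of clicking through the GUI. A sketch, assuming the NAT network is named NatNetwork and with illustrative rule names:

vbhost % for I in $( seq 6 ); do VBoxManage natnetwork modify --netname NatNetwork --port-forward-4 "gluster${I}-ssh:tcp:[]:221${I}:[10.0.10.1${I}]:22"; done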

To connect to such a machine from the VirtualBox host system you will need this command:

vbhost % ssh -l root localhost -p 2211

To avoid typing all that every time you need to login to gluster1, let's make some changes to the ~/.ssh/config file for convenience. This way it will be possible to login in a much shorter way.

vbhost % ssh gluster1

Here is the modified ~/.ssh/config file.

vbhost % cat ~/.ssh/config
# GENERAL
  StrictHostKeyChecking no
  LogLevel              quiet
  KeepAlive             yes
  ServerAliveInterval   30
  VerifyHostKeyDNS      no

# ALL HOSTS SETTINGS
Host *
  StrictHostKeyChecking no
  Compression           yes

# GLUSTER
Host gluster1
  User root
  Hostname 127.0.0.1
  Port 2211

Host gluster2
  User root
  Hostname 127.0.0.1
  Port 2212

Host gluster3
  User root
  Hostname 127.0.0.1
  Port 2213

Host gluster4
  User root
  Hostname 127.0.0.1
  Port 2214

Host gluster5
  User root
  Hostname 127.0.0.1
  Port 2215

Host gluster6
  User root
  Hostname 127.0.0.1
  Port 2216

I assume that you already have some SSH keys generated (with ~/.ssh/id_rsa as the private key) so let's remove the need to type the password on each SSH login.

vbhost % ssh-copy-id -i ~/.ssh/id_rsa gluster1
Password for root@gluster1:

vbhost % ssh-copy-id -i ~/.ssh/id_rsa gluster2
Password for root@gluster2:

vbhost % ssh-copy-id -i ~/.ssh/id_rsa gluster3
Password for root@gluster3:

vbhost % ssh-copy-id -i ~/.ssh/id_rsa gluster4
Password for root@gluster4:

vbhost % ssh-copy-id -i ~/.ssh/id_rsa gluster5
Password for root@gluster5:

vbhost % ssh-copy-id -i ~/.ssh/id_rsa gluster6
Password for root@gluster6:

Ansible Setup

As we already have the SSH integration in place we will now configure Ansible to connect to our ‘localhost’ ports for the FreeBSD machines.

Here is the Ansible’s hosts file.

vbhost % cat hosts
[gluster]
gluster1 ansible_port=2211 ansible_host=127.0.0.1 ansible_user=root
gluster2 ansible_port=2212 ansible_host=127.0.0.1 ansible_user=root
gluster3 ansible_port=2213 ansible_host=127.0.0.1 ansible_user=root
gluster4 ansible_port=2214 ansible_host=127.0.0.1 ansible_user=root
gluster5 ansible_port=2215 ansible_host=127.0.0.1 ansible_user=root
gluster6 ansible_port=2216 ansible_host=127.0.0.1 ansible_user=root

[gluster:vars]
ansible_python_interpreter=/usr/local/bin/python2.7

Here is the listing of these machines using ansible command.

vbhost % ansible -i hosts --list-hosts gluster
  hosts (6):
    gluster1
    gluster2
    gluster3
    gluster4
    gluster5
    gluster6

Let's verify that our Ansible setup works correctly.

vbhost % ansible -i hosts -m raw -a 'echo' gluster
gluster1 | CHANGED | rc=0 >>



gluster3 | CHANGED | rc=0 >>



gluster2 | CHANGED | rc=0 >>



gluster5 | CHANGED | rc=0 >>



gluster4 | CHANGED | rc=0 >>



gluster6 | CHANGED | rc=0 >>

It works as desired.

We are not able to use Ansible modules other than Raw because by default Python is not installed on FreeBSD, as shown below.

vbhost % ansible -i hosts -m ping gluster
gluster1 | FAILED! => {
    "changed": false,
    "module_stderr": "",
    "module_stdout": "/bin/sh: /usr/local/bin/python2.7: not found\r\n",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 127
}
gluster2 | FAILED! => {
    "changed": false,
    "module_stderr": "",
    "module_stdout": "/bin/sh: /usr/local/bin/python2.7: not found\r\n",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 127
}
gluster4 | FAILED! => {
    "changed": false,
    "module_stderr": "",
    "module_stdout": "/bin/sh: /usr/local/bin/python2.7: not found\r\n",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 127
}
gluster5 | FAILED! => {
    "changed": false,
    "module_stderr": "",
    "module_stdout": "/bin/sh: /usr/local/bin/python2.7: not found\r\n",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 127
}
gluster3 | FAILED! => {
    "changed": false,
    "module_stderr": "",
    "module_stdout": "/bin/sh: /usr/local/bin/python2.7: not found\r\n",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 127
}
gluster6 | FAILED! => {
    "changed": false,
    "module_stderr": "",
    "module_stdout": "/bin/sh: /usr/local/bin/python2.7: not found\r\n",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 127
}

We need to get Python installed on FreeBSD.

We will partially use Ansible for this and partially the GNU Parallel.

vbhost % ansible -i hosts --list-hosts gluster \
           | sed 1d \
           | while read I; do ssh ${I} env ASSUME_ALWAYS_YES=yes pkg install python; done
pkg: Error fetching http://pkg.FreeBSD.org/FreeBSD:12:amd64/quarterly/Latest/pkg.txz: No address record
A pre-built version of pkg could not be found for your system.
Consider changing PACKAGESITE or installing it from ports: 'ports-mgmt/pkg'.
Bootstrapping pkg from pkg+http://pkg.FreeBSD.org/FreeBSD:12:amd64/quarterly, please wait...

… we forgot about setting up DNS in the FreeBSD machines, let’s fix that.

It is as easy as executing echo nameserver 1.1.1.1 > /etc/resolv.conf command on each FreeBSD machine.

Let's verify what input will be sent to GNU Parallel before executing it.

vbhost % ansible -i hosts --list-hosts gluster \
           | sed 1d \
           | while read I; do echo "ssh ${I} 'echo nameserver 1.1.1.1 > /etc/resolv.conf'"; done
ssh gluster1 'echo nameserver 1.1.1.1 > /etc/resolv.conf'
ssh gluster2 'echo nameserver 1.1.1.1 > /etc/resolv.conf'
ssh gluster3 'echo nameserver 1.1.1.1 > /etc/resolv.conf'
ssh gluster4 'echo nameserver 1.1.1.1 > /etc/resolv.conf'
ssh gluster5 'echo nameserver 1.1.1.1 > /etc/resolv.conf'
ssh gluster6 'echo nameserver 1.1.1.1 > /etc/resolv.conf'

Looks reasonable, so let's engage GNU Parallel then.

vbhost % ansible -i hosts --list-hosts gluster \
           | sed 1d \
           | while read I; do echo "ssh ${I} 'echo nameserver 1.1.1.1 > /etc/resolv.conf'"; done | parallel

Computers / CPU cores / Max jobs to run
1:local / 2 / 2

Computer:jobs running/jobs completed/%of started jobs/Average seconds to complete
local:0/6/100%/1.0s

We will now verify that the DNS is configured properly on the FreeBSD machines.

vbhost % for I in $( jot 6 ); do echo -n "gluster${I} "; ssh gluster${I} 'cat /etc/resolv.conf'; done
gluster1 nameserver 1.1.1.1
gluster2 nameserver 1.1.1.1
gluster3 nameserver 1.1.1.1
gluster4 nameserver 1.1.1.1
gluster5 nameserver 1.1.1.1
gluster6 nameserver 1.1.1.1

Verification of the DNS resolution by using host(1) to test Internet connectivity.

vbhost % for I in $( jot 6 ); do echo; echo "gluster${I}"; ssh gluster${I} host freebsd.org; done

gluster1
freebsd.org has address 96.47.72.84
freebsd.org has IPv6 address 2610:1c1:1:606c::50:15
freebsd.org mail is handled by 10 mx1.freebsd.org.
freebsd.org mail is handled by 30 mx66.freebsd.org.

gluster2
freebsd.org has address 96.47.72.84
freebsd.org has IPv6 address 2610:1c1:1:606c::50:15
freebsd.org mail is handled by 30 mx66.freebsd.org.
freebsd.org mail is handled by 10 mx1.freebsd.org.

gluster3
freebsd.org has address 96.47.72.84
freebsd.org has IPv6 address 2610:1c1:1:606c::50:15
freebsd.org mail is handled by 30 mx66.freebsd.org.
freebsd.org mail is handled by 10 mx1.freebsd.org.

gluster4
freebsd.org has address 96.47.72.84
freebsd.org has IPv6 address 2610:1c1:1:606c::50:15
freebsd.org mail is handled by 30 mx66.freebsd.org.
freebsd.org mail is handled by 10 mx1.freebsd.org.

gluster5
freebsd.org has address 96.47.72.84
freebsd.org has IPv6 address 2610:1c1:1:606c::50:15
freebsd.org mail is handled by 10 mx1.freebsd.org.
freebsd.org mail is handled by 30 mx66.freebsd.org.

gluster6
freebsd.org has address 96.47.72.84
freebsd.org has IPv6 address 2610:1c1:1:606c::50:15
freebsd.org mail is handled by 10 mx1.freebsd.org.
freebsd.org mail is handled by 30 mx66.freebsd.org.

The DNS resolution works properly, so now we will switch from the default quarterly pkg(8) repository to the latest one which – as the name suggests – has more frequent updates. We will need to execute the sed -i '' s/quarterly/latest/g /etc/pkg/FreeBSD.conf command on each FreeBSD machine.

Verification what will be sent to GNU Parallel.

vbhost % ansible -i hosts --list-hosts gluster \
           | sed 1d \
           | while read I; do echo "ssh ${I} 'sed -i \"\" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'"; done
ssh gluster1 'sed -i "" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'
ssh gluster2 'sed -i "" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'
ssh gluster3 'sed -i "" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'
ssh gluster4 'sed -i "" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'
ssh gluster5 'sed -i "" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'
ssh gluster6 'sed -i "" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'

Let’s send the command to FreeBSD machines then.

vbhost % ansible -i hosts --list-hosts gluster \
           | sed 1d \
           | while read I; do echo "ssh $I 'sed -i \"\" s/quarterly/latest/g /etc/pkg/FreeBSD.conf'"; done | parallel

Computers / CPU cores / Max jobs to run
1:local / 2 / 2

Computer:jobs running/jobs completed/%of started jobs/Average seconds to complete
local:0/6/100%/1.0s

As shown below the latest repository is configured in the /etc/pkg/FreeBSD.conf file on each FreeBSD machine.

vbhost % ssh gluster3 tail -7 /etc/pkg/FreeBSD.conf
FreeBSD: {
  url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest",
  mirror_type: "srv",
  signature_type: "fingerprints",
  fingerprints: "/usr/share/keys/pkg",
  enabled: yes
}

We may now get back to Python.

vbhost % ansible -i hosts --list-hosts gluster \
           | sed 1d \
           | while read I; do echo ssh ${I} env ASSUME_ALWAYS_YES=yes pkg install python; done
ssh gluster1 env ASSUME_ALWAYS_YES=yes pkg install python
ssh gluster2 env ASSUME_ALWAYS_YES=yes pkg install python
ssh gluster3 env ASSUME_ALWAYS_YES=yes pkg install python
ssh gluster4 env ASSUME_ALWAYS_YES=yes pkg install python
ssh gluster5 env ASSUME_ALWAYS_YES=yes pkg install python
ssh gluster6 env ASSUME_ALWAYS_YES=yes pkg install python

… and execution on the FreeBSD machines with GNU Parallel.

vbhost % ansible -i hosts --list-hosts gluster \ 
           | sed 1d \
           | while read I; do echo ssh ${I} env ASSUME_ALWAYS_YES=yes pkg install python; done | parallel

Computers / CPU cores / Max jobs to run
1:local / 2 / 2

Computer:jobs running/jobs completed/%of started jobs/Average seconds to complete
local:0/6/100%/156.0s

The Python package and its dependencies are now installed.

vbhost % ssh gluster3 pkg info
gettext-runtime-0.19.8.1_2     GNU gettext runtime libraries and programs
indexinfo-0.3.1                Utility to regenerate the GNU info page index
libffi-3.2.1_3                 Foreign Function Interface
pkg-1.10.5_5                   Package manager
python-2.7_3,2                 "meta-port" for the default version of Python interpreter
python2-2_3                    The "meta-port" for version 2 of the Python interpreter
python27-2.7.15                Interpreted object-oriented programming language
readline-7.0.5                 Library for editing command lines as they are typed

Now the Ansible ping module works as desired.

% ansible -i hosts -m ping gluster
gluster1 | SUCCESS => {
"changed": false,
"ping": "pong"
}
gluster4 | SUCCESS => {
"changed": false,
"ping": "pong"
}
gluster5 | SUCCESS => {
"changed": false,
"ping": "pong"
}
gluster3 | SUCCESS => {
"changed": false,
"ping": "pong"
}
gluster2 | SUCCESS => {
"changed": false,
"ping": "pong"
}
gluster6 | SUCCESS => {
"changed": false,
"ping": "pong"
}

GlusterFS Volume Options

GlusterFS has a lot of options for setting up volumes. They are described in the GlusterFS Administration Guide in the Setting up GlusterFS Volumes part. Here they are:

Distributed – Distributed volumes distribute files across the bricks in the volume. You can use distributed volumes where the requirement is to scale storage and the redundancy is either not important or is provided by other hardware/software layers.

Replicated – Replicated volumes replicate files across bricks in the volume. You can use replicated volumes in environments where high-availability and high-reliability are critical.

Distributed Replicated – Distributed replicated volumes distribute files across replicated bricks in the volume. You can use distributed replicated volumes in environments where the requirement is to scale storage and high-reliability is critical. Distributed replicated volumes also offer improved read performance in most environments.

Dispersed – Dispersed volumes are based on erasure codes, providing space-efficient protection against disk or server failures. It stores an encoded fragment of the original file to each brick in a way that only a subset of the fragments is needed to recover the original file. The number of bricks that can be missing without losing access to data is configured by the administrator on volume creation time.

Distributed Dispersed – Distributed dispersed volumes distribute files across dispersed subvolumes. This has the same advantages as distributed replicated volumes, but uses disperse to store the data in the bricks.

Striped [Deprecated] – Striped volumes stripe data across bricks in the volume. For best results, you should use striped volumes only in high concurrency environments accessing very large files.

Distributed Striped [Deprecated] – Distributed striped volumes stripe data across two or more nodes in the cluster. You should use distributed striped volumes where the requirement is to scale storage and in high concurrency environments accessing very large files is critical.

Distributed Striped Replicated [Deprecated] – Distributed striped replicated volumes distributes striped data across replicated bricks in the cluster. For best results, you should use distributed striped replicated volumes in highly concurrent environments where parallel access of very large files and performance is critical. In this release, configuration of this volume type is supported only for Map Reduce workloads.

Striped Replicated [Deprecated] – Striped replicated volumes stripe data across replicated bricks in the cluster. For best results, you should use striped replicated volumes in highly concurrent environments where there is parallel access of very large files and performance is critical. In this release, configuration of this volume type is supported only for Map Reduce workloads.

Of all the types above that are still supported, the Dispersed volume seems to be the best choice. Like Minio, dispersed volumes are based on erasure codes.

As we have 6 servers we will use a 4 + 2 setup, which is a logical RAID6 across these 6 servers. This means that we will be able to lose 2 of them without a service outage. This also means that if we upload a 100 MB file to our volume, we will use 150 MB of space across these 6 servers, with 25 MB on each node.
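
Below is a minimal sh(1) sketch of that arithmetic – the erasure.sh script and its name are only my illustration, not anything provided by GlusterFS – so you can plug in other disperse-data/redundancy combinations.

#!/bin/sh
# erasure.sh - rough raw space calculator for a dispersed volume
# usage: sh erasure.sh SIZE-IN-MB DISPERSE-DATA REDUNDANCY
SIZE=${1:-100}   # file size in MB
K=${2:-4}        # disperse-data bricks
M=${3:-2}        # redundancy bricks

PER_BRICK=$(( SIZE / K ))            # each brick stores an encoded SIZE/K fragment
TOTAL=$(( PER_BRICK * ( K + M ) ))   # raw space used across all bricks

echo "${SIZE} MB file on a ${K}+${M} volume:"
echo "  ${PER_BRICK} MB per brick on $(( K + M )) bricks"
echo "  ${TOTAL} MB raw space used in total"

Running it with our numbers confirms the math above.

vbhost % sh erasure.sh 100 4 2
100 MB file on a 4+2 volume:
  25 MB per brick on 6 bricks
  150 MB raw space used in total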

We can visualize this with the following ASCII diagram.

+-----------+ +-----------+ +-----------+ +-----------+ +-----------+ +-----------+
|  gluster1 | |  gluster2 | |  gluster3 | |  gluster4 | |  gluster5 | |  gluster6 |
|           | |           | |           | |           | |           | |           |
|    brick1 | |    brick2 | |    brick3 | |    brick4 | |    brick5 | |    brick6 |
+-----+-----+ +-----+-----+ +-----+-----+ +-----+-----+ +-----+-----+ +-----+-----+
      |             |             |             |             |             |
    25|MB         25|MB         25|MB         25|MB         25|MB         25|MB
      |             |             |             |             |             |
      +-------------+-------------+------+------+-------------+-------------+
                                         |
                                      100|MB
                                         |
                                     +---+---+
                                     | file0 |
                                     +-------+

Deploy GlusterFS Cluster

We will use gluster-setup.yml as our Ansible playbook.

Let’s create something simple for a start – for example a task that always installs the latest Python package.

vbhost % cat gluster-setup.yml
---
- name: Install and Setup GlusterFS on FreeBSD
  hosts: gluster
  user: root
  tasks:

  - name: Install Latest Python Package
    pkgng:
      name: python
      state: latest

We will now execute it.

vbhost % ansible-playbook -i hosts gluster-setup.yml

PLAY [Install and Setup GlusterFS on FreeBSD] **********************************

TASK [Gathering Facts] *********************************************************
ok: [gluster3]
ok: [gluster5]
ok: [gluster1]
ok: [gluster4]
ok: [gluster2]
ok: [gluster6]

TASK [Install Latest Python Package] *******************************************
ok: [gluster4]
ok: [gluster2]
ok: [gluster5]
ok: [gluster3]
ok: [gluster1]
ok: [gluster6]

PLAY RECAP *********************************************************************
gluster1                   : ok=2    changed=0    unreachable=0    failed=0
gluster2                   : ok=2    changed=0    unreachable=0    failed=0
gluster3                   : ok=2    changed=0    unreachable=0    failed=0
gluster4                   : ok=2    changed=0    unreachable=0    failed=0
gluster5                   : ok=2    changed=0    unreachable=0    failed=0
gluster6                   : ok=2    changed=0    unreachable=0    failed=0

We had just installed Python on these machines, so no update was needed.

As we will be creating a cluster we need time synchronization between its nodes. We will use the most obvious solution – the ntpd(8) daemon that is in the FreeBSD base system. These lines are added to our gluster-setup.yml playbook to achieve this goal.

  - name: Enable NTPD Service
    raw: sysrc ntpd_enable=YES

  - name: Start NTPD Service
    service:
      name: ntpd
      state: started

After executing the playbook again with the ansible-playbook -i hosts gluster-setup.yml command we will see additional output as the one shown below.

TASK [Enable NTPD Service] ************************************************
changed: [gluster2]
changed: [gluster1]
changed: [gluster4]
changed: [gluster5]
changed: [gluster3]
changed: [gluster6]

TASK [Start NTPD Service] ******************************************************
changed: [gluster5]
changed: [gluster4]
changed: [gluster2]
changed: [gluster1]
changed: [gluster3]
changed: [gluster6]

Random verification of the NTP service.

vbhost % ssh gluster1 ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 0.freebsd.pool. .POOL.          16 p    -   64    0    0.000    0.000   0.000
 ntp.ifj.edu.pl  10.0.2.4         3 u    1   64    1  119.956  -345759  32.552
 news-archive.ic 229.30.220.210   2 u    -   64    1   60.533  -345760  21.104

Now we need to install GlusterFS on the FreeBSD machines – the glusterfs package.

We will add an appropriate section to the playbook.

  - name: Install Latest GlusterFS Package
    pkgng:
      state: latest
      name:
      - glusterfs
      - ncdu

You can add more than one package to the pkgng Ansible module – for example I have also added the ncdu package.

You can read more about the pkgng Ansible module by typing the ansible-doc pkgng command, or view at least its short version with the -s argument.

vbhost % ansible-doc -s pkgng
- name: Package manager for FreeBSD >= 9.0
  pkgng:
      annotation:            # A comma-separated list of keyvalue-pairs of the form `<+/-/:><key>[=<value>]'. A `+' denotes adding
                               an annotation, a `-' denotes removing an annotation, and `:' denotes
                               modifying an annotation. If setting or modifying annotations, a value
                               must be provided.
      autoremove:            # Remove automatically installed packages which are no longer needed.
      cached:                # Use local package base instead of fetching an updated one.
      chroot:                # Pkg will chroot in the specified environment. Can not be used together with `rootdir' or `jail'
                               options.
      jail:                  # Pkg will execute in the given jail name or id. Can not be used together with `chroot' or `rootdir'
                               options.
      name:                  # (required) Name or list of names of packages to install/remove.
      pkgsite:               # For pkgng versions before 1.1.4, specify packagesite to use for downloading packages. If not
                               specified, use settings from `/usr/local/etc/pkg.conf'. For newer
                               pkgng versions, specify a the name of a repository configured in
                               `/usr/local/etc/pkg/repos'.
      rootdir:               # For pkgng versions 1.5 and later, pkg will install all packages within the specified root directory.
                               Can not be used together with `chroot' or `jail' options.
      state:                 # State of the package. Note: "latest" added in 2.7

You can read more about this particular module on the following – https://docs.ansible.com/ansible/latest/modules/pkgng_module.html – Ansible page.

We will now add the GlusterFS nodes to the /etc/hosts file and add the autoboot_delay=1 parameter to the /boot/loader.conf file, so our systems will boot 9 seconds faster, as 10 is the default delay setting.

Here is our gluster-setup.yml Ansible playbook so far.

vbhost % cat gluster-setup.yml
---
- name: Install and Setup GlusterFS on FreeBSD
  hosts: gluster
  user: root
  tasks:

  - name: Install Latest Python Package
    pkgng:
      name: python
      state: latest

  - name: Enable NTPD Service
    raw: sysrc ntpd_enable=YES

  - name: Start NTPD Service
    service:
      name: ntpd
      state: started

  - name: Install Latest GlusterFS Package
    pkgng:
      state: latest
      name:
      - glusterfs
      - ncdu

  - name: Add Nodes to /etc/hosts File
    blockinfile:
      path: /etc/hosts
      block: |
        10.0.10.11 gluster1
        10.0.10.12 gluster2
        10.0.10.13 gluster3
        10.0.10.14 gluster4
        10.0.10.15 gluster5
        10.0.10.16 gluster6

  - name: Add autoboot_delay to /boot/loader.conf File
    lineinfile:
      path: /boot/loader.conf
      line: autoboot_delay=1
      create: yes

Here is the result of the execution of this playbook.

vbhost % ansible-playbook -i hosts gluster-setup.yml

PLAY [Install and Setup GlusterFS on FreeBSD] **********************************

TASK [Gathering Facts] *********************************************************
ok: [gluster3]
ok: [gluster5]
ok: [gluster1]
ok: [gluster4]
ok: [gluster2]
ok: [gluster6]

TASK [Install Latest Python Package] *******************************************
ok: [gluster4]
ok: [gluster2]
ok: [gluster5]
ok: [gluster3]
ok: [gluster1]
ok: [gluster6]

TASK [Install Latest GlusterFS Package] ****************************************
ok: [gluster2]
ok: [gluster1]
ok: [gluster3]
ok: [gluster5]
ok: [gluster4]
ok: [gluster6]

TASK [Add Nodes to /etc/hosts File] ********************************************
changed: [gluster5]
changed: [gluster4]
changed: [gluster2]
changed: [gluster3]
changed: [gluster1]
changed: [gluster6]

TASK [Enable GlusterFS Service] ************************************************
changed: [gluster1]
changed: [gluster4]
changed: [gluster2]
changed: [gluster3]
changed: [gluster5]
changed: [gluster6]

TASK [Add autoboot_delay to /boot/loader.conf File] ****************************
changed: [gluster3]
changed: [gluster2]
changed: [gluster5]
changed: [gluster1]
changed: [gluster4]
changed: [gluster6]

PLAY RECAP *********************************************************************
gluster1                   : ok=6    changed=3    unreachable=0    failed=0
gluster2                   : ok=6    changed=3    unreachable=0    failed=0
gluster3                   : ok=6    changed=3    unreachable=0    failed=0
gluster4                   : ok=6    changed=3    unreachable=0    failed=0
gluster5                   : ok=6    changed=3    unreachable=0    failed=0
gluster6                   : ok=6    changed=3    unreachable=0    failed=0

Let’s check that the FreeBSD machines can now ping each other by name.

vbhost % ssh gluster6 cat /etc/hosts
# LOOPBACK
127.0.0.1      localhost localhost.my.domain
::1            localhost localhost.my.domain

# BEGIN ANSIBLE MANAGED BLOCK
10.0.10.11 gluster1
10.0.10.12 gluster2
10.0.10.13 gluster3
10.0.10.14 gluster4
10.0.10.15 gluster5
10.0.10.16 gluster6
# END ANSIBLE MANAGED BLOCK

vbhost % ssh gluster1 ping -c 1 gluster3
PING gluster3 (10.0.10.13): 56 data bytes
64 bytes from 10.0.10.13: icmp_seq=0 ttl=64 time=1.924 ms

--- gluster3 ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 1.924/1.924/1.924/0.000 ms

… and our /boot/loader.conf file.

vbhost % ssh gluster4 cat /boot/loader.conf
autoboot_delay=1

Now we need to create directories for the GlusterFS data. Without a better idea we will use the /data directory, with /data/volume1 as the directory for volume1 and the bricks placed in directories such as /data/volume1/brick1. In this setup I will use just one brick per server, but in a production environment you would probably use one brick per physical disk.

Here is the playbook command we will use to create these directories on FreeBSD machines.

  - name: Create brick* Directories for volume1
    raw: mkdir -p /data/volume1/brick` hostname | grep -o -E '[0-9]+' `

After executing the playbook with the ansible-playbook -i hosts gluster-setup.yml command the directories have been created.

vbhost % ssh gluster2 find /data -ls | column -t
2247168  8  drwxr-xr-x  3  root  wheel  512  Dec  28  17:48  /data
2247169  8  drwxr-xr-x  3  root  wheel  512  Dec  28  17:48  /data/volume1
2247170  8  drwxr-xr-x  2  root  wheel  512  Dec  28  17:48  /data/volume1/brick2

We now need to add glusterd_enable=YES to the /etc/rc.conf file on the GlusterFS nodes and then start the GlusterFS service.

This is the snippet we will add to our playbook.

  - name: Enable GlusterFS Service
    raw: sysrc glusterd_enable=YES

  - name: Start GlusterFS Service
    service:
      name: glusterd
      state: started

Let’s make a quick random verification.

vbhost % ssh gluster4 service glusterd status
glusterd is running as pid 2684.

Now we need to proceed to the last part of the GlusterFS setup – creating the volume.

We will do this from gluster1 – the 1st node of the GlusterFS cluster.

First we need to peer probe the other nodes.

gluster1 # gluster peer probe gluster1
peer probe: success. Probe on localhost not needed
gluster1 # gluster peer probe gluster2
peer probe: success.
gluster1 # gluster peer probe gluster3
peer probe: success.
gluster1 # gluster peer probe gluster4
peer probe: success.
gluster1 # gluster peer probe gluster5
peer probe: success.
gluster1 # gluster peer probe gluster6
peer probe: success.
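
Nothing stops you from probing the peers in a loop instead of typing six commands – a quick sh(1) equivalent of the above would be:

gluster1 # for N in 1 2 3 4 5 6; do gluster peer probe gluster${N}; done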

Then we can create the volume. We will need to use the force option because in our example setup we use directories on the root partition.

gluster1 # gluster volume create volume1 \
             disperse-data 4 \
             redundancy 2 \
             transport tcp \
             gluster1:/data/volume1/brick1 \
             gluster2:/data/volume1/brick2 \
             gluster3:/data/volume1/brick3 \
             gluster4:/data/volume1/brick4 \
             gluster5:/data/volume1/brick5 \
             gluster6:/data/volume1/brick6 \
             force
volume create: volume1: success: please start the volume to access data

We can now start the volume1 GlusterFS volume.

gluster1 # gluster volume start volume1
volume start: volume1: success

gluster1 # gluster volume status volume1
Status of volume: volume1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster1:/data/volume1/brick1         N/A       N/A        N       N/A
Brick gluster2:/data/volume1/brick2         N/A       N/A        N       N/A
Brick gluster3:/data/volume1/brick3         N/A       N/A        N       N/A
Brick gluster4:/data/volume1/brick4         N/A       N/A        N       N/A
Brick gluster5:/data/volume1/brick5         N/A       N/A        N       N/A
Brick gluster6:/data/volume1/brick6         N/A       N/A        N       N/A
Self-heal Daemon on localhost               N/A       N/A        N       644
Self-heal Daemon on gluster6                N/A       N/A        N       643
Self-heal Daemon on gluster5                N/A       N/A        N       647
Self-heal Daemon on gluster2                N/A       N/A        N       645
Self-heal Daemon on gluster3                N/A       N/A        N       645
Self-heal Daemon on gluster4                N/A       N/A        N       645

Task Status of Volume volume1
------------------------------------------------------------------------------
There are no active volume tasks

gluster1 # gluster volume info volume1

Volume Name: volume1
Type: Disperse
Volume ID: 68cf9607-16bc-4550-9b6b-16a5c7656f51
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: gluster1:/data/volume1/brick1
Brick2: gluster2:/data/volume1/brick2
Brick3: gluster3:/data/volume1/brick3
Brick4: gluster4:/data/volume1/brick4
Brick5: gluster5:/data/volume1/brick5
Brick6: gluster6:/data/volume1/brick6
Options Reconfigured:
nfs.disable: on
transport.address-family: inet

Here are the contents of a currently unused/empty brick.

gluster1 # find /data/volume1/brick1
/data/volume1/brick1
/data/volume1/brick1/.glusterfs
/data/volume1/brick1/.glusterfs/indices
/data/volume1/brick1/.glusterfs/indices/xattrop
/data/volume1/brick1/.glusterfs/indices/entry-changes
/data/volume1/brick1/.glusterfs/quarantine
/data/volume1/brick1/.glusterfs/quarantine/stub-00000000-0000-0000-0000-000000000008
/data/volume1/brick1/.glusterfs/changelogs
/data/volume1/brick1/.glusterfs/changelogs/htime
/data/volume1/brick1/.glusterfs/changelogs/csnap
/data/volume1/brick1/.glusterfs/brick1.db
/data/volume1/brick1/.glusterfs/brick1.db-wal
/data/volume1/brick1/.glusterfs/brick1.db-shm
/data/volume1/brick1/.glusterfs/00
/data/volume1/brick1/.glusterfs/00/00
/data/volume1/brick1/.glusterfs/00/00/00000000-0000-0000-0000-000000000001
/data/volume1/brick1/.glusterfs/landfill
/data/volume1/brick1/.glusterfs/unlink
/data/volume1/brick1/.glusterfs/health_check

The 6-node GlusterFS cluster is now complete and volume1 is available to use.

Alternative

The GlusterFS documentation’s Quick Start Guide also suggests using Ansible to deploy and manage GlusterFS with the gluster-ansible repository or gluster-ansible-cluster, but they have the requirements below.

  • Ansible version 2.5 or above.
  • GlusterFS version 3.2 or above.

As GlusterFS on FreeBSD is at version 3.11.1 I did not use them.

FreeBSD Client

We will now use another VirtualBox machine – also based on the same FreeBSD 12.0-RELEASE image – to create a FreeBSD Client machine that will mount our volume1 volume.

We will need to install the glusterfs package with the pkg(8) command. Then we will use the mount_glusterfs command to mount the volume. Keep in mind that in order to mount a GlusterFS volume the FUSE (fuse.ko) kernel module is needed.

client # pkg install glusterfs

client # kldload fuse

client # mount_glusterfs 10.0.10.11:volume1 /mnt

client # echo $?
0

client # mount
/dev/gpt/rootfs on / (ufs, local, soft-updates)
devfs on /dev (devfs, local, multilabel)
/dev/fuse on /mnt (fusefs, local, synchronous)

client # ls /mnt
ls: /mnt: Socket is not connected

It is mounted but does not work. The solution to this problem is to add appropriate /etc/hosts entries for the GlusterFS nodes on the client.

client # cat /etc/hosts
::1                     localhost localhost.my.domain
127.0.0.1               localhost localhost.my.domain

10.0.10.11 gluster1
10.0.10.12 gluster2
10.0.10.13 gluster3
10.0.10.14 gluster4
10.0.10.15 gluster5
10.0.10.16 gluster6

Let’s mount it again, now with the needed /etc/hosts entries.

client # umount /mnt

client # mount_glusterfs gluster1:volume1 /mnt

client # ls /mnt
client #

We now have our GlusterFS volume properly mounted and working on the FreeBSD Client machine.
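
Keep in mind that both the kldload fuse and the mount_glusterfs commands are one-shot. If the FreeBSD Client should come up with the fuse module already loaded after a reboot, one way – just a sketch, not used in this setup – is the kld_list variable in the /etc/rc.conf file.

client # sysrc kld_list+=" fuse"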

Let’s write a file there with dd(8) to see how it works.

client # dd < /dev/zero > FILE bs=1m count=100 status=progress
  73400320 bytes (73 MB, 70 MiB) transferred 1.016s, 72 MB/s
100+0 records in
100+0 records out
104857600 bytes transferred in 1.565618 secs (66975227 bytes/sec)

Let’s see how it looks in the brick directory.

gluster1 # ls -lh /data/volume1/brick1
total 25640
drw-------  10 root  wheel   512B Jan  3 18:31 .glusterfs
-rw-r--r--   2 root  wheel    25M Jan  3 18:31 FILE

gluster1 # find /data
/data/
/data/volume1
/data/volume1/brick1
/data/volume1/brick1/.glusterfs
/data/volume1/brick1/.glusterfs/indices
/data/volume1/brick1/.glusterfs/indices/xattrop
/data/volume1/brick1/.glusterfs/indices/xattrop/xattrop-aed814f1-0eb0-46a1-b569-aeddf5048e06
/data/volume1/brick1/.glusterfs/indices/entry-changes
/data/volume1/brick1/.glusterfs/quarantine
/data/volume1/brick1/.glusterfs/quarantine/stub-00000000-0000-0000-0000-000000000008
/data/volume1/brick1/.glusterfs/changelogs
/data/volume1/brick1/.glusterfs/changelogs/htime
/data/volume1/brick1/.glusterfs/changelogs/csnap
/data/volume1/brick1/.glusterfs/brick1.db
/data/volume1/brick1/.glusterfs/brick1.db-wal
/data/volume1/brick1/.glusterfs/brick1.db-shm
/data/volume1/brick1/.glusterfs/00
/data/volume1/brick1/.glusterfs/00/00
/data/volume1/brick1/.glusterfs/00/00/00000000-0000-0000-0000-000000000001
/data/volume1/brick1/.glusterfs/landfill
/data/volume1/brick1/.glusterfs/unlink
/data/volume1/brick1/.glusterfs/health_check
/data/volume1/brick1/.glusterfs/ac
/data/volume1/brick1/.glusterfs/ac/b4
/data/volume1/brick1/.glusterfs/11
/data/volume1/brick1/.glusterfs/11/50
/data/volume1/brick1/.glusterfs/11/50/115043ca-420f-48b5-af05-c9552db2e585
/data/volume1/brick1/FILE

Linux Client

I will also show how to mount a GlusterFS volume on the Red Hat clone CentOS in its latest 7.6 incarnation. It requires the glusterfs-fuse package to be installed.

[root@localhost ~]# yum install glusterfs-fuse


[root@localhost ~]# rpm -q --filesbypkg glusterfs-fuse | grep /sbin/mount.glusterfs
glusterfs-fuse            /sbin/mount.glusterfs

[root@localhost ~]# mount.glusterfs 10.0.10.11:volume1 /mnt
Mount failed. Please check the log file for more details.

Similarly to the FreeBSD Client, the /etc/hosts entries are needed.

[root@localhost ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

10.0.10.11 gluster1
10.0.10.12 gluster2
10.0.10.13 gluster3
10.0.10.14 gluster4
10.0.10.15 gluster5
10.0.10.16 gluster6

[root@localhost ~]# mount.glusterfs 10.0.10.11:volume1 /mnt

[root@localhost ~]# ls /mnt
FILE

[root@localhost ~]# mount
10.0.10.11:volume1 on /mnt type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

With appropriate /etc/hosts entries it works as desired. We see the FILE file generated from the FreeBSD Client machine.

GlusterFS Cluster Redundancy

After messing with the volume, creating and deleting various files, I also tested its redundancy. In theory this RAID6-equivalent protection should survive the loss of two of the six servers. After shutting down two of the VirtualBox machines the volume is indeed still available and ready to use.
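
If you would like to repeat that test from the host, a sketch of it – assuming the VirtualBox machine names match the node names – looks like this.

vbhost % VBoxManage controlvm gluster5 poweroff
vbhost % VBoxManage controlvm gluster6 poweroff

client # ls /mnt
FILE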

Closing Thoughts

It is a pity that FreeBSD does not provide a more modern GlusterFS package, as currently only version 3.11.1 is available.

EOF

MongoDB Replica Set Cluster on Oracle Linux

Meet MongoDB.

mongodb-logo

MongoDB is a free and open-source cross-platform document database built for scalability and flexibility. Classified as a NoSQL database, MongoDB uses JSON-like documents with schemas. MongoDB is a distributed database at its core, so high availability, horizontal scaling, and geographic distribution are built in and easy to use.

Today I will show you how to install and configure a MongoDB Replica Set cluster with 4 data nodes and 1 arbiter node. The minimal replica set configuration is three members; since MongoDB 3.0 a replica set can have up to 50 members, of which at most 7 are voting. The replica set must have an odd number of voting members. As I have always used FreeBSD or its forks for various setups, today I will use the latest Oracle Linux 7.5 for this example.

Architecture

Below is the POOR MAN’S ASCII ARCHITECT diagram showing the five node MongoDB replica set cluster installation.

mongo0 [DATA]       |   |
/var/lib/mongo -- > |   |
                    |   |
mongo1 [DATA]       | M |
/var/lib/mongo -- > | o |
                    | n |
mongo2 [DATA]       | g |
/var/lib/mongo -- > | o |
                    |   |
mongo3 [DATA]       | D |
/var/lib/mongo -- > | B |
                    |   |
mongo4 [ARBITER]    |   |
/var/lib/mongo -- x |   |

The MongoDB project visualizes this a little differently, as shown below.

mongodb-replica-set-four-members-one-arbiter

VirtualBox

For convenience we will use VirtualBox virtual machines for our MongoDB replica set cluster setup. Below is the list of VirtualBox virtual machines used in the setup.

virtualbox-mongodb-list

We will use VirtualBox NAT Network connectivity for the virtual machines communication. Below are settings for the NAT Network we will use here.

virtualbox-mongodb-nat-01

virtualbox-mongodb-nat-02

virtualbox-mongodb-nat-03-forward

virtualbox-mongodb-nat-04-vm

We can verify that the port forwarding is working with the sockstat command on the host FreeBSD system.

host % sockstat -l4
USER     COMMAND    PID   FD PROTO  LOCAL ADDRESS         FOREIGN ADDRESS      
vermaden VBoxNetNAT 13138 17 udp4   *:*                   *:*
vermaden VBoxNetNAT 13138 19 tcp4   *:2200                *:*
vermaden VBoxNetNAT 13138 20 tcp4   *:2201                *:*
vermaden VBoxNetNAT 13138 21 tcp4   *:2202                *:*
vermaden VBoxNetNAT 13138 22 tcp4   *:2203                *:*
vermaden VBoxNetNAT 13138 23 tcp4   *:2204                *:*
root     sshd       986   4  tcp4   *:22                  *:*
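
Assuming the 2200-2204 range maps one to one to port 22 on the mongo0-mongo4 machines, connecting from the host would look like this.

host % ssh -p 2200 root@127.0.0.1    # mongo0
host % ssh -p 2204 root@127.0.0.1    # mongo4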

The table below lists all MongoDB nodes and their IP addresses and roles that we will use.

NODE    ADDRESS        ROLE
mongo0  10.0.10.10/24  DATA
mongo1  10.0.10.11/24  DATA
mongo2  10.0.10.12/24  DATA
mongo3  10.0.10.13/24  DATA
mongo4  10.0.10.14/24  ARBITER (does not contain data)

The ‘last’ mongo4 node will have the ARBITER role while the mongo0 to mongo3 nodes will have the DATA role. Similarly to the Distributed Object Storage with Minio on FreeBSD setup, you can place two nodes (mongo0 and mongo2 for example) in the primary datacenter, the other two nodes (mongo1 and mongo3 for example) in the secondary datacenter, and the mongo4 ARBITER node in a third datacenter or some other location reachable from both the primary and secondary datacenters.

To avoid doing the same thing five times I installed the first node (mongo0), updated it, made some preconfiguration, then powered it off and cloned it into the remaining nodes. Remember to regenerate the MAC addresses in the VirtualBox interface during the cloning process to avoid ‘strange’ connectivity problems 🙂
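
For reference, the cloning can also be done from the command line – a sketch with VBoxManage, which by default generates fresh MAC addresses for the clone, would be:

host % VBoxManage clonevm mongo0 --name mongo1 --register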

After cloning, the only files that need to be modified are these:

  • /etc/sysconfig/network-scripts/ifcfg-eth0
  • /etc/hostname

Below is an example for the mongo4 machine.

[root@mongo4 ~]# grep 4$ /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/hostname 
/etc/sysconfig/network-scripts/ifcfg-eth0:IPADDR=10.0.10.14
/etc/sysconfig/network-scripts/ifcfg-eth0:PREFIX=24
/etc/hostname:mongo4
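
A tiny hypothetical helper – the fix-clone.sh name and its argument are mine, not part of any tool – that applies both changes on a freshly cloned node could look like this.

#!/bin/sh
# fix-clone.sh - set the IP address and hostname on a cloned mongoN node
# usage: sh fix-clone.sh 4    (for mongo4 / 10.0.10.14)
N="$1"
sed -i "s/^IPADDR=.*/IPADDR=10.0.10.1${N}/" /etc/sysconfig/network-scripts/ifcfg-eth0
echo "mongo${N}" > /etc/hostname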

To distinguish the commands typed on the host system from those typed on the mongoX virtual machines I use two different prompts; this way it should be obvious what command to execute and where.

Command on the host system.

host % command

Command on the mongoX virtual machine.

[root@mongoX ~]# command

Linux

I have installed Oracle Linux 7.5 on a single primary / partition with the XFS filesystem, as shown in the images below. This is a Minimal install with a statically configured network connection in VirtualBox NAT Network mode.

oracle-linux-7.5-install-01

oracle-linux-7.5-install-02

oracle-linux-7.5-install-03

If it makes life easier for anybody, here is the /root/anaconda-ks.cfg file.

[root@mongo0 ~]# cat /root/anaconda-ks.cfg 
#version=DEVEL
# System authorization information
auth --enableshadow --passalgo=sha512
repo --name="Server-HighAvailability" --baseurl=file:///run/install/repo/addons/HighAvailability
repo --name="Server-ResilientStorage" --baseurl=file:///run/install/repo/addons/ResilientStorage
# Use CDROM installation media
cdrom
# Use graphical install
graphical
# Run the Setup Agent on first boot
firstboot --enable
ignoredisk --only-use=sda
# Keyboard layouts
keyboard --vckeymap=us --xlayouts='us'
# System language
lang en_US.UTF-8

# Network information
network  --bootproto=static --device=enp0s3 --gateway=10.0.10.1 --ip=10.0.10.10 --nameserver=1.1.1.1 --netmask=255.255.255.0 --ipv6=auto --activate
network  --hostname=mongo0

# Root password
rootpw --iscrypted $6$EzciOQdLpJD8IJTv$wnAvxjgP.JluqsRAPu/mbTv8Upvg02AAb4.T5zBi6VMGdNfNsiRw7Gp0FyRtwAGW5Orpqc1nRwtRFwLQDJU/l.
# System services
services --disabled="chronyd"
# System timezone
timezone Europe/Warsaw --isUtc --nontp
# System bootloader configuration
bootloader --location=mbr --boot-drive=sda
# Partition clearing information
clearpart --none --initlabel
# Disk partitioning information
part / --fstype="xfs" --ondisk=sda --size=16383 --label=ROOT

%packages
@^minimal
@core

%end

%addon com_redhat_kdump --disable --reserve-mb='auto'

%end

%anaconda
pwpolicy root --minlen=6 --minquality=1 --notstrict --nochanges --notempty
pwpolicy user --minlen=6 --minquality=1 --notstrict --nochanges --emptyok
pwpolicy luks --minlen=6 --minquality=1 --notstrict --nochanges --notempty
%end

After the first boot I will yum update the system to the latest version.

[root@mongo0 ~]# yum update
Loaded plugins: ulninfo
Resolving Dependencies
--> Running transaction check
---> Package initscripts.x86_64 0:9.49.41-1.0.1.el7 will be updated
---> Package initscripts.x86_64 0:9.49.41-1.0.3.el7 will be an update
---> Package kernel-uek.x86_64 0:4.1.12-124.14.1.el7uek will be installed
---> Package kernel-uek-firmware.noarch 0:4.1.12-124.14.1.el7uek will be installed
---> Package krb5-libs.x86_64 0:1.15.1-18.el7 will be updated
---> Package krb5-libs.x86_64 0:1.15.1-19.el7 will be an update
---> Package selinux-policy.noarch 0:3.13.1-192.0.1.el7 will be updated
---> Package selinux-policy.noarch 0:3.13.1-192.0.1.el7_5.3 will be an update
---> Package selinux-policy-targeted.noarch 0:3.13.1-192.0.1.el7 will be updated
---> Package selinux-policy-targeted.noarch 0:3.13.1-192.0.1.el7_5.3 will be an update
---> Package tzdata.noarch 0:2018c-1.el7 will be updated
---> Package tzdata.noarch 0:2018d-1.el7 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================================
 Package                        Arch          Version                       Repository         Size
====================================================================================================
Installing:
 kernel-uek                     x86_64        4.1.12-124.14.1.el7uek        ol7_UEKR4          46 M
 kernel-uek-firmware            noarch        4.1.12-124.14.1.el7uek        ol7_UEKR4         2.5 M
Updating:
 initscripts                    x86_64        9.49.41-1.0.3.el7             ol7_latest        437 k
 krb5-libs                      x86_64        1.15.1-19.el7                 ol7_latest        747 k
 selinux-policy                 noarch        3.13.1-192.0.1.el7_5.3        ol7_latest        452 k
 selinux-policy-targeted        noarch        3.13.1-192.0.1.el7_5.3        ol7_latest        6.6 M
 tzdata                         noarch        2018d-1.el7                   ol7_latest        480 k

Transaction Summary
====================================================================================================
Install  2 Packages
Upgrade  5 Packages

Total download size: 57 M
Is this ok [y/d/N]: y
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
warning: /var/cache/yum/x86_64/7Server/ol7_latest/packages/initscripts-9.49.41-1.0.3.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Public key for initscripts-9.49.41-1.0.3.el7.x86_64.rpm is not installed
(1/7): initscripts-9.49.41-1.0.3.el7.x86_64.rpm                              | 437 kB  00:00:02     
(2/7): selinux-policy-3.13.1-192.0.1.el7_5.3.noarch.rpm                      | 452 kB  00:00:01     
(3/7): krb5-libs-1.15.1-19.el7.x86_64.rpm                                    | 747 kB  00:00:04     
(4/7): tzdata-2018d-1.el7.noarch.rpm                                         | 480 kB  00:00:02     
Public key for kernel-uek-firmware-4.1.12-124.14.1.el7uek.noarch.rpm is not installed  00:01:09 ETA 
(5/7): kernel-uek-firmware-4.1.12-124.14.1.el7uek.noarch.rpm                 | 2.5 MB  00:00:13     
(6/7): selinux-policy-targeted-3.13.1-192.0.1.el7_5.3.noarch.rpm             | 6.6 MB  00:00:22     
(7/7): kernel-uek-4.1.12-124.14.1.el7uek.x86_64.rpm                          |  46 MB  00:01:19     
----------------------------------------------------------------------------------------------------
Total                                                               732 kB/s |  57 MB  00:01:19     
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
Importing GPG key 0xEC551F03:
 Userid     : "Oracle OSS group (Open Source Software group) <build@oss.oracle.com>"
 Fingerprint: 4214 4123 fecf c55b 9086 313d 72f9 7b74 ec55 1f03
 Package    : 7:oraclelinux-release-7.5-1.0.3.el7.x86_64 (@anaconda/7.5)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
Is this ok [y/N]: y
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : selinux-policy-3.13.1-192.0.1.el7_5.3.noarch                                    1/12 
  Updating   : initscripts-9.49.41-1.0.3.el7.x86_64                                            2/12 
  Installing : kernel-uek-firmware-4.1.12-124.14.1.el7uek.noarch                               3/12 
  Installing : kernel-uek-4.1.12-124.14.1.el7uek.x86_64                                        4/12 
  Updating   : selinux-policy-targeted-3.13.1-192.0.1.el7_5.3.noarch                           5/12 
  Updating   : tzdata-2018d-1.el7.noarch                                                       6/12 
  Updating   : krb5-libs-1.15.1-19.el7.x86_64                                                  7/12 
  Cleanup    : selinux-policy-targeted-3.13.1-192.0.1.el7.noarch                               8/12 
  Cleanup    : selinux-policy-3.13.1-192.0.1.el7.noarch                                        9/12 
  Cleanup    : tzdata-2018c-1.el7.noarch                                                      10/12 
  Cleanup    : krb5-libs-1.15.1-18.el7.x86_64                                                 11/12 
  Cleanup    : initscripts-9.49.41-1.0.1.el7.x86_64                                           12/12 
  Verifying  : kernel-uek-4.1.12-124.14.1.el7uek.x86_64                                        1/12 
  Verifying  : selinux-policy-targeted-3.13.1-192.0.1.el7_5.3.noarch                           2/12 
  Verifying  : kernel-uek-firmware-4.1.12-124.14.1.el7uek.noarch                               3/12 
  Verifying  : initscripts-9.49.41-1.0.3.el7.x86_64                                            4/12 
  Verifying  : selinux-policy-3.13.1-192.0.1.el7_5.3.noarch                                    5/12 
  Verifying  : krb5-libs-1.15.1-19.el7.x86_64                                                  6/12 
  Verifying  : tzdata-2018d-1.el7.noarch                                                       7/12 
  Verifying  : initscripts-9.49.41-1.0.1.el7.x86_64                                            8/12 
  Verifying  : tzdata-2018c-1.el7.noarch                                                       9/12 
  Verifying  : krb5-libs-1.15.1-18.el7.x86_64                                                 10/12 
  Verifying  : selinux-policy-3.13.1-192.0.1.el7.noarch                                       11/12 
  Verifying  : selinux-policy-targeted-3.13.1-192.0.1.el7.noarch                              12/12 

Installed:
  kernel-uek.x86_64 0:4.1.12-124.14.1.el7uek   kernel-uek-firmware.noarch 0:4.1.12-124.14.1.el7uek  

Updated:
  initscripts.x86_64 0:9.49.41-1.0.3.el7                                                            
  krb5-libs.x86_64 0:1.15.1-19.el7                                                                  
  selinux-policy.noarch 0:3.13.1-192.0.1.el7_5.3                                                    
  selinux-policy-targeted.noarch 0:3.13.1-192.0.1.el7_5.3                                           
  tzdata.noarch 0:2018d-1.el7                                                                       

Complete!
[root@mongo0 ~]#

As one of the packages was a kernel, I will now reboot the system.

[root@mongo0 ~]# reboot

After the reboot there will be two 4.x kernels installed: the original one that came on the ISO image and the latest one. Let’s remove the unneeded older version.

[root@mongo0 ~]# rpm -qa | grep kernel | sort
kernel-3.10.0-862.el7.x86_64
kernel-tools-3.10.0-862.el7.x86_64
kernel-tools-libs-3.10.0-862.el7.x86_64
kernel-uek-4.1.12-112.16.4.el7uek.x86_64
kernel-uek-4.1.12-124.14.1.el7uek.x86_64
kernel-uek-firmware-4.1.12-112.16.4.el7uek.noarch
kernel-uek-firmware-4.1.12-124.14.1.el7uek.noarch

[root@mongo0 ~]# uname -r
4.1.12-124.14.1.el7uek.x86_64

[root@mongo0 ~]# rpm -e kernel-uek-firmware-4.1.12-112.16.4.el7uek.noarch kernel-uek-4.1.12-112.16.4.el7uek.x86_64

[root@mongo0 ~]# rpm -qa | grep kernel | sort
kernel-3.10.0-862.el7.x86_64
kernel-tools-3.10.0-862.el7.x86_64
kernel-tools-libs-3.10.0-862.el7.x86_64
kernel-uek-4.1.12-124.14.1.el7uek.x86_64
kernel-uek-firmware-4.1.12-124.14.1.el7uek.noarch

Now we will add the MongoDB repository.

[root@mongo0 ~]# cat > /etc/yum.repos.d/mongodb-org-3.6.repo << __EOF
> [mongodb-org-3.6]
> name=MongoDB Repository
> baseurl=https://repo.mongodb.org/yum/redhat/\$releasever/mongodb-org/3.6/x86_64/
> gpgcheck=1
> enabled=1
> gpgkey=https://www.mongodb.org/static/pgp/server-3.6.asc
> __EOF

[root@mongo0 ~]# cat /etc/yum.repos.d/mongodb-org-3.6.repo
[mongodb-org-3.6]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.6/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.6.asc

We will then install the MongoDB package.

[root@mongo0 ~]# yum install mongodb-org
Loaded plugins: ulninfo
Resolving Dependencies
--> Running transaction check
---> Package mongodb-org.x86_64 0:3.6.4-1.el7 will be installed
--> Processing Dependency: mongodb-org-tools = 3.6.4 for package: mongodb-org-3.6.4-1.el7.x86_64
--> Processing Dependency: mongodb-org-shell = 3.6.4 for package: mongodb-org-3.6.4-1.el7.x86_64
--> Processing Dependency: mongodb-org-server = 3.6.4 for package: mongodb-org-3.6.4-1.el7.x86_64
--> Processing Dependency: mongodb-org-mongos = 3.6.4 for package: mongodb-org-3.6.4-1.el7.x86_64
--> Running transaction check
---> Package mongodb-org-mongos.x86_64 0:3.6.4-1.el7 will be installed
---> Package mongodb-org-server.x86_64 0:3.6.4-1.el7 will be installed
---> Package mongodb-org-shell.x86_64 0:3.6.4-1.el7 will be installed
---> Package mongodb-org-tools.x86_64 0:3.6.4-1.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================================
 Package                     Arch            Version                 Repository                Size
====================================================================================================
Installing:
 mongodb-org                 x86_64          3.6.4-1.el7             mongodb-org-3.6          5.8 k
Installing for dependencies:
 mongodb-org-mongos          x86_64          3.6.4-1.el7             mongodb-org-3.6           12 M
 mongodb-org-server          x86_64          3.6.4-1.el7             mongodb-org-3.6           20 M
 mongodb-org-shell           x86_64          3.6.4-1.el7             mongodb-org-3.6           12 M
 mongodb-org-tools           x86_64          3.6.4-1.el7             mongodb-org-3.6           46 M

Transaction Summary
====================================================================================================
Install  1 Package (+4 Dependent packages)

Total download size: 90 M
Installed size: 265 M
Is this ok [y/d/N]: y
Downloading packages:
warning: /var/cache/yum/x86_64/7Server/mongodb-org-3.6/packages/mongodb-org-3.6.4-1.el7.x86_64.rpm: Header V3 RSA/SHA1 Signature, key ID 91fa4ad5: NOKEY
Public key for mongodb-org-3.6.4-1.el7.x86_64.rpm is not installed
(1/5): mongodb-org-3.6.4-1.el7.x86_64.rpm                                    | 5.8 kB  00:00:01     
(2/5): mongodb-org-mongos-3.6.4-1.el7.x86_64.rpm                             |  12 MB  00:00:32     
(3/5): mongodb-org-server-3.6.4-1.el7.x86_64.rpm                             |  20 MB  00:00:57     
(4/5): mongodb-org-shell-3.6.4-1.el7.x86_64.rpm                              |  12 MB  00:00:30     
(5/5): mongodb-org-tools-3.6.4-1.el7.x86_64.rpm                              |  46 MB  00:01:06     
----------------------------------------------------------------------------------------------------
Total                                                               740 kB/s |  90 MB  00:02:04     
Retrieving key from https://www.mongodb.org/static/pgp/server-3.6.asc
Importing GPG key 0x91FA4AD5:
 Userid     : "MongoDB 3.6 Release Signing Key <packaging@mongodb.com>"
 Fingerprint: 2930 adae 8caf 5059 ee73 bb4b 5871 2a22 91fa 4ad5
 From       : https://www.mongodb.org/static/pgp/server-3.6.asc
Is this ok [y/N]: y
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : mongodb-org-shell-3.6.4-1.el7.x86_64                                             1/5 
  Installing : mongodb-org-tools-3.6.4-1.el7.x86_64                                             2/5 
  Installing : mongodb-org-mongos-3.6.4-1.el7.x86_64                                            3/5 
  Installing : mongodb-org-server-3.6.4-1.el7.x86_64                                            4/5 
Created symlink from /etc/systemd/system/multi-user.target.wants/mongod.service to /usr/lib/systemd/system/mongod.service.
  Installing : mongodb-org-3.6.4-1.el7.x86_64                                                   5/5 
  Verifying  : mongodb-org-3.6.4-1.el7.x86_64                                                   1/5 
  Verifying  : mongodb-org-server-3.6.4-1.el7.x86_64                                            2/5 
  Verifying  : mongodb-org-mongos-3.6.4-1.el7.x86_64                                            3/5 
  Verifying  : mongodb-org-tools-3.6.4-1.el7.x86_64                                             4/5 
  Verifying  : mongodb-org-shell-3.6.4-1.el7.x86_64                                             5/5 

Installed:
  mongodb-org.x86_64 0:3.6.4-1.el7                                                                  

Dependency Installed:
  mongodb-org-mongos.x86_64 0:3.6.4-1.el7          mongodb-org-server.x86_64 0:3.6.4-1.el7         
  mongodb-org-shell.x86_64 0:3.6.4-1.el7           mongodb-org-tools.x86_64 0:3.6.4-1.el7          

Complete!
[root@mongo0 ~]#

Network Manager

As we do not need Network Manager we will disable it entirely.

[root@mongo0 ~]# systemctl list-unit-files | grep -i network
dbus-org.freedesktop.NetworkManager.service   enabled 
NetworkManager-dispatcher.service             enabled 
NetworkManager-wait-online.service            enabled 
NetworkManager.service                        enabled 
network-online.target                         static  
network-pre.target                            static  
network.target                                static 

[root@mongo0 ~]# systemctl stop NetworkManager

[root@mongo0 ~]# systemctl disable NetworkManager 
Removed symlink /etc/systemd/system/multi-user.target.wants/NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service.

[root@mongo0 ~]# systemctl stop NetworkManager-wait-online

[root@mongo0 ~]# systemctl disable NetworkManager-wait-online 
Removed symlink /etc/systemd/system/network-online.target.wants/NetworkManager-wait-online.service.

[root@mongo0 ~]# systemctl stop NetworkManager-dispatcher

[root@mongo0 ~]# systemctl disable NetworkManager-dispatcher

[root@mongo0 ~]# systemctl list-unit-files | grep -i network
NetworkManager-dispatcher.service             disabled
NetworkManager-wait-online.service            disabled
NetworkManager.service                        disabled
network-online.target                         static  
network-pre.target                            static  
network.target                                static
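
The same can be done in one go – here is a compact equivalent of the stop/disable commands above.

[root@mongo0 ~]# for S in NetworkManager NetworkManager-wait-online NetworkManager-dispatcher; do systemctl stop ${S}; systemctl disable ${S}; done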

SELinux

We do not need SELinux either.

[root@mongo0 ~]# sestatus 
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   enforcing
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Max kernel policy version:      29

[root@mongo0 ~]# setenforce 0

[root@mongo0 ~]# sestatus 
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   permissive
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Max kernel policy version:      29

[root@mongo0 ~]# cat /etc/sysconfig/selinux 

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of three two values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected. 
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted 


[root@mongo0 ~]# sed -i -e 's@^SELINUX=enforcing$@SELINUX=disabled@g' /etc/selinux/config

[root@mongo0 ~]# cat /etc/sysconfig/selinux 

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three two values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected. 
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

Firewall

… and the firewall goes to the disabled state as well.

[root@mongo0 ~]# systemctl stop firewalld

[root@mongo0 ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

[root@mongo0 ~]# iptables -nvL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Old Deterministic Naming Scheme

With the introduction of RHEL 7.x (and the Oracle Linux and CentOS systems are just ‘dumb’ clones) the old network interface naming scheme eth0, eth1 is gone. In 7.x the interfaces are named using the “Predictable Interface Names” scheme, which makes these names very unpredictable … fortunately there is a way to move back to the old RHEL 6.x naming scheme with the net.ifnames=0 biosdevname=0 options in the GRUB_CMDLINE_LINUX variable in the /etc/default/grub file. Let’s do it then.

[root@mongo0 ~]# cat /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="rhgb quiet"
GRUB_DISABLE_RECOVERY="true"

[root@mongo0 ~]# cp /etc/default/grub /etc/default/grub.ORG

[root@mongo0 ~]# vi /etc/default/grub

[root@mongo0 ~]# diff -u /etc/default/grub.ORG /etc/default/grub
--- /etc/default/grub.ORG       2018-04-24 10:56:03.094000000 +0200
+++ /etc/default/grub   2018-04-24 10:56:13.668000000 +0200
@@ -3,5 +3,5 @@
 GRUB_DEFAULT=saved
 GRUB_DISABLE_SUBMENU=true
 GRUB_TERMINAL_OUTPUT="console"
-GRUB_CMDLINE_LINUX="rhgb quiet"
+GRUB_CMDLINE_LINUX="rhgb quiet net.ifnames=0 biosdevname=0"
 GRUB_DISABLE_RECOVERY="true"
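
The same edit can also be made non-interactively – handy when preparing more than one machine – with a sed(1) one-liner assuming the stock GRUB_CMDLINE_LINUX line from the fresh install.

[root@mongo0 ~]# sed -i 's/^GRUB_CMDLINE_LINUX="rhgb quiet"$/GRUB_CMDLINE_LINUX="rhgb quiet net.ifnames=0 biosdevname=0"/' /etc/default/grub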

[root@mongo0 ~]# grub2-mkconfig
Generating grub configuration file ...
#
# DO NOT EDIT THIS FILE
#
# It is automatically generated by grub2-mkconfig using templates
# from /etc/grub.d and settings from /etc/default/grub
#

### BEGIN /etc/grub.d/00_header ###
set pager=1

if [ -s $prefix/grubenv ]; then
  load_env
fi
if [ "${next_entry}" ] ; then
   set default="${next_entry}"
   set next_entry=
   save_env next_entry
   set boot_once=true
else
   set default="${saved_entry}"
fi

if [ x"${feature_menuentry_id}" = xy ]; then
  menuentry_id_option="--id"
else
  menuentry_id_option=""
fi

export menuentry_id_option

if [ "${prev_saved_entry}" ]; then
  set saved_entry="${prev_saved_entry}"
  save_env saved_entry
  set prev_saved_entry=
  save_env prev_saved_entry
  set boot_once=true
fi

function savedefault {
  if [ -z "${boot_once}" ]; then
    saved_entry="${chosen}"
    save_env saved_entry
  fi
}

function load_video {
  if [ x$feature_all_video_module = xy ]; then
    insmod all_video
  else
    insmod efi_gop
    insmod efi_uga
    insmod ieee1275_fb
    insmod vbe
    insmod vga
    insmod video_bochs
    insmod video_cirrus
  fi
}

terminal_output console
if [ x$feature_timeout_style = xy ] ; then
  set timeout_style=menu
  set timeout=5
# Fallback normal timeout code in case the timeout_style feature is
# unavailable.
else
  set timeout=5
fi
### END /etc/grub.d/00_header ###

### BEGIN /etc/grub.d/00_tuned ###
set tuned_params=""
set tuned_initrd=""
### END /etc/grub.d/00_tuned ###

### BEGIN /etc/grub.d/01_users ###
if [ -f ${prefix}/user.cfg ]; then
  source ${prefix}/user.cfg
  if [ -n "${GRUB2_PASSWORD}" ]; then
    set superusers="root"
    export superusers
    password_pbkdf2 root ${GRUB2_PASSWORD}
  fi
fi
### END /etc/grub.d/01_users ###

### BEGIN /etc/grub.d/10_linux ###
Found linux image: /boot/vmlinuz-4.1.12-124.14.1.el7uek.x86_64
Found initrd image: /boot/initramfs-4.1.12-124.14.1.el7uek.x86_64.img
menuentry 'Oracle Linux Server (4.1.12-124.14.1.el7uek.x86_64 with Unbreakable Enterprise Kernel) 7.5' --class oracle --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-4.1.12-124.14.1.el7uek.x86_64-advanced-621c9873-8ad4-4a24-9a2f-14763bb1b77f' {
        load_video
        set gfxpayload=keep
        insmod gzio
        insmod part_msdos
        insmod xfs
        set root='hd0,msdos1'
        if [ x$feature_platform_search_hint = xy ]; then
          search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 --hint='hd0,msdos1'  621c9873-8ad4-4a24-9a2f-14763bb1b77f
        else
          search --no-floppy --fs-uuid --set=root 621c9873-8ad4-4a24-9a2f-14763bb1b77f
        fi
        linux16 /boot/vmlinuz-4.1.12-124.14.1.el7uek.x86_64 root=UUID=621c9873-8ad4-4a24-9a2f-14763bb1b77f ro rhgb quiet net.ifnames=0 biosdevname=0 
        initrd16 /boot/initramfs-4.1.12-124.14.1.el7uek.x86_64.img
}
Found linux image: /boot/vmlinuz-3.10.0-862.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-862.el7.x86_64.img
menuentry 'Oracle Linux Server (3.10.0-862.el7.x86_64 with Linux) 7.5' --class oracle --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-3.10.0-862.el7.x86_64-advanced-621c9873-8ad4-4a24-9a2f-14763bb1b77f' {
        load_video
        set gfxpayload=keep
        insmod gzio
        insmod part_msdos
        insmod xfs
        set root='hd0,msdos1'
        if [ x$feature_platform_search_hint = xy ]; then
          search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 --hint='hd0,msdos1'  621c9873-8ad4-4a24-9a2f-14763bb1b77f
        else
          search --no-floppy --fs-uuid --set=root 621c9873-8ad4-4a24-9a2f-14763bb1b77f
        fi
        linux16 /boot/vmlinuz-3.10.0-862.el7.x86_64 root=UUID=621c9873-8ad4-4a24-9a2f-14763bb1b77f ro rhgb quiet net.ifnames=0 biosdevname=0 
        initrd16 /boot/initramfs-3.10.0-862.el7.x86_64.img
}
Found linux image: /boot/vmlinuz-0-rescue-141943d2370a45fe9230ea2413f80d41
Found initrd image: /boot/initramfs-0-rescue-141943d2370a45fe9230ea2413f80d41.img
menuentry 'Oracle Linux Server (0-rescue-141943d2370a45fe9230ea2413f80d41 with Linux) 7.5' --class oracle --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-0-rescue-141943d2370a45fe9230ea2413f80d41-advanced-621c9873-8ad4-4a24-9a2f-14763bb1b77f' {
        load_video
        insmod gzio
        insmod part_msdos
        insmod xfs
        set root='hd0,msdos1'
        if [ x$feature_platform_search_hint = xy ]; then
          search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 --hint='hd0,msdos1'  621c9873-8ad4-4a24-9a2f-14763bb1b77f
        else
          search --no-floppy --fs-uuid --set=root 621c9873-8ad4-4a24-9a2f-14763bb1b77f
        fi
        linux16 /boot/vmlinuz-0-rescue-141943d2370a45fe9230ea2413f80d41 root=UUID=621c9873-8ad4-4a24-9a2f-14763bb1b77f ro rhgb quiet net.ifnames=0 biosdevname=0 
        initrd16 /boot/initramfs-0-rescue-141943d2370a45fe9230ea2413f80d41.img
}

### END /etc/grub.d/10_linux ###

### BEGIN /etc/grub.d/20_linux_xen ###

### END /etc/grub.d/20_linux_xen ###

### BEGIN /etc/grub.d/20_ppc_terminfo ###
### END /etc/grub.d/20_ppc_terminfo ###

### BEGIN /etc/grub.d/30_os-prober ###
### END /etc/grub.d/30_os-prober ###

### BEGIN /etc/grub.d/40_custom ###
# This file provides an easy way to add custom menu entries.  Simply type the
# menu entries you want to add after this comment.  Be careful not to change
# the 'exec tail' line above.
### END /etc/grub.d/40_custom ###

### BEGIN /etc/grub.d/41_custom ###
if [ -f  ${config_directory}/custom.cfg ]; then
  source ${config_directory}/custom.cfg
elif [ -z "${config_directory}" -a -f  $prefix/custom.cfg ]; then
  source $prefix/custom.cfg;
fi
### END /etc/grub.d/41_custom ###
done
[root@mongo0 ~]# 

[root@mongo0 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.1.12-124.14.1.el7uek.x86_64
Found initrd image: /boot/initramfs-4.1.12-124.14.1.el7uek.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-862.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-862.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-141943d2370a45fe9230ea2413f80d41
Found initrd image: /boot/initramfs-0-rescue-141943d2370a45fe9230ea2413f80d41.img
done
[root@mongo0 ~]#
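
If you want to double check which kernel GRUB will boot by default, grubby can tell you. A quick check – assuming the UEK kernel listed first stays the default entry – looks like this.

[root@mongo0 ~]# grubby --default-kernel
/boot/vmlinuz-4.1.12-124.14.1.el7uek.x86_64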

Network

As the Anaconda installer got the award for the worst installer [Citation Needed] we will now have to clean up the configuration files it generated. The interface is still enp0s3 instead of eth0 because we have not rebooted yet.

Below are the files generated by the Anaconda installer.

[root@mongo0 ~]# ip li 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:dd:93:cf brd ff:ff:ff:ff:ff:ff

[root@mongo0 ~]# cat /etc/sysconfig/network
# Created by anaconda

[root@mongo0 ~]# cat /etc/sysconfig/network-scripts/ifcfg-enp0s3
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="none"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="enp0s3"
UUID="3eba4d78-3392-49ed-807e-70fe6bc134b7"
DEVICE="enp0s3"
ONBOOT="yes"
IPADDR="10.0.10.10"
PREFIX="24"
IPV6_PRIVACY="no"
GATEWAY="10.0.10.1"
DNS1="1.1.1.1"

Let's do some cleanup and ‘migration’ to the old ethX naming scheme.

[root@mongo0 ~]# mv /etc/sysconfig/network-scripts/ifcfg-enp0s3 /etc/sysconfig/network-scripts/ifcfg-eth0

[root@mongo0 ~]# cat !$ | tr -d \" > ASD

[root@mongo0 ~]# mv -f !$ /etc/sysconfig/network-scripts/ifcfg-eth0

[root@mongo0 ~]# grep GATEWAY /etc/sysconfig/network-scripts/ifcfg-eth0 > /etc/sysconfig/network

[root@mongo0 ~]# cat /etc/sysconfig/network
GATEWAY=10.0.10.1

[root@mongo0 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0

[root@mongo0 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=none
IPV6INIT=no
NAME=eth0
DEVICE=eth0
ONBOOT=yes
IPADDR=10.0.10.10
PREFIX=24

[root@mongo0 ~]# echo nameserver 1.1.1.1 > /etc/resolv.conf 

[root@mongo0 ~]# diff -u /root/ifcfg-eth0.ORG /etc/sysconfig/network-scripts/ifcfg-eth0
--- /root/ifcfg-eth0.ORG        2018-04-24 11:00:17.493000000 +0200
+++ /etc/sysconfig/network-scripts/ifcfg-eth0   2018-04-24 11:00:57.914000000 +0200
@@ -1,20 +1,8 @@
 TYPE=Ethernet
-PROXY_METHOD=none
-BROWSER_ONLY=no
 BOOTPROTO=none
-DEFROUTE=yes
-IPV4_FAILURE_FATAL=no
-IPV6INIT=yes
-IPV6_AUTOCONF=yes
-IPV6_DEFROUTE=yes
-IPV6_FAILURE_FATAL=no
-IPV6_ADDR_GEN_MODE=stable-privacy
-NAME=enp0s3
-UUID=3eba4d78-3392-49ed-807e-70fe6bc134b7
-DEVICE=enp0s3
+IPV6INIT=no
+NAME=eth0
+DEVICE=eth0
 ONBOOT=yes
 IPADDR=10.0.10.10
 PREFIX=24
-IPV6_PRIVACY=no
-GATEWAY=10.0.10.1
-DNS1=1.1.1.1
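
Note that the old ethX naming only works because the kernel command line already carries the net.ifnames=0 biosdevname=0 parameters – as seen in the GRUB configuration earlier. A quick sanity check looks like this.

[root@mongo0 ~]# grep -o 'net.ifnames=0 biosdevname=0' /proc/cmdline
net.ifnames=0 biosdevname=0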

We will now reboot the system to get the eth0 interface.

[root@mongo0 ~]# reboot

After the reboot the interface is plain old eth0 device.

[root@mongo0 ~]# ip li
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:dd:93:cf brd ff:ff:ff:ff:ff:ff

Vim

If you work with these hosts over PuTTY, this may (or may not) make your work more pleasant.

[root@mongo0 ~]# echo 'set mouse-=a' >> /root/.vimrc

Filesystem

We will disable atime for performance reasons in the /etc/fstab file.

[root@mongo0 ~]# cat /etc/fstab 

#
# /etc/fstab
# Created by anaconda on Tue Apr 24 00:23:14 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=621c9873-8ad4-4a24-9a2f-14763bb1b77f /                       xfs     defaults        0 0

[root@mongo0 ~]# sed -i -e s@defaults@rw,noatime,nodiratime@g /etc/fstab

[root@mongo0 ~]# cat /etc/fstab 

#
# /etc/fstab
# Created by anaconda on Tue Apr 24 00:23:14 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=621c9873-8ad4-4a24-9a2f-14763bb1b77f /                       xfs     rw,noatime,nodiratime        0 0
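
The new mount options will only take effect on the next mount. To apply them right away – without waiting for the reboot – we may simply remount the root filesystem.

[root@mongo0 ~]# mount -o remount,noatime,nodiratime /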

Let's see what output the mount command gives us on a modern Linux system with just one single / filesystem …

[root@mongo0 ~]# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,size=746928k,nr_inodes=186732,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
configfs on /sys/kernel/config type configfs (rw,relatime)
/dev/sda1 on / type xfs (rw,noatime,nodiratime,attr2,inode64,noquota)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=32,pgrp=1,timeout=0,minproto=5,maxproto=5,direct)
mqueue on /dev/mqueue type mqueue (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=153040k,mode=700)

Horrible mess.

We can limit that output to something readable.

[root@mongo0 ~]# mount -t xfs
/dev/sda1 on / type xfs (rw,noatime,nodiratime,attr2,inode64,noquota)

[root@mongo0 ~]# mount | grep ^/
/dev/sda1 on / type xfs (rw,noatime,nodiratime,attr2,inode64,noquota)

Better.

Time Daemon

As with every cluster, we will have to install and configure a time daemon – ntp for example.

First installation …

[root@mongo0 ~]# yum install ntp
Loaded plugins: ulninfo
mongodb-org-3.6                                                                         | 2.5 kB  00:00:00     
ol7_UEKR4                                                                               | 1.2 kB  00:00:00     
ol7_latest                                                                              | 1.4 kB  00:00:00     
Resolving Dependencies
--> Running transaction check
---> Package ntp.x86_64 0:4.2.6p5-28.0.1.el7 will be installed
--> Processing Dependency: ntpdate = 4.2.6p5-28.0.1.el7 for package: ntp-4.2.6p5-28.0.1.el7.x86_64
--> Processing Dependency: libopts.so.25()(64bit) for package: ntp-4.2.6p5-28.0.1.el7.x86_64
--> Running transaction check
---> Package autogen-libopts.x86_64 0:5.18-5.el7 will be installed
---> Package ntpdate.x86_64 0:4.2.6p5-28.0.1.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

===========================================================================================================
 Package                       Arch                Version                      Repository            Size
===========================================================================================================
Installing:
 ntp                           x86_64              4.2.6p5-28.0.1.el7           ol7_latest           548 k
Installing for dependencies:
 autogen-libopts               x86_64              5.18-5.el7                   ol7_latest            65 k
 ntpdate                       x86_64              4.2.6p5-28.0.1.el7           ol7_latest            85 k

Transaction Summary
===========================================================================================================
Install  1 Package (+2 Dependent packages)

Total download size: 698 k
Installed size: 1.6 M
Is this ok [y/d/N]: y
Downloading packages:
(1/3): autogen-libopts-5.18-5.el7.x86_64.rpm                                        |  65 kB  00:00:03     
(2/3): ntpdate-4.2.6p5-28.0.1.el7.x86_64.rpm                                        |  85 kB  00:00:00     
(3/3): ntp-4.2.6p5-28.0.1.el7.x86_64.rpm                                            | 548 kB  00:00:05     
-----------------------------------------------------------------------------------------------------------
Total                                                                      136 kB/s | 698 kB  00:00:05     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.                               
  Installing : autogen-libopts-5.18-5.el7.x86_64                                                        1/3 
  Installing : ntpdate-4.2.6p5-28.0.1.el7.x86_64                                                        2/3 
  Installing : ntp-4.2.6p5-28.0.1.el7.x86_64                                                            3/3 
  Verifying  : ntpdate-4.2.6p5-28.0.1.el7.x86_64                                                        1/3 
  Verifying  : autogen-libopts-5.18-5.el7.x86_64                                                        2/3 
  Verifying  : ntp-4.2.6p5-28.0.1.el7.x86_64                                                            3/3 

Installed:
  ntp.x86_64 0:4.2.6p5-28.0.1.el7                                                                                                   

Dependency Installed:
  autogen-libopts.x86_64 0:5.18-5.el7                              ntpdate.x86_64 0:4.2.6p5-28.0.1.el7                             

Complete!
[root@mongo0 ~]#

… and configuration. The -g flag lets ntpd correct an arbitrarily large initial offset, and -x makes it slew the clock instead of stepping it – which is safer for clustered software.

[root@mongo0 ~]# cat /etc/sysconfig/ntpd
# Command line options for ntpd
OPTIONS="-g"

[root@mongo0 ~]# cp /etc/sysconfig/ntpd /etc/sysconfig/ntpd.ORG

[root@mongo0 ~]# vi /etc/sysconfig/ntpd

[root@mongo0 ~]# cat /etc/sysconfig/ntpd
# Command line options for ntpd
OPTIONS="-g -x"

[root@mongo0 ~]# diff -u /etc/sysconfig/ntpd.ORG /etc/sysconfig/ntpd
--- /etc/sysconfig/ntpd.ORG     2018-04-24 15:22:46.215788131 +0200
+++ /etc/sysconfig/ntpd 2018-04-24 15:22:31.464368114 +0200
@@ -1,2 +1,2 @@
 # Command line options for ntpd
-OPTIONS="-g"
+OPTIONS="-g -x"
[root@mongo0 ~]# 
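
One caveat – RHEL/Oracle Linux 7 ships with chrony by default, which conflicts with ntpd. Assuming chronyd is present and running on your install, stop and disable it first.

[root@mongo0 ~]# systemctl stop chronyd
[root@mongo0 ~]# systemctl disable chronyd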

[root@mongo0 ~]# systemctl start ntpd

[root@mongo0 ~]# systemctl enable ntpd
Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.

[root@mongo0 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
+jamesl.tk       130.149.17.8     2 u    3   64    1   99.361  -66.467  38.578
+sunsite.icm.edu 194.146.251.100  2 u    2   64    1   79.675  -55.486  12.338
+maggo.info      124.216.164.14   2 u    3   64    1   84.604  -65.630  14.804
*91-211-101-141. 5.226.98.186     2 u    2   64    1   74.071  -62.252  14.619
[root@mongo0 ~]#

Attack of the Clones

As our mongo0 machine install is finished, we can now power it off and clone it into the remaining mongo1/mongo2/mongo3/mongo4 nodes.
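
VirtualBox can do the cloning from the command line as well. Below is a minimal sketch – assuming mongo0 is already powered off and that VirtualBox should generate new MAC addresses (the clonevm default). Remember to change IPADDR and the hostname on each clone afterwards.

host % for I in 1 2 3 4; do VBoxManage clonevm mongo0 --name mongo${I} --register; done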

SSH

Let's set up SSH keys so we do not have to type a password every time we want to do anything.

host % ssh-copy-id -i ~/.ssh/id_rsa.pub -p 2200 root@localhost
Password for root@mongo0:

host % ssh -p 2200 root@localhost
[root@mongo0 ~]#
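
To push the key to all five nodes at once you may wrap ssh-copy-id in a loop like this.

host % for I in 0 1 2 3 4; do ssh-copy-id -i ~/.ssh/id_rsa.pub -p 220${I} root@localhost; done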

Cluster SSH

For convenience you may wish to use Cluster SSH to connect to all nodes for tasks that are the same on every node.

Here is the Cluster SSH cssh command used to connect to our MongoDB cluster.

host % cssh \
  root@localhost:2200 \
  root@localhost:2201 \
  root@localhost:2202 \
  root@localhost:2203 \
  root@localhost:2204

… or like that.

host % cssh root@localhost:220{0,1,2,3,4}

… and here is how it looks.

clusterssh-mongodb
If there are tasks to be done only on the DATA nodes, you may of course connect to just those 4 nodes with Cluster SSH.

Environment

As we have our clones ready, let's start them.

host % for I in 0 1 2 3 4; do VBoxManage startvm mongo${I} --type headless; done
Waiting for VM "mongo0" to power on...
VM "mongo0" has been successfully started.
Waiting for VM "mongo1" to power on...
VM "mongo1" has been successfully started.
Waiting for VM "mongo2" to power on...
VM "mongo2" has been successfully started.
Waiting for VM "mongo3" to power on...
VM "mongo3" has been successfully started.
Waiting for VM "mongo4" to power on...
VM "mongo4" has been successfully started.

As we have our nodes installed and started, let's check the connectivity between them.

[root@mongo0 ~]# awk '/mongo/ {print $1}' /etc/hosts | xargs -n1 ping -c 1 -t 3 | grep loss
1 packets transmitted, 1 received, 0% packet loss, time 0ms
1 packets transmitted, 1 received, 0% packet loss, time 0ms
1 packets transmitted, 1 received, 0% packet loss, time 0ms
1 packets transmitted, 1 received, 0% packet loss, time 0ms
1 packets transmitted, 1 received, 0% packet loss, time 0ms

Let's verify that MongoDB is installed.

host % for I in 0 1 2 3 4; do ssh -p 220${I} root@localhost which mongod; done
/usr/bin/mongod
/usr/bin/mongod
/usr/bin/mongod
/usr/bin/mongod
/usr/bin/mongod

Now we will configure the /etc/hosts file on all nodes.

host % for I in 0 1 2 3 4; do ssh -p 220${I} root@localhost 'cat >> /etc/hosts << __EOF
10.0.10.10 mongo0
10.0.10.11 mongo1
10.0.10.12 mongo2
10.0.10.13 mongo3
10.0.10.14 mongo4
__EOF'
done

Let's verify it.

host % for I in 0 1 2 3 4; do ssh -p 220${I} root@localhost "grep mongo${I} /etc/hosts"; done
10.0.10.10 mongo0
10.0.10.11 mongo1
10.0.10.12 mongo2
10.0.10.13 mongo3
10.0.10.14 mongo4

MongoDB

It is now (at last) time to configure MongoDB. Let's start with the configuration files.

Configuration Files

Create the config files for the MongoDB data nodes.

host % for I in 0 1 2 3; do ssh -p 220${I} root@localhost "cat > /etc/mongod.conf << __EOF
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

storage:
  dbPath: /var/lib/mongo
  journal.enabled: true  # ONLY DIFFERENCE BETWEEN DATA AND ARBITER NODE #

processManagement:
  fork: true
  pidFilePath: /var/run/mongodb/mongod.pid
  timeZoneInfo: /usr/share/zoneinfo

net:
  port: 27017
  bindIp: localhost,10.0.10.1${I}

replication:
   replSetName: \"replica0\"

__EOF"
done

Create the config file for the MongoDB arbiter node.

host % for I in 4; do ssh -p 220${I} root@localhost "cat > /etc/mongod.conf << __EOF
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

storage:
  dbPath: /var/lib/mongo
  journal.enabled: false # ONLY DIFFERENCE BETWEEN DATA AND ARBITER NODE #

processManagement:
  fork: true
  pidFilePath: /var/run/mongodb/mongod.pid
  timeZoneInfo: /usr/share/zoneinfo

net:
  port: 27017
  bindIp: localhost,10.0.10.1${I}

replication:
   replSetName: \"replica0\"

__EOF"
done

Let's verify these configuration files.

host % for I in 0 1 2 3 4; do ssh -p 220${I} root@localhost grep -H bindIp /etc/mongod.conf; done  
/etc/mongod.conf:  bindIp: localhost,10.0.10.10
/etc/mongod.conf:  bindIp: localhost,10.0.10.11
/etc/mongod.conf:  bindIp: localhost,10.0.10.12
/etc/mongod.conf:  bindIp: localhost,10.0.10.13
/etc/mongod.conf:  bindIp: localhost,10.0.10.14
host % for I in 0 1 2 3 4; do ssh -p 220${I} root@localhost grep -H /var /etc/mongod.conf; echo; done | column -t 
/etc/mongod.conf:  path:         /var/log/mongodb/mongod.log
/etc/mongod.conf:  dbPath:       /var/lib/mongo
/etc/mongod.conf:  pidFilePath:  /var/run/mongodb/mongod.pid

/etc/mongod.conf:  path:         /var/log/mongodb/mongod.log
/etc/mongod.conf:  dbPath:       /var/lib/mongo
/etc/mongod.conf:  pidFilePath:  /var/run/mongodb/mongod.pid

/etc/mongod.conf:  path:         /var/log/mongodb/mongod.log
/etc/mongod.conf:  dbPath:       /var/lib/mongo
/etc/mongod.conf:  pidFilePath:  /var/run/mongodb/mongod.pid

/etc/mongod.conf:  path:         /var/log/mongodb/mongod.log
/etc/mongod.conf:  dbPath:       /var/lib/mongo
/etc/mongod.conf:  pidFilePath:  /var/run/mongodb/mongod.pid

/etc/mongod.conf:  path:         /var/log/mongodb/mongod.log
/etc/mongod.conf:  dbPath:       /var/lib/mongo
/etc/mongod.conf:  pidFilePath:  /var/run/mongodb/mongod.pid
host % for I in 0 1 2 3 4; do ssh -p 220${I} root@localhost grep -H DIFFERENCE /etc/mongod.conf; done 
/etc/mongod.conf:  journal.enabled: true  # ONLY DIFFERENCE BETWEEN DATA AND ARBITER NODE #
/etc/mongod.conf:  journal.enabled: true  # ONLY DIFFERENCE BETWEEN DATA AND ARBITER NODE #
/etc/mongod.conf:  journal.enabled: true  # ONLY DIFFERENCE BETWEEN DATA AND ARBITER NODE #
/etc/mongod.conf:  journal.enabled: true  # ONLY DIFFERENCE BETWEEN DATA AND ARBITER NODE #
/etc/mongod.conf:  journal.enabled: false # ONLY DIFFERENCE BETWEEN DATA AND ARBITER NODE #

Let's start the MongoDB nodes; if MongoDB is already running with the default config (not ours), then restart it.

host % for I in 0 1 2 3 4; do ssh -p 220${I} root@localhost service mongod stop; done
Redirecting to /bin/systemctl stop mongod.service
Redirecting to /bin/systemctl stop mongod.service
Redirecting to /bin/systemctl stop mongod.service
Redirecting to /bin/systemctl stop mongod.service
Redirecting to /bin/systemctl stop mongod.service
host % for I in 0 1 2 3 4; do ssh -p 220${I} root@localhost service mongod start; done     
Redirecting to /bin/systemctl start mongod.service
Redirecting to /bin/systemctl start mongod.service
Redirecting to /bin/systemctl start mongod.service
Redirecting to /bin/systemctl start mongod.service
Redirecting to /bin/systemctl start mongod.service
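
If mongod is not yet enabled to start at boot on the clones – the mongodb-org package may or may not have done that already – it is worth enabling it now.

host % for I in 0 1 2 3 4; do ssh -p 220${I} root@localhost systemctl enable mongod; done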

Let's verify that MongoDB is running on our nodes with the new configuration.

host % for I in 0 1 2 3 4; do ssh -p 220${I} root@localhost pgrep mongod; done                             
735
744
738
736
748
host % for I in 0 1 2 3 4; do ssh -p 220${I} root@localhost ss -ln4; echo; done
Netid  State      Recv-Q Send-Q Local Address:Port               Peer Address:Port              
udp    UNCONN     0      0      10.0.10.10:123                   *:*                  
udp    UNCONN     0      0      127.0.0.1:123                   *:*                  
udp    UNCONN     0      0         *:123                   *:*                  
tcp    LISTEN     0      128    127.0.0.1:27017                 *:*                  
tcp    LISTEN     0      128       *:22                    *:*                  
tcp    LISTEN     0      100    127.0.0.1:25                    *:*                  

Netid  State      Recv-Q Send-Q Local Address:Port               Peer Address:Port              
udp    UNCONN     0      0      10.0.10.11:123                   *:*                  
udp    UNCONN     0      0      127.0.0.1:123                   *:*                  
udp    UNCONN     0      0         *:123                   *:*                  
tcp    LISTEN     0      128    127.0.0.1:27017                 *:*                  
tcp    LISTEN     0      128       *:22                    *:*                  
tcp    LISTEN     0      100    127.0.0.1:25                    *:*                  

Netid  State      Recv-Q Send-Q Local Address:Port               Peer Address:Port              
udp    UNCONN     0      0      10.0.10.12:123                   *:*                  
udp    UNCONN     0      0      127.0.0.1:123                   *:*                  
udp    UNCONN     0      0         *:123                   *:*                  
tcp    LISTEN     0      128    127.0.0.1:27017                 *:*                  
tcp    LISTEN     0      128       *:22                    *:*                  
tcp    LISTEN     0      100    127.0.0.1:25                    *:*                  

Netid  State      Recv-Q Send-Q Local Address:Port               Peer Address:Port              
udp    UNCONN     0      0      10.0.10.13:123                   *:*                  
udp    UNCONN     0      0      127.0.0.1:123                   *:*                  
udp    UNCONN     0      0         *:123                   *:*                  
tcp    LISTEN     0      128    127.0.0.1:27017                 *:*                  
tcp    LISTEN     0      128       *:22                    *:*                  
tcp    LISTEN     0      100    127.0.0.1:25                    *:*                  

Netid  State      Recv-Q Send-Q Local Address:Port               Peer Address:Port              
udp    UNCONN     0      0      10.0.10.14:123                   *:*                  
udp    UNCONN     0      0      127.0.0.1:123                   *:*                  
udp    UNCONN     0      0         *:123                   *:*                  
tcp    LISTEN     0      128    127.0.0.1:27017                 *:*                  
tcp    LISTEN     0      128       *:22                    *:*                  
tcp    LISTEN     0      100    127.0.0.1:25                    *:*

Replica Set

We may now configure our MongoDB Replica Set Cluster.

We will use the name replica0 for the replica set.

We will paste these instructions into the MongoDB prompt on the first node (mongo0) to configure the replica set.

use admin
rs.initiate(
  {
    _id : "replica0",
    members: [
      { _id: 0, host: "mongo0:27017" },
      { _id: 1, host: "mongo1:27017" },
      { _id: 2, host: "mongo2:27017" },
      { _id: 3, host: "mongo3:27017" }
    ]
  }
)

Let's do it then. As you paste it you will see the prompt change to the replica0:SECONDARY> string. Hit [ENTER] once a second and after about 15-20 seconds it should change to replica0:PRIMARY>, as this will be the current role of this node in the freshly formed cluster.

% ssh root@localhost -p 2200
Last login: Tue Apr 24 14:39:06 2018 from 10.0.10.2
[root@mongo0 ~]# mongo
MongoDB shell version v3.6.4
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.6.4
Server has startup warnings: 
2018-04-24T14:39:33.161+0200 I CONTROL  [initandlisten] 
2018-04-24T14:39:33.162+0200 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2018-04-24T14:39:33.162+0200 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2018-04-24T14:39:33.162+0200 I CONTROL  [initandlisten] 
> use admin
switched to db admin
> rs.initiate(
...   {
...     _id : "replica0",
...     members: [
...       { _id: 0, host: "mongo0:27017" },
...       { _id: 1, host: "mongo1:27017" },
...       { _id: 2, host: "mongo2:27017" },
...       { _id: 3, host: "mongo3:27017" }
...     ]
...   }
... )
{
        "ok" : 1,
        "operationTime" : Timestamp(1524574334, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1524574334, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
replica0:SECONDARY> 
replica0:SECONDARY> 
replica0:SECONDARY> 
replica0:SECONDARY> 
replica0:SECONDARY> 
replica0:SECONDARY> 
replica0:SECONDARY> 
replica0:SECONDARY> 
replica0:SECONDARY> 
replica0:PRIMARY> 
replica0:PRIMARY>

We will now create admin (less powerful) and root (which, as the name suggests, can do anything) users on our new MongoDB cluster.

We will paste these instructions into the MongoDB prompt on the PRIMARY node (currently mongo0) to add the users.

use admin
db.createUser(
  {
    user: "admin",
    pwd: "ADMIN-PASSWORD",
    roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
  }
)
use admin
db.createUser(
  {
    user: "root",
    pwd: "ROOT-PASSWORD",
    roles:["root"]
  }
)

Let's do it then.

replica0:PRIMARY> use admin
switched to db admin
replica0:PRIMARY> db.createUser(
...   {
...     user: "admin",
...     pwd: "ADMIN-PASSWORD",
...     roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
...   }
... )
Successfully added user: {
        "user" : "admin",
        "roles" : [
                {
                        "role" : "userAdminAnyDatabase",
                        "db" : "admin"
                }
        ]
}
replica0:PRIMARY>
replica0:PRIMARY> use admin
switched to db admin
replica0:PRIMARY> db.createUser(
...   {
...     user: "root",
...     pwd: "ROOT-PASSWORD",
...     roles:["root"]
...   }
... )
Successfully added user: { "user" : "root", "roles" : [ "root" ] }
replica0:PRIMARY>

We can now exit from the MongoDB prompt.

replica0:PRIMARY> exit
[root@mongo0 ~]#

We will now stop the MongoDB services to enable authorization and also configure a shared keyfile.

host % for I in 0 1 2 3 4; do ssh -p 220${I} root@localhost service mongod stop; done   
Redirecting to /bin/systemctl stop mongod.service
Redirecting to /bin/systemctl stop mongod.service
Redirecting to /bin/systemctl stop mongod.service
Redirecting to /bin/systemctl stop mongod.service
Redirecting to /bin/systemctl stop mongod.service

Let's add the needed settings to the configuration files.

host % for I in 0 1 2 3 4; do ssh -p 220${I} root@localhost "cat >> /etc/mongod.conf << __EOF
security:
  authorization: enabled
  keyFile: /etc/mongod.conf.key

__EOF"
done

Now let's generate a new key …

host % dd < /dev/random bs=1k count=1 2> /dev/null | sha256
66700abfea54b9f07e9767acd912f4ab17f9153fa0718984fe3b0c4fe2116baf

… and put it on the nodes as the /etc/mongod.conf.key file.

host % for I in 0 1 2 3 4; do ssh -p 220${I} root@localhost 'echo 66700abfea54b9f07e9767acd912f4ab17f9153fa0718984fe3b0c4fe2116baf > /etc/mongod.conf.key'; done
host % for I in 0 1 2 3 4; do ssh -p 220${I} root@localhost chmod 600 /etc/mongod.conf.key; done
host % for I in 0 1 2 3 4; do ssh -p 220${I} root@localhost chown mongod:mongod /etc/mongod.conf.key; done

Let's verify our new key is there.

% for I in 0 1 2 3 4; do ssh -p 220${I} root@localhost cat /etc/mongod.conf.key; done
66700abfea54b9f07e9767acd912f4ab17f9153fa0718984fe3b0c4fe2116baf
66700abfea54b9f07e9767acd912f4ab17f9153fa0718984fe3b0c4fe2116baf
66700abfea54b9f07e9767acd912f4ab17f9153fa0718984fe3b0c4fe2116baf
66700abfea54b9f07e9767acd912f4ab17f9153fa0718984fe3b0c4fe2116baf
66700abfea54b9f07e9767acd912f4ab17f9153fa0718984fe3b0c4fe2116baf
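
As mongod refuses to start when the keyfile is group or world readable, it is also worth verifying the permissions and ownership we just set.

host % for I in 0 1 2 3 4; do ssh -p 220${I} root@localhost ls -l /etc/mongod.conf.key; done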

We can now start MongoDB with the new settings.

host % for I in 0 1 2 3 4; do ssh -p 220${I} root@localhost service mongod start; done                    
Redirecting to /bin/systemctl start mongod.service
Redirecting to /bin/systemctl start mongod.service
Redirecting to /bin/systemctl start mongod.service
Redirecting to /bin/systemctl start mongod.service
Redirecting to /bin/systemctl start mongod.service

We can now connect to our MongoDB cluster with the root user.

[root@mongo0 ~]# mongo --port 27017 -u root -p ROOT-PASSWORD --authenticationDatabase admin
MongoDB shell version v3.6.4
connecting to: mongodb://127.0.0.1:27017/
MongoDB server version: 3.6.4
replica0:PRIMARY>

Let's see how the MongoDB rs.conf() function shows our configuration (before the ARBITER node role is added).

[root@mongo0 ~]# mongo --port 27017 -u root -p ROOT-PASSWORD --authenticationDatabase admin
MongoDB shell version v3.6.4
connecting to: mongodb://127.0.0.1:27017/
MongoDB server version: 3.6.4
replica0:PRIMARY> rs.conf()
{
        "_id" : "replica0",
        "version" : 1,
        "protocolVersion" : NumberLong(1),
        "members" : [
                {
                        "_id" : 0,
                        "host" : "mongo0:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                },
                {
                        "_id" : 1,
                        "host" : "mongo1:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                },
                {
                        "_id" : 2,
                        "host" : "mongo2:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                },
                {
                        "_id" : 3,
                        "host" : "mongo3:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                }
        ],
        "settings" : {
                "chainingAllowed" : true,
                "heartbeatIntervalMillis" : 2000,
                "heartbeatTimeoutSecs" : 10,
                "electionTimeoutMillis" : 10000,
                "catchUpTimeoutMillis" : -1,
                "catchUpTakeoverDelayMillis" : 30000,
                "getLastErrorModes" : {

                },
                "getLastErrorDefaults" : {
                        "w" : 1,
                        "wtimeout" : 0
                },
                "replicaSetId" : ObjectId("5adf287d597df99256d11280")
        }
}
replica0:PRIMARY>

Let's see how the MongoDB rs.status() function shows our configuration (before the ARBITER node role is added).

[root@mongo0 ~]# mongo --port 27017 -u root -p ROOT-PASSWORD --authenticationDatabase admin
MongoDB shell version v3.6.4
connecting to: mongodb://127.0.0.1:27017/
MongoDB server version: 3.6.4
replica0:PRIMARY> rs.status()
{
        "set" : "replica0",
        "date" : ISODate("2018-04-24T13:14:14.653Z"),
        "myState" : 1,
        "term" : NumberLong(2),
        "heartbeatIntervalMillis" : NumberLong(2000),
        "optimes" : {
                "lastCommittedOpTime" : {
                        "ts" : Timestamp(1524575648, 1),
                        "t" : NumberLong(2)
                },
                "readConcernMajorityOpTime" : {
                        "ts" : Timestamp(1524575648, 1),
                        "t" : NumberLong(2)
                },
                "appliedOpTime" : {
                        "ts" : Timestamp(1524575648, 1),
                        "t" : NumberLong(2)
                },
                "durableOpTime" : {
                        "ts" : Timestamp(1524575648, 1),
                        "t" : NumberLong(2)
                }
        },
        "members" : [
                {
                        "_id" : 0,
                        "name" : "mongo0:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 111,
                        "optime" : {
                                "ts" : Timestamp(1524575648, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDate" : ISODate("2018-04-24T13:14:08Z"),
                        "infoMessage" : "could not find member to sync from",
                        "electionTime" : Timestamp(1524575557, 1),
                        "electionDate" : ISODate("2018-04-24T13:12:37Z"),
                        "configVersion" : 1,
                        "self" : true
                },
                {
                        "_id" : 1,
                        "name" : "mongo1:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 104,
                        "optime" : {
                                "ts" : Timestamp(1524575648, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1524575648, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDate" : ISODate("2018-04-24T13:14:08Z"),
                        "optimeDurableDate" : ISODate("2018-04-24T13:14:08Z"),
                        "lastHeartbeat" : ISODate("2018-04-24T13:14:13.546Z"),
                        "lastHeartbeatRecv" : ISODate("2018-04-24T13:14:13.892Z"),
                        "pingMs" : NumberLong(1),
                        "syncingTo" : "mongo0:27017",
                        "configVersion" : 1
                },
                {
                        "_id" : 2,
                        "name" : "mongo2:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 101,
                        "optime" : {
                                "ts" : Timestamp(1524575648, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1524575648, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDate" : ISODate("2018-04-24T13:14:08Z"),
                        "optimeDurableDate" : ISODate("2018-04-24T13:14:08Z"),
                        "lastHeartbeat" : ISODate("2018-04-24T13:14:13.546Z"),
                        "lastHeartbeatRecv" : ISODate("2018-04-24T13:14:13.863Z"),
                        "pingMs" : NumberLong(1),
                        "syncingTo" : "mongo0:27017",
                        "configVersion" : 1
                },
                {
                        "_id" : 3,
                        "name" : "mongo3:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 99,
                        "optime" : {
                                "ts" : Timestamp(1524575648, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1524575648, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDate" : ISODate("2018-04-24T13:14:08Z"),
                        "optimeDurableDate" : ISODate("2018-04-24T13:14:08Z"),
                        "lastHeartbeat" : ISODate("2018-04-24T13:14:13.548Z"),
                        "lastHeartbeatRecv" : ISODate("2018-04-24T13:14:13.725Z"),
                        "pingMs" : NumberLong(1),
                        "syncingTo" : "mongo0:27017",
                        "configVersion" : 1
                }
        ],
        "ok" : 1,
        "operationTime" : Timestamp(1524575648, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1524575648, 1),
                "signature" : {
                        "hash" : BinData(0,"v6kIdsFS93nZcf2hJ/EYTrVjsso="),
                        "keyId" : NumberLong("6547996956390588417")
                }
        }
}
replica0:PRIMARY>

Arbiter

We can now add the ARBITER role on the mongo4 node. With four data nodes the replica set would have an even number of voters, so the arbiter gives us an odd number (5) of votes for clean elections.

[root@mongo0 ~]# mongo --port 27017 -u root -p ROOT-PASSWORD --authenticationDatabase admin
MongoDB shell version v3.6.4
connecting to: mongodb://127.0.0.1:27017/
MongoDB server version: 3.6.4
replica0:PRIMARY> rs.addArb("mongo4:27017")
{
        "ok" : 1,
        "operationTime" : Timestamp(1524575694, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1524575694, 1),
                "signature" : {
                        "hash" : BinData(0,"Bkvnl5fskD4NLvA1qhaU+BYLFNo="),
                        "keyId" : NumberLong("6547996956390588417")
                }
        }
}
replica0:PRIMARY>

The ARBITER role also has a different prompt.

[root@mongo4 ~]# mongo
MongoDB shell version v3.6.4
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.6.4
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
        http://docs.mongodb.org/
Questions? Try the support group
        http://groups.google.com/group/mongodb-user
replica0:ARBITER>

Let's see how the MongoDB rs.config() function shows our configuration after adding the ARBITER node.

[root@mongo0 ~]# mongo --port 27017 -u root -p ROOT-PASSWORD --authenticationDatabase admin
MongoDB shell version v3.6.4
connecting to: mongodb://127.0.0.1:27017/
MongoDB server version: 3.6.4
replica0:PRIMARY> rs.config()
{
        "_id" : "replica0",
        "version" : 2,
        "protocolVersion" : NumberLong(1),
        "members" : [
                {
                        "_id" : 0,
                        "host" : "mongo0:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                },
                {
                        "_id" : 1,
                        "host" : "mongo1:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                },
                {
                        "_id" : 2,
                        "host" : "mongo2:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                },
                {
                        "_id" : 3,
                        "host" : "mongo3:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                },
                {
                        "_id" : 4,
                        "host" : "mongo4:27017",
                        "arbiterOnly" : true,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 0,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                }
        ],
        "settings" : {
                "chainingAllowed" : true,
                "heartbeatIntervalMillis" : 2000,
                "heartbeatTimeoutSecs" : 10,
                "electionTimeoutMillis" : 10000,
                "catchUpTimeoutMillis" : -1,
                "catchUpTakeoverDelayMillis" : 30000,
                "getLastErrorModes" : {

                },
                "getLastErrorDefaults" : {
                        "w" : 1,
                        "wtimeout" : 0
                },
                "replicaSetId" : ObjectId("5adf287d597df99256d11280")
        }
}
replica0:PRIMARY>

Let's see how the MongoDB rs.status() function shows our configuration after adding the ARBITER node.

[root@mongo0 ~]# mongo --port 27017 -u root -p ROOT-PASSWORD --authenticationDatabase admin
MongoDB shell version v3.6.4
connecting to: mongodb://127.0.0.1:27017/
MongoDB server version: 3.6.4
replica0:PRIMARY> rs.status()
{
        "set" : "replica0",
        "date" : ISODate("2018-04-24T13:19:31.989Z"),
        "myState" : 1,
        "term" : NumberLong(2),
        "heartbeatIntervalMillis" : NumberLong(2000),
        "optimes" : {
                "lastCommittedOpTime" : {
                        "ts" : Timestamp(1524575969, 1),
                        "t" : NumberLong(2)
                },
                "readConcernMajorityOpTime" : {
                        "ts" : Timestamp(1524575969, 1),
                        "t" : NumberLong(2)
                },
                "appliedOpTime" : {
                        "ts" : Timestamp(1524575969, 1),
                        "t" : NumberLong(2)
                },
                "durableOpTime" : {
                        "ts" : Timestamp(1524575969, 1),
                        "t" : NumberLong(2)
                }
        },
        "members" : [
                {
                        "_id" : 0,
                        "name" : "mongo0:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 428,
                        "optime" : {
                                "ts" : Timestamp(1524575969, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDate" : ISODate("2018-04-24T13:19:29Z"),
                        "electionTime" : Timestamp(1524575557, 1),
                        "electionDate" : ISODate("2018-04-24T13:12:37Z"),
                        "configVersion" : 2,
                        "self" : true
                },
                {
                        "_id" : 1,
                        "name" : "mongo1:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 421,
                        "optime" : {
                                "ts" : Timestamp(1524575969, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1524575969, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDate" : ISODate("2018-04-24T13:19:29Z"),
                        "optimeDurableDate" : ISODate("2018-04-24T13:19:29Z"),
                        "lastHeartbeat" : ISODate("2018-04-24T13:19:31.139Z"),
                        "lastHeartbeatRecv" : ISODate("2018-04-24T13:19:30.455Z"),
                        "pingMs" : NumberLong(1),
                        "syncingTo" : "mongo0:27017",
                        "configVersion" : 2
                },
                {
                        "_id" : 2,
                        "name" : "mongo2:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 418,
                        "optime" : {
                                "ts" : Timestamp(1524575969, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1524575969, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDate" : ISODate("2018-04-24T13:19:29Z"),
                        "optimeDurableDate" : ISODate("2018-04-24T13:19:29Z"),
                        "lastHeartbeat" : ISODate("2018-04-24T13:19:31.145Z"),
                        "lastHeartbeatRecv" : ISODate("2018-04-24T13:19:30.571Z"),
                        "pingMs" : NumberLong(2),
                        "syncingTo" : "mongo0:27017",
                        "configVersion" : 2
                },
                {
                        "_id" : 3,
                        "name" : "mongo3:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 416,
                        "optime" : {
                                "ts" : Timestamp(1524575969, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1524575969, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDate" : ISODate("2018-04-24T13:19:29Z"),
                        "optimeDurableDate" : ISODate("2018-04-24T13:19:29Z"),
                        "lastHeartbeat" : ISODate("2018-04-24T13:19:31.145Z"),
                        "lastHeartbeatRecv" : ISODate("2018-04-24T13:19:30.445Z"),
                        "pingMs" : NumberLong(2),
                        "syncingTo" : "mongo0:27017",
                        "configVersion" : 2
                },
                {
                        "_id" : 4,
                        "name" : "mongo4:27017",
                        "health" : 1,
                        "state" : 7,
                        "stateStr" : "ARBITER",
                        "uptime" : 215,
                        "lastHeartbeat" : ISODate("2018-04-24T13:19:31.111Z"),
                        "lastHeartbeatRecv" : ISODate("2018-04-24T13:19:30.730Z"),
                        "pingMs" : NumberLong(2),
                        "configVersion" : 2
                }
        ],
        "ok" : 1,
        "operationTime" : Timestamp(1524575969, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1524575969, 1),
                "signature" : {
                        "hash" : BinData(0,"+nUr+dY6LufEIZIjfzwKRw4cQpM="),
                        "keyId" : NumberLong("6547996956390588417")
                }
        }
}
replica0:PRIMARY>

We can also check what roles are configured by default on our MongoDB cluster.

[root@mongo0 ~]# mongo --port 27017 -u root -p ROOT-PASSWORD --authenticationDatabase admin
MongoDB shell version v3.6.4
connecting to: mongodb://127.0.0.1:27017/
MongoDB server version: 3.6.4
replica0:PRIMARY> show roles
{
        "role" : "dbAdmin",
        "db" : "test",
        "isBuiltin" : true,
        "roles" : [ ],
        "inheritedRoles" : [ ]
}
{
        "role" : "dbOwner",
        "db" : "test",
        "isBuiltin" : true,
        "roles" : [ ],
        "inheritedRoles" : [ ]
}
{
        "role" : "enableSharding",
        "db" : "test",
        "isBuiltin" : true,
        "roles" : [ ],
        "inheritedRoles" : [ ]
}
{
        "role" : "read",
        "db" : "test",
        "isBuiltin" : true,
        "roles" : [ ],
        "inheritedRoles" : [ ]
}
{
        "role" : "readWrite",
        "db" : "test",
        "isBuiltin" : true,
        "roles" : [ ],
        "inheritedRoles" : [ ]
}
{
        "role" : "userAdmin",
        "db" : "test",
        "isBuiltin" : true,
        "roles" : [ ],
        "inheritedRoles" : [ ]
}
replica0:PRIMARY>

… and users.

[root@mongo0 ~]# mongo --port 27017 -u root -p ROOT-PASSWORD --authenticationDatabase admin
MongoDB shell version v3.6.4
connecting to: mongodb://127.0.0.1:27017/
MongoDB server version: 3.6.4
replica0:PRIMARY> use admin
switched to db admin
replica0:PRIMARY> show users
{
        "_id" : "admin.admin",
        "user" : "admin",
        "db" : "admin",
        "roles" : [
                {
                        "role" : "userAdminAnyDatabase",
                        "db" : "admin"
                }
        ]
}
{
        "_id" : "admin.root",
        "user" : "root",
        "db" : "admin",
        "roles" : [
                {
                        "role" : "root",
                        "db" : "admin"
                }
        ]
}
replica0:PRIMARY>

Backup

Below are simple backup commands, for the completeness of the article.

As the ‘local’ database is not backed up by default, we will have to back it up explicitly with a separate command.

backup % mongodump \
           --host "rs0/mongo0:27017,mongo1:27017,mongo2:27017,mongo3:27017" \
           --username root \
           --password ROOT-PASSWORD \
           --authenticationDatabase admin \
           --db local \
           --out /backup/replica0-local

backup % mongodump \
           --host "rs0/mongo0:27017,mongo1:27017,mongo2:27017,mongo3:27017" \
           --username root \
           --password ROOT-PASSWORD \
           --authenticationDatabase admin \
           --out /backup/replica0-dat
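
For completeness, restoring from such a dump would look more or less like below – this is just a sketch, pointed at the dump directory created above.

backup % mongorestore \
           --host "replica0/mongo0:27017,mongo1:27017,mongo2:27017,mongo3:27017" \
           --username root \
           --password ROOT-PASSWORD \
           --authenticationDatabase admin \
           /backup/replica0-dat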

Pretty

To auto-format query responses, add this to your ~/.mongorc.js file.

DBQuery.prototype._prettyShell = true
DBQuery.prototype.unpretty = function () {
  this._prettyShell = false;
  return this;
}

I would also set – in the ~/.mongorc.js file – how many results .find() will print before asking you to type it for more.

DBQuery.shellBatchSize = 50

Final ~/.mongorc.js file.

% cat ~/.mongorc.js
DBQuery.shellBatchSize = 50
DBQuery.prototype._prettyShell = true
DBQuery.prototype.unpretty = function () {
  this._prettyShell = false;
  return this;
}

You will find other useful tips in the MongoDB: Tips & Tricks blog post.

Performance

As it turns out, MongoDB is not always the fastest option – the PostgreSQL database can also work with the JSON data type.

benchmark-mongodb24-postgresql94

benchmark-persister-timeline

Check these two benchmarks for more information and insight, and decide which database best fits your needs.

Management

For convenient management I would also suggest adding MongoDB Ops Manager, but that is not covered in this (already big) article.

mongodb-ops-manager-deploy

EOF
